3D-Printed Deep Learning Neural Network Uses Light Instead of Electrons (newatlas.com)

Matt Kennedy from New Atlas reports on an all-optical Diffractive Deep Neural Network (D2NN) architecture that uses light diffracted through numerous plates instead of electrons. It was developed by Dr. Aydogan Ozcan, Chancellor's Professor of electrical and computer engineering at UCLA, and his team of researchers. From the report: The setup uses 3D-printed translucent sheets, each with thousands of raised pixels, which deflect light through each panel in order to perform set tasks. Notably, these tasks are performed without the use of any power except for the input light beam. The UCLA team's all-optical deep neural network -- which looks like the guts of a solid gold car battery -- literally operates at the speed of light, and could find applications in image analysis, feature detection and object classification. Researchers on the team also envisage possibilities for D2NN architectures performing specialized tasks in cameras. Perhaps your next DSLR might identify your subjects on the fly and post the tagged image to your Facebook timeline. For now this is a proof of concept, but it shines a light on some unique opportunities for the machine learning industry. The research has been published in the journal Science.
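
As a rough illustration of what the printed plates do (a hypothetical numpy sketch, not the team's code: the grid size, pixel pitch, and layer spacing are made-up numbers, and only the terahertz wavelength comes from the paper):

```python
# Hypothetical sketch of a diffractive network's forward pass (not the
# authors' code): coherent light passes through a stack of phase-only
# plates, and the class is read off as the brightest detector region.
import numpy as np

N = 64             # pixels per plate side (made-up)
WAVELEN = 0.75e-3  # 0.4 THz source -> ~0.75 mm wavelength (from the paper)
PITCH = 0.4e-3     # assumed pixel pitch, metres
DIST = 0.03        # assumed plate-to-plate spacing, metres

def propagate(field, dist):
    """Free-space propagation by the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=PITCH)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = (1.0 / WAVELEN) ** 2 - fxx ** 2 - fyy ** 2
    # Evanescent components (kz_sq < 0) are simply dropped.
    transfer = np.where(
        kz_sq > 0,
        np.exp(2j * np.pi * np.sqrt(np.maximum(kz_sq, 0.0)) * dist),
        0.0,
    )
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def forward(image, plates):
    """Shine the image through each printed plate in turn."""
    field = image.astype(complex)
    for phase in plates:
        field = propagate(field, DIST)
        field = field * np.exp(1j * phase)  # each raised pixel delays the light
    return np.abs(propagate(field, DIST)) ** 2  # intensity at the detector

# Five plates with random pixel phases; a trained network would learn these.
rng = np.random.default_rng(0)
plates = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(5)]
digit = np.zeros((N, N)); digit[20:44, 30:34] = 1.0  # a crude "1"
intensity = forward(digit, plates)
# Class = whichever of ten detector regions collects the most light.
regions = np.array_split(intensity, 10, axis=1)
print("predicted class:", int(np.argmax([r.sum() for r in regions])))
```

Each plate is purely passive: raised pixels delay the light, and free-space diffraction between plates does the mixing that a weight matrix does in a conventional network.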
  • Perhaps your next DSLR might identify your subjects on the fly and post the tagged image to your Facebook timeline.

    No thank you.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      These people have a fundamentally different value system, so they just don't get it.

    • Re: Perhaps (Score:4, Interesting)

      by volodymyrbiryuk ( 4780959 ) on Thursday August 02, 2018 @06:42AM (#57055520)
      Exactly this. They have bleeding edge technology on their hands and want to use it to post on FB? Get the fuck out of here.
      • by Anonymous Coward

        It's a silly marketing tactic they're employing by saying that. They're just trying to make it relevant and useful to John Q Idiot.

        • by Potor ( 658520 )
          No - that explanation is to make the tech sound innocuous. The real use is government and police surveillance.
      • by mikael ( 484 )

        Analytics is the biggest market for big data processing. All those juicy camera photographs sent to Facebook - Identifying people, places, associations, parties, events, whatever...

  • "If we build it, they will come" and then they'll buy the patent for billions of dollars. We're going to be RICH guys. RICH!

    Meanwhile, in the real world, no one really cares.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Thursday August 02, 2018 @05:49AM (#57055356)
      Comment removed based on user account deletion
      • by mikael ( 484 )

There wasn't enough computing power back in the late 1980s. For a desktop PC with a 66 MHz x86, graphics boards with i860s, TMS34020s and some TMS320x0s were the most performance you could get. Even then, those boards were around $1200, and you were lucky to get a C compiler, let alone Fortran or C++. Even so, rendering a single frame of the Mandelbrot set would still take minutes.

Today, you can buy a PC or server with custom DNN hardware, able to process terabytes of image data.

Neither had much practical application at the time they were introduced. Fast forward and companies are now pouring billions into this kind of technology. Point being, nobody cared about most things before they suddenly became the next big thing.

        Bullshit. There just wasn't enough computing power nor big enough data sets to do much with the technology.

If someone had invented an analogue analogue of the neuron as a component, maybe in an IC form factor, the world would be a very different place today.

The concept of learning algorithms was developed in the '60s and Deep Learning was coined in the late '80s. Neither had much practical application at the time they were introduced. Fast forward and companies are now pouring billions into this kind of technology. Point being, nobody cared about most things before they suddenly became the next big thing.

Making a machine learning network does not require any of these new things. The ones first used in the 1960s were much slower, but just as good at learning.
https://www.youtube.com/watch?... [youtube.com]

Also, neural networks (which are not really AI at all) only need to be used for the learning. Once that's done, they can be reduced to just the active logic and implemented as code or hard-wired circuits. But they are useful for discovering logic that no one had recognized before.

Back in the '60s, lasers were considered a solution in search of a problem.

  • Avoid the paywall (Score:5, Informative)

    by Anonymous Coward on Thursday August 02, 2018 @05:24AM (#57055286)

    and the "magazine" article, get the research paper straight from the horse's mouth for free.

    http://innovate.ee.ucla.edu/wp... [ucla.edu]

  • by viperidaenz ( 2515578 ) on Thursday August 02, 2018 @05:32AM (#57055314)

    They develop the network, then 3D print it. Once it's printed it performs a static task.

    • as regular "Machine Learning" does during normal operations if you don't do training cycles.

      • by Viol8 ( 599362 )

Except with a normal network you have the option to continue training it if you want to, and even if you don't, downloading a new one is simple. When this needs to be updated it simply goes in the bin and a new one has to be fabricated. Not terribly efficient.

Except with a normal network you have the option to continue training it if you want to, and even if you don't, downloading a new one is simple. When this needs to be updated it simply goes in the bin and a new one has to be fabricated. Not terribly efficient.

          ... for learning new tasks, but potentially very efficient for utilizing already learned tasks. Ignore the click-bait about machine learning, and instead focus on the fact that they created an OCR system that uses no power for processing other than the initial light input.

        • by mysidia ( 191772 )

          When this needs to be updated it simply goes in the bin and a new one has to be fabricated. Not terribly efficient.

You don't mass-print it until you are delivering --- it is the same as when lithography is used to manufacture CPUs and other integrated circuits: once produced, there are no further updates without building a new one.

This is an optical ASIC that takes a complex input and spits out a result, requiring no additional electrical power and no stateful units such as memory.

No EM interference to worry about...
Just dust, moisture, temperature changes, vibration. Basically anything that can interfere with the optical path or bend/misalign the filters.

            • by mysidia ( 191772 )

              Just dust, moisture, temperature changes, vibration.

It would take some rather extreme conditions to bend layered sheets of plastic, especially inside a decent enclosure.
The interface points could be a weak spot, but there are a number of ways they could be protected, such as putting the detectors flush with the media and filling the space with epoxy.

Dust and moisture interfere with the light. Dust gets in the way; moisture changes the refractive index.

Temperature makes things expand and contract. Different materials do so at different rates.

                • by mysidia ( 191772 )

The medium is a 3D-printed plastic plate --- it seems you are totally ignorant of the basic properties of plastics.

                  Dust gets in the way, moisture changes the refractive index.

Dust and moisture cannot penetrate plastics. It's trivial to make sure the environment is clean when first 3D printing the media. You can bury a plastic plate/block in a pile of sand or a mound of dirt, dig it up, and spray it off --- not a single speck of dirt or sand or drop of water will penetrate into the material.

Plastic expands and contracts with temperature just like everything else. Having many separate layers that need to be precisely aligned means that the expansion of one plate is going to have an impact on the entire system.

If you've ever done 3D printing, you'll know that PLA, ABS, Nylon, etc. all shrink when they cool down, leading to potential warping and having to print your ABS design 1.5 to 2.5% larger than you want it to be when it's cooled down. That's a 2.5% shrinkage with a 200 degree temperature difference.
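
For what it's worth, that pre-scaling is just dividing the target dimension by the expected remaining fraction (a toy calculation using the parent's assumed 2.5% figure):

```python
# Toy shrinkage compensation using the ~2.5% figure from the comment above.
target_mm = 100.0                        # desired size after cooling (assumed)
shrinkage = 0.025                        # 2.5% linear shrinkage
print_mm = target_mm / (1.0 - shrinkage)
print(f"print at {print_mm:.1f} mm to end up at {target_mm:.0f} mm")  # ~102.6
```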

    • by Hentes ( 2461350 )

      Yep. From the paper:

      We introduce an all-optical deep learning framework, where the neural network is physically formed by multiple layers of diffractive surfaces that work in collaboration to optically perform an arbitrary function that the network can statistically learn. While the inference/prediction of the physical network is all-optical, the learning part that leads to its design is done through a computer.

Now this is still a really cool tech with a number of potential applications.
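
That split is the key detail: all the learning happens in software, and only the frozen result gets printed. A hypothetical toy of the train-then-freeze workflow (not the paper's model; the "propagation" here is just an FFT mixing step, and every name is made up), using JAX for the gradients:

```python
# Hypothetical train-then-freeze sketch: the phase plates are ordinary
# trainable parameters; after training they would be frozen and printed.
import jax
import jax.numpy as jnp

def forward(phases, image):
    """Toy stand-in for optical propagation: each plate adds a per-pixel
    phase delay; an FFT crudely mimics diffractive mixing between plates."""
    field = image.astype(jnp.complex64).ravel()
    for phase in phases:
        field = field * jnp.exp(1j * phase)                # passive phase plate
        field = jnp.fft.fft(field) / jnp.sqrt(field.size)  # "diffraction"
    return jnp.abs(field[:10]) ** 2                        # 10 detector spots

def loss(phases, image, label):
    return -jnp.log(jax.nn.softmax(forward(phases, image))[label])

key = jax.random.PRNGKey(0)
phases = [jax.random.uniform(k, (64,), maxval=2 * jnp.pi)
          for k in jax.random.split(key, 3)]               # 3 plates, 64 pixels
image = jnp.zeros((8, 8)).at[2:6, 3].set(1.0)              # a crude "1"
label = 1

for step in range(200):  # ordinary gradient descent, done entirely in software
    grads = jax.grad(loss)(phases, image, label)
    phases = [p - 0.5 * g for p, g in zip(phases, grads)]

# At this point `phases` would be converted to pixel heights and 3D-printed;
# the printed stack then computes forward() passively, with no electronics.
```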

  • by bickerdyke ( 670000 ) on Thursday August 02, 2018 @05:37AM (#57055324)

    That sounds like one of the coolest magic tricks ever performed.

Set up a lamp shining on a screen and have someone paint a number on a glass plate. Put it in front of the lamp to project it on the screen. Add a stack of plates and instead of the number you will see a bright spot in a field on the screen representing the number. And that without any hidden active equipment making a "decision" of any kind. Just a combination of refracting patterns.

As long as it takes more space and weighs more than an IC performing an equivalent task, this will stay a nice research subject.
We replaced hardware radio receivers with software ones; I don't see why we would replace software neural networks with hardware ones... Plus, they lack the 'plasticity' of software ones, and a scratch would make one 'dead' :(

As long as it takes more space and weighs more than an IC performing an equivalent task

But it doesn't use power and should have high radiation/EMP robustness. And it is literally as fast as lightning [xkcd.com]. Any of these factors may mitigate the space or weight concerns.

      I already wrote in a different post that it's really cool because it mixes hi-tech with absolute low-tech.

      • by Anonymous Coward

Using light, it is much faster than even a purpose-built IC. You train some network to 'perfection', then you have very low production cost in the form of perforated sheets. Useful if you have a static but high-speed task, such as recognising stuff on a production line. If the task ever changes, you retrain on a computer and replace the perforated sheets.
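
Those "perforated sheets" are really relief-patterned phase plates: each learned phase delay maps to a pixel height. A back-of-the-envelope conversion (the refractive index is an assumed value, not from the paper):

```python
import numpy as np

WAVELEN = 0.75e-3  # 0.4 THz source -> ~0.75 mm wavelength (from the paper)
N_INDEX = 1.7      # assumed refractive index of the printed plastic

def phase_to_height(phase):
    """Extra material thickness needed so light picks up `phase` radians
    of delay relative to air: h = phase * wavelength / (2*pi*(n - 1))."""
    return phase * WAVELEN / (2 * np.pi * (N_INDEX - 1))

print(phase_to_height(np.pi))  # half-wave delay -> ~0.54 mm of material
```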

      • ideal for weapon systems that need to identify targets very quickly. Might be able to help an aircraft carrier survive for another 10 minutes in an all-out war with Russia or China.

    • I don't see why we would replace software neural network by hardware ones.

      1. Speed (it is inherently parallel).
      2. Energy consumption.

I'm guessing the only value of such a thing, beyond research, would be if it is super fast for specialized applications.
Say you wanted to implement voice recognition with specific keyword capture across all the conversations in the United States at the same time, in real time. Something like this 'might' be useful, assuming you can't perform the same task on normal hardware, which I wouldn't know without testing.

    • by mysidia ( 191772 )

      As long as it take more space and more weight than an IC performing an equivalent task, this will stay a nice research subject

Space and weight are mainly a concern for portable devices and, to a lesser extent, personal electronics.

Weight isn't that important, and there is PLENTY of cheap land available in the US for industrial applications.
So imagine you take a picture on your mobile phone and want to send it to the cloud to figure out what that picture is, tag all the objects, and categorize it.

  • by Anonymous Coward

So basically this is like 3D printing a "machine" that is capable of instantaneously evaluating an R^2 -> R^2 function to some degree of precision (especially on the detector side), assuming you have already found a good enough NN representing the function.

    Disclaimer: I have not read the paper.

  • ....the Matrix started.
  • So this is the modern version of punch cards?

    • So this is the modern version of punch cards?

      More like those old physical video game DRM systems with a colored filter you placed over an obfuscated image to then read the password, but with multiple layers of filters.

  • and you have the most 2018 article.

  • by Anonymous Coward

    Unless it's also an IoT device that mines bitcoins in the cloud I'm not interested.

    • Unless it's also an IoT device that mines bitcoins in the cloud I'm not interested.

      No, it just posts all your Bitcoins to Zuckerberg

  • by itamblyn ( 867415 ) on Thursday August 02, 2018 @07:17AM (#57055644) Homepage
    The impressive part of the work is the fact that they were able to print materials that control light so well, not the actual network itself. The weight optimization and topology design were still done using standard computing hardware - this is just a physical realization of a trained network. You could build something equivalent out of water and tubes (though it would be slower and wetter obviously). The cool part is the optical control which is now possible, not the fact that MNIST works.
  • by cascadingstylesheet ( 140919 ) on Thursday August 02, 2018 @07:30AM (#57055684) Journal
    ... that's some serious buzzword bingo there.
    • ... that's some serious buzzword bingo there.

      That's what happens when the salesmen get to write the press release... 8-P

  • by jbmartin6 ( 1232050 ) on Thursday August 02, 2018 @08:00AM (#57055826)
    I hope I live to see the day when we just say "manufactured" instead of "3D-printed"
    • Re:Someday (Score:4, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday August 02, 2018 @08:51AM (#57056130) Homepage Journal

      We still say "machined" when we could just say something was manufactured via machining. And we will continue to machine forged billets for the foreseeable future, because forged metals have a superior internal structure as compared to sintered. Additive manufacturing is likely to never completely replace other forms of manufacturing, at least not until we develop true nanotechnology and can build our own structures at that level. And it is highly doubtful that will happen in any of our lifetimes.

      • Re:Someday (Score:5, Interesting)

        by quanminoan ( 812306 ) on Thursday August 02, 2018 @09:47AM (#57056486)

        Additive manufacturing will never replace common manufacturing techniques like casting or forging. This isn't just due to structure, but energy costs as well. Rastering layer by layer, constantly heating and cooling, is far more energy intensive than a "melt and pour". Batch injection molding 1000 parts will always be faster than build by layer.

Not to shit on AM however - complexity becomes free. With AM you can make a gadget as optimal as you want with intricate bio-inspired structural components, and so long as production volumes are relatively low, say 10,000, AM might be able to compete based on the combination of better performance and material cost savings. Things like rocket engine components are ideal for this.

        Not necessarily a revolution, but a very welcome new tool. CNC machining probably had a greater impact, but went largely unnoticed by the general public. Advanced automation (think robotics mixed with machine learning and AI) is probably poised to make a larger contribution than AM, IMO.

  • While the slashdot summary uses the term "light", the paper [ucla.edu] states that they used a 0.4 THz source — not the frequency/wavelength most people think of when they hear the word "light".
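
For scale, the wavelength that frequency implies is simple arithmetic:

```python
c = 3.0e8      # speed of light, m/s
f = 0.4e12     # the 0.4 THz source quoted from the paper
print(c / f)   # 7.5e-4 m, i.e. 0.75 mm: sub-millimetre terahertz radiation,
               # roughly a thousand times longer than visible wavelengths
```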

  • This must be at the top of many search requests, but does it actually make sense?
It reminds me of the intro sequence from an Iain Banks book (I can't remember which one) where a robot is on a ship which is attacked and is forced to eject different levels of consciousness, each of which uses a different underlying technology. The photonic brain was the third most powerful level of consciousness it had. It eventually has to resort to its "primitive" silicon brain because everything else has been corrupted. Maybe the book was by Vernor Vinge. I can't remember...
