
IBM Creates World's First Artificial Phase-Change Neurons (arstechnica.com)

An anonymous reader shares a report from Ars Technica: IBM has created the world's first artificial nanoscale stochastic phase-change neurons, and has already used a population of 500 of them to process a signal in a manner similar to the way the brain does. Ars Technica reports: "Like a biological neuron, IBM's artificial neuron has inputs (dendrites), a neuronal membrane (lipid bilayer) around the spike generator (soma, nucleus), and an output (axon). There's also a back-propagation link from the spike generator back to the inputs, to reinforce the strength of some input spikes. The key difference is in the neuronal membrane. In IBM's neuron, the membrane is replaced with a small square of germanium-antimony-tellurium (GeSbTe or GST). GST, which happens to be the main active ingredient in rewritable optical discs, is a phase-change material. This means it can happily exist in two different phases (in this case crystalline and amorphous), and easily switch between the two, usually by applying heat (by way of laser or electricity). A phase-change material has very different physical properties depending on which phase it's in: in the case of GST, its amorphous phase is an electrical insulator, while the crystalline phase conducts. With the artificial neurons, the square of GST begins life in its amorphous phase. Then, as spikes arrive from the inputs, the GST slowly begins to crystallize. Eventually, the GST crystallizes enough that it becomes conductive -- and voila, electricity flows across the membrane and creates a spike. After an arbitrary refractory period (a resting period where something isn't responsive to stimuli), the GST is reset back to its amorphous phase and the process begins again." The research has been published in the journal Nature.
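
The accumulate-and-fire behaviour described above can be sketched in a few lines of Python. This is only a toy model -- the parameter values and the Gaussian noise term standing in for the device's stochasticity are illustrative assumptions, not IBM's actual device physics:

```python
import random

class PhaseChangeNeuron:
    """Toy sketch of a stochastic phase-change neuron: input spikes gradually
    crystallize a GST cell; once crystallization crosses a conduction threshold
    the neuron fires, then resets to the amorphous phase and sits out a
    refractory period. All numbers here are illustrative assumptions."""

    def __init__(self, threshold=1.0, step=0.08, jitter=0.02, refractory=5):
        self.crystallinity = 0.0    # 0 = fully amorphous, >= threshold = conductive
        self.threshold = threshold  # crystallization level at which it fires
        self.step = step            # mean crystallization added per input spike
        self.jitter = jitter        # stochastic spread of each phase change
        self.refractory = refractory
        self.rest = 0               # remaining refractory time steps

    def receive(self, spike: bool) -> bool:
        """Advance one time step; return True if the neuron fires."""
        if self.rest > 0:           # unresponsive during the refractory period
            self.rest -= 1
            return False
        if spike:
            self.crystallinity += random.gauss(self.step, self.jitter)
        if self.crystallinity >= self.threshold:
            self.crystallinity = 0.0        # reset ("melt-quench") to amorphous
            self.rest = self.refractory
            return True
        return False

neuron = PhaseChangeNeuron()
spike_times = [t for t in range(200) if neuron.receive(random.random() < 0.3)]
print("output spikes at t =", spike_times)
```
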
  • by Anonymous Coward

    with millions of neurons doing a way better job.
    But IBM's attempt can probably be patented.

    • Even the stupidest post requires more intelligence than the cleverest computer can currently produce. Indeed.
    • by Rei ( 128717 ) on Thursday August 04, 2016 @05:15AM (#52642741) Homepage

      Firing timescales:
        * Human neuron: a couple hundred milliseconds
        * Chip: a couple dozen nanoseconds (note: not microseconds!)

      Size:
        * Human neuron: 4-100 µm on each axis
        * Chip: currently 100 nm square on a thin wafer, on a 90 nm process; scalable to a 14 nm process.

      Now, let's not get ahead of ourselves: they are far from demonstrating the ability to emulate a human brain here. But if they do manage to implement a system that properly models human neural activity, the potential to vastly outperform the brain should be obvious. The number of neurons that make up the human brain could be packed into a single-layer chip about a third of a square centimeter in area (times some factor to account for the interconnects), operating at ten million times the speed. To say nothing of the ease of integrating it directly with storage, networking, and general-purpose computing hardware.
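
      For what it's worth, the rough arithmetic behind those figures looks like the sketch below; the ~86 billion neuron count and the assumption that the device shrinks linearly with the process node are assumptions not stated above, and the result lands in the same ballpark as the fraction-of-a-square-centimeter claim.

```python
# Back-of-envelope check of the figures above; the neuron count and the
# linear process-node scaling are assumptions, not measurements.
NEURONS = 86e9                # approximate human neuron count
DEVICE_EDGE = 100e-9          # metres: a 100 nm square device on a 90 nm process
PROCESS_NOW, PROCESS_TARGET = 90e-9, 14e-9

area_now = NEURONS * DEVICE_EDGE ** 2                         # m^2 at 90 nm
scaled_edge = DEVICE_EDGE * PROCESS_TARGET / PROCESS_NOW      # shrunk device edge
area_scaled = NEURONS * scaled_edge ** 2                      # m^2 at 14 nm

print(f"area at 90 nm process: {area_now * 1e4:.1f} cm^2")    # ~8.6 cm^2
print(f"area at 14 nm process: {area_scaled * 1e4:.2f} cm^2") # ~0.21 cm^2, before interconnect

# Speed: ~200 ms biological firing cycle vs ~20 ns device switching.
print(f"speed factor: {0.2 / 20e-9:.0e}")                     # ~1e7 -- ten million times
```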

      And there is motive to advance this field, too. Neural nets are starting to have direct consumer applications [techcrunch.com] (leaps and bounds improvements in image recognition, image enhancement, bandwidth reduction, etc). And we're talking about neural net chips that could readily be sized as a coprocessor in a phone. If there's a market, they'll make them. And advance them with time.

      No, IBM is far from having a "brain on a chip". But it's very interesting research, to say the least.

      • by 110010001000 ( 697113 ) on Thursday August 04, 2016 @07:19AM (#52643175) Homepage Journal
        * Human neuron: an actual neuron
        * Chip: nothing like a neuron. Doesn't even act like a neuron.
        Just because someone calls something a "neuron" or "neural network" doesn't make it anything like a brain or even an approximation of how the brain works.
        • Thank you for bringing that up. These things are neurons in the same way hoverboards hover. They don't. It is a new trend to misname everything to make it sound way better than it really is.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          Chip: nothing like a neuron. Doesn't even act like a neuron.

          No one wants airplanes that grow feathers, poop, flap, or tweet. We just want ones that fly faster, higher, and carry larger payloads than birds. Likewise, we don't want artificial neurons that age, require oxygen, or contract Alzheimer's disease. We just want ones that process information faster and more precisely. So, could you please identify something related to information processing that "real" neurons do better? It's easy to be skeptical. What's hard is to put your ideas on the line and see what you come up with.

      • by Tim12s ( 209786 )

        * Human neuron: Cannot be exposed to deep-space radiation; will die. Cannot withstand extreme G-forces. Learning new skills takes ages.
        * Chip: Can be shielded from deep-space radiation; will not die. Suited to advanced deep-space robotics applications. Can withstand extreme G-forces. Can be weaponised. Could enable medical robotics with "plugins" for each medical discipline.

  • IBM Video (Score:3, Informative)

    by Anonymous Coward on Wednesday August 03, 2016 @11:32PM (#52641875)

    IBM made a video a while ago which was a pretty interesting watch if you're interested in this stuff:

    "From BrainScales to Human Brain Project: Neuromorphic Computing Coming of Age"
    https://www.youtube.com/watch?v=g-ybKtY1quU

    IBM actually puts out quite a lot of interesting tech-related videos :)
    https://www.youtube.com/user/IBMLabs

  • by RyanFenton ( 230700 ) on Wednesday August 03, 2016 @11:33PM (#52641877)

    Neurons work primarily in terms of communicating - I'd say they're basically communicating machines as much as muscles are movement machines. They store states, query other neurons, take external inputs, and work together to do virtually everything an animal can do, as a macroscopic being. As they grow, they have to figure out their particular role based on their inputs and outputs.

    So, why can't we just query them for their contents? With stories like this, we're making artificial nerves - shouldn't there be some way we can signal the nerves, push some simple neurotransmitters, and experiment until we get enough signal+noise to figure out the 'language'? Even in simple creatures, it seems like we should be able to do this enough to ask a neuron its contents, then query neighbors, until we at least get a loose map of queryable resources.

    Every once in a while I search Google Scholar and the like to see what folks are doing along these lines, and I never seem to see anyone take this approach, or even attempt to reach for mechanisms of this form. But if we can see, learn, and imagine in real time, there has to be at least some analogue of an informational query system we can use; static, fixed-purpose neuron maps just wouldn't make sense, even at that scale, even with specialization.

    Ryan Fenton

    • Personally, I feel that breakthroughs in artificial brains may be doomed to be locked to advances in medical understanding; as you've pointed out, we've yet to understand an individual neuron's role in the grand scheme, or what makes all this mush of neurons self-aware. So far we've only examined cause and effect by pumping chemicals and electrical jolts into a live brain, which seems almost primitive compared to how much we still don't know.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      "singnal+noise to figure out the 'language'"

      Why would there be one language? We understand how the basic neuron works, the back-propagation reinforcement, the weighting of inputs, and so on -- BUT how neurons configure themselves depends on the data they are fed.

      That is what Google's Deep Dream is: a look into the layers of a neural network so you can see how it's configured. But the dream is different depending on the training data set. Even the same training data set, fed in a different order, ends up with a different configuration.
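
      A trivially small illustration of that order-dependence (plain SGD on a two-weight linear model with made-up data -- nothing to do with brains or IBM's hardware): one pass over the same three training examples, presented in two different orders, ends with different weights.

```python
# Toy illustration of order-dependence in training: one SGD pass over the
# SAME three examples in two different orders leaves different weights.
DATA = [((1.0, 0.0), 1.0), ((1.0, 1.0), 3.0), ((2.0, 1.0), 4.0)]
LR = 0.2

def sgd_pass(examples, w=(0.0, 0.0)):
    w1, w2 = w
    for (x1, x2), y in examples:
        err = y - (w1 * x1 + w2 * x2)   # prediction error on this example
        w1 += LR * err * x1             # per-example gradient step
        w2 += LR * err * x2
    return round(w1, 3), round(w2, 3)

print("original order:", sgd_pass(DATA))        # (1.528, 0.944)
print("reversed order:", sgd_pass(DATA[::-1]))  # (1.576, 0.92)
```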

      • by Anonymous Coward

        I agree that there is no language. The closest you could come to a language is an understanding of what makes neurons form the connections they do, and why they strengthen/weaken signals from other neurons. As the AI researcher below stated about finding these things out, "it's very difficult to do, and so far no one's been able to figure out a good way to do it." It actually sounds damn near impossible to do, at least with our present level of technology. If we could make microscopic self-assembling...

    • It's a bit difficult (Score:5, Informative)

      by Okian Warrior ( 537106 ) on Thursday August 04, 2016 @12:06AM (#52642013) Homepage Journal

      So, why can't we just query them for their contents?

      (I'm an AI researcher by day.)

      It's a very good idea, and something that many researchers have thought about. The problem is that it's very difficult to do, and so far no one's been able to figure out a good way to do it.

      The cerebral cortex is composed of "columns", where each column is about the thickness of a human hair. If you could peel the cortex off and lay it flat, it would be about as thick as a business card. To all appearances, the cortex is composed of identical columns, with some slight variations for I/O columns and such.

      Each column contains roughly 100 neurons in a handful of types. Any individual neuron makes between 2000 and 15000 connections with other neurons, and some neurons in a column make connections with neurons in other columns.

      So you have about 100 cells in a column with the diameter of a human hair and a length equal to the thickness of a business card.
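
      To put rough numbers on that wiring density: the sketch below treats all of a neuron's connections as if they stayed inside the column (an overestimate), and assumes a hair diameter of about 0.05 mm and a business-card thickness of about 0.3 mm, which are approximations not taken from the comment.

```python
import math

# Rough wiring density implied by the column description above. The 0.05 mm
# hair diameter and 0.3 mm card thickness are assumed approximations.
NEURONS_PER_COLUMN = 100
CONNECTIONS_LOW, CONNECTIONS_HIGH = 2_000, 15_000
COLUMN_DIAMETER_MM, COLUMN_HEIGHT_MM = 0.05, 0.3

connections_low = NEURONS_PER_COLUMN * CONNECTIONS_LOW
connections_high = NEURONS_PER_COLUMN * CONNECTIONS_HIGH
column_volume_mm3 = math.pi * (COLUMN_DIAMETER_MM / 2) ** 2 * COLUMN_HEIGHT_MM

print(f"connections per column: {connections_low:,} to {connections_high:,}")
print(f"column volume: {column_volume_mm3:.6f} mm^3")
print(f"up to {connections_high / column_volume_mm3:,.0f} connections per mm^3")
```

      Even as a loose upper bound, that works out to billions of connections per cubic millimetre, which is part of why threading insulated probes through such a volume is so hard.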

      It's difficult to make a wire thin enough to contact one neuron; it's nearly impossible to manoeuvre such a wire into place to touch just one neuron; it has to have insulation everywhere except the tip to avoid picking up signals from other neurons; and the very faint signals have to be amplified close to the source to avoid noise.

      It's just about impossible to map the connections between neurons, because there are so many of them and the connections are much *much* smaller than the nerves themselves. Also, you have to do this without killing the nerve, or killing the other nerves you have to go through to reach the connections.

      And this has to be done while the organism is living, and keeping it living while drilling into the head cavity is a trick in itself (and dealing with the resulting pain, blood loss, &c.). You can get some information from non-mammals (such as sea worms), but then none of those have a mammalian cortex to study.

      Every once in a while I read about new techniques using fiber optics and related technologies, but there's still the issue of routing the sensor (whatever it may be) to the neurons in a way that doesn't chop through other nerves.

      One technology I read about has a pad with tiny needles laid down on the cortex. The needles can be made using chip fabrication technology, and you can have amplifiers on the chip at the base of the needles... but this still can only be applied to the *surface* of the cortex, and only connects to those nerves which are physically at the top of the column, and not the ones inside.

      All in all, it's an extremely difficult problem that no one's figured out yet.

      • by wierd_w ( 1375923 ) on Thursday August 04, 2016 @02:18AM (#52642365)

        I seem to remember some research that showed small spicule structures inside the axons leading to the terminating dendrites, which seemed to be the physical medium of data storage and decision making inside individual neurons.

        If that is the case, then combined with a novel signaling method (say, an artificially imposed communication protocol using an assortment of photon emission spectra, created using several biotag luminescence proteins attached to different parts of this spicule assemblage), having a small sensor array stuck on the top of the cortex is not such a liability. You can get deep signal data without having to jam a huge electrode in there and sever the structures you are trying to examine in operation, by observing the emitted energy at the surface. Rather than an electrical interface, it is a photomultiplier-based amplifier, which filters noise with multiple sensor columns (needles).

        Bonus if you can include a photomultiplier mechanism inside the axon itself to make it flash its activity states more brightly. It may be necessary to increase the metabolic activity of the animal neurons through further genetic manipulation in order to get enough optical signal without degrading the activity going on inside the axon, though.

        Another radical idea may be to "stake" a single, custom-engineered neuron onto such a photoamplifying sensor needle, by coating the needle in cellular membrane proteins, gaining direct structural connections to this spicule structure in the process, and letting this staked neuron migrate its own dendrites into the region of animal neural tissue being examined. That solves the wiring problem, possibly some of the power-generation problem for the photoamplification, and some others as well.

      • As a young lad I imagined that telepathy could work by transmitting information via EMF. That is, the sodium and potassium ions carrying the charge when a neuron fires would, as a whole, emit an EM pattern that another neural construct could be uniquely capable of picking up (say, twins), thus replicating the thought pattern remotely. Forgive me, it was a young boy daydreaming; however, the thought comes to bear again reading your post. At what point would an EMF meter be of high enough resolution to pick up activities at...
        • by Rei ( 128717 )

          Are you talking about EEG [google.is] or fMRI [google.is] or the like? Not nearly the sort of resolution you're looking for, by many orders of magnitude. You can see a comparison of techniques here [wikipedia.org].

          • Nope, those would be invasive; however, that part about "magnetoencephalography" pretty much covers it -- that was my line of thought. Just do that. Get better at it and do that :).
            • by Rei ( 128717 )

              1) Neither EEG nor fMRI is "invasive".
              2) You can't just "get better at it" -- that's the whole point.

              • Sorry, right -- my eye caught the MRI part and went from there, my bad.
                Why can't we get better at it? Advance the underlying technology and ...
      • by Rei ( 128717 )

        It's always seemed a losing battle to try to read out directly from neurons as they are; you have to make modifications to ease the task, such as photoluminescence (as neurons are not totally opaque at those scales). Obviously you're never going to be able to read out an entire mammalian brain with a single CCD sensor, but as for individual patches on a per-sensor basis...

        • You should have a look at these guys [nih.gov], who bolted a single-photon microscope to a mouse spinal cord in order to watch calcium transients in awake, behaving animals. The mouse is small enough that they can image the full depth of the cord.
      • Re: (Score:3, Informative)

        by tburkhol ( 121842 )

        Everything you're talking about is focused on the electrical state of the cells, which is a tiny but easily measured part of their computational state. Even synaptic communication involves a mixture of neurotransmitters with other small-molecule messengers and proteins that don't affect synaptic potential directly. Never mind the more general diffusible factors. A neuron model that replicates only the electrical behavior of a neuron is going to miss most of the learning capacity.
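
        One way to picture that gap (a toy sketch with made-up dynamics, not a biophysical model): give a spiking unit a second, slow "chemical" variable that scales how strongly inputs drive it. Under a constant input the unit's firing rate drifts over time, even though a snapshot of its electrical state alone wouldn't tell you why -- the learning-related state lives in the slow variable.

```python
# Purely illustrative: a fast "electrical" variable plus a slow "chemical"
# variable standing in for neuromodulators/second messengers. The numbers
# and update rules are made up for the sketch.
class TwoTimescaleUnit:
    def __init__(self):
        self.v = 0.0        # fast electrical state (membrane-potential-like)
        self.chem = 1.0     # slow chemical state scaling input efficacy

    def step(self, inp: float) -> bool:
        self.v += self.chem * inp - 0.1 * self.v   # fast dynamics with leak
        self.chem = max(0.5, self.chem * 0.999)    # very slow decay every step
        if self.v > 1.0:                           # threshold crossing = spike
            self.v = 0.0
            self.chem += 0.05                      # slow potentiation when it fires
            return True
        return False

unit = TwoTimescaleUnit()
first_100 = sum(unit.step(0.3) for _ in range(100))
next_100 = sum(unit.step(0.3) for _ in range(100))
print("spikes in first 100 steps:", first_100)
print("spikes in next 100 steps:", next_100)   # higher: the slow state changed
```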

      • by notil ( 4291169 )
        Ah, this is so excellent. Thanks for sharing. I'm a neuroscience/medicine student and this is what keeps me awake at night...We'll get there one day!
    • by Anonymous Coward
      I get the feeling that you are attempting to understand these pseudo-neurons as if they were a special case of a logic gate. Based on what has been published, they seem more like sensors tuned to react to a specific range of stimuli. Cascade enough of these and the resulting device has the required sensitivity to detect conditions which tell you things like what string of seemingly random entropy was used to encipher a communications channel, or what sort of molecules will bind to a specific type of cancerous cell...
    • Concentrating on the individual neurons might not be the most useful thing. The usefulness of neurons comes from the way that they are "wired" together - the network is key. It's analogous to "the network is the computer".
    • Because the brain doesn't work that way. It isn't just a collection of queryable neurons in a known state. No one really knows how it works. This type of AI research is a joke. They just call what they built "neurons", but they aren't anything like neurons in the brain.
      • I'd say that for the most part, we know somewhere north of 95% of "how" it works. The problem is we will probably never have a good approximation of its FPGA layout.
        The magic of the brain isn't that its core functionality is complicated or hard to grasp -- it's in the sheer scale of the connectivity of the network. A single brain dwarfs all human-made networks of any kind, combined.
  • I've got a secret I've been hiding under my skin
    My heart is human, my blood is boiling, my brain IBM.

  • Things that we have yet to see actually used in something practical.
  • My body is available as a host
  • by ZecretZquirrel ( 610310 ) on Thursday August 04, 2016 @07:11AM (#52643139)
    ...artificial neurons will buy IBM stock.
  • I was an undergrad at NMSU in 1992, working for a team developing artificial neural network processing elements in the 1990s. By 1992 a paper had been published [ieee.org] on using the PEs for pulse-stream filtering. The specific PE used for that example had electrically isolated dendritic inputs and axonal output. The team had previously developed a PE that had dendritic input and axonal output on the same circuit. These behaved exactly the same as the ones from IBM, without the need for fancy phase-change materials.

  • https://youtu.be/hXeO8Kzz3bo [youtu.be] "In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple..."
  • They always have such a colorful way of slashing their stock value (and "overhead").
