IBM Creates World's First Artificial Phase-Change Neurons (arstechnica.com)
An anonymous reader shares a report from Ars Technica: IBM has created the world's first artificial nanoscale stochastic phase-change neurons, and has already used a population of 500 of them to process a signal in a manner similar to the brain. Ars Technica reports: "Like a biological neuron, IBM's artificial neuron has inputs (dendrites), a neuronal membrane (lipid bilayer) around the spike generator (soma, nucleus), and an output (axon). There's also a back-propagation link from the spike generator back to the inputs, to reinforce the strength of some input spikes. The key difference is in the neuronal membrane. In IBM's neuron, the membrane is replaced with a small square of germanium-antimony-tellurium (GeSbTe or GST). GST, which happens to be the main active ingredient in rewritable optical discs, is a phase-change material. This means it can happily exist in two different phases (in this case crystalline and amorphous) and easily switch between the two, usually by applying heat (by way of laser or electricity). A phase-change material has very different physical properties depending on which phase it's in: in the case of GST, its amorphous phase is an electrical insulator, while the crystalline phase conducts. With the artificial neurons, the square of GST begins life in its amorphous phase. Then, as spikes arrive from the inputs, the GST slowly begins to crystallize. Eventually, the GST crystallizes enough that it becomes conductive -- and voila, electricity flows across the membrane and creates a spike. After an arbitrary refractory period (a resting period where something isn't responsive to stimuli), the GST is reset back to its amorphous phase and the process begins again." The research has been published in the journal Nature Nanotechnology.
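For intuition, here's a minimal toy sketch of that accumulate-and-fire cycle in Python. All the constants are invented for illustration; the real GST crystallization dynamics (and the stochasticity IBM exploits) are far richer than a counter with noise.

    import random

    class PhaseChangeNeuron:
        """Toy model of the phase-change neuron described above.

        `crystallinity` stands in for the GST cell's phase state: input
        spikes nudge it toward crystalline; past a threshold the cell
        conducts, emits an output spike, and is reset to amorphous.
        """

        def __init__(self, threshold=1.0, jitter=0.05, refractory_steps=3):
            self.crystallinity = 0.0    # 0.0 = fully amorphous (insulating)
            self.threshold = threshold  # conduction (firing) threshold
            self.jitter = jitter        # per-spike stochasticity
            self.refractory_steps = refractory_steps
            self.resting = 0            # steps left in refractory period

        def receive_spike(self, strength):
            """Integrate one input spike; return True if the neuron fires."""
            if self.resting > 0:        # refractory: ignore stimuli
                self.resting -= 1
                return False
            # Each input spike partially crystallizes the cell, noisily.
            self.crystallinity += strength + random.uniform(-self.jitter, self.jitter)
            if self.crystallinity >= self.threshold:
                # The cell now conducts: spike, then reset to amorphous.
                self.crystallinity = 0.0
                self.resting = self.refractory_steps
                return True
            return False

    neuron = PhaseChangeNeuron()
    print([neuron.receive_spike(0.3) for _ in range(20)])
    # Fires roughly every fourth active input, with refractory gaps between.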
Inferior compared to my brain ... (Score:1)
with billions of neurons doing a way better job.
But IBM's attempt can probably be patented.
Re: (Score:2)
Re:Inferior compared to my brain ... (Score:5, Interesting)
Firing rates:
* Human neuron: a couple hundred milliseconds per firing cycle
* Chip: a couple dozen nanoseconds per firing cycle (note: nanoseconds, not microseconds!)
Size:
* Human neuron: 4-100 µm on each axis
* Chip: currently 100 nm square on a thin wafer, built on a 90 nm process; scalable to a 14 nm process.
Now, let's not get ahead of ourselves: they are far from demonstrating the ability to emulate a human brain here. But if they do manage to implement a system that properly models human neural activity, the potential to vastly outperform the brain should be obvious. The number of neurons that make up the human brain could be packed into a single-layer chip about a third of a square centimeter in area (times some factor to account for the interconnects), operating at ten million times the speed. To say nothing of the ease of integrating it directly with storage, networking, and general-purpose computing hardware.
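A quick back-of-envelope check of those numbers, in Python (the ~86 billion neuron count is the usual estimate; the cell size, process nodes, and timings are the figures above):

    # Area: 86e9 cells of 100 nm x 100 nm (90 nm process), shrunk to 14 nm.
    NEURONS = 86e9
    CELL_SIDE = 100e-9                      # metres, at the 90 nm process
    shrink = (14 / 90) ** 2                 # area scaling to a 14 nm process
    area_cm2 = NEURONS * CELL_SIDE**2 * shrink * 1e4
    print(f"{area_cm2:.2f} cm^2")           # ~0.21 cm^2, before interconnects

    # Speed: ~200 ms per neuron firing cycle vs ~20 ns per chip cycle.
    print(f"{200e-3 / 20e-9:.0e}x faster")  # 1e+07 -- ten million times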
And there is motive to advance this field, too. Neural nets are starting to have direct consumer applications [techcrunch.com] (leaps and bounds improvements in image recognition, image enhancement, bandwidth reduction, etc). And we're talking about neural net chips that could readily be sized as a coprocessor in a phone. If there's a market, they'll make them. And advance them with time.
No, IBM is far from having a "brain on a chip". But it's very interesting research, to say the least.
Re:Inferior compared to my brain ... (Score:5, Insightful)
* Chip: nothing like a neuron. Doesn't even act like a neuron.
Just because someone calls something a "neuron" or "neural network" doesn't make it anything like a brain or even an approximation of how the brain works.
Re: (Score:3)
Thank you for bringing that up. These things are neurons in the same way hoverboards hover. They don't. It is a new trend to misname everything to make it sound way better than it really is.
Re: (Score:2)
No it isn't. Marketing has been around for a long time.
Re: (Score:2)
I suppose now people are so used to it that they expect misleading marketing in science too. Sad.
Re: (Score:2, Insightful)
Chip: nothing like a neuron. Doesn't even act like a neuron.
No one wants airplanes that grow feathers, poop, flap, or tweet. We just want ones that fly faster, higher, and carry larger payloads than birds. Likewise, we don't want artificial neurons that age, require oxygen, or contract Alzheimer's disease. We just want ones that process information faster and more precisely. So, could you please identify something related to information processing that "real" neurons do better? It's easy to be skeptical. What's hard is to put your ideas on the line and see what you can come up with.
Re: (Score:2)
* Human Neuron: Cannot be exposed to deep-space radiation. Will die. Cannot handle extreme G-forces. Learning new skills takes ages.
* Chip: Can be shielded from deep-space radiation. Will not die. Enables advanced deep-space robotics applications. Can take extreme G-forces. Can be weaponised. Could power medical robots with "plugins" for each medical discipline.
Re: (Score:2, Informative)
IBM Video (Score:3, Informative)
IBM made a video a while ago which was a pretty interesting watch if you're interested in this stuff:
"From BrainScales to Human Brain Project: Neuromorphic Computing Coming of Age"
https://www.youtube.com/watch?v=g-ybKtY1quU
IBM actually puts out quite a lot of interesting tech-related videos :)
https://www.youtube.com/user/IBMLabs
Something that always bothers me with these stories... (Score:4, Interesting)
Neurons work primarily in terms of communicating - I'd say they're basically communicating machines as much as muscles are movement machines. They store states, query other neurons, take external inputs, and work together to do virtually everything an animal can do, as a macroscopic being. As they grow, they have to figure out their particular role based on their inputs and outputs.
So, why can't we just query them for their contents? With stories like this, we're making artificial nerves - shouldn't there be some way we can signal the nerves, push some simple neurotransmitters, and experiment until we get enough signal+noise to figure out the 'language'? Even in simple creatures, it seems like we should be able to do this enough to ask a neuron its contents, then query neighbors, until we at least get a loose map of queryable resources.
Every once in a while I search Google Scholar and the like to see what folks are doing along these lines, and I never seem to see anyone take this approach, or even attempt to reach for mechanisms of this form. But if we can see, learn, and imagine in real time, there has to be at least some analogue of an informational query system we can use; static-purpose neuron maps just wouldn't make sense, even at that scale, even with specialization.
Ryan Fenton
Re: (Score:3)
Personally I feel that breakthroughs in artificial brains may be locked to advances in the medical field; as you've pointed out, we've yet to understand an individual neuron's role in the grand scheme, or what makes all this mush of neurons self-aware. So far we've only examined cause and effect by pumping chemicals and electrical jolts into a live brain, which seems almost primitive compared to what we don't know.
Re: (Score:3)
I dunno... we figured out how atoms worked by smashing them...
Re: (Score:3, Insightful)
"singnal+noise to figure out the 'language'"
Why would there be one language? We understand how the basic neuron works, the back propagation reinforcement, the weighting of inputs and so on, BUT how they configure themselves depends on the data they are fed.
That is what Google's Deep Dream is: it's a look into the layers of a neural network so you can see how it's configured. But the dream is different depending on the training data set. Even the same training data set, fed in a different order, ends up with a different configuration.
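A minimal sketch of that order dependence, in Python: the same four training examples, the same hyperparameters, only the presentation order differs, and the learned weights come out different. (The toy single sigmoid unit and all its numbers are invented for illustration.)

    import math, random

    def train(data, seed, epochs=5, lr=0.5):
        """Train one sigmoid unit with per-sample (online) updates."""
        rng = random.Random(seed)       # seed only changes sample order
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            order = data[:]
            rng.shuffle(order)
            for x, y in order:
                p = 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + b)))
                g = p - y               # gradient of log-loss w.r.t. the sum
                w[0] -= lr * g * x[0]
                w[1] -= lr * g * x[1]
                b -= lr * g
        return w, b

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR
    print(train(data, seed=1))   # same data, different order ->
    print(train(data, seed=2))   # measurably different final weights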
Re: (Score:1)
I agree that there is no language. The closest you could come to a language is an understanding of what makes neurons make the connections they do, and why they strengthen/weaken signals from other neurons. As the AI researcher below stated about finding these things out, "it's very difficult to do, and so far no one's been able to figure out a good way to do it." It actually sounds damn near impossible to do, at least with our present level of technology. If we could make microscopic self-assembling
It's a bit difficult (Score:5, Informative)
So, why can't we just query them for their contents?
(I'm an AI researcher by day.)
It's a very good idea, and something that many researchers have thought about. The problem is that it's very difficult to do, and so far no one's been able to figure out a good way to do it.
The cerebral cortex is composed of "columns", where each column is about the thickness of a human hair. If you could peel the cortex off and lay it flat, it would be about as thick as a business card. To all appearances, the cortex is composed of identical columns, with some slight variations for I/O columns and such.
Each column contains roughly 100 neurons in a handful of types. Any individual neuron makes between 2000 and 15000 connections with other neurons, and some neurons in a column make connections with neurons in other columns.
So you have roughly 100 cells in a column with the diameter of a human hair and a length equal to the thickness of a business card.
It's difficult to make a wire thin enough to contact one neuron; it's all but impossible to manoeuvre such a wire into place to touch one neuron; it has to have insulation everywhere except the tip to avoid picking up signals from other neurons; and the very faint signals have to be amplified close to the source to avoid noise.
It's just about impossible to map the connections between neurons because there are so many of them and the connections are much *much* smaller than the neurons themselves. Also, you have to do this without killing the neuron, and without killing the other neurons you have to go through to reach the connections.
And this has to be done while the organism is living, and keeping it alive while drilling into the head cavity is a trick in itself (and dealing with the resulting pain, blood loss, &c.). You can get some information from non-mammals (such as sea worms), but none of those have a mammalian cortex to study.
Every once in a while I read about new techniques using fiber optics and related technologies, but there's still the issue of routing the sensor (whatever it may be) to the neurons in a way that doesn't chop through other nerves.
One technology I read about has a pad with tiny needles laid down on the cortex. The needles can be made using chip fabrication technology, and you can have amplifiers on the chip at the base of the needles... but this still can only be applied to the *surface* of the cortex, and only connects to those nerves which are physically at the top of the column, and not the ones inside.
All in all, it's an extremely difficult problem that no one's figured out yet.
Re:It's a bit difficult (Score:4, Interesting)
I seem to remember some research that showed small spicule structures inside the axons, leading to the terminating dendrites, which seemed to be the physical medium of data storage and decision making inside individual neurons.
If that is the case, then with a combination of a novel signaling method (say, an artificially imposed communication protocol using an assortment of photon emission spectra, created using several biotag luminescence proteins attached to different parts of this spicule assemblage), having a small sensor array stuck on the top of the cortex is not such a liability. You can get deep signal data without having to jam a huge electrode in there and severing the structures you are trying to examine in operation, by observing the emitted energy at the surface. Rather than an electrical interface, it is a photomultiplier-based amplifier, which filters noise with multiple sensor columns (needles).
Bonus if you can include a photomultiplier mechanism inside the axon itself to make it flash its activity states more brightly. It may be necessary to increase the metabolic activity of the animal neurons through further genetic manipulation in order to get enough optical signal without degrading the activity going on inside the axon, though.
Another radical idea may be to "stake" a single, custom-engineered neuron onto such a photoamplifying sensor needle by coating the needle in cellular membrane proteins, gaining direct structural connections to this spicule structure in the process, and letting this staked neuron migrate its own dendrites into the region of animal neural tissue being examined. That solves the wiring problem, possibly some of the power-generation problem for the photoamplification, and some others as well.
Re: (Score:1)
Re: (Score:3)
Are you talking about EEG [google.is] or fMRI [google.is] or the like? Not nearly the sort of resolution you're looking for, by many orders of magnitude. You can see a comparison of techniques here [wikipedia.org].
Re: (Score:1)
Re: (Score:2)
1) Neither EEG nor fMRI is "invasive"
2) You can't just "get it better", that's the whole point.
Re: (Score:1)
Why can't we get better at it? Advance the underlying technology and
Re: (Score:2)
It's always seemed a losing battle to try to read out directly from neurons as they are; you have to make modifications to make your task easier, such as photoluminescence (as neurons are not totally opaque at those scales). Obviously you're never going to be able to read out an entire mammalian brain with a single CCD sensor, but as for individual patches on a per-sensor basis...
Re: (Score:2)
Re: (Score:3, Informative)
Everything you're talking about is focused on the electrical state of the cells, which is a tiny but easily measured part of their computational state. Even synaptic communication involves a mixture of neurotransmitters with other small-molecule messengers and proteins that don't affect synaptic potential directly. Never mind the more general diffusible factors. A neuron model that replicates only the electrical behavior of a neuron is going to miss most of the learning capacity.
Re: (Score:1)
Re: (Score:1)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
The magic of the brain isn't that its core functionality is complicated or hard to grasp; it's in the sheer scale of the connectivity of the network. A single brain dwarfs all human-made networks of any kind, combined.
Re:Not the first (Score:4, Informative)
Well said, sir! Also, simulated neural networks in software have been around since rather far back in the last century (on paper, back to 1933 and Nicholas Rashevsky), and as the author of a very large, very advanced neural network modeling tool, I'd further add that we can already build rather large simulated neural networks in software, and even build composite NNs using a mix of software and parallel hardware.
A transistor-based gate is already in some sense a neuron, and it isn't that difficult to build collections of them that perform even more like a neuron. The problem is that even if we do so, we don't really have any good idea what to do with it, and we have a very hard time scaling it up to the number of connections visible in the human brain. By "very hard", I mean "not possible to achieve, not likely to become possible to achieve, any time soon", at least not without a serious breakthrough. We are two orders of magnitude short of matching the number of neurons in the human brain in JUST transistor count, and we cannot come anywhere near 1000 to 10000 connections per transistor. And finally, transistors are not neurons, and even if they were neurons we have no idea how to build a massive, amorphous neural network and then train it somehow (or program it somehow) to do useful work.
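Rough numbers behind that claim, in Python (the transistor count for a mainstream 2016-era chip is an assumption, taken as on the order of a billion; the brain figures are the usual estimates):

    CHIP_TRANSISTORS = 1e9   # assumed: a mainstream chip, order of magnitude
    NEURONS = 86e9           # usual estimate for the human brain
    SYNAPSES = 1e15          # roughly a quadrillion connections

    print(f"neurons vs. one chip's transistors: {NEURONS / CHIP_TRANSISTORS:.0f}x short")
    print(f"connections per neuron: {SYNAPSES / NEURONS:.0f}")
    # ~86x short on raw device count (roughly two orders of magnitude),
    # and ~12,000 connections per neuron versus a logic gate's
    # single-digit fan-out.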
It is enormously difficult to write a good simulated neural network program to do relatively simple tasks such as noisy pattern identification or predictive modeling of unstructured high-dimensional data, even with complete control over the algorithm. There seems to be this feeling out there that if one just builds an artificial brain with a lot of artificial neurons and hits it with data, it will somehow "wake up," smell the metaphorical coffee of life, and do some sort of useful work. I personally think this is enormously optimistic, but then, I actually have some grasp of the mathematical complexity of the optimization problem involved.
This is more in the category of building (or rebuilding with more modern technology) a unit that MAY prove useful if we ever have a breakthrough on the half-dozen serious obstacles associated with AI via NNs, most of which can actually be made and will actually be made (if at all) with simulated NNs. Only after simulated NNs demonstrate a clear pathway to going from a collection of artificial "neurons" with some specific algorithmic functionality and ability to be interconnected at a very fine scale to a useful, profitable neural network that does actual work worth doing will anybody bother to dump a billion or so dollars into a foundry for artificial neuron devices. And there may even be a few such applications today -- some networks are very simple algorithmically, but they are also the least extendable to really hard problems or problems we cannot already solve efficiently other ways. Letter recognition, maybe.
The human brain has a quadrillion or so synaptic connections, and it is difficult to even start estimating the volume of the phase space represented by all of those connections. The "switches" are indeed much slower than they are in computers, but they run in parallel as well as serially, and it is estimated that they are "equivalent" to a terabit-per-second processor in their full-parallel speed. We can achieve similar scales in simulation on parallel supercomputers, of course, but not with anywhere near the number of "neurons" or "synapses"; and if we really use TIPS-scale computing resources, they probably aren't going to be doing their "AI" with NNs anyway for anything but selected problems.
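To make that throughput claim concrete, a small Python sketch (the synapse count is the figure above; the average firing rate is an assumption, since cortical neurons are mostly quiet):

    SYNAPSES = 1e15
    for rate_hz in (0.1, 1.0):   # assumed average firing rates
        print(f"{rate_hz} Hz average -> {SYNAPSES * rate_hz:.0e} synaptic events/s")
    # Whether that is "equivalent" to a terabit-per-second processor
    # depends on how many bits of work you credit to each synaptic event.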
So, cool top article, good on IBM, and all that, but I'm not holding my breath for a phone that actually completes words sanely using its IBM(tm) Neural Processor...
rgb
Re: (Score:2)
A baby's brain isn't an amorphous soup of neurons.
Domo arigato, Mr. Roboto... (Score:2)
I've got a secret I've been hiding under my skin
My heart is human, my blood is boiling, my brain IBM.
Re: (Score:1)
The prophets have spoken. :)
IBM creates a lot of things (Score:2)
I for one welcome our AI overlords (Score:1)
Maybe... (Score:3)
NMSU had a very similar device in 1992 (Score:1)
I was an undergrad at NMSU in 1992, working for a team developing artificial neural network processing elements. By 1992 a paper had been published [ieee.org] on using the PEs for pulse-stream filtering. The specific PE used for that example had electrically isolated dendritic inputs and an axonal output. The team had previously developed a PE that had dendritic input and axonal output on the same circuit. These behaved exactly the same as the ones from IBM, without the need for fancy phase-change materials
Re: (Score:2)
As far as a 'brain' goes, though, it still doesn't account for consciousness
A neat hypothesis. Have any evidence to support it?
There's quite a bit of evidence (though I'll readily admit it's not conclusive) suggesting that you're quite wrong.
Video of IBM's Artificial Neurons and Synapses (Score:1)
innovation? (Score:1)