IBM Shows Off Brain-Inspired Microchips 106
An anonymous reader writes "Researchers at IBM have created microchips inspired by the basic functioning of the human brain. They believe the chips could perform tasks that humans excel at but computers normally don't. So far they have been taught to recognize handwriting, play Pong, and guide a car around a track. The same researchers previously modeled this kind of neurologically inspired computing using supercomputer simulations, and claimed to have simulated the complexity of a cat's cortex — a claim that sparked a firestorm of controversy at the time. The new hardware is designed to run this same software much more efficiently."
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
C'mon, people. The world doesn't work that way. Take your skynets and your laser-bearing sharks and your Soviet Russias and your petrified Natalie Portmans (with hot grits) and get off my scientifically accurate lawn.
If you simulate a brain, you get just that: a brain. It's probably not going to have any particularly exciting levels of intelligence, and unless yo
Re: (Score:2)
Scientifically accurate?
Maybe right now you have a point. Although, my first thought was not about Skynet, but that something modeled after a cat's brain would be driving my car. I have seen how those little bastards react to a laser beam on the ground (funny you mentioned lasers), and the last thing I need while doing 75 mph down the freeway is some joker in another car shining a laser erratically in front of mine.
That being said, the fear of Skynet actually comes from a reasoned and logical viewpoint. It has
Re: (Score:2)
There are a lot of problems with your post, I'm afraid, and they're mostly within your understanding of what gives rise to what we call human behaviour.
The first issue pops up in your quip about cats: why the hell would anyone program a car to behave like a cat? Developing a cortex that not only simulates the pathways but the twitch responses and activation thresholds of a particular living organism is such a phenomenal amount of exquisitely-detailed work that it would make absolutely no sense to repurpose
Re: (Score:2)
By the time we are capable of creating a computer that acts human, we will know, exhaustively and in every detail, what it means to be human. And we will be able to pick and choose without uncertainty what we are putting into it. And if we put sentience and a sense of self into it... well, then the product is going to be protected by law as an individual; there will be psychologists, philosophers, and neurologists lining up left and right to make sure it happens. And dealing with tyrannical or temperamental behaviour, or the responsibility of interacting with others, is going to be no different from the same situation between humans. It will be just as ethically impressionable as anyone else.
I wouldn't bet on it being so planned. As you probably know, a lot of discoveries and breakthroughs are serendipitous. I would imagine creating a true AI would be the same, especially considering the topic. It seems like it would be one of those things where an extremely small detail can make all the difference, like changing a bit of code in a recursion-heavy function. We're attempting to make AI now. All it takes is for one person to suddenly get it right.
And we're nowhere close to knowing what makes a
Re: (Score:2)
We're not talking about a pure AI, we're talking about an emulated brain based on a human one, probably.
All evidence regarding artificial intelligence suggests very strongly that it won't be in the form of a breakthrough or serendipitous discovery. The mind is such a fabulously intricate thing that the only way we could ever achieve a comparable system is through careful, exhaustive scientific study. All efforts to produce human-like intelligence thus far have
Re: (Score:2)
Re: (Score:2)
The first issue pops up in your quip about cats: why the hell would anyone program a car to behave like a cat? Developing a cortex that not only simulates the pathways but the twitch responses and activation thresholds of a particular living organism is such a phenomenal amount of exquisitely-detailed work that it would make absolutely no sense to repurpose that work for any function other than simulating that living organism. Do you really want a car that spends 80%+ of its life curling up in dark corners, sleeping, licking itself, and coughing up hairballs? The feline fascination with laser pointers is equally exotic and remote. It's an instinctual behaviour found in predatory animals. If you were going to use a feline cortex as the basis for a semi-autonomous vehicle, the biases would be much more subtle and affect things like the learning process, not irrelevant surface features like predatory or survival instincts.
Is this a Turing Test?
Because you completely missed that my introduction opened with humor. Or the joke was just bad, in which case I apologize.
Re: (Score:2)
Or maybe I am a robot. Bzzt, bzzt. Insert silicon wafer.
Re: (Score:1)
why the hell would anyone program a car to behave like a cat?
Whoosh.
Re: (Score:2)
Re: (Score:2)
When we turn on the proverbial hypothetical sentient artificial intelligence, it won't have an instinct for survival or even a concept of self unless we explicitly instill those things in it; it will just be a glorified thinking machine capable of experiencing thought, like the brain inside of a worm or infant.
By the time we are capable of creating a computer that acts human, we will know, exhaustively and in every detail, what it means to be human. And we will be able to pick and choose without uncertainty what we are putting into it
Here are some links for you. Check them out and see if you still believe what you posted: Emergence [wikipedia.org]/Strong Emergence [wikipedia.org], Complex Systems [wikipedia.org], Chaos Theory [wikipedia.org] and Unintended Consequences [wikipedia.org].
Re: (Score:2)
Re: (Score:2)
...unless you train it to be a bloodthirsty killer and a brilliant strategist, it's not going to be particularly malevolent...
All science fiction authors who have ever written a story about a purely malevolent AI without a plausible origin need to get shot right now.
So.... who trained you?
Re: (Score:2)
Re: (Score:2)
The gods of sarcasm themselves. Don't forget to read the last line of the post for extra evidence of self-awareness.
Oh, I did, and I got it. I was mostly trying to point out that a brain may not need specific training for its thoughts to turn to mass murder. Of course, a thought is not an action, but it's usually assumed that that distinction is only made in higher lifeforms.
I wonder, can a cat think about an action, and its future possible consequences?
Re: (Score:2)
Regarding cats and planning: most likely not [guardian.co.uk].
Regarding training: we're exposed to the idea of mass murder in a comprehensible form through our culture. We are exposed to sources that make us aware of the mental states and motives behind such an action, even if we could not previously understand it. By having these experiences, we build up an idea of the circumstances under which one would go on a mass-murdering spree, and what one would hope to gain from it. This provides us with the tools to, fo
Re: (Score:2)
Re: (Score:2)
But all of those technologies are controllable. The military is all about ensuring that every component can be completely trusted. I'm sure you've heard of ruggedized computers and cellphones meant for military use, and the rigorous tests consumer products must go through before being considered battlefield-ready. Guidance and aircraft control programs have to go through years of exhaustive analysis to make sure that every line of code does exactly what it should do under every possible condition.
Sentient a
Re: (Score:2)
But all of those technologies are controllable. The military is all about ensuring that every component can be completely trusted.
The military doesn't have 100% control now, and likely never will. Because of that, they are far more concerned with risk / reward.
The military already trusts computers more than it should. Yes, what they use is tested, and the code reviewed, but there are always n+1 bugs in every program. There is always the chance that a bad chip can cause extremely weird behavior. They know this, and it is acceptable because the chances are small. But they are always there. Take the various computers that supposedl
Re: (Score:2)
There has never been a system that can launch nuclear weapons without human involvement. The closest thing to that is Perimeter [wikipedia.org], which still requires human intervention to fire. The American counterpart strategy was to keep bombers in the air around the clock. Neither superpower ever developed an autonomous launch system.
Generals trust computers to carry out orders, but they don't trust them to make decisions. The design of Perimeter is nothing if not a testament to that. They've seen all of the [wikipedia.org] old [wikipedia.org] sci-fi [wikipedia.org]
Re: (Score:2)
If you read the Dead Hand article you linked to, then you know that it doesn't necessarily require human intervention to fire. Some claim it is always functioning. Some claim it never did. Some claim it has to be manually switched on. However, considering part* of its purpose was to guarantee retaliation in the event of a surprise attack, I wouldn't be surprised at all to learn it was the former. Some quotes from Russian officials in that article would lead me to believe that as well. Again, differen
Re: (Score:2)
On the topic of Perimeter's autonomy, most of the contradictory quotes are from bureaucrats who may have been playing the nuclear deterrent wargame, much like the spooks at the RAND Corporation once did. The Wired article goes on about it at length [wired.com], and since it's much more recent, I'm inclined to trust it more. It also discusses the self-control aspect of Perimeter.
Re: (Score:2)
On the topic of different cultures trusting computers to a different extent, I remember reading once that there was a particular kind of critical situation wherein a jetliner is not sure whether to trust the autopilot or the human pilot. Boeing (American) planes opt to trust the human, and Airbus (European) planes trust the autopilot.
Still—that's a safety system, not a weapons platform.
Re: (Score:2)
So you turn the machine loose on the enemy, maybe it gets out of hand, a general lifts the cover on The Red Button, and *pow* no more problem (self destruct charge took out a pre-school and a bus load of nuns, but hey, war is hell)
Re: (Score:2)
Re: (Score:3)
M5, Dr. Daystrom, what have you wrought? (Score:4, Funny)
http://en.wikipedia.org/wiki/The_Ultimate_Computer [wikipedia.org]
Chips from the brain have been known to attack starships. Watch out Captain Dunsel. It's clear that IBM is using Star Trek as a source of ideas. Gene Roddenberry has predicted the 21st century again...
Re: (Score:1)
Not a problem! I just patented a system of 3 laws preventing those chips from harming humans
Re: (Score:1)
Yep, 'Three laws safe!' - well, we all know where that got (gets) us, don't we, boys and girls?
Re: (Score:2)
"Due to a patent licensing issue, our knockoff brain-chips have no safeguards against harming humans. However, you get them at 75% off!"
Nice job breaking it, hero.
Re: (Score:2)
Somewhere... (Score:2)
Re: (Score:2)
Re: (Score:1)
Right now 12 out of 17, or 70%, of all comments are useless, irrelevant spam.
including yours.
Re: (Score:1)
What it does show is that IBM won't have too much trouble scaling their chips up to model the average Slashdotter's brain.
Cat brains? (Score:2, Funny)
Re: (Score:2)
Well, they are a fantastic example of hyper-threading.
Re: (Score:2, Funny)
And they taught it to drive ? My cat is a terrible driver.
sorry, but cat brain been done already. (Score:1)
Sorry, but cat brain [wikimedia.org] was already done a decade or so ago.
Finally... (Score:1)
Similar to what happened 30 years ago... (Score:4, Interesting)
Re: (Score:2)
Very interesting article, thank you for sharing.
Re: (Score:1)
No. The first post has prior art. So does everyone who ever posted one. Sorry, was your post meant to be funny?
The "power" of a cat's brain? (Score:4, Funny)
If it gets out of control, we just need the equivalent of either a laser pointer or catnip to bring it to its knees.
Cargo Cult of the Neuroscience World (Score:3, Informative)
This project attempts to build something as close to a brain as we currently can. However, trying to replicate something by copying only its most outwardly obvious features probably won't work, and IBM's attempt to recapitulate thought reminds me of the fiasco of the cargo cults, where natives created effigies of technology they didn't understand because they believed that, through their imitation of the colonizers, cargo would magically be delivered to them. From http://en.wikipedia.org/wiki/Cargo_cult [wikipedia.org]:
The primary association in cargo cults is between the divine nature of "cargo" (manufactured goods) and the advanced, non-native behavior, clothing and equipment of the recipients of the "cargo". Since the modern manufacturing process is unknown to them, members, leaders, and prophets of the cults maintain that the manufactured goods of the non-native culture have been created by spiritual means, such as through their deities and ancestors, and are intended for the local indigenous people, but that the foreigners have unfairly gained control of these objects through malice or mistake.[3] Thus, a characteristic feature of cargo cults is the belief that spiritual agents will, at some future time, give much valuable cargo and desirable manufactured products to the cult members.
Computational folks can still make progress studying how the brain works, but I think we should focus on understanding, first, which problems brains solve better than computers and, second, which computational tricks brains use that our computer scientists haven't yet discovered. Merely emulating a close approximation to our best understanding of the neural hardware looks splashy, but it isn't guaranteed to teach us anything, let alone replicate human intelligence.
Re: (Score:3, Insightful)
If the emulation is successful, one can do to it what you can't easily do with the real thing: manipulate it in any conceivable way to examine its inner workings, save its state and run different tests on exactly the same "brain" without the effects of earlier experiments interfering (e.g., if some stimulus is new to it, it will still be new the 100th time), and basically perform arbitrary experiments on it without PETA complaining.
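As a toy illustration of that repeatability point (a minimal sketch in plain Python around a made-up eight-neuron "network"; it has nothing to do with IBM's actual simulator), snapshotting the state lets you rerun different experiments on exactly the same starting brain:

import copy
import random

# Toy "network": a few neurons with membrane potentials and random synaptic weights.
random.seed(0)
state = {
    "potentials": [0.0] * 8,
    "weights": [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)],
}

def step(net, stimulus):
    # One crude update step: each neuron sums its weighted inputs plus a stimulus term.
    new = [sum(w * p for w, p in zip(row, net["potentials"])) + s
           for row, s in zip(net["weights"], stimulus)]
    net["potentials"] = [max(0.0, v) for v in new]  # clamp like a rectifier
    return net["potentials"]

snapshot = copy.deepcopy(state)          # save the "brain" before the experiment
run_a = step(state, stimulus=[1.0] * 8)  # experiment A perturbs the network...

state = copy.deepcopy(snapshot)          # ...but we can restore the exact prior state
run_b = step(state, stimulus=[0.5] * 8)  # and run experiment B on the very same "brain"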
Re: (Score:2)
You imply (I notice you don't come right out and say it) that they're "trying to replicate something by copying only its most outwardly obvious features." Care to back that up? What are the outward features they're copying? What are the non-obvious ones they should be copying?
There is lots of research into which problems brains solve better than computers, and a fairly good list. We also have a rough idea of how brains make these computations better than computers, and have had a fair bit of success cop
Re: (Score:1)
There aren't a lot of details on IBM's artificial neural networks, but generally ANNs only model a few characteristics of actual brains. It's very superficial.
For example, the central auditory system [wikipedia.org] in the mammalian brain includes many different types of neurons with very different sizes, shapes, and response properties. These are organized into tissues that are further organized into circuits. There is a significant architecture there.
To contrast, many ANNs use a simple model of a neuron (input, weight,
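For reference, the "simple model" being contrasted here is usually just a weighted sum pushed through a nonlinearity. A minimal sketch in plain Python (purely illustrative; not IBM's model):

import math

def simple_neuron(inputs, weights, bias):
    # The textbook ANN unit: weighted sum of inputs plus a bias, squashed by a sigmoid.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Three inputs and hand-picked weights: no geometry, no spike timing, no distinct
# cell types -- which is exactly the superficiality being criticized above.
print(simple_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))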
Re: (Score:2)
They're not building perceptrons like you might for a high school science fair project. IBM has put considerable effort into cortical mapping, uses simulated neurons that exhibit spiking behaviour, simulates axonal delays, has made some effort at realistic synapses, etc. (http://www.almaden.ibm.com/cs/people/dmodha/SC09_TheCatIsOutofTheBag.pdf)
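For anyone who hasn't seen one, a spiking unit of the leaky integrate-and-fire family looks roughly like the sketch below (a generic textbook model with made-up constants, not the model from the linked paper):

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_reset=0.0, threshold=1.0):
    # Leaky integrate-and-fire: the membrane potential decays toward rest, integrates
    # its input, and emits a spike (then resets) when it crosses the threshold.
    v = v_rest
    spike_times = []
    for t, i in enumerate(input_current):
        v += dt * (-(v - v_rest) + i) / tau   # leak toward rest plus input drive
        if v >= threshold:
            spike_times.append(t)             # record the spike time
            v = v_reset                       # reset after firing
    return spike_times

# Constant suprathreshold drive produces regularly spaced spikes.
print(lif_neuron([1.5] * 200))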
But wait... are you the original AC who was criticizing IBM for simply trying to copy the features of a brain without understanding it? Are you suggesting that IBM
Re: (Score:1)
Thanks for the link, but it's still a pretty simple neural model. Just not as simple as many other common models, which is why they take great care to call it "biologically inspired." But, the focus of the research is on simulation, not intelligence.
To the original point, the researchers have simulated a better approximation of NNs without shedding any light on the "computational tricks" that make brains so smart. While the paper makes clear that this is a model that can be used to test neural theories,
Re: (Score:2)
A lot of the machine learning algorithms we use today are based on statistical or classification techniques that are mathematically connected to neural networks, and their development has in part been inspired by them. Many of our machine vision and hearing algorithms are based on phenomena that have been observed in the brain's visual and auditory cortices. The difference of Gaussians in SIFT, or the wavelets in SURF, for example.
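For example, a difference of Gaussians is just two blurs subtracted, giving a band-pass response loosely analogous to center-surround cells in early vision. A rough sketch (assumes NumPy/SciPy are available; this illustrates the band-pass idea, not SIFT itself):

import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=1.6):
    # Band-pass the image by subtracting a wide blur from a narrow one -- the same
    # trick SIFT uses to find keypoints across scales.
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

# A bright spot on a dark background lights up strongly in the DoG response.
img = np.zeros((32, 32))
img[16, 16] = 1.0
response = difference_of_gaussians(img)
print(response[16, 16])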
Have we got a machine that wakes up one day, says hello and asks for a chees
Re: (Score:1)
A lot of the machine learning algorithms we use today are based on statistical or classification techniques that are mathematically connected to neural networks, and their development has in part been inspired by them.
If you are saying these techniques were born of the mathematical properties of biological neural networks, you are just wrong. Get an ML textbook - it's all about curve fitting, probability theory, decision theory, information theory, statistics, and optimization.
Many of our machine vision and hearing algorithms are based on phenomena that have been observed in the brain's visual and auditory cortices. The difference of Gaussians in SIFT, or the wavelets in SURF, for example.
Wrong again. You're zero for two. Actually, if you can cite a biology paper concluding cortex uses wavelets, I'll give you this one. Good luck.
Have we got a machine that wakes up one day, says hello and asks for a cheeseburger? No, of course not. That's kind of the end goal, isn't it?
No, that's not the goal of ML or AI, and that has nothing to do with anything I've written. Quit with the
Re: (Score:2)
BBC Article On This (Score:2)
IBM produces first 'brain chips' [bbc.co.uk]
Bonus geek points for spotting the error on this page.
Re: (Score:1)
"... while the other contains 65,636 learning synapses."
Re: (Score:2)
Maybe an intern had an accident and, uh, "donated" his brain to science.
"Extra? What extra? It's always been designed with 65,636 synapses. No, that doesn't look like human tissue to me at all. Listen, who's the scientist here?"
Come to think of it, maybe the whole thing is made from interns' brains. It would definitely be cheaper.
Re: (Score:2)
Has that been fixed, or did you misread it? The page currently states,
One chip has 262,144 programmable synapses, while the other contains 65,536 learning synapses.
262,144 = 2^18
65,536 = 2^16
IBM is way behind (Score:4, Interesting)
Re: (Score:1)
Actually, three of the lead researchers on this project are graduates of the Boahen lab and work for IBM creating this chip. They know the design decisions they put in place creating Neurogrid and are not behind in any sense compared to the work they did with Neurogrid. The neuromorphic community is quite small and there is a fair amount of inbreeding. Qualcomm and UCSD are also working toward some medium- to large-scale hardware simulators, but they are not out of fab yet.
Ok , its a neural net in hardware. Is this new? (Score:2)
I'm sure this has been done before, or am I missing something here?
Re: (Score:2)
That was indeed the first thing I thought about.
The basic functionality of neural networks has long been understood. I have at home an antique article (from 1963!) and a schematic of an electronic neuron built with a couple of transistors.
One of the things Carver Mead was involved in during the late '80s was the design of VLSI neuron structures.
So, no, this is not really new, but perhaps with the larger integration the IBM researchers can add better or more learning circuitry.
Re: (Score:3)
Re: (Score:2)
Why try to simulate humans? (Score:2)
Why aspire to simulate human brains? We create more than we need already...
Artificial Intelligence always beats real stupidity.
"We are all born ignorant, but one must work hard to remain stupid" -Ben Franklin
Re: (Score:1)
That's immortality for the machines, not for the people.
I'd agree that the biggest problem with human-level AI is that the economics of it are terrible. For your gazillions of research dollars and hours you get something that's already on tap and cheap as dirt, especially in the third world.
Even once you get your amazing smart supercomputer, you still have to train it (human employees are mostly good to go), house it (it'll be bigger than your average high-rise apartment), and feed it/cool it with enough e
Re: (Score:2)
I'm sure this has been done before, or am I missing something here?
No, this has not been done before. The neurons being implemented here are (to a limited degree) far closer in functionality to a "real" neuron than a conventional neural net (which isn't really close at all). This project is IBM's takeaway from the Blue Brain project of a couple of years ago. Henry Markram and Modha had a parting of ways over how the neurons were to be implemented. Markram wanted the neurons to be as biologically accurate as possible (at the expense of performance) while Modha felt they wer
Just great (Score:1)
Re: (Score:2)
I step away from the new PC for a minute and come back to find browser tabs open to newegg and the sound "awww yeah" coming from the speaker.
Apparently, FTFA, if you stepped away from the PC, you would be more likely to find the browser tabs on "laser pointers" and "bulk catnip".
Re: (Score:2)
Seriously though...I need it to sort and classify my porn collection.
I can simulate a cat's brain... (Score:1)
Why would I want a computer that ... (Score:2)
why do this? (Score:2)
It's a bit hard to understand what the point of this research is. If you actually want to understand neural behavior, simulations are obviously a better path: arbitrarily scalable and more flexible (in reinforcement schedules, etc.). If the hope is to produce something more efficient than simulation, great, but where are the stats on fan-in, propagation delay, wire counts, joules per op, etc.? Personally, I find that some people simply have a compulsion to try to replicate neurons in silico - not for any rea
Re: (Score:2)
TFA states what the goal is - running more complex software on simpler computers. It even gives the joules-per-op figure: 45 picojoules per event, about 1,000 times less than a conventional computer.
Re: (Score:2)
it's a bit hard to understand what the point of this research is.
The (unstated) point is that there is a race afoot to be the first to develop a system that will achieve AGI.
For the first time ever, we've entered an era where we are beginning to see hardware powerful enough to perform large scale cortical simulations. Not simple ANNs, but honest to god, biologically accurate simulations of full cortical columns.
Having said that, Modha's penchant for jumping the shark is well documented. Rather than insisting on nothing less than biologically accurate neural circuitry (as
Re: (Score:2)
"Jumping the shark" [wikipedia.org]
Re: (Score:2)
If these brains work like managements brains... (Score:1)
But Why?!? (Score:2)
A microchip with about as much brain power as a garden worm...
They invented the Mother-in-Law?
Reminds me of MoNETA (Score:2)
AI (Score:1)
Hooray! (Score:2)