IBM Shows Off Brain-Inspired Microchips

An anonymous reader writes "Researchers at IBM have created microchips inspired by the basic functioning of the human brain. They believe the chips could perform tasks that humans excel at but computers normally don't. So far they have been taught to recognize handwriting, play Pong, and guide a car around a track. The same researchers previously modeled this kind of neurologically inspired computing using supercomputer simulations, and claimed to have simulated the complexity of a cat's cortex — a claim that sparked a firestorm of controversy at the time. The new hardware is designed to run this same software much more efficiently."
  • by tekrat ( 242117 ) on Thursday August 18, 2011 @10:27AM (#37130374) Homepage Journal

    http://en.wikipedia.org/wiki/The_Ultimate_Computer [wikipedia.org]

    Chips from the brain have been known to attack starships. Watch out, Captain Dunsel. It's clear that IBM is using Star Trek as a source of ideas. Gene Roddenberry has predicted the 21st century again...

  • Ray Kurzweil is laughing at all the nay-sayers right about now.
  • What, they couldn't think of anything more psychotic?
  • ...something for the zombie PCs to eat
  • by jimwormold ( 1451913 ) on Thursday August 18, 2011 @10:32AM (#37130444)
    ... and very timely of The Register to bring it up: http://www.reghardware.com/2011/08/18/heroes_of_tech_david_may/ [reghardware.com]
  • by kmdrtako ( 1971832 ) on Thursday August 18, 2011 @10:37AM (#37130536)

    If it gets out of control, we just need the equivalent of either a laser pointer or catnip to bring it to its knees.

  • by Anonymous Coward on Thursday August 18, 2011 @10:40AM (#37130572)

    This project attempts to build something as close to a brain as we currently can. However, trying to replicate something by copying only its most outwardly obvious features probably won't work, and IBM's attempt to recapitulate thought reminds me of the fiasco that was the cargo cults, where natives created effigies of technology they didn't understand because they believed that, through their imitation of the colonizers, cargo would magically be delivered to them. From http://en.wikipedia.org/wiki/Cargo_cult [wikipedia.org]:

    (begin quote)
    The primary association in cargo cults is between the divine nature of "cargo" (manufactured goods) and the advanced, non-native behavior, clothing and equipment of the recipients of the "cargo". Since the modern manufacturing process is unknown to them, members, leaders, and prophets of the cults maintain that the manufactured goods of the non-native culture have been created by spiritual means, such as through their deities and ancestors, and are intended for the local indigenous people, but that the foreigners have unfairly gained control of these objects through malice or mistake.[3] Thus, a characteristic feature of cargo cults is the belief that spiritual agents will, at some future time, give much valuable cargo and desirable manufactured products to the cult members.
    (end quote)

    Computational folks can still make progress studying how the brain works, but I think we should focus first on understanding which problems brains solve better than computers, and second on which computational tricks brains use that computer scientists haven't yet discovered. Merely emulating a close approximation of our best understanding of neural hardware looks splashy, but it isn't guaranteed to teach us anything, let alone replicate human intelligence.

    • Re: (Score:3, Insightful)

      If the emulation is successful, one can do to it what you can't easily do with the real thing: manipulate it in any conceivable way to examine its inner workings, save its state and run different tests on exactly the same "brain" without the effects of earlier experiments interfering (e.g. if some stimulus is new to it, then it will still be new to it even the 100th time), and basically do arbitrary experiments with it without PETA complaining.
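
      To make the save-the-state point concrete, here is a toy sketch (hypothetical code, not anything from TFA): snapshot a simulated network, then replay the same stimulus on a fresh copy for every trial.

        # Snapshot a simulated network's state, then replay the same
        # stimulus on fresh copies so every trial is a "first" exposure.
        import copy
        import random

        class ToyNetwork:
            def __init__(self):
                self.weights = [random.random() for _ in range(8)]

            def respond(self, stimulus):
                # Responding also changes the network (crude "learning"),
                # which is exactly why replays need a fresh copy.
                out = sum(w * stimulus for w in self.weights)
                self.weights = [w * 0.99 for w in self.weights]
                return out

        network = ToyNetwork()
        snapshot = copy.deepcopy(network)            # saved state

        trial1 = network.respond(1.0)                # perturbs the network
        trial2 = copy.deepcopy(snapshot).respond(1.0)
        print(trial1 == trial2)                      # True: identical first exposures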

    • by ceoyoyo ( 59147 )

      You imply (I notice you don't come right out and say it) that they're "trying to replicate something by copying only its most outwardly obvious features." Care to back that up? What are the outward features they're copying? What are the non-obvious ones they should be copying?

      There is lots of research into which problems brains solve better than computers, and a fairly good list. We also have a rough idea of how brains make these computations better than computers, and have had a fair bit of success copying them.

      • by bouldin ( 828821 )

        There aren't a lot of details on IBM's artificial neural networks, but generally ANNs model only a few characteristics of actual brains. It's very superficial.

        For example, the central auditory system [wikipedia.org] in the mammalian brain includes many different types of neurons with very different sizes, shapes, and response properties. These are organized into tissues that are further organized into circuits. There is a significant architecture there.

        By contrast, many ANNs use a simple model of a neuron (inputs, weights, an activation function, an output).
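
        As a minimal sketch of that simple model (illustrative code, not IBM's), the whole "neuron" is a weighted sum pushed through a squashing function:

          # Minimal sketch of the simple neuron model many ANNs use: a
          # weighted sum of inputs through a fixed activation function.
          # It omits everything above (cell types, morphology, circuits).
          import math

          def artificial_neuron(inputs, weights, bias=0.0):
              total = sum(x * w for x, w in zip(inputs, weights)) + bias
              return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

          print(artificial_neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3]))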

        • by ceoyoyo ( 59147 )

          They're not building perceptrons like you might for a high school science fair project. IBM has put considerable effort into cortical mapping, uses simulated neurons that exhibit spiking behaviour, simulates axonal delays, has made some effort at realistic synapses, etc. (http://www.almaden.ibm.com/cs/people/dmodha/SC09_TheCatIsOutofTheBag.pdf) (A generic spiking-neuron sketch follows at the end of this comment.)

          But wait... are you the original AC who was criticizing IBM for simply trying to copy the features of a brain without understanding it? Are you suggesting that IBM
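
          For anyone unfamiliar with spiking models, here is a generic leaky integrate-and-fire neuron - a much simpler cousin of what the linked paper describes, and emphatically not IBM's actual model:

            # Generic leaky integrate-and-fire neuron. Parameters are
            # arbitrary; IBM's published model is more elaborate.
            def simulate_lif(input_current, steps=100, dt=1.0,
                             tau=20.0, v_rest=0.0, v_threshold=1.0):
                v = v_rest
                spikes = []
                for t in range(steps):
                    # Membrane potential leaks toward rest while
                    # integrating the input current.
                    v += dt * ((v_rest - v) / tau + input_current)
                    if v >= v_threshold:      # threshold crossed: fire
                        spikes.append(t * dt)
                        v = v_rest            # reset after the spike
                return spikes

            print(simulate_lif(0.06))  # spike times for a constant input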

          • by bouldin ( 828821 )

            Thanks for the link, but it's still a pretty simple neural model. Just not as simple as many other common models, which is why they take great care to call it "biologically inspired." But the focus of the research is on simulation, not intelligence.

            To the original point, the researchers have simulated a better approximation of NNs without shedding any light on the "computational tricks" that make brains so smart. While the paper makes clear that this is a model that can be used to test neural theories, it says little about where intelligence comes from.

            • by ceoyoyo ( 59147 )

              A lot of the machine learning algorithms we use today are based on statistical or classification techniques that are mathematically connected to neural networks, and their development has in part been inspired by them. Many of our machine vision and hearing algorithms are based on phenomena that have been observed in the brain's visual and auditory cortex. The difference of Gaussians in SIFT, or the wavelets in SURF, for example. (A quick sketch of the former follows below.)

              Have we got a machine that wakes up one day, says hello and asks for a cheeseburger? No, of course not. That's kind of the end goal, isn't it?
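
              The difference-of-Gaussians sketch mentioned above (hypothetical code; SciPy assumed):

                # Difference of Gaussians, the blob detector SIFT builds
                # on: subtract a coarser blur from a finer one. Peaks in
                # the result mark blob-like structure at that scale.
                import numpy as np
                from scipy.ndimage import gaussian_filter

                def difference_of_gaussians(image, sigma=1.0, k=1.6):
                    fine = gaussian_filter(image, sigma)
                    coarse = gaussian_filter(image, k * sigma)
                    return fine - coarse

                image = np.random.rand(64, 64)   # stand-in for a real image
                dog = difference_of_gaussians(image)
                print(dog.shape, float(dog.min()), float(dog.max()))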

              • by bouldin ( 828821 )

                A lot of the machine learning algorithms we use today are based on statistical or classification techniques that are mathematically connected to neural networks, and their development has in part been inspired by them.

                If you are saying these techniques were born of the mathematical properties of biological neural networks, you are just wrong. Get an ML textbook - it's all about curve fitting, probability theory, decision theory, information theory, statistics, optimization. (A minimal curve-fitting sketch follows at the end of this comment.)

                Many of our machine vision and hearing algorithms are based on phenomena that have been observed in the brain's visual and auditory cortex. The difference of Gaussians in SIFT, or the wavelets in SURF, for example.

                Wrong again. You're zero for two. Actually, if you can cite a biology paper concluding that the cortex uses wavelets, I'll give you this one. Good luck.

                Have we got a machine that wakes up one day, says hello and asks for a cheeseburger? No, of course not. That's kind of the end goal, isn't it?

                No, that's not the goal of ML or AI, and it has nothing to do with anything I've written. Quit with the straw men.
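
                The curve-fitting sketch promised above (illustrative code; nothing biological in it):

                  # A standard ML-textbook exercise: least-squares line
                  # fitting. Statistics and optimization, no neurons.
                  import numpy as np

                  rng = np.random.default_rng(0)
                  x = np.linspace(0.0, 1.0, 50)
                  y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.shape)

                  # Fit y ~ a*x + b by minimizing squared error.
                  a, b = np.polyfit(x, y, deg=1)
                  print(f"slope={a:.2f}, intercept={b:.2f}")  # ~2.0, ~1.0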

    • No, actually I think there will be many good applications for this style of processing regardless of how biologically accurate it is. Massive parallelism, co-locating data and computation, some analog computation perhaps... these are directions computation is taking as the Von Neumann architecture runs into physical limits. Nobody expects (I hope) this new chip to compute anything that couldn't already be done - eventually - on a conventional desktop PC. But if it's possible to drastically cut the time and energy required, that alone makes it worthwhile.
  • IBM produces first 'brain chips' [bbc.co.uk]

    Bonus geek points for spotting the error on this page.

    • Bonus geek points for spotting the error on this page.

      "... while the other contains 65,636 learning synapses."
      • Maybe an intern had an accident and, uh, "donated" his brain to science.

        "Extra? What extra? It's always been designed with 65,636 synapses. No, that doesn't look like human tissue to me at all. Listen, who's the scientist here?"

        Come to think of it, maybe the whole thing is made from interns' brains. It would definitely be cheaper.

      • Has that been fixed, or did you mis-read it? The page currently states,

        One chip has 262,144 programmable synapses, while the other contains 65,536 learning synapses.

        262,144 = 2^18
        65,536 = 2^16
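
        Both corrected figures are exact powers of two, consistent with square synapse crossbars (512 x 512 and 256 x 256) - an inference, not something the BBC page states:

          # Quick check that both counts are powers of two and squares.
          for n in (262_144, 65_536):
              exponent = n.bit_length() - 1
              assert n == 2 ** exponent
              side = 2 ** (exponent // 2)
              print(f"{n} = 2^{exponent} = {side} x {side}")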

  • IBM is way behind (Score:4, Interesting)

    by codeAlDente ( 1643257 ) on Thursday August 18, 2011 @10:52AM (#37130766)
    IBM has been working fast and furious ever since Kwabena Boahen showed them a chip (that actually was based on neural architecture) that matched the performance of their massive Blue Brain cluster, but used something like 5-10 W. Sounds like they're still playing catch-up. http://science.slashdot.org/story/07/02/13/0159220/Building-a-Silicon-Brain [slashdot.org]
    • by Anonymous Coward

      Actually, three of the lead researchers on this project are graduates of the Boahen lab and work for IBM creating this chip. They know the design decisions they put in place creating Neurogrid and are not behind in any sense compared to the work they did with Neurogrid. The neuromorphic community is quite small and there is a fair amount of inbreeding. Qualcomm and UCSD are also working towards some medium to large scale hardware simulators, but they are not out of fab yet.

  • I'm sure this has been done before, or am I missing something here?

    • by chthon ( 580889 )

      That was indeed the first thing I thought about.

      The basic functionality of neural networks has long been understood. I have at home an antique article (1963!) with a schematic of an electronic neuron (built from a couple of transistors).

      One of the things Carver Mead was involved in during the late '80s was the design of VLSI neuron structures.

      So, no, this is not really new, but perhaps with the larger integration, the IBM researchers could add better or more learning circuitry.

    • by Dr_Ish ( 639005 )
      As best I can tell from the scant information in the article, this is merely a hardware implementation of standard neural network architectures. Many of these were described as software implementations in the mid-1980s by Rumelhart, McClelland, et al. in their two-volume work *Parallel Distributed Processing* [mit.edu]. Many of the putatively revolutionary features of this implementation, like on-board memory and modifiable connections, are described there. Since that time, neural network technology has advanced quite a bit.
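
      For anyone who hasn't read PDP, the "modifiable connections" boil down to a weight-update rule. A minimal sketch of the delta rule, a staple of that era (illustrative code, not IBM's):

        # Delta rule: nudge each weight in proportion to the output
        # error it contributed to.
        def delta_rule_step(weights, inputs, target, lr=0.1):
            output = sum(w * x for w, x in zip(weights, inputs))
            error = target - output
            return [w + lr * error * x for w, x in zip(weights, inputs)]

        weights = [0.0, 0.0]
        for _ in range(50):              # learn output 1.0 for input (1, 1)
            weights = delta_rule_step(weights, [1.0, 1.0], 1.0)
        print(weights)                   # converges near [0.5, 0.5]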
      • In previous decades, alternative computing hardware never made sense economically. Sequential, digital, uniform memory access computers had been progressing so rapidly that special-purpose parallel/connectionist machines were obsolete almost before they hit the market. Now we are hitting the physical limits of the conventional architecture, which may create niches for alternate ones. (Arguably, GPUs already prove this point.)
    • Why aspire to simulate human brains? We create more than we need already...
      Artificial Intelligence always beats real stupidity.

      "We are all born ignorant, but one must work hard to remain stupid" -Ben Franklin

    • I'm sure this has been done before, or am I missing something here?

      No, this has not been done before. The neurons being implemented here are (to a limited degree) far closer in functionality to a "real" neuron than those of a conventional neural net (which aren't really close at all). This project is IBM's takeaway from the Blue Brain project of a couple of years ago. Henry Markram and Modha had a parting of ways over how the neurons were to be implemented. Markram wanted the neurons to be as biologically accurate as possible (at the expense of performance), while Modha felt they were better kept simple enough to run fast and at scale.

  • I step away from the new PC for a minute and come back to find browser tabs open to newegg and the sound "awww yeah" coming from the speaker.
    • I step away from the new PC for a minute and come back to find browser tabs open to newegg and the sound "awww yeah" coming from the speaker.

      Apparently, FTFA, if you stepped away from the PC, you would be more likely to find the browser tabs on "laser pointers" and "bulk catnip".

    • by sycodon ( 149926 )

      Seriously though...I need it to sort and classify my porn collection.

  • all I need is a chip with a sleep timer. No other functions are required.
  • provides random responses to input? I can imagine loading it with a bunch of facts and it ignoring all of them while it launches into an angry rant and conspiracy theories. I get that at Slashdot already.
  • it's a bit hard to understand what the point of this research is. if you actually want to understand neural behavior, simulations are obviously a better path: arbitrarily scalable and more flexible (in reinforcement schedules, etc). if the hope is to produce something more efficient than simulation, great, but where are the stats on fan-in, propagation delay, wire counts, joules-per-op, etc? personally, I find that some people simply have a compulsion to try to replicate neurons in silico - not for any real scientific reason

    • by bws111 ( 1216812 )

      TFA states what the goal is - running more complex software on simpler computers. It even gives the joules-per-op: 45 picojoules per event, about 1000 times less than conventional computers.
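
      Back-of-envelope, taking TFA's figure at face value (hypothetical arithmetic, not from the article):

        # 45 pJ per event: how many events fit in a 1 W power budget?
        energy_per_event = 45e-12                    # joules
        events_per_watt = 1.0 / energy_per_event
        print(f"{events_per_watt:.1e} events/s per watt")     # ~2.2e10
        # A conventional chip at ~1000x the energy per operation:
        print(f"{events_per_watt / 1000:.1e} ops/s per watt")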

    • it's a bit hard to understand what the point of this research is.

      The (unstated) point is that there is a race afoot to be the first to develop a system that will achieve AGI.

      For the first time ever, we've entered an era where we are beginning to see hardware powerful enough to perform large-scale cortical simulations. Not simple ANNs, but honest-to-god, biologically accurate simulations of full cortical columns.

      Having said that, Modha's penchant for jumping the shark is well documented. Rather than insisting on nothing less than biologically accurate neural circuitry (as Markram did), he has settled for simplified neurons and oversold the results.

  • It won't even get off the ground; they'll spend too much time "thinking" about them, and the workers will take the ideas to other companies that actually pay for their hard labor. IBM blows now.
  • A microchip with about as much brain power as a garden worm...

    They invented the Mother-in-Law?

  • An interesting article about the 'Great Brain Race' which also mentions IBM's SyNAPSE project can be found at IEEE Spectrum. http://spectrum.ieee.org/robotics/artificial-intelligence/moneta-a-mind-made-from-memristors/0 [ieee.org]
  • "We don't know who struck first, us or them. But we do know that is was us that scorched the sky"
  • Finally, something for my zombie processes to eat!
