IBM Takes a (Feline) Step Toward Thinking Machines

bth writes "A computer with the power of a human brain is not yet near. But this week researchers from IBM Corp. are reporting that they've simulated a cat's cerebral cortex, the thinking part of the brain, using a massive supercomputer. The computer has 147,456 processors (most modern PCs have just one or two processors) and 144 terabytes of main memory — 100,000 times as much as your computer has."
  • by enriquevagu ( 1026480 ) on Wednesday November 18, 2009 @10:49AM (#30143426)
    From TFA: "The simulation, which runs 100 times slower than an actual cat's brain,"

    This reminds me of the SpiNNaker project [man.ac.uk], which aimed to simulate a brain (OK, a smaller one, say a fly's brain) in real time. According to their calculations, the processing power needed per neuron is very small, so a single ARM core could handle some 1000 neurons in real time (correct me, this is from memory). The hard part was the interconnection between neurons. Obviously, this result is much more impressive despite the 100x slowdown: a much larger brain, and no special-purpose hardware.
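    The "~1000 neurons per ARM core" claim is easy to sanity-check with arithmetic. A rough sketch in Python; the 200 MHz clock and 1 ms timestep here are my assumptions for illustration, not SpiNNaker's published specs:

    ```python
    # Back-of-envelope for the "~1000 neurons per ARM core" claim.
    # Clock speed and timestep are assumed values, not SpiNNaker specs.
    clock_hz = 200_000_000   # assumed ARM core clock: 200 MHz
    timestep_s = 0.001       # typical 1 ms simulation timestep
    neurons = 1000           # neurons handled by one core

    cycles_per_update = clock_hz * timestep_s / neurons
    print(cycles_per_update)  # -> 200.0 cycles per neuron per timestep
    ```

    A couple of hundred cycles per neuron update is plausible for a simple point-neuron model, which is why the claim sounds reasonable; the real bottleneck, as the parent says, is routing spikes between neurons.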

  • by Tom Boz ( 1570397 ) on Wednesday November 18, 2009 @10:52AM (#30143478)
    "The latest feat, being presented at a supercomputing conference in Portland, Ore., doesn't mean the computer thinks like a cat, or that it is the progenitor of a race of robo-cats." See, this is why no one on /. reads TFA; when we do, we're habitually disappointed! I'd much rather blindly believe the summary...
  • The Paper (Score:4, Informative)

    by glwtta ( 532858 ) on Wednesday November 18, 2009 @10:56AM (#30143526) Homepage
    Here's the actual paper [modha.org] (pdf).

    Although, of course, posting the piece of pap that explains how many processors my machine has makes so much more sense.

    Wasn't Slashdot supposed to be for a semi-technical audience? Hell, even a semi-literate one.
  • by tpjunkie ( 911544 ) on Wednesday November 18, 2009 @10:57AM (#30143538) Journal
    Having done neuroscience research (if only at a master's degree level), I can say that the cat brain is particularly well studied, mapped out, and understood by neuroscientists. It is used as a model organism by many neuroscientists, and has a number of similarities with the human brain in its layout and function, much more so than the mouse or rat brain.
  • by quadrox ( 1174915 ) on Wednesday November 18, 2009 @11:02AM (#30143632)

    Uh yeah, because evolution started with creatures that had 4 limbs and 5 toes/fingers on each, right? These didn't evolve over time, right?

    I'm sorry, but you are wrong for obvious reasons.

  • by killmenow ( 184444 ) on Wednesday November 18, 2009 @11:07AM (#30143696)

    According to my math:

    1 TByte = 1024 GBytes

    1 GByte = 1024 MBytes

    1 MByte = 1024 KBytes

    1 KByte = 1024 Bytes

    so 144 TB = 144 * 1024 * 1024 * 1024 * 1024 = 158,329,674,399,744 bytes

    My machine has 8 GBytes of RAM in it, which is (8 * 1024 * 1024 * 1024) 8,589,934,592 bytes

    So that machine has 18,432 times more memory than mine, not 100,000. For the "100,000 times" claim to hold, "your computer" would need only about 1.5 GB of RAM.

    Or I'm missing something, but hey, I was told there would be no math.
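    The comparison is easy to check in Python, using binary (1024-based) units throughout and an 8 GB desktop as the point of reference:

    ```python
    # Compare the 144 TB of the Blue Gene system to an 8 GB desktop,
    # using binary (1024-based) units.
    TB = 1024 ** 4
    GB = 1024 ** 3

    supercomputer = 144 * TB   # 158,329,674,399,744 bytes
    desktop = 8 * GB           # 8,589,934,592 bytes

    print(supercomputer // desktop)  # -> 18432

    # The summary's "100,000 times" figure implies a PC with ~1.5 GB:
    print(round(supercomputer / 100_000 / GB, 2))  # -> 1.47
    ```

    So the summary's ratio only works out if "your computer" has roughly 1.5 GB of memory, modest even by 2009 standards.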

  • by WAG24601G ( 719991 ) on Wednesday November 18, 2009 @11:09AM (#30143716)
    I should also point out that they are only simulating the cerebral cortex, which is the 'wrinkly' outer portion of the brain. There is a great deal more to the brain than the cerebral cortex, but we generally associate it with what makes us human. Humans have a uniquely large cerebrum compared to our mid-brains. The rest of the brain becomes increasingly important the farther you venture from Homo sapiens in taxonomy. It's becoming increasingly apparent that even the highest order human behaviors (like language) depend on sub-cortical organs, like the putamen. Therefore, while TFA is a great step for neural simulation... it's nothing like a robot cat.
  • by Anonymous Coward on Wednesday November 18, 2009 @11:20AM (#30143884)

    OK, I'll bite.

    Evolution started with organisms made of rings (can't remember my Bio classes ATM). Earthworms are the classic example: loads of rings all alike, with one special ring on each end. Then the rings evolved (enter centipedes) and specialised.

    If you want some modern examples, ask where your spine came from (lots of identical parts, remember...), or why you have a diaphragm between your lungs and your stomach.

  • by kalirion ( 728907 ) on Wednesday November 18, 2009 @11:21AM (#30143890)

    There's a much easier way [google.com] :)

  • Re:Moore's Law (Score:3, Informative)

    by Spatial ( 1235392 ) on Wednesday November 18, 2009 @11:30AM (#30144074)
    Moore's law predicts that the transistor count will double roughly every two years; it says nothing directly about performance.
  • Re:news for nerds (Score:3, Informative)

    by mcgrew ( 92797 ) * on Wednesday November 18, 2009 @12:01PM (#30144572) Homepage Journal

    The quote was from an AP story on Yahoo. It isn't slashdot, after all.

  • Re:news for nerds (Score:5, Informative)

    by nschubach ( 922175 ) on Wednesday November 18, 2009 @12:06PM (#30144658) Journal

    I was trying to figure out who they were talking about when they said "your computer." ;)

    Between that and the processor comment, the summary reads like it was written for a grade-school presentation.

  • by mmacdona86 ( 524915 ) on Wednesday November 18, 2009 @12:13PM (#30144750)
    Reading the TFA, it looks like they went to some trouble to model some specific brain structures and synapse properties, including inter-area connectivity and learning, in the model. So it's not "Just a big neural net." However the accuracy of the simulation is limited--both by what we know about the detailed structure of the cat's brain and by the number and complexity of the structures they decided to model.
  • by holmstar ( 1388267 ) on Wednesday November 18, 2009 @12:16PM (#30144784)
    Five- or six-legged cattle are not that uncommon... I've personally seen a five-legged calf. The extra legs are always non-functional, and are generally surgically removed shortly after birth.
  • Re:news for nerds (Score:5, Informative)

    by Chris Burke ( 6130 ) on Wednesday November 18, 2009 @12:24PM (#30144906) Homepage

    Technically is a single Core2Duo/Quad or Core iX CPU considered SMP? I would guess no they are not.

    Funnily enough, a single Core i7 or Opteron is SMP, but if you have multiple sockets, then it isn't; it's NUMA, because not all the processors have symmetric access to memory.

    Core 2 is SMP for all standard configurations.

  • by Animats ( 122034 ) on Wednesday November 18, 2009 @03:01PM (#30147158) Homepage

    Actually, the simulation isn't the big deal. This is: [modha.org] "We have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging." So they're also developing techniques to extract the wiring diagram of living brains. That's significant.

    Don't read too much into the amount of supercomputer hardware required. They're running what's basically a circuit simulator, and those are inefficient but flexible. When NVidia develops a new graphics chip, they test and debug by compiling the VHDL into C, and running it, slowly, on about thirty racks of 1U servers. When that's working, the VHDL is compiled down to IC masks and the consumer part that's a few centimeters across is fabricated. That kind of shrink ratio should be expected once the R&D effort figures out what to fab.

  • by Anonymous Coward on Wednesday November 18, 2009 @04:45PM (#30148352)

    Possessive "its" never needs an apostrophe -- it is a direct analog of "his" and "hers"...

  • by Pedrito ( 94783 ) on Wednesday November 18, 2009 @05:55PM (#30149254)
    "From TFA, it doesn't sound like they simulated the cerebral cortex of a cat. It sounds like they simulated a neural net with a comparable number of neurons. Not the same thing."

    What article did you read? The one linked to in the post clearly says they simulated a portion of cat cortex and, in fact, that's largely what they did. There's more here [modha.org] about some of the specifics. It's not an entirely accurate simulation, but it's pretty close. Not all neuron types are represented; it's largely cortical, thalamic, and reticular nucleus neurons. They've created cortical hypercolumns, which is the way a real cortex is laid out. They've omitted the layer 1 neurons, but otherwise the cortex is probably pretty functional for what they're doing. I think it's a pretty amazing feat.
  • by jcaplan ( 56979 ) on Thursday November 19, 2009 @12:54AM (#30153096) Journal
    TFA is bunk. (Yes, I read it.) 12 pages of bunk. Much of the article is about the computational challenges and blathers on about number of processors used and memory. Under key scientific results, they find that their model propagates waves at about the same rate as is found physiologically. So they connected a bunch of nodes in a way that produced synchronous behavior at a certain frequency. I could tune any model you give me to produce this behavior. (I have no special talent here, anyone writing models could.) Yawn. They ramble on about signals propagating between layers at reasonable rates, too. And ...?

    What about their simulation doing anything like what a cat might naturally do, such as detect a moving object? Nope. Instead they go on to discuss the scaling of their model, profiling, and performance modeling. Perhaps one reason their model shows absolutely nothing is that they have connected their simulated neurons randomly. Yes. Randomly. Or as they put it: "The coordinates of target thalamocortical modules for each cell are determined using a Gaussian spatial density profile centered on the topographic location of the source thalamocortical module". Yep, that's random. Since their model doesn't ever change connection strengths (one form of learning), these random connections never change.
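    For readers unfamiliar with that kind of wiring rule, here is a minimal sketch of what it might look like. Everything here (the 64x64 module grid, the sigma, the function name) is hypothetical for illustration, not taken from the paper:

    ```python
    # Hypothetical sketch of a Gaussian wiring rule: target modules are
    # drawn from a Gaussian centered on the source module's topographic
    # location. Grid size and sigma are made-up illustration values.
    import random

    def sample_targets(src_x, src_y, n_targets, sigma=2.0, grid=64):
        """Pick n_targets module coordinates near (src_x, src_y)."""
        targets = []
        while len(targets) < n_targets:
            tx = round(random.gauss(src_x, sigma))
            ty = round(random.gauss(src_y, sigma))
            if 0 <= tx < grid and 0 <= ty < grid:  # keep only on-grid draws
                targets.append((tx, ty))
        return targets

    conns = sample_targets(32, 32, 100)
    print(len(conns))  # 100 connections, clustered around (32, 32)
    ```

    Note the rule is spatially local but otherwise structureless: which particular cell connects to which is pure chance, which is exactly the parent's complaint.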

    I recently heard a description of the ways you can fool someone with computational neuroscience. Here are a couple of them. "Two-card monte": write a paper that spans two fields but has no significant results in either. The specialists in one field will feel that the work done in their field is trivial, but that the exciting stuff from the other field is what makes the paper so special; the specialists from the other field may feel the same way. Somebody snookered the conference organizers into thinking they were doing any neuroscience at all. The other was called "turning the prayer wheel": burning compute cycles to gain scientific merit. Fancy hardware is cool, but it can produce absolutely trivial results, as this paper confirms.

    I don't mean to say that this research is entirely pointless. Indeed it has succeeded in siphoning significant funding from DARPA which might otherwise have gone into developing [killer] robot dogs [youtube.com].
