
Mouse Brain Simulated Via Computer

Mordok-DestroyerOfWo writes "Researchers from the IBM Almaden research lab and the University of Nevada have created a simulation of half a mouse brain on the BlueGene L supercomputer. 'Half a real mouse brain is thought to have about eight million neurons, each of which can have up to 8,000 synapses, or connections, with other nerve fibres. Modelling such a system, the trio wrote, puts "tremendous constraints on computation, communication and memory capacity of any computing platform."' Although there's more to creating a mind than setting up the infrastructure, does this mean we may see a system for human mental storage within our lifetimes?"
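To get a feel for those "tremendous constraints," here is a quick back-of-envelope sketch in C. The 4 bytes per synapse is purely an illustrative assumption; the article does not describe the simulation's actual data layout.

    #include <stdio.h>

    /* Rough memory estimate for the network size quoted above:
     * 8 million neurons with up to 8,000 synapses each, assuming a
     * hypothetical 4 bytes (one float weight) per synapse. */
    int main(void) {
        double neurons  = 8e6;
        double synapses = neurons * 8000.0;   /* 6.4e10 connections */
        double bytes    = synapses * 4.0;     /* 4 bytes per weight */
        printf("synapses: %.2e\n", synapses);
        printf("memory:   %.0f GB at 4 bytes/synapse\n", bytes / 1e9);
        return 0;
    }

Even under that generous assumption, the connection weights alone need roughly 256 GB, before any computation or communication costs.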
  • No randomness? (Score:1, Interesting)

    by Anonymous Coward on Saturday April 28, 2007 @01:36PM (#18912653)
    Penrose said unique thought and intelligence require cosmic rays firing random neurons. Without this you have a deterministic machine, not a brain.
  • Umm (Score:5, Interesting)

    by Tx ( 96709 ) on Saturday April 28, 2007 @01:43PM (#18912715) Journal
    FTA:

    Half a real mouse brain is thought to have about eight million neurons

    and

    the researchers created half a virtual mouse brain that had 8,000 neurons


    How can it be half a mouse brain if it has 1/1000 the number of neurons in a real half mouse brain? Their simulated neurons also had fewer synapses than the real thing. So is the 8,000 a typo, or am I missing something?
  • by zappepcs ( 820751 ) on Saturday April 28, 2007 @01:44PM (#18912721) Journal
    You, sir, have hit the nail mostly on the head. Lately we humans are discovering something new about ourselves almost daily: the genetic link explaining why some of us have slower body clocks than others is one example, and there are genetic links to everything from sexuality to disease. We are slowly learning that we really aren't that complex. We just didn't know it yet.

    The short answer to the original question is no. The reason is that the methods used to implement the models are incapable of truly mimicking the human brain. One piece of evidence for that is the fact that we understand how computers work but not the brain. From what we can tell, there is the equivalent of many computers inside our heads, each doing its own thing and communicating with the others as needed, in very complex chemical as well as electrochemical ways. If you thought modelling planetary weather was difficult, this is orders of magnitude more difficult.

    The good news is that we are trying, and from that will come many good things, though I worry about what kind of damage we will do if we can figure out gene therapies that can "cure" sexual orientation as well as cancer. This stuff really is a sci-fi writer's playground. We should all worry, too: GM food is in your future, if not already in your stomach. Perhaps next will be a special bed that you go to sleep in and wake up from as a good citizen in the morning?
  • by RyanFenton ( 230700 ) on Saturday April 28, 2007 @02:17PM (#18912957)
    It's cool that we can create the basic scale of the infrastructure of a (half) mouse brain - but if we're really going to simulate a brain, we need the ability to read the contents of a real one in order to verify our simulation. Otherwise, we have little basis for saying that input X gives the sensation of movement, and would have effect/output Y in terms of changed state/response.

    I wonder what the current state of neuron-state reading is - would we ever, theoretically, be able to read the state of a brain beyond its external outputs? Could we ever get a single state that would be the 'ROM' of a person's memories and mental state, that you could place in a simulation and have that person's memories 'wake up' there? I wonder how close we could get.

    Ryan Fenton
  • by giorgiofr ( 887762 ) on Saturday April 28, 2007 @02:25PM (#18913005)
    No, he was referring to Gödel's incompleteness theorem, whereby any sufficiently powerful formal system is unable to fully describe itself. Thus, being able to understand/describe ourselves completely would mean that we are not very complex. I hold the opposite view, i.e. we will not be able to describe ourselves fully precisely because we are too complex; then again, the theorem might turn out not to apply to minds at all. That'd be great news for transhumanists.
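    For reference, the standard statement of Gödel's first incompleteness theorem, which the posts in this thread paraphrase loosely (in LaTeX; \nvdash needs the amssymb package):

        % Gödel's first incompleteness theorem: if F is a consistent,
        % effectively axiomatized formal system that interprets elementary
        % arithmetic, then there is a sentence G_F that F can neither
        % prove nor refute.
        \[
          F \nvdash G_F
          \quad\text{and}\quad
          F \nvdash \neg G_F .
        \]

    Note that this is a statement about formal systems, not about "complex systems" in general; whether it says anything about minds is exactly what this thread is arguing over.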
  • by kestasjk ( 933987 ) on Saturday April 28, 2007 @02:30PM (#18913037) Homepage

    We are learning slowly that we really aren't that complex. We just didn't know that yet.
    This is kind of like how we used to think living things spontaneously came into being, and that life was driven by a mysterious essence. Now we know it's simply trillions upon trillions of interacting cells reading from a database of genetic code and transcribing it into proteins; using oxygen to produce energy with intricate membranes; switching genes on and off during growth using hormones travelling down blood vessels; protected by an immune system that learns about different bacteria and viruses throughout life; all wrapped in a skin that constantly grows, sheds and repairs itself.

    We used to think that the liver was responsible for anger, and the heart was responsible for love, because those are the things that seemed to react when we felt those emotions. But boy did those bafflingly complex notions fly out of the door when we discovered emotion is due to having a mass of billions of interconnected ...

    I could go on and on, and I have only a very simplified layman's view of how the whole thing works. I don't know how you can say we're starting to realize how simple we are; we're realizing how complex we are.

    GM foods, by the way, haven't had their existing genes modified; they have new genes added that create new proteins which can do things like attack insects. That's nothing as complicated as actually changing an existing gene in a useful way, which would be much more difficult because genes interact in so many ways.
  • by reporter ( 666905 ) on Saturday April 28, 2007 @02:33PM (#18913053) Homepage
    In the simulation of the mouse brain, IBM is making a big assumption: the brain operates only in the domain of Newtonian (a.k.a. classical) physics. So, the IBM programmers just encode the simple physical laws (governing the flow of electrical energy) in the C language.

    However, there is an alternate theory of consciousness, based on quantum physics [quantumconsciousness.org]. It is inherently non-deterministic and cannot be modeled in a computer.

    Hence, IBM's big assumption may be wrong. At the least, though, the IBM experiment will tell us whether the operation of the brain is strictly Newtonian. If this artificial brain behaves differently from a mouse brain, then we would know that non-Newtonian physics is crucial to the operation of a flesh-and-blood brain.
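    For concreteness, the kind of classical, deterministic update being described might look like this minimal leaky integrate-and-fire sketch in C. All parameter values are illustrative; this is not IBM's actual code, which the article does not show.

        #include <stdio.h>

        /* Minimal leaky integrate-and-fire neuron: purely classical,
         * deterministic physics. Parameter values are illustrative. */
        #define V_REST   -65.0f   /* resting potential, mV      */
        #define V_THRESH -50.0f   /* spike threshold, mV        */
        #define TAU       20.0f   /* membrane time constant, ms */

        static float step(float v, float input_current, float dt) {
            /* dv/dt = (V_rest - v + I) / tau: leak toward rest, plus input */
            v += dt * ((V_REST - v + input_current) / TAU);
            if (v >= V_THRESH)
                v = V_REST;   /* spike, then reset; a full model would now
                                 deliver the spike to downstream synapses */
            return v;
        }

        int main(void) {
            float v = V_REST;
            for (int t = 0; t < 100; t++)
                v = step(v, 20.0f, 1.0f);   /* constant drive, 1 ms steps */
            printf("v = %.2f mV after 100 ms\n", v);
            return 0;
        }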

  • by suv4x4 ( 956391 ) on Saturday April 28, 2007 @03:21PM (#18913343)
    So was I the only one who read "system for mental storage" as meaning the transference of a human consciousness into a computer?

    That's just as unlikely. People familiar with computer technology know that the hardware structure and the software state are two completely different things. This is why you can build a model of the hardware, feed it the state, and bang, you have a Game Boy emulator (or whatever).

    But with biology, the two are intermixed. The brain saves information by changing its connections and structure itself. This means you could build a model of a generic human brain, run it, and have full-blown AI.

    But you can't feed it the state of any particular human being: every human being has different "wiring", so one person's state won't "play" in your model.

    Someone mentioned Smalltalk. Smalltalk kinda works like a brain in that regard. State is structure is state.
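    A loose sketch in C of the distinction being drawn (all type and field names here are hypothetical, not from any real emulator or brain model):

        /* Emulator case: the hardware model is fixed code, and the software
         * state is a separate blob you can load into any instance of it. */
        struct gb_state {
            unsigned char  ram[8192];   /* memory contents */
            unsigned short pc, sp;      /* CPU registers   */
        };

        /* Brain case: the "state" (memories) and the "structure" (wiring)
         * are the same thing; the connections ARE the stored information,
         * so there is no separate blob to extract and reload. */
        struct neuron {
            int    n_synapses;
            int   *targets;   /* which neurons this one connects to... */
            float *weights;   /* ...and how strongly: structure as state */
        };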
  • by DogFacedJo ( 949100 ) on Saturday April 28, 2007 @05:00PM (#18913909)
    You made a number of spurious statements to support your thesis that IBM made a big assumption:

    It is possible that brain activity occurs via the microtubules, but this has not been well shown.
    Quantum physics is not *efficient* to simulate on modern computers, as the non-deterministic aspects tend to make the cost of the model exponential (a quick illustration follows this list). This does not prevent extremely large deterministic computers from modelling it inefficiently, nor does it prevent quantum computers from modelling it more quickly (kudos to the other reply that posted this point faster).
    That theory of consciousness is not a particularly scientific theory. I say this because its fundamental thesis appears to be that there is 'something about consciousness' that prevents it from being simulated on a computer, as opposed to a more specific claim. Care seems to have been taken to avoid testable claims, like the ability to solve particular classes of problem on a computer, or the ability of a computer to pass some sort of Turing test. The heavy reliance on a slippery definition of 'consciousness' is critical. Lastly, the main authors are not supporting their case by publishing papers in decent journals, but by selling books and videos.
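    On the exponential point above, a sketch: a brute-force state-vector simulation of n qubits stores 2^n complex amplitudes. The 16 bytes per amplitude below assumes two doubles, purely for illustration.

        #include <stdio.h>

        /* Memory for a naive state-vector simulation of n qubits:
         * 2^n complex amplitudes at 16 bytes each. */
        int main(void) {
            for (int n = 10; n <= 50; n += 10) {
                double bytes = 16.0 * (double)(1ULL << n);
                printf("%2d qubits: %.3g bytes\n", n, bytes);
            }
            return 0;
        }

    Ten qubits fit in 16 KB; fifty already need about 18 petabytes, which is why "extremely large" is doing real work in the sentence above.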

    Even Penrose (of twistor theory fame), attempting the Lucasian argument in The Emperor's New Mind, resorted to choosing 'a mathematician's ability to, in principle, prove any true theorem' as the most viable testable aspect of consciousness. Since a computer will always (because of Gödel's incompleteness) have statements that it cannot prove, Penrose argued that a mathematician must thus be more than any computer could be. The supposition that a mathematician's ability to freely choose between formal systems gives him the ability to prove anything is a bit of an eye-popper for me, even with Penrose's 'in principle' tacked on.
    Penrose followed the rebuttal well: in the same way that any computer exists within a formal system, and thus is unable to prove certain theorems (Gödel's), humans exist in physical reality - which is simulable (yes, including quantum physics) by either a very large classical computer or a quantum computer, given all the physics known to science at this time. This means that anything subject to known physical laws can prove no more than some astronomical ultra-computer capable of simulating that subject could prove.
    The result of this painful train of thought, for Penrose, was the supposition that there must be some fundamental new physics, operating within the brain, that gives us the potential to solve any mathematical problem for which a solution is out there. Penrose hopes that this physics is lingering near the microtubules, but he is totally clear that it is not ordinary quantum physics, since that doesn't escape computer-simulable activity.

    I am not bringing up Penrose as a straw man - I feel he did the best job of analyzing and supporting the position. In particular, I am not saying 'blah - just Penrose et al. not understanding what the scientific method is.' In fact Penrose is well aware of the scientific method, and classifies definitions of science along a strong-weak axis, with strong definitions requiring that theories be thoroughly disprovable to be considered scientific. He considers his own view - namely that a scientific theory should be disprovable 'in principle' - to be a weaker than normal definition.
    Penrose's knowledge of computation and physics, and the quality of his arguments, far surpass those of the other writers in this area. He is the only fair target. Besides, his webpages have never used the blink tag. Penrose is cited on the parent's link, but it is hard to criticise the position of that linked author without having bought their video or book.

    That said, I obviously disagree with his position, and that of the parent. In particular, obviously, none of this series of experiments at IBM can or will shed light on the question.
  • by Anonymous Coward on Saturday April 28, 2007 @08:39PM (#18914985)
    Read more closely - mouse cortex was not simulated. A wad of neurons with the same number of cells and connections as half a mouse brain was simulated. It had no topology and was not computing anything. That's not a significant achievement in cognitive science; it's making a decades-old neural network really big and simulating it really fast. Nobody said that couldn't be done. It just takes some elbow grease.
    An achievement would be to understand and simulate the network topology of half a mouse brain, or heck, even just one of the visual, auditory, motor or olfactory systems. But they didn't do that either.
  • by Anonymous Coward on Sunday April 29, 2007 @04:43AM (#18917125)
    Speaking as a neuroscientist who records neural activity from animal brains, I can tell you two things:

    a. The quantum stuff is complete rubbish. The neuro sections in Penrose's books don't make any sense. He has no reasonable mechanism for the supposed quantum processes.

    b. Brains quite definitely work in a non-linear way. Furthermore, we are now seeing that they work iteratively in a non-linear way: e.g. you can feed certain brain regions two similar inputs, and over time the way those inputs are coded will diverge, making them more distinguishable. In other words, there is likely to be sensitive dependence on initial conditions in some areas of the brain. It's non-linear. It's quite plausible, therefore, that consciousness originates in processes that are chaotic or near-chaotic - deterministic, yet effectively unpredictable. No quantum computing needed.
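    To see what sensitive dependence on initial conditions looks like in the simplest possible setting, a toy C sketch (the logistic map, a standard chaotic system; it shares nothing with real neural dynamics beyond the qualitative point):

        #include <stdio.h>
        #include <math.h>

        /* Two trajectories of the logistic map x' = 4x(1 - x), started
         * one part in a billion apart, diverge until they are effectively
         * uncorrelated: deterministic, yet unpredictable in practice. */
        int main(void) {
            double a = 0.400000000;
            double b = 0.400000001;   /* differs from a by 1e-9 */
            for (int t = 0; t <= 50; t += 10) {
                printf("t = %2d   |a - b| = %.3g\n", t, fabs(a - b));
                for (int i = 0; i < 10; i++) {
                    a = 4.0 * a * (1.0 - a);
                    b = 4.0 * b * (1.0 - b);
                }
            }
            return 0;
        }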

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...