
Artificial Brain '10 Years Away'

SpuriousLogic writes "A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed. Henry Markram, director of the Blue Brain Project, has already built elements of a rat brain. He told the TED global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses. Around two billion people are thought to suffer some kind of brain impairment, he said. 'It is not impossible to build a human brain and we can do it in 10 years,' he said."
  • don't believe it (Score:5, Insightful)

    by timpdx ( 1473923 ) on Thursday July 23, 2009 @12:58AM (#28791683)
    Maybe we can build the *equivalent* of a human brain (number of neural connections in software, silicon or combination), but we don't even know how the thing functionally works as it is. How are we going to model it?
  • by Anonymous Coward on Thursday July 23, 2009 @12:58AM (#28791689)

    When can I put my ghost in a shell?

  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday July 23, 2009 @01:00AM (#28791703) Journal
    I'd be pretty concerned about the ethics of experimenting on an artificial brain complex enough to reasonably simulate a human one. "Human rights" aren't terribly well grounded, theoretically; but to the degree that they are, mental complexity seems to be a vital factor (given that we don't generally execute retarded people, it isn't the only one, but it is a big one). Being made of meat isn't obviously a salient factor, nor is being born to human parents.

    An artificial brain of that complexity would be, in effect, a moral person. If you are willing to experiment on one, you might as well just use hobos and orphans and not have to wait a decade for fancy computers (though a simulation would have the huge advantage of reading system state out of memory, with no mucking around with fMRIs and stuff).
  • 10 years? (Score:5, Insightful)

    by Saija ( 1114681 ) on Thursday July 23, 2009 @01:01AM (#28791713) Journal
    I've been listening "in 10 years we'll have X awesome technology", but time come and go and nothing has changed, so, i'll be expecting this artificial brain so i could drive my flying car(you know, that 3D driving thingie) to arrive at the entrance of the spacial elevator so i could bang some lunar chicks.
    Btw 10 years and i still have some bad english
  • Yeah. RIght. (Score:5, Insightful)

    by aepervius ( 535155 ) on Thursday July 23, 2009 @01:04AM (#28791731)
    In 10 years we will have an artificial brain, in 50 we will have fusion. In 20 we will have true AI and cyborgs. And in 5 years the date estimates for the 3 above will probably not have changed by much (I say probably since we could make leaps and bounds forward, but at the moment I don't see that as probable).
  • by setagllib ( 753300 ) on Thursday July 23, 2009 @01:10AM (#28791773)

    A lot of what makes a brain's connections is genetic, and a lot is learned. It wouldn't even begin to function without the genetic component, and it wouldn't survive long or perform any useful task without the learned component. Getting the genetic part right is incredibly difficult (it took evolution millions of years before any organisms could just walk), and fundamentally necessary to get any use out of the brain.

  • by Anonymous Coward on Thursday July 23, 2009 @01:13AM (#28791791)

    And yet, we are just "atoms". Ever heard of emergent properties?

  • by Zironic ( 1112127 ) on Thursday July 23, 2009 @01:21AM (#28791839)

    What the heck are you talking about? None of this is metaphysical, it's theoretically possible with good enough imaging tools to make a 1:1 copy.

  • Re:Yeah. RIght. (Score:5, Insightful)

    by Anpheus ( 908711 ) on Thursday July 23, 2009 @01:24AM (#28791855)

    This is different from AI, and is coming from someone whose expertise on the subject is demonstrable. He's not talking about AI, he's talking about simulating all of the tissue in a human brain and providing it with stimuli to determine reactions.

    He's not saying it'll necessarily be a good ol' buddy ol' pal right off the bat. Probably not. Probably won't even be capable of simple arithmetic for years. On the other hand, we could simulate things like lesions affecting far-away parts of the brain, the various known "paths" that signals travel in the brain, and ways to alter those paths or correct flaws, etc.

    As well, we could simulate the effect of various drugs on large-scale phenomena in the brain to help us understand (a) what a drug will do before it undergoes testing, and (b) why exactly it is that these drugs work so well. Both questions are currently unanswerable. We know what a drug does, but rarely do we understand the full extent of why a particular drug helps certain conditions.

  • by killthepoor187 ( 1600283 ) on Thursday July 23, 2009 @01:25AM (#28791865)

    What makes you think we couldn't offer it stimuli? That would be one way to learn a hell of a lot about how it works. There's your learned component.
    Also, who's to say we couldn't mimic the genetic component too? There is nothing magical about DNA that makes it impossible to simulate. Although the whole protein-folding thing seems rather difficult at the moment, there is no reason to say that we couldn't have that problem solved in 10 years.

  • by im_thatoneguy ( 819432 ) on Thursday July 23, 2009 @01:29AM (#28791883)

    While I 100% agree with the need to protect sapient rights regardless of species or construction material, you do have to approach this one slightly differently, since the stakes are different.

    If I were a silicon brain you could just back me up. As long as you disabled my pain processors you could do whatever you wanted to me. I would even be proud to be helping so many of my organic cousins at the cost of nothing but inconvenience. And since I'm a silicon brain with nowhere to go yet, I wouldn't really have anything else to do except be retarded or schizophrenic from time to time.

  • by MindlessAutomata ( 1282944 ) on Thursday July 23, 2009 @01:32AM (#28791915)

    You are assuming that a computer program of that nature would be, for some reason, not conscious or thinking like a person. Yet why should you differentiate between a computer program and physical neurons 'n' glial cells, etc.? I see no basis for doing so, as the matter itself, inert, is nothing. We only get a "person" when that matter is functioning. Why shouldn't consciousness, personhood, simply be the computational states and not the matter itself? It's true there are physical differences between a computer program and a brain (for example, the synaptic gaps), but these could be simulated as well.

    I have no reason to believe that consciousness/personhood is anything but substrate neutral. Man, machine, machine-man, or computer program, any of these can potentially be conscious. Unless you want to postulate silly metaphysical things such as souls, which are vague and poorly defined--and unnecessary, for a soul does not apparently hold that which makes us what we are, that is, our memories or inclinations.

  • Re:10 years? (Score:5, Insightful)

    by mcrbids ( 148650 ) on Thursday July 23, 2009 @01:39AM (#28791957) Journal

    I've been listening "in 10 years we'll have X awesome technology", but time come and go and nothing has changed, so, i'll be expecting this artificial brain so i could drive my flying car(you know, that 3D driving thingie) to arrive at the entrance of the spacial elevator so i could bang some lunar chicks.

    Not everything predicted has come true, to be sure. But think about it: you are leaving a post on a computer located hundreds or thousands of miles away, along with hundreds of other people, and I, hundreds or thousands of miles away, am replying. Neither of us pays much at all for this service, which is nearly ubiquitous.

    You can casually watch television shows on demand, on your phone. Which, BTW, is roughly analogous to the pocket communicators on the original series of "Star Trek", except that they couldn't watch shows or take video/pictures or blog or play solitaire on them.

    There is sufficient storage in your computer to track every single man, woman, and child on earth, many times over. The price of photovoltaic solar cells has followed a consistent, exponential drop in price (half price every 5-ish years) and is now close to parity with coal.

    Cars are many, many, many times safer than they used to be - most accidents now result in basically no significant injuries, even when the car is totalled, thanks to crumple zones. Flat panel TVs are commonplace, with resolutions that rival photographic paper. Flexible, folding displays are available, if (still) expensive.

    I'm not sure what kind of changes you would expect, but these are just a few of the awesome technologies that I've seen unfold in my 30-something years. I mean, what do you want?!?!

  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday July 23, 2009 @01:51AM (#28792025) Journal
    Cancer cures have been pretty underwhelming; but 5 and 10 year survival rates for many flavors of cancer have been heading steadily in the right direction. The efficacy of pain control, anti-emetics, and other ancillary stuff has seen some improvement as well (unsexy, but not puking your guts up, as much, during treatment is definitely worth something). Also, there has been some interesting work in cancer prevention, which is even better. The HPV vaccines, for instance, show a great deal of promise in preventing a substantial percentage of cervical, anal, and penile cancers, while reductions in smoking should reduce lung cancer incidence rather nicely.

    Talk is generally PR hype; but sometimes the PR department is attached to people who do real work.
  • by blahplusplus ( 757119 ) on Thursday July 23, 2009 @01:58AM (#28792069)

    "An artificial brain of that complexity would be, in effect, a moral person."

    No it wouldn't; just because something mimics consciousness does not mean it is conscious. This is a common fallacy among people who take a naive form of physicalism to extremes. Can you anesthetize an artificial brain? The fact that anesthesia exists is proof positive that consciousness is inherently tied to the structures that produce it; just because you can build circuits that mimic consciousness does not mean they are alive, or even equivalent.

    The nature of consciousness is inextricably linked with what causes conscious self-awareness to emerge, beyond the unconscious processing and intelligence of, say, a computer.

  • by crazybit ( 918023 ) on Thursday July 23, 2009 @02:14AM (#28792169)
    Don't forget the unexplained brain features that haven't been documented because science can't explain them - like twins feeling what the other feels, and people with transplanted organs perceiving memories of the donor. Science can't even completely explain how memories are created.

    How can science imitate what it can't yet explain and measure? Our brains connect with reality in many ways our conscious self can't perceive.
  • by fuzzix ( 700457 ) <flippy@example.com> on Thursday July 23, 2009 @02:45AM (#28792321) Journal

    don't forget the unexplained brain features that haven't been documented because science can't explain them - like twins feeling what the other feels and people with transplanted organs perceiving memories of the donor.

    Explain?! It hasn't even been observed yet.

    You might as well say "But your precious science has yet to explain psychic powers and zombies!"

  • by Animats ( 122034 ) on Thursday July 23, 2009 @03:17AM (#28792507) Homepage

    It probably is within reach to build a hardware equivalent of a human brain. We don't know how to architect it, but building enough custom ICs and interconnecting them is probably within reach. The right architecture for simulating neurons probably involves some huge number of fast processors with limited memory, like a graphics board.

    I'm encouraged that this guy is trying to model a mouse brain. About twenty years ago, I was at a seminar by Rod Brooks. He was talking about trying to jump from insect-level AI, where he'd made some progress, to human-level AI. I asked him why he was trying to make such a big jump; a mouse brain might be within reach. He said "Because I don't want to go down in history as the person who created the world's greatest robot mouse". So instead, Brooks did Cog, a stationary robot with a head and arms which tried to fake acting human and didn't really lead anywhere. Taking a smaller step might work better.

    Reaching for mouse-level AI is promising. Mice and humans have about 85% DNA commonality. All the mammals seem to have roughly similar brain components, although the size ratios of the different sections vary widely. Humans have about 1000x the brain mass of a mouse. So if we can get a solid simulation of a mouse brain, it may be mostly a scaleup from there.

    The classic mistake in AI is that someone comes up with a reasonable idea, and then thinks they're one step from human-level AI. That's approaching the problem as if it were easy. Fifty years in, we can now conclude it is hard. So taking smaller bites is indicated.

    When we build an artificial brain, it will be rack-mounted in 19 inch racks.
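The "huge number of fast processors with limited memory" architecture suits simple per-neuron update loops. As a toy illustration only (a minimal leaky integrate-and-fire sketch with made-up weights and topology, nothing like the Blue Brain model):

```python
# Toy leaky integrate-and-fire network: each neuron carries only a little
# state, so the per-step update parallelizes across many simple cores.
# Weights, thresholds, and topology here are invented for illustration.

def step(potentials, weights, spiking, leak=0.9, threshold=1.0):
    """Advance all neurons one time step; return (new_potentials, new_spiking)."""
    new_potentials, new_spiking = [], []
    for i, v in enumerate(potentials):
        # Leak toward rest, then integrate input from neighbors that spiked.
        v = v * leak + sum(weights[j][i] for j in spiking)
        if v >= threshold:
            new_spiking.append(i)  # neuron i fires...
            v = 0.0                # ...and resets
        new_potentials.append(v)
    return new_potentials, new_spiking

# Three neurons wired in a chain: 0 -> 1 -> 2.
weights = [[0.0, 1.2, 0.0],
           [0.0, 0.0, 1.2],
           [0.0, 0.0, 0.0]]
potentials, spiking = [0.0, 0.0, 0.0], [0]  # inject a spike at neuron 0

history = []
for _ in range(3):
    potentials, spiking = step(potentials, weights, spiking)
    history.append(spiking)

print(history)  # the spike travels down the chain: [[1], [2], []]
```

Every neuron's update depends only on its own potential and the previous step's spike list, which is why this kind of workload maps naturally onto graphics-board-style hardware.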

  • by maxwell demon ( 590494 ) on Thursday July 23, 2009 @03:24AM (#28792533) Journal

    I think simulating the body's reactions is a few orders of magnitude simpler than simulating the brain, especially since it only needs to simulate the experience (e.g. the simulated stomach doesn't need to simulate the digestion of simulated food; it's enough if it emulates the filling state, and probably a few other simple data points).

  • by Opportunist ( 166417 ) on Thursday July 23, 2009 @03:24AM (#28792535)

    I mean, fusion power has been 10 years away for the last 40some years...

  • by BillyBlaze ( 746775 ) <tomfelker@gmail.com> on Thursday July 23, 2009 @03:31AM (#28792569)

    Science does ignore things outside of the universe, but amazingly enough, everything that matters is, by definition, inside it.

    In other words, suppose there is a soul. If we can still make a brain simulator that acts conscious, then it doesn't really matter, because it had no observable effect. If, because humans have souls and computers don't, we can't make a conscious brain simulator, then the soul has an observable effect, and can be reasoned about with science. Now, in the first case, you might say that the brain simulator acts conscious but isn't. It would be a lot like saying people with a different skin color act conscious but aren't, though - not morally defensible.

    Religions are not dualist because their ability to reason without evidence has allowed them to see some great truth that science has missed. They're dualist because they were conceived before we came to the great realization that the behavior of living things emerges from the physical laws.

  • Re:10 years? (Score:3, Insightful)

    by Tom ( 822 ) on Thursday July 23, 2009 @04:02AM (#28792679) Homepage Journal

    That's mostly because the media isn't reporting science stuff very well.

    AI researcher says: "We're working on a pattern-matching system based on the way the human brain functions, and we think we will have a working prototype within five to ten years."

    Mainstream media headline: "Intelligent robots will conquer the world five years from now."

    We did make huge progress in AI, for example. The people who really thought a computer would have human intelligence within their lifetime were always in the minority. But of course, someone saying "in a few years, your computer will be smarter than you" will get a lot more headlines and interviews than someone saying "in a few years, pattern matching in neural networks will be advanced enough to allow object recognition with a margin of error of less than 10% on a known set."

    I'm quite sure that this guy will do what he claims to be able to. I'm also sure the end result will not be spectacular enough to make it to the front page. It'll be a human brain. That doesn't make it have a human mind. We're still not very sure what exactly the mind is made of, but among other things we're fairly sure that you need a human body to have a human mind. A brain alone lacks senses, for example, and when you stop to think about it, you begin to realize how much of our internal model is built upon metaphors of the external world.

  • by Anonymous Coward on Thursday July 23, 2009 @04:11AM (#28792717)

    Sounds like an interesting ethics question: if you could "back up" a person (or simulation) and perfectly restore them later, would currently unethical medical experiments then become ethical as long as you restored from backup afterward?

  • by Tom ( 822 ) on Thursday July 23, 2009 @04:16AM (#28792753) Homepage Journal

    Evidence for that claim, please?

    Everything I know about the subject points to the opposite. We need our senses and input from the external world to build our model of the world in the internal. Without sensory input, you would never have become yourself, nor anything even close. The body is a lot more than a biological car. There's a lot of feedback between the body and the brain.

  • by mrrudge ( 1120279 ) on Thursday July 23, 2009 @04:28AM (#28792817) Homepage
    Personally, I have a problem killing animals too, I've never understood the generally accepted insanity that includes both the fluffybunnywunny and the rabbit pie.

    I guess it's somewhere around empathy. If we can emotionally relate to the intelligence it's a horrendous crime, if we can picture the deceased as an aggressor, or sufficiently different to ourselves then it's somewhere between nothing and a victory.

    Current electronics do nothing to stimulate a feeling of empathy; they're tools, extensions of ourselves. Once a machine brain can talk and reason with you, you might just feel differently?
  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday July 23, 2009 @04:32AM (#28792827) Homepage

    You have to see it from an evolutionary point of view. The reason why we feel pain is because it keeps us away from dangerous things. The reason why we don't mind killing pigs is because we have to eat something. The reason why we don't kill each other is because that wouldn't be very healthy for the survival of the race. The reason there is love is because it makes producing babies easier. And so on; we are what we are because it is good for survival, and most of our core morals build on that.

    The fun part, of course, is that few of those morals still work when it comes to computers. Death, for example, becomes kind of a non-issue when you can copy, suspend and resume a program. Death, on the other hand, is a big deal with biological things, because you can't copy them. The death of a biological thing is pretty final; the death of a computer program is not.

    I don't really doubt that one day we will be able to build a computer capable of human-like intelligence, but when we do, our moral system will have a really hard time keeping up with reality, as it's not built around logic but, for the most part, on our survival instincts.

  • by Knutsi ( 959723 ) on Thursday July 23, 2009 @04:42AM (#28792877)

    I sometimes wonder, though, if the component that gives intelligence is not necessarily that complicated. We seem very capable of adapting to new, abstract input, and this indicates to me that intelligence might be a generic mechanism. A lot of organisms are capable of learning, not just us. That's intelligence as far as I see it.

    My personal hypothesis (for what it's worth) is that what we will be able to build will be intelligent, but not necessarily very human. Humans have a genetic component, which includes instincts such as social behavior, and I think intelligence is a layer on top of this that helps us achieve the goals these instincts set out for us. In the end, the instincts dictate which outcomes appear good and bad, and reinforce the patterns of behavior that led to those outcomes.

    It might be that once we set out to explore these underlying instincts, and how to replicate them in a brain-like system, they might also prove to be surprisingly simple:

    • A smile from a human = good outcome (social) - possible by image analysis
    • Aggressive sounds from a human looking at you (that is stronger than you) = bad outcome - possible by sound/image analysis
    • Spider or snake-like shape near you = bad outcome - image analysis
    • Smell of fruit = good outcome - chemical analysis of air

    Probably it will be somewhat more complex than this, but I think we might be surprised once we get there. We might also find that tweaking instincts will make the brains, and their attached bodies, human-like or very, very different. We might be able to create a brain for whom life is ALL about good feedback from humans (these creatures already live amongst us :p), or ones that are merciless killing machines.

    I think no field will yield more knowledge and understanding of ourselves than the brain-builders in the decades to come.
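The rule list above is essentially a hand-written reward function that a learning layer would then optimize against. A toy sketch of that idea (the percept labels and scores below are invented for illustration, not from any real system):

```python
# Toy "instinct layer": maps crude percepts to innate good/bad scores,
# which a learning layer could use as reinforcement. The percept names
# and scores are invented for illustration only.

INSTINCTS = {
    "human_smile": +1.0,        # social approval (image analysis)
    "aggressive_sound": -1.0,   # threat from a stronger human (sound/image)
    "spider_shape": -0.5,       # innate aversion (image analysis)
    "fruit_smell": +0.5,        # food cue (chemical sensing of air)
}

def innate_reward(percepts):
    """Sum the innate scores of everything currently perceived."""
    return sum(INSTINCTS.get(p, 0.0) for p in percepts)

print(innate_reward(["human_smile", "fruit_smell"]))    # 1.5
print(innate_reward(["spider_shape", "unknown_blob"]))  # -0.5
```

Tweaking the table is exactly the "tweaking instincts" knob the comment describes: scale up `human_smile` and you get a creature whose life is all about social feedback; flip the signs and you get something much less friendly.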

  • by serviscope_minor ( 664417 ) on Thursday July 23, 2009 @04:57AM (#28792963) Journal

    My comment about anesthesia was that a simulation of a thing is *not* the thing in question; it's hard for naive physicalists to grasp,

    What you fail to see is that consciousness is not a physical thing. Your physicalist rules which apply to things (a simulation of X is not an X) therefore do not apply to consciousness. Perhaps if you could define consciousness, the debate might become easier. I suspect you can't, because no one has so far.

    "I think, therefore I am". That's all you really know. You can't tell that anyone else around you is really real. They appear "conscious" and so you choose to call them "conscious". You deduce that purely by observing their behaviour and actions: you observe no internal process. So why can't a machine be deemed conscious by the same rules?

  • by crazybit ( 918023 ) on Thursday July 23, 2009 @04:58AM (#28792975)

    Our brains connect with reality in many ways our conscious self can't perceive.

    What evidence have you? Go on, make shit up and spout it off. Meanwhile, scientists who actually live to figure this shit out for real and advance human knowledge will model the human brain and study it. In 10 years' time they may say, as a matter of fact, that our brains connect with reality in many ways our conscious self can't perceive. If and when they prove it, you'll still be wrong.

    When a bacterium gets into your bloodstream you don't consciously perceive it, but your brain still sends those white cells to the battle. So there you have a brain connection to reality that the conscious self can't perceive.

    In addition, this process was undocumented and "unknown" for almost all of known human history, but it always existed. How many brain processes do you think are still undocumented and unmeasured - but exist?

  • by Gerafix ( 1028986 ) on Thursday July 23, 2009 @05:10AM (#28793027)
    Personal anecdotes are not evidence; the plural of anecdote is not evidence either. We can imitate many things without fully understanding the natural process; to think otherwise is pure delusion.
  • by ZeroExistenZ ( 721849 ) on Thursday July 23, 2009 @06:00AM (#28793195)

    How do you define one's psyche and how is "mental health" or "mental illness" defined, and on what set of values?

    Say I'm a chronic masturbator (to be in tune with the Slashdot mentality) and it's considered "defective behavior" even though my body rewards me for continuing that habit.

    So, he would build a synthetic copy of my brain, emulate my current state and that's it.

    Now, my brain is in constant evolution: I have eroding neurons, and I learn new things, making new neural paths - which his machine wouldn't be able to do, the way I imagine it.

    Would he allow the brain to rewrite and rewire itself? And if so, how? Are these processes well understood enough?

    If they would be understood, and able to emulate, will they write "virtual medication" to influence the virtual brain to test side-effects or the propagation of a certain chemical interacting with the brain?

    If the last is possible, will we end up with sentient beings who are stuck in the same state for an eternity? Wouldn't that be sort of agonizing?

  • by erroneus ( 253617 ) on Thursday July 23, 2009 @06:40AM (#28793349) Homepage

    What most neuroscience appears to be missing is that the brain isn't an electrical system, but an electro-chemical system. To my knowledge, no one has done anything to simulate how the chemical interactions work with the signal-passing and processing aspects of neurology. I think it is quite apparent that there are a great many connections between the chemical balance of the human body and how well its various parts are working. We already have some clues from observing how drugs like lithium help to dampen activity in the brain, preventing or suppressing many results of "mental disease." So if chemical influence can have such a profound effect, I find it more than reasonable that chemical influence can also be a profound cause.

    It would appear that scientists are trying to "memory map" the brain as a computer, which I believe is simply the wrong approach. Sure, there will be some improvement in understanding of how some aspects work, but I think they will quickly reach a plateau with this approach.

  • by mcvos ( 645701 ) on Thursday July 23, 2009 @08:12AM (#28793783)

    I assume that we'd basically adopt a strategy of "enlightened plagiarism": use our (nontrivial) imaging and structural analysis technology to get the best idea we can of the structure of a real brain (without necessarily understanding what it does, or why it is structured as it is).

    I'm not convinced our imaging technology is going to be good enough for that in 10 years, though.

    Every decade somebody claims we'll be able to simulate the human brain or build a human-level AI within 10 years, and they're always wrong, because they're only focusing on their own tiny aspect of the human brain or human intelligence, and ignore the complexity of other aspects, or the complexity of how all those parts fit together. This overconfidence goes back to the 1950s.

    In other words: I'll believe it when I see it.

  • by As_I_Please ( 471684 ) on Thursday July 23, 2009 @08:19AM (#28793831)

    When a bacterium gets into your bloodstream you don't consciously perceive it, but your brain still sends those white cells to the battle. So there you have a brain connection to reality that the conscious self can't perceive.

    In addition, this process was undocumented and "unknown" for almost all of known human history, but it always existed. How many brain processes do you think are still undocumented and unmeasured - but exist?

    The brain is not involved in immune responses.

    As for your second point, who cares what scientists didn't know centuries ago? We know a great deal right now!

  • by JaumPaw ( 48149 ) on Thursday July 23, 2009 @08:31AM (#28793933)

    But in the end, it is the brain that translates the state into a "feeling". It is in the brain where the feeling occurs.
    For example - people who had their arms severed may still feel pain in their phantom arm.
    Also, various drugs may contract or expand our feeling of the body itself, deny any feeling or make us hyper-sensitive.

    What I'm saying is that consciousness is within the brain and the brain alone. The sensory inputs are very important to its upkeep but this is because OUR brain is "designed" (by evolution, I mean) to be that way - but it wouldn't make -sense- otherwise :)

  • Re:10 years? (Score:1, Insightful)

    by Anonymous Coward on Thursday July 23, 2009 @09:48AM (#28794639)

    Flying cars. We want flying cars!

    Really, the problem is expectations in terms of life and happiness, and it just doesn't happen. The 'future' is supposed to be clean, and neat, and fun, and everything is easy. But we're human so it doesn't work that way. Add in the fact that there are a number of very specific things that we want (flying cars!) and we feel a sense of disappointment at what the future has brought us. You can say 'Gosh, flat screen TVs, computers that would amaze people, more automation than you would believe in manufacturing, houses that don't burn down all the time, cheap (relatively) energy and amazingly cheap food, lots of diseases and cancers are curable, etc.' but people still won't be happy.

    Someone is going to create an artificial brain. I don't think that it's going to be 10 years from now, but eventually it will happen. And it will solve a lot of problems. And people will still be miserable and complain that they don't have flying cars.

  • by maxume ( 22995 ) on Thursday July 23, 2009 @10:21AM (#28794999)

    You are arguing in circles.

    If we don't understand human consciousness well enough to explain why Mr. Jones is conscious and Mr. Robo-Jones, who is outwardly indistinguishable from Mr. Jones, is not, then we have to assume that Mr. Robo-Jones is conscious, at least up until the point we figure out a way of explaining the difference. Asserting that he isn't conscious until he proves that he is leaves lots of room to start treating people who can't explain their own consciousness like automobiles.

  • by wytcld ( 179112 ) on Thursday July 23, 2009 @10:24AM (#28795043) Homepage

    it's theoretically possible with good enough imaging tools to make a 1:1 copy.

    Several problems with that:

    - When you're at the quantum level, you can't image it without changing it.
    - Okay, so you've changed it. You're after general structure not the details of the instant? But what if the old AI guys were right, and the essence of being a mind is in the programming, not the hardware? Shuffling your image of the quantum-level stuff may mean you get a good image of the hardware, and miss getting a functional program for it entirely.
    - Where are you going to store your image? This is not trivial. The human brain is orders of magnitude more complex than any other physical system known. Is there enough storage capacity on the planet to store the complete image details for one moment's slice of one human brain?
    - Once you store something that complex, how in heck are you going to fabricate a duplicate? Over what span of time, with what tools, can you build to that spec?

    Research projects like this are betting that with some drastic simplification you can build something roughly like a human brain, and that this roughest approximation will have useful parallels in operation. But the human brain isn't just electron firings. It's chemical cascades, electromagnetic fields, processing not just across synapses but within them, and quite possibly processing on the quantum level.

    He's going to build something like that? In ten years? Really?
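The storage question, at least, can be bounded with a back-of-envelope calculation (the neuron and synapse counts below are rough, commonly cited order-of-magnitude estimates supplied for illustration, not figures from the comment):

```python
# Back-of-envelope: bytes needed for one static snapshot of every synapse.
# All figures are order-of-magnitude estimates supplied for illustration.

neurons = 86e9             # ~86 billion neurons (common estimate)
synapses_per_neuron = 1e4  # ~10,000 synapses each
bytes_per_synapse = 8      # one weight plus addressing: very optimistic

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
petabytes = total_bytes / 1e15
print(f"{petabytes:.1f} PB")  # 6.9 PB for a bare connectivity snapshot
```

A few petabytes is big but not planet-scale; the real trouble is that a useful image would need far more than one weight per synapse once the chemical cascades, fields, and sub-synaptic processing the comment lists are included.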

  • Re:Awesome (Score:2, Insightful)

    by Verdatum ( 1257828 ) on Thursday July 23, 2009 @10:34AM (#28795145)
    The comment isn't old, it's appropriate. 50+ years ago, we were promised flying cars. We were all going to have them. We were going to have them 10 years ago. We don't. 10 years from now, I rather doubt we'll have flying cars OR artificial human brains. It's the standard estimation flub that researchers make in order to secure funding.
