Supercomputing Science

Effort to Create Virtual Brain Begins

bryan8m writes "An IBM supercomputer running on 22.8 teraflops of processing power will be involved in an effort to create the first computer simulation of the entire human brain. From the article: 'The hope is that the virtual brain will help shed light on some aspects of human cognition, such as perception, memory and perhaps even consciousness.' It should also help us understand brain malfunctions and 'observe the electrical code our brains use to represent the world.'"
  • by IO ERROR ( 128968 ) * <errorNO@SPAMioerror.us> on Monday June 06, 2005 @05:26AM (#12733988) Homepage Journal
    All it takes to simulate a human brain is 22.8 teraflops? I thought I was smarter than that.

    Seriously, they expect it to take a decade to complete. By 2015, we could probably get processors with that kind of power from the local computer store. Then everyone could have their own virtual brain...wait, are they going to GPL this?

    So what happens if this thing develops a consciousness?

    • by buswolley ( 591500 ) on Monday June 06, 2005 @05:31AM (#12734004) Journal
      Well we kill it of course.

      We kill things with consciousness all the time.

    • by Slashcrunch ( 626325 ) on Monday June 06, 2005 @05:32AM (#12734006) Homepage
      As far as how much processing power is needed to simulate the brain, I've met quite a few people for whom a C64 and a tape drive would be more than sufficient... and maybe some duct tape.
    • by Anonymous Coward
      All it takes to simulate a human brain is 22.8 teraflops? I thought I was smarter than that.

      TFA does mention mouse brain, and human only as the goal in 2015... plenty of time to increase the flops.

      • by ArsenneLupin ( 766289 ) on Monday June 06, 2005 @06:05AM (#12734115)
        TFA does mention mouse brain,

        ... and the output of the computer will be a two-digit number.

    • I agree, 22.8 teraflops is waaaay not enough.

      But if a CPU in 2015 can simulate 100 billion neurons sending signals to each other a couple hundred times a second over 100 trillion morphing connections asynchronously ... sign me up!
      • Hold yer horses there laddie.

        A neuron is *very* simple. Maybe just a sigmoid function over a sum. If the thing actually does 22.8 teraflops (unlikely; I'm guessing that's the theoretical peak for the machine) then that gives about 228 floating-point operations per neuron per second. That is in the right range for operation.

        The connections aren't really 'morphing'; they tend to mostly stabilize within the first few years of life. I can't remember the figure; it's maybe on the order of 1000 connections per neuron, so 228 floating point op
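
        (A rough back-of-envelope restatement of the arithmetic in this sub-thread, as a sketch only; the neuron and connection counts are the round figures quoted above, not data from the article.)

          # Back-of-envelope: peak FLOPs available per neuron (and per connection)
          # per second if the whole quoted machine were spread over a human brain.
          PEAK_FLOPS = 22.8e12            # quoted peak of the machine
          NEURONS = 100e9                 # ~10^11 neurons, the round figure used above
          CONNECTIONS_PER_NEURON = 1000   # order-of-magnitude guess from this thread

          flops_per_neuron = PEAK_FLOPS / NEURONS
          flops_per_connection = flops_per_neuron / CONNECTIONS_PER_NEURON
          print(f"{flops_per_neuron:.0f} FLOPs per neuron per second")         # ~228
          print(f"{flops_per_connection:.2f} FLOPs per connection per second")  # ~0.23
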
        • by Oligonicella ( 659917 ) on Monday June 06, 2005 @07:09AM (#12734280)
          You're talking out yer butt on a couple of issues. First, they are not simple. They have branching axons and many dendrites. Their responses change depending upon the hormone bath that they live in. Second, they do indeed 'morph' throughout life. They can even repair. This is especially true of the dendrites.

          You're pretty correct on the wiring, although not at the level you wrote. The basic connectivity and structure is known, but each and every brain is wired from experience, not just birth.

          It's worth trying, and we will learn a lot regardless. We just won't learn as much about the brain as one might think.
        • by Somato_gastric ( 832012 ) on Monday June 06, 2005 @07:44AM (#12734444)

          Hold your horses! There is abundant evidence that single neurons can perform more complex operations than a mere 'sigmoid function'. That is a working approximation that can be useful from the point of view of simulations, but that is all.

          Single neurons can potentially perform computations at the level of the passive cable equation; at the level of active membrane properties added on top of those passive cable solutions; and at the level of genetic instructions activated in the nucleus and dendrites in response to activity. Finally, the plasticity or learning rules that neurons use are not only computationally very important but probably vary quite a bit from brain region to brain region. Spike-timing-dependent plasticity, for example, allows the brain to pick out persistent correlations within highly noisy inputs. None of this is included in the impoverished neural-network viewpoint of 'sigmoids'.

          The real question is why they are doing this. Markram is a top researcher and knows what he is doing, but I question the motivations of Big Blue. I wouldn't be surprised if they don't give two hoots about the science and are only doing this for the kind of publicity that posts on Slashdot bring. Remember 'Deep Blue'? Let's hope they don't treat Markram like they did Kasparov.
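
          (Since spike-timing-dependent plasticity is mentioned above, here is a minimal sketch of the classic pair-based STDP rule; the amplitudes and time constants are generic illustrative values, not anything from the project.)

            import math

            # Pair-based STDP: strengthen if pre fires before post, weaken otherwise.
            A_PLUS, A_MINUS = 0.01, 0.012   # illustrative learning amplitudes
            TAU_PLUS = TAU_MINUS = 20.0     # ms, illustrative time constants

            def stdp_delta_w(t_pre, t_post):
                """Weight change for one pre/post spike pair (times in ms)."""
                dt = t_post - t_pre
                if dt > 0:   # pre before post -> potentiation
                    return A_PLUS * math.exp(-dt / TAU_PLUS)
                else:        # post before (or with) pre -> depression
                    return -A_MINUS * math.exp(dt / TAU_MINUS)

            print(stdp_delta_w(10.0, 15.0))   # small positive change
            print(stdp_delta_w(15.0, 10.0))   # small negative change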

    • by Bones3D_mac ( 324952 ) on Monday June 06, 2005 @05:40AM (#12734039)
      I'm more worried about how long it'll take for this thing to get bored, once it reaches that state. If they are going for the full human experience, how are they going to prevent sensory deprivation?

      Will they use some kind of skin grafting onto a chip to let it "feel" things using the nerves in it, instead of simply simulating it with pressure/temperature sensors?

      And what of other stuff like taste and smell?
      • You're ... worried... about it getting bored. Wow. Think you're taking this maybe a touch too far?

        I think you've been spending a little bit too much time in science fiction fantasy land.
      • skin grafting onto a chip to let it "feel" things

        Are you processing what I'm processing??

        err.. thinking.
        • > > skin grafting onto a chip to let it "feel" things
          >
          > Are you processing what I'm processing??

          Seeing as how they're using slices of mouse brain, I believe the correct answer would be along the lines of...

          "Umm, I think so, Brain, but a billion parallelized microprocessors and a human named CmdrTaco? What would the children look like?"

    • by Ford Prefect ( 8777 ) on Monday June 06, 2005 @05:42AM (#12734050) Homepage
      All it takes to simulate a human brain is 22.8 teraflops? I thought I was smarter than that.

      From the article:
      ... [T]he initial phase of Blue Brain will model the electrical structure of neocortical columns - neural circuits that are repeated throughout the brain. ... "These are the network units of the brain," says Markram. Measuring just 0.5 millimetres by 2 mm, these units contain between 10,000 and 70,000 neurons, depending upon the species.

      In other words, one day they hope to simulate a whole brain, but to begin with they'll be modelling the behaviour of a particular neural unit - with physical data derived from many, many slices of mouse brains.

      In terms of deciphering the behaviour of relatively large numbers of neurons, it could be incredibly useful (and once the model is tuned would mean fewer messy, difficult and unpleasant experiments involving live animals, brain electrodes and whatnot) - but it's admittedly only a small first step toward modelling a whole brain of any species. Still, it's one of the necessary building blocks - and any moral issues are left as an exercise for the reader... ;-)
      • physical data derived from many, many slices of mouse brains
        I think if they are going to be getting into the details of the brain, there is quite a bit of difference between humans and mice.
    • by baryon351 ( 626717 ) on Monday June 06, 2005 @05:42AM (#12734051)
      > So what happens if this thing develops a consciousness?

      Yes. That's what has me thinking. Not that I think we should stop, but it's going to be a disturbing moment when the techs running these things get to a point where they ask a simulation brain questions, get it to perform tasks, get it to react like a human does...

      ...and it says it's scared. or alone. or just wants a friend.
    • ...If they don't then it would be kept in check by copyright law, reproducing itself would be infringing...
    • by venicebeach ( 702856 ) on Monday June 06, 2005 @05:42AM (#12734054) Homepage Journal
      All it takes to simulate a human brain is 22.8 teraflops? I thought I was smarter than that.

      You are.

      According to the Business Week article [businessweek.com] this thing will be simulating about 10 thousand neurons. The human brain has about 100 billion neurons. This will be simulating a small section of cortex, not an entire brain. The goal seems to be to understand how cortical columns work, not to create a simulated mind. They actually will not even have enough "neurons" to match one human cortical column, but will probably still learn a lot about the circuitry....

        The goal seems to be to understand how cortical columns work, not to create a simulated mind. They actually will not even have enough "neurons" to match one human cortical column, but will probably still learn a lot about the circuitry....

        Again from the article:

        Two new models will be built, one a molecular model of the neurons involved. The other will clone the behavioural model of columns thousands of times to produce a complete neocortex, and eventually the rest of the brain.

        Sounds like they'll use

      • So ten years from now computers still won't go "I'm sorry Dave, but I cannot allow that"? I'm very disappointed.
      • There's a company called Artificial Development who are trying to simulate a 20 billion neuron brain. They call it CCortex.

        http://www.ad.com/ [ad.com]

        They've been at it for several years, so it looks like IBM are a bit behind.


      • My guess is that the Business Week article linked in the parent comment is better than the New Scientist article at explaining the researcher's intentions. Here's a quote from the Business Week article: "The Blue Brain Project will search for novel insights into how humans think and remember."

        If you've been around scientific research, it is not difficult to understand that this research has little chance of producing anything valuable.

        There are several reasons:

        1) The research is equivalent to tryin
    • by Anonymous Coward on Monday June 06, 2005 @05:43AM (#12734055)
      A decade?
      Give me a shovel and a dark night and I'll get you some real brains, second-hand. And at only 1/2 the cost.

      Sincerely,
      Igor

    • So what happens if this thing develops a consciousness?

      How would you tell? Seriously. It's not like you can just stick a ruler in and measure the length of the consciousness gland.
    • Those estimations of processing capability are nonsense anyway.

      Is a machine that does 100 teraflops, but which does multiplication by adding in a loop, better than a 50-teraflop machine that does it with a more intelligent algorithm?

      I'm pretty sure that eventually we'll understand how the brain works, which will enable us to produce something that emulates its function, but in a much more efficient way. Just like we can make machines that are better at multiplication I'm sure that some day we'll make mac
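
      (To make the point concrete, a trivial sketch, nothing to do with the article: both functions below compute the same product, but the naive one spends operations proportional to one operand rather than doing the multiply in one step.)

        def multiply_by_adding(a: int, b: int) -> int:
            """Naive multiplication: add a to itself b times (O(b) additions)."""
            total = 0
            for _ in range(b):
                total += a
            return total

        def multiply_directly(a: int, b: int) -> int:
            """Single hardware multiply (O(1) from the program's point of view)."""
            return a * b

        assert multiply_by_adding(7, 1200) == multiply_directly(7, 1200)
        # Same answer, wildly different operation counts: raw FLOPS alone says
        # nothing about how cleverly those operations are spent.
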
    • > All it takes to simulate a human brain is 22.8 teraflops?
      > I thought I was smarter than that.

      A rough guess seems to come in at around 100 teraops or more.

    In a paper by Hans Moravec [transhumanist.com], one guess is 10^14 instructions per second (extrapolation of retina-equivalent computer operations).

      While another by Ralph Merkle [merkle.com], suggests 10^13 - 10^16 operations per second, based on power consumption,

      and yet another by Robert McEachern [aeiveos.com] suggests 10^17 FLOPS (Floating Point Operation Per Second, more comparable to c
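
      (A quick sketch of how far 22.8 teraflops is from those estimates in Moore's-law terms; the 18-month doubling time is the usual rule of thumb, not a claim from the article.)

        import math

        CURRENT_FLOPS = 22.8e12
        DOUBLING_YEARS = 1.5   # rule-of-thumb Moore's-law doubling time

        for label, target in [("Moravec ~1e14", 1e14),
                              ("Merkle upper ~1e16", 1e16),
                              ("McEachern ~1e17", 1e17)]:
            doublings = math.log2(target / CURRENT_FLOPS)
            print(f"{label}: {doublings:.1f} doublings, "
                  f"~{doublings * DOUBLING_YEARS:.0f} years")
        # e.g. 1e17 / 22.8e12 is ~4400x, about 12 doublings, roughly 18 years.
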
        It's meaningless to guess how many OPS/FLOPS it'd take to simulate a human brain (or any other physical object) without stating what type of simulation you're talking about. In the case of a brain, a molecular simulation is going to take many orders of magnitude more OPS/FLOPS than a neuron-by-neuron simulation, which would in turn take many orders of magnitude more OPS/FLOPS than a neural assembly (e.g. cortical microcolumn) simulation, etc, etc. If we actually knew how the brain functioned in high level terms, then we could perform
  • Obligatory... (Score:3, Insightful)

    by ArbiterOne ( 715233 ) on Monday June 06, 2005 @05:27AM (#12733989) Homepage
    "We marveled at our own magnificence as we gave birth- to A.I."
  • by harlemjoe ( 304815 ) on Monday June 06, 2005 @05:28AM (#12733993)
    "without your space helmet Dave, you're going to find that rather difficult"

    2001 [imdb.com]
  • Longer article (Score:4, Informative)

    by andymar ( 690982 ) on Monday June 06, 2005 @05:29AM (#12733996)
  • by racecarj ( 703239 ) on Monday June 06, 2005 @05:37AM (#12734025)
    What's interesting about this type of study is the possible philosophical arguments that come up...

    Our brains are made of mostly water, carbon, etc.... which form neurons. This is only important in the sense that we are what we are because these neurons are able to take a set structure, where neurons interconnect, and then have a specific function, where they fire.

    There's nothing magical about these neurons. Let's say that you could replace these neurons with, say, ultra-small marbles that could take the same structure and perform the same function... It is logical to think that this marble-brain would be an actual brain, the same as any other. It would be a person.

    So if they're simulating a brain virtually, but this virtual construct simulates the structure and function correctly, would this virtual brain be aware? Would it be a "person"? I personally, would say that it would. But then, is it moral to ever shut such a simulation off (murder)? Or create it in a virtual world without any other virtual brains to talk to (torture)? Or create it at all for the use of an experiment?
    • This view is called functionalism [wikipedia.org].

      But in regard to this simulation, it is not being built to do the things that a human brain does. That is, as far as I can tell from the article, it does not have any perceptual, motor, or cognitive functions; it is simply an isolated circuit designed to understand how assemblies of neurons work together.

      A growing movement in cognitive neuroscience stresses an understanding of the mind as "embodied". That is, much of our cognition relies upon and draws from the p
    • I'm a little curious as to how the marbles would interconnect in your scenario. It's not so much the neurons, per se, but more the way they work together and the way they change how they work together...

      Oversimplification which loses sight of that fact does nothing for your argument.
  • by shadowcode ( 852856 ) on Monday June 06, 2005 @05:39AM (#12734034) Journal
    In 10 years, I bet the first readout will read;
    "I think you ought to know that I'm feeling very depressed"
  • by Einherjer ( 569603 ) on Monday June 06, 2005 @05:39AM (#12734035) Homepage
    They needed a simple brain to begin their modelling with.

    They decided on George W. Bush.

    Let's just hope....

    hmmm....

    I for one welcome our new artificial dumb military overlord.
    • Re:In other news (Score:3, Interesting)

      by AndroidCat ( 229562 )
      What is the difficulty with writing a PDP-8 program to emulate Jerry Ford?

      Figuring out what to do with the other 3K.

      Yep, presidential brain simulation jokes just never get old!
    • Dateline: 2012
      In further developments, the allegedly dimwitted IBM computer 'test brain' has again outpolled the latest Democratic presidential hopeful, leaving the former "major" political party now in third place and scrambling for some good news. Leading mainstream media sources have suggested anonymously that somehow this computer has managed to run a global repressive conspiracy, convince congress to throw the country into a war for its personal enrichment, and personally engineered a massive McCarthy
  • by art6217 ( 757847 )
    The real brain has content - the instinct, the way of learning from experience, and the knowledge learned from that experience. It's a bit like a computer -- there must be at least some sensible bootstrap code that knows how to populate the circuits with other code and data. What about the `bootstrap' in the simulation? Is it only a random net of randomly initialized neocortical columns? Wouldn't it be a bit like a huge net of random, though primitively adaptive, gates, that one calls a processor?
    It is su
    • This is an excellent point. It's one thing to simulate a large number of neurons and an even larger number of synapses, but this is only the first small step toward simulating a real cortical column.

      In order to simulate a mammalian cortical column, the weight and bias of each synapse needs to be determined (experimentally, or by simulation through trial and error) relative to the other synapses in that column (and there are probably tens of millions of synapses in a column consisting of 70,000 neurons).

      This
  • by Anonymous Coward
    When started they'll have to keep the simulation going or else they'll kill him/her/ver! :(
  • Umm... (Score:5, Funny)

    by Anonymous Coward on Monday June 06, 2005 @05:43AM (#12734058)
    Is it a male or a female brain they're simulating?

    They work quite differently you know.
    Some even speculate that one of those two kinds of brain might need even less than 22.8 Teraflops to simulate.
    • Re:Umm... (Score:4, Funny)

      by AndroidCat ( 229562 ) on Monday June 06, 2005 @06:41AM (#12734209) Homepage
      Some early work was done with both. They set them up to monitor each other's output for correctness. There was a snag in that the output of the male brain was always flagged as incorrect. Removing the interface or even powering down the female brain made no difference, the male brain was always wrong.
      • Source? Would be an interesting read.

        You're a bit diffuse about some things I'd like to see more about, like how a powered down brain couldn't be wrong and what the brains were "correct" about at all.
  • not there yet (Score:2, Informative)

    Looking at this title, and having already read a fair amount on neural physiology, I thought: we do not have enough information to do this yet. Then I read the article, and it is a ten-year-long project, and possibly for a mouse brain (clarification would be nice).
  • Will come to nothing (Score:3, Interesting)

    by countach ( 534280 ) on Monday June 06, 2005 @05:48AM (#12734070)

    My prediction is that this project will achieve very little. I doubt they know as much as they think they do, but more importantly they won't be able to bootstrap this thing to be comparable to a real person.
  • I highly doubt the success of this, simply because a typical CPU is very unlikely to simulate neural tissue well. The operations are executed in sequence, in series, so one CPU must simulate lots and lots of neurons, each separately, taking time for each of them, and the total speed will simply suck.
    On the other hand, a good setup of several FPGA boards, where a small group of gates could work as a neuron, and there would be billions of them, all working in parallel (just like the brain does), could work. P
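
    (A toy sketch of the serial bottleneck being described: one CPU visiting every simulated neuron in turn each time step. The update rule and numbers are made up purely for illustration.)

      import math

      def step_serial(potentials, inputs, decay=0.9, threshold=0.5):
          """One tick of a toy network: a single CPU visits every neuron in turn."""
          fired = 0
          for i in range(len(potentials)):          # serial loop -- the bottleneck
              potentials[i] = potentials[i] * decay + inputs[i]
              if potentials[i] >= threshold:
                  fired += 1
                  potentials[i] = 0.0               # reset after firing
          return fired

      n = 10_000                                    # toy network, nowhere near 10^11
      potentials = [0.0] * n
      inputs = [0.05 + 0.05 * math.sin(i) for i in range(n)]
      for tick in range(20):
          fired = step_serial(potentials, inputs)
      print("neurons that fired on the last tick:", fired)
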
      Parallelism is all well and good - but if you're simply recreating the brain with hardware, then you're no closer to understanding what's going on. You need that central processing unit to process the information that it may be able to find.

      When we're looking at the question of how the brain works, we need these interpretation stages, because when we look at it as is (either by looking at a physical brain or a hardware model of same) there's just too much chaos to pick out the useful order.
      • So add an extra "debug layer" - ability to snoop at each cell, examine its state and modify it. Have random access to the whole thing, but remove the need for sequential processing of all the uninteresting areas.
  • by Adelbert ( 873575 ) on Monday June 06, 2005 @05:58AM (#12734091) Journal
    About a year ago, I read this [amazon.com] book. It's very interesting, and the arguments put forth in it contradict the possibility of simulating the human brain in the way IBM intends.

    While it is true that Moore's Law suggests we will soon have the processing power of the human brain, that doesn't mean we will soon have AI on our hands. If we built this computer and fed it a "Hello World" program written in Pascal, it wouldn't suddenly become self-aware.

    We only have one type of working brain, so it would make sense to replicate this in every way possible in order to create a simulated intelligence. However, this has a great deal of complexity that we neither have the biological knowledge to understand nor the technical knowledge to emulate. Literally millions of neurons are connected inside us, forming cortical maps and working at different levels of awareness, from the lower, barely perceptible levels (reflex actions), to the higher, seemingly conscious, levels (deciding whether to order toast or a bagel for brunch).

    Anyone who's interested in AI (or indeed the operation of the human brain) should read Steve Grand's book. It is highly enlightening, and very thought-provoking.

  • I saw something like this on the 'Superfriends'. It didn't end well.

  • An AI Essay (Score:3, Interesting)

    by tezza ( 539307 ) on Monday June 06, 2005 @06:00AM (#12734098)
    I stumbled across this when looking for a Java Rules Engine:

    From Socrates to Expert Systems [berkeley.edu].

    It argues that rules-based AI is a dead end. It also classifies levels of expertise.

    It would seem that this non-rules-based IBM brain simulation method could possibly go beyond the 'advanced beginner' stage that Professor Hubert Dreyfus argues rules-based systems are limited to.

    • Well, -rules- aren't the big problem. The problem is adding new rules (by itself) and especially adding new methods of adding new rules. If you make a program that knows how to efficiently build its own database AND modify its own structure to improve efficiency, you're home. Genetic algorithms are closest to that, except that a "self-learning" gene would be enormous, so only a gargantuan population and simply unobtainable computational power could make it work. Plus you'd have to write a fitness function to evaluate that...
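
      (For reference, a minimal genetic-algorithm loop of the kind alluded to above, maximizing a toy fitness function; everything here is illustrative and has nothing to do with the Blue Brain work.)

        import random

        TARGET = [1] * 20                                   # toy goal: all-ones genome

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def evolve(pop_size=50, generations=100, mutation_rate=0.02):
            pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]              # keep the fitter half
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(TARGET))  # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [1 - g if random.random() < mutation_rate else g
                             for g in child]                # point mutation
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print(fitness(best), "of", len(TARGET))             # usually 20 of 20
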
  • This is to shed *some* light on aspects of human cognition. Reason magazine just had a cover story about this sort of thing.

    "If the human mind was simple enough to understand, we'd be too simple to understand it." -- Emerson Pugh

    Of course, back when he said that, 720k really was all the memory you would ever need.

    My how things do change. One step closer to a neural shunt every day.
  • Brain != Thinking (Score:3, Insightful)

    by arstchnca ( 887141 ) <arst3chnica@gmail.com> on Monday June 06, 2005 @06:17AM (#12734156)
    For those who don't feel up to actually reading the article, the Blue Brain project does not intend to create artificial intelligence, but rather a replication of the physical side of the human mind - the brain. The 22.8 teraflops mentioned in the summary are going to be used to manage a database of "neural architecture." The whole project has little, if anything, to do with consciousness.

    As of this posting, there have been several "what if" posts about the project accidentally leading to the creation of artificial intelligence. Systems such as the fictitious Skynet will not rival the flexibility and depth of a single human mind until we fully understand the mind ourselves. Lisa Fittipaldi, an astonishingly talented painter, is able to create beautiful scenes on what was once a blank canvas. At the same time, Ms. Fittipaldi is unable to paint an accurate portrait - she is blind.

    We can only recreate what we understand.
  • Wishful thinking (Score:5, Insightful)

    by bloodredsun ( 826017 ) <martin@nosPam.bloodredsun.com> on Monday June 06, 2005 @06:31AM (#12734191) Journal

    As someone who's spent many years as a neurophysiology researcher before becoming a programmer, I feel I may have a bit more insight here than the average person. What this project boils down to is a simplistic model of the simplest unit of operation of one area of the brain (the neocortical column). Anyone who has followed research into areas such as epilepsy and memory will know of the massive gaps in our understanding of the relationship between the brain and the mind. So this "first computer simulation of the entire human brain" is neither accurate, in the sense that they are not simulating the human brain, nor a first, since they are not the first to try what they are attempting. The only difference here is that they have the very public backing of a major corporation that understands the benefit of good publicity.

    This sort of research is fascinating and desperately needs to be done, but it does no one any favours when people attach tabloid-style headlines to it. The days when we wear Richard Morgan style "stacks" are, unfortunately, still as far away as ever.

      my thoughts exactly; it may be a few years since I studied neural networks at university, but unless someone has sneakily made a quantum leap forward, any claims of simulating entire brains or creating self-aware computers are still science fiction.

      it constantly amazes me that people still assume that once a certain amount of computing 'power' is available, a computer could suddenly become sentient, as if someone just flicked a switch.

      we don't even know what sentience and consciousness really mean ourselve
  • (Note: I've used 'wires' and 'components' arbitrarily, these can be real (hardware simulation) or simulated (software simulation) or whichever way you prefer.)

    The question of morality of this replication of a brain (mouse, human, whatever - let's speak hypothetically, it's easier) boils down to the existence of a soul.

    If you have a wiring model that responds to stimuli in the same way as the real brain being modelled would, then there's no way to distinguish between the two.

    This is made more complicat
  • Not the first (Score:4, Informative)

    by Silver Sloth ( 770927 ) on Monday June 06, 2005 @06:42AM (#12734210)
    This was all covered back in the late sixties/early seventies by the great Donald Michie http://www.aiai.ed.ac.uk/~dm/dm.html [ed.ac.uk]. If only there had been the processing power back then. The project was stopped because 'computers will never be powerful enough'; such is the foresight of civil servants.
  • by forii ( 49445 ) on Monday June 06, 2005 @06:52AM (#12734233)
    In the early '90s, I heard that one of the supercomputers at Caltech was able to simulate the complete behavior of a single neuron. Scaling this up by 100 billion times, and then applying a rough bastardization of Moore's law that says computational power doubles every 18 months, leads to a prediction that a supercomputer (whatever that is at the time) could simulate an entire brain about 50 years after that point.

    Based on this (incredibly rough and inaccurate) analysis, I would predict that this type of project will be successful around the year 2040.
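
    (For reference, a rough check of that extrapolation: scaling by a factor of 10^11 takes log2(10^11) ≈ 37 doublings, and at 18 months per doubling that is roughly 55 years from the early '90s - the same ballpark as the "about 50 years" figure above.)
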
    • by IdahoEv ( 195056 ) on Monday June 06, 2005 @11:06AM (#12736270) Homepage
      As someone who is receiving my PhD in "Computation and Neural Systems" from Caltech this week, and having worked briefly in that lab, I can tell you that the simulation you read about, which is called GENESIS [genesis-sim.org] probably simulated the neuron in much greater detail than is ultimately required to create a brain. It simulated the entire physiology and chemistry of the neuron ... every ion flow, trans-membrane voltage, etc. One of the many goals is to explore precisely what information-processing behavior arises from the chemistry and biology.

      But, once you determine that information-processing behavior, one should in theory be able to simulate that without a detailed model of the underlying structure. I mean, if I know that impulses from X input synapses cause the voltage at the soma to rise or fall according to a certain time function, and that a certain voltage at the soma causes an action potential to be fired, which will trigger the neuron's own output synapses to fire Y milliseconds later, I should be able to simulate these properties without going to the pain of modelling the ion channels, capacitance, and resistance of every patch of membrane on the whole neuron's surface.

      That should buy a few years' worth of Moore's law for your prediction. Consider yours an upper bound, and assume we can make shortcuts to bring it sooner than 2040.

      I actually think the top supercomputers are within spitting distance of modelling a human brain - or at least smaller mammalian brains - now. The trouble is that, despite what TFA leads you to believe, far too little is known yet about the interconnections of those neurons. Even less is known about their learning functions. The state of the art in much of the brain is to stick a few electrodes in, hope you find a couple of neurons that are connected in some way, record for a while, and then do statistics on their firing patterns to estimate the strength and type of their pairwise connection. From that they hope to work backwards to deduce the connection patterns of whole clusters of neurons. It's slow, messy work.

      The group in TFA uses thin slices of brain where they can more accurately observe which neurons are connected to which, and which neurons they are recording from. It's a useful technique, but since the connections in the brain are three-dimensional, taking thin slices fundamentally alters the structure, so it can't tell us everything.

      Much of the brain is still a black box, effectively. It will still be a while before we can model an entire brain, regardless of CPU power available. My personal gut feeling is that the understanding of the neuronal network is far more the limiting factor at this point.
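
      (The phenomenological description two paragraphs up - input spikes push the soma voltage up or down, and a threshold crossing fires an action potential - is essentially a leaky integrate-and-fire neuron. A minimal sketch of that abstraction follows, with made-up constants, in contrast to a full compartmental/ion-channel model like GENESIS.)

        def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                       v_thresh=1.0, v_reset=0.0):
            """Leaky integrate-and-fire: integrate input, fire on threshold, reset.

            All units and constants here are illustrative, not physiological values.
            """
            v = v_rest
            spike_times = []
            for step, i_in in enumerate(input_current):
                v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration
                if v >= v_thresh:                        # threshold crossing
                    spike_times.append(step * dt)        # "action potential" fired
                    v = v_reset                          # reset after the spike
            return spike_times

        # Constant drive strong enough to push the membrane over threshold repeatedly.
        print(lif_neuron([1.5] * 200))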

  • What if it works? (Score:3, Interesting)

    by Malor ( 3658 ) on Monday June 06, 2005 @07:43AM (#12734435) Journal
    I doubt they'll get to full human-brain awareness level anytime soon, but ... what if they do? What happens if they create a sentient being inside their simulator? When they're done with the simulation and it's time to start on something new, is turning off the machine killing the 'creature' inside?

    And even if it's not as smart as a human, what then? What ethical guidelines are appropriate? When is it okay to destroy a thinking being, even if you created it yourself? And how complex must it be? Killing a beagle or a dolphin isn't murder, after all, but it's still considered wrong in many cases to do so.

    Are AIs cute and cuddly and protected by humane-treatment laws, or scary and kill-on-sight, like spiders and snakes are for many people?

    How smart does an AI have to be to have rights against termination?

    We've been sort of doodling around with these thoughts for a long time, but it's getting to the point where we may actually need the answers.....
  • by Muad'Dave ( 255648 ) on Monday June 06, 2005 @08:29AM (#12734686) Homepage

    ... make sure you install a huge fire axe near the main power cord in case this thing decides it doesn't need us anymore!

  • by johnrpenner ( 40054 ) on Monday June 06, 2005 @09:28AM (#12735157) Homepage
    Is the Brain a Digital Computer? [soton.ac.uk]
    John Searle

    There is a well defined research question: "Are the computational procedures by which the brain processes information the same as the procedures by which computers process the same information?"

    What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.

    But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience. Now that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems.

    In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The "information" in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing which is described in the cognitive science computational models of cognition, on the other hand, is simply a matter of getting a set of symbols as output in response to a set of symbols as input.

    We are blinded to this difference by the fact that the same sentence, "I see a car coming toward me", can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure from us the fact that the visual experience is a concrete event and is produced in the brain by specific electro-chemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model. The upshot of this part of the discussion is that in the sense of "information" used in cognitive science it is simply false to say that the
