Science

Downloading The Mind 515

bluemug writes "The Canadian Broadcasting Corporation's popular science radio show Quirks and Quarks aired a piece this weekend about Ray Kurzweil's ideas on downloading human minds to silicon. (The interview is available in MP3 or OGG.) Kurzweil figures we'll have strong AI by 2029 and be able to copy a human mind about a decade after that. Book your appointment now!"
This discussion has been archived. No new comments can be posted.

Downloading The Mind

  • by irn_bru ( 209849 ) on Monday October 21, 2002 @06:23AM (#4494179)
    Download my brain and it'll be the buggiest chip since the Pentium [com.com]

  • Sure! (Score:5, Funny)

    by Koyaanisqatsi ( 581196 ) on Monday October 21, 2002 @06:24AM (#4494180)
    Book your appointment now

    Yeah, I'll go check it out on my flying car, while the robot takes care of things at home.
    • Re:Sure! (Score:5, Funny)

      by troc ( 3606 ) <[troc] [at] [mac.com]> on Monday October 21, 2002 @06:30AM (#4494204) Homepage Journal
      Flying car?

      That's soooo last decade. Surely you will be using your household teleportation booth?

      You could combine it with a trip to the tactile 3D hologram suite.

      At least tell me you have a subspace communications port or how else will you download the 1Tb bug updates to Microsoft Windows XXXP with "use this browser or else" Interuniversenet Explorationer?

      Not that I am doubting there's enough silicon out there for a few brain dumps, no, of course not. And anyway Skynet will protect us from the Matrix.

      Troc
    • Re:Sure! (Score:5, Insightful)

      by cshotton ( 46965 ) on Monday October 21, 2002 @07:24AM (#4494429) Homepage
      I've met Ray on several social occasions and discussed his vision for the future. There is a huge flaw (or blind spot) in his vision. All of his massive advances in AI and general computing functionality are based on extrapolating trends like Moore's Law, Metcalfe's Law, etc. into the near future. He infers that because these laws predict massive CPU power and network bandwidth, the software to match will naturally come along, too. Ray's a hardware guy, for sure. Unfortunately, we've already seen a plateau in the demand for CPU cycles and network bandwidth. Without market forces to drive these trends, why assume they'll be sustained?

      The problem with depending on hardware and network advances to drive his vision is that software engineering simply cannot keep up with the pace of advances on the hardware front. Anyone who has ever read the "Mythical Man Month" understands this at a basic level. Humans simply cannot organize themselves well enough to tackle software projects of the magnitude that Ray envisions, at least not by 2029.

      Ray dismisses this argument by saying we'll have software that writes the software. Well, there's a tautology for you. If you can't write the software you need because it's too complicated, how can you possibly be expected to write the software to write that software? Genetic algorithms are useful for some very specific trial and error sorts of problems. But using them to random walk our way to a billion lines of debugged, functional AI code seems a bit of a stretch.

      My money sure isn't on Ray's pony...
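
      For scale, a back-of-envelope sketch in Python of what that extrapolation alone yields between 2002 and 2029, assuming a doubling roughly every 18 months (the doubling period is only an assumption, not a figure from the interview):

      # Rough sketch: what "extrapolating Moore's Law" from 2002 out to 2029
      # buys you in raw hardware, if transistor density doubles every 18 months.
      doublings = (2029 - 2002) * 12 / 18          # ~18 doublings
      factor = 2 ** doublings
      print(f"{doublings:.0f} doublings -> ~{factor:,.0f}x the 2002 hardware")
      # prints: 18 doublings -> ~262,144x the 2002 hardware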

      • Re:Sure! (Score:3, Interesting)

        by richieb ( 3277 )
        Ray dismisses this argument by saying we'll have software that writes the software. Well, there's a tautology for you. If you can't write the software you need because it's too complicated, how can you possibly be expected to write the software to write that software?

        I agree with you. To be able to solve a problem using a computer, you have to know how to solve the problem in the first place. Kurzweil (and his strong AI buddies) are counting on some "emergent miracle" to occur.

        Now, such an event occurred in nature (i.e. evolution), but it took a little longer than 30 years. :-)

      • Re:Sure! (Score:5, Insightful)

        by nanojath ( 265940 ) on Monday October 21, 2002 @10:29AM (#4495754) Homepage Journal
        I think you can go beyond the mere technical objections (which are entirely reasonable) and get down with the ideas the basic assumptions feed into. Human intelligence is the result of a, just to throw a random figure out there, 3.5 billion year process of evolution (the exact figure is heavily debatable but that's a ballpark).


        The capacity for thought we have is an intensely complex combination of the neural processes of survival and reproduction, with all those billions of years behind it, plus the geologically recent development of a whole lot of extra cognitive juice in the frontal lobe department, plus a couple of million years of tweaking this wetware system in the context of social, tool-using behavior, plus several tens of thousands of years of social behavior combined with the meta-social instruction of language, art, text and such...


        We have some bare inklings and theories of how we acquire language, intelligence, social functioning. The barest inklings. We're working on it. There is still a helluva lot of controversy on what exactly intelligence is, and no end in sight.


        By dint of enormous effort we have computers that can take a stab at interpreting the meaning of isolated phrases based on context and a whole lotta cultural and semantic training by humans. The most powerful computers built for the job can hit-and-miss beat the finest human chess players... a game with a fully mathematically limited scope which is almost entirely susceptible to a brute force approach. "Seeing" computers can be trained to make some decent interpretations based on heavily patterned information. Voice recognition still has to be tuned to every individual, and it's pretty damn iffy for all that. Nowhere near a computer that can hold anything resembling a conversation.


        So where in hell do we get an estimate like "Strong AI by...?" As far as I'm concerned science has barely framed the question of what that would mean... and only in qualitative terms at that. So I'll tell you where these pointless predictions come from: ballpark some meaningless figure about biology - numbers of neural gaps, firing rates, impulses per second, whatever. Connect it in some arbitrary manner to some measurable function in a computer, extrapolate based on some law of technological development with far less than a century of statistical evidence and no underlying mechanism whatever behind it (statistical evidence without an explanation is ALWAYS suspect in interpretation) and - voila! - you're a futurist. Or, as I like to say, a worthless dumbass.


        And how do you get from there to the process of downloading consciousness, despite the fact that there is not even an inkling of a glimmer of the slightest valid theory about how an active and continuously shifting neurochemical process of personality and intellectual template, stored memory and present cognition (not to even touch the primal, the emotional, the glandular, the spiritual) gets translated to something that can be interpreted by a machine or stored in a meaningful sense or caused to be active outside of a biological framework? Well you just pull that one right out of your ass because it doesn't have even the flimsiest basis in the "reality" of doodling with a few facts and figures on your scientific calculator.


        I'm not sure exactly why but the idea of people making careers based on this bullshit makes me so mad I could kick puppies.

      • Re:Sure! (Score:5, Informative)

        by Alsee ( 515537 ) on Monday October 21, 2002 @02:51PM (#4498444) Homepage
        Genetic algorithms are useful for some very specific trial and error sorts of problems. But using them to random walk...

        Then you do not understand genetic algorithms. If you are a programmer I recommend you read up on them; they are far more powerful than a simple random walk. Mathematical analysis shows tremendous implicit parallelism. You aren't merely working on X individuals, you are working on X individuals times Y schemata, where Y is a monstrously large number. Mutation is the least significant thing going on in the evolution process.

        Unfortunately it is too complicated and mathematical to explain here, but if you are up for it try this Google search on "genetic algorithms" "implicit parallelism". [google.com]

        Remember, this is the process that created humans. When people hear "evolution" they usually think "mutation". Mutation is almost insignificant. The power lies in recombination and immense implicit parallelism.
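
        A minimal sketch in Python of the idea, with a toy "count the 1 bits" fitness function (all parameters are illustrative only): selection plus one-point crossover does almost all of the work, while mutation is kept deliberately tiny.

        import random

        # Toy genetic algorithm: evolve bit-strings toward all 1s (OneMax).
        # Recombination (crossover) and selection drive the search; the
        # mutation rate is tiny, yet the population still converges quickly.
        GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 64, 100, 60, 0.005

        def fitness(genome):
            return sum(genome)                        # count the 1 bits

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)     # one-point recombination
            return a[:cut] + b[cut:]

        def mutate(genome):
            return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

        def tournament(pop):
            return max(random.sample(pop, 3), key=fitness)

        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]
        print("best fitness:", fitness(max(pop, key=fitness)), "out of", GENOME_LEN)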

        -
  • Does this mean that in a few years I can copy my thinking, work from home and have the computer do my work?

    Time to invest in a better couch!

    CC
  • by Anonymous Coward on Monday October 21, 2002 @06:26AM (#4494189)
    All subjects will be forced to spend a day with themselves before they are allowed back in the general population.
  • by Anonymous Coward
    I wish I had thought of copying an idea out of Neuromancer!
  • Eternal life? (Score:5, Interesting)

    by Jeppe Salvesen ( 101622 ) on Monday October 21, 2002 @06:28AM (#4494193)
    I have been thinking about this for a while now.. If you can download the mind - will we be able to upload it as well at some point in the future? I'm thinking along the lines of falling asleep in a body that's in its 70s, and then waking up in a body in its teens. It would certainly be interesting to relive my teens. A few things that could be done differently..
    • Re:Eternal life? (Score:5, Insightful)

      by nagora ( 177841 ) on Monday October 21, 2002 @06:35AM (#4494220)
      If you can download the mind - will we be able to upload it as well at some point in the future?

      It wouldn't work like that. Imagine that the copy of your mind is uploaded into a new body before you die. Do you think your consciousness would suddenly transfer to the new body? All the copying could do is create a new consciousness in the new body, your old one (ie, you) would still grow old and die in the old body, never to return.

      This is also the argument against Star Trek transporters. You die each time you use one but a new person is created at the other end that thinks it's you. You don't know anything about this, of course, you've just been disintegrated!

      TWW

      • Re:Eternal life? (Score:2, Insightful)

        by zmooc ( 33175 )
        Why would it just create a new consciousness? What is consciousness other than some electrical signals running through a certain configuration of brain cells (and influenced by chemicals from the blood)? What's in there that cannot theoretically be copied, stored or emulated?
        • Re:Eternal life? (Score:3, Interesting)

          by sporty ( 27564 )
          So are you implying that if you made a copy of yourself, you, as a person, would have dual consciousness? Or would you have your own consciousness right next to a person who is exactly like you? Or is that not how twins work :)
          • Re:Eternal life? (Score:5, Insightful)

            by BlueGecko ( 109058 ) <benjamin.pollack@ g m ail.com> on Monday October 21, 2002 @07:16AM (#4494376) Homepage
            Probably the best way to look at it is like a fork call in a Unix program. Suddenly, one program becomes two, with different PIDs, etc. And you could make the case that the one with the original PID is the original program, and I suppose that in many ways you'd be right, but the fact remains that the new PID begins with EXACTLY the same memory and EXACTLY the same register states. The only thing different is where that memory is located. Clearly, the programs are separate entities, yet they'll follow exactly the same patterns (ignoring their different data input) and we might argue that, from the program's perspective, the PID is entirely arbitrary and it has absolutely no way of knowing which one was the original program, since they start off in identical states but respond to different stimuli. I actually think that twins are a very good metaphor, only you have to imagine that the twins both have common memories for a certain number of years.

            I honestly don't think words like "dual consciousness" or "death" even apply in situations like this; we need some new vocabulary.
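
            For the curious, the fork() behaviour being used as the metaphor, sketched in Python (os.fork exists only on Unix-like systems):

            import os

            # At the moment of the fork there are two processes with identical
            # memory; only the return value (and the PID) tells them apart, and
            # from then on each diverges with its own input.
            memories = ["learned to ride a bike", "read Neuromancer"]

            pid = os.fork()
            role = "copy (child)" if pid == 0 else "original (parent)"

            # Both processes see the same 'memories' list at the instant of the split...
            print(role, os.getpid(), memories)

            # ...but anything appended afterwards stays local to that process.
            memories.append("became the " + role)
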
            • If you can download the mind - will we be able to upload it as well at some point in the future?
              Probably the best way to look at it is like a fork statement in a Unix program.

              Only on Slashdot... :)
            • Re:Eternal life? (Score:4, Insightful)

              by greenhide ( 597777 ) <jordanslashdot AT cvilleweekly DOT com> on Monday October 21, 2002 @08:58AM (#4494920)
              Sorry, this makes an assumption that most people make:
              It's all in our heads.
              Starting with Rene Descartes, all focus on consciousness and being in the scientific world has shifted over to our brain. And yet, clearly that is not the only part of our consciousness. All you need to do is get a cold, and you'll discover just how much your physical body affects your emotional and mental condition.

              Our bodies are just as much a part of ourselves as our brains are. Also, don't forget that the makeup of brains is neurons and nerve cell connections. Well, surprise, surprise--we have nerve endings all over our bodies, and I'm willing to bet that they're used, albeit at a low percentage perhaps, when we think and process the world as well.

              As far as I understand it, we haven't yet developed the ability to remove a brain and find out what it's thinking -- it only works when it's inside a human body. And unless I'm out of the loop, brain transplantation has not yet been done on humans [216.247.9.207]. So far, "brain transplants" has meant inserting healthy cells into Parkinson patients. In this case, the individual cells are simply subsumed by the whole brain and used as dopamine factories.

              The brain is not a hard disk or cpu. It's not running linux, windoze, or even (despite Steve Jobs' assertions) OS X.

              Our understanding of the brain organ, and by extension, the "mind" -- which may or may not overlap 100% with the brain -- is so woefully inadequate as to make any talk of uploading or downloading anything on it silly at best.
      • Thought experiment (Score:5, Interesting)

        by Fweeky ( 41046 ) on Monday October 21, 2002 @07:00AM (#4494306) Homepage
        Say you have a class of nanobot which can absorb and replace the function of a single neuron.

        You inject yourself with a load of them, and they start absorbing neurons and taking their place. Eventually, your entire mind ends up running on these replacements, each of which behaves just like the organic neuron it replaced. You've been conscious all the way through.

        Now, assume each of these is able to communicate its inputs to a machine on the outside which is able to simulate neurons en masse. They start to disable themselves, telling those around them to get their signal from this machine instead of them.

        Eventually, you end up with a load of simulated neurons which are running on this machine, linked to the nerves through whatever method they use to communicate and a bunch of these neuronbots.

        The simulated one is functionally identical to the original organic brain, except now it's got the potential to be physically a lot more robust. Continuity was never lost, and all that was destroyed was a few neurons at a time, whose function was replaced.
        • by perfects ( 598301 ) on Monday October 21, 2002 @07:43AM (#4494491)
          Thought experiments like that are fun, and yours is convincing, but consider some of the implications...

          During the transfer from inside to outside, suppose you use a machine that has redundant circuits. Each nanobot is replaced by a trio of simulated external neurons, so that they can check each other for errors. (If the presumably-binary outputs of the three disagree, the majority wins and the disagreeing unit syncs to the final result.)

          Ok, up to now it's exactly the same situation that you describe, but with additional reliability.

          But after the transfer is complete... The trio-links are broken, resulting in 3 perfectly synchronized systems.

          Which one is you?

          I'm not sure that "continuity" proves anything. Maybe your original consciousness would die slowly, neuron by neuron, as the new consciousness comes to life. If it even does come to life.

          Honestly, I don't think the human race yet has the terms to describe the problem, much less speculate about the answer. It's fun to talk about, but so was "how many angels can dance on the head of a pin?"
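
          The redundant-circuit step described above is essentially majority voting (triple modular redundancy); a toy sketch of one voting-and-resync round, assuming binary outputs as in the post:

          # One round of 2-out-of-3 voting: the majority value wins and the
          # dissenting unit is resynchronised to the agreed result.
          def vote_and_resync(outputs):
              majority = 1 if sum(outputs) >= 2 else 0
              return majority, [majority] * len(outputs)

          result, synced = vote_and_resync([1, 0, 1])
          print(result, synced)   # 1 [1, 1, 1]
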
          • by kisrael ( 134664 )
            I think the Buddhists had it right. Our sense of self is, fundamentally, an illusion.

            I used to assume that my "inner voice", my internal monologue, was "me" in some fundamental way. And that when I slept, yeah, I was unconscious, but there must be some "pilot light of me" that was still burning.

            After reading Dennett's "Consciousness Explained [amazon.com]", though, now I'm inclined to think that thought experiments like these reinforce the idea that there's no there, there. All 3 of those copies are you, for what it's worth, which isn't as much as you assume. (Though your implication "if it even *does* come to life", implying they would be some kind of zombies... well, if those 3 are zombies, just imitating "real consciousness", and yet they ARE *accurate* copies, able to grow and learn just like 'you'... well pal, you're a zombie too.)

            Some of this thinking informs the essay I advertise in my .sig, so you might want to go there for more info. Also, the book Permutation City [amazon.com] by Greg Egan has some interesting ideas, but contains some unlikelihoods, even after you accept the fundamental "we can download minds" premise in it.
      • Re:Eternal life? (Score:5, Interesting)

        by ChrisJones ( 23624 ) <cmsj-slashdot@noSPaM.tenshu.net> on Monday October 21, 2002 @07:02AM (#4494315) Homepage Journal
        Isn't that arguably the same thing? You probably don't have many (if any at all) of the cells you had when you were born, so you've been mostly disintegrated many times, you just didn't notice it.
        If there is a continuity of consciousness, which the transporters provide, then you are really the same person, you are just made of different atoms.
        As for the down/uploading brain contents thing, well, that is a bit more complex - if you can copy the contents of a brain and upload them to another, then you have fork()'d yourself. Either you kill the old body and have its fork of your consciousness die, or you have two of yourself.
        I'm not sure if the human mind could cope with the trauma of first finding itself in a new body, then seeing its old body die. It sounds simple enough, but it would take quite an adjustment!
        Besides, I don't actually believe it's possible, I find it reassuring that our brains are probably too complex for us to possibly understand ;)

        Either route, uploading or transporters, is a great way to build a clone army of yourself though :)
        • Either you kill the old body and have its fork of your consciousness die, or you have two of yourself. I'm not sure if the human mind could cope with the trauma of first finding itself in a new body, then seeing its old body die. It sounds simple enough, but it would take quite an adjustment!

          You think it's tough for the _copy_?? What about the original?

          "Okay, the transfer is complete, we're going to have to kill you now."

          "Hey! Wait a minute! It didn't work! I'm still here in this body!"

          "Well of course you are, but the copy of you is doing just fine, so you need to die now in order to maintain the illusion of continuity of consciousness."

          "But I don't want to die! That's why I signed up for this!"

          "Sorry, you should have read the fine print. Now, we have a number of Suicide Packages available for your convience, or for an added fee you can take advantage of our Euthanasia Program."

      • by FTL ( 112112 ) <slashdot.neil@fraser@name> on Monday October 21, 2002 @07:03AM (#4494322) Homepage
        > It wouldn't work like that. Imagine that the copy of your mind is uploaded into a new body before you die. Do you think your consciousness would suddenly transfer to the new body? All the copying could do is create a new consciousness in the new body, your old one (ie, you) would still grow old and die in the old body, never to return.

        > This is also the argument against Star Trek transporters. You die each time you use one but a new person is created at the other end that thinks it's you. You don't know anything about this, of course, you've just been disintegrated!

        I don't have a problem with this. When I go to sleep, my current consciousness is discarded, and when I wake up a new consciousness (with all my memories) is created. This fact doesn't keep me awake at night.

        Good night. Sleep well...

      • Wasn't there an award winning Canadian short cartoon about that "Star Trek problem"?

        I'd love to pin it down. Great, concise story telling.
      • It wouldn't work like that. Imagine that the copy of your mind is uploaded into a new body before you die. Do you think your consciousness would suddenly transfer to the new body? All the copying could do is create a new consciousness in the new body, your old one (ie, you) would still grow old and die in the old body, never to return.
        You're right and wrong at the same time. The thing is, the new body will presumably have all of your memories and remember the transfer process, right? So from his perspective, your mind really did switch bodies, and you can't even make the argument that "well, since I'm here now, before the transfer process, sentient, I won't be able to go into the new body," because he will remember having had the exact same thought in the past as well yet will be in the new body. So saying that you die and the other body is not you is certainly the wrong way to look at it, even though at first it seems highly intuitive. The right way to look at it gets even more complicated, and if it's OK with you, I'm going to stop contemplating it before my mind explodes.
      • Re:Eternal life? (Score:3, Informative)

        by jparp ( 316662 )
        The example Kurzweil uses in his "Spiritual Machines" book concerning this goes something like the following:
        It has been shown that people can live and be conscious with only the right side, or only the left side, of the brain. Now imagine if you split your brain in half, and put your left brain in China and your right brain in the US. Then imagine you could somehow connect them wirelessly. Then imagine you connect your brain wirelessly to additional pieces.
        The point being: who says your consciousness has to exist in a single location? Many would argue that consciousness is formed out of the complexity of the whole. If this is the case, maintaining consciousness through the whole upload/download transition is the same problem as having a distributed mind. If you can do one, then you can do the other.

        • Re:Eternal life? (Score:3, Interesting)

          by sg_oneill ( 159032 )
          Interestingly, Greg Egan in his novel "Permutation City" (read it folks, Greg Egan is amazing) makes a similar argument. In it he has his clones (downloaded dudes) split among distributed processors; he runs them backwards, forwards, at different speeds in parts and all synched up. The clone feels entirely coherent.... I won't tell you where it leads to. His conclusion is astonishing, but it makes for a fascinating read.
    • Probably not without physically restructuring your brain cells' connections with nano-bots or something :)... which might be possible if you've got enough processing power to emulate a brain on the sub-cell level (to calculate and test the new configuration) and a lot of really really really small tiny minuscule nano-bots to do the dirty work.
    • Re:Eternal life? (Score:5, Insightful)

      by chuckychesthair ( 576920 ) on Monday October 21, 2002 @06:38AM (#4494232)
      but how do we get the young bodies? Will we grow bodies without minds? (this will certainly affect the quality of the brain you'll get) Or will this be a case of rich people buying the body of a poor person to relive their teens? And the poor people will just be computerized early.. (or they save the money they make on selling their body and wait for a few hundred years so they can buy someone else's young body from the interest)

      too many questions...
  • Brain Dump (Score:3, Funny)

    by erinacht ( 592019 ) on Monday October 21, 2002 @06:29AM (#4494201) Homepage
    I think mine would be a bit runny and not hold together very well, 5 years using VB does that...
    • by rumrum ( 320622 )
      Brain Dump: Brings new meaning to the phrase brain fart... er, well, maybe not.
    • by MercuryWings ( 615234 ) on Monday October 21, 2002 @08:16AM (#4494658) Journal
      Ooh! Ooh! Can't resist - my anti-MS slant just kicked in....

      What if these mind download systems were MS brand? Contemplate the following hypothetical...

      Doc: Good morning Mr. Jones, glad to see you're awake. You might be wondering why you're in the hospital? Well, it looks like your Brain (20)95 (tm) suffered a complete crash, and we had to restore you from tape. While we were at it, we installed the new Brain XP. You might find the new brain a little different - our new Media Play v.324 now plays ads directly into your thought processes once a minute. Can it be disabled? Well, no, not really...it's been integrated so that any attempt for removal will cause a complete cyber-lobotomy.

      Also, Mr. Jones, we've upgraded the DRM portion of your brain. The new EULA - which by the way you've agreed to just by thinking with your new brain - now says that you must give all ownership of any independent thoughts you may have to Microsoft. Any thought that includes open source software (including Linux) will cause a nasty jolting pain down your side. Our lawyers tell us it is necessary to prevent the viral-like nature of the GPL from infecting your new brain.

      Now if you don't like your new Brain XP, you'll be happy to note that you have a 30 day trial period before you have to 'activate' your brain. It's a reasonable fee - just 50% more than the total sum of your yearly income and three of your children. You have only two children, you say? No problem - we'll mark you in for the increased payment amount. If after the 30 days you don't want to use your Brain XP, then you don't need to do anything - your brain will automatically shut down. We'll restore you from tape to one of our previously supported products.

      Which reminds me - now that we've released Brain XP, we no longer support any of our previous Brain products. We hope this won't be an inconvenience to you.

  • Optimistic guess (Score:5, Informative)

    by 91degrees ( 207121 ) on Monday October 21, 2002 @06:30AM (#4494203) Journal
    Strong AI was one of the first conceived applications for a Turing complete machine. This was thought of before even early concepts like code breaking.

    In that half century and a bit, we still haven't got anywhere near a machine that we would consider intelligent. Even quite clever machines like ALICE are rather dimwitted.
  • by jeroenb ( 125404 ) on Monday October 21, 2002 @06:30AM (#4494206) Homepage
    I would say that actually understanding what all the data means is about a billion times more complex than just mirroring all the structures from someone's brain and storing them on some digital medium. And I'm not sure that extremely advanced AI is going to help with figuring it out, although some evolutionary system is probably the best way to do so.

    Sounds like something that would still take centuries instead of decades from now, though. Then again, ten years ago I wouldn't have believed I'd live to see everyone carrying a phone everywhere they go.
  • Insanity? (Score:2, Interesting)

    by Anonymous Coward
    I assume there would have to be some form of input into this brain in a box, as if it didn't have the equivalent of eyes, ears, etc. it would be a hell of a shock to the mind.

    OK, so it's only a copy and you would still be you, but how is your mind's copy going to cope going from flesh to silicon? Would it be some form of torture?
  • by atomicdragon ( 619181 ) on Monday October 21, 2002 @06:31AM (#4494212)
    So in 2029 we won't download music, just the brains of musicians containing the music.
  • by ConsoleDeamon ( 611610 ) on Monday October 21, 2002 @06:32AM (#4494213) Journal
    Finally, a way to preserve "MY" way of thinking for the future.
  • Great! (Score:2, Funny)

    by dolo666 ( 195584 )
    Now I can steal Spiderman's mind forever! Mwhaahhahahha!!
  • by SecGreen ( 577669 ) on Monday October 21, 2002 @06:34AM (#4494218)
    I'm sorry, but further research on this subject is in direct violation of the DMCA and must be halted immediately...
  • by manon ( 112081 )
    "We'd all like to live forever, but biology won't cooperate." The idea is very nice, putting one's brain on a chip but I think something very important is being forgotten: emotion. We all need some sort of affection. I can't see how we are going to deal with our need for affection if we are stored in a chip."It would also release us from the limitations of our bodies, and allow us, paradoxically, to fulfill even more of our human potential, in a computer." I think having a body is just another part that makes us human. Tell me is I'm wrong but I thing that if you put a human brain in a chip, you put in the feelings a person has. People are also (in many cases ever at first) attracted to how a person looks. Please don't tell me you can fall in love with the view of a chip.
    • Have you seen that new Itanium with 8MB cache?
    • I don't think we'd just transfer the brain to a chip and leave it at that - we'd be entering a whole new mode of being. The brainchips could be housed in almost any robotic form allowing all sorts of new pursuits (exploring the sea bottom) or just control bodies by remote (with sufficiently advanced tech those bodies could be organic as well.)

      I'd be especially interested in the modifications we could make to brain function in a silicon type environment.
      For example: say we can't travel faster than light (likely), an explorer ship with silicon humans could lower the clock speed of their brains to make the journey subjectively shorter - or pause them entirely until the automatic systems turn them back on. In moments of danger the speed could be increased until the passage of what we think of as time is almost halted compared to the rate they are thinking.

      In the end we cannot make this step expecting to remain who we are. That is very much shaped by what we are, and a loosening of the physical restraints around our minds will result in changes to what we consider 'human'. Even living forever (or at least a long time) would change what it means to be human - think of the storage capacity you'd need for meaningful remembrance etc...
  • so um... (Score:4, Funny)

    by acehole ( 174372 ) on Monday October 21, 2002 @06:36AM (#4494224) Homepage
    When can I get minds on Kazaa?... for backup purposes of course ;)

    --
  • doubtful (Score:5, Interesting)

    by jo-do-cus ( 597235 ) <johocus@zonnet.nl> on Monday October 21, 2002 @06:36AM (#4494226)
    some objections:
    * It seems Mr. Kurzweil thinks he knows what the mind is, what intelligence is and what consciousness is. In fact, these things are very abstract ideas about phenomena that we all experience, but we don't know what they are, or how they come into being. We can't even be sure the mind exists as a physical phenomenon (!).
    * Mr Kurzweil also skips the following issue: if I download my mind's content onto a computer, does that mean I am immortal? Or is this just a copy and does the 'real' me just die?
    And something I always get a little angry about. Kurzweil says:
    We'd all like to live forever

    Well, thanks, but I don't. I am quite happy with the notion that life, for all its splendour, has an end and that I can be sure to find rest someday. Please speak for yourself, Mr. Kurzweil!

    I really think Mister Kurzweil is trying to get some publicity with his fantastic statements. But it doesn't really show a deep insight into the philosophical, anatomical, physical etc. etc. etc. questions that consciousness and the mind pose. We understand so very little about these things today that I think his claims are stupid, maybe even ridiculous.
    • Re:doubtful (Score:3, Interesting)

      by zmooc ( 33175 )
      We can't even be sure the mind exists as a physical phenomenon (!).

      So what else could it be? And what does the mind do that cannot be done by a physical phenomenon?

      Mr Kurzweil also skips the following issue: if I download my mind's content onto a computer, does that mean I am immortal? Or is this just a copy and does the 'real' me just die?

      How is this an issue? It is if you regard the brain as some supernatural phenomenon as you describe in your first point. Otherwise, it'd indeed just be a copy that feels like it just woke up in another "body", which indeed may be "immortal" if it is in digital form. The original (if he survived being run through the copier/"reader") just lives on in the weird world where a copy of him, with a consciousness that was once identical to his/hers, exists. But you're right, there's still a lot that we don't understand. It may one day be a lot easier to copy those things to a digital form and emulate/copy them without fully understanding them; you can always copy something which you don't understand as long as you understand the parts it's made of at the level of detail at which you'd want to make the copy.

      • by Jayson ( 2343 )
        So what else could it be? And what does the mind do that cannot be done by a physical phenomenon?

        It could be something that we cannot measure. If we are only what we can touch, then that leaves no room for free-will and you are a determinist in the strongest sense. People on /. have such little faith it seems.

  • It won't be silicon! (Score:5, Informative)

    by footNipple ( 541325 ) <footnipple&indiatimes,com> on Monday October 21, 2002 @06:36AM (#4494228)
    I love this topic, but it's still too early here for me to be articulate.

    If we are ever able to do this, it won't be onto silicon. It will have to be some sort of quantum-based medium.

    For further reading on the quantum brain start with Roger Penrose's book "Shadows of the Mind"...I think...It's been a while since I've read it.

    Also, check out this link and other links related to a "Quantum Brain" google search:
    http://www.consciousness.arizona.edu/hameroff/Pen-Ham/Orch_OR_Model/The%20Orch%20OR%20Paper.htm
    Great Stuff!

  • Strong AI (Score:5, Informative)

    by Anonymous Coward on Monday October 21, 2002 @06:38AM (#4494234)
    First, this isn't the idea of Ray Kurzweil, but has been in Science Fiction books for 50 years or so.

    Second, strong AI has yet to prove it's really working. Quantum mechanics and other (not-so) recent developments have shown that there may be much more in our brains than just bits and bytes. Not to mention that there are other places in the human body where significant, yet nearly unexplored "preprocessing" takes place (e.g. in the eyes).
  • by sela ( 32566 ) on Monday October 21, 2002 @06:38AM (#4494237) Homepage

    I bought his book some time ago, hoping it would entertain me during a long business trip. But unfortunately, instead of getting entertained, I found myself incredibly annoyed by its superficial approach and over-optimistic predictions about the pace at which technology will advance.

    He seems to be oblivious to the obstacles people in the AI field are facing. Either he is underestimating the complexity of the human mind or he's overestimating the advance in AI research. Anyone who has read something about the way our mind works, like Steven Pinker's excellent book "How the Mind Works", could see what a challenge we're facing and realize we're not going to overcome those hurdles in 20-30 years. No way, José!
  • My brain uses a Commodore 64 style of copy protection, these brain gadgets always make my read head slam repeatedly into the front of my skull. tat-tat-tat-tat-tat
  • In 2029? (Score:5, Funny)

    by tcdk ( 173945 ) on Monday October 21, 2002 @06:39AM (#4494239) Homepage Journal
    When reading predictions I usually go by this guide:

    "In the next year" means: "We have a working prototype"

    "Within three years" means: "We think that we know what we are doing and are applying for patents".

    "In five years" means: "We have a great idea, but no f*cking clue as to how we are going to implement it"

    "In ten (or more) years" means: "I ate way to much chili and had a really strange dream, which you may get a kick out of. But really I got no clue at all, and my prediction is so far in the future that everybody will have forgotten about it if I'm wrong, but if I'm right I'll pull if out of my hat and wave it in your face!".
  • by russianspy ( 523929 ) on Monday October 21, 2002 @06:43AM (#4494256)
    I actually read about half of the book. I could not finish it, as I was unable to read because I was laughing too hard. I am not saying he's TOTALLY wrong. There may be a time when we will have computers that will be smarter than we are, when we will be able to download our minds into the computer. All of that is fine, but his timeline is totally unrealistic.

    A couple of points:
    1. The estimates as to how much processing power is in an average human brain vary quite a bit. Is each neuron a bit? It can have multiple inputs - maybe it's something closer to a byte or a word? How and where is memory stored? (A rough sketch of the kind of ballpark usually thrown around follows this list.) Just having the raw processing power does not mean we will have the knowledge to USE it. We are seriously lacking in the knowledge department.
    2. Social implications. How many good technologies are set back, or even stopped because the people are not ready for it? Do you really think that an average person will simply accept and approve of the ability to live forever in a computer? All the religions of the world are going to have a field day with that. Don't think so? We've had genetically modified crops for a while now. They're safe and far more efficient. Why are there still countries that will not allow such crops to be used for human consumption?
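
    On point 1, the kind of ballpark alluded to usually just multiplies a few commonly cited round numbers; every figure below is an order-of-magnitude guess, which is exactly the problem:

    # Back-of-envelope "raw brain throughput": all three numbers are commonly
    # cited round figures, and small changes to any of them swing the result
    # by orders of magnitude.
    neurons = 1e11              # ~100 billion neurons
    synapses_per_neuron = 1e3   # order-of-magnitude connectivity
    firings_per_second = 1e2    # rough firing/update rate

    ops_per_second = neurons * synapses_per_neuron * firings_per_second
    print(f"~{ops_per_second:.0e} synaptic events per second")   # ~1e+16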

    In the end it reminds me of a story I heard a long time ago. I'm going from memory so you'll have to forgive me if I get the details wrong.

    It happens during the height of Artificial Intelligence (when a lot of people thought we would have talking, seeing, thinking computers in just a few decades ;-) ). There was a conference where one of the scientists started making wild predictions. Something like Kurzweil. Computers are supposed to be able to see (image recognition) as well as humans in 20 years, think in 30, etc. One of the other scientists asked that guy:

    "Why are you saying this? All of those problems are quite hard. It is unlikely anyone will achieve those things in that time."

    The first scientist answered:

    "True, but notice that every date I've given is AFTER my retirement."

    What a way to generate funding, eh? This kind of thing simply hurts the field in general.

    And that's my gripe for this week. I feel a LOT better now, thank you!
  • by Glanz ( 306204 ) on Monday October 21, 2002 @06:44AM (#4494260)
    Well, most people I know could put their minds on floppies, and it would still leave enough space for a nice copy of FDISK........... [fmind?]
  • Brain Dump! (Score:5, Funny)

    by irn_bru ( 209849 ) on Monday October 21, 2002 @06:45AM (#4494263)
    Wow. A literal brain dump. Just don't use EPROMs or you might lose your mind...
  • Kurzweil's Book (Score:4, Interesting)

    by Bohnanza ( 523456 ) on Monday October 21, 2002 @06:47AM (#4494273)
    I read "The Age of Spiritual Machines" last year and found it interesting, but Kurzweil seems to miss a few important points. Mainly, he makes the assumption that if an entire human consciousness where transferred to an electronic system, the transferee would hardly notice. I think I would.

    Oliver Sacks' "A Leg to Stand On" illustrates how great an effect the loss of a single limb can have on the psyche of the victim. What would be the effect of the loss of the entire body? Kurzweil makes no mention of it.

    I don't know about Ray Kurzweil, but I sometimes pay attention to parts of my body that are below my ears.

  • by JHVB ( 613081 ) on Monday October 21, 2002 @06:48AM (#4494277)
    Kurzweil argues that strong AI will precede the ability to download minds, which does not seem logical. It has been reasoned (by Pinker [wired.com] and others) that AI will be developed by reverse-engineering the brain, and artificially replicating its processes. The evolution of strong AI is thus dependent on technology to copy and trace the functions of the human mind.
  • Your essence is trapped within the electrical/chemical field of your brain. Simply copying what a brain knows wouldn't do. You have to copy how it reacts. Even then, your brain's copy may or may not be imbued with its own intentionality.

    Metaphysically this is about as practical as putting your soul in a brass pot for storage until you get your new body ready.

    Maybe as a backup - then in the case of brain damage, memories could be reinstated.

    But for my money - I think I'd prefer to be a brain in a tank mounted on a giant robot. :-)

  • MP3 or OGG (Score:2, Insightful)

    by Insightfill ( 554828 )
    When I first read this, I thought they were talking about dumping a brain down to MP3 or OGG.

    Images of artifacts and /. discussions of the best codec or rate came to mind. Suddenly, people will be discussing whether or not the average person can identify a person as real or a copy - maybe a Heechee Turing Test or something.

  • by Zergwyn ( 514693 ) on Monday October 21, 2002 @06:56AM (#4494294)
    and not necessarily even the harder third. It is one thing to be able to copy all the information in a human brain. Especially as storage becomes holographic, with 3 dimensional light patterns being used, even everything a person knows could probably be fit. The problems remaining are two fold: how to access the information, and then what to do with it.

    Copying the information would require an extremely sophisticated, as well as invasive, set of technologies. Nanotech would probably need to be used to get the proper connections throughout the mind. As far as simply linking the brain, many people have discussed 'plugs' and such that would intercept external sensory/control feeds, such as the optic nerve and spinal cord, and then allow that information to be manipulated/redirected. Thus signals to move a leg could be altered so that they would move a mechanical leg, or even something else entirely. In such a way people could transplant their brains into robotic/cyborg surrogates, not even necessarily human looking. A fighter pilot, for instance, might just transport his brain into the plane. Thus the command to 'run' or 'walk' might be mapped onto engine throttling or some such. External cameras would send a feed, acting as 'eyes', etc. However, none of this makes any attempt at all to actually access stuff in reverse, from the brain. We record memories and such in the structure of the main brain, and thus something would need to go into the brain to read those. And because the 3-D structure of the brain is so critical, preserving the meta-information of how the other memories and such were encoded is also critical. Otherwise, you might end up with a record of memories and thoughts, but no way to actually connect those to form the personality.

    Heh, I seem to be ending up with a long post, but the last thing to deal with, assuming successful duplication (including the metainformation), is "what now?" A way would have to be found to basically create an artificial neural net that would be able to recreate the exact structure of the original brain. Who knows, it might be possible to do such a thing virtually, having different sectors connected to each other and thus having a person exist in cyberspace. That, however, is pure speculation.

    I actually find a lot of the stuff going on very exciting. Brains seem to last a lot longer than the body supporting them does anyway, so being able to basically have your brain in a very strong container that could be moved from body to body would probably work pretty well, and could potentially be very doable. However, total artificial replacement seems a long way off. In some ways, what he is talking about in this article is sort of like cryogenics today. You can get yourself frozen, but for the time being there is no way to ever undo the process.

    • I actually find a lot of the stuff going on very exciting. Brains seem to last a lot longer than the body supporting them does anyway, so being able to basically have your brain in a very strong container that could be moved from body to body would probably work pretty well, and could potentially be very doable.


      This is a fallacy. The brain breaks down with age just like everything else: your skin, its supporting matrix, liver, kidneys, etc. You lose brain function like making long-term memories (harder to do, takes more time), the ability to think, etc.


      Alzheimer's, Huntington's, strokes are all tied to time and thus tied to aging. Your brain consumes more oxygen per unit mass than any other organ in your body and yet has the least built-in protection against oxygen free radicals. Perhaps brain function requires radicals in some way - they are not of necessity a bad thing - and thus the ultimate unavoidable cost of having a functional brain is that it damages itself as a cost of doing business. See: On the true role of oxygen free radicals in the living state, aging, and degenerative disorders, Imre Zs.-Nagy, Annals of the New York Academy of Sciences, 2001, Vol 928: 187-199.



      Your brain degenerates just fine. It is merely a question of whether you croak due to heart disease, hardening of the arteries, cancer, thrombosis, stroke, Huntington's, Alzheimer's, etc, etc.

  • by jjohn ( 2991 ) on Monday October 21, 2002 @06:56AM (#4494295) Homepage Journal

    I'm sure that when I'm copying my mortal soul to the hard drive, that's exactly when the Windows box will blue screen. :-/

    I wonder how tech support is going to field that problem?

  • I'd rather think of this as a thought experiment. "What if?" This may not be possible in the time frame discussed, or it may never be possible, but it's more interesting just to say, if it was possible, what would that mean. We have a responsibility to discuss it before it happens, so we don't get caught with our ethical pants down like we did with human cloning (I mean fully fledged humans, not stem cells).
  • by BluBrick ( 1924 ) <blubrick@nosPAM.gmail.com> on Monday October 21, 2002 @07:08AM (#4494340) Homepage
    But the world ends at GMT 03:14:07, Tuesday, January 19, 2038 [deepsky.com]!

    Uhh, pencil me in for the 18th... just in case.
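
    That date is just the 32-bit signed time_t rollover, easy to check:

    from datetime import datetime, timezone

    # A signed 32-bit Unix time_t runs out 2**31 - 1 seconds after the 1970
    # epoch; the last representable second is the moment quoted above.
    last_second = 2**31 - 1
    print(datetime.fromtimestamp(last_second, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
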
  • OK, I can buy that at some point in the future you can take a "snapshot" of the brain, or scan through it to get some kind of idea of the gridwork. But I hardly think you'll be able to understand the underlying processes going on in the brain, particularly how the brain evolves new pathways etc. Just my 15 øre (aka 2 cents).

    Kjella
    • Why not? (Score:5, Interesting)

      by Goonie ( 8651 ) <robert.merkel@be n a m b r a . o rg> on Monday October 21, 2002 @08:16AM (#4494657) Homepage
      But I hardly think you'll be able to understand the underlying processes going on in the brain, particularly how the brain evolves new pathways etc.

      If you're claiming that we don't know that much about how the brain works, I'd agree with you. If you're claiming that it's going to be tough to figure out how it all works, I'd probably agree with you there as well.

      However, if you're claiming that science can never understand the brain, I'd have to strongly disagree with you. As an atheist, I don't think there's anything so special about the brain. There's no soul there, put there by some random deity. There's no magic. It's just a lump of protein mixed with water, in essence. Sure, it's a marvellously complex lump of protein, but it's still a lump of protein. We've made a heck of a lot of progress understanding the behaviour of lots of other types of stuff using science. What makes this particular lump of protein any different?

      Can anyone give me a non-religious argument why, at some stage in the possibly distant future, the workings of the brain won't be entirely comprehensible to humans?

      • Re:Why not? (Score:3, Funny)

        by Tablizer ( 95088 )
        What makes this particular lump of protein [brain] any different?

        According to some researchers, it is the ONLY lump of protein found so far that does not taste like chicken.

        There must be something significant to that observation.

  • Roger Penrose may generally be considered to have gone off at a bit of a tangent to reality since "The Emperor's New Mind", but whatever your position on whether quantum (gravity) effects can be important in the workings of a mind, his argument that Gödel's theorem shows that a mathematician is capable of using his mind to accomplish a feat which a Turing machine is mathematically incapable of replicating has not yet been satisfactorily answered. G.M. Chaitin's discovery of Omega and work on algorithmic complexity theory appears to lend even more weight to the idea that the mind is not simply an information processing device.
    Many (most) objects which perform a task do not do so solely by processing information and often can only be approximately simulated by computers. Just because the computer is the only device we have so far constructed which is capable of complex, flexible behaviour does not imply that all objects which are capable of such behaviour are computers.
    On a side note, claiming that we will have strong AI by 2029 is like predicting that Bin Laden will be caught at 12:49 PM on the 12th of June 2003. My horoscope carries more weight.
  • I read a book called "The Mighty Micro", published in 1979 by a guy called Chris Evans. He made a lot of predictions about future computers. Many, such as planetwide computer networks, have come true. However, his central thesis shared much with Kurzweil - that Moore's law was inevitably going to lead to ultraintelligent machines. Evans predicted it to occur by the 1990's. Kurzweil is saying 2029 or so.

    The key failure of both books, as described for instance here [tof.co.uk], is that Moore's Law hasn't made computers any more intelligent yet, and doesn't show any particular evidence of doing so. What's disappointing is that people are still giving the same argument credence twenty years on.

    Additionally, Kurzweil clearly either doesn't understand digital encryption and quantum computing, or thought it acceptable to fudge facts to make an argument. That kind of thing doesn't give me confidence in anything else the guy says.

    I don't reject the possibility of one day doing brain dumps, or artificially intelligent machines, at all. I just dismiss the idea that the incremental advance of hardware technology is going to give it to us for free. We need fundamental breakthroughs from something else.

  • Old News (Score:5, Funny)

    by BoBaBrain ( 215786 ) on Monday October 21, 2002 @07:18AM (#4494394)
    We've been making partial brain dumps for years. They're called "Books".
  • I first came across this idea in Greg Bear's Eon [amazon.com], published in '85. It's some time since I read it, but I recall that his ideas around this were well-developed, with such notions as "non-corporeal" persons having distinct rights; even the concept of new persons being "born" in a non-corporeal state and having to somehow earn the right to become embodied. Good read.

    Don't fancy it myself.
  • Highly Skeptical (Score:5, Insightful)

    by dcollins ( 135727 ) on Monday October 21, 2002 @07:23AM (#4494426) Homepage
    No, I very much doubt these kinds of predictions (and it's got nothing to do with the issue of the transference step).

    What counts as our "minds" is simply far too tied into the physical instantiation of our bodies. (Not that "mind" is too abstract, but that it's not abstract enough for separation from our bodies.) If I make a computer-based simulation of myself, will it get tired? Hungry? Thirsty? Itchy? Horny? Sick? If not, can it then get excited? Scared? Concerned? Bored? Will it have any emotional reactions at all, if all the standard physical stimuli are removed?

    Even if all the "human" inputs are replaced or simulated -- you've still got an added problem of a new level of "hardware breakdowns" on whatever platform is running the simulation. Suddenly you've also got to deal with the various downtimes, pauses, glitches, etc., that will break the illusion of it being the same "mind" as in the original person.

    People are simply too much a construct of their wetware to be able to remove their "minds" as a separate set of procedures.
  • If we're going to do this after strong AIs have had a chance to evolve themselves for ten years, then isn't it likely that we'll be doing it if they allow us, and probably only to provide them with the equivalent of Tamagotchi(tm) pets?
  • by Nutrimentia ( 467408 ) on Monday October 21, 2002 @07:41AM (#4494485) Homepage
    "Downloading" a brain is a lot more complicated than copying a harddrive. Even if we figure out how the brain works, and then figure out how this contributes to a mind (neither of which we are close to understanding at all), downloading a brain is just a duplication of you. You yourself wouldn't notice anything, but your copy's memories would depart from your at the point of the brain scan from which the copy is instigated.

    Ugh, there are so many loose ends it's hard to pick one to pull on. Someone mentioned this before, but your body is more than just a bunch of neurons floating in fluid. Your mind, your person, your sanity rely on constant bodily feedback. Your mind isn't just the brain, it's the entire nervous system, head to toe. (Check out Antonio Damasio's books Descartes' Error and The Feeling of What Happens for a thrilling discussion of this.)

    George Dyson's book Darwin Among the Machines doesn't address the stupendously anthropocentric idea of human intelligence on silicon but does explore some possibilities behind the emergence of intelligent (not necessarily conscious) systems on their own.

    I read Dyson's book after stumbling across it browsing at a bookstore, only to learn that he lived about 2 miles from me! I went down to his boat shop and introduced myself and have had a few chats with him. He talked about Kurzweil a little bit and he actually gave me a copy of The Age of Spiritual Machines. At the time I was a naive fanboy (as opposed to the seasoned fanboy I am now) and asked him if he could write something in the book (I had him sign the Darwin book earlier). He declined, asking me with the ever present Dyson eyesmile, "What am I supposed to say? Sorry this book isn't as good as mine?" It was very humble humor, don't read it wrong.

    I read Spiritual Machines and enjoyed it, if for no other reason than that it provided a fun exercise in saying "that's a nice idea, but it won't work for these reasons..." It addresses a lot of concerns, and the whole identity dissolution theme was rather interesting to play along with. Still, I don't think that his future is a likely one.

    Bah, I'm just rambling. Short end to a long story: Kurzweil's ideas are fun to read and worth the time spent if you have time to kill, but are highly unlikely. Copying humans into computers is a much bigger problem than just raw clock speed, which is what he boils it down to.

    Here's a link to a page about the Kurzweilian Singularity [kurzweilai.net]. It's worth checking out if you haven't read any of this stuff before.
  • Not a new idea (Score:2, Interesting)

    by Tikiman ( 468059 )
    I don't know the history of this idea, but the book Mind Transfer [alibris.com] (1988) by Janet Asimov was about exactly the same thing - building a robot to hold your "self" that lived on after your biological body died.
  • Refuting strong AI (Score:2, Interesting)

    by bgreska ( 318975 )
    Turing proposed that the ultimate test for an AI was to behave in a human-like manner such that a human observer could not discern the behavior of the machine from the behavior of another human.

    Still, there are many who argue that although machines may one day pass Turing's test, they will nevertheless lack the essential consciousness or awareness that humans possess. See John Searle's "Chinese Room" argument (from his paper "Minds, Brains, and Programs"). Nobody knows of a good, direct test for awareness.

    Still others (Roger Penrose) do not rule out the possibility of genuine machine intelligence, but think that we have much to learn about our own minds before we can consider it seriously. Penrose specifically argues that our current understanding of science is too weak to incorporate an accurate model of conscious thought. But our science may change and one day become sufficient.

    In any case, 2029 sounds like a very optimistic (pessimistic?) estimate.
  • There are couple of SF books that explore this idea. I think they are worth checking out:
    • Software - Rudy Rucker. Exactly on this topic - transferring human minds to a computer (too bad the process destroys the brain).
    • Golem XIV - Stanislaw Lem. A supercomputer becomes intelligent, but its intelligence is nothing like a human mind; it is something quite different. After all, human biology influences how the mind works. The book is a "transcription" of lectures given by the computer on the nature of intelligence.

  • by shimmin ( 469139 ) on Monday October 21, 2002 @08:05AM (#4494606) Journal
    Kurzweil figures we'll have strong AI by 2029 and be able to copy a human mind about a decade after that.

    It seems to me that the ability to copy a human mind is almost a prerequisite for strong AI. Sure, the "great AI winter" is at least partially due to the government funding the field enjoyed in the late '80s / early '90s drying up as suddenly as it had appeared, but AI has always been a field prone to too-early predictions. It seems that with each new metaphor we invent for describing the human brain, we also convince ourselves that our minds really are as simple as our metaphors suggest. But Turing thought that human-level mimicry would be possible by about 2000 (while at the same time vastly underestimating the quality of hardware that would be available by then).

    There's a real possibility that we just aren't smart enough to figure out how we work, and so the only route to strong AI is to make monkey-see, monkey-do copies. And while procreation is a time-honored method of doing that, the structure of the brain suggests that serialized output was not high on God's list of priorities, and the biological format rather resists study. So I often think that we might have to be able to emulate the brain in silico, or in some other more easily studied medium, before we have a chance of understanding what makes that brain tick.

  • by The Famous Druid ( 89404 ) on Monday October 21, 2002 @08:23AM (#4494689)
    By 2039, you'll be able to download what's left of my mind into a potato.
  • by Junks Jerzey ( 54586 ) on Monday October 21, 2002 @08:58AM (#4494919)
    Have you ever read the hardcore books on so-called brain science? Typically, one guy has been mulling over a theory about how the brain works for 10-30 years, then he writes a book about it. Other people do the same thing. All of the books contradict each other, or have nothing at all to do with each other, and there's no way of figuring out who's right. There are even books written simply to claim another book is incorrect ("The Mind Doesn't Work That Way," by Jerry A. Fodor).

    The bottom line is that this is hardly a science at all, just a lot of conjecture.
  • by Rumagent ( 86695 ) on Monday October 21, 2002 @09:27AM (#4495137)
    The Blue Screen of Death will take on a whole new meaning.

  • Run for Eternity (Score:3, Insightful)

    by Ektanoor ( 9949 ) on Monday October 21, 2002 @09:44AM (#4495300) Journal
    This guy is wrong. Deeply and completely wrong. Even if his ideas are technically achievable, he would end up a step nearer to death and destruction than to his goal of Eternity.

    The ways he tries to achieve this goal are, for the most part, static. It would be harder to modify a detail or change a component of such an organism. Besides, such technologies are far too vulnerable to external factors and demand much more energy input than a typical organic, carbon-rich body. While it is hard for Nature, under Earth's energy balance, to create things from building blocks other than carbon, many organisms that tried failed or were kicked into Evolution's side roads. Why? Because all these "solutions" were quite far from optimal. Did you know that octopuses don't have hemoglobin, but a copper-based protein (hemocyanin), to carry oxygen in the blood? Or that there is a small worm whose teeth carry an unusually high proportion of copper? These things are exceptions, sometimes aberrations, that the average conditions of Earth's habitat cannot support. They live isolated, in particular areas, and cannot leave their environment.

    Now how does this bear on our problem? Well, this guy forgets more than 4 billion years of evolution and kicks us into a completely artificial organism. But under what conditions does this organism live? Human conditions! It is we humans who care for these silicon beings, model them according to our wishes and needs, and feed them with energy and data. Besides, to this day not even Deep Fritz can approach the sensibility, reasoning and flexibility of a human. This is a machine that devours energy, that churns through billions of permutations to beat the human brain at one single task, and that is supported and developed by thousands of engineers. And someone considers this the Future? Give me a break. Dinosaurs were a lot smarter and more autonomous.

    If something like Deep Fritz were left alone on Earth, it would meet something that even humans barely know about: the law that may lie behind Thermodynamics (not the Second Law -- that is probably a consequence of this one), which some biologists have been studying for several years. It is a law about how things interact. In a single system, at every moment there can be billions of interactions between its components. Some of these interactions are antagonistic; one can succeed only if its antagonist is somehow weakened. The state of equilibrium is merely a situation in which these interactions reach something like an energetic "agreement" among themselves. However, this does not mean that the interactions disappear. Frequently some just become weaker but more numerous, as other components of the system "repel" them from the more stable state they are in (this is where some people see the appearance of Entropy). However, these stable states are not eternal. They may change, globally or locally, and then all the other interactions may try to invade the castles of stability.

    Why all this confusing blah-blah-blah? Well, take a human and a machine. Have the human improve the machine until it looks much like his mind. Now shoot the human and leave the machine alone on Earth. How long will the machine be able to survive?

    Even if someone achieves the feat of creating an artificial mind much like ours, he will only be halfway there. This mind will need to be able to have a proper meal, to run from dangers, and to have a chance to go to the toilet from time to time. Besides, this mind will have a deep need to reproduce itself. Being alone in the Universe does not give good odds for eternity...
  • Wetness counts (Score:4, Interesting)

    by FranticMad ( 618762 ) on Monday October 21, 2002 @09:53AM (#4495399)
    On hearing the program, I'm feeling cranky about two things (and I speak as someone who was interviewed by Quirks & Quarks about studies in measuring brain activity).

    First, I don't think Kurzweil has said anything that Hans Moravec ("Mind Children") and Marvin Minsky didn't say a long time ago. Minsky contemplated machines transcending us, and Moravec long ago used Moore's law to predict when computers will be as complicated (he thinks) as human brains. Kurzweil is recycling other people's ideas.

    Second, Kurzweil (like other MIT hardware guys) talks about the brain with the underlying assumption that it is just a collection of processing units (neurons) connected by simple electrical contacts (dendrites and synapses). In fact, the entire body of a neuron is chock-a-block with calcium channels and tiny pores that are regulated by hundreds of different chemicals. Every year, new processes are discovered. Some chemicals are moved into the cell by active molecular transporters. Some chemicals may move between regions of cells by gaseous diffusion. Not only will you have to scan the connections between neurons, but you're going to have to mimic the action of all this oozy stuff in real time using silicon.

    And what about hormones and polypeptides that regulate all kinds of activities at short ranges, and also throughout the body? "Thinking" and decision-making involve lots of input from centres that excrete tiny quantities of chemicals -- all of this will have to be "scanned" (whatever that means) at a molecular level. It won't do to merely list the size and position of 100 billion neurons and their 100 trillion connections. You'll have to model the far greater number of wet chemical processes on every neuron.
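
    A crude back-of-the-envelope sketch in Python makes the scale of that point visible. Every figure below is an assumed round number, not a measurement:

        # Back-of-the-envelope only; all figures are assumed round numbers.
        NEURONS  = 100e9     # ~10^11 neurons
        SYNAPSES = 100e12    # ~10^14 connections
        BYTES_PER_SYNAPSE = 8    # suppose endpoints plus a strength fit in a few bytes

        connectome = SYNAPSES * BYTES_PER_SYNAPSE
        print(f"static wiring diagram: ~{connectome / 1e15:.1f} PB")    # ~0.8 PB

        # Now suppose each neuron also carries ~10^4 chemical state variables
        # (channels, transporters, local concentrations) updated every millisecond.
        STATE_VARS_PER_NEURON = 1e4
        UPDATES_PER_SECOND    = 1e3
        updates = NEURONS * STATE_VARS_PER_NEURON * UPDATES_PER_SECOND
        print(f"wet-chemistry simulation: ~{updates:.0e} state updates per second")    # ~1e+18

    Even with generous assumptions, the chemistry swamps the wiring diagram by orders of magnitude -- and that's before anything diffusing between cells is counted.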

    In the 1940s some people thought everything would be "atomic" by 1990. Atomic rockets, atomic cars, atomic radios. Today, just substitute the word "computational" or "silicon" for atomic and you can blather about nonsense in the year 2040 without having a clue of what it means.

    I think the brain's "wetness" is an integral part of its operation, and this makes it a very dynamic and complicated thing. To simply see the brain as a collection of tiny silicon CPUs wired together is naive. It's a theoretical model straight from the 1960s or earlier, before we knew much about the brain at all. A real breakthrough in Artificial Intelligence will probably arrive slowly, and probably be stimulated by people who learned modern (i.e. post-20th-century) physiology when they were young.

    Hence, I think the term "an expert in computers and artificial intelligence" is an oxymoron at this time.
  • by rweir ( 96112 ) on Monday October 21, 2002 @12:14PM (#4496848) Homepage Journal
    I read this great science fiction story by some Australian guy (can't remember who or when) that went something like this:

    In the future, everyone has a `jewel' implanted in their brain at birth. It's an optical computer that receives all your sensory data, then tries to replicate the external results of your brain activity. When you're young, it's way off, but it trains itself to match the responses of your real brain. One day, in your thirties, when your real brain is going downhill, you go to the hospital. They hook you up to another computer that keeps an eye on how well the outputs of the jewel match the outputs of your organic brain. If they match up, they scrape out your meatware and replace it with non-sentient tissue that consumes just as much blood, glucose, etc. as your original brain and can produce hormones for the rest of your body, while the jewel is hooked up to the rest of your body in its place. At that point, `you' are the jewel.

    The cool part of this is that there's no discontinuity between `me' and `it': the jewel will think the same thoughts as me; it will be me; in fact, it will even worry about dying when the organic brain is killed, since it thinks it is the original.
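
    Just to make the story's mechanism concrete, here's a toy Python sketch. Everything in it is invented for illustration -- the "brain" is a trivial function and the "jewel" a two-parameter imitator -- but the shape is the same: the jewel learns from the brain's responses to shared stimuli, and the hospital's comparator simply checks whether the two still disagree anywhere before the switch:

        import random

        def brain(stimulus):
            # stand-in for the organic brain's response to a stimulus
            return 2.0 * stimulus + 1.0

        class Jewel:
            """A crude learner that imitates whatever brain() does."""
            def __init__(self):
                self.a, self.b = 0.0, 0.0

            def respond(self, stimulus):
                return self.a * stimulus + self.b

            def learn(self, stimulus, target, lr=0.05):
                error = self.respond(stimulus) - target
                self.a -= lr * error * stimulus
                self.b -= lr * error

        jewel = Jewel()
        for _ in range(20000):                    # a lifetime of shared experience
            s = random.uniform(-1.0, 1.0)
            jewel.learn(s, brain(s))

        # The hospital's comparator: do jewel and brain still diverge?
        probes = [random.uniform(-1.0, 1.0) for _ in range(100)]
        worst = max(abs(jewel.respond(s) - brain(s)) for s in probes)
        print(f"worst disagreement: {worst:.6f}")  # tiny -> ready for the switch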

    The ending was quite a cool twist, which I won't spoil here. It was a really good story, though; hopefully someone will remember it and post details.
