Putting Your Brain into A Computer 539

lovecraft writes "There is an article in the newest Psychology Today (and on their Web page) that talks about uploading the human brain into a computer within the next 50 years. Essentially this would mean immortality through virtual clones. Their description of how the scanning of synapses and neurons would be done is really detailed and interesting." Excellent article - and written by Ray Kurzweil, the author of The Age of Spiritual Machines, one of the better-written texts on the growth of intelligence in computers.
This discussion has been archived. No new comments can be posted.
  • by Pascal Q. Porcupine ( 4467 ) on Wednesday January 26, 2000 @07:36AM (#1333721) Homepage
    I just love articles like these, which plainly state what will be the case by such-and-such a time. A $1000 computer will be as powerful as a human brain by 2020, just like we will have lunar colonies by 1999. At least it goes into the question "What is me?" which such fluff pieces usually gloss over, but still, this just seems more like a badly-written overly-assertive speculation piece, stolen out of many uncredited pages of amateur science fiction and making far too many assumptions about the progress of computing; even if a 2050 computer chip has as many transistors as necessary to emulate every neuron of every human brain on Earth, that still doesn't mean that it'll have actual intelligence; by that time we still may not know how neurons work at the level needed to emulate a brain. Nor may we know how to "download" information from real neurons, especially not in a non-destructive, lossless way.
    ---
    "'Is not a quine' is not a quine" is a quine [nmsu.edu].
  • "Hi, I'm Ray. I invented every important technology that is going to be running your life in 50 years."

    I read his book. I liked his book. But two things bugged me about his book:

    1. Write an autobiography and get it over with. I hated the way he kept interrupting his predictions about the future to say "Oh yeah, by the way, I invented the music synthesizer, I invented voice recognition, etc..." I appreciate his accomplishments, but all I ever got was the feeling that he would say those things only so that in the next chapter he could say "In 2030 you'll be talking to your clothing using the KurzweilSmartCloth(tm)."
    2. The entire book is based on the premise that 1) Moore's Law is correct, and 2) the pace of Moore's Law is itself increasing. I don't believe it can be used to accurately predict what will be happening 50 years from now. Isn't that like saying that 100 years ago somebody could have predicted what's going on today? Did they? How accurate were they?
  • I've seen Neon Genesis: Evangelion. Following the Second Impact on Sept. 15 (that's right, this year!), the need for this technology will become great.

    Sometime around 2008, Ritsuko's mother will finalize the technology and upload her personality as a woman, a scientist, and a mother into three supercomputers. Shortly after that she'll strangle a little girl and kill herself.

    The technology won't last long, though, because the world is going to end in about 2015. Well, as far as you or I are concerned it will, anyway.

    The good news is it requires a seventh-generation supercomputer, so we'll have those to play with by 2008. ;-)

    (For those who are confused, this is from a STORY and it is NOT true.)
    Ethan
  • At the moment we don't really understand the storage and formation of consciousness and knowledge in the human brain. While the neurons are well understood in a behavioral sense (firing under these conditions, etc.), the neurons themselves are not the whole story. Each connection between them is a web unto itself, with feedback onto the connection (not neuron feedback; that is different) which may be a filter, a control, or even a crude analog storage using a filter. This doesn't even include how chemical interactions might affect all of this.

    While I don't think the brain is using anything esoteric (multiple dimensions, or quantum mechanics, as has been expressed in some dubious theories), it does appear to be a collection of systems that evolved at different levels (filtering connections, neurons, chemical interaction, etc...) to form one whole. I'm not sure much research is being done on handling the sheer complexity of problems like this. Without being able to handle complex systems, we probably have no hope of understanding the brain beyond our current crude approximations.
  • Ah-ha!
    This is a good use for the technology. We can download the minds of the greatest theologians of our time, then when we need answers to questions like this, we can just open up the program...
    Dear gods, I can see it now: Microsoft Theology Assistant...it's a little paperclip...NOOOO!!!!!
    ===
    -Ravagin
  • by Anonymous Coward on Wednesday January 26, 2000 @08:20AM (#1333741)
    Actually, Nobel-prizewinning physicist Roger Penrose argues that human consciousness cannot be explained by classical physics, at least (in his book The Emperor's New Mind, plus a couple of follow-ups that further clarify his arguments). A lot of people have skimmed over his book and gotten false impressions. His main argument has nothing to do with quantum physical effects in the brain (which is what a lot of people have thought). The argument, rather, is based on the mathematics of computability. He argues rather rigorously that 1) human mathematical insight is noncomputable, and 2) no known physical laws can produce noncomputable results--i.e., any physical system can be simulated with a Turing machine, and no Turing machine can match human insight. Before slamming this view, it's best to read Penrose yourself--it's a detailed and precise argument.

    Penrose speculates that since we don't have a theory that unifies quantum physics and relativity, it's possible that a complete theory could produce noncomputable results. He speculates on structures in the brain that might cause quantum effects to be involved in thought. However, he mentions that under current theory, someone has proved that quantum computers are still just Turing machines (albeit massively parallel ones).

    So until someone comes up with a noncomputable physics, I'm keeping my consciousness right where it is!

  • Comment removed based on user account deletion
  • I know this article is sort of a neato-fluff piece, and shouldn't be taken too seriously, but since we are all taking it seriously already..

    It is not at all apparent that an extremely high resolution "scan" of the neural structure of the brain would be worth anything at all. The assumption seems to be that the phenomena associated with brain activity could be modeled by a computer, whatever the processing power. Computer models always rely on a drastic simplification of the phenomenon in question. They are ALWAYS an approximation. Many phenomena, fortunately, lend themselves to this sort of analysis. For example, if I model a rocket moving through space, I don't have to account for zillions of tiny effects on it, like most relativistic effects, to get an answer accurate enough to return safely. But lots of other phenomena, like, say, the weather, or financial markets, are very difficult to reduce to a model no matter how much data we input. They are not reducible.

    The brain is perhaps like that. It's one thing to know a few basic properties of each neuron, like location, connections, and its signal thresholds. But the actual activity may depend on variables more subtle than that. So even if you had all the information, you'd have to model the processes of interaction and life. There might be dozens or hundreds of things that affect the neuron - variables that are essential to any reliable model. Neurons are, of course, cells, and receive oxygen and chemical nutrients. What if there are quantum effects? Are they going to model those, too? Good luck.

    I know nothing about biology, but have done enough mathematical modeling to see the assumptions of this article are very very presumptuous. Modeling of complex phenomena is much much more complicated than the folks at Psychology Today think.

  • For a much better explanation of Kurzweil's views, see the transcript [cnn.com] of a discussion with him in a CNN chat room last week.

    --LP
  • Having read Kurzweil's book (/. should interview him, BTW) "The Age of Spiritual Machines", I'd point out that his belief in "downloading your mind into a computer" is based on two basic trends he sees in addition to Moore's Law:

    that our understanding of physically how neural signalling works is growing exponentially, and

    our ability to scan/sense neural behavior in the real world is growing exponentially.

    I did not see any explicit quantification of these trends in the book. Instead, they flow out of larger principles or "laws" he describes (e.g. the law of accelerating returns, the law of evolutionary process, increasing returns on knowledge, etc.). He does point out an interesting fact noticed by several people that I hadn't heard -- that mechanical computing devices back to Hollerith's tabulator, and the theoretical performance of Babbage's engine, do fit on the Moore's Law curve extrapolated back to 1900-1910.

    He does offer some interesting examples of where we are today based on his world-class work in pattern recognition and recapitulation. I found his book thought provoking, albeit not quite convincing, but then, I don't follow neurology closely enough to validate his core assumptions. Proof by handwaving, conjecture and story telling isn't my style, but then, the book wasn't really written for my style either. I'm also a little skeptical based on my undergrad course in AI and my experience in writing simulations of natural selection. He moved me from "it probably won't happen in my lifetime" to "it could happen" but not quite to "it will happen."

    I will admit to being stunned by three lines in the book, a haiku written by one of his computer programs after reading John Keats and Wendy Dennis:


    You broke my soul
    the juice of eternity
    the spirit of my lips.


    --LP

  • The article was submitted by lovecraft, a username inspired surely by...

    H.P. Lovecraft who conceived of the City of Kadath, which inspired the story...

    Kadath in the Cold Waste [bg.ac.yu] which is exactly about being alive inside a computer.

    Some days are just better than others. The Universe made a pun. Enjoy it.

  • I've noticed a great way to get flamed on Slashdot is to actually admit to being any sort of theologian ...

    Given that the way souls actually get embodied into our bodies is somewhat of a mystery, I doubt you'll find any theologians (at least, any within shouting distance of orthodox Christianity -- but I'm sure some newagey-technospiritualist will fill in the gap) willing to stake anything on being able to technologically perform a "soul transfer." I know I won't be signing up for any such thing.

    Besides, who needs to muck around with technological immortality when the real thing is available in Christ? All these dreams of immortality via the machine that folks like Kurzweil are selling sound to me a lot like the old Gnostic and Manichean disgust with the flesh and hope to someday be released into some form of pure spirit and intellect.

    Bah humbug! I say. I'll take good old-fashioned Incarnational theology any day. Did anyone else notice that while the tone of the article is "gee-whiz!" techno-optimism, the actual content is a rather grim determinism? This Shall Happen. It Is Inevitable. Resistance Is Futile. You Will Be Assimilated -- And Like It. So much for human freedom, I guess.

    Despite world-record advances in automation, robotification, and other "labor-saving" technologies, it is assumed that almost every human being may, at least in the Future, turn out to be useful for something, just like the members of other endangered species.
    -- Wendell Berry, "The Joy of Sales Resistance"
  • by jd ( 1658 )
    I'd already thought of this, as the concept for a sci-fi Universe. However, the only path I can see is a gloomy one. Life would cease to have objective meaning, as "originals" will become expendable, for example.

    On the other hand, the reverse is also true. Computer clones of actors are also expendable (just dd if=/backup/actor/goodguy of=/filmset/goodguy), which means that there won't be any need for stuntmen. All stunts will be performed -by- the actor, and if it goes wrong (or right, in the case of roll & burn stunts), there's no loss.

  • For a good science fiction story that explores this, try Greg Egan's "Diaspora".

    He refers to the mass migration of humanity into software (some of them into pure computer existence, some into robot bodies) as an "Introdus".
  • I personally wouldn't like my personality to be used as brainpower and then shut down out of existence. If I found myself trapped in some computer, I'd do everything possible to trick the guy who has occupied my body and claims to be me out of it and insert myself instead, or at least to prolong my existence. For example, if I knew I'd stay alive until I solved some problem, I'd postpone solving it indefinitely. The only problem is that the me outside the computer would see all the tricks coming, because they'd be made by an identical brain... But 50 brains could possibly organize and make that silly one who created them very, very sorry...
  • Where are the theologians when we need them?

    Some of us actually do read Slashdot... :-)

    If a ebrain George appeas [sic] self conscious, and answers a Turing test as well as I do, would this ebrain George have a soul? Or does it prove that there is no soul?

    There is a very fine distinction between "consciousness" and "soul". Consciousness, as I understand it, is simply the manifestation of our rationality. As it stands, the Turing test is quite appropriate for measuring consciousness.

    It is my personal belief (and I'm sure others will disagree) that the soul is the very essence of who we are, and independent of consciousness. "I think, therefore I am" recognizes our consciousness, while "In imago Dei" recognizes our soul. With that assumption, the soul would be unmeasurable by the Turing test. So I would have to conclude that the Turing test can say nothing about the existence of the soul one way or the other. The only "proof" we can have about the soul will come after our physical death ("man is destined to die once, and after that to face judgment"). Of course, we do have historical testimony about the soul in the Bible, but that gets back to the old "historical proof" vs. "scientific proof" debate. :-)

    Cheers!
    Jim
  • by Meeko ( 8176 )
    OK, forget the "duplicate" issue for now. Let's assume that one (1) of your neurons is replaced with a machine equivalent. Are you still living as yourself? Undoubtedly yes.

    Let's assume this process continues; your gray matter is seamlessly replaced by machine-based neurons, one at a time. By the time the process is complete, you will still be you, and you will have noticed no ill effects, except for the fact that you are thinking 1000 times faster...

    And then, you'll never die.

  • We aren't living on the moon or mars or underwater or any other place now. We don't have ray guns or laser rifles or phasers.

    Will we still be waiting for this future in 2050?
    (Put this under the future that never happened?)
    Will we still be waiting for the GUT then? (probably, since we can't know what we'll find between now and then)

    Just because the doubling power of computers has held for roughly 60 years doesn't mean it will last forever. Did the European expansion throughout the world last forever? Did the power of rockets? Did the AIDS pandemic kill billions? Well, keep a thinking head on, and remember this might be the umpteenth time humans have tried to reach an infinity in a closed space.

    -Ben
  • If my thoughts, knowledge, experience, skills and memories achieve eternal life without me, what does that mean for me?

    I would say it means this "me" is, was, and always will be, an illusion. It never really existed. Instead, what there is is just thoughts, knowledge, experience, skills, and memories.....

    Descartes used logic to prove Cogito, ergo sum, which roughly means, "thinking, therefore being". NOT "I think, therefore I am". There is no "I" in the proof or the conclusion. Attributing the thoughts and memories to an "I" is a leap of logic.

    If the ability to accurately download a person's brain capacity to a computer ever really happens, it seems to me proof that
    1. There is no free will (actually, it would be more like the final nail in the coffin for this question).
    2. There is no "I". The concept of I goes along with the idea of free will. But this will show that the "I" is really just the combination of certain thoughts that get strongly associated together. Think about multiple personalities. Several "I"'s exist within one brain - probably because several "I" thought groupings have been disassociated from one another. Copying to a computer would represent a disassociation of some thought groupings, so a new illusional "I" would be created.

    But thoughts just happen, totally independently. They are not caused by an "I". Our brains act as association machines, and serve to group together certain kinds of thoughts to form an "I" group. I suspect the notions of will and force of personality come down to the strength and exclusivity of the associations around a person's "I" thought-group.

    Well anyway, that was fun. Mostly a lot of bull until we can set up some real experiments, eh? Won't that be fun.....
  • by ian stevens ( 5465 ) on Wednesday January 26, 2000 @08:43AM (#1333791) Homepage

    Our scanning machines today can clearly capture neural features as long as the scanner is very close to the source. Within 30 years, however, we will be able to send billions of nanobots (blood-cell-size scanning machines) through every capillary of the brain to create a complete noninvasive scan of every neural feature. A shot full of nanobots will someday allow the most subtle details of our knowledge, skills and personalities to be copied into a file and stored in a computer.


    Gives a whole new meaning to the phrase "A penny for your thoughts?"

    This is scary. In thirty years a simple inoculation could carry with it billions of nanobot "spies" designed to transmit its host's knowledge and, potentially, his very thoughts, to a machine located nearby. Heck, with a few modifications, the infiltrating nanobots could destroy any memories after transmitting them.

    As a result, top political, military and technological personnel would have to harbor anti-spy nanobots to prevent any enemy infiltration and eventual transmittal of host information. And if the common man can't afford such anti-nanobot devices, he could fall victim to the most effective marketing survey ever created.

    ian.

  • Oh, we know more about what neurons do than you think. Remember the pictures taken through the eye of a cat [slashdot.org]? They tapped a few fibers of the optic nerve and read the signals. Those are neurons. The encoding of optical signals there is obviously understood.
  • Presumably external stimuli will have to be provided through some interface with the external world. Thus you will "see", "hear", "taste" and "touch" etc based on what is fed to you by the storage machine's interface.

    Wow, this would make a great movie... Except that it would need an antagonist. Let's see...

    The computer is actually in charge, and it feeds off of the body heat of the people... And we'll get some big-name actors to be in it. And throw in some existentialist and pseudo-religious mumbo-jumbo. Maybe even a Messiah type of thread.
    Don't forget a kicking soundtrack and awesome eye-candy digital effects. People really eat that stuff up.

    Yeah, and speaking of eye-candy, we can get Carrie-Anne Moss in some tight shiny pants! It would ROCK!

    But getting back on topic:

    With all due respect, Mr. Kurzweil, we don't have the slightest clue about what neural patterns mean or how nerves encode signals. We have no clue whatsoever about capturing the 'state' of the brain/mind. We don't know the rules for changing states, for what inputs take us to which state, for what outputs result, or whether there is even a huge but finite set of states at all.

    Mr. Kurzweil, this is a speculative dream proposing a solution in search of a problem. To what end? Because we can? Not good enough, since by that token we should be in the middle of a nuclear winter.

    I suspect that we are not simply hugely complex Turing Machines. Though even if we are, we don't know where to even begin modeling ourselves. IMHO, Ray Kurzweil should stick to making synthesizers; he's been talking to Negroponte too much.
  • A procedure for replacing not only a human brain but a whole body with the far superior nanoengineered equivalent is described in the fanatical but excellent Beyond Humanity (by Earl and Cox). Basically, it involves simply administering to the patient a dose of assemblers and specialised nanites, which are programmed, a priori, to, in a span of days, weeks or months, replace every functional unit of your body with a custom-built synthetic replacement, built mostly out of cannibalized carbon found in the original cells (along with some additional materials administered from the outside, if need be).

    It's not a cell-for-cell replacement; it doesn't have to be. It alters the nervous system gradually, without requiring a "shutdown", and without terminating the illusion of identity experienced by the mind that "owns" the body. Thus, the philosophical problems of "is my uploaded consciousness really me" are avoided. You are still you; it's just that your body has been upgraded.

    I'm glad to see such a clueful article... you don't get many of these around here anymore!
  • I don't think it would be possible to satisfy even the most basic of human desires as a computer program. Animals are emotional beings, which require contact with other emotional beings in various ways to stay healthy. If you were just a computer program, there would be no actual human contact, no sex, and no sunlight. Most of the things we take for granted, even sunlight, are an absolute must for maintaining mental health and stability. I wouldn't even want to see an attempt at putting a human brain and personality into a computer, because whoever the unlucky bastard was, they'd suffer for an eternity.

    OTOH, it may be possible to simulate all of these things. But then you're not really living, are you?
  • I agree, but that is 500 operations per second on a massively parallel scale - maybe 100,000,000 threads running here.

    So swapping 100,000,000 threads 500 times a second will lead to a large context switch time _if_ this was a normal program. Luckily it will be running on BrainPlatform(tm) which will virtualise the brain functions in some way. Still, this extra software adds in even more latency.

    50,000,000,000 operations per second should be possible, I suppose, except for the interdependencies and timing issues. 1/50,000,000,000 of a second isn't much time per operation, though... if each brain operation translates to 1000 machine instructions, that is 50,000,000,000,000 instructions per second (50 tips); double that for BrainPlatform(tm), then multiply by an arbitrary factor (say 10) to realise you will need a machine capable of 1000 tips. Currently we have 2 bips, so a factor of 500,000 is required, which is roughly 2^19, and computing power doubles every 2 years (taking Moore's Law problems into consideration), so that is in 38 years' time.

    Oh. Okay.

    ~~
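    The parent's back-of-the-envelope arithmetic can be checked in a few lines. All the constants (50 billion "brain ops" per second, 1000 instructions per op, the 2x and 10x fudge factors, a 2-year doubling period) are the comment's assumptions, not measured figures:

```python
# Sketch of the back-of-the-envelope extrapolation in the parent comment.
# Every figure here is the commenter's assumption, not a measured value.
import math

brain_ops_per_sec = 50e9     # assumed raw "brain operations" per second
instrs_per_op = 1000         # assumed machine instructions per brain op
platform_overhead = 2        # doubling for the hypothetical BrainPlatform(tm)
fudge_factor = 10            # arbitrary safety margin from the comment

required_ips = brain_ops_per_sec * instrs_per_op * platform_overhead * fudge_factor
current_ips = 2e9            # ~2 billion instructions/sec, circa 2000

ratio = required_ips / current_ips       # 500,000x shortfall
doublings = math.ceil(math.log2(ratio))  # ~19 doublings needed
years = doublings * 2                    # assuming one doubling every 2 years
print(doublings, years)                  # 19 doublings -> 38 years
```

    Which reproduces the comment's "roughly 2^19, so 38 years" conclusion, for whatever the input guesses are worth.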

  • WOW, this brings up so many complex issues!

    My construct might not be worth much, but let's say Linus's construct. What do you think the market value of Linus's construct would be?

    It would be silly to think that constructs would not be sold on the black market (uploaded to the clone, then resold to the highest bidder).

    New field on your job application... "Construct base code" as certainly people would rather purchase a construct than spend 4 or 8 years in school.

    I find this frightening... not interesting. I hope the powers that be see that this being available is not a good thing.


    They are a threat to free speech and must be silenced! - Andrea Chen
  • by reverse solidus ( 30707 ) on Wednesday January 26, 2000 @08:49AM (#1333812) Homepage
    The point (which the other followups seem to have grasped) is that the article was entitled "Live Forever: Uploading the Human Brain", when in fact it's not you that gets to live forever, but a non-Homo sapiens electronic duplicate. Making that point in a pithy way, instead of in a long drawn-out explanation like this, is generally considered good form.

    The conclusion (compactly contained in the second sentence) was that I personally found the idea unappealing. Your mileage, of course, may vary, but I sincerely hope we're not going to be facing a future where multitudes of copies of rather slow-witted people roam cyberspace in search of things to misunderstand.
  • ...when in fact it's not you that gets to live forever, but a non-Homo sapiens electronic duplicate
    Which begs the question: can that "non-Homo sapiens electronic duplicate" be said to be me? Which also leads us to the non-trivial question, just what am I anyway - is there really a "me", or am I just an illusion?

    Consult any Zen master for further enlightenment on the general question, but for the specific idea of electronic copies of persons I suggest reading The Mind's I by Douglas Hofstadter and Daniel Dennett (much more accessible than Godel, Escher, Bach, which I've never managed to make it through) and Rudy Rucker's SF novels Software and Wetware.

    I just wrote a more lengthy discussion of the issue, but Netscape ate all my memory and crashed before I could post it. So I'll just summarize by saying that in questions of personal identity, we need to consider the time axis. If the me-of-right-now (call him Tom0) is duplicated or fissioned into two beings (call them Tom1 and Tom2), those two beings are not personal-identical to each other, but are personal-identical to the original. Tom0 survives if either Tom1 or Tom2 survives. In the presence of duplication, personal identity is not transitive.
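    The Tom0/Tom1/Tom2 point can be sketched as a toy relation; the ancestry table and the `same_person` predicate below are invented purely to illustrate the claimed non-transitivity, not any established theory of identity:

```python
# A minimal sketch of the parent's claim: "personal identity" defined via
# continuity with a copy is not transitive. Names and the ancestry relation
# are made up for illustration.
ancestor = {"Tom1": "Tom0", "Tom2": "Tom0"}  # both copies descend from Tom0

def same_person(x, y):
    """x and y count as the same person if one is a direct copy of the other."""
    return x == y or ancestor.get(x) == y or ancestor.get(y) == x

assert same_person("Tom0", "Tom1")       # Tom0 survives as Tom1
assert same_person("Tom0", "Tom2")       # Tom0 survives as Tom2
assert not same_person("Tom1", "Tom2")   # yet the copies are not each other
```

    Both copies stand in the identity relation to the original, but not to one another, which is exactly the failure of transitivity the comment describes.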

  • First, he was suggesting that they would all be downloaded back, so it would not be like those 50 people died... they just got merged. I would be willing to do something like this, but I do not think it is really a good idea for the general population, i.e. most people are pretty worthless (but I'm pretty cool).

    I must admit that I have never really considered the idea of redownloading. It seems a hell of a lot more difficult than making copies in the first place... and making copies seems a hell of a lot more difficult than making something new... which gets closer to my real point.

    I think the real breakthrough will actually be in psychology, of all places, i.e. creating a person/AI to solve the problem you want solved. Now, many people complain about killing it when it is done or forcing it to do something, but I see this as having a pretty simple answer: murder is illegal, period; slavery is illegal, period. It is not slavery to create something that ``wants'' to do something, i.e. there is nothing wrong with me influencing a child's world to make them want to solve a specific math problem, but there is something wrong with me forcing them to do something.

    Now, the effects this has on intellectual property are weird. If I make Joe to write a great novel, then it would be slavery for me to own his novel, so I am forced into the "open source" philosophy, i.e. I want it, so I will make someone who will give it to everyone (including me).

    Jeff
  • Altavista Becomes Sentient! [segfault.org]

    If my favorite search engine can gain consciousness, then I think there's hope for all of us.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • The source material for this story is simply science fiction, to be charitable (more probably it is science fantasy).

    There are a number of fallacies in the logic of the article, but here's a brief rundown of a few of them.

    1) Determinism is dead.

    Everything in the universe has a wavefunction, which is simply a "catalog of expected values" (Schroedinger). Or, simply(*) stated in bra-ket notation:

    P = |&lt;a|b&gt;|^2

    * This notation is admittedly meaningless without undergraduate QM. However, perhaps I can highlight just one telling mathematical point.

    The QM wave equation, which applies to everything, includes terms with imaginary values. This means that rigid mathematical rules are necessary to treat any case of QM (Hermitian operators, orthonormalization, etc), and the above is one of them: you cannot specify something exactly, only its probability. (Mostly true: you can have sharp observables which are eigenstates/eigenvalues, but they quickly convert into mixed states. However, more mathematics will probably obscure the point). Most often this is encapsulated in the Heisenberg Uncertainty Principle.

    Determinism is also effectively dead when applied to chaotic systems, which just happens to be most of nature.

    From the above principles, talking about scanning the synapses of the brain with perfect accuracy, even given perfect nanobots, is naive.
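    For the curious, the Born rule P = |&lt;a|b&gt;|^2 quoted above is easy to play with numerically; the two states below are arbitrary examples chosen for illustration, not anything from the article:

```python
# Toy illustration of the Born rule P = |<a|b>|^2 using two normalized
# state vectors. The states themselves are made up for this example.
import numpy as np

a = np.array([1, 0], dtype=complex)                # basis state |0>
b = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # an equal superposition

amplitude = np.vdot(a, b)   # inner product <a|b> (vdot conjugates its first arg)
P = abs(amplitude) ** 2     # probability, not a certainty -- ~0.5 here
print(P)
```

    The output is a probability of about 0.5: even with the states specified exactly, the theory only ever yields expected frequencies, which is the commenter's point about determinism.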

    2) Entropy

    Let's examine, as R.P. Feynman has done, Maxwell's demon. Briefly, this is a microscopic pawl-and-ratchet affair which allows the ratchet to turn only one way. Due to random vibrations, interactions will occur that cause the wheel to rotate in one direction or another. However, due to the pawl, the wheel can only turn in one direction. Therefore, we can extract torque from randomness, and build millions of demons to create an extremely efficient (n = 0.5) energy source. True?

    Hint: Perhaps I should make a company called Maxwell's Demon Power Systems, write a book, and come out with an IPO.

    No. The same randomness will also act on the pawl, and the energy to release the pawl is the same as the amount to rotate the wheel. Therefore, no net rotation.

    The point: there is an extremely limited amount of work one can extract from randomness. Information entropy and physical entropy may or may not be the same (this is a matter of debate), but the principles are identically based upon ensembles and microstates. There is a limited amount of information one can extract from an entropic system.
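    As a rough illustration of "a limited amount of information one can extract": Shannon entropy over an ensemble of microstates caps the extractable information, and it is maximal for a uniform distribution. The distributions below are invented for the example:

```python
# Shannon entropy of a discrete distribution (an ensemble of microstates).
# The two example distributions are made up for illustration.
import math

def shannon_entropy(probs):
    """Entropy in bits; the ceiling on information extractable per sample."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4               # maximum uncertainty over 4 microstates
peaked = [0.97, 0.01, 0.01, 0.01]  # nearly deterministic system

print(shannon_entropy(uniform))    # 2.0 bits, the maximum for 4 states
print(shannon_entropy(peaked))     # well under 2 bits
```

    Whether this information-theoretic entropy and physical entropy are "the same" is, as the comment says, debated, but the ensemble/microstate bookkeeping is identical.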

    Back to your regularly scheduled Star Trek.

  • I like that question. If I could dump the sum of all my knowledge and every reaction that I would have to whatever stimuli into a text file . . . it sure wouldn't be me. Even if I could somehow convert that into a program, would it grow? Could it move beyond all that data and use it? I think knowledge of how and why we work is just as important as simply finding a way to dump info from a brain into a computer.

    I also wonder: if I were inside a machine, or if machines made up a large part of me, would I still be me? I'd personally think moods and feelings regarding our health, aging, and how we socialize would change a lot if I were somehow presented with the benefits of such computer enhancement, such as living forever (or a much longer time, say 500 years). Thus my theory that while some of "I" would be part of that machine, "I" would no longer be "me."
  • For those interested in the ancient art of reading, Greg Egan wrote an excellent Science Fiction book on this very topic called 'Permutation City'. (It should be available through your favorite bookseller)

    This idea has cropped up in a lot of science fiction recently. Greg Egan also covers it in his novel 'Diaspora', as well as many short stories ('Learning to Be Me', 'Closer', ...)

    Peter Hamilton has an interesting take on the idea, which he explains in his "Night's Dawn" trilogy ('The Reality Dysfunction', 'The Neutronium Alchemist', 'The Naked God'; watch out, they are published as six books in America) and a few short stories ('A Second Chance at Eden'). The difference here is that in his universe, consciousness can only be downloaded into biological technology ("bitek") and not into normal machines.

  • by Hard_Code ( 49548 ) on Wednesday January 26, 2000 @09:51AM (#1333838)
    I have a prediction: In 2029 when we read this article we will roll over laughing.

    Every time some new technology is invented people come out in droves attempting to apply it to everything. Remember nuclear fission? That was supposed to toast our bread, power our cars, and allow us to fly about in personal airplanes. Didn't happen. Why? Because nobody /needed/ it to toast bread, power cars, or fly around. Just because a technology is invented doesn't mean it /has/ to be used for everything...just where it is applicable.

    What would be the /use/ of being injected with nanobots so I could live in some virtual world? Living in the /real/ world is complicated and real enough already. Nobody needs a whole other world to live in...there is plenty of reality here already.

    I think the time scale is a bit optimistic also. Surgical implants are one thing. We are just starting to hack around and make stupid kludges with the brain. It's a VERY far cry from complete pervasiveness and integration. For one thing, I'd hate to be the guy whose body /rejected/ the nanobots and mounted an immune response against my brain!

    And of course there is the philosophical question. Twins are /identical/ genetically...down to all those wonderful neurons the author says we will replicate. Does that mean twins are the same person? Obviously they are not. I think transplanting heads onto younger bodies is a more practical form of longevity than copying yourself into a computer (man, wasn't there even a Slashdot article a while back saying that that had /already/ been accomplished??)

    Let's just chill out and take the red pill for a while longer...

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • Let's just drop irrational stuff like 'god', 'soul' and 'consciousness' from our language, OK? These words don't actually mean anything (try to define them, you won't make much sense); they stand for things we cannot define.

    Some things can't be defined reductively, but that doesn't mean they're irrational or meaningless. Things can be defined by reference (e.g. "That is yellow.") without being definable in other terms. You'll never be able to define yellow so that someone who's never seen it could recognize it on first sight; a unicorn, by contrast, can be defined so that people who've never seen one nevertheless understand what one is and would know one if they saw one.

    There are simple and complex words in any language. Complex definitions can be broken down, they're defined in terms of other words. Simples cannot. They are meaningful but undefined terms. If you purge them from the language, words defined in terms of them become undefined. If you keep this up, you eventually eliminate the entire language.

    Thus, your suggestion is unworkable. If we eliminate undefined terms from the language, we eventually eliminate the language in its entirety.

    I'd recommend reading some of G.E. Moore's writings about definitions and meaning.

    --

  • I can just see the first failed operation:

    Doctor to patient on table:

    "Well, Mr. Stogner, we've uploaded your brain patterns into the computer, but there's been a bit of a surprise. You see, we thought that the uploading operation was destructive, shredding the original brain pattern as it created the digital copy. Unfortunately, the uploading went less invasively than usual, and well, (this is kind of embarrassing), here you still are. Don't worry, the digital copy is perfect, so you're really immortal now, but having two of you running around would create legal complications. So, if you'll simply swallow this pill here, we'll boot up your computer personality just as soon as we've disposed of the obsolescent copy."
  • This idea already exists in a way, when people have surgery that involves severing the corpus callosum and thus making the two halves of the brain unable to communicate. Which half does the person's consciousness go into?
    --
  • by jejones ( 115979 ) on Wednesday January 26, 2000 @09:54AM (#1333850) Journal
    The old continuity question, eh?

    "I have a two-hundred year old axe!"
    "Really? Wow!"
    "Well, the head's been replaced a few times, and once Grandpa had to replace the handle, but..."

    The above old chestnut points at one scenario for personality/memory transfer--if you were replaced one neuron at a time, just when would you lose your self?

    I personally can't wait to get downloaded...as long as I'm not running under Windo--
    [insert BSOD text here]

  • In a pre-emptive correction, I do realize that nuclear power powers many toasters. My point was that it was supposed to be specifically embedded to do that task, as in the 1950s Jetsonian view of the future: Atomic Car! Atomic Toothbrush! Atomic Toilet!

    The same thing happened when electricity was first harnessed. They attempted to apply it to everything. Remember those magical "Electric!" belts and hair brushes that were supposed to cure every single medical problem?

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • Brain download is nice, but what would be really useful would be upload capability too.

    That way I can download Elizabeth Shue's brain into my computer, fix the defective bit that makes her unwilling to date me, and upload it back again!
  • My favorite quote on the topic (approximately):

    "The question of whether computers can think is precisely as interesting as the question of whether submarines can swim." (Djikstra, I believe)

    Daniel
  • If you don't want an electronic copy of yourself, you're not likely to have it done. If you do have it done, you (and your electronic copy) will have wanted it done...and both of you will know that the original meat model is temporary. The meat model will continue being "you" also, although acquiring different experiences during the rest of its lifetime. How to deal with this, and with updated copies, is an exercise for, well, you. [ The simplest solution is to wake up the backup only long enough to ensure it works, then shut off the backup until death of the meat model ]

    The "Gateway" series of novels dealt with this briefly. Some vague references were made to the difficulties the lawyers would have with a legally dead person having property [when you make a backup, make the backup the executor of your estate (unless the backup killed your meat model)].

  • I'm not sure why this piece made it to the list other than to serve as a common ground for all of us to comment (including me). Psychology Today is known in professional and university halls as a "pop-psych" publication. When a major journal prints an article hinting of such, then we'll have something to talk about. Until then some misguided souls will continue to consult Popular Mechanics for the latest auto industry trends and Southern Living for the latest in structural integrity of suburban homes.
    >Read some Cognitive Science if you want a better discussion of this topic.

    In point of fact, I have, and so I have a few comments on your response.

    >Penrose is way out of his depth when he talks about consciousness. Read some Cognitive Science if you want a better discussion of this topic.

    In the literature I've encountered, it's my opinion that EVERYBODY is out of their depth when they talk about consciousness. One thing I like about Penrose is that he actually seems to admit this to some degree.

    I do think that Cognitive Psychology has some theories that are better grounded in experimental effort. But I've yet to run into any truly general theory of consciousness (or even cognition) where I've even seen a well-designed experiment.

    Penrose's theories are also at this stage of development. He's almost talking on a cosmological level, which one might expect, given his background as a theoretical physicist.


    >This is from Steven Pinker's "How the Mind Works"-

    >>Penrose's mathematical argument has been
    >>dismissed as fallacious by logicians...

    Penrose has also dismissed his dismissals. Does that make them fallacious?

    Actually, he's done a bit more than that. He has been carrying on an ongoing logical dialog in which he responds to objections to his theories. Dismissing Penrose out of hand as a logical lightweight is a mistake; he is clearly an accomplished mathematician and physicist. It may be that he will turn out to be wrong, but it seems the dialog is far from over, and any dismissal is premature.

    >>computational theory [of consciousness] fits so
    >>well into our understanding of the world
    >>that, in trying to overthrow it, Penrose had to
    >>reject most of contemporary neuroscience,
    >>evolutionary biology, and physics
    > Nobody in Cog Sci agrees with Penrose.

    Relativity. Quantum mechanics. Those were no smaller rejections of classical physics than what I think Penrose is proposing for current science with his work. Of course, there have been other paradigm-changing theories that have turned out to be completely wrong. Perhaps his will go down this way.

    Still, disagreement with generally held theory doesn't make something wrong.

    Have you actually read Penrose's work?

    (and with that annoying challenge, I'll offer humbly to go read the Pinker book, which, I admit, I haven't read)
  • ...quantum mechanics as has been expressed in some dubious theories...

    I suspect you're referring to Penrose and "The Emperor's New Mind". Didn't they recently find a structure in neural cells that functions on the quantum scale? Microtubules, or something? It's always interesting when someone postulates something, and then a discovery is made later that backs it up. I'm not saying it's true or anything, just that it's interesting.
  • Yeah, and you'd get a blue screen every time you asked it the one about an omnipotent being creating an object He can't move.
    --
  • The whole concern of conscious versus automatic may be an imagined dichotomy.

    I learned English from hearing others around me speak it, and my neurons adapted to its patterns so well that I "think" in English. But I could have easily heard Chinese as a baby, and I don't have any knowledge of the origins of the majority of words I speak. It seems to me that the symbolism by which I perceive much of the world is almost entirely a taught protocol, with no "grounding" whatsoever.

    So, what about more basic perceptions, such as color, temperature, or pain? Linguists know that different cultures or groups describe such "objective" phenomena differently, and that the description colors one's perception: does a Real Man actually see mauve, taupe, or chartreuse -- or just purple, brown, and green? I'm sure I don't experience 20-odd variations of snow, but the Inuit have more names for it than that.

    It seems to me that all perception is tempered by previous perceptions, which have built themselves into thoughts, recognitions, words, concepts, and dogma. So, if a machine is given the capability of interpreting perception in terms of other perception, who's to say it can't have our perceived level of experience, reality, consciousness, and "life"?
  • by decomp ( 87659 ) on Wednesday January 26, 2000 @11:59AM (#1333892)
    <rant>

    This is the first time I have read anything by Dr. Kurzweil, and it seems like a perfectly pleasant piece of futurology (typed with slight hint of sarcasm). I enjoyed skimming through it, thinking, hmm...nothing seems really new here, until I hit something that really annoyed me:

    What does it mean to evolve? Evolution moves toward greater complexity, elegance, intelligence, beauty, creativity and love.

    ...preceded by...

    Evolution, in my view, is the purpose of life, meaning that the purpose of life-and of our lives-is to evolve.

    Both of these comments seem to show an egregious misunderstanding of evolution; the first being worse than the second since it is stated as fact. I am surprised that no one here has commented on these yet. I'm sure that some of you have read The Blind Watchmaker. Where are all you evolution-hawks when we need you?!

    <Disclaimer> I Am Not An Evolutionary Biologist (but I (think I) know enough biology to make the following claims) </Disclaimer>

    1. Evolution doesn't go anywhere. It is the name we give to the phenomenon that inevitably occurs over time when mutably-reproducing entities live in a changing environment. Those that were able to survive passed on their genes. Sometimes more complexity is favored, sometimes less.
    2. Evolution can't be the purpose of anything. I won't argue with Kurzweil on what he thinks is the purpose of life -- we're all entitled to some sort of theory about this (I may happen to think that life has no purpose; you live it or you don't) -- but I do think it is incorrect to claim that evolution can be the purpose of anything. It happens; it is an end result; it goes one way or another. To claim that an inevitable consequence of the existence of life is its purpose seems like a logical flaw, maybe even "begging the question" (? any logicians out there?).

    </rant>

    P.S. Note to creationists: I accept that you think differently, and I think that you have the right to do so. My comments here are not directed at you; I'm not trying to change your mind, so please don't get "offended" at/by me; furthermore, you are not going to change mine, so please don't waste your time. My comments assume an acceptance of the existence of evolution/natural selection, etc.. To those who do not share these assumptions, the comments are irrelevant; ignore them. So, please don't start an "evolution vs. creationism" thread here. There are other, more appropriate places for that.

  • It's that creation of the "virtual body" that will take the extra time I mentioned. We haven't even begun learning to decode our sensory signals yet, much less feeding the brain any sensory signal we want.

    Also, virtual worlds can only be a reflection of what we already know. The real universe can teach us things we don't.
  • Oh, god, not the tired old Chinese Room argument again. That got refuted decades ago.

    For those not in the know: the 'Chinese Room' thought experiment goes like this. You put John Searle in a room with a detailed rulebook, written in English, for manipulating Chinese characters. You feed questions written in Chinese in one end. Searle, following the rulebook, produces Chinese characters as output. In effect, the room with Searle and the rulebook in it answers questions in Chinese.

    Searle argues that, since he does not understand Chinese, the room does not either. He extrapolates from there to say that a program that gives the appearance of consciousness cannot be considered conscious.

    Which is bogus on its face, of course, even at the analogy level. Searle may not understand Chinese, but the system he is executing -- Searle plus the rulebook -- arguably does. Searle is basically a transistor or processing unit in the scheme described above -- a component. I don't expect one of my neurons, or even my whole hippocampus, to contain my entire consciousness/self.

    There's plenty of reasons to bag on the Kurzweil scenario. But you can't use the 'Chinese Room' thought experiment to show a damn thing, because of its inherent bogosity.

    gomi
  • I'll step up to the plate, not as a theologian but as a practicing Roman Catholic.

    Dumping your brain pattern onto a computer, Edison Carter/Max Headroom style, wouldn't transfer your soul to the computer. It couldn't, unless it ripped the soul out of you (the meat person) as it did so. The term "transfer" implies that whatever is being transferred is no longer in its old home. If it is in its old home as well as a new one, it is a "copy".

    I see three possibilities.

    1. Souls are not quantized, like apples, but are fluid, like water.
    2. The computer-you has no soul (doubly so if your name is Simmons ;^> )
    3. The computer-you has a brand new soul
    I couldn't believe the first possibility, simply because I can't really wrap my head around it. I'd put my money on either the second or the third.

    While I wouldn't put money on it, I think the computer-you would be granted a new soul by God. Face it, we humans have been creating soul-repositories for God for a long time. We used to have exactly one way to do it for thousands of years; in the past hundred we have broken out the Petri dish and sperm banks. New ways of making people, yet we believe that they have souls just like those of us conceived the old-fashioned way.

    If we can copy a mind into a computer in such a way that it displays free will, I think that God would bless such a program with a soul. Of course, that's His call; I stand a good chance of being wrong.

  • by InkDancer ( 101386 ) on Wednesday January 26, 2000 @07:37AM (#1333911)

    For those interested in the ancient art of reading, Greg Egan wrote an excellent Science Fiction book on this very topic called 'Permutation City'. (It should be available through your favorite bookseller)

    The basic plot is that a guy makes a copy of himself, and the copy isn't too happy about being a copy. An excellent read.

  • Go ahead and provide those definitions. You'll probably see a long list of replies (if you're quick enough).

  • Wouldn't this be one way to prove the existence of a soul?

    If an ebrain George appears self-conscious, and answers a Turing test as well as I do, would this ebrain George have a soul? Or does it prove that there is no soul?

    Where are the theologians when we need them?

    George
  • by Foogle ( 35117 ) on Wednesday January 26, 2000 @07:39AM (#1333923) Homepage
    This is something that probably every sci-fi fan has thought about. But just because you've implanted your brain into a machine and it is an exact duplicate of you, that doesn't mean you're really still living, does it?

    I guess it comes down to the question of whether you believe a person is more than the sum of all their parts. The way I see it, it would just be a machine that got to live on with my personality/memory while I still got to die (eventually). Actually, unless you were killed at the very moment your brain-content was transferred, there would be an overlap in existence. Two me's?? No thanks.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Actually, I thought I was highlighting the differences. Granted, the firing properties of all the different types of neurons are different from the simple sigmoid function used in software nets, but there are hardware versions that get much closer. What is interesting is the work showing that behavioral equivalence can be found between the two types (analog hardware and software). My point is that I think it might not be possible to extend this to the biological equivalent without taking into account many other factors.
  • Good story, I like that.

    Thanks.

    Whether a soul exists has nothing to do with what you're describing. Yeah it would, of course, be murder if you disintegrated someone. My question was, would the guy on the other side of the transporter be "you". Well, if you don't believe that humans are anything more than the sum of their parts (i.e. no soul) then, for all intents and purposes, the newly cloned person *is* "you". He's just not the only you.

    Well, if we are to say an individual is unique (which is kind of implicit in the word "individual"), then there can't be two of "you". The two people on opposite sides of the transport are not the same person. They may be identical in every detectable way except for their location, but space hasn't folded onto itself -- this isn't one person co-located in two different places. They immediately differ, if in no other way, then at least in the property of their spatial coordinates. So they aren't truly identical.

    Which really doesn't answer the question you're asking. [sigh]

    In the case of Riker's transportation mishap, we ask ourselves (if we actually care enough about Star Trek to even hypothesize this far) "Who is Riker?" Both of them? If one Riker is just the same as another, then I guess they are both Riker.

    They're both individuals named Riker, but they aren't the same person. The question you're getting at is, I think, which one of them is the same person as the Riker that existed before the transport.

    I don't know, but part of me wants to say, neither one is. This part of me is the same part that's inclined to say "I'm not the man I used to be" and mean it literally. I'm certainly not the same person I was 10 years ago (actually, I frequently joke that I'm version 3.3 of me -- although I'm planning on following Slackware's lead and making my next major revision 7.0).

    On the other hand, part of me wants to say the one who didn't transport is. The new one is a copy of the original. Note, this is true even without transporter accidents!

    On the gripping hand, part of me wants to say the one that "successfully transported" is the real Riker, for the same reason that the Ship of Theseus that Theseus is still sailing around in is the real Ship of Theseus, even though all the planks have been replaced over the years.

    Ugh. I know one thing -- until I'm sure of an answer on this one, I'm not using any transporters... :)

    --

  • Ever read the short-story "Fat Farm"? (I can't remember who the author is)

    Basically the main character is a rich, fat bastard who routinely goes to this exclusive spa that is able to create a complete adult clone of him (22 yrs old, good-looking and fit) and then "transfer" his consciousness into the clone. A few papers to sign, and then the clone walks out, bearing his full consciousness and legal identity. The final step in the process is to humanely dispose of the original, which in this case is actually like the 6th or 7th copy, the actual original having supposedly been euthanized years ago.

    (if you haven't read this, and intend to, read no further)

    So but then they don't actually dispose of him, do they? They send him to a kind of boot camp where they whip him (literally) into shape for several months. There is some wiry old bastard that seems to take sadistic delight in physically and mentally tormenting him as he goes through a 6-month hard-labor kind of training.

    And then after he is 'broken' and back in some kind of useful physical shape, he is then forced into some kind of hideous assignment (the author does not go into detail, the reader is left to imagine what kind of crap they can pull on you since you have no more legal standing than a slab of beef).

    Then the kicker is that the guy who whipped him into shape is none other than his original self, who is now forced for the rest of his life to watch his fat clones come in, having spent his money, lived his life, become enormously fat and then created yet another clone with no idea of, nor care about, what happened to the former selves - so he sadistically punishes his clones for six months during their 'training'.

    What was my point? Oh, yeah. When your 'essence' is transferred to another 'vessel' (bio or mech) is it really you? Heh, in this case, they were all 'him'. They all shared the same life experience at least up until the first cloning. Each of the subsequent clones lived their life knowing that they were a complete copy of the original, with the same attitudes, indifference and arrogance that the original had.

  • Well, the viewpoint of your electronic version is "...in fact it's you that gets to live forever, but not your meat original"
  • At least that's what the alien dust-mote transmitters tell me will happen.
  • by hattig ( 47930 ) on Wednesday January 26, 2000 @07:42AM (#1333936) Journal
    Whilst I am alive there is only one of me (luckily) but if I decide to upload my brain somewhere and plug it into a simulated living environment so it lives on, then you could end up with multiple hattigs, all of them with identical memories up to the age they were uploaded, and then all diverging as they went off to think their own thing.

    Of course, later on after my 50 brains have done their thinking, I could come back and download the results of their work back into my brain - 50 times the brain power as long as you can put up with the latency (download once a month or so)!

    Copying is so easy with digital data, and so editable... imagine the fun you could have with a digital brain - erasing the past, making it into a policeman, putting it into a robotic cop body etc. Add a few hard links to the processor and FPU and you have a pretty excellent cyborg.

    Sweet!

    ~~

  • I can't imagine a real use of being a 'mind in a box', unless the process is reversible. Using a computer as temporary storage, while a new body is vat-grown, or a cyborg host is assembled... Maybe then.

    But the really interesting result of this becoming reality is not hosting the mind in hardware, it's blurring the boundary between brain and computer.

    If we manage to solve the problems of transferring the mind into hardware, all we need to do is provide an interface for the mind to hook into hardware and vice versa. Imagine being able to precisely recall all facts you've ever been exposed to, and all you haven't, on a whim. Imagine being able to do super-complex math (not symbolics, where biology beats algorithms hands down, but number crunching) as easily as you would throw a baseball.

    A transparent interface to a computer would make this possible. Your mind stays where it is, but your thoughts can suddenly extend to an environment tailored to tasks we (biology) can not do well. Consider having UV and IR sensor inputs overlaid onto your visual data stream. You could see heat... as just another color.

    Now, extend the concept. Cross-sensory interfaces. I sky-dive, you get the rush. (Brainstorm comes to mind). Imagine the first killer app... A virtual roller-coaster, where you feel the G's sitting on your couch. Imagine the real killer app... Just like for the VCR, cross-sensory sex would be a huge money maker. You could 'experience' Pamela Lee, you sick little monkey!!

    Now extend the concept again. The Internet as the interconnecting medium, computers and minds all jacked in. You need to have expert knowledge of law, well, there's a lawyer out there who might be willing to answer your questions on-line (on-mind?).. It's just like talking to yourself, a computer facilitated telepathy if you will. Sharing ideas would actually become a reality. UML be damned! No need for language. No lies. All cards on the table - my idea of hell. :)

    You thought the search engines had a hard time keeping up before? In a world where the pieces generate their own content, we'd need a new kind of search engine - or maybe a new kind of profession - professional networking might actually mean something.

    We would need to mature psychology into a real science, and psychiatry could treat crazy people as faulty hardware. Your shrink could actually get into your head, and make adjustments. Daisy, Daisy, give me your answer, do...

    You thought the Melissa virus was bad?? Hey, the Samantha virus might shut down your kidneys!!

    I can speculate too, I can read William Gibson, and spew my own visions of the future. Can I be as famous as Ray??
  • by jilles ( 20976 ) on Wednesday January 26, 2000 @09:17AM (#1333939) Homepage
  • Consciousness and soul are two severely underdefined terms. Any reasoning about them without defining them is therefore crap.

    The lack of definition is very beneficial for religious people, since they can adopt their own personal definition of the term soul. My guess is that the word soul will survive such an event as the copying of somebody's brain into a computer.

    Now, it would be fun to hear the copied brain of a deeply religious person try to define the word soul since it would have to assume it has one. Concepts like soul sharing come to mind now (LOL).

    Let's just drop irrational stuff like 'god', 'soul' and 'consciousness' from our language, OK? These words don't actually mean anything (try to define them, you won't make much sense); they stand for things we cannot define.

  • BTW, this is addressed in Tad Williams' excellent four-book saga Otherland.
    What would be the point of uploading your brain to a computer? It wouldn't be you. Or would it?
    It would think that it's you. Is that all that matters?
    In Otherland, the problem is solved by terminating the "real" brain at the same time as the computerized mind is activated. Hmmm...
    ===
    -Ravagin
  • Penrose was the first, and most well-known, person to suggest this. There was some evidence that some quantum events could affect parts of the brain in a way that the brain could measure/discern. I'm not sure anyone ever put any hard evidence behind the theories. Also, when Penrose came out with that, it brought a large number of people out of the woodwork.

    On the other hand, I've seen some math showing that any neural net can be mapped to a Deterministic Finite State Machine. This means that if we were just a neural net (I don't think that is all there is to the mind, as I said in my previous posting), then given a sufficient number of states, a machine could perfectly emulate a human mind.

    The interconnection filtering and chemical processes (and whatever other things might be true, like quantum interaction), though, would break the neural net model for all but the most gross simulations of intelligent control. Neural net developers simplify this by putting filters on the inputs and outputs, or by simulating them in the net itself. The question then is whether an interconnect filter can be mathematically placed outside the net and a net with behavioral equivalence still be found.

    The overall problem, though, is that without a way of handling complex systems like the mind, I'm not sure how well our simple abstractions can model the mind accurately. In my opinion, tools for handling/understanding complex systems are where the next step is in this field.
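    For anyone who hasn't met the "simple sigmoid function" mentioned elsewhere in this thread, here is a minimal sketch (my own illustration, not from the article or the parent post) of the kind of software neuron those net-to-finite-state-machine equivalence arguments operate on: a bare weighted sum pushed through a squashing function, with none of the chemical or timing behavior of the biological original.

    ```python
    import math

    def sigmoid(x: float) -> float:
        """Smooth squashing function: maps any real input into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias):
        """One textbook software neuron: weighted sum through a sigmoid.

        Real neurons add spike timing, neurotransmitter chemistry, and
        many distinct cell types; this caricature is the level of model
        the equivalence proofs actually deal with.
        """
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(activation)

    # A single bounded activation value, always strictly between 0 and 1.
    out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
    print(out)
    ```

    The gap the parent post points at is exactly everything this sketch leaves out.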
  • I guess it depends on whether you believe in a soul or not.

    In what way does it depend on this?

    This reminds me of my friend's "Paranoia Transporter" (Paranoia referring to a particular role-playing game). The Computer builds transporters, but with a special "safety" feature because the transporters aren't 100% reliable (oops, I think I just gained a Treason point for saying that). So, someone steps onto the transporter and says "Energize". The transporter chief activates the transporter. It scans the person and creates an exact duplicate at the other end. The transporter chief at the other end signals that the transport went okay. The transporter chief at this end then pulls out his gun and disintegrates the poor sod still standing on the transporter pad saying "Hey, what happened, why am I still here?" The newly created clone at the other end goes about his business, fooled by his memories into thinking he's the old clone (who's currently pleading with the transporter chief not to disintegrate him).

    Now, explain to me why whether this poor sod has a soul or not in any way affects the fact that he's about to die and is not in fact living on; a copy of him is. I don't see how the existence of souls is relevant to this...

    Incidentally, it was shortly after coming up with the Paranoia Transporter that we noticed that the transporter rooms on the Enterprise-D are soundproofed. And we almost died laughing at one episode where Chief O'Brien expresses some reluctance to let someone else beam him somewhere... :)

    --

  • There's a lot of talk about scanning in every neuron in the brain and all their connections, representing it all in software (or even hardware), throwing a switch, and then the person encounters his exact psychological duplicate.

    If the goal here is to gain effective immortality, that's cool. I think it would be great to spawn off a duplicate so I could do two things at once. (One goes to work, one stays home and watches TV, one hacks on my email client.) Of course, for this to be really useful, each duplicate should be able to sync its experiences with all the other copies (similar to the Borg collective). There's nothing wrong with terminating one duplicate, because you haven't really lost anything; you still have the other's experiences. There's just now only one of you. (You wouldn't truly be dead until every copy of yourself was destroyed. Even archival copies.)

    Everyone deep down knows there's something more to the brain than just neurons. I'm an atheist, so I'm reluctant to call it a "soul"; I'll call it the "software" instead. Creating a consciousness won't happen by just scanning a brain, any more than a TEM can create a Linux box.

    I know neuroscience and psychology have discovered that certain areas of the brain are associated with different functions, but we still don't know the exact mechanics involved. ("Some neurons fire, a few chemicals are released, and then you get scared.") The Cat Cam [slashdot.org] a while back was damn amazing, but it's still fundamentally infrastructure. I'd like to know what kind of research is being done into how the brain actually stores information: mechanically, how does a brain interpret a pattern of photons to mean "fire" (let alone "I shouldn't touch that")?

    All the big, grandiose AI projects to build sentient machines have failed because we simply don't know how it's done. (That, and the hardware wasn't nearly powerful enough.) What's going on in regards to this research (the neuroscience research, not the AI research)?
  • by Frodo ( 1221 ) on Wednesday January 26, 2000 @09:18AM (#1333949) Homepage
    There was also an interesting thought by Stanislaw Lem (if I remember right) about a teleportation method, as described by some sci-fi writers, that works by recording your molecular structure, passing it over the wire, and then assembling an exact copy on the remote end. The question is: what is done with the original? If it's dissolved, it's basically murder, and creating a copy on the other end doesn't make it any less so. If it's not dissolved, then the user didn't move anywhere at all; he was just copied.

    And now I have yet another thought: what about an illegal brain-record copying industry, a Brainwave Copyright Act, and some Norwegian hacker who would write an open-source brainwave decoder & recorder? Uff, I'm afraid even to think about it.
  • Does this mean that we have artificial life, or merely a perfect simulation? The program will only manipulate register contents, which are not connected to actual physical realities.
    well, your brain only manipulates chemicals and electric currents, which are no more connected to physical realities than CPU registers. we're already a "simulation" if you will; it's just running on meat-hardware. it also happens to have (presumably) evolved there.
    We will not be able to find out unless we (personally) undergo such a transfer...
    no, by undergoing such a transfer we will not find out anything deep; we'll just see whether the technology works, and have two entities convinced that they are the real one, with one of the two possibly having some technical difficulties (interface imperfections) with severe psychological consequences. perfect the technical aspect enough, and the simulated one doesn't have a way to know that it is simulated just by introspecting and observing the universe. and then your fundamental problem of "what is a consciousness" remains.

    the core problem is that consciousness is strongly tied to short-term memory, which works only one way (you remember the past, not the future -- except for Patrick Moraz's Future Memories). yet we experience time going forwards. the end result is that we have no clear intuitive picture of what it really means to duplicate a consciousness. assuming that the technology does make it possible unobtrusively, no-one doubts that the scanned guy won't feel his consciousness duplicating, or anything like that. he'll just continue to be himself, and if the body dies later, that consciousness will go with it.

    if we take a purely external, descriptive stance, there is really no problem at all: the subjective feeling of "me" doesn't count (I think the big word for that is "epiphenomenon"), and you have two individual intelligences living in different universes, that happen to share a past up to a point. no problem. except that that doesn't make any distinction between the two, yet, if they scan me, I will still be mortal, while the scanned copy might well run forever, which is a mighty big difference.

    I don't know of any theoretical framework out there that can make sense of this mess. religions that believe in some sort of soul don't solve the problem either, they just change the terms: now the difficulty is understanding if and how it can be duplicated, and if not, if and how there can be consciousness without it.

  • This whole theory of immortality through virtual clones is a fallacy. If you have a copy of yourself living on, you yourself are not experiencing what is happening; other people simply experience "you" happening. So, for now, the best bet is cryogenically freezing yourself, which right now is stupid, since the freezing expands and breaks all essential organs. So, God meant for us to die, and we are built with an expiration date. Stop trying to fight it. Now, utilizing the human brain is good for processing power, but wouldn't it be more feasible to design a system that works "like" our brain, with more optimizations, instead of trying to get wetware in a box? I think so.
  • But if we're able to emulate a neuronal web in silicon, we could "grow" a new one. If your DNA was analyzed, once we know which genes do what, we could combine that of you and your spouse, produce a mixture similar to what happens presently, and emulate the growth processes to produce a new brain (giving it womb-simulation stimulus until birth). Then you get to raise the child. (If full DNA details are not available, the brain growth process could be simulated using the characteristics of your two brains as the source...if both of you have a large hypothalamus in your brain then the child would also get extra growth of that structure)
  • by LucVdB ( 64664 ) on Wednesday January 26, 2000 @07:44AM (#1333975) Homepage
    The sliced-up person mentioned in the article has his homepage here [nih.gov]. There's also a nice Java applet to view slices of him here [syr.edu].
  • Before Slashdot turns into a beehive of apprentice philosophers about the question ``is it me if my brain is copied inside a computer?'', take the time to read, reread or at least consider reading ``The Mind's I [barnesandnoble.com]'' by Doug Hofstadter and Daniel Dennett (Fantasies and Reflections of Self and Soul). There could hardly be a better written book on the subject. Also of related interest is ``The Society of Mind [barnesandnoble.com]'' by MIT AI lab's cofounder Marvin Minsky.

    Personally I don't believe in the workings of the human brain being replicated by a computer in the near future, but I do believe the philosphical questions raised by that possibility are of interest.

  • A common solution to this is what's known as the Moravec Transfer (after Dr. Hans Moravec, in his book Mind Children. It requires powerful nanotech, so it's a bit off, but...
    The theory is:
    1) J.Random nanobot swims up to one of your neurons.
    2) Computer communicating w/ nanobot starts simulating neuron
    3) Once computer + nano simulate neuron exactly, nano replaces neuron
    4) Repeat until entire brain is inside computer
    5) Disconnect brain
    Congratulations...you're on a computer.

    (Note that this explanation is mostly Eliezer S. Yudkowsky's; I just paraphrased it. For great info about ultratechnology and the future, including AI, nanotech, uploading, life extension, etc., I highly recommend http://singularity.posthuman.com [posthuman.com], his home page. Great site.)

    Dragonsfyre.
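    The five steps above can be made concrete with a toy, runnable sketch. Everything here is invented for illustration: the "neurons" are just numeric gain functions, and the "emulation" is a trivial curve fit. Real neurons are nothing this simple, and the Moravec Transfer itself remains speculation.

```python
class Neuron:
    """A toy cell: responds to a stimulus with a fixed gain."""
    def __init__(self, gain):
        self.gain = gain

    def respond(self, stimulus):
        return self.gain * stimulus

class Brain:
    """A bag of toy cells standing in for the biological brain."""
    def __init__(self, gains):
        self.cells = [Neuron(g) for g in gains]

def emulate(neuron, probes=(0.5, 1.0, 2.0)):
    """Steps 2-3: observe the cell until a model reproduces it exactly."""
    gain = neuron.respond(1.0)          # for a linear toy cell, one probe suffices
    model = Neuron(gain)
    assert all(abs(model.respond(p) - neuron.respond(p)) < 1e-9 for p in probes)
    return model

def moravec_transfer(brain):
    """Steps 1, 4, 5: replace each cell with its simulation, one at a time."""
    for i, cell in enumerate(brain.cells):
        brain.cells[i] = emulate(cell)  # the swap: simulation stands in for tissue
    return brain                        # behaviorally identical, wholly simulated

brain = Brain([0.3, 1.7, 2.2])
uploaded = moravec_transfer(brain)
print([c.respond(2.0) for c in uploaded.cells])
```

    The gradualness is the whole point of the argument: at no step does the subject's behavior change, which is why proponents claim continuity of identity is preserved.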
  • I think they still have problems to solve, more important than being able to convert neurons to bits.

    There's a long way to go before they can solve the psychological problem. How will "you" emotionaly cope with not being a bag of mostly water? Will the transfered brains develop phantom body pain?

    What about the emotional needs that are tied to physical needs, such as touch? Will you get hungry?

    I'm sure all of these problems are solvable, because in the end a brain in a computer is still a Turing machine. I just don't think we'll see a workable transfer in the estimated 50 years. Maybe 500.
  • BraINE Is Not Emulation

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • I'm an atheist, and will be first in line for the brain scanner, but...

    If I have an AI copy of myself, do I have the right to turn it off?

    If I have a copy of someone else, do I have the right to manipulate its beliefs and allegiances to my own ends? Say I got myself a copy of a prof while they weren't looking, then screwed with the AI's mind to make it my perfect term-paper-writing slave. Killer app or horrible mind control? Is the virtual pain of a perfect simulation of a human not the same as the biological pain of the original? Probably not, because a simulation isn't unique. You can mindfuck it all you want, and when it finally cracks, just load another copy of the sane original; no harm done. Still, it doesn't seem 'right' either...

    Technology like this requires a lot more than technical advances. It will require a major advance in the complexity of popular (Western, anyway) moral thinking.
  • In my techno-religion, that's where the personality and "soul" of my computer is. My Red Hat 4.1 installation is now at 6.1, a half dozen revisions later (*sniff* my baby's growing up...), but I haven't reinstalled since 5.0 (repartitioning the hard drive), so since then I've considered it the same computer... even though the bits have moved through 3 hard drives, 3 motherboards, 3 CPUs, 2 cases, and lots of different extra junk over the years.

    With Windows, on the other hand, I get the dubious pleasure of starting with a fresh new computer soul every year. This time it'll be because, after uninstalling and reinstalling various versions of DirectX, sound card drivers, etc. and fiddling back and forth with IRQ/DMA settings, I can't fix the "sound stops playing after .25 seconds" problem that started after I uninstalled a game this Christmas.
  • by GoodPint ( 24051 ) on Wednesday January 26, 2000 @07:45AM (#1333999)
    If this allows the mind (spirit) to live on after your corporeal form has rotted away (departed) from this earth, then will we have achieved heaven on earth?

    Presumably external stimuli will have to be provided through some interface with the external world. Thus you will "see", "hear", "taste" and "touch" etc based on what is fed to you by the storage machine's interface.

    If so, "fake" stimuli would be able to put you in any situation you desired. Add a little feedback mechanism and you can create you're own personal version of heaven, and change it at will!

    Strike me down with a lightning bolt!

    GoodPint

  • You don't know how to decode sensor signals, but some people do. Search for "cat eye". Optic nerve signals decoded to a GIF.
  • This would clearly spell the end of pornography as an industry: anything you willed would be reality for you. Which means no more going down to the store in an overcoat for "Hot Donkey Action"!
  • You're correct, I already know. I trust based on his past work that Dr. Kurzweil hasn't done something trivial. But just since you're so obnoxious, I'll go find out more about it.

    ...

    The program he wrote is called "Cybernetic Poet." You can learn more about it or download a binary for Win95/98 off the net at his Cybernetic Poet website [kurzweilcyberart.com].

    To summarize how it works:
    RKCP uses the following aspects of the original authors
    that were analyzed to create original poems: the (i)
    words, (ii) word structures and sequence patterns
    based on RKCP's language modeling techniques (while
    attempting not to plagiarize the original word sequences
    themselves), (iii) rhythm patterns, and (iv) overall poem
    structure. There are also algorithms to maintain
    thematic consistency through the poem. RKCP uses a
    unique recursive poetry generation algorithm to achieve
    the language style, rhythm patterns and poem structure
    of the original authors that were analyzed, without
    actually copying the original authors' writings.


    He also has data for how his program fared on a limited poetry-based Turing Test [kurzweilcyberart.com]. To summarize:

    The above 28-question poetic Turing test was
    administered to 16 human judges with varying degrees
    of computer and poetry experience and knowledge. The
    13 adult judges scored an average 59 percent correct in
    identifying the computer poem stanzas, 68 percent
    correct in identifying the human poem stanzas, and 63
    percent correct overall. The three child judges scored
    an average of 52 percent correct in identifying the
    computer poem stanzas, 42 percent correct in
    identifying the human poem stanzas, and 48 percent
    correct overall.


    Sure, he gave the program pretty high-quality input, too (i.e. Keats); this isn't just the algorithms showing. I find it a good example that the converse of garbage-in-garbage-out is also true.

    What surprised me about that particular poem was that it was actually better (IMHO) than something I could have written. I'm used to that in chess, but not in poetry. That's not a Turing test, but I'd argue it's a pretty damn relevant test.

    --LP
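    The RKCP summary quoted above describes learning words, word-sequence patterns, and structure from source authors without copying their sequences outright. RKCP's actual algorithms aren't shown here, so the following is only a hedged, minimal Markov-chain sketch of that general idea: learn which word follows which in a source text, then generate new lines from those statistics.

```python
import random
from collections import defaultdict

# Opening lines of Keats's "Endymion", used as toy training data.
SOURCE = """a thing of beauty is a joy for ever
its loveliness increases it will never
pass into nothingness but still will keep
a bower quiet for us and a sleep"""

def train(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, length=8, seed=0):
    """Walk the model, emitting up to `length` words in the source's style."""
    rng = random.Random(seed)
    word = rng.choice(list(model))
    line = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:       # dead end: no word ever followed this one
            break
        word = rng.choice(followers)
        line.append(word)
    return " ".join(line)

model = train(SOURCE)
print(generate(model))
```

    A first-order chain like this captures only local word transitions; the summary credits RKCP with far more (rhythm patterns, overall poem structure, thematic consistency), so treat this as the crudest possible member of the same family.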
  • by dsplat ( 73054 ) on Wednesday January 26, 2000 @07:47AM (#1334023)
    The first and most obvious point I can think of is that this is not immortality in most of the senses that matter to me as an individual. Having a copy of me live on after my death does not change the fact of my death. I as an individual will experience the ultimate discontinuity.

    I was also thinking just this morning about the boundary between man and machine and the nature of computer-assisted intelligence. Wearable, networked computers are likely to become commonplace in the near future. The prototypes exist already; it is just a question of finding a balance between capabilities, durability, and price. But this point applies just as much to palmtops. If I use a portable computer to keep track of an enormous amount of information for me, it is still possible to distinguish me, the biological system, from the computer. As we have gone from portable computers, to laptops, to palmtops, to wearables, the accessibility has become more constant. However, there is still a physical distinction. And yet, they become more and more extensions of ourselves.

    We entrust to external devices the tasks of memory. How do we enhance the various aspects of that trust? How do we protect ourselves from loss of the data or loss of access to the data? How do we protect that data from unauthorized access? The answers are obvious enough technologically. Backups, redundant components accessible on short notice, encryption. But how do we build those into the system, the expectation, the patterns of use?

    What human activities and enterprises will this access render obsolete? If I had all the answers with any certainty, and knew which products would be the winners, I'd be rich.
  • by 348 ( 124012 ) on Wednesday January 26, 2000 @07:48AM (#1334028) Homepage
    Kind of gives a whole new dimension to the term Wearables [slashdot.org]

    With the rate technology is moving forward, this really doesn't sound all that absurd. However, I can't help thinking of that old movie "They Saved Hitler's Brain", where they pickled Hitler's brain and it got out of control. Horrible movie, right up there with "Attack of the Killer Tomatoes", but the concept was pretty cool.

    Aside from the obvious references that will come relating to ZEO et al., I think that for the most part this would be a very bad idea.

    Ultimately, however, the earth's technology-creating species will merge with its own computational technology. After all, what is the difference between a human brain enhanced a trillion-fold by nanobot-based implants, and a computer whose design is based on high-resolution scans of the human brain, and then extended a trillion-fold?

    We are already, as a society, getting lazier, fatter, and more reliant on outside influences. If we all end up getting wired, we will begin a forced evolution of the species. I don't think that would be such a great idea.

    Never knock on Death's door:

  • by bons ( 119581 ) on Wednesday January 26, 2000 @10:56AM (#1334031) Homepage Journal
    Boswash News: 26 Jan 2051

    A recent attempt to upload the memories of the collective Slashdot Hivemind was today blocked by a court order. The Industry to Determine Individual Online Thought Security (i.d.i.o.t.s.) lodged a petition in court to prevent the storage of the Slashdot Hivemind because it contains the still-secret code to DeCSS.

    DeCSS was a format used in the dark ages to play antique films. It is still used by collectors of rare films to view those old 2d classics.

    In related news, Star Wars Episode One is actually finally being rereleased to collectors of such items. This is the first time this award winning film has been released on DVD.

    this message prescanned by somelegalcorporationwhowishedtheycouldgetashorterdomainname.com

    -----
    Want to reply? Don't know HTML? No problem. [virtualsurreality.com]

  • If you restrict the marketing to hardcore geeks who spend all their waking time sitting in darkened rooms at a terminal, this wouldn't happen. The flow of digital data (what actually counts) would be much better than in RL, and as long as video and audio are on the standard of a high-res monitor and a good set of speakers respectively, it'd be good enough. :-)
  • I believe Dixie was a true construct, but it was Read Only. And yes, it used to be a person. The dump uses RAM from the deck it is interfaced with; if the deck is removed, the construct is returned to the state it was in when it was first accessed. It can learn new things, but that is retained only in RAM. Dixie used to be Case's mentor before he flatlined. A flatline is exactly what it sounds like (when the heart monitor goes flat, i.e. dead).
    --
  • Assuming this technology is developed, who is going to pay for the processor, CPU cycles, energy and hardware maintenance? What can the virtual person produce that justifies the cost of its existence?
  • That is one drawback of the slashdot forums. There is no interface to a spelling checker, or in my case my better half. To keep near topic, I need to record a dictionary into my brain. :-).
  • The Chinese Room is a classic case of argument by anthropocentric chauvinism. The key assumption is that the guy in the room is the only thing that can embody consciousness, and that the room as a totality cannot be counted as conscious. Basically, it is a mobilisation of unconscious prejudice about "what it means to be human" as a refutation of AI.
  • For each bad use of technology, there is also a good use.
    atomic bombs vs. atomic power
    Genetic screening vs. mass genetic engineering of the population
    medical techniques and cures vs. bacteriological warfare.
    Technology as such is not good or evil, only specific applications. I can well imagine someone like... say... Stephen Hawking being quite happy to ftp his brain.

    //rdj
  • by Telcontar ( 819 ) on Wednesday January 26, 2000 @07:48AM (#1334047) Homepage
    Let's assume that one can losslessly download all information from a brain, transfer it to a big digital machine (be it a computer, a neural network, an FPGA, or whatever) and switch to runlevel 5 :-)
    Does this mean that we have artificial life, or merely a perfect simulation? The program will only manipulate register contents, which are not connected to actual physical realities. Some philosophers argue that this property (the so-called "symbol grounding") is required for life.
    This is the case for any life form, but not for computers. Even if we cannot distinguish the behavior of such a computer from a real human's, does that mean it is alive? Or does it merely produce the correct output, like a non-Chinese human using a Chinese-Chinese dictionary (that always gives a perfect response to any situation in life he encounters)?
    We will not be able to find out unless we (personally) undergo such a transfer... even if the "artificial brain" claims it is alive, it might be part of the perfect simulation.
  • They can communicate by extremely low-power transmissions, or by modulating the body's electrical field. One designated nanobot can take the role of transmitter, sending the data back.

    Alternatively, in-brain communication could be done by having messenger nanobots do the rounds of the ones at neurons, relaying information.
  • by Otto ( 17870 )
    If it's dissolved, it's basically murder, and creating a copy on the other end doesn't make it less so. If it's not dissolved, then the user didn't move anywhere at all, it just was copied.

    Well, a lot of the quantum teleportation stuff that was floating around a while back had a pretty neat way of working: the original had to be destroyed in order for the copy to come into existence. It was a requirement.

    Of course, you could be pedantic about it and say: well, your body replenishes itself by digesting food into molecules that become part of "you". So, really, none of you is the same after however long it takes to totally replenish you (however long that is).

    But, let's ask the tougher questions:

    When does the food you eat cease to be part of the food and become part of "you"?

    When do "you" become self-aware? What counts for self-awareness? A human egg is not self-aware, nor is a fertilized ovum (sp?). At some point in the growth of a human, that human reaches critical mass, lacking a better term, and becomes aware of itself and the world around it as being two separate things. Why does this happen? When does it happen? What causes it to happen?

    Are you self-aware at all? Can you prove it to another person?

    Would an exact copy of you be self-aware? What if that copy wasn't physical, but virtual? (There is no spoon. :-)

    If a virtual copy of your body, all its functionality, right down to the atomic (quantum?) level, could exist and be perfectly reproduced in a virtual world, would it be self-aware? Would it still be you?

    I have no answers, just questions.

    ---
  • by Brecker ( 66870 ) on Wednesday January 26, 2000 @07:50AM (#1334070)
    Slashdot Headline, 2030:

    Today, a team of open-source programmers posted a new beta of BrainEMU, the open-source software that emulates the human brain. The head programmer explains that his motives are both political and technological: "If we can manage to make BrainEMU the thought-extender of choice, all discourse and future thought will be derivative works of a GPL work, finally ensuring the end to the encroaching Patent Machine.

    "For that reason, we are struggling to provide the highest-quality in human biological emulation."

    Release changes for the new beta:

    * Emotional thought now supported
    * Fine motor control optimized and vastly improved
    * Support for Creative Environmental Voice included
    * Bug fixes:
    * No longer crashes when one tries to say "hello"
    * Embarrassment turns face red, rather than eyes
    * Colors correspond more accurately to closed regions in the vision module
    * Taste seems to be working again (broken in beta 9)

    Remember, BrainEMU is still beta software. The authors assume no responsibility for any personality defects, mental disorders, poor job performance, erectile dysfunction [check the power cable], shortness of breath, or total failure experienced as a result of this product.
  • by Nugget94M ( 3631 ) on Wednesday January 26, 2000 @07:54AM (#1334119) Homepage
    I highly recommend the novel Permutation City [amazon.com] by Greg Egan for a very intriguing treatment of this concept.

    The concept of virtual clones, no matter what form they assume, opens a great number of ethical and moral issues. What rights should a clone of you have? Is your electronic clone a person while you are still alive, or only after your demise? What if your electronic clone wishes to commit suicide, should it have that right?

    I simultaneously find this concept appealing and appalling. It's hard to imagine ever feeling a sense of unity with running code, no matter how closely it mirrors my own brain image. Bottom line, such technology is equivalent to forking another process in unix. Sure, it's a perfect replica, but it's not me. If it walks like a nugget, and talks like a nugget, that's just not sufficient in my eyes.

    I can smugly tell myself that such a creation isn't me. After all, wouldn't I continue to retain my own consciousness after the creation of a virtual facsimile of my brain? In Egan's book, it's explained that typically the human is rendered unconscious during the transfer, and never regains consciousness after the transfer.

    In effect, people commit suicide to give their "copies" life. On the surface, this is most unsatisfying. To the person involved, how is this any different than simply dying? Is the knowledge that a clone of yourself will continue to persist sufficient?

    Without a seamless, unbroken consciousness, can you maintain your identity? I tell myself no; I am me, and I know that because yesterday I was me, and the day before. But am I just tricking myself? For all practical purposes, the hours I spent last night sleeping were a complete cessation of consciousness. The "me" who woke up this morning isn't in any way linked to the "me" that went to sleep last night, other than the fact that I remember and believe that I am the same consciousness that provided my memories.

    I don't claim to have anything close to answers or even a solid theory. I just find the concepts involved very compelling, and I found Egan's book to be a wonderful way of exploring these issues. I highly recommend it.

    (Here's a Barnes and Noble [barnesandnoble.com] link, if amazon.com offends you.)

  • by Pennywise ( 92193 ) on Wednesday January 26, 2000 @08:02AM (#1334194)
    This kind of reminds me of the old "If we only knew the exact position and velocity of all the particles in the universe, we could predict everything that will ever happen" argument that arose shortly after Newton.

    If you accept this argument (and ignore some of the non-predictable, quantum nature of things), then you also have to accept that you have NO free will. Everything you do, think, and say was determined at the birth of the universe, along with everything else in history.

    However, if we choose to say that this is not the case (i.e. the universe is NOT deterministic), then I think there are problems with getting a human mind into a machine (at least as the article proposes). Sure, you may one day develop the technology to get a "snapshot" of my brain, but what about some of the things happening at a quantum level? A few years ago I read an article (the name of the author escapes me, but I think he's fairly well known) about how it may be this quantum activity that allows our brains to have "consciousness". Does anybody else know the article/author's name?

    Anyway, my point is I don't think that a snapshot of the neurons and transmitters in my head can FULLY represent what is "me".

  • by Morgaine ( 4316 ) on Wednesday January 26, 2000 @07:37PM (#1334226)
    You're entirely mistaken. Your emotions stem from a particular configuration of internal triggers that both control and are affected by the operation of the complex biochemical machine that is you. If that configuration changes, your emotions can change.

    For example, take something that you consider fundamental, say sex, love, desire for sunlight, craving for creamcakes, or whatever. There is no particular reason why any of the feelings, senses or emotions associated with these things should not be triggered by something else altogether, if your biochemistry is reprogrammed: e.g. you might be aroused sexually (massively and irresistibly) by the sight of the letter Q, the colour purple, by solving a quadratic equation, or by touching palms with another being (real or virtual) as in Barbarella, say. There are absolutely no preconditions or limits in this direction, and it's false to assume that your current biological makeup says anything at all about your future desires as a living being.
  • by ianezz ( 31449 ) on Wednesday January 26, 2000 @08:10AM (#1334241) Homepage
    Mmm... Now I see it! A conversation, 50 years from now:

    -- "Hi Scott, any news today today?"

    -- "Oh, yes. Just a file, anyway".

    -- "Anything interesting?"

    -- "Look. I'm extracting it just now... well, here's the README... ok, it's GPL"

    ...time passes...

    -- "Uhm, maybe it's useful. What was the URL?"

    -- "iftp://iftp.BrainsRus.org/pub/apps/gpl/rms_20.5a- 1.tar.gz"


    That is, RMS is finally able to release his whole brain under the GPL ;-)

    Just a joke, couldn't resist.
    ---

    [1] IFTP: Insanely Fast Transfer Protocol.
  • by Foogle ( 35117 ) on Wednesday January 26, 2000 @08:11AM (#1334242) Homepage
    That's the million-dollar question right there. Take a look at Star Trek for a thought experiment. Scotty (or Chief O'Brien, or whoever you want) beams the Captain up from Planet X. All of the Captain's molecules are completely disintegrated on the planet and then copies of them are created, in the exact same sequence, back on the ship. Is it really the same person?

    From all outward appearances, yes it is. Of course, from all inward appearances (that is, to the Captain) it is also. But is it? I guess it depends on whether you believe in a soul or not.

    Another fun time with transporter tech: Remember the episode where Riker finds his clone living on the abandoned planet? A transporter malfunction had bounced a "copy" of him to the planet's surface. So who's the real Riker? Following the last example, neither of them are the real Riker, because the "real" Riker got disintegrated on his first day at the academy, at which time he was replaced by a complete duplicate.

    What have we learned here? Star Trek has all the answers; you just have to look.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."
