Downloading The Mind 515
bluemug writes "The Canadian Broadcasting Corporation's popular science radio show Quirks and Quarks aired a piece this weekend about Ray Kurzweil's ideas on downloading human minds to silicon. (The interview is available in MP3 or OGG.) Kurzweil figures we'll have strong AI by 2029 and be able to copy a human mind about a decade after that. Book your appointment now!"
I've got a terrible cold.... (Score:3, Funny)
Sure! (Score:5, Funny)
Yeah, I'll go check it out on my flying car, while the robot takes care of things at home.
Re:Sure! (Score:5, Funny)
That's soooo last decade. Surely you will be using your household teleportation booth?
You could combine it with a trip to the tactile 3D hologram suite.
At least tell me you have a subspace communications port or how else will you download the 1Tb bug updates to Microsoft Windows XXXP with "use this browser or else" Interuniversenet Explorationer?
Not that I am doubting there's enough silicon out there for a few brain dumps, no, of course not. And anyway Skynet will protect us from the Matrix.
Troc
Re:Sure! (Score:5, Insightful)
The problem with depending on hardware and network advances to drive his vision is that software engineering simply cannot keep up with the pace of advances on the hardware front. Anyone who has ever read "The Mythical Man-Month" understands this at a basic level. Humans simply cannot organize themselves well enough to tackle software projects of the magnitude that Ray envisions, at least not by 2029.
Ray dismisses this argument by saying we'll have software that writes the software. Well, there's a tautology for you. If you can't write the software you need because it's too complicated, how can you possibly be expected to write the software to write that software? Genetic algorithms are useful for some very specific trial and error sorts of problems. But using them to random walk our way to a billion lines of debugged, functional AI code seems a bit of a stretch.
My money sure isn't on Ray's pony...
Re:Sure! (Score:3, Interesting)
I agree with you. To be able to solve a problem using a computer, you have to know how to solve the problem in the first place. Kurzweil (and his strong AI buddies) are counting on some "emergent miracle" to occur.
Now, such an event occurred in nature (i.e. evolution), but it took a little longer than 30 years. :-)
Re:Sure! (Score:5, Insightful)
The capacity for thought we have is an intensely complex combination of the neural processes of survival and reproduction, with all those billions of years behind it, plus the geologically recent development of a whole lot of extra cognitive juice in the frontal lobe department, plus a couple of million years of tweaking this wetware system in the context of social, tool-using behavior, plus several tens of thousands years of social behavior combined with the meta-social instruction of language, art, text and such...
We have some bare inklings and theories of how we acquire language, intelligence, social functioning. The barest inklings. We're working on it. There is still a helluva lot of controversy on what exactly intelligence is, and no end in sight.
By dint of enormous effort we have computers that can take a stab at interpreting the meaning of isolated phrases based on context and a whole lotta cultural and semantic training by humans. The most powerful computers built for the job can hit-and-miss beat the finest human chess players... a game with a fully mathematically limited scope which is almost entirely susceptible to a brute-force approach. "Seeing" computers can be trained to make some decent interpretations based on heavily patterned information. Voice recognition still has to be tuned to every individual, and it's pretty damn iffy for all that. Nowhere near a computer that can hold anything resembling a conversation.
So where in hell do we get an estimate like "Strong AI by...?" As far as I'm concerned science has barely framed the question of what that would mean... and only in qualitative terms at that. So I'll tell you where these pointless predictions come from: ballpark some meaningless figure about biology - numbers of neural gaps, firing rates, impulses per second, whatever. Connect it in some arbitrary manner to some measurable function in a computer, extrapolate based on some law of technological development with far less than a century of statistical evidence and no underlying mechanism whatever behind it (statistical evidence without an explanation is ALWAYS suspect in interpretation) and - voila! - you're a futurist. Or, as I like to say, a worthless dumbass.
And how do you get from there to the process of downloading consciousness, when there is not even an inkling of a glimmer of the slightest valid theory about how an active and continuously shifting neurochemical process of personality and intellectual template, stored memory and present cognition (not to even touch the primal, the emotional, the glandular, the spiritual) gets translated into something that can be interpreted by a machine, or stored in a meaningful sense, or caused to be active outside a biological framework? Well, you just pull that one right out of your ass, because it doesn't have even the flimsiest basis in the "reality" of doodling with a few facts and figures on your scientific calculator.
I'm not sure exactly why, but the idea of people making careers based on this bullshit makes me so mad I could kick puppies.
Re:Sure! (Score:5, Informative)
Then you do not understand genetic algorithms. If you are a programmer I recommend you read up on them; they are far more powerful than a simple random walk. Mathematical analysis shows tremendous implicit parallelism. You aren't merely working on X individuals, you are working on X individuals times Y schemata, where Y is a monstrously large number. Mutation is the least significant thing going on in the evolution process.
Unfortunately it is too complicated and mathematical to explain here, but if you are up for it try this Google search on "genetic algorithms" "implicit parallelism". [google.com]
Remember, this is the process that created humans. When people hear "evolution" they usually think "mutation". Mutation is almost insignificant. The power lies in recombination and immense implicit parallelism.
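For the curious, the flavor of this is easy to see in code. Below is a minimal toy GA of my own devising (solving "OneMax": evolve a bitstring of all 1s); the function names and parameters are made up for illustration. Note how rare mutation is; selection plus one-point crossover does nearly all the work:

```python
import random

def onemax(bits):
    """Fitness: count of 1-bits (the classic toy GA problem)."""
    return sum(bits)

def evolve(pop_size=40, length=32, generations=60, seed=1):
    rng = random.Random(seed)
    # random initial population of bitstrings
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if onemax(a) >= onemax(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)   # one-point crossover (recombination)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.01:          # mutation, deliberately rare
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(onemax(ind) for ind in pop)
```

Random initial strings average 16 ones out of 32; after 60 generations the best individual is typically at or near the optimum, even though mutation fires on only about 1% of offspring.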
Re:Sure! (Score:5, Insightful)
My Windows 2000 SP2 Athlon doesn't understand HFS+. My OS 10.2 iMac needed a lot of help understanding a
And neither of them knows anything about the cordless phone, nor have they met my washer and dryer or the sexy new fridge just 3 meters away.
As for all the awareness coming to the desktop near you, I'll state here and for the record: each new CPU from Intel and AMD will be brought to its knees by the new versions of Gnome, Windows, and Quake, leaving the desktop user with the same number of free cycles as before.
Now, for this mythical AI that's surely coming by 2040, I say poppycock! My stance towards such "advanced thinking" and "futurists" is the same stance I have towards people telling me the World is ending right after the World Series.
I didn't buy this whole Strong AI nonsense a while back, but I did read one of his books. I was left with a sense of wasting my time then.
I popped over to Ray Kurzweil's site and poked around. This bit got my attention.
"If we can combine strong AI, nanotechnology and other exponential trends, technology will appear to tear the fabric of human understanding by around the mid 2040s by my estimation."
Those are a lot of "ifs". Strong AI, which is a buzzword. Nanotechnology, which is something that is built 1 or 3 or 10 at a time and photographed, but does nothing at this point. And my favorite, "other exponential trends". In other words, this whole idea of a Bowie-esque Savior Machine depends on crap he doesn't understand or can't put into words, but which he is sure is coming.
Poppycock.
Re:Sure! (Score:3, Informative)
Lacan, a French psychiatrist dude (hotly contested as to his value as kook or genius), suggested that in the developmental stages the child sees itself as a shattered assemblage of body parts and functions. Then the child goes through a stage ("the mirror stage"... excuse me if I get this wrong, it's been a looooooong time for me) where the child starts to assemble a single sense of self that it can coherently call "me".
As the subject continues through life, it keeps constructing its sense of self by negotiating the symbolic interactions between itself and the world around it. In effect, the self and the body, while coherent, are embedded in the language and social structures of the world around them. It's more than its mere physical self. It's a conversation with the environment and the socius. And we can just "download" that?
(Sorry if the arts-speak is a bit heavy there folks, but it's the clearest way to put it)
did I read this right? (Score:2, Funny)
Time to invest in a better couch!
CC
After being copied (Score:4, Funny)
Wow! what a new idea! (Score:2, Funny)
Re:Wow! what a new idea! (Score:4, Funny)
Re:Wow! what a new idea! (Score:2, Funny)
I know a girl like that. The problem is, we were being intimate once, and now they call me "Stumpy"
John Varley explored this in his novels as well (Score:4, Interesting)
He has one hell of an imagination and I highly recommend him.
Eternal life? (Score:5, Interesting)
Re:Eternal life? (Score:5, Insightful)
It wouldn't work like that. Imagine that the copy of your mind is uploaded into a new body before you die. Do you think your consciousness would suddenly transfer to the new body? All the copying could do is create a new consciousness in the new body, your old one (ie, you) would still grow old and die in the old body, never to return.
This is also the argument against Star Trek transporters. You die each time you use one but a new person is created at the other end that thinks it's you. You don't know anything about this, of course, you've just been disintegrated!
TWW
Re:Eternal life? (Score:2, Insightful)
Re:Eternal life? (Score:3, Interesting)
Re:Eternal life? (Score:5, Insightful)
I honestly don't think words like "dual consciousness" or "death" even apply in situations like this; we need some new vocabulary.
Re:Eternal life? (Score:3, Funny)
Only on Slashdot...
Re:Eternal life? (Score:4, Insightful)
Our bodies are just as much a part of ourselves as our brains are. Also, don't forget that the makeup of brains is neurons and nerve cell connections. Well, surprise, surprise--we have nerve endings all over our bodies, and I'm willing to bet that they're used, albeit at a low percentage perhaps, when we think and process the world as well.
As far as I understand it, we haven't yet developed the ability to remove a brain and find out what it's thinking -- it only works when it's inside a human body. And unless I'm out of the loop, brain transplantation has not yet been done on humans [216.247.9.207]. So far, "brain transplants" has meant inserting healthy cells into Parkinson patients. In this case, the individual cells are simply subsumed by the whole brain and used as dopamine factories.
The brain is not a hard disk or cpu. It's not running linux, windoze, or even (despite Steve Jobs' assertions) OS X.
Our understanding of the brain organ, and by extension, the "mind" -- which may or may not overlap 100% with the brain -- is so woefully inadequate as to make any talk of uploading or downloading anything on it silly at best.
Re:Eternal life? (Score:3, Insightful)
The cold was just an example. There are many other physical experiences which affect the way you think that have absolutely no direct physical effect on the brain itself. For instance, I find that my thinking dulls and I feel depressed when I have to pee real bad -- why, I don't know, but whatever is happening in the brain is not *directly* connected to my physical experience, but rather indirectly related.
I don't disagree that the brain is the "center" of the nervous system, but that's "all" that it is--the center of it. My point is not that we don't use our brains to think, just that they aren't the be-all and end-all of consciousness.
Thought experiment (Score:5, Interesting)
You inject yourself with a load of them, and they start absorbing neurons and taking their place. Eventually, your entire mind ends up running on these replacements, each of which behaves just like the organic neuron it replaced. You've been conscious all the way through.
Now, assume each of these is able to communicate its inputs to a machine on the outside which is able to simulate neurons en masse. They start to disable themselves and tell those around them to get their signal from this machine instead.
Eventually, you end up with a load of simulated neurons running on this machine, linked to the nerves through whatever method they use to communicate, plus a bunch of these neuronbots.
The simulated brain is functionally identical to the original organic brain, except now it has the potential to be physically a lot more robust. Continuity was never lost, and all that was destroyed was a few neurons at a time, whose function was replaced.
Re:Thought experiment (Score:5, Insightful)
During the transfer from inside to outside, suppose you use a machine that has redundant circuits. Each nanobot is replaced by a trio of simulated external neurons, so that they can check each other for errors. (If the presumably-binary outputs of the three disagree, the majority wins and the disagreeing unit syncs to the final result.)
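(Aside: the error-checking trio described here is classic triple modular redundancy. A toy sketch of the voting step, assuming binary neuron outputs as the post does:)

```python
def tmr_vote(outputs):
    """Majority vote over three redundant binary outputs.

    Any single disagreeing unit is resynced to the majority value,
    which is how the trio masks one fault per decision.
    """
    assert len(outputs) == 3
    majority = 1 if sum(outputs) >= 2 else 0
    return majority, [majority, majority, majority]
```

A single flipped output is masked; two simultaneous faults on the same decision are not, which is the standard TMR caveat.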
Ok, up to now it's exactly the same situation that you describe, but with additional reliability.
But after the transfer is complete... The trio-links are broken, resulting in 3 perfectly synchronized systems.
Which one is you?
I'm not sure that "continuity" proves anything. Maybe your original consciousness would die slowly, neuron by neuron, as the new consciousness comes to life. If it even does come to life.
Honestly, I don't think the human race yet has the terms to describe the problem, much less speculate about the answer. It's fun to talk about, but so was "how many angels can dance on the head of a pin?"
Re:Thought experiment (Score:3, Interesting)
I used to assume that my "inner voice", my internal monologue, was "me" in some fundamental way. And that when I slept, yeah, I was unconscious, but there must be some "pilot light of me" that was still burning.
After reading Dennett's "Consciousness Explained [amazon.com]", though, now I'm inclined to think that thought experiments like these reinforce the idea that there's no there, there. All 3 of those copies are you, for what it's worth, which isn't as much as you assume. (Though your implication "if it even *does* come to life" suggests they would be some kind of zombies... well, if those 3 are zombies, just imitating "real consciousness", and yet they ARE *accurate* copies, able to grow and learn just like 'you'... well pal, you're a zombie too.)
Some of this thinking informs the essay I advertise in my
Re:Eternal life? (Score:5, Interesting)
If there is a continuity of consciousness, which the transporters provide, then you are really the same person, you are just made of different atoms.
As for the down/uploading brain contents thing, well, that is a bit more complex - if you can copy the contents of a brain and upload them to another, then you have fork()'d yourself. Either you kill the old body and have its fork of your consciousness die, or you have two of yourself.
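The fork() analogy can be made literal. After a POSIX fork, both processes resume from the same point with identical memory, and neither is privileged as "the real one". A small Python sketch (the pipe message is just for illustration):

```python
import os

def fork_self():
    """Duplicate the current process. Both copies resume from the same
    point with identical state; the child reports in over a pipe."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # the copy
        os.write(w, b"I think I'm you")
        os._exit(0)                       # "kill the old body", per the post
    os.waitpid(pid, 0)                    # the original outlives its double
    os.close(w)
    msg = os.read(r, 64)
    os.close(r)
    return msg.decode()
```

The string comes back from a process that has already exited: the "original" reads the last words of its copy.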
I'm not sure if the human mind could cope with the trauma of first finding itself in a new body, then seeing its old body die. It sounds simple enough, but it would take quite an adjustment!
Besides, I don't actually believe it's possible; I find it reassuring that our brains are probably too complex for us to possibly understand.
Either route, uploading or transporters, is a great way to build a clone army of yourself though
Re:Eternal life? (Score:3, Funny)
You think it's tough for the _copy_?? What about the original?
"Okay, the transfer is complete, we're going to have to kill you now."
"Hey! Wait a minute! It didn't work! I'm still here in this body!"
"Well of course you are, but the copy of you is doing just fine, so you need to die now in order to maintain the illusion of continuity of consciousness."
"But I don't want to die! That's why I signed up for this!"
"Sorry, you should have read the fine print. Now, we have a number of Suicide Packages available for your convenience, or for an added fee you can take advantage of our Euthanasia Program."
Re:Eternal life? (Score:4, Funny)
> This is also the argument against Star Trek transporters. You die each time you use one but a new person is created at the other end that thinks it's you. You don't know anything about this, of course, you've just been disintegrated!
I don't have a problem with this. When I go to sleep, my current consciousness is discarded, and when I wake up a new consciousness (with all my memories) is created. This fact doesn't keep me awake at night.
Good night. Sleep well...
Re:Eternal life? (Score:2)
I'd love to pin it down. Great, concise story telling.
Re:Eternal life? (Score:2)
Re:Eternal life? (Score:3, Informative)
It has been shown that people can live and be conscious with only the right side, or only the left side, of the brain. Now imagine if you split your brain in half, and put your left brain in China and your right brain in the US. Then imagine you could somehow connect them wirelessly. Then imagine you connect your brain wirelessly to additional pieces.
The point being: who says your consciousness has to exist in a single location? Many would argue that consciousness is formed out of the complexity of the whole. If this is the case, maintaining consciousness through the whole upload/download transition is the same problem as having a distributed mind. If you can do one, then you can do the other.
Re:Eternal life? (Score:3, Interesting)
Re:Eternal life? (Score:5, Funny)
Please explain how tattoos last longer than 10 years.
Re:Eternal life? (Score:3, Insightful)
No, the slightly younger argument: what is consciousness?
Or, as I believe, we are simply programmed to believe (I do not necessarily mean this in a literal sense) that we have a consciousness, and in reality we are simply complex organic machines governed by various electrical and chemical processes.
Well, I don't see what that has to do with it. What is doing the believing in this model, if it's not the mind?
If you copied yourself, quantum particle for quantum particle (simplest theoretical case but hardest to pull off unless we discover some neato trick of physics)
In fact, totally impossible unless we find a way around Heisenberg, so this seems to be a totally invalid approach right from the start.
you would have two of you which both believed they were you, and each would be just as right as the other.
Except one has a discontinuity in their existence which the other does not.
The bottom line is no one knows what causes "mind" but pretending it doesn't exist is not a great step forward (unless you're George Bush Jr, in which case it's the best bet).
TWW
Re:Eternal life? (Score:2)
Re:Eternal life? (Score:5, Insightful)
too many questions...
Re:Eternal life? (Score:2)
I keep it in the closet.
Brain Dump (Score:3, Funny)
Re:Brain Dump (Score:2, Funny)
Re:Brain Dump (Score:5, Funny)
What if these mind download systems were MS brand? Contemplate the following hypothetical...
Doc: Good morning Mr. Jones, glad to see you're awake. You might be wondering why you're in the hospital? Well, it looks like your Brain (20)95 (tm) suffered a complete crash, and we had to restore you from tape. While we were at it, we installed the new Brain XP. You might find the new brain a little different - our new Media Play v.324 now plays ads directly into your thought processes once a minute. Can it be disabled? Well, no, not really... it's been integrated so that any attempt at removal will cause a complete cyber-lobotomy.
Also, Mr. Jones, we've upgraded the DRM portion of your brain. The new EULA - which by the way you've agreed to just by thinking with your new brain - now says that you must give all ownership of any independent thoughts you may have to Microsoft. Any thoughts that include open source software (including linux) will cause a nasty jolting pain down your side. Our lawyers tell us it is necessary to prevent the viral-like nature of the GPL from infecting your new brain.
Now, if you don't like your new Brain XP, you'll be happy to note that you have a 30 day trial period before you have to 'activate' your brain. It's a reasonable fee - just 50% more than the total sum of your yearly income and three of your children. You have only two children, you say? No problem - we'll mark you in for the increased payment amount. If after the 30 days you don't want to use your Brain XP, then you don't need to do anything - your brain will automatically shut down. We'll restore you from tape to one of our previously supported products.
Which reminds me - now that we've released Brain XP, we no longer support any of our previous Brain products. We hope this won't be an inconvenience to you.
Optimistic guess (Score:5, Informative)
In that half century and a bit, we still haven't got anywhere near a machine that we would consider intelligent. Even quite clever machines like ALICE are rather dimwitted.
Re:Optimistic guess (Score:3)
Much as I hate dogs, I have to say you're underestimating their intelligence greatly. The little tricks that any extant "artificial intelligences"--pretend those quotes are about as big as your monitor--do are barely on a level with photosynthetic plants bending toward sunlight, let alone any animal doing anything.
And the fact that some programs' responses to our stimuli take the form of occasionally coherent sentences doesn't make them much more "intelligent" than probability-weighted boxes of fortune cookies.
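In the spirit of that comparison, the "fortune cookie" level of conversation fits in a few lines; a hypothetical toy of my own, not any real chatbot's code:

```python
import random

def fortune_bot(responses, weights, seed=None):
    """Pick a canned response with probability proportional to its weight.

    No understanding involved: just a probability-weighted box of
    fortune cookies, per the parent post's description.
    """
    rng = random.Random(seed)
    return rng.choices(responses, weights=weights, k=1)[0]
```

Weight "Tell me more." highly enough and it can pass for a good listener, which says more about conversation than about intelligence.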
"Artificial intelligence" researchers do, however, tend to operate at a near-dog level of consciousness. Ever seen a dog doubt itself?
Transfering is the problem? (Score:5, Insightful)
Sounds like something that would still take centuries rather than decades, though. Then again, ten years ago I wouldn't have believed I'd live to see everyone carrying a phone everywhere they go.
Insanity? (Score:2, Interesting)
OK, so it's only a copy and you would still be you, but how is your mind copy going to cope going from flesh to silicon? Would it be some form of torture?
RIAA will try to stop this... (Score:5, Funny)
Me me me first (Score:3, Funny)
Great! (Score:2, Funny)
Copyright... (Score:4, Funny)
Maybe not enough... (Score:2, Insightful)
That depends... (Score:3, Funny)
Re:Maybe not enough... (Score:3, Insightful)
I'd be especially interested in the modifications we could make to brain function in a silicon type environment.
For example: say we can't travel faster than light (likely), an explorer ship with silicon humans could lower the clock speed of their brains to make the journey subjectively shorter - or pause them entirely until the automatic systems turn them back on. In moments of danger the speed could be increased until the passage of what we think of as time is almost halted compared to the rate they are thinking.
In the end we cannot make this step expecting to remain who we are. That is very much shaped by what we are, and a loosening of the physical restraints around our minds will result in changes to what we consider 'human'. Even living forever (or at least a long time) would change what it means to be human - think of the storage capacity you'd need for meaningful remembrance, etc...
so um... (Score:4, Funny)
doubtful (Score:5, Interesting)
* It seems Mr. Kurzweil thinks he knows what the mind is, what intelligence is and what consciousness is. In fact, these things are very abstract ideas about phenomena that we all experience, but we don't know what they are, or how they come into being. We can't even be sure the mind exists as a physical phenomenon (!).
* Mr Kurzweil also skips the following issue: if I download my mind's content onto a computer, does that mean I am immortal? Or is this just a copy and does the 'real' me just die?
And something I always get a little angry about. Kurzweil says:
We'd all like to live forever
Well, thanks, but I don't. I am quite happy with the notion that life, for all its splendour, has an end and that I can be sure to find rest someday. Please speak for yourself, Mr. Kurzweil!
I really think Mr. Kurzweil is trying to get some publicity with his fantastic statements. But it doesn't really show a deep insight into the philosophical, anatomical, physical etc. etc. etc. questions that consciousness and the mind pose. We understand so very little about these things today that I think his claims are stupid, maybe even ridiculous.
Re:doubtful (Score:3, Interesting)
So what else could it be? And what does the mind do that cannot be done by a physical phenomenon?
Mr Kurzweil also skips the following issue: if I download my mind's content onto a computer, does that mean I am immortal? Or is this just a copy and does the 'real' me just die?
How is this an issue? It is only one if you regard the brain as some supernatural phenomenon, as you describe in your first point. Otherwise, it'd indeed just be a copy that feels like it just woke up in another "body", which indeed may be "immortal" if it is in digital form. The original (if he survived being run through the copier/"reader") just lives on in the weird world where a copy of him, with a consciousness that was once identical to his, exists. But you're right that there's still a lot we don't understand. It may one day be a lot easier to copy those things to a digital form and emulate them without fully understanding them; you can always copy something you don't understand, as long as you understand the parts it's made of at the level of detail at which you want to make the copy.
Other stuff than physical (Score:3, Interesting)
It could be something that we cannot measure. If we are only what we can touch, then that leaves no room for free-will and you are a determinist in the strongest sense. People on /. have such little faith it seems.
It won't be silicon! (Score:5, Informative)
If we are ever able to do this, it won't be onto silicon. It will have to be some sort of quantum-based medium.
For further reading on the quantum brain start with Roger Penrose's book "Shadows of the Mind"...I think...It's been a while since I've read it.
Also, check out this link and other links from a "Quantum Brain" Google search:
http://www.consciousness.arizona.edu/hameroff/Pen-Ham/Orch_OR_Model/The%20Orch%20OR%20Paper.htm
Great Stuff!
Re:It won't be silicon! (Score:3, Insightful)
Having read through the debates a few years ago, I think there's a lot of "heart" on both sides. Some people, for whatever reason, want to believe that consciousness is just an algorithm. Some people don't. Some people want strong AI to exist. Some people don't. Endless arguments ensue, some of which are sensical, some of which are educational and/or advance the field.
Strong AI (Score:5, Informative)
Second, strong AI has yet to prove it's really working. Quantum mechanics and other (not-so) recent developments have shown that there may be much more in our brains than just bits and bytes. Not to mention that there are other places in the human body where significant, yet nearly unexplored "preprocessing" takes place (e.g. in the eyes).
Endocrine system (Score:5, Funny)
Ray Kurzweil's predictions never appealed to me (Score:5, Insightful)
I bought his book some time ago, hoping it would entertain me during a long business trip. But unfortunately, instead of being entertained, I found myself incredibly annoyed by its superficial approach and over-optimistic predictions about the pace at which technology will advance.
He seems to be oblivious to the obstacles people in the AI field are facing. Either he is underestimating the complexity of the human mind or he's overestimating the advances in AI research. Anyone who has read something about the way our mind works, like Steven Pinker's excellent book "How the Mind Works", can see what a challenge we're facing and realize we're not going to overcome those hurdles in 20-30 years. No way, José!
I dont think this can be done. (Score:2, Funny)
In 2029? (Score:5, Funny)
"In the next year" means: "We have a working prototype"
"Within three years" means: "We think that we know what we are doing and are applying for patents".
"In five years" means: "We have a great idea, but no f*cking clue as to how we are going to implement it"
"In ten (or more) years" means: "I ate way too much chili and had a really strange dream, which you may get a kick out of. But really I've got no clue at all, and my prediction is so far in the future that everybody will have forgotten about it if I'm wrong; but if I'm right I'll pull it out of my hat and wave it in your face!".
The Age of Spiritual Machines. (Score:5, Interesting)
A couple of points:
1. The estimates as to how much processing power is in an average human brain vary quite a bit. Is each neuron a bit? It can have multiple inputs - maybe it's something closer to a byte or a word? How and where is memory stored? Just having the raw processing power does not mean we will have the knowledge to USE it. We are seriously lacking in the knowledge department.
2. Social implications. How many good technologies are set back, or even stopped because the people are not ready for it? Do you really think that an average person will simply accept and approve of the ability to live forever in a computer? All the religions of the world are going to have a field day with that. Don't think so? We've had genetically modified crops for a while now. They're safe and far more efficient. Why are there still countries that will not allow such crops to be used for human consumption?
In the end it reminds me of a story I heard a long time ago. I'm going from memory, so you'll have to forgive me if I get the details wrong.
It happens during the height of Artificial Intelligence (when a lot of people thought we would have talking, seeing, thinking computers in just a few decades). A scientist gave a series of dates for when each of these milestones would be reached, and a colleague asked:
"Why are you saying this? All of those problems are quite hard. It is unlikely anyone will achieve those things in that time."
The first scientist answered:
"True, but notice that every date I've given is AFTER my retirement."
What a way to generate funding, eh? This kind of thing simply hurts the field in general.
And that's my gripe for this week. I feel a LOT better now, thank you!
Floppy minds (Score:5, Funny)
Brain Dump! (Score:5, Funny)
Kurzweil's Book (Score:4, Interesting)
Oliver Sacks' "A Leg to Stand On" illustrates how great an effect the loss of a single limb can have on the psyche of the victim. What would be the effect of the loss of the entire body? Kurzweil makes no mention of it.
I don't know about Ray Kurzweil, but I sometimes pay attention to parts of my body that are below my ears.
Re:Kurzweil's Book (Score:3, Funny)
Chicken before the egg? (Score:4, Interesting)
This isn't very useful - except maybe as a backup (Score:2, Interesting)
Metaphysically this is about as practical as putting your soul in a brass pot for storage until you get your new body ready.
Maybe as a backup - then in the case of brain damage, memories could be reinstated.
But for my money - I think I'd prefer to be a brain in a tank mounted on a giant robot.
MP3 or OGG (Score:2, Insightful)
Images of artifacts and /. discussions of the best codec or rate came to mind. Suddenly, people will be discussing whether or not the average person can identify a person as real or a copy - maybe a Heechee Turing Test or something.
Downloading only a third of the problem... (Score:5, Insightful)
Copying the information would require an extremely sophisticated, as well as invasive, set of technologies. Nanotech would probably need to be used to get the proper connections throughout the mind.
As far as simply linking the brain goes, many people have discussed 'plugs' and such that would intercept external sensory/control feeds, such as the optic nerve and spinal cord, and then allow that information to be manipulated/redirected. Thus signals to move a leg could be altered so that they would move a mechanical leg, or even something else entirely. In such a way people could transplant their brains into robotic/cyborg surrogates, not even necessarily human-looking. A fighter pilot, for instance, might just transplant his brain into the plane. Thus the command to 'run' or 'walk' might be mapped onto engine throttling or some such. External cameras would send a feed, acting as 'eyes', etc.
However, none of this makes any attempt at all to actually access stuff in reverse, from the brain. We record memories and such in the structure of the brain itself, and thus something would need to go into the brain to read those. And because the 3-D structure of the brain is so critical, preserving the meta-information of how the memories and such were encoded is also critical. Otherwise, you might end up with a record of memories and thoughts, but no way to actually connect those to form the personality.
Heh, I seem to be ending up with a long post, but the last thing to deal with, assuming successful duplication (including the meta-information), is "what now?" A way would have to be found to basically create an artificial neural net that would be able to recreate the exact structure of the original brain. Who knows, it might be possible to do such a thing virtually, having different sectors connected to each other and thus having a person exist in cyberspace. That, however, is pure speculation.
I actually find a lot of the stuff going on very exciting. Brains seem to last a lot longer than the bodies supporting them do anyway, so being able to keep your brain in a very strong container that could be moved from body to body would probably work pretty well, and could potentially be very doable. However, total artificial replacement seems a long way off. In some ways, what he is talking about in this article is a bit like cryogenics today: you can get yourself frozen, but for the time being there is no way to ever undo the process.
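The "recreate the exact structure" point above can be made concrete with a toy sketch. This is purely illustrative, not a real neuron model: the thresholds, weights, and decay constant are all invented for the example. The point it demonstrates is the parent's meta-information argument: the very same list of neurons behaves completely differently depending on the wiring map.

```python
# Toy leaky integrate-and-fire network -- illustrative only; all
# parameters (threshold, weights, decay) are invented for this sketch.
# The same "neurons" produce entirely different activity depending on
# the connection map, i.e. the meta-information the parent describes.

def simulate(connections, steps=50, threshold=1.0, decay=0.9):
    """connections: dict mapping neuron id -> list of (target, weight)."""
    neurons = {n: 0.0 for n in connections}
    neurons.update({t: 0.0 for targets in connections.values()
                    for t, _ in targets})
    spikes = []
    neurons[0] = 1.5  # inject a stimulus into neuron 0
    for step in range(steps):
        fired = [n for n, v in neurons.items() if v >= threshold]
        spikes.extend((step, n) for n in fired)
        for n in fired:
            neurons[n] = 0.0  # reset after firing
            for target, weight in connections.get(n, []):
                neurons[target] += weight
        for n in neurons:
            neurons[n] *= decay  # membrane potential leaks away
    return spikes

# The same three neurons, two different wirings:
chain = {0: [(1, 1.2)], 1: [(2, 1.2)], 2: []}
loop = {0: [(1, 1.2)], 1: [(2, 1.2)], 2: [(0, 1.2)]}
print(len(simulate(chain)), len(simulate(loop)))  # the loop keeps firing
```

Same cells, different wiring diagram: one spikes three times and goes quiet, the other reverberates indefinitely. Scale that up and you can see why a bare list of neurons without the connection metadata gets you nowhere.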
Re:Downloading only a third of the problem... (Score:2)
I actually find a lot of the stuff going on very exciting. Brains seem to last a lot longer than the bodies supporting them do anyway, so being able to keep your brain in a very strong container that could be moved from body to body would probably work pretty well, and could potentially be very doable.
This is a fallacy. The brain breaks down with age just like everything else: your skin, its supporting matrix, liver, kidneys, etc. You lose brain function like making long-term memories (harder to do, takes more time), the ability to think, etc.
Alzheimer's, Huntington's, and strokes are all tied to time and thus tied to aging. Your brain consumes more oxygen per unit mass than any other organ in your body, and yet has the least built-in protection against oxygen free radicals. Perhaps brain function requires radicals in some way (they are not of necessity a bad thing), and thus the ultimate, unavoidable cost of having a functional brain is that it damages itself as a cost of doing business. See: On the true role of oxygen free radicals in the living state, aging, and degenerative disorders, Imre Zs.-Nagy, Annals of the New York Academy of Sciences, 2001, Vol 928: 187-199.
Your brain degenerates just fine. It is merely a question of whether you croak due to heart disease, hardening of the arteries, cancer, thrombosis, stroke, Huntington's, Alzheimer's, etc, etc.
Just my luck (Score:4, Funny)
I'm sure that when I'm copying my mortal soul to the hard drive, that's exactly when the Windows box will blue screen. :-/
I wonder how tech support is going to field that problem?
Think of it as a thought experiment. (Score:2)
A decade after 2029? (Score:5, Funny)
Uhh, pencil me in for the 18th... just in case.
Uh huh... (Score:2)
Kjella
Why not? (Score:5, Interesting)
If you're claiming that we don't know that much about how the brain works, I'd agree with you. If you're claiming that it's going to be tough to figure out how it all works, I'd probably agree with you there as well.
However, if you're claiming that science can never understand the brain, I'd have to strongly disagree with you. As an atheist, I don't think there's anything so special about the brain. There's no soul there, put there by some random deity. There's no magic. It's just a lump of protein mixed with water, in essence. Sure, it's a marvellously complex lump of protein, but it's still a lump of protein. We've made a heck of a lot of progress understanding the behaviour of lots of other types of stuff using science. What makes this particular lump of protein any different?
Can anyone give me a non-religious argument why, at some stage in the possibly distant future, the workings of the brain won't be entirely comprehensible to humans?
Re:Why not? (Score:3, Funny)
According to some researchers, it is the ONLY lump of protein found so far that does not taste like chicken.
There must be something significant to that observation.
Godel, Omega, Algorithmic Complexity Theory??? (Score:2, Insightful)
Many (most) objects which perform a task do not do so solely by processing information and often can only be approximately simulated by computers. Just because the computer is the only device we have so far constructed which is capable of complex, flexible behaviour does not imply that all objects which are capable of such behaviour are computers.
On a side note, claiming that we will have strong AI by 2029 is like predicting that Bin Laden will be caught at 12:49 PM on the 12th of June 2003. My horoscope carries more weight.
Kurzweil lacks clue (Score:2)
The key failure of both books, as described for instance here [tof.co.uk], is that Moore's Law hasn't made computers any more intelligent yet, and doesn't show any particular evidence of doing so. What's disappointing is that people are still giving the same argument credence twenty years on.
Additionally, Kurzweil clearly either doesn't understand digital encryption and quantum computing, or thought it acceptable to fudge facts to make an argument. That kind of thing doesn't give me confidence in anything else the guy says.
I don't reject the possibility of one day doing brain dumps, or artificially intelligent machines, at all. I just dismiss the idea that the incremental advance of hardware technology is going to give it to us for free. We need fundamental breakthroughs from something else.
Old News (Score:5, Funny)
Hardly an original idea (Score:2, Informative)
Don't fancy it myself.
Highly Skeptical (Score:5, Insightful)
What counts as our "minds" are simply far too tied into the physical instantiation of our bodies. (Not that "mind" is too abstract, but that it's not abstract enough for separation from our bodies.) If I make a computer-based simulation of myself, will it get tired? Hungry? Thirsty? Itchy? Horny? Sick? If not, can it then get excited? Scared? Concerned? Bored? Will it have any emotional reactions at all, if all the standard physical stimuli are removed?
Even if all the "human" inputs are replaced or simulated -- you've still got an added problem of a new level of "hardware breakdowns" on whatever platform is running the simulation. Suddenly you've also got to deal with the various downtimes, pauses, glitches, etc., that will break the illusion of it being the same "mind" as in the original person.
People are simply too much a construct of their wetware to be able to remove their "minds" as a separate set of procedures.
Umm (Score:2)
There're a lot of reasons why Kurzweil is wrong (Score:5, Interesting)
Ugh, there are so many loose ends it's hard to pick one to pull on. Someone mentioned this before, but your body is more than just a bunch of neurons floating in fluid. Your mind, your person, your sanity rely on constant bodily feedback. Your mind isn't just the brain, it's the entire nervous system, head to toe. (Check out Antonio Damasio's books Descartes' Error and The Feeling of What Happens for a thrilling discussion of this.)
George Dyson's book Darwin Among the Machines doesn't address the stupendously anthropocentric idea of human intelligence on silicon but does explore some possibilities behind the emergence of intelligent (not necessarily conscious) systems on their own.
I read Dyson's book after stumbling across it browsing at a bookstore, only to learn that he lived about 2 miles from me! I went down to his boat shop and introduced myself and have had a few chats with him. He talked about Kurzweil a little bit and he actually gave me a copy of The Age of Spiritual Machines. At the time I was a naive fanboy (as opposed to the seasoned fanboy I am now) and asked him if he could write something in the book (I had him sign the Darwin book earlier). He declined, asking me with the ever present Dyson eyesmile, "What am I supposed to say? Sorry this book isn't as good as mine?" It was very humble humor, don't read it wrong.
I read Spiritual Machines and enjoyed it, if for no other reason than that it provided a fun exercise in saying "that's a nice idea, but it won't work for these reasons..." It addresses a lot of concerns, and the whole identity-dissolution theme was rather interesting to play along with. Still, I don't think that his future is a likely one.
Bah, I'm just rambling. Short end to a long story: Kurzweil's ideas are fun to read and worth the time spent if you have time to kill, but are highly unlikely. Copying humans into computers is a much bigger problem than just raw clock speed, which is what he boils it down to.
Here's a link to a page about the Kurzweilian Singularity [kurzweilai.net]. It's worth checking out if you haven't read any of this stuff before.
Not a new idea (Score:2, Interesting)
Refuting strong AI (Score:2, Interesting)
Still, there are many who argue that although machines may one day pass Turing's test, they will nevertheless lack the essential consciousness or awareness that humans possess. See John Searle's "Chinese Room" argument (from his paper "Minds, Brains, and Programs"). Nobody knows of a good, direct test for awareness.
Still others (Roger Penrose) do not rule out the possibility of genuine machine intelligence, but think that we have much to learn about our own minds before we can consider it seriously. Penrose specifically argues that our current understanding of science is too weak to incorporate an accurate model of conscious thought. But our science may change and one day become sufficient.
In any case, 2029 sounds like a very optimistic (pessimistic?) estimate.
Some SF books that explore this idea (Score:4, Interesting)
Re:Some SF books that explore this idea (Score:3, Interesting)
A more futurised one (i.e. 3000 AD mega-futurised) was Diaspora. The ending wasn't as tight as Permutation City's (endings aren't Egan's strong point), but the discussion of rights etc. is. I think Egan is an anarcho-syndicalist, from what I can tell. His book "Distress" deals with an anarcho-syndicalist utopia, and he has been involved with refugee rights movements and the like around Western Australia, traditionally leftie territory.
This seems slightly backwards (Score:5, Interesting)
It seems to me that the ability to copy a human mind is almost a prerequisite for strong AI. Sure, the "great AI winter" was at least partially due to the government funding the field enjoyed in the late 80s / early 90s drying up as suddenly as it had emerged, but AI has always been a field prone to premature predictions. It seems that with each new metaphor we invent for describing the human brain, we also convince ourselves that our minds really are as simple as our metaphors suggest. But Turing thought that human-level mimicry would be possible by 1990 (while at the same time vastly underestimating the quality of hardware that would be available in 1990).
There's a real possibility that we just aren't smart enough to figure out how we work, and so the only route to strong AI is to make monkey-see, monkey-do copies. And while procreation is a time-honored method of doing that, the structure of the brain suggests that serialized output was not high on God's list of priorities, and the biological format rather resists study. So I often think that we might have to be able to emulate the brain in silico, or in some other more easily studied medium, before we have a chance of understanding what makes that brain tick.
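For what it's worth, the timeline itself is just Moore's-law arithmetic, and you can redo it on the back of an envelope. Both input numbers below are contested assumptions, not facts: the brain-capacity figure is a Moravec-style estimate and the doubling period is the usual 18-month rule of thumb.

```python
# Back-of-envelope Moore's-law extrapolation of the Kurzweil/Moravec
# kind. Both key numbers are contested assumptions, not facts:
#   - brain capacity ~1e16 ops/sec (a Moravec-style estimate)
#   - doubling every 18 months from ~1e10 ops/sec circa 2002
import math

brain_ops = 1e16        # assumed ops/sec equivalent of a human brain
start_ops = 1e10        # assumed ops/sec of a 2002-era machine
doubling_years = 1.5    # assumed Moore's-law doubling period

doublings = math.log2(brain_ops / start_ops)
year = 2002 + doublings * doubling_years
print(round(year))  # lands in the early 2030s if the assumptions hold
```

Which is the whole problem: the date is exquisitely sensitive to a capacity estimate nobody can defend, and it says nothing at all about whether the software side keeps pace.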
Anyone got a potato? (Score:4, Funny)
No one understands how the brain works (Score:3, Insightful)
The bottom line is that this is hardly a science at all, just a lot of conjecture.
And just like that (Score:4, Funny)
Run for Eternity (Score:3, Insightful)
The ways he tries to achieve this goal are, for the most part, static. It would be harder to modify a detail or swap out a component of such an organism. Besides, such technologies are far too vulnerable to external factors and demand much more energy input than an ordinary organic, carbon-rich body. While it is hard for Nature, under Earth's energy balance, to build things from sources other than carbon, many organisms that tried failed or were kicked into Evolution's side roads. Why? Because all these "solutions" were quite far from optimal. Did you know that octopuses don't have hemoglobin but a copper-based protein (hemocyanin) to carry oxygen in the blood? Or that there is a small marine worm whose teeth are more than 80% copper? These things are exceptions, sometimes aberrations that the average conditions of Earth's habitat cannot support. They lived isolated, in particular niches, and cannot leave their environment.
Now, how does this bear on our problem? Well, this guy forgets more than 4 billion years of evolution and jumps us straight into a completely artificial organism. But under what conditions does this organism live? Human conditions! It is we humans who care for these silicon beings, model them according to our wishes and needs, and feed them with energy and data. Besides, to date not even Deep Fritz can approach the sensibility, reasoning and flexibility of a human. That is a machine that devours energy, that runs through billions of permutations to beat the speed of the human brain at one single task, and that is supported and developed by thousands of engineers. And someone considers this the Future? Give me a break. Dinosaurs were a lot smarter and more autonomous.
If something like Deep Fritz were left alone on Earth, it would meet something that even humans barely understand: the law that may lie behind Thermodynamics (not the Second Principle; that is probably a consequence of this law), which some biologists have been studying for several years. It is a law about how things interact. In a single system, at every moment there can be billions of interactions between its components. Some of these interactions are antagonists; one can succeed only if its antagonist is somehow weakened. The state of equilibrium is merely the situation where these interactions reach something like an energetic "agreement" among themselves. However, this does not mean the interactions disappear. Frequently some just become weaker but more numerous as other components of the system "repel" them, thanks to the more stable state those components are in (this is where some people see the appearance of Entropy). But these stable states are not eternal. They may change, globally or locally, and then all the other interactions may try to invade the castles of stability.
Why all this confusing bla-bla-bla? Well, take a human and a machine. Have the human improve the machine until it looks much like his mind. Now shoot the human and leave the machine alone on Earth. How long will the machine be able to survive?
Even if someone achieves the feat of creating an artificial mind much like ours, he will only be halfway there. This mind will need to be able to have a decent meal, to run from danger, and to get to the toilet from time to time. Besides, this mind will have a strong need to reproduce itself. Being alone in the Universe does not give good odds for eternity...
Wetness counts (Score:4, Interesting)
First, I don't think Kurzweil has said anything that Hans Moravec ("Mind Children") and Marvin Minsky didn't say a long time ago. Minsky speculated about machines transcending us, and Moravec long ago used Moore's Law to predict when computers will be as complicated (he thinks) as human brains. Kurzweil is recycling other people's ideas.
Second, Kurzweil (like other MIT hardware guys) talks about the brain with the underlying assumption that it is just a collection of processing units (neurons) connected by simple electrical contacts (dendrites and synapses). In fact, the entire body of a neuron is chock-a-block full of calcium channels and tiny pores that are regulated by hundreds of different chemicals. Every year, new processes are discovered. Some chemicals are moved into the cell by active molecular transporters. Some chemicals may move between regions of cells by gaseous diffusion. Not only will you have to scan the connections between each neuron, but you're going to have to mimic the action of all this oozy stuff in real time using silicon.
And what about hormones and polypeptides that regulate all kinds of activities at short ranges, and also throughout the body? "Thinking" and decision-making involve lots of input from centres that excrete tiny quantities of chemicals -- all of this will have to be "scanned" (whatever that means) at a molecular level. It won't do to merely list the size and position of 100 billion neurons and their 100 trillion connections. You'll have to model the far greater number of wet chemical processes on every neuron.
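Even setting the wet chemistry aside, the parent's own figures make the point. A crude sketch, with the per-synapse byte count invented purely for illustration (a real model would need vastly more state per connection: channel densities, chemical gradients, and so on):

```python
# Crude storage estimate for the bare wiring diagram alone, using the
# parent's figures. The bytes-per-synapse value is an invented
# illustration: just a target id and a weight, no chemistry at all.
neurons = 100e9           # ~100 billion neurons (parent's figure)
synapses = 100e12         # ~100 trillion connections (parent's figure)
bytes_per_synapse = 8     # assumed: target id + weight, nothing else

connection_map = synapses * bytes_per_synapse
print(f"{connection_map / 1e15:.1f} PB for a bare wiring diagram")
```

Nearly a petabyte before you've recorded a single ion channel or hormone level, and the wet-chemistry state the parent describes would multiply that by some large, unknown factor.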
In the 1940s some people thought everything would be "atomic" by 1990. Atomic rockets, atomic cars, atomic radios. Today, just substitute the word "computational" or "silicon" for atomic and you can blather about nonsense in the year 2040 without having a clue of what it means.
I think the brain's "wetness" is an integral part of its operation, and this makes it a very dynamic and complicated thing. To simply see the brain as a collection of tiny silicon CPUs wired together is naive. It's a theoretical model straight from the 1960s or earlier, before we knew much about the brain at all. A real breakthrough in Artificial Intelligence will probably arrive slowly, and probably be stimulated by people who learned modern (i.e. post-20th century) physiology when they were young.
Hence, I think the term "an expert in computers and artificial intelligence" is an oxymoron at this time.
Read a great story about this [Kinda OT] (Score:3, Informative)
In the future, everyone has a `jewel' implanted in their brain at birth. It's an optical computer that receives all your sensory data, then tries to replicate the external results of your brain activity. When you're young, it's way off, but it trains itself to match the responses of your real brain. One day, in your thirties, when your real brain is going downhill, you go to the hospital. They hook you up to another computer that keeps an eye on how well the outputs of the jewel match the outputs of your organic brain. If they match up, then they scrape out your meatware and replace it with non-sentient tissue that consumes just as much blood, glucose, etc. as your original brain, and can produce hormones for the rest of your body, while hooking up the jewel to the rest of your body. At that point, `you' are the jewel.
The cool part of this is that there's no discontinuity between `me' and `it'; the jewel will think the same thoughts as me, it will be me; in fact, it will even worry about dying when the organic brain is killed, since it thinks it is the original.
The ending was quite a cool twist, which I won't spoil here. It was a really good story tho, hopefully someone will remember it and post details.
Re:never happen (Score:2)
The brain is one example of a class of machine we can run a Human on. I'm quite sure there will be others, some smaller, faster and more efficient. One day we might even be able to build one.
Re:never happen (Score:2, Interesting)
Of course not. A program on the other hand may be able to.
Isn't this the 'Chinese room' illustration of why AI is impossible (Searle's, actually, not Penrose's)? It suffers from exactly the same confusion of hardware and software.
For those not familiar it goes like this -
+ Put a man in a room with a basket of Chinese symbols and a set of rules (in English)
+ the man is passed, through a delivery hatch, a piece of paper with a message in Chinese.
+ using the rules in front of him he composes a response from his basket of symbols
+ in this way the man is able to carry on a conversation with a Chinese speaker outside the room, who has no idea that he doesn't speak Chinese (Turing test)
The conclusion of the story is the question "does the man understand Chinese?", the answer being, of course, no. From this we are meant to infer the inherent ridiculousness of AI.
However, as any astute observer will note, the man is merely performing the function of a CPU, i.e. the hardware. The clever bit, the 'understanding of Chinese', is in the rules. The real question should be: "can such rules (i.e. a hardware-independent program) be written?"
I'm perfectly prepared to accept that they can't. There may be some as-yet-undiscovered reason why only the brain, or something like it, can implement a mind. But until such a discovery is made, I'd say the question is still very much open.
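The man-as-CPU, rules-as-program split is easy to make concrete. Here's a toy version of the room: the rule table is a made-up two-entry toy, nowhere near what a real conversation would require, which is exactly where the open question lives.

```python
# Toy Chinese room: the "man" mechanically applies a rule table he
# doesn't understand. The two-entry table is a made-up toy; the real
# question is whether such a table could ever be written at all, and
# whether it would have to be impossibly large.

RULES = {  # Chinese message -> Chinese response, opaque to the "man"
    "你好": "你好！",
    "你会说中文吗": "会一点。",
}

def the_man(message: str) -> str:
    """Looks up a response without understanding either string."""
    return RULES.get(message, "对不起，我不明白。")

print(the_man("你好"))  # a fluent-looking reply from pure rule lookup
```

Swap the man for a CPU and nothing changes, which is the point: all the "understanding", if there is any, lives in RULES, not in whoever executes them.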
Re:SF come true (Score:3, Interesting)
Old Man "Bob" is wheelchaired into the waiting room of the hospital, where we find the rest of his family dressed in black, obviously in mourning. "Why is everyone so sad" he asks. "We just came from your funeral."
You see, "Bob" had a stroke and died; however, thanks to recent technology, he was able to save a copy of his brain about 3 months prior. The doctors cloned his old body and reloaded the brain. Of course the tech doesn't copy that well, so the life expectancy of the replacement is about a month because of cancers, but it's enough time for the family to "bring back the dead", so they can all say their goodbyes in a way they couldn't the first time around. The only problem is that "Bob+1" didn't know he was only a copy, destined to die (again)...