Kurzweil on the Future
dwrugh writes "With these new tools, [Kurzweil] says, by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves.
This serene confidence is not shared by neuroscientists like Vilayanur S. Ramachandran, who discussed future brains with Dr. Kurzweil at the festival. It might be possible to create a thinking, empathetic machine, Dr. Ramachandran said, but it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. 'My colleague Francis Crick used to say that God is a hacker, not an engineer,' Dr. Ramachandran said. 'You can do reverse engineering, but you can't do reverse hacking.'"
Re:poverty of expectations (Score:3, Informative)
Re:Obfuscation (Score:3, Informative)
mid-age life crisis (Score:5, Informative)
Re:mid-age life crisis (Score:2, Informative)
I suggest you take a look at his actual research before you say such things. Here's a link to a presentation he recently gave at TED:
http://www.ted.com/index.php/talks/view/id/38 [ted.com]
Re:Where's my f'ing flying car dip$%^* (Score:3, Informative)
AI is completely different. The cost is in computing power, not dollars. Computing power is being driven down by forces around the world funneling billions into computers. AI is also being worked on around the world, since it can be developed incrementally rather than having to leap straight to Turing-ready. It's used in many decision-making computer systems, which again have billions of dollars being funneled into them. So we are constantly improving AI already. As for chips in our brains, there again are supporting technologies to work this out. There was an article about mind-reading robots a few days ago; study of the brain is big. Cellphones drive miniaturization, and mind-controlled limbs are coming out. Sure, it is more difficult, but the forces of capitalism are on our side. And capitalism tends to get its way.
That said, I think his timescale is way off. We will have computers as fast as the brain in 2029, perhaps, but they'll need to become more commonplace before we could expend that kind of power on playing with and testing AI. So I'd say late 203x. Chips in the brain have one obvious setback: like genetic modification, the government will surely stand in the way of the science. If people aren't comfortable with the idea, it'll get bogged down in testing phases. After that it won't get enough funding because, well, there probably isn't a big market outside
Re:Kurzweil Talk in Cambridge, MA (Score:4, Informative)
1) You don't die "instantly" unless the damage is very extensive. In particular, autonomous functioning in the brainstem can persist after massive "upper" brain damage.
2) You can damage large parts of the brain and have function rerouted around the damage - sometimes. There are large, apparently "silent" parts of the brain that you can remove without (apparent) problems. Read some of Oliver Sacks's stuff for some interesting insights into how the brain works on a macroscopic level.
Have you accepted Google as your personal search engine?
The future... (Score:2, Informative)
PS Anyone having trouble getting their rightful Karma bonuses despite still having 'excellent' Karma?
Re:Kurzweil Talk in Cambridge, MA (Score:5, Informative)
Have you seen the mess that a bullet going through a skull makes?
It's not the bullet that's the problem, it's the shockwave generated by its passage that does all the damage. It's called "cavitation" - this video [youtube.com] should help you understand it. If you carefully watch the last part of that video, you'll see that it causes the entire melon to explode outwards. Now imagine what that kind of force does to brain tissue confined inside a skull.
You can't compare that to shooting at computer components - they react completely differently, and are not affected at all by the shockwave. When you shoot a computer, only the part you hit is affected.
Re:Kurzweil Talk in Cambridge, MA (Score:3, Informative)
Re:mid-age life crisis (Score:1, Informative)
It's easy to just sit there and spout negativity. It doesn't require any actual work on your part.
Good day.
Re:Nah (Score:3, Informative)
I think you are talking about "black box" trading at quantitative funds. (I can't imagine that many companies ask Computron where to put their money). If that is what you are talking about, I think you are quite off base. The driver for black box/quantitative trading is speed, not any computer insight. A human can't receive a stock tick and trade off of it in 10 milliseconds. A computer can, and requires nothing remotely approaching intelligence to do so.
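The point above — that a tick-to-trade loop needs speed, not insight — can be illustrated with a toy rule. This is a hypothetical sketch; the thresholds, tick format, and function name are made up for the example, not any real trading API:

```python
# A trivial "black box" rule: react to one stock tick with a fixed
# comparison. Executing this in microseconds is an engineering
# problem, not an intelligence problem.
def on_tick(price, buy_below=99.0, sell_above=101.0):
    """Return an order tuple for one tick, or None if no rule fires."""
    if price < buy_below:
        return ("BUY", price)
    if price > sell_above:
        return ("SELL", price)
    return None

print(on_tick(98.5))   # ('BUY', 98.5)
print(on_tick(100.0))  # None
```

Everything "intelligent" here fits in two comparisons; the edge over a human trader is purely that the computer evaluates them in microseconds.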
Will not happen... (Score:3, Informative)
Never mind all the scientific and technical obstacles that even a non-scientific person could think of, let alone the philosophical issues (things we don't even have words to talk about yet).
Yet there is still a very simple reason why the prediction will not come true. Does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?
I've said it before and I will say it again. This is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.
But one thing is for sure, Kurzweil will die just like every other "prophet" before him.
Maybe not THAT low hanging (Score:5, Informative)
Unfortunately,
1. A neuron isn't a transistor. Even the inputs alone would need a lot more transistors to implement at our current technology level.
An average brain neuron takes its inputs from an _average_ of 7000 other neurons, with the max being somewhere around 10k, IIRC. The vast majority of synapses are one-way, so an input coming through input 6999 can't flow back through inputs 0 to 6998. So even just to implement that kind of insulation between inputs, you'd need an average of 7000 transistors per "silicon neuron" just for the inputs.
Let's say we build our silicon neuron to allow for 8k inputs, so we have only one module repeated ad nauseam, instead of custom-designing different ones for each number of inputs between 5000 and 10000. Especially since, as we'll see soon, the number of inputs doesn't even stay constant during the life of a neuron, so it must accommodate a bit of variation. That's 2^13 transistors per neuron just for the inputs, or enough to push those optimistic predictions back by 13 whole Moore cycles. Even if you believe a cycle is still only 1.5 years, that pushes the predictions back by almost 20 years. Just for the inputs.
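The arithmetic above can be checked in two lines. This is just a back-of-envelope restatement of the parent's assumptions (8192 inputs per neuron, one transistor per input, 1.5 years per Moore cycle), not a claim about real chip design:

```python
import math

# ~7000 average inputs, rounded up to the next power of two
inputs_per_neuron = 8192            # 2^13, covers the ~5k-10k range
# each Moore cycle doubles the transistor budget, so the extra
# factor of 8192 costs log2(8192) doublings
moore_cycles = math.log2(inputs_per_neuron)
years_per_cycle = 1.5               # the optimistic figure
print(moore_cycles)                    # 13.0
print(moore_cycles * years_per_cycle)  # 19.5 years of pushback
```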
2. Here's the fun part: neurons form new connections and give up old ones all the time. Your brain is essentially one giant FPGA, that gets rewired all the time.
Biological neurons do it by physically growing dendrites which connect to an axon terminal. A "silicon neuron" can't physically modify traces on the chip, so you have to include the gates and buses that can switch an input over to another nearby source, chosen from the thousands of available outputs of other "neurons". _Somehow_. E.g., a crossbar kind of architecture. For each of those thousands of inputs.
Now granted, we'll probably figure out something smarter, and save some transistors on that reconfiguration, but even that only goes so far.
There go a few more Moore cycles.
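The crossbar idea above can be sketched in a few lines. This is purely illustrative — the class and method names are invented for the example, not any real neuromorphic hardware interface:

```python
# Minimal crossbar sketch: each "silicon neuron" input is a row that
# can be switched to any one of N source outputs, mimicking a
# dendrite growing toward a different axon terminal.
class Crossbar:
    def __init__(self, n_inputs, n_sources):
        # connection[i] = index of the source output feeding input i,
        # or None if that input is currently unconnected
        self.connection = [None] * n_inputs
        self.n_sources = n_sources

    def rewire(self, input_idx, source_idx):
        # "growing a dendrite": point an input at a different source
        if not (0 <= source_idx < self.n_sources):
            raise ValueError("no such source output")
        self.connection[input_idx] = source_idx

    def read(self, input_idx, source_values):
        # one-way, like most synapses: inputs read from sources,
        # they never write back
        src = self.connection[input_idx]
        return 0.0 if src is None else source_values[src]

xbar = Crossbar(n_inputs=8192, n_sources=100_000)
xbar.rewire(0, 42)              # connect input 0 to source output 42
print(xbar.read(0, {42: 1.0}))  # 1.0
```

In hardware, every cell of that `connection` table is real switching logic, which is exactly where the extra transistors go.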
4. And that was before we even got to the neuron body. That thing must be able to do something with all those inputs, plus handle stuff like deciding by itself to rewire its inputs, or even (yep, we have documented cases) one area of the brain deciding to move to a whole other "module" of the brain or take over its function. It's like an ALU in a CPU deciding to become a pipeline element instead, because that element broke. In the FPGA analogy, each logic block is complex enough to also decide by itself how it wants to rewire its inputs, and what it wants to be a part of.
There are some pretty complex proteins at work there.
So frankly, imagining that one single transistor is enough to approximate even the neuron body itself is plain old dumb.
5. And that's before we even get to how wasteful we are with transistors nowadays. It's not like the old transistor radios, where you thought twice about how many you needed and what else you could use instead. Transistors on microchips are routinely used in place of resistors, capacitors, or whatever else someone needed there.
And then there are a bunch wasted because, frankly, no one designs a 100-million-transistor chip by lovingly drawing and connecting each one by hand. We use libraries of whole blocks, and software which calculates how to interconnect them.
So basically look at any chip you want, and it's not a case of 1 transistor = 1 neuron. It's more like a whole block of them would be equivalent to one neuron.
I.e., we're far from approaching a human brain in silicon. We're more like approaching the point where we could simulate the semi-autonomous ganglion of an insect's leg in silicon. Maybe.
6. And that's before we get to the probl
Re:mid-age life crisis (Score:3, Informative)
Actually, according to several sources, as mentioned on his Wikipedia page [wikipedia.org], his biological age is about 20 years younger than his chronological age (which is only two biological years older than when he changed his habits concerning his health 20 years ago).
Re:Maybe not THAT low hanging (Score:3, Informative)
Purkinje cells in the cerebellum sum up about 100k inputs. Purkinje cells are the sole output of the cerebellar cortex, and the cerebellum has as many neurons as the rest of the brain (~10^10). So your point is very valid, just an order of magnitude or so short on the estimate.
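Redoing the parent's Moore-cycle arithmetic with Purkinje-scale fan-in (~100k inputs instead of ~8k) gives a feel for the correction. Same back-of-envelope assumptions as before: one transistor per input, 1.5 years per Moore cycle.

```python
import math

# Purkinje-scale fan-in: ~100k inputs per cell
inputs = 100_000
cycles = math.log2(inputs)     # doublings needed just for the inputs
print(round(cycles, 1))        # 16.6 Moore cycles
print(round(cycles * 1.5, 1))  # 24.9 years at 1.5 years/cycle
```

So the worst-case cells add roughly four more Moore cycles on top of the parent's 13.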