Biotech

Kurzweil on the Future

dwrugh writes "With these new tools, [Kurzweil] says, by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves. This serene confidence is not shared by neuroscientists like Vilayanur S. Ramachandran, who discussed future brains with Dr. Kurzweil at the festival. It might be possible to create a thinking, empathetic machine, Dr. Ramachandran said, but it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. 'My colleague Francis Crick used to say that God is a hacker, not an engineer,' Dr. Ramachandran said. 'You can do reverse engineering, but you can't do reverse hacking.'"

Comments:
  • by krog ( 25663 ) on Wednesday June 04, 2008 @09:48AM (#23651227) Homepage
    A singularly worthless comment.
  • Re:Obfuscation (Score:3, Informative)

    by ShadowRangerRIT ( 1301549 ) on Wednesday June 04, 2008 @09:51AM (#23651279)
    Because haphazardly hacked together code is usually full of bugs and design limitations, while obfuscated code is simply rearranged good code? Integrating with buggy, poorly written code is not my cup of tea.
  • mid-age life crisis (Score:5, Informative)

    by peter303 ( 12292 ) on Wednesday June 04, 2008 @09:53AM (#23651335)
    Kurzweil's predictions will come to pass, but not on the time-scale he envisions; probably centuries from now. He has been hoping for personal immortality through technology and takes over 200 anti-aging pills a day.
  • by Sabathius ( 566108 ) on Wednesday June 04, 2008 @10:06AM (#23651609)
    Centuries? This assumes a linear progression. We are talking about the singularity, which, in case you haven't noticed, is happening in an exponential manner (a rough back-of-the-envelope comparison is sketched below).

    I suggest you take a look at his actual research before you say such things. Here's a link to a presentation he recently did at TED:

    http://www.ted.com/index.php/talks/view/id/38 [ted.com]
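
    To make the linear-vs-exponential gap concrete, here is a minimal sketch in Python; the 2-year doubling period and 30-year horizon are illustrative assumptions, not figures taken from Kurzweil's presentation.

        # Hypothetical comparison of linear vs. exponential improvement.
        doubling_period_years = 2      # illustrative assumption, not Kurzweil's figure
        horizon_years = 30

        linear_gain = horizon_years                              # one "unit" of progress per year
        exponential_gain = 2 ** (horizon_years / doubling_period_years)

        print(f"linear: {linear_gain}x   exponential: {exponential_gain:.0f}x")
        # linear: 30x   exponential: 32768x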
  • by Idiomatick ( 976696 ) on Wednesday June 04, 2008 @10:25AM (#23651953)
    There have been flying cars... I hate when people use this argument. We have had the technology to make flying cars for a long time. The reason you don't see them is that they are expensive. If you think a Hummer gets bad mileage, a flying car gets much worse: since it doesn't use wings for lift (the type envisioned by most people), it has to burn many times more fuel than a plane, which already uses too much. Then of course you need to expend lots of effort making it lightweight, and it's doomed to being a one-seater unless you make it bigger than a regular car. If a product is doomed to lose money with total certainty, why would any company make it? On top of that, it costs millions in R&D, so you can't build it just for novelty's sake. It is NOT a technological problem; it's an economic one.
     
    AI is completely different. The cost is in computing power, not dollars, and computing power is being driven down in price by forces around the world funneling billions into computers. AI is also used around the world because it can be developed incrementally; it doesn't have to leap straight to Turing-ready. It's used in many decision-making computer systems, which again have billions of dollars funneled into them, so we are constantly improving AI already. As for chips in our brains, there again the supporting technologies are being worked out: there was an article about mind-reading robots a few days ago, brain research is big, cellphones drive miniaturization, and mind-controlled limbs are coming out. Sure, it is more difficult, but the forces of capitalism are on our side, and capitalism tends to get its way.
     
    That said, I think his timescale is way off. We may have computers as fast as the brain by 2029, but they'll need to become more commonplace before we can spare that kind of power for playing with and testing AI, so I'd say the late 2030s. Chips in the brain have one obvious setback: as with genetic modification, the government will surely stand in the way of the science. If people aren't comfortable with the idea, it'll get bogged down in testing phases. After that it won't get enough funding, because there probably isn't a big market outside /. for putting chips in your brain. So I'd say we are a while off from seeing that. Ironically, immortality might be easier than a chip in the head, because it makes fewer people feel queasy.
  • by ColdWetDog ( 752185 ) * on Wednesday June 04, 2008 @11:12AM (#23652917) Homepage

    If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections?

    1) You don't die "instantly" unless the damage is very extensive. In particular, autonomous functioning of the brainstem can persist after massive "upper" brain damage.

    2) You can damage large parts of the brain and have the function rerouted around the damage - sometimes. There are large, apparently "silent" parts of the brain that you can remove without (apparent) problems. Read some of Oliver Sacks's stuff for some interesting insights on how the brain works on a macroscopic basis.

    Have you accepted Google as your personal search engine?

  • The future... (Score:2, Informative)

    by religious freak ( 1005821 ) on Wednesday June 04, 2008 @11:21AM (#23653099)
    The future Conan???

    PS Anyone having trouble getting their rightful Karma bonuses despite still having 'excellent' Karma?
  • by c6gunner ( 950153 ) on Wednesday June 04, 2008 @11:48AM (#23653625) Homepage

    Mod me offtopic or whatever, I don't care, but I've been thinking about this for a few weeks. If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections? We can shoot bullets through most parts of a computer and more than likely only a piece of it will be damaged


    Have you seen the mess that a bullet going through a skull makes?

    It's not the bullet itself that's the problem; it's the shockwave generated by its passage that does all the damage. It's called "cavitation" - this video [youtube.com] should help you understand it. If you watch the last part of that video carefully, you'll see that it causes the entire melon to explode outwards. Now imagine what that kind of force does to brain tissue confined inside a skull.

    You can't compare that to shooting at computer components - they react completely differently, and are not affected at all by the shockwave. When you shoot a computer, only the part you hit is affected.
  • by quantumplacet ( 1195335 ) on Wednesday June 04, 2008 @11:58AM (#23653775)
    It is a stupid question. Many, many people have survived a bullet or other object penetrating their brain. Bullets are often fatal, but not because they put a single clean hole in the brain; they fragment and rattle around inside the skull, shredding the brain. Also, remember that in your analogy the brain would be the CPU, not the whole computer. Try putting a bullet in your CPU and tell me if it still works.
  • by Sabathius ( 566108 ) on Wednesday June 04, 2008 @12:25PM (#23654313)
    You're saying Kurzweil's research is bullshit because you've done your own research on this subject? He's been doing technology trend research for about 30 years now and is very rarely wrong. The guy has 12 doctorates and a team of 10 people working with him to create models of the predictions he's making. Instead of just being contrary, maybe you should pay attention to what he's saying.

    It's easy to just sit there and spout negativity. It doesn't require any actual work on your part.

    Good day.
  • Re:Nah (Score:3, Informative)

    by LotsOfPhil ( 982823 ) on Wednesday June 04, 2008 @12:43PM (#23654691)

    Of course no one is building a super-intelligent CEO-in-a-box as of now, but many companies are developing programs that are borderline AI for choosing their best investments, especially the larger financial firms that manage mutual funds.
    Now they don't call them AI at this point, but they are approaching it, and I would wager that when it becomes viable, people will be building MBAs-in-a-box to make strategic decisions.

    I think you are talking about "black box" trading at quantitative funds. (I can't imagine that many companies ask Computron where to put their money). If that is what you are talking about, I think you are quite off base. The driver for black box/quantitative trading is speed, not any computer insight. A human can't receive a stock tick and trade off of it in 10 milliseconds. A computer can, and requires nothing remotely approaching intelligence to do so.
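
    To illustrate that point, here is a minimal sketch of a purely mechanical tick-reaction rule in Python; the threshold rule, prices, and function name are invented for illustration and do not reflect how any real trading system works.

        import time

        def on_tick(bid, ask):
            # Toy rule: react if the spread exceeds a fixed threshold.
            # No insight involved; the only advantage over a human is speed.
            return "BUY" if (ask - bid) > 0.05 else None

        start = time.perf_counter()
        decision = on_tick(bid=100.00, ask=100.07)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(decision, f"decided in {elapsed_ms:.4f} ms")   # well under 10 ms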
  • Will not happen... (Score:3, Informative)

    by mario_grgic ( 515333 ) on Wednesday June 04, 2008 @12:51PM (#23654863)
    Kurzweil is one seriously messed-up scientist. This guy extended Moore's law (look up Kurzweil's law of accelerating returns) to predict that civilization as we know it will cease to exist and will effectively become a civilization of super/trans-humans or artificially intelligent beings by 2020.

    Never mind all the scientific and technical obstacles that even a non-scientific person could think of, let alone the philosophical issues once we get into them (things we don't even have words to talk about yet).

    Yet there is still a very simple reason why the prediction will not happen. Does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?

    I've said it before and I will say it again: this is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.

    But one thing is for sure, Kurzweil will die just like every other "prophet" before him.
  • by Moraelin ( 679338 ) on Wednesday June 04, 2008 @01:01PM (#23655021) Journal
    It might be less low-hanging than most people think. Most predictions I've seen along the lines of, basically, "OMGWTFBBQ, computers are gonna be as intelligent as humans" are based on, basically, "OMGWTFBBQ, we'll soon have as many transistors on a chip as there are neurons in a human brain." Marketing departments especially love to hint that way now and then, but they're not the only culprits.

    Unfortunately,

    1. A neuron isn't a transistor. Even the inputs alone would need a lot more transistors to implement at our current technology level.

    An average brain neuron takes its inputs from an _average_ of 7000 other neurons, with the max being somewhere around 10k, IIRC. The vast majority of synapses are one-way, so an input coming through input 6999 can't flow back through inputs 0 to 6998. So even just to implement that kind of insulation between inputs, you'd need an average of 7000 transistors per "silicon neuron" just for the inputs.

    Let's say we build our silicon neuron to allow for 8k inputs, so we have only one module repeated ad nauseam, instead of custom-designing different ones for each number of inputs between 5000 and 10000. Especially since, as we'll see soon, the number of inputs doesn't even stay constant during the life of a neuron, so it must accommodate a bit of variation. That's 2^13 transistors per neuron just for the inputs, or enough to push those optimistic predictions back by 13 whole Moore cycles. Even if you believe those cycles are still only 1.5 years each, that pushes back the predictions by almost 20 years. Just for the inputs.
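
    As a sanity check, here is that back-of-the-envelope arithmetic in Python; the 8k (2^13) input count and the 1.5-year Moore cycle are the figures assumed above.

        import math

        designed_inputs = 8_192     # ~7,000 average inputs, rounded up to 8k so one module fits every neuron
        moore_cycle_years = 1.5     # optimistic doubling time assumed above

        # One transistor per input just to keep the one-way synapses isolated,
        # so the per-neuron transistor count grows by a factor of designed_inputs.
        moore_cycles = math.log2(designed_inputs)
        delay_years = moore_cycles * moore_cycle_years

        print(f"{moore_cycles:.0f} Moore cycles, roughly {delay_years:.1f} years, just for the inputs")
        # 13 Moore cycles, roughly 19.5 years, just for the inputs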

    2. Here's the fun part: neurons form new connections and give up old ones all the time. Your brain is essentially one giant FPGA that is constantly being rewired.

    Biological neurons do it by physically growing dendrites which connect to an axon terminal. A "silicon neuron" can't physically modify traces on the chip, so you have to include the gates and buses that can switch an input over to a different nearby source, chosen from the thousands of available outputs of other "neurons". _Somehow_. E.g., a crossbar kind of architecture (a toy sketch follows below). And you need that for each of those thousands of inputs.

    Now granted, we'll probably figure out something smarter and save some transistors on that reconfiguration, but even that only goes so far.

    There go a few more Moore cycles.
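
    For what a crossbar-style rewiring scheme amounts to, here is a toy sketch in Python using NumPy; the sizes and the rewire() helper are invented for illustration and are far smaller than the thousands of inputs a real neuron would need.

        import numpy as np

        # Each row is one input slot of a "silicon neuron"; each column is one candidate
        # source output. A 1 marks the source currently switched onto that input.
        n_inputs, n_sources = 8, 16
        crossbar = np.zeros((n_inputs, n_sources), dtype=np.uint8)
        for i in range(n_inputs):            # start with some arbitrary wiring
            crossbar[i, i % n_sources] = 1

        def rewire(xbar, input_slot, new_source):
            # Mimic a dendrite abandoning one connection and growing another.
            xbar[input_slot, :] = 0          # drop the old connection
            xbar[input_slot, new_source] = 1 # switch the slot to a different output

        rewire(crossbar, input_slot=3, new_source=11)
        print(crossbar[3])                   # slot 3 now listens to source 11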

    3. And that's before we even get to the neuron body. That thing must be able to do something with all those inputs, plus handle stuff like deciding by itself to rewire its inputs, or even (yep, we have documented cases) one area of the brain deciding to move to a whole other "module" of the brain or take over its function. It's like an ALU in a CPU deciding to become a pipeline element instead, because that element broke. In the FPGA analogy, each logic block is complex enough to decide by itself how it wants to rewire its inputs and what it wants to be a part of.

    There are some pretty complex proteins at work there.

    So frankly, even for the neuron body itself, imagining that one single transistor is enough to approximate it is plain old dumb.

    4. And that's before we even get to how wasteful we are with transistors nowadays. It's not like the old transistor radios, where you thought twice about how many you needed and what else you could use instead. Transistors on microchips are routinely used in place of resistors, capacitors, or whatever else someone needed there.

    And then there are a bunch wasted because, frankly, no one designs a 100-million-transistor chip by lovingly drawing and connecting each one by hand. We use libraries of whole blocks, and software that calculates how to interconnect them.

    So basically look at any chip you want, and it's not a case of 1 transistor = 1 neuron. It's more like a whole block of them would be equivalent to one neuron.

    I.e., we're far from approaching a human brain in silicon. We're more like approaching the point where we could simulate the semi-autonomous ganglion of an insect's leg in silicon. Maybe.

    5. And that's before we get to the probl
  • by emil10001 ( 985596 ) on Wednesday June 04, 2008 @01:49PM (#23655793)

    Actually, according to several sources, and as mentioned on his Wikipedia page [wikipedia.org], his biological age is about 20 years younger than his chronological age (and only two biological years older than when he changed his health habits 20 years ago).

  • by shura57 ( 727404 ) * on Wednesday June 04, 2008 @03:10PM (#23657085) Homepage
    An average brain neuron takes its inputs from an _average_ of 7000 other neurons, with the max being somewhere around 10k,
    Purkinje cells in the cerebellum sum up about 100k inputs each. Purkinje cells are the sole output of the cerebellar cortex, and the cerebellum has about as many neurons as the rest of the brain (~10^10). So your point is very valid; it's just an order of magnitude or so short on the estimate.
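
    A quick sanity check on that order-of-magnitude gap, in Python; the 8k-input module and 1.5-year Moore cycle are carried over from the parent post, and the 100k figure is the Purkinje fan-in cited here.

        import math

        parent_design_inputs = 8_192   # the 8k-input module assumed in the parent post
        purkinje_inputs = 100_000      # fan-in of a cerebellar Purkinje cell
        moore_cycle_years = 1.5

        extra_cycles = math.log2(purkinje_inputs / parent_design_inputs)
        print(f"~{extra_cycles:.1f} extra Moore cycles, ~{extra_cycles * moore_cycle_years:.1f} more years")
        # ~3.6 extra Moore cycles, ~5.4 more years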

