Biotech

Kurzweil on the Future

dwrugh writes "With these new tools, [Kurzweil] says, by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves. This serene confidence is not shared by neuroscientists like Vilayanur S. Ramachandran, who discussed future brains with Dr. Kurzweil at the festival. It might be possible to create a thinking, empathetic machine, Dr. Ramachandran said, but it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. 'My colleague Francis Crick used to say that God is a hacker, not an engineer,' Dr. Ramachandran said. 'You can do reverse engineering, but you can't do reverse hacking.'"
  • Obfuscation (Score:5, Interesting)

    by kalirion ( 728907 ) on Wednesday June 04, 2008 @08:45AM (#23651157)
    How is haphazardly hacked together code any harder to reverse engineer than intentionally obfuscated code? We know the latter isn't a problem for a determined hacker....
    • Re: (Score:3, Informative)

      Because haphazardly hacked together code is usually full of bugs and design limitations, while obfuscated code is simply rearranged good code? Integrating with buggy, poorly written code is not my cup of tea.
      • by dintech ( 998802 ) on Wednesday June 04, 2008 @09:28AM (#23652015)
        According to the summary you do it every day with everyone you meet!
      • Re: (Score:3, Insightful)

        by robertjw ( 728654 )

        Because haphazardly hacked together code is usually full of bugs and design limitations, while obfuscated code is simply rearranged good code? Integrating with buggy, poorly written code is not my cup of tea.
        Yes, because we all know that obfuscated code NEVER has any bugs or design limitations. If the Microsoft document format and Windows File Sharing can be reverse engineered I'm sure anything can.
    • Re:Obfuscation (Score:4, Interesting)

      by PainMeds ( 1301879 ) on Wednesday June 04, 2008 @08:54AM (#23651367)
      I think his point is the belief that hacked together code is more nonsensical and therefore would be more difficult to reverse engineer. It's like the difference between tracing back ethernet cables in a clean colo facility vs. tracing back cables at something like Mae-East, which (the last time I looked, at least) largely resembled a post-apocalyptic demilitarized zone. At least that's how he seems to view hackers. I personally see hackers as codifying something even more beautiful, logical, and well-articulated than the mundane corporate programmer, delivering a much higher level of intelligence and complexity than most could understand. Either way, you end up with the conclusion that it's a real pain in the ass to hack something that someone smarter than you wrote. In line with the quote, if God is a hacker, then you'd expect him to be one of the super geniuses and not some poor yahoo not quite knowing what he was doing.

      Apply that to the brain, and we're worlds behind. Considering we're still on the binary system, while DNA uses a four-letter base alphabet for its instruction code, I think Crick grasped the complexity of the human body far better than Kurzweil seems to. To compare the brain to a computer chip is, I think, a grossly unbalanced parallel.
      • Re:Obfuscation (Score:4, Insightful)

        by Jesus_666 ( 702802 ) on Wednesday June 04, 2008 @09:44AM (#23652345)
        A hack can be beautiful or ugly. A hack that uses a property of the programming language in a clever way to achieve a speedup is beautiful, but a hack that relies on the processor violating its own spec in a certain way is ugly, especially if the programmer who wrote it didn't bother documenting what he did or why. Not only is the latter incredibly fragile, you also can't just take the specs and understand it - you have to know what's not in the spec to be able to fully grok it.

        Also note how "quickly hacked together" usually implies that conceptual and code-level cleanliness were forgone in favor of development time savings. A dynamic webpage that consists of PHP and HTML happily mixed together with constructs like <?php if($d = $$q2) { ?> <30-lines-of-html />* is as unreadable as the worst spaghetti code, but a web dev who needs to deliver a prototype within hours might actually do that. He quickly hacks things together, (ab)using PHP's preprocessor nature in order to deliver quickly, even though not even he will be able to maintain the document afterwards.

        The human brain consists of patch after randomly applied patch. I'd think that it would be the equivalent of a byzantine system with twenty different coding styles, three different OOP implementations and a dependency tree that includes four versions of the same compiler because parts of the code require a specific version. The code works, it's reasonably fast but it's miles from being clean and readable.

        * Yes, it is supposed to have an assignment and a variable variable in the if block.
        • by Empiric ( 675968 )
          Perhaps you didn't get your requirements document in on time.

          Or lacked sufficient organizational authority to get it placed as top priority...

          On the upside, I think we've definitively resolved the question of whether, once someone brilliant produces Hard AI, there'll be a bunch of also-rans complaining, "Well, yeah, but his code sucks..."

        • So that's why the miracles came to an end. God got tired of maintaining his spaghetti code.

          Or maybe just the service contract with earth expired, dunno.
      • I personally see hackers as codifying something even more beautiful, logical, and well-articulated than the mundane corporate programmer, delivering a much higher level of intelligence and complexity than most could understand

        As someone who used to hack away at stuff rather than designing it first (and I still do that usually, though if I'm going to do something fairly complex I sometimes plan it out on paper a bit first), and once had my code referred to as "twisted-hacked" by another coder (I think he was Brazilian or German, can't remember, it was 8 years ago), I'm still not sure whether to be proud of the fact that my twisted-hacked code works, or ashamed of the fact that I am a bit of a self-taught cowboy when it comes to...

      • Re:Obfuscation (Score:5, Interesting)

        by Orne ( 144925 ) on Wednesday June 04, 2008 @11:21AM (#23654247) Homepage
        When I was in college in the '90s, our EE lab had just started experimenting with combining FPGAs [wikipedia.org] with a genetic algorithm [wikipedia.org] to model a non-linear function. Setup: pass in hundreds of random control streams of 0's and 1's that set up the logic of the FPGA, feed inputs and measure output pins, compare against the desired output, then use the genetic algorithm to pick the "winners" that correctly modeled the function. The algorithm would randomly combine winners, then feed that back into the control stream, rinse and repeat until you have the "best" stream.

        After that, the researchers took the control stream and mapped it back to find out which logic gates were activated / deactivated / multiplexed to route to one another. What they found was that there was no direct data path from the input to the output! So how on earth were the output pins being generated?

        What we were then told was that, somehow, the FPGA control pattern had created loops in certain parts of the circuit that were inducing current in the neighboring bus lines, like little radios and receivers. Totally non-intuitive, but mathematically it worked.
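
        For anyone who wants the shape of that loop in code, here's a minimal sketch. It's my own toy, not the lab's software: a simple bit-matching score stands in for the FPGA, since the interesting part -- measuring physical output pins -- obviously can't be reproduced here.

            # Toy genetic algorithm over bitstreams; fitness() stands in for
            # "feed inputs, measure output pins, compare against the desired
            # output" from the experiment above.
            import random

            STREAM_LEN = 64          # bits per control stream
            POP_SIZE = 100           # "hundreds of random control streams"
            TARGET = [random.randint(0, 1) for _ in range(STREAM_LEN)]

            def fitness(stream):
                return sum(1 for a, b in zip(stream, TARGET) if a == b)

            def breed(a, b):
                cut = random.randrange(STREAM_LEN)   # randomly combine winners
                child = a[:cut] + b[cut:]
                if random.random() < 0.05:           # occasional mutation
                    i = random.randrange(STREAM_LEN)
                    child[i] ^= 1
                return child

            population = [[random.randint(0, 1) for _ in range(STREAM_LEN)]
                          for _ in range(POP_SIZE)]
            for _ in range(200):                     # rinse and repeat
                winners = sorted(population, key=fitness, reverse=True)[:20]
                population = [breed(random.choice(winners), random.choice(winners))
                              for _ in range(POP_SIZE)]

            print(fitness(max(population, key=fitness)), "of", STREAM_LEN)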

        That is what I expect to see when we finally decode the human brain -- an immensely complex network of nodes whose linkages to one another were created in real-time using whatever resources were available to the "trainer" proteins at the time. No two individuals will encode the same event the exact same way: not the same locations in the brain, not the same method of linkages, or the number of linkages.

        This is why I see the "singularity" not as a machine that we can walk into, have our brains scanned, and bam our consciousness can be copied to a computer. I think that every individual that "transcends" will have to do it incrementally, gradually replacing and extending the nodal functions that make up the brain. The brain needs to replace its neural network mesh with electronic blocks, and do it in such a way that the mesh's functionality is maintained while the material that makes up the mesh is replaced.

        Over a period of time, there will be no more "meat mesh" and your consciousness would be transcended into a medium that we know can be copied, backed up and restored. And when that happens, well, our whole concept of what makes up a "person" would need to be redefined.

        -- Scott
      • Re: (Score:3, Insightful)

        by Thiez ( 1281866 )
        I'm surprised nobody has mentioned this before, but where did you get the crazy idea that DNA is somehow 'worlds ahead' of us just because it uses a way of storing information that can, with a little imagination, be seen as a base four system? The way you store your data has NOTHING to do with the data itself. If I were to post this message in both binary and hex, would the latter be more advanced than the former? I'm sure we can build a computer that stores data in base 8 by the end of this year. But there is no point...
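
        To put that in code - a toy illustration of mine, not something from the parent post - here is the same value rendered in base 2, base 4, and base 16. The information is identical every time; only the notation changes.

            # One integer, three radices: none is more "advanced" than another.
            DIGITS = "0123456789abcdef"

            def to_base(n, base):
                out = ""
                while n:
                    out = DIGITS[n % base] + out
                    n //= base
                return out or "0"

            n = int.from_bytes(b"ACGT", "big")
            print(to_base(n, 2))     # binary, like today's computers
            print(to_base(n, 4))     # base 4, like DNA's four bases
            print(to_base(n, 16))    # hex, as in the example above
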
    • by jeiler ( 1106393 )

      The comparison is not to intentionally obfuscated code, but to organized and documented code. Haphazard code is quite a bit more difficult to reverse engineer than clearly written code.

    • by cnettel ( 836611 )
      Intentional obfuscation over any greater scale tends to show clear patterns. Having something that is in the range of a couple of GB of data, knowing that it is basically just random junk that happened to pass most of the regression tests for each new version, and then trying to find out what it all does -- that's what you are facing with the human genome.
    • Re:Obfuscation (Score:5, Insightful)

      by mrbluze ( 1034940 ) on Wednesday June 04, 2008 @09:07AM (#23651633) Journal

      How is haphazardly hacked together code any harder to reverse engineer than intentionally obfuscated code? We know the latter isn't a problem for a determined hacker....

      Nonetheless there is something to what Kurzweil says, futurist (or in my language 'bullshit-artist') though he is.

      The brain is probably impossible to 'reverse-engineer', not because of its evolution but because to come up with a brain you need 9 months of in-utero development followed by years of environmental formation, nurturing and so forth, by which time the brain is so complex and fragile that analyzing it adequately becomes practically impossible.

      I mean, take the electro-encephalogram (EEG). It gives us so little information it's laughable. Electrical signals from the cortex mask those of deeper structures, and still we just end up with an average of countless signals. Every other form of brain monitoring is also fuzzy. Invasive monitoring of brain function is problematic because it damages the brain, and the brain adapts (probably) in different ways each time. Sure, we can probably get some of the information we are after, but the entire brain is, I would suggest, too big a task.
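
      A toy model of that averaging problem (my own illustration, with made-up numbers): one "electrode" that only ever sees the sum of thousands of independent sources. No single waveform is recoverable from its output.

          # Toy EEG: the electrode reports one number that lumps together
          # thousands of oscillating sources.
          import math, random

          N_SOURCES = 5000
          sources = [(random.uniform(1, 40), random.uniform(0, 2 * math.pi))
                     for _ in range(N_SOURCES)]   # (frequency in Hz, phase)

          def electrode(t):
              return sum(math.sin(2 * math.pi * f * t + ph)
                         for f, ph in sources) / N_SOURCES

          for t in (0.00, 0.01, 0.02, 0.03):
              print(f"t={t:.2f}s  electrode reads {electrode(t):+.4f}")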

      But we can use the same principles that exist in the brain to mimic its functionality. It is ultimately a new invention, though, and not a replica of a brain, even if it does manage to demonstrate consciousness.

    • Re: (Score:3, Insightful)

      by lymond01 ( 314120 )
      Let's say you have this guy, he's about 15 billion years old. He's been working on this code for about half that long, on and off, learning as he goes. He doesn't document, at all, no comments, and while it all compiles properly (though depending on the base OS the quality of the compilation varies), the number of separate files and custom libraries he's using is in the petabyte range.

      He's not trying to obfuscate the code, he's just not trying not to. Purposeful obfuscation implies organization. Hacking...
  • by Bastard of Subhumani ( 827601 ) on Wednesday June 04, 2008 @08:47AM (#23651191) Journal

    we'll be adding computers to our brains and building machines as smart as ourselves.
    Sigh, talk about picking the low-hanging fruit...
    • Re: (Score:3, Informative)

      by krog ( 25663 )
      A singularly worthless comment.
      • by njh ( 24312 )

        we'll be adding computers to our brains and building machines as smart as ourselves.
        Sigh, talk about picking the low-hanging fruit...
        A singularly worthless comment.
        --
        Cretin - a powerful and flexible CD reencoder
        So says someone promoting cretins.
    • Fair point --- I'd be incredibly disappointed if we could only make a machine as clever as you.
    • It is 2008. By 2020 the majority of computers will probably still be running some form of Windows, and thus be dumber than the sum IQ of all NASCAR fans.
      I hold no hope for AI for a long time.
      • I know that this is horribly off topic, but what's up with the "sum IQ of all NASCAR fans"? Is it because it started in the Southern United States, or is it that racing fans in general are perceived to be a bit slow in the intelligence department?
        • While some of the engineers and designers for the cars are brilliant in a hacker kind of way, the average NASCAR fan is not overly bright. These are the kind of people who lose hands by holding onto firecrackers as they go off. The kind of people the Darwin Awards were designed to showcase.

          While I consider anyone who watches a car drive around an oval for 6 hours to have questionable hobbies, NASCAR fans in particular aren't so bright.

          It is one thing if the race course itself is a variable, o...
        • Probably just that among actual racing fans, racing around in a big figure 0 where you only turn left is quite dull. Sure they're going pretty fast, and there are some interesting tactical concepts, but NASCAR fans are probably just in it for the crashes anyway :p
        • From a European point of view, where you're more likely thinking of Formula One and Rally when car racing is mentioned, NASCAR is even a notch below IndyCar. I mean, when you're used to driving 150 on a freeway, watching someone do it in a big circle for hours isn't quite a thrill.

          Then again, we also usually don't understand how Baseball can be exciting. :)
    • by Moraelin ( 679338 ) on Wednesday June 04, 2008 @12:01PM (#23655021) Journal
      It might be less low hanging than most people think. Most predictions I've seen for, basically, "OMGWTFBBQ, computers are gonna be as intelligent as humans" are based on, basically, "OMGWTFBBQ, we'll soon have as many transistors on a chip as there are neurons in a human brain." Marketing depts especially love to hint that way now and then, but they're not the only culprits.

      Unfortunately,

      1. A neuron isn't a transistor. Even the inputs alone would need a lot more transistors to implement at our current technology level.

      An average brain neuron takes its inputs from an _average_ of 7000 other neurons, with the max being somewhere around 10k, IIRC. The vast majority of synapses are one-way, so an input coming through input 6999 can't flow back through inputs 0 to 6998. So even just to implement that kind of insulation between inputs, you'd need an average of 7000 transistors per "silicon neuron" just for the inputs.

      Let's say we build our silicon neuron to allow for 8k inputs, so we have only one module repeated ad nauseam, instead of custom-designing different ones for each number of inputs between 5000 and 10000. Especially since, as we'll see soon, that number of inputs doesn't even stay constant during the life of a neuron; it must accommodate a bit of variation. That's 2^13 transistors per neuron just for the inputs, or enough to push those optimistic predictions back by 13 whole Moore cycles. Even if you believe that they're still only 1.5 years each, that pushes back the predictions by almost 20 years. Just for the inputs.
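
      The arithmetic behind those two sentences, for anyone who wants to check it:

          # 8k inputs per neuron = 2^13 transistors, i.e. 13 doublings
          # ("Moore cycles") of extra budget, at 1.5 years per cycle.
          import math

          moore_cycles = math.log2(8 * 1024)   # 13.0
          print(moore_cycles * 1.5)            # 19.5 years -- "almost 20"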

      2. Here's the fun part: neurons form new connections and give up old ones all the time. Your brain is essentially one giant FPGA that gets rewired all the time.

      Biological neurons do it by physically growing dendrites which connect to an axon terminal. A "silicon neuron" can't physically modify traces on the chip. You have to include the gates and busses that switch an input to another nearby source from the thousands of available outputs of other "neurons". _Somehow_. E.g., a crossbar kind of architecture. For each of those thousands of inputs.
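
      As a loose software analogy (mine, not anything from TFA): the connectivity is mutable state that each node edits for itself, which is exactly what fixed traces on a chip can't do.

          # Each "neuron" owns a mutable set of input sources and rewires
          # itself over time -- the opposite of fixed silicon wiring.
          import random

          class Neuron:
              def __init__(self, nid):
                  self.nid = nid
                  self.inputs = set()    # ids of presynaptic neurons

              def rewire(self, others):
                  if self.inputs:        # retract one dendrite...
                      self.inputs.discard(random.choice(sorted(self.inputs)))
                  self.inputs.add(random.choice(others).nid)   # ...grow another

          population = [Neuron(i) for i in range(1000)]
          for n in population:           # haphazard initial wiring
              n.inputs = {random.randrange(1000) for _ in range(70)}
          for _ in range(10000):         # and the connections churn forever
              random.choice(population).rewire(population)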

      Now granted, we'll probably figure something smarter out, and save some transistors for that reconfiguration, but even that only goes so far.

      There go a few more Moore cycles.

      3. And that was before we even get to the neuron body. That thing must be able to do something with that many inputs, plus stuff like deciding by itself to rewire its inputs, or even (yep, we have documented cases) one area of the brain deciding to move to a whole other "module" of the brain or take over its function. It's like an ALU in a CPU deciding to become a pipeline element instead, because that element broke. In the FPGA analogy, each logic block there is complex enough to also decide by itself how it wants to rewire its inputs, and what it wants to be a part of.

      There are some pretty complex proteins at work there.

      So frankly, even for the neuron body itself, imagining that one single transistor is enough to approximate it is plain old dumb.

      4. And that's before we even get to how wasteful we are with transistors nowadays. It's not like old transistor radios, where you thought twice about how many you need, and what else you could use instead. Transistors on microchips are routinely used instead of resistors, capacitors, or whatever else someone needed there.

      And then there are a bunch wasted because, frankly, no one ever designs a 100 million transistor chip by lovingly drawing and connecting each one by hand. We use libraries of whole blocks and software which calculates how to interconnect them.

      So basically look at any chip you want, and it's not a case of 1 transistor = 1 neuron. It's more like a whole block of them would be equivalent to one neuron.

      I.e., we're far from approaching a human brain in silicon. We're more like approaching the point where we could simulate the semi-autonomous ganglion of an insect's leg in silicon. Maybe.

      5. And that's before we get to the problem...
      • Re: (Score:3, Informative)

        by shura57 ( 727404 ) *
        An average brain neuron takes its inputs from an _average_ of 7000 other neurons, with the max being somewhere around 10k,
        Purkinje cells in the cerebellum sum up about 100k inputs. Purkinje cells are the sole output of the cerebellar cortex. And the cerebellum has as many neurons as the rest of the brain (~10^10). So your point is very valid, just an order of magnitude or so short on the estimate.
  • by wcrowe ( 94389 ) on Wednesday June 04, 2008 @08:47AM (#23651211)
    Taking into consideration computer security issues, I think I'll pass.

  • by DriedClexler ( 814907 ) on Wednesday June 04, 2008 @08:51AM (#23651283)

    it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. "My colleague Francis Crick used to say that God is a hacker, not an engineer," Dr. Ramachandran said.
    I'd always thought that this constraint would *help*, because you know, in advance, that the solution to the problem is constrained by what we know about how evolution works. You have to start with the simplest brain that could enhance genetic fitness, and then work up from there, making sure each step performs a useful cognitive function.

    Furthermore, looking at the broader picture, I was reading an artificial intelligence textbook that was available online (sorry, can't recall it offhand) which said that current AI researchers have shifted their focus from "how do humans think?" to "how would an optimally intelligent being think?" and therefore "what is the optimal rational inference you can make, given data?" In that paradigm, the brain is only important insofar as it tracks the optimal inference algorithm, and even if we want to understand the brain, that gives us another clue about it: to improve fitness, it must move closer to what the optimal algorithm would get from that environment.
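
    For what "optimal rational inference given data" usually cashes out to, here is a minimal Bayesian update - my illustration, not the textbook's - weighing two hypotheses about a coin as evidence arrives.

        # Optimal inference from data: multiply in each observation's
        # likelihood, renormalize, repeat.
        prior = {"fair": 0.5, "biased": 0.5}
        p_heads = {"fair": 0.5, "biased": 0.9}

        posterior = dict(prior)
        for obs in "HHHTH":              # the observed flips
            for h in posterior:
                posterior[h] *= p_heads[h] if obs == "H" else 1 - p_heads[h]
            total = sum(posterior.values())
            posterior = {h: p / total for h, p in posterior.items()}

        print(posterior)                 # ~{'fair': 0.32, 'biased': 0.68}
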
    • by cnettel ( 836611 ) on Wednesday June 04, 2008 @08:58AM (#23651457)
      Any work with genetic algorithms shows that you can very easily get a solution containing a lot of crud. Pruning too heavily on fitness in each generation will give you far from optimal results. The point is that each incremental change in our evolutionary history did NOT necessarily improve fitness. It just didn't hurt it enough, and might have combined with another change to increase it later on.

      It all boils down to the result that an intelligent organism capable of building a social, technological civilization could have been quite different from us. Even if it looked like us, details as well as the overall layout of the brain could supposedly have been quite different while still giving equivalent fitness. A simulation reproducing everything is not feasible, so how do we find out which elements are really relevant?

  • by yumyum ( 168683 ) on Wednesday June 04, 2008 @08:53AM (#23651323)
    I attended a talk by Kurzweil a couple of weeks ago at the Broad Institute in Cambridge, MA. Absolutely fascinating what he foresees in the near future (~20 years). I believe it is 2028 when he believes a machine will pass the Turing Test. Even sooner, he predicts that we will have nanobots roaming around inside our bodies, fixing things and improving on our inherent deficiencies. Very cool. He also addressed a similar complaint about being able to reverse-engineer the brain, but it was of the nature that we may not be smart enough to do so. I (and he of course) doubt that that is the case. Kurzweil thinks of the brain as a massively parallel system, one that has a very low signaling rate (neuron firing) compared to a CPU, which it overcomes by the massive number of interconnections. It will definitely be a big problem to solve, but he is confident that it will be.

    • Re: (Score:2, Interesting)

      by VeNoM0619 ( 1058216 )

      Kurzweil thinks of the brain as a massively parallel system, one that has very low signaling rate (neuron firing) compared to a CPU which it overcomes by the massive number of interconnections.

      Mod me offtopic or whatever, I don't care, but I've been thinking about this for a few weeks. If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections? We can shoot bullets through most parts of a computer and more than likely only a piece of it will be damaged (I have never done this, but we could in theory just reroute the processing through the non-damaged parts, correct?)

      How is it that we can have brain damage...

      • Re: (Score:3, Interesting)

        by yumyum ( 168683 )
        Whether one dies from an injury depends on the amount and the kind of damage. A famous example of non-lethal brain injury is Phineas Gage [wikipedia.org]. Also look up lobotomy.
      • by ColdWetDog ( 752185 ) * on Wednesday June 04, 2008 @10:12AM (#23652917) Homepage

        If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections?

        1) You don't die "instantly" unless the damage is very extensive. In particular, you can have autonomous functioning in the brainstem persist after massive "upper" brain damage.

        2) You can damage large parts of the brain and have the damage rerouted - sometimes. There are large, apparently "silent" parts of the brain that you can remove without (apparent) problems. Read some of Oliver Sacks's stuff for some interesting insights on how the brain works on a macroscopic basis.


      • by c6gunner ( 950153 ) on Wednesday June 04, 2008 @10:48AM (#23653625) Homepage

        Mod me offtopic or whatever, I don't care, but I've been thinking about this for a few weeks. If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections? We can shoot bullets through most parts of a computer and more than likely only a piece of it will be damaged


        Have you seen the mess that a bullet going through a skull makes?

        It's not the bullet that's the problem, it's the shockwave generated by its passage that does all the damage. It's called "cavitation" - this video [youtube.com] should help you understand it. If you carefully watch the last part of that video, you'll see that it causes the entire melon to explode outwards. Now imagine what that kind of force does to brain tissue confined inside a skull.

        You can't compare that to shooting at computer components - they react completely differently, and are not affected at all by the shockwave. When you shoot a computer, only the part you hit is affected.
      • Re: (Score:3, Informative)

        It is a stupid question. Many, many people have survived a bullet or other object penetrating their brain. Bullets are often fatal, but they also do not put a single hole in the brain; instead they fragment and rattle around in the skull, shredding the brain. Also, remember the brain would be the CPU, not the whole computer, in your analogy. Try putting a bullet in your CPU and tell me if it still works.
  • mid-age life crisis (Score:5, Informative)

    by peter303 ( 12292 ) on Wednesday June 04, 2008 @08:53AM (#23651335)
    Kurzweil's predictions will come to pass, but not on the time-scale he envisions - probably centuries. He has been hoping for personal immortality through technology and takes over 200 anti-aging pills a day.
    • by yumyum ( 168683 ) on Wednesday June 04, 2008 @09:00AM (#23651519)
      He has been fairly accurate about his past predictions, so I don't think centuries will be what it takes. Basically, he sees medical/biomedical advances starting to heat up and exponentially grow just like the electronics industry has.

      IMO, a very controversial prediction of his is that around 15 years from now, we will start to increase our life expectancy by one year, every year, a rate he also sees taking on exponential growth...
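
      A toy run of what that claim implies - the idea sometimes called "longevity escape velocity" (the starting numbers here are mine, not Kurzweil's):

          # If remaining expectancy grows by >= 1 year per calendar year,
          # the gap between age and expectancy never closes.
          age, expectancy = 60.0, 85.0
          for year in range(2023, 2033):     # "~15 years from now", as of 2008
              age += 1.0
              expectancy += 1.0              # the predicted one-year-per-year gain
              print(year, f"age {age:.0f}, expectancy {expectancy:.0f}")
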
      • by SatanicPuppy ( 611928 ) * <Satanicpuppy@nosPAm.gmail.com> on Wednesday June 04, 2008 @09:53AM (#23652515) Journal
        Wishful thinking has made fools of better thinkers.

        Biomedical advances will never increase at the same rate as computer technology, simply because experimenting with silicon (or whatever) doesn't have any health and safety issues tied to it, much less any potential moral backlash (barring real AI, which I think is farther away as well).

        It takes 15 years, sometimes, for a useful drug with no proven side effects to make it to market. Even if we made theoretical breakthroughs today, it'd be a decade or more before they could be put into practice on a meaningful scale, assuming that they were magically perfect and without flaws/side-effects/whatever.

        It's very dangerous to look at our advances in computer technology and try to apply those curves to other disciplines. It's equally ridiculous to assume that the rate of increase will remain the same with no compelling evidence to support the assertion. In terms of computers and biotech, we're still taking baby steps, and while they seem like a big deal, we still have a long way to go.
        • It's often dangerous to extrapolate an exponential trend, let alone a linear one, because trends can have the nasty habit of flattening out or even turning over (bubble investing). Who knows whether we are at the middle, base or top of the curve for computing or biotechnology?
    • Re: (Score:2, Informative)

      by Sabathius ( 566108 )
      Centuries? This assumes a linear progression. We are talking about the singularity -- which, in case you haven't been noticing, is happening in an exponential manner.

      I suggest you take a look at his actual research before you say such things. Here's a link to a presentation he recently did at TED:

      http://www.ted.com/index.php/talks/view/id/38 [ted.com]
      • by elrous0 ( 869638 ) * on Wednesday June 04, 2008 @10:00AM (#23652659)
        Oh, bullshit. The "singularity" isn't happening in anything close to an exponential manner. Technological advances in many fields have essentially stagnated in recent decades (transportation, space travel, power generation, etc.). In other fields, we are progressing, but hardly at an "exponential" rate (medicine, biology, etc.). Communications is the only field that has progressed at anything close to an exponential rate over the last few decades.

        We're not even CLOSE to anything resembling some magical singularity.

    • by Jeff DeMaagd ( 2015 ) on Wednesday June 04, 2008 @09:10AM (#23651669) Homepage Journal
      He has been hoping for personal immortality through technology and takes over 200 anti-aging pills a day.

      Which is pretty funny given that dietary supplements haven't been found to be very useful on the whole.

      He really doesn't seem to look any younger or stay the same age either. He does look a bit better than smokers of his age, but not by a whole lot, in my opinion.
      • the Singularity, that revolutionary transition when humans and/or machines start evolving into immortal beings with ever-improving software. How do we know that it has not already occurred? Immortality would be boring. There is nothing that would keep someone entertained that long. Therefore we die and are reincarnated to keep life interesting. If we were sure about this, it would make it boring too, so we will always have a fear of death to keep up our interest in our present life. It is just like gambling...
        • Re: (Score:2, Insightful)

          Immortality might be boring. Would I like to live forever? I don't know, ask me again in three hundred years.
      • He really doesn't seem to look any younger or stay the same age either. He does look a bit better than smokers of his age, but not by a whole lot, in my opinion.

        According to him his Type II Diabetes appears to have gone away for the time being... At least the symptoms part of it.
      • Re: (Score:3, Informative)

        by emil10001 ( 985596 )

        Actually, according to several sources, and as mentioned on his wikipedia page [wikipedia.org], his biological age is about 20 years younger than his chronological age (only two biological years older than when he changed his habits concerning his health 20 years ago).

  • Silliness (Score:5, Insightful)

    by Junior J. Junior III ( 192702 ) on Wednesday June 04, 2008 @08:54AM (#23651369) Homepage
    You can't reverse-hack? Who says?

    You can reverse engineer anything. Whether it has a well-thought-out design or not, its functions can be analyzed and documented and re-implemented and/or tweaked.

    If anything, the timetable may be in question, but not the question of whether or not it can be done. I have no doubt it can be done, it's just a matter of how long it'll take given the right resources, the right talent, the right technology, and the right knowledge.

    Granted, I'm just an idiot posting on slashdot, and not an inventor or neuroscientist, but I still think I'm right on this.
    • His problem is that he's confusing form and function.

      I've met many a hack I couldn't figure out; a massive tangle of irreplicatable crap. But I can still make something new that is functionally identical, and that is what would need to be accomplished to replicate the brain.

      I see wishful thinking on two fronts. One, he wants to be immortal and touch the "weak godhood" of the Singularity. But two, he still wants to be "unique" in having this wonderful un-hackable brain. I think those two ideas contradict each other...
    • It's basic high-school math that not every function has an inverse function.
      We can understand "engineering" in this context, as opposed to "hacking", as the careful planning of a system so that we use "well-behaved" functions, thereby maximizing our future ability to refactor said system, and minimizing the losses of potentially useful information throughout the various operations...
      For instance, a properly engineered graphics application will let you zoom in and out of a picture without permanently transforming...
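
      A minimal illustration of that point (my example, not the parent's): an invertible operation round-trips cleanly, while a lossy one destroys information that no amount of cleverness can recover - the difference between a zoom that keeps the original and one that overwrites it.

          def double(x):      # invertible: no information is lost
              return 2 * x

          def halve(x):
              return x / 2

          def square(x):      # not invertible: square(2) == square(-2)
              return x * x

          assert halve(double(-3.0)) == -3.0   # round-trips cleanly
          print(square(2), square(-2))         # 4 4 -- the sign is gone for good
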
  • Nah (Score:5, Insightful)

    by Hoplite3 ( 671379 ) on Wednesday June 04, 2008 @08:55AM (#23651385)
    AI is our generation's flying car. It's what we see in the future, not what will be. Instead of the flying car, we got the internet. It isn't very picturesque (especially over at goatse.cx), but it is cool.

    The future will be like that: something people aren't really predicting. Something neat, but not flashy.

    Alternatively, the future will be the "inverse singularity" -- you know, instead of the Vinge godlike AI future singularity of knowledge, there could be a singular event that wipes out civilization. We certainly have the tools to do that today.
    • Re:Nah (Score:4, Insightful)

      by vertinox ( 846076 ) on Wednesday June 04, 2008 @09:26AM (#23651985)
      AI is our generation's flying car. It's what we see in the future, not what will be.

      I don't know. I think AI is economically easier and more desirable to achieve than a flying car.

      The internet was predicted at about the same time, but no one really paid attention; because it was economically viable and actually desirable (no drunk drivers or grandmas driving 300mph into buildings with what amounts to a missile), it came about.

      Secondly, AI in any form is desirable. From something as simple as filtering data, to something more advanced like picking stocks, to the final goal of actually being a company's CEO - this is what many companies are investing in right now.

      Of course no one is building a super intelligent CEO-in-a-box as of now, but many companies are developing programs that are borderline AI for choosing their best investments, especially the larger financial firms that manage mutual funds.

      Now, they don't call them AI at this point, but they are approaching it, and I would wager that when it becomes viable, people will be building MBAs-in-a-box to determine strategic decisions.
      • The current stock pickers are Artificially Dumb, since the main driver (avoiding big losses) is predicting just how irrational all the other stock pickers - meat or silicon - are going to be. That's not really a direction that we want to explore without a Common Sense Override, since sooner or later we'll hit on the perfect feedback loop which will sell the entire global economy for $3.50 to an Elbonian day trader.
      • Re: (Score:3, Informative)

        by LotsOfPhil ( 982823 )

        Of course no one is building a super intelligent CEO-in-a-box as of now, but many companies are developing programs that are borderline AI for choosing their best investments, especially the larger financial firms that manage mutual funds.
        Now, they don't call them AI at this point, but they are approaching it, and I would wager that when it becomes viable, people will be building MBAs-in-a-box to determine strategic decisions.

        I think you are talking about "black box" trading at quant...

    • I hate naysayers such as you. Now if you'll excuse me, my robotic butler is informing me that my space elevator car to moonbase 23 has arrived.
    • by Yvanhoe ( 564877 )
      We have flying cars. They are only oddities that are not economically viable. Hell, we even have jetpacks and privately funded space travel.
      The difference from most other predictions is that AI doesn't have to become economically viable. You only need one, somewhere, to make a technological disruption.
    • I think you're misinterpreting the concept of the singularity. What we hit there is a point where things will change so drastically and quickly that we are currently completely unable to predict what might happen afterwards. Think of it as an immense chaos injection. We really don't know what will happen afterwards.

      The primary things influencing the outcome are our current state of mind. Considering the drive humans have to prove themselves superior by wiping out anyone even slightly different from them...
  • I said pretty much the same thing [slashdot.org] Ramachandran said in a Kurzweil related thread yesterday. Funny how that works.
  • Where's my motherf'ing flying car?
    • Re: (Score:3, Informative)

      by Idiomatick ( 976696 )
      There have been flying cars... I hate people using this. We have had the technology to make flying cars for a long time. The reason you don't see them is because they are expensive. If you think a Hummer gets bad mileage, a flying car gets much worse. Since it doesn't use wings for lift (the type envisioned by most people), you need to expend many times more fuel than a plane, which uses too much already. Then of course you need to expend lots of effort making it lightweight. And it's doomed to being a one...
      • by jc42 ( 318812 )
        We have had the technology to make flying cars for a long time. The reason you don't see them is because they are expensive. If you think a Hummer gets bad mileage, a flying car gets much worse.

        Huh? It's easy to find data on the fuel usage of various vehicles, including airplanes. There's a rough summary at this wikipedia page [wikipedia.org]. Airplanes generally have roughly the same fuel use per passenger per 100 km as autos. This includes the general observation that the bigger the vehicle, the smaller the fuel...
  • Will people move data like in Johnny Mnemonic in their heads?
    • Re: (Score:3, Funny)

      by Yetihehe ( 971185 )
      With current technology I can very easily move some 20-30 GB in my stomach (2 GB microSD cards in some small pill-like protective case). But download is then a little shitty...
  • No thanks!

    One of the great things about current technology is the ability to unplug, and sometimes I want to do just that.
  • A claim was also once made that we would no longer need "paper" by the year 2000. Even with this guy's previous track record on the sudden internet boom, the computer chess champion, etc... C'mon now, adding computers to the brain? Don't get me wrong, I think it will eventually happen, just not by the year 2020. At one point in the article it says by the 2020's, and in another part it says he made a $10k bet that it would happen by 2029. I know that's still considered the 2020's, but still, seems a little too early...
  • Adaptation (Score:5, Insightful)

    by jonastullus ( 530101 ) * on Wednesday June 04, 2008 @09:16AM (#23651773) Homepage
    IANANS (I Am Not A Neuroscientist), but as with other approaches to interfacing the human brain with peripherals, it seems to work really well to let the brain do the hard interfacing work.

    So, as haphazardly as the brain's structures, memory storage, sensory input, etc. might have evolved, it might still be flexible enough to figure out a sufficiently simple interface with anything you might connect to it. Given smart training for finding the newly connected "hardware", it might be possible to interface with some really interesting brain extensions.

    The complexity and the abstractness of the extension might be limited by the very pragmatic learning approach of the brain, making it more and more difficult to learn the interface if the next step is too abstract/far away for the brain to "stay interested". Though maybe with sufficiently long or "intense" sensory deprivation that could be extended a bit.

    My problem with the argument of the "haphazard" structure of the brain is that it could have been used to deny the possibility of artificial limbs or artificial eyes, which both seem to work pretty well. Sure, these make use of already pre-existing interfaces in the brain, but as far as I know (not very far) the brain is incredibly malleable at least during the first 3 years of childhood.

    So, as ethically questionable as that may sound to us, it might make sense to implant such extensions in newborn babies and let them interface to them in the same way they learn to use their eyes, coordinate their limbs and acquire language.

    Good times ;)
    • My worry would be that our brain interfaces too well with the new gadgets.
      "how did he die?"
      "his spacial awareness co-processor run out of battery, and he run into a wall"
      Adding extra-human capabilities without turning a human into a human-machine hybrid, each depending on the other for survival, sounds like the true challenge. And that's not even looking into the ethical challenges of preventing borg-like control via the add-ons. "Don't fret about the elections, Mr. President. The new citizen 2.1 firmware...
  • by jeiler ( 1106393 ) <go.bugger.offNO@SPAMgmail.com> on Wednesday June 04, 2008 @09:17AM (#23651785) Journal

    He not only makes predictions about technology (which is a feasible endeavor, though fraught with difficulties), but also about the universe that the technology will interact with. Predicting that brain scan technology will improve is (pardon the pun) a no-brainer. Predicting that we will map out hundreds of specialized areas within the brain is completely off the wall, because we don't know enough about brain function to know if all areas are specialized.

  • Balls of crystal (Score:4, Insightful)

    by sm62704 ( 957197 ) on Wednesday June 04, 2008 @09:18AM (#23651801) Journal
    by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves

    As a cyborg [reference.com] myself, I don't see any sane person adding a computer to his brain for non-medical uses.

    I was going to say that sane people don't undergo surgery for trivial reasons, then I thought of liposuction and botox for rich morons, and LASIK for baseball players without myopia. I don't see any ethical surgeons doing something as dangerous as brain surgery for anything but the most profound medical reasons, like blindness or deafness.

    As to the "as smart as ourselves", the word "smart" [reference.com] has so many meanings that you could say they already are and have been since at least the 1940s: "1. to be a source of sharp, local, and usually superficial pain, as a wound." Drop ENIAC on your foot ans see how it smarts. "7. quick or prompt in action, as persons." By that definition a pocket calculater is smarter than a human.

    Kurzweil has been saying this since the 1970s, only then it was "by the year 2000".

    We don't even know what consciousness is. How can you build a machine that can produce something you don't understand?
  • All concerns of security aside, I do think that sophisticated direct brain connections with computers will be coming along pretty soon - think along the lines of what they're doing now with robotic limbs and such. It absolutely won't surprise me if within a few more years (5-10?) that kind of stuff, an artificial limb being controlled by the brain exactly as a natural limb is, is completely commonplace. And direct brain control is the best interface around, really.

    For me, the huge thing will be when I'm able...
  • And there have been some remarkably bad ones.

    What new tools? What festival? Why doesn't Kurzweil get a first name while Ramachandran gets not only a first name but an initial? Has Ray dropped the Ray?

  • The brain is an adaptive system. Provide it with a stimulus, and it will reprogram itself. How do you think the monkey learned how to use the robotic arm [newscientist.com]? Did they hack into the neurons and input code to work a third arm?

    No, the monkey's brain spontaneously created the neural network to control it. Sentient beings aren't computers, at least not in the conventional sense, because they reformat themselves to process new data (learning), and even to process new types of inputs. One might be able to build...
  • From TFA:

    Two decades ago he predicted that "early in the 21st century" blind people would be able to read anything anywhere using a handheld device. In 2002 he narrowed the arrival date to 2008. On Thursday night at the festival, he pulled out a new gadget the size of a cellphone, and when he pointed it at the brochure for the science festival, it had no trouble reading the text aloud.

    I'm guessing that 20 years ago he was thinking of a handheld device that would actually allow blind people to literally...

    • From the wiki, the prediction is described as:

      Blind people navigate and read text using machines that can visually recognize features of their environment.

      Which is not quite the same as what he has, but no neural interfaces are mentioned either. He also promised robot cars and viable automatic translators in the same book, so I'm not terribly impressed by his accuracy.

      Kurzweil also makes a lot of assumptions about our ability to continue improving computers, despite the limits of silicon technology and the current lack of a viable replacement.

  • The future... (Score:2, Informative)

    The future Conan???

    PS Anyone having trouble getting their rightful Karma bonuses despite still having 'excellent' Karma?
  • 8th World Wonder? (Score:2, Insightful)

    by octogen ( 540500 )
    ...that means, in 12 years we will be able to write code as complex as our brains, without any catastrophic bugs that crash it frequently or lead to totally useless results?

    I don't think so. Computers are now faster and "bigger" (not physically) than 30 years ago. Programs have more functions than 30 years ago.

    But essentially, they do exactly the same thing as 30 years ago, just MORE of the same thing. And they don't do it BETTER; they still have the SAME BUGS, and the SAME NUMBER of bugs per line...
  • I'm an extremely experienced programmer with more than a passing interest in how human brains are set up. What the good doctor says about brains being designed by hackers is pretty accurate, but he misinterprets the description.

    "God" (and I presume he means Einstein's God here) isn't the hacker that creates the human brain, the human behind the wheel is. We have varying genetic predispositions, but even after initial conditions are set, our brains develop differently based on our experiences. It's well k
  • I'm very surprised no one has seen fit to link to this article [slashdot.org].

    I think that man-machine interfaces in general are vastly overlooked, and I'm going to get specific. A senior partner at a mid-sized architecture firm keeps asking me why it takes so long to produce a drawing, given that the parts can be created quickly on the command line. What if you could just think of it, and it appeared? Sure, there would have to be some scaling algorithms involved, and it would probably take some practice, but it would ultimately...
  • Will not happen... (Score:3, Informative)

    by mario_grgic ( 515333 ) on Wednesday June 04, 2008 @11:51AM (#23654863)
    Kurzweil is one seriously messed up scientist. This guy extended Moore's law (look up Kurzweil's law of accelerating returns) to predict that civilization as we know it will cease to exist and will effectively become a civilization of super/trans-humans or artificially intelligent beings by 2020.

    Never mind all the scientific or technical obstacles that even a non-scientific person could think of, let alone once we get into philosophical issues (for things we don't even have words to talk about yet).

    Yet there is still a very simple reason why the prediction will not happen. Does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?

    I've said it before and I will say it again. This is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.

    But one thing is for sure, Kurzweil will die just like every other "prophet" before him.
    • by Creepy Crawler ( 680178 ) on Wednesday June 04, 2008 @01:09PM (#23656075)
      ---Kurzweil is one seriously messed up scientist. This guy extended Moore's law (look up Kurzweil's law of accelerating returns) to predict that civilization as we know it will cease to exist and will effectively become a civilization of super/trans-humans or artificially intelligent beings by 2020.

      A scientist's job is not to criticize the results when one doesn't "like" them. Instead, one interprets results, and this is what he did. The math works, and he stands behind that result.

      ---Never mind all the scientific or technical obstacles that even a non-scientific person could think of, let alone once we get into philosophical issues (for things we don't even have words to talk about yet).

      Let me pose this question to you then: What happens when capitalism and high technology blend? The people who matter want this tech he describes: genetic tailoring, nanotech, and robotics.

      Gene manipulation and immune resets are possible, but only for the elite few who can afford them. Genetic diseases are gotten rid of by tailoring disease-free marrow and injecting it. I remember when the Human Genome Project started... almost 15 years to complete. SARS took 29 days.

      We already use nanotech in our everyday life. Exactly how do they rate CPUs and GPUs? That's right, by the process used to create them. Nanotech doesn't mean nano-robots, but it will eventually. Nanotech means we can manipulate matter at scales around 10^-9 m. We also see nanotech in the labs-on-a-chip the military is currently using for biowarfare detection. It's there, hiding in plain sight.

      Robotics is the hardest now, but that's only because we don't have a powerful enough vision system processor. Once we have that, things will get scary crazy (as in the story Manna crazy). And along with "robotics" we have plenty of software to run some interesting things right now. Some guy in his garage built a fully automatic heart-targeting gun turret a la Team Fortress Classic using homemade gear and COTS parts. 2 webcams with parallax was all he needed for basic motion/depth detection.
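
      The two-webcam trick reduces to one classic formula; here is a minimal sketch with made-up camera numbers (mine, not the turret builder's):

          # Depth from stereo parallax: Z = f * B / d, where d is how far
          # the same feature shifts between the left and right images.
          FOCAL_PX = 800.0       # assumed focal length, in pixels
          BASELINE_M = 0.10      # assumed 10 cm between the two webcams

          def depth_m(disparity_px):
              return FOCAL_PX * BASELINE_M / disparity_px

          for d in (80, 40, 20, 10):
              print(f"disparity {d:3d} px -> target at {depth_m(d):.1f} m")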

      ---Yet there is still a very simple reason why the prediction will not happen. Does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?

      Which is why the USA will stagnate. We need fewer of these BS laws, and allowances for scientists to experiment with consenting subjects who qualify as sane. We could have a working bionic plug-in eye if it wasn't for the stranglehold the FDA and AMA hold on the USA. I'd say the FDA needs to be neutered to a "recommend/do not recommend" role, if it exists at all.

      ---I've said it before and I will say it again. This is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.

      My parents started taking supplements after hearing about them from news broadcasts and Dr. Oz on Oprah. I chose not to, while observing what happens. One such drug is resveratrol, along with L-lysine and massive dosages of vitamin C (4g a day). My mom took glucosamine and chondroitin sulfate for her back after a friend (who is a veterinarian) recommended it to her. Animal clinics use that complex specifically for severe arthritis in animals. No company can make money paying the required FDA fees, so it stays a "supplement". My mom healed her back to 100% with the glucosamine/chondroitin.

      Well, back to the new drugs... My mom and dad started the cocktail. My dad's bald on the top - or was. One of those drugs is actually regrowing his hair. We're not sure if it's the C, the resveratrol, or the L-lysine, but it's something. Rogaine (?) never worked. My mom's knees also cracked and popped in the joints. Now she can feel her knees healing.

      They took Linus Pauling's recommendation on massive C dosages, and it seems to work as he claimed. Considering he won 2 Nobels in 2 fields, we respect him, even if the medical society does not. I have thought about following this same regimen and recording my progress, as it does seem to work for them (placebo is not strong enough to grow hair that hasn't grown in 20+ years).

      ---But one thing is for sure, Kurzweil will die just like every other "prophet" before him.

      Maybe.
