Biotech

Ray Kurzweil Does Not Understand the Brain 830

jamie writes "There he goes again, making up nonsense and making ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced..."
  • Sounds reasonable (Score:1, Interesting)

    by mangu ( 126918 ) on Tuesday August 17, 2010 @12:39PM (#33277300)

    1. Technology is growing exponentially
    2. The brain isn't some magical soul-endowed jesus box. It's a function of physics

    PZ Myers threw a red herring there. What Kurzweil says is pretty reasonable: he used the total amount of information in the genome to get an upper-limit estimate of the amount of library code needed to simulate a brain. I say "library" to distinguish it from data, since a lot of our brain's information comes from our experiences; the library is the instincts.
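
    A back-of-envelope version of that upper-limit estimate (the specific figures are mine, not from either article; ~3.2e9 base pairs at 2 bits each, and ~25 bytes per line of source, are assumptions):

        # Rough sketch of the genome-as-upper-bound arithmetic.
        base_pairs = 3.2e9
        raw_bytes = base_pairs * 2 / 8      # 2 bits per base -> ~800 MB raw
        compressed = 25e6                   # Kurzweil's claimed ~25 MB after compression
        lines_of_code = compressed / 25     # assume ~25 bytes per line of source
        print(f"raw: {raw_bytes/1e6:.0f} MB, compressed: {compressed/1e6:.0f} MB, "
              f"~{lines_of_code/1e6:.0f} million lines")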

    Myers goes off on a tangent about biochemistry, which has nothing to do with the argument. I've never read anything hinting that the way to simulate a human brain would be to simulate how the molecules in the brain behave. We don't build airplanes with flapping wings either; machines can emulate the functionality of a living being without needing to simulate the exact details.

    From the number of neurons in the human brain, considering how many interconnections there are and how fast the neurons can fire, I think a machine with one million processing cores at 1 GHz would have approximately the same data-handling capacity as a human brain. The rest is software. Neural-network software is pretty much routine stuff; the tricky part is learning what the interconnections between the neurons are.
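
    "Routine" in roughly this sense: a minimal rate-model update where every neuron takes a squashed weighted sum of its inputs. The sizes below are toy values, and the random connectivity is exactly the part we don't know:

        import numpy as np

        # Minimal neural-network update loop: toy size, random weights.
        # The hard part is filling `weights` with the brain's real wiring.
        rng = np.random.default_rng(0)
        n = 1000
        weights = rng.normal(0.0, 0.1, (n, n))
        state = rng.random(n)

        for step in range(100):
            state = np.tanh(weights @ state)   # one update of every neuron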

  • by Lazy Jones ( 8403 ) on Tuesday August 17, 2010 @12:41PM (#33277334) Homepage Journal
    Kurzweil seems to understand the basics of Algorithmic Information Theory [wikipedia.org]; whether by intuition or study, I can't tell. What I can tell is that PZ Myers has trouble comprehending the interaction of code and data (hint: the history of billions of cells is data). Seen from outside the field of highly specialized machines for processing digital information, 8 bytes of code can seem to do an extremely complex piece of work on their environment, just like small proteins observed from outside their "working environment". When we model the brain successfully, we will probably not do it by simulating proteins and their environment; we will simply simulate the input/output, i.e. at a higher level than the protein-plugging that gets PZ so aroused.

    To simplify it enough that a computer-science-ignorant biologist with a tendency toward inane rants can possibly get it: you don't need to simulate electrons in a semiconductive material at specific temperatures in order to build a complete, working emulator of an old computer.
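
    To make the emulator analogy concrete, here is a toy machine (the instruction set is invented for illustration) emulated purely at the level of its input/output behaviour, with no electrons or temperatures anywhere in sight:

        # Toy register machine emulated at the level of behaviour, not physics.
        def run(program, x):
            acc = x
            for op, arg in program:
                if op == "add":
                    acc += arg
                elif op == "mul":
                    acc *= arg
            return acc

        # The "hardware" this replaces leaks electrons and dissipates heat;
        # the emulation only has to reproduce what comes out for what goes in.
        print(run([("add", 3), ("mul", 2)], 5))   # -> 16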

  • Infinite complexity? (Score:3, Interesting)

    by mangu ( 126918 ) on Tuesday August 17, 2010 @12:44PM (#33277374)

    After one reads an article about the infinite complexity of the human brain, one has to wonder if the fundamentalist protestants are the whackjobs

    What do you mean "infinite"? The human brain is composed of one hundred billion or so neurons. Looks pretty much finite to me. I have ten times as many bytes of information on my hard disk.
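
    For the record, that works out as follows (assuming a 1 TB drive, which is where the "ten times" comes from):

        neurons = 1e11          # ~one hundred billion neurons
        disk_bytes = 1e12       # a 1 TB drive
        print(disk_bytes / neurons)   # -> 10.0 bytes of disk per neuron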

  • by nofx_3 ( 40519 ) on Tuesday August 17, 2010 @12:47PM (#33277430)

    Obviously you've never heard of FPGAs: http://en.wikipedia.org/wiki/Field-programmable_gate_array [wikipedia.org] While you can't add new connections in the strictest sense, you could conceivably create a chip with a whole bunch of generic unused hardware, and program into the rest of the hardware an algorithm that allows new connections to be made from that raw material.
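
    In software terms the principle looks roughly like this sketch (an actual FPGA is configured with vendor toolchains; the class and method names below are invented for illustration):

        # Sketch of "generic unused hardware" plus a rewiring algorithm.
        class Fabric:
            def __init__(self, used, spare):
                self.units = list(range(used + spare))
                self.next_free = used        # units >= next_free are uncommitted
                self.connections = set()     # (src, dst) pairs

            def connect(self, src, dst):
                self.connections.add((src, dst))

            def grow(self, src):
                """Claim a spare unit and wire it to src: the 'new connection'."""
                if self.next_free >= len(self.units):
                    raise RuntimeError("out of spare fabric")
                new = self.next_free
                self.next_free += 1
                self.connect(src, new)
                return new

        f = Fabric(used=4, spare=4)
        f.grow(src=2)   # a new connection made from pre-provisioned raw material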

  • by ceoyoyo ( 59147 ) on Tuesday August 17, 2010 @12:55PM (#33277536)

    You're conflating things that are entirely made up and claimed to be fact, predictions based on certain observations (singularity), and things that are known to be possible but that we don't know how to pull off artificially yet (intelligence). These three categories are very different. PZ actually should be ashamed for being so lazy as to compare Kurzweil, particularly in this instance, to Chopra.

  • Re:Uh (Score:3, Interesting)

    by Apatharch ( 796324 ) on Tuesday August 17, 2010 @01:02PM (#33277622)

    I don't think the claim is entirely implausible; 25MB of code may well suffice to simulate the human brain if it were written in something like brainfuck [wikipedia.org].

    I do however disagree with the assertion:

    The genome is not the program; it's the data.

    The difficulty in truly understanding the genome is that it's both program and data.
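
    One toy way to see the duality: a string that the interpreter below executes as instructions, while those same instructions read the string (and their own output) back as data. The alphabet is invented and nothing here is biological:

        # A toy "genome" that is both the program being run and the data it reads.
        genome = "CGDC"   # C = copy next symbol, G = grow, D = duplicate output

        def express(genome):
            out = []
            for i, op in enumerate(genome):
                if op == "C" and i + 1 < len(genome):
                    out.append(genome[i + 1])   # reads the genome as data
                elif op == "G":
                    out.append("x")
                elif op == "D":
                    out.extend(out)             # reads its own output as data
            return "".join(out)

        print(express(genome))   # -> "GxGx"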

  • by peter303 ( 12292 ) on Tuesday August 17, 2010 @01:03PM (#33277632)
    Ray's documentary about the Singularity [imdb.com] has been touring the national film festivals along with Ray. I saw it in June. It begins as a dull talking-heads piece about the current state of A.I. Many of the big-name A.I. scientists are interviewed. Then it transitions into a crime-drama story about the legal rights of A.I.s. That part was more interesting, since it had a story. The film is full of special effects that advance the story. Although I know most of the film to be factual, I suspect it will look like a sci-fi movie to the average audience member. I think Ray is looking at broader distribution on cable television or in art-house theaters after the festival run.

    Ray was interesting in person during a film-makers' Q&A. He reminded me of Woody Allen, but more confident and intelligent. He graduated from M.I.T. about a decade before I did. I personally believe in the Singularity, but more likely in centuries rather than decades.
  • Re:Sounds reasonable (Score:3, Interesting)

    by brasselv ( 1471265 ) on Tuesday August 17, 2010 @01:09PM (#33277700)

    From the number on neurons in the human brain, considering how many interconnections there are and how fast the neurons can fire, I think a machine with one million processing cores at 1 GHz would have approximately the same data handling capacity as a human brain.

    We are not yet sure whether the equation

      "human brain" = "some current technology" * "some large number"

    has merit.

    I wish we were there, but the vast majority of neuroscientists currently think this is NOT the case. There is likely some qualitative difference that we still fail to understand. Assuming the equation above to be true is largely responsible for the clear failures of AI over the past few decades.

    PS: To avoid misunderstandings, this does NOT mean there is something mystical about our brain. We simply haven't fully understood how it works yet, but we are making very fast progress in this area, especially in the last 15 years or so. It's still a long road ahead, though.
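
    Part of why the equation is so hard to evaluate: the "large number" swings by five orders of magnitude depending on assumptions nobody can currently pin down. A sketch, where every figure is a guess (that being the point):

        # The capacity equation under two sets of invented assumptions.
        synapses = 1e14
        for ops_per_synapse, label in [(1e2, "coarse rate model"),
                                       (1e7, "detailed biophysics")]:
            total_ops = synapses * ops_per_synapse   # operations per second
            cores = total_ops / 1e9                  # 1 GHz cores, 1 op per cycle
            print(f"{label}: ~{cores:.0e} cores at 1 GHz")
        # -> ~1e+07 cores vs ~1e+12 cores, from the same "equation"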

  • by Lord Ender ( 156273 ) on Tuesday August 17, 2010 @01:17PM (#33277830) Homepage

    There is evidence to support the theory of the technological singularity. There is no evidence to support the idea of "the rapture." Your comparison is unfair.

    No one can deny that technology is advancing. It is hard to argue against the claim that the rate of advancement is accelerating. Yesterday's intractable problems are today's hobby projects. The idea of the Singularity is simply that what is possible according to physics will become practical as our technology progresses.

    Feel free to argue over the timeline of the singularity, but don't dismiss the entire concept. In the face of all the evidence, that would just be silly.

  • by Anonymous Coward on Tuesday August 17, 2010 @01:27PM (#33277978)

    religious woo

    Isn't that a tautology?

    (Incidentally, I'm really curious whether this comment will end up at -1, Troll or +5, Interesting now.)

  • by Surt ( 22457 ) on Tuesday August 17, 2010 @01:36PM (#33278096) Homepage Journal

    I don't know where you got your definition of 'the Singularity', but I'd bet the majority of Slashdot readers would disagree with you. I expect most of them take the Singularity to be the moment when an AI capable of building an AI superior to itself exists, and begins the freefall toward an AI operating at the maximum capability the universe will allow.

    http://en.wikipedia.org/wiki/Technological_singularity [wikipedia.org]

    And of course the singularity folks typically, and conveniently, ignore the possibility that we are already close to the limit on intelligence density with the human brain, or that the problem could turn out to grow along a steep exponential, becoming vastly harder at each step, etc.
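
    The recursive definition is easy to state as a toy loop, including the ceiling mentioned above (the gain function and the limit are invented for illustration):

        # Toy recursive self-improvement: each generation designs the next.
        def next_generation(iq, ceiling=1000.0):
            gain = 0.5 * (ceiling - iq)   # improvement shrinks near a hard limit
            return iq + gain

        iq = 100.0
        for gen in range(10):
            iq = next_generation(iq)
            print(gen, round(iq, 1))      # rushes up, then flattens: no "freefall"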

  • Re:Uh (Score:3, Interesting)

    by Surt ( 22457 ) on Tuesday August 17, 2010 @01:42PM (#33278194) Homepage Journal

    And the huge hole in his theory is the execution environment: the CPU that the brain is running on is REALITY itself. So be sure to add that to your cost of simulating the brain.

  • by Lord Ender ( 156273 ) on Tuesday August 17, 2010 @01:47PM (#33278254) Homepage

    From your wikipedia link:

    Technological singularity refers to a prediction in Futurism that technological progress will become extremely fast, and consequently will make the future (after the technological singularity) unpredictable and qualitatively different from today.

    The idea is more vague than your statement about AI writing AI; you indicate only one possible definition/manifestation of the concept.

  • Re:Uh (Score:3, Interesting)

    by GizmoToy ( 450886 ) on Tuesday August 17, 2010 @01:50PM (#33278302) Homepage

    I don't think it's all that big a leap. There are lots of very smart people actively trying to simulate human intelligence. While a million lines of code is a fairly large undertaking, it's not an unmanageable amount. If anyone actually believed it could be done in a million lines of code, it would have been done, because the profit potential is huge and undeniable. Indeed, why isn't Kurzweil working on it right now?

    Even creating just the part that could find interactions between proteins based upon their genetic structure and relative concentrations would make you fabulously wealthy.

    The reality is that the problem is vastly more complicated than presented in his estimates.

  • by Khazunga ( 176423 ) on Tuesday August 17, 2010 @01:56PM (#33278400)
    It hasn't happened decades later because the singularity date isn't past yet. You may criticize Kurzweil, as I do, but you should read what he says before criticizing him so vaguely. As it is, you sound like a misinformed radical. Just so you gain something from this post: a) I think he predicts the singularity will happen around 2030; b) he predicts humans will 'fuse' with machines, not that machines will replace humans.
  • by careysub ( 976506 ) on Tuesday August 17, 2010 @02:00PM (#33278462)

    There is a major flaw in the article too: The author apparently believes that you need to simulate the proteins and the exact chemical method for interaction in order to simulate the result of the interaction. It is the result that is important, not the method. I won't say that it is an easy matter of determining how the cells in the brain interact with one another, nor will I say that the chemical interactions are entirely precise, but if there is a finite number of possible outcomes to all possible interactions between two cells in the brain, it can be simulated.

    ...

    It is not a flaw. He was explaining what one would have to do to derive "the brain" from the genome, which was Kurzweil's contention.

    One could indeed simply look at the complete brain and model it, true, but then you are looking at 10^10 neurons, each connected (not at random) to some 10,000 other neurons to produce a net of 10^14 synapses.

    To understand the challenge of modelling a system this vast and complex, consider the state of research on the model organism Caenorhabditis elegans (a tiny worm). Its nervous system has been (almost) exactly mapped: it contains 302 neurons, 6393 chemical synapses, 890 gap junctions, and 1410 neuromuscular junctions. Imagine now the difficulty of reaching this level of precision in a system 10^7 times larger.

    But the good news is that with this level of neuro-mapping precision we can now completely simulate the neural network ("brain") of a tiny worm, right? Right?

    Wrong. Not by a long shot. We are still struggling to characterize the behavior of this primitive neural net, and making efforts to simulate some aspects of that behavior. The 302-neuron "brain" is far beyond our ability to simulate at present.
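
    For scale: a leaky integrate-and-fire loop over a network of 302 neurons and ~6393 synapses runs in milliseconds on any laptop. What we lack is the real wiring, weights, and dynamics; everything below is randomized, which is exactly why running it tells us nothing about the worm:

        import numpy as np

        # C. elegans scale, but with RANDOM wiring and invented dynamics.
        rng = np.random.default_rng(1)
        n, n_syn = 302, 6393
        w = np.zeros((n, n))
        pre = rng.integers(0, n, n_syn)
        post = rng.integers(0, n, n_syn)
        w[post, pre] = rng.normal(0, 1, n_syn)

        v = np.zeros(n)                    # membrane potentials
        for t in range(1000):
            spikes = v > 1.0               # fire past threshold
            v[spikes] = 0.0                # reset
            v = 0.9 * v + w @ spikes + 0.05 * rng.random(n)  # leak + input + noise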

  • by Anonymous Coward on Tuesday August 17, 2010 @02:15PM (#33278714)

    I have also studied algorithmic information theory. Let me ask you a quick question: what is the Kolmogorov complexity of this line of code:

    #include "syscall.h"

    Does it include just the length of the string '#include "syscall.h"' in bytes, or does it also include the length of all the functions from syscall.h that the including program uses?

    In Kurzweil's biology, the Kolmogorov complexity of #include "syscall.h" is 20 bytes.
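
    Kolmogorov complexity is uncomputable, but compressed size makes a crude stand-in for the point: the include line is tiny, and what it drags in is not. The header path below is system-dependent and illustrative:

        import zlib

        # Compressed length as a crude stand-in for Kolmogorov complexity.
        include_line = b'#include "syscall.h"'
        print(len(include_line), len(zlib.compress(include_line)))  # the "20 byte" view

        # The honest measurement also charges for everything the line drags in:
        # header = open("/usr/include/syscall.h", "rb").read()  # system-dependent
        # print(len(zlib.compress(include_line + header)))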

  • by Anonymous Coward on Tuesday August 17, 2010 @07:26PM (#33282660)

    This isn't about modeling the brain, it's about replicating it. Your point only applies to models, which by design simplify a problem.
