Ray Kurzweil Responds To PZ Myers

On Tuesday we discussed a scathing critique of Ray Kurzweil's understanding of the brain written by PZ Myers. Reader Amara notes that Kurzweil has now responded on his blog. Quoting: "Myers, who apparently based his second-hand comments on erroneous press reports (he wasn't at my talk), [claims] that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain."
  • by Tenek ( 738297 ) on Friday August 20, 2010 @10:29AM (#33314462)
    Clearly, this dispute should be resolved by a poll.
    • by Kilrah_il ( 1692978 ) on Friday August 20, 2010 @10:35AM (#33314538)

      Clearly, Myers did not RTFA (or watch the featured talk, whatever)! Shame on him. He must be old here.

    • by Abstrackt ( 609015 ) on Friday August 20, 2010 @10:40AM (#33314600)
      I say we resolve it with a deathmatch. Then Kurzweil can attempt to reverse-engineer his opponent's brain with his bare hands!
    • by Lord Ender ( 156273 ) on Friday August 20, 2010 @10:49AM (#33314722) Homepage

      It isn't really a dispute.

      Kurzweil is obviously optimistic about his timetables. But his theory of accelerating technology growth calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

      Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews on his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

      • by GreatAntibob ( 1549139 ) on Friday August 20, 2010 @11:27AM (#33315236)

        Kurzweil is more than optimistic - he's just plain guessing. His predictions for the near term are accurate because they don't require big leaps in imagination or technology. His predictions for further out tend to be wrong or loony (many, if not most, of the predictions he made for technology achieved by 2010 back in the 90s were wrong in whole or in part).

        His "theory" of technology growth is ridiculous in the face of prima facie evidence. It's true that experts historically underestimate the rate of technology advancement. It's also true they almost always underestimate the field in which explosive exponential growth takes place. In the 1950s, we were dreaming about flying cars and meals in pill form. Who actually predicted the full extent of the internet in our lives back in 1960? Or ubiquitous cellular communication? Or that we wouldn't have just 3 broadcast television stations? Technological progress is a given and the more limited of Kurzweil's predictions are correct because they typically require modest improvements in current technology - but epiphenomenalism, i.e. the singularity, is far from a given.


        Kurzweil does a fine job making the simple types of predictions (the type that led to predicting flying cars in the 50s). The problem is that, like everybody, he can't predict the "next big thing". Exponential growth in technology always relies on discovering and exploiting as yet undiscovered technologies, and Kurzweil mostly relies on existing tech. That's fine for 10 or 20 years out, but its predictive power gets progressively worse past that (see his predictions for 2010 and beyond made in the 90s, as opposed to the predictions he made in the last 10 years). And, to be honest, most scientists could have made (and did make) the same short-term predictions Kurzweil made. It's not a stretch to think that Moore's Law will keep chugging along for at least 5 years and that people in different fields will exploit that.

        • Re: (Score:3, Informative)

          by Cruciform ( 42896 )

          Heck, even people in the fields of science related to some advancements don't see some of those advancements coming.
          In one of the Futures in Biotech podcasts (a 2007 episode, if I recall) the guest was talking about gene sequencing, and said that as little as four years before they managed to sequence an earthworm genome, it was thought to be impossible because of the work and technology involved. And then they did it. Shortly afterward the human genome project began.

          Whether Kurzweil is in crazyland or not, if he's jus

        • by Lord Ender ( 156273 ) on Friday August 20, 2010 @12:29PM (#33316044) Homepage

          He isn't being loony. If he were loony, he would predict things known to be impossible based on our understanding of physics. He is very specifically predicting developments which (a) people want, and (b) the universe (seems to) allow. This is necessarily murky business, but he at least attempts to set his timetables based on quantifiable, empirical observations as best he can.

          So accepting that predicting the longer-term future is inherently difficult, he at least makes an attempt. You are the sort to just throw up your hands and sling mud at those who try. It's a good thing we have a few people like him. It would be tragic if everyone thought like you.

      • by popsicle67 ( 929681 ) on Friday August 20, 2010 @11:28AM (#33315252)
        P.Z. Myers is not some headline grabbing putz like half the republican party. He would have an interested following regardless of whether he even bothered to talk about Kurzweil or not. Kurzweil has a vested interest in trying to shout down dissenting opinion while Myers has no dog in the fight save illustrating the scientific fallacies and fantasies foisted upon a credulous public by pompous windbags such as Kurzweil.
        • Fanboyish maybe but flamebait?

          Someone mod up/fix this.parent(); IMO Myers has a good enough standing that one slip-up can't suddenly reclassify him as a clueless irate blowhard.

          That is assuming he actually slipped off, I haven't read that fine RTFA article.

      • by Chris Burke ( 6130 ) on Friday August 20, 2010 @12:04PM (#33315674) Homepage

        Kurzweil is obviously optimistic about his timetables. But his theory of accelerating technology growth calls for optimism; there's good reason to believe that experts historically underestimate the rate of advancement.

        Hey, optimism regarding the exponential growth of (some) technology, and the unpredictable and amazing consequences of such is fantastic. I try to be optimistic that it will continue myself (being in a field that has been the poster child for exponential improvement and not liking the idea of this ending).

        Exponential growth in technology ergo artificial brains isn't optimism, it's a (specific) leap of faith.

        Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews on his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

        I guess, but what I considered to be the biggest failing that Myers tore into in the previous article still remains. Kurzweil says Myers is mischaracterizing his thesis, and sure maybe he was at some point. But then he goes right on to emphasize that "the genome constrains the amount of information in the brain prior to the brain's interaction with its environment."

        Aside from the fact that you can't separate the brain's development from its interaction with the environment even in the womb, and that it's doubtful a brain that somehow developed completely without stimulus would look very much like a functioning human brain at all, that's still just not true. It's like saying that the tiny binary produced by compiling "Hello World" constrains the amount of information needed to actually run the program (especially since it's supposed to tell you how to make the computer it's running on, too). Or that the amount of information on a web page is constrained by the size of the .html file. Img tags are not sufficient information to reconstruct the images they reference.

        The genome contains instructions for constructing the human body/brain within the context of another human body. The genome itself is not sufficient information to create that body. It's exploiting a huge amount of external information to allow itself to be as compact as it is.
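        To make the img-tag point concrete, here's a toy sketch (Python; the file name is made up): the page stands in for the genome, and the referenced image for the external information it leans on.

```python
import os
import zlib

# The HTML that *references* an image is tiny, but the information needed to
# actually render the page includes the image bytes, which live entirely
# outside the file. A 1 MB blob of random data stands in for the image
# (random data is effectively incompressible).
page = b'<html><body><img src="brain_scan.png"></body></html>'
image = os.urandom(1_000_000)

page_size = len(page)                   # a few dozen bytes
image_size = len(zlib.compress(image))  # still ~1 MB after compression
print(page_size, image_size)
```

        The file's size bounds the file's information, not the information of everything the file points at.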

        • But then he goes right on to emphasize that "the genome constrains the amount of information in the brain prior to the brain's interaction with its environment."

          To be sure, the genome must be a major factor. I recognize that human bodies are not the products of genomes alone -- indeed, over 90% of the cells in our bodies (counting by sheer number) don't even have our DNA because they belong to our symbiont species -- but surely the complexity of the blueprint for brain-developing-systems goes a long way towards approximating total complexity of the developed-brain-system. Part of my intuition here is an anticipated relative complexity between environment and the

      • by snowgirl ( 978879 ) on Friday August 20, 2010 @12:11PM (#33315768) Journal

        Clearly, Myers has discovered that being unnecessarily angry and insulting leads to more pageviews in his blog. I'm sure he knows his field, and it's great when he tears into real jokers, but he has moved beyond that. He is now being inflammatory just for page hits.

        You missed something. The media will always inaccurately propagate scientific... hell, just about ANY view. They necessarily must summarize, simplify, and downplay. Typically, their own personal interests will cause a bias towards one particularly interesting feature of the advancement or article, and they will focus on that. (Remember the recent "chicken or egg" article whose scientific findings had NOTHING to do with that question?)

        PZ Myers made a bit of a mistake in responding so vehemently to a strawman construction of the media's doing.

        • Re: (Score:3, Insightful)

          by IICV ( 652597 )

          I'm not entirely certain what strawman construction PZ Myers responded to. Ray Kurzweil said (yes, this is from the article, but presumably he actually said something like this):

          Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes.

      • Re: (Score:3, Insightful)

        by joeyblades ( 785896 )

        there's good reason to believe that experts historically underestimate the rate of advancement

        Except in the area of artificial intelligence. About every 5 years, starting back in the early 1950s, some group of experts has proclaimed that human-level intelligence would be simulated on a computer "within the next 20 years". They have all overestimated the growth rate in this field... and continue to do so, in all likelihood.

        Don't confuse what Moore's Law does for technology with growth of knowledge about the human brain. We know a lot more than we did 60 years ago... but we still don't have a clue how

  • It's the only way to be sure.

    • Or send a robot back in time to kill his mom.

      • Or send a robot back in time to rape his mom.

        FTFY.

        • Or send a robot back in time with a canister of his sperm to rape and impregnate his mom with his seed.

          "Who is your Daddy? I am, Literally!"

          • That is actually... nasty. Even worse than Fry being his own uncle.

            However, what does that say about the Time Paradox? How can you go back in time and impregnate your mom before you were born?

    • How should Avatar have ended?

  • by jeffmeden ( 135043 ) on Friday August 20, 2010 @10:37AM (#33314560) Homepage Journal

    This whole discussion reminds me way too much of the million partisan pundit sissy fights that rage endlessly on the internet. If I wanted to see two guys argue about what the other did or didn't say, I would gladly head over to DailyKos or BigJournalism and drown myself in their pedantry. This is slashdot; please save the inanity for the comments and at least give us stories that have meaning!

    • by truthsearch ( 249536 ) on Friday August 20, 2010 @10:41AM (#33314606) Homepage Journal

      I agree, but the original story was interesting (800+ comments). This followup is almost required.

      With "editors," /. should have only quality posts. I'm disappointed almost daily, but it's still better than many other sites.

    • by Stargoat ( 658863 ) <stargoat@gmail.com> on Friday August 20, 2010 @10:44AM (#33314666) Journal

      I'm actually glad to see that Slashdot is participating in such a debate. As a longtime Slashdot resident, I'm happy that Slashdot is attempting to find a niche in the Internet that involves scientific (or semi-scientific) and computer related matters.

      The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

      • The draw to Slashdot needs to be the articles, but also the response to the articles. The comments should be a cut above what you see at other websites.

        And indeed, they are. Or anyway, a small subset of them, which is all that you can hope for. Slashdot is one of a subset of websites on which [various] people who know about many different things share useful information. It's rare indeed that I encounter any truly significant news item (to me, anyway) that isn't discussed here. Timeliness varies but I have only myself and all the rest of you to blame for that.

    • by Ohrion ( 814105 )
      Partisan pundit sissy fight? No, this is somebody defending his research after somebody essentially lied to make him look bad and got press from it.
      • Partisan pundit sissy fight? No, this is somebody defending his research after somebody essentially lied to make him look bad and got press from it.

        I cannot, for the life of me, determine who it is that you think "essentially lied." Lying requires an intent to deceive; i.e. in order to lie, you have to know the truth, and intentionally communicate in a manner that is contrary to that truth, either by creating facts from whole cloth, or by omitting certain pieces of information in order to get your audienc

  • by Zarf ( 5735 ) on Friday August 20, 2010 @10:38AM (#33314586) Journal

    Myers may have been focused on the "reverse engineer from the genome" argument but really the main issue is whether Kurzweil is within a few orders of magnitude of guessing the right level of complexity necessary to simulate a brain. The gist of the Myers argument isn't so much about genomics and ontogeny as it is about the emergent complexity of inter-related systems and I think the real nugget there might be something like: "We could model a brain but that wouldn't mean we modeled a mind. To model a mind you need to model a great deal of the environment the mind lives in... and that is many many orders of magnitude more complex."

    For the record: I hope Kurzweil is right but I rather doubt he is. I don't think he's wrong about how powerful machines will be in 2050; I think he may be wrong about whether those machines can simulate a mind well enough, because I really wonder if the complexity of a mind is actually a superpolynomial problem due to the hyper-connectedness of a mind and its environment.

    • by Zarf ( 5735 ) on Friday August 20, 2010 @10:50AM (#33314742) Journal

      In retrospect, maybe I should have read both articles and thought about what I was writing first instead of just spouting off.

    • The fundamental assumption is that there is some kind of mystical brain/mind dualism. From where I sit, modeling environments really isn't a hard thing to do. Our brains develop minds by not much more than sensory feedback. Experiments with rat brain cells in petri dishes attached to electrodes that control robots have shown that brain cells respond to sensory feedback even in ad hoc configurations. If we can truly model what the brain is physically, then development will be a simple trial and error experie
      • by astar ( 203020 )

        Pooh, I cannot get much traction on mystical, but it is blindingly obvious that the brain deals in sensory stuff and there are thousands of years of developments of the claim that the sensory data is not the universe. So you either do some Plato et al or you say that all you can know is your emotional state and figure reality is effectively some sort of psych thing. If you play Plato, then maybe you end up knowing something about fundamental principles of the universe by looking at the contradictions in

      • by Omestes ( 471991 )

        To paraphrase something somebody wisely said in the previous thread about this topic, you don't need to model the electrons in the circuit of a machine to emulate an NES.

        Except the NES is a mishmash of seemingly random bits and junky non-logical software. You might not need a model of the electrons, but you need the software too. I'm sure someone could, eventually, build a rough facsimile of the human brain, but lacking software you've built nothing but a pile of quivering Jello. Think of it as building

    • I'd be more worried about concurrency issues. If you have to treat each neuron as its own processor in order to simulate it correctly, then even if computers are fast enough to do it, they might not be able to without deadlocking.
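      For what it's worth, the standard mitigation is a global lock order; here's a toy sketch (Python, invented names) of two "neurons" updating each other concurrently without deadlock:

```python
import threading

# Each "neuron" guards its own state with a lock. Every pairwise update
# acquires the two locks in a global order (by id), so no lock cycle --
# and hence no deadlock -- can ever form.
class Neuron:
    def __init__(self, nid):
        self.nid = nid
        self.lock = threading.Lock()
        self.potential = 0

def exchange(a, b):
    # Always lock the lower-id neuron first, regardless of argument order.
    first, second = (a, b) if a.nid < b.nid else (b, a)
    with first.lock, second.lock:
        a.potential += 1
        b.potential += 1

n1, n2 = Neuron(1), Neuron(2)
# 200 threads hammer the pair from both "directions" at once.
threads = ([threading.Thread(target=exchange, args=(n1, n2)) for _ in range(100)]
           + [threading.Thread(target=exchange, args=(n2, n1)) for _ in range(100)])
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n1.potential, n2.potential)  # 200 200, with no deadlock
```

      Without the id ordering, the two argument orders would acquire the locks in opposite sequences and could deadlock; whether that trick scales to billions of neurons is another question.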
    • This starts turning into a definition problem. A matter of semantics.
      A mind anything like a human being's runs on a hardware substrate that's built to interact with a physical environment in ways that promote organic survival. A mind that isn't anything like a human mind could run on very differently designed hardware, but then, if it's that different, how do you determine if it's equivalently complex, and ultimately, what justifies calling it a mind at all? People such as Verno

    • by ceoyoyo ( 59147 )

      "We could model a brain but that wouldn't mean we modeled a mind. To model a mind you need to model a great deal of the environment the mind lives in... and that is many many orders of magnitude more complex."

      Few serious hard AI approaches since the 60s have actually tried to do this as you suggest. Most use the ACTUAL environment rather than trying to model it. This process is usually called "learning."

      PZs meaningful point is that the prenatal development environment affects brain development, in additio

  • Here We Go Again (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@gm a i l . com> on Friday August 20, 2010 @10:42AM (#33314628) Journal

    Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome.

    So put your speech up on your site; all I can find are videos from previous summits [magnify.net]. TED seemingly posted videos as they happened, and therefore we could openly debate them. Summits are great, but not everyone has the time or resources to attend them. I would suggest you move toward a more open format of disseminating your ideas and the very specific and lengthy details about them. I'm not going to buy a book on futurism and wade through it for the details you provide about neurobiology, and I don't think PZ Myers would do that either.

    I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.

    Well, frankly, I don't understand it either. You're applying information theory to lines of code ... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

    The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

    So first it was information theory on the genome and now you're on about compression of the genome. Great, you've applied theoretical limits to lines of code in order to describe a complex biological system and then argued that due to redundancy we can reduce it to 50 million bytes. And what did that buy us exactly? Look at how many lines of code we've devoted to simulating a single neuron or synapse ... and it's not even a complete and accurate simulation. Your theoretical limits are amusing but pointless ... to further apply your 'exponential growth' of the lines of code we can program is further amusing.
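    For reference, here is his arithmetic spelled out (these are his claimed figures from the talk, not measurements):

```python
# Kurzweil's back-of-the-envelope, step by step:
base_pairs = 3_000_000_000       # human genome
bits = 2 * base_pairs            # A/C/G/T: 4 symbols = 2 bits per base pair
raw_bytes = bits // 8            # 750,000,000 -- his "about 800 million bytes"
compressed_bytes = 50_000_000    # his claimed size after lossless compression

print(raw_bytes, raw_bytes / compressed_bytes)  # 750000000 15.0
```

    An implied ~15:1 compression ratio, which is the part doing all the work in his argument.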

    Kurzweil is a futurist with just enough knowledge to sell people. His exponential growth to a singularity and proof of it doesn't do him much good when he doesn't understand the complexity of the brain and then applies theoretical limits to that from other disciplines. He's free to keep preaching, I just question at what point people will give up on him. If he dies soon and pulls an L. Ron Hubbard, what sort of cult then will we have on our hands?

    • Re: (Score:2, Insightful)

      by jcampbelly ( 885881 )

      http://www.vimeo.com/siai/videos/sort:oldest [vimeo.com]
      http://singinst.org/media/interviews [singinst.org]
      http://www.youtube.com/user/singularityu [youtube.com]

      Well, lack of searching is not a lack of material: you can find several hours of Ray's talks on video from Singularity Summit 2007, 2008, and 2009, TED.com, Singularity University, and plain independent YouTube videos. He also has two movies out (I haven't seen either): Transcendent Man, criticizing his esoteric side, and The Singularity Is Near (based on his book), supporting his ideas.

      All

    • It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

      So the implication here is that a genome can create a brain without input from the environment (at least any input that carries information). I have some news: every human ever born has come from a womb. That womb has supplied raw materials and information in the form of the mix and timing of resources. There are no exceptions at all. Would you get a blank brain or a malformed brain if the resources were not supplied in the correct mix? Almost certainly, and that means you need to include at least so

    • Re: (Score:3, Insightful)

      by FelxH ( 1416581 )

      Well, frankly, I don't understand it either. You're applying information theory to lines of code ... and that just doesn't make any sense to me. I haven't heard of it. I haven't heard of anyone say "theoretically could be reduced to x lines of code." I don't know why we're talking about information theory when we're talking about simulating the brain or even understanding the brain.

      Kurzweil doesn't advocate the use of information theory for understanding or modeling the brain. He only used it, in combination with other methods, to get an estimate of how complex the brain actually is (whether his methods and estimates are correct I can't tell). That was, imo, the whole point of the paragraph you quoted ...

    • by gtall ( 79522 )

      Actually, there is something called Kolmogorov complexity, where information theory is cashed out in terms of algorithmic complexity: http://en.wikipedia.org/wiki/Kolmogorov_complexity [wikipedia.org].

      Personally, I think Kurzweil is still full of shit. Systems are usually way more complex than most "futurists" would like to admit. They are finding that with the human genome. The promise was that once it is decoded, we'll find cures for everything. Errr...yeah, well, it sort of depends on how it gets expressed in proteins wh
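      A toy illustration of that connection (Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor gives a computable upper bound on it):

```python
import os
import zlib

# A highly regular string compresses enormously -- its shortest description
# is roughly "repeat 'ACGT' 25,000 times" -- while a random string of the
# same length has no shorter description and barely shrinks at all.
regular = b"ACGT" * 25_000        # 100,000 bytes of pure repetition
random_ish = os.urandom(100_000)  # 100,000 bytes with no structure

small = len(zlib.compress(regular))
big = len(zlib.compress(random_ish))
print(small, big)  # small is a few hundred bytes; big stays near 100,000
```

      Kurzweil's 50-million-byte figure is exactly this kind of compressed upper bound; the dispute is over what that bound does and doesn't tell you about the built brain.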

    • by Attila Dimedici ( 1036002 ) on Friday August 20, 2010 @12:18PM (#33315888)
    You point out what I thought was the failure of Kurzweil's defense against Myers' argument. Kurzweil repeats the claim that Myers said was a wrong assumption on Kurzweil's part: that the genome contains all of the information necessary to create the brain. Myers' argument with Kurzweil boils down to this: the genome does not contain all of the information necessary to reconstruct the brain. There is an awful lot of information about building a living creature contained in various ways in the structure of each cell. For example, if you were to take the nucleus of a fertilized monkey ovum and place it in a fertilized shark ovum (after removing the nucleus of the shark ovum), you would not end up with a monkey, although it would be closer than if you just swapped the genome between the two. There is a lot of information about how to interpret the genome in the cell structure. The same sequence of DNA has been shown to code for significantly different proteins in different creatures.
  • Two decades? (Score:3, Insightful)

    by mcgrew ( 92797 ) * on Friday August 20, 2010 @10:43AM (#33314650) Homepage Journal

    I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

    We don't have more than a rudimentary understanding of how the brain works, or even what Consciousness [wikipedia.org] is.

    Although humans realize what everyday experiences are, consciousness refuses to be defined, philosophers note (e.g. John Searle in The Oxford Companion to Philosophy):[3]

    "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."
    --Schneider and Velmans, 2007[4]

    • Re: (Score:3, Insightful)

      by Zarf ( 5735 )

      A good point. I think Kurzweil is one of those that would say "consciousness is computing" so all you need is enough of the right computations. This is definitely something brain simulations would have to explore. We simply have no idea yet.

      • by Improv ( 2467 )

        Still, it's very reasonable to believe that it is - what else could it be that fits with modern science?

        • by mcgrew ( 92797 ) *

          Why does it have to fit contemporary science? What we know about the universe is almost nothing whatever, compared to what there is to know. We don't know what consciousness is because biochemistry hasn't advanced far enough to understand it. Remember, all thought and feeling and sense is nothing more than complex chemical reactions.

          We have a lot more to learn before we can even ask the question, let alone answer it. If thought is simply computation, why can't a house cat do trigonometry? Trig is easy for a

          • by Improv ( 2467 )

            It has to fit contemporary science because forcing consistency between our models is how we advance them. If we had some object that had "magical" properties and we had physics that didn't cover that object, we'd do well to figure out how to mix them together. Likewise, when we have an unknown object, it makes a lot of sense to assume it's covered by the laws of physics as we know them unless we see strong indicators otherwise. That's how we learn.

            Maybe consciousness in the popular usage is 90% delusion. In

            • by mcgrew ( 92797 ) *

              No, I agree with you, but my point was that we still have way too much to learn before we can even know if it's possible.

              • by Improv ( 2467 )

                I think it's reasonable to have a strong belief that consciousness is computing, based on my general thoughts in philosophy of science and experience with and studies in neuroimaging. I would suggest you look into the state of the field - there's serious progress into making broad maps of brain function, and the characteristics of the neuron strongly suggest reasonable parsimony with computational models.

                I would not care to make a guess on timing or methods - I think Kurzweil may have stuck his neck out muc

      • by Arlet ( 29997 )

        The thing is, you won't find consciousness looking at the signals in the brain. The brain is composed of parts that have no consciousness themselves, and the patterns are too complicated to understand anyway. Even if you manage to see all the patterns at once, you still won't see consciousness.

        The only solution is to look at the behavior. If the simulated brain can have a discussion about consciousness, it has everything you can possibly want.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      We can easily do it within two decades: 2010-2019, and 3560-3569.

    • It is not inconceivable that we could create a thing like a brain which would give rise to consciousness, and yet still not understand what it really is. If we somehow manage to write a computer program which can be (again somehow) qualitatively defined as conscious, then we will need to have first understood consciousness. But if we only assemble a collection of technologies which somehow surprises us with consciousness, then we will have a new direction for research, but not an understanding of the thing

      • by mcgrew ( 92797 ) *

        There's about as much chance of building a sentient computer when we don't know what sentience is as there is of giving someone who knows nothing about electricity a box of electronic parts and having them build a working radio.

        • by Arlet ( 29997 )

          You could use a genetic algorithm, and get the results without understanding how it works.
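          A minimal sketch of that idea (all names invented): a toy genetic algorithm evolves a bit string toward a target without anyone specifying how a good solution is built.

```python
import random

# Evolve a 20-bit string toward all ones using only selection and mutation.
# Nothing in this loop "understands" the solution; fitness pressure does
# the work.
random.seed(0)
TARGET_LEN, POP, GENS = 20, 30, 200

def fitness(ind):
    return sum(ind)  # count of 1-bits

def mutate(ind):
    # flip each bit independently with 5% probability
    return [b ^ (random.random() < 0.05) for b in ind]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]  # keep the fitter half (selection)
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(fitness(best))  # climbs to (or very near) the maximum of 20
```

          The same blindness is the point: you get a working artifact, and the job of understanding it only begins afterward.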

    • Re:Two decades? (Score:5, Insightful)

      by Angst Badger ( 8636 ) on Friday August 20, 2010 @11:05AM (#33314970)

      We don't have more than a rudimentary understanding of how the brain works, or even what consciousness is.

      People say this a lot, and I don't understand why. Our understanding of how the brain works is a good deal more than rudimentary. The advances we've made in understanding the brain on both the large and small scales in just the last five years are breathtaking. Our understanding is a long way from complete, but Kurzweil is correct at least to the extent that our understanding is significant and appears to be growing at an accelerating rate. It may not be accelerating as fast as he expects, but keeping up with new developments in neurology at even a cursory level is quite challenging. The main difficulty we face at present in implementing the structures we do understand in silicon is the lack of adequate parallelism in current computing hardware, not our understanding of the relevant neural structures.

      As for consciousness, unless you believe in some kind of pre-scientific vitalism, a reasonable working assumption is that it is an emergent property of brain-like structures. Unless and until we discover otherwise, there is no reason to wait for an understanding of consciousness to begin working on replicating the functionality of the brain. Quite likely, the attempt to replicate the brain will reveal more about consciousness than idle philosophical inquiries. Those so inclined might want to settle on a definition of consciousness before trying to figure out how it works.

      • by mcgrew ( 92797 ) *

        It may not be accelerating as fast as he expects, but keeping up with new developments in neurology at even a cursory level is quite challenging.

        Yes, we're learning a lot, but they still can't fix a broken spinal cord, let alone reverse brain damage.

        As for consciousness, unless you believe in some kind of pre-scientific vitalism, a reasonable working assumption is that it is an emergent property of brain-like structures

        I simply don't know. It may well be that everything is sentient, that even subatomic part

    • Re: (Score:3, Interesting)

      by Arlet ( 29997 )

      Dennett has already provided some insights. The problem is that people find that it doesn't match their intuition, so they keep looking for something else. The biggest hurdle you have to take is to realize that you can't know your own consciousness. Once you get beyond that, the problem becomes a lot easier.

      http://www.youtube.com/watch?v=kOxqM21qBzw [youtube.com]

    • We don't have more than a rudimentary understanding of how the brain works, or even what consciousness is.

      Woo-woo aside, consciousness is almost certainly no more than "computing beyond our comprehension."

      I say this because computing is the process of acting on data, using other data and procedures as the basis for action.

      We know that computing does occur in the mind, we can perceive it at some levels. We also know that computing is an enormously successful strategy for coping with problems. This i

  • by timepilot ( 116247 ) on Friday August 20, 2010 @11:09AM (#33315036)

    The major flaw I can see in his response (which I think was addressed by Myers) is

    but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

    He even underlined it. The problem is that the brain doesn't just spring into existence fully formed and THEN get exposed to the environment. The brain starts out as a few cells and is constantly exposed to the environment as it develops. I think this was a major point in Myers' response and RK just blew right past it.

    • When are we human? Abortion hinges on this: WHEN is the foetus a human being with a human brain? Is there some magic moment when the brain switches on, OR are we a bacterium that evolves rapidly into a complex life form?

      Can it be that the brain "knows" the human body and how to operate it because it "grew up" with it? We imagine a robot typically being built on a long assembly line, with the head connected only at the last moment, when the robot switches on. Could a brain instead function as a very small simple "c

    • It is even more basic than that. What a particular piece of the genome codes for depends on what structures are in the cell it is in; this starts with the very first cell of the organism. Additionally, what a particular piece of the genome codes for also depends on what cells surround the cell it is in.
    • by Jherico ( 39763 )
      I got that too. While I'm largely in the Kurzweil camp in this whole thing, he's misreading Myers' point about the environment. A strand of DNA dropped on the moon isn't ever going to form a brain, any more than dropping a paper containing source code on a computer will cause it to run the program. It needs a very specific environment, and it's easy to see that there are lots of small environmental imperfections that can fuck up brain development in a child. But even though Kurzweil doesn't address that, I
  • by divisionbyzero ( 300681 ) on Friday August 20, 2010 @12:15PM (#33315848)

    is right. Myers' criticism may be off the mark, but Kurzweil's speculation about brain design, like so much of his other speculation, is bullshit. His basic argument in the blog post is that the amount of information in the human genome constrains the amount of information (and the complexity) required to design the brain. This thesis is wrong on a bunch of levels, but let's take the most obvious. The amount of information in the genome is the amount of information that the "body" (to simplify) requires to replicate or create parts of itself. The amount of information required is relative to the machinery which is going to interpret it. There is no reason to believe we are dealing with a Turing machine here, where the number of bits required for a program to perform a function is going to be more or less consistent across languages and platforms (assuming similar complexity of the code). The machine interpreting the bits matters. So while the body may only need "50 million bytes" to create itself, we may need many, many more millions of bits to specify how to build it. Just consider the complexity of protein folding.
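A toy illustration of "the machine interpreting the bits matters": an L-system, where a few bytes of rules (the "genome") expand into a structure over a million characters long, but only given the interpreter. Describing the output directly, without the interpreter, takes vastly more bits. The axiom and rules here are the standard Lindenmayer example, chosen purely for the demo.

```python
axiom = "A"
rules = {"A": "AB", "B": "A"}   # a handful of bytes of "genome"

def expand(s, steps):
    # the "interpreter": rewrite every symbol according to the rules
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

result = expand(axiom, 30)
# the string length grows like the Fibonacci numbers, so 30 steps
# turn those few bytes of rules into well over a million characters
```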

    More dubious statements follow:

    "The goal of reverse-engineering the brain is the same as for any other biological or nonbiological system – to understand its principles of operation. We can then implement these methods using other substrates other than a biochemical system that sends messages at speeds that are a million times slower than contemporary electronics. The goal of engineering is to leverage and focus the powers of principles of operation that are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation."

    This completely begs the question of whether it can be replicated in another substrate. He just assumes that it can be done, and by doing so he already assumes a model of the brain that could be (and most likely is) wrong. The brain is clearly not a Turing machine. That's not to say it is not another kind of "computer" (for some expanded definition of computer), or that it doesn't follow mechanistic principles. Assuming the brain is like a Turing machine (which Kurzweil implicitly does) is one of the biggest obstacles to developing real AI.

    Speculation of Kurzweil kind does not belong in the "Science" category, maybe "Idle".

    • is right. Myers criticism may be off the mark but Kurzweil's speculation about brain design, like some much of his other speculation, is bullshit. His basic argument in the blog post is that the amount of information in the human genome constrains the amount of information (and the complexity) required to design the brain. This thesis is wrong on a bunch of levels but let's take the most obvious. The amount of information in the genome is the amount of information that the "body" (to simplify) requires to replicate or create parts of itself. The amount of information required is relative to the machinery which is going to interpret it. There is no reason to believe we are dealing with a Turing machine here where the amount of bits required for a program to perform a function is going to be more or less consistent across languages and platforms (assuming similar complexity of the code). The machine interpreting the bits matters. So while the body may only need "50 million bytes" to create itself we may need many, many more millions of bits to specify how to build it. Just consider the complexity of protein folding.

      Exactly so. The genome-information assumption is absurd and arbitrary. It's like assuming that because I can buy a book on Amazon by transferring 1500 bytes of information to Amazon's website, I can thus recreate that book inside a simulation using only 1500 bytes of code. In both this case and the issue of brain complexity, the mechanism for transforming the initial information into the finished product is far more complex than the "input data."

  • by KingAlanI ( 1270538 ) on Friday August 20, 2010 @12:23PM (#33315976) Homepage Journal

    Kurzweil ridiculously optimistic, Myers ridiculously cynical?

  • Our progress towards "reverse engineering" the brain may actually be SLOWING DOWN, not accelerating. Despite the wishes and dreams of computer scientists, animal rights adv. and folks like Kurzweil, the real nitty-gritty of "figuring out the brain" comes primarily from painstaking experiments on the anatomy and physiology of the brain. The primary funder of this research in the US is the NIH, and funding has been stagnant, if not decreasing, in real dollars. Consequently, fewer smart students are entering the

    • Re: (Score:3, Informative)

      by ceoyoyo ( 59147 )

      As someone who actually does neuroscience research, the tools and techniques available today were almost undreamed of a couple of decades ago. Nothing is slowing down. But more money is always greatly appreciated, of course.

  • Kurzweil is right (Score:4, Informative)

    by ShooterNeo ( 555040 ) on Friday August 20, 2010 @12:36PM (#33316188)

    Kurzweil is absolutely correct. His best argument is not the complexity of the genome, but focusing on the actual functional structures in the brain. A cortex composed of a billion repeating units is something we CAN feasibly simulate. Already, we have massive systems that run an algorithm spread across billions of separate instances. (google.com is one)

    An "algorithm" could also model the behavior of a few neurons working in circuit.

    Also, keep in mind that most of the complexity of the brain and body is completely unrelated to the task of thinking. Much of the genome codes for molecular machine parts needed to maintain and grow the hardware. There are all kinds of defense, circulatory, and support systems that we won't have to worry about when designing artificial minds.

    And finally, when you consider the changes made to the brain by the environment: that doesn't make the problem harder. Once you have a self-organizing neural system that works like the human brain but a million times faster, you expose that system to our environment and train it up just like we do with humans. Sure, it might take a few years for such a system to reach super-intelligence, but if your fundamental design was right, then this would eventually happen.
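For what "an algorithm modeling a few neurons working in circuit" might look like, here's a minimal sketch: three leaky integrate-and-fire neurons wired in a ring. All the constants (weights, threshold, leak, drive) are illustrative, not biologically calibrated.

```python
N = 3
weights = [[0.0, 1.2, 0.0],      # neuron 0 excites neuron 1
           [0.0, 0.0, 1.2],      # neuron 1 excites neuron 2
           [1.2, 0.0, 0.0]]      # neuron 2 excites neuron 0 (closes the ring)
v = [0.0] * N                    # membrane potentials
threshold, leak = 1.0, 0.9
drive = [0.3, 0.0, 0.0]          # constant external input to neuron 0 only
spikes = []                      # (time, neuron) spike events

for t in range(50):
    fired = [i for i in range(N) if v[i] >= threshold]
    spikes.extend((t, i) for i in fired)
    for i in fired:
        v[i] = 0.0               # reset after a spike
    for j in range(N):
        syn = sum(weights[i][j] for i in fired)   # synaptic input this step
        v[j] = leak * v[j] + drive[j] + syn       # leaky integration
```

Driving neuron 0 makes a spike circulate around the ring indefinitely, a crude version of the sustained circuit activity the comment describes.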

    • Re: (Score:3, Informative)

      Kurzweil is absolutely correct. His best argument is not the complexity of the genome, but focusing on the actual functional structures in the brain. A cortex composed of a billion repeating units is something we CAN feasibly simulate. Already, we have massive systems that run an algorithm spread across billions of separate instances. (google.com is one)

      I would urge you to read the following slashdot post: http://science.slashdot.org/comments.pl?sid=1757102&cid=33278462 [slashdot.org] The point of the post is that we are unable to model the neural activity of a worm with 302 neurons, and this after an extremely large amount of work. The cortex is not 'composed of a billion repeating units'. It is composed of 100 billion non-repeating units, each with thousands of connections to other non-repeating units, and each of the non-repeating units keeps changing both

  • by GrantRobertson ( 973370 ) on Friday August 20, 2010 @01:22PM (#33316804) Homepage Journal
    Myers' primary complaint was that Kurzweil used the number of genes in the genome, and how many bits would be required to store that data, as a predictor of how long it will take to completely understand the complexity of the human mind. Myers' post lays out a glimpse of the additional complexity involved and rightly points out the fallacy of making such a grand prediction based on so little information and understanding. Of course, Kurzweil's entire career and fame now depend on people continuing to fall for his dramatic generalizations and overreaching predictions that "Something Big" is right around the corner. I have watched Kurzweil talk, and sometimes it seems as if he has a messianic complex.
  • He should go back to what he does well - inventing interesting and useful machines. His prickliness toward those who disagree (often with good reason) with his worldview has done more to harm his reputation than any of his critics' corrections. If you can't take criticism, you shouldn't be a futurist. He won't be the first whose hubris lays him low. Get back to the lab while you still can, Ray...

  • Sceptics are adept at making really quite fetching mincemeat sculptures of religion, alternative medicine and the new age, but we need some serious attention paid to the transhumanist/singularist/cryonicist belief cluster. Because these are smart people, they are likely our friends, they share a lot of our notions and they are proving that the main use apes with delusions of grandeur like ourselves put intelligence to is being stupid with far greater efficiency.

    Obligatory RationalWiki plug: Cryonics [rationalwiki.org]. I was actually neutral-to-positive on the subject until a friend started looking seriously into spending $120k on freezing his head and I started looking seriously into what he was getting into. And goddamn, it's woo all the way down. Woo by people who are ridiculously smarter than you or me and use it to be dumb. How do you fight that sort of woo? Piece by piece, of course. So I have to learn the bollocks on its own terms to take it down (at which point you see goalpost-moving, reversal of burden of proof, etc., all the things apes with delusions of grandeur do so well). And it's just AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

    tl;dr: Singularitarians talk as much utter bollocks as creationists, climate change deniers, New Age hippies and the tobacco industry. There needs to be more analysis and dissection of said bollocks.

  • Why would we even want to simulate a human mind? To spare ourselves trouble of thinking? While we're at it, why don't we build a bunch of sex robots to save us the trouble of having sex. Then I guess we'll sit in front of the TV for the rest of eternity. Sounds like a blast.
