Ray Kurzweil Does Not Understand the Brain
jamie writes "There he goes again, making up nonsense and making ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced..."
ahh, the "singularity"... (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Re:ahh, the "singularity"... (Score:5, Funny)
And of course, John Woo. Let's hear it for two-fisting, slow motion and doves!
Re: (Score:3, Funny)
Let's hear it for two-fisting
yeah, let's...hear it.
Re: (Score:3, Interesting)
You're conflating things that are entirely made up and claimed to be fact, predictions based on certain observations (singularity), and things that are known to be possible but that we don't know how to pull off artificially yet (intelligence). These three categories are very different. PZ actually should be ashamed for being so lazy as to compare Kurzweil, particularly in this instance, to Chopra.
Except he didn't (Score:3, Informative)
Kurzweil didn't make that ridiculous claim in the first place, despite Myers' third-hand assumptions.
It was just an aside pointing out that the brain's overwhelming complexity all stems from a few million bytes' worth of DNA, implying a significant level of replicated structure, and certainly not a suggestion that we could derive a whole working brain from it.
Re:ahh, the "singularity"... (Score:4, Interesting)
religious woo
Isn't that a tautology?
(Incidentally, I'm really curious whether this comment will end up at -1, Troll or +5, Interesting now.)
Re: (Score:3, Funny)
You could get really lucky and end up +5 Troll. I've only managed +1 Troll myself, I need practice. =p
Re:ahh, the "singularity"... (Score:4, Insightful)
There's also a "skeptic woo". It means dismissing things you know nothing about because they involve things you don't understand.
It never ceases to amaze me how so many "skeptics" have decided that they've seen it all and know it all. They're the mechanical engineer who has decided that their expertise also qualifies them as an expert in quantum mechanics. They're the chemist who has decided to write the "definitive" work on physics that's going to refudiate Einstein.
It's very easy to tell someone serious who can discern the difference between scientific claims and hokum from the "professional skeptic" who dismisses anything they don't understand as phony. It's the corollary to the saying about how "technology that is sufficiently advanced is indistinguishable from magic". Basically, it says that "anything that I don't understand must be magic" and it's intellectually lazy. Yes, I'm saying that many "skeptics" who tout their intellectual rigor are actually intellectually lazy.
Here's how I tell the difference between a serious skeptic and a "pop" skeptic: I ask them if acupuncture is "woo". One question, that's all. The question works just as well with tai chi chuan.
Comment removed (Score:5, Insightful)
Confounding your Criteria: (Score:3, Insightful)
Here's how I tell the difference between a serious skeptic and a "pop" skeptic: I ask them if acupuncture is "woo". One question, that's all. The question works just as well with tai chi chuan.
And what if this were my answer: a vast majority of the claims relating to acupuncture are woo, though there are some areas that demand further research. I would use chiropractic as a test, personally, since 90% of the claims, and supposed reasons, are pure, unadulterated woo, but 10% of it is actually helpful (if if
Re: (Score:3, Informative)
Actually, in human trials spanning hundreds of years and hundreds of thousands (if not millions) of subjects, it's been shown to be effective.
If that's not science, I don't know what is. The mechanism of acupuncture is well understood by the practitioners of it, but the terms in which that mechanism is described are different from the terms that western medicine uses. Further, acupuncture has been proven effective enough
Re: (Score:3, Insightful)
Re:ahh, the "singularity"... (Score:5, Funny)
Myself, I think that both the singularity and the rapture have already happened. You didn't translate to the other realm. Get over it.
Re: (Score:3, Interesting)
There is evidence to support the theory of the technological singularity. There is no evidence to support the idea of "the rapture." Your comparison is unfair.
No one can deny that technology is advancing. It is hard to argue against the claim that the rate of advancement is accelerating. Yesterday's intractable problems are today's hobby projects. The idea of the Singularity is simply that what is possible according to physics will become practical as our technology progresses.
Feel free to argue over the tim
Re:ahh, the "singularity"... (Score:4, Interesting)
I don't know where you got your definition of 'the Singularity', but I'd bet that the majority of slashdot readers would disagree with you. I expect most of them have the definition of the Singularity as the time when an AI capable of building an AI superior to itself exists, and begins the freefall towards an AI that is operating at the maximum capability that the universe will allow.
http://en.wikipedia.org/wiki/Technological_singularity [wikipedia.org]
And of course the singularity folks typically conveniently ignore the possibility that we are already close to the limit on intelligence density with the human brain, or that the problem could become a steep exponential more difficult, etc.
Re:ahh, the "singularity"... (Score:4, Interesting)
From your wikipedia link:
The idea is more vague than your statement about AI writing AI; you indicate only one possible definition/manifestation of the concept.
Re: (Score:3, Informative)
It's one of about four or five conventional mechanisms. Generally each of the versions lead into the others, however, so there's no real conflict between them.
E.g., neural interfaces connected via a development of the Internet could yield an "overmind" capable of addressing problems that cannot now be addressed.
Or, those same neural interfaces connected to a computer via an advanced programming interface could enable the development of programs not currently possible by using feedback to stabilize one's th
Pre Computer Science 101 (Score:3, Insightful)
Entering college we get students whose goals in life are the following.
Make a True AI/Mimic a Human Brain - If they're good, they will end up getting a PhD and becoming a computer science professor, perhaps doing some cool research on a limited area of AI.
Make an Operating System which can run any code for any platform faster and more securely than the existing OSes - If they are good, they may work for a software company doing some lower-level programming
Make the ultimate game which will make them millions nay bi
Re:ahh, the "singularity"... (Score:5, Insightful)
PZ Myers wasn't there; he based his whole critique on gizmodo's writeup.
Speaking as someone who was there and heard Kurzweil's full speech, I can confidently say that PZ Myers does not understand Ray Kurzweil.
First off, a significant factual mistake: Kurzweil -clearly- never said we'd reverse engineer the brain by 2020. He argued against exactly that (his prediction was late 2020s, shading into 2030-- perhaps also unbelievable, but if you're going to critique someone, why not get the facts right?). Sure, gizmodo's writeup was entitled "Reverse-Engineering of Human Brain Likely by 2020". It'd be an understandable attribution mistake for say, an undergraduate.
Second, Myers is critiquing Kurzweil's ontological position based on a throwaway writeup dashed off by gizmodo. (Really, Myers? And you wonder why you're a magnet for shitstorms...)
Third, Myers' criticism is essentially that the brain is an emergent system, and we'll have to understand all the protein-protein interactions, functional attributes of proteins, etc. in order to actually model the brain.
This third assumption is arguable, but Kurzweil wasn't actually arguing against this. All Kurzweil meant with his comment about bytes and the genome was there's an interesting information-theoretic view of how much initial data gives rise to the wonderful complexity of the brain.
I had a lot more respect for Myers before I read this rant.
Re:ahh, the "singularity"... (Score:4, Insightful)
as opposed to those who are satisfied with the theory that life evolved from inorganic chemical compounds, totally by chance, with a series of infinitely improbable events occurring in the right sequence over and over and over again.
What a lovely caricature you've constructed there. Secondly, just like most crappy caricatures of biological evolution you also seem to conveniently gloss over the major role that natural selection plays which is not random.
Re:ahh, the "singularity"... (Score:5, Funny)
as opposed to those who are satisfied with the theory that life evolved from inorganic chemical compounds, totally by chance, with a series of infinitely improbable events occurring in the right sequence over and over and over again.
What a lovely caricature you've constructed there. Secondly, just like most crappy caricatures of biological evolution you also seem to conveniently gloss over the major role that natural selection plays which is not random.
Oh, yeah...explain the platypus, then.
Re: (Score:3, Funny)
Even evolution has a sense of humor.
Re:ahh, the "singularity"... (Score:5, Informative)
why does the platypus always need explaining?
it is the sole remaining species in the Genus Ornithorhynchus and the Family Ornithorhynchidae. along with the echidnas (do they need explaining, too?) they make up the Order Monotremata, the egg-laying, web-footed, electrolocating mammals. they evolved, just like the rest of us.
if there had only been one remaining species of marsupial, would they need explaining?
The Platypus Question (Score:3, Informative)
because it's an odd-looking creature that seemingly has 'random' bits and pieces from various other animals... the full question being "what specific selections in 'natural selection' led to this particular evolutionary path?"
So the question isn't to 'explain' the platypus.. that would be like asking to explain the number 5 or explain the color red.. the question itself doesn't make any sense without being more specific.
It's also not a question of 'why does the p
Re:ahh, the "singularity"... (Score:4, Funny)
And forty-three species of parrots! Nipples for men! Slugs! They can't hear. They can't speak. They can't operate machinery. Are we not in the hands of a lunatic?
(Award yourself two points for not having to use Google.)
Re: (Score:3, Insightful)
Oh and as a side note, the current state of the field in biological evolution has long since moved past the works of Darwin. Your remark is about as disingenuous as trying to use the failings of Newton's classical mechanics to make criticisms of the current state of quantum mechanics.
Infinite complexity? (Score:3, Interesting)
What do you mean "infinite"? The human brain is composed of one hundred billion or so neurons. Looks like it's pretty much finite to me. I have ten times as many bytes of information in my hard disk.
Re:Infinite complexity? (Score:4, Insightful)
You are assuming 1 neuron = 1 "byte" of data.
It's much more complex than that. We are barely starting to understand it now.
I agree with you, though, if you are implying that the brain is a physical entity with a physical size and physical limits. We just don't quite know yet what those limits are.
Re:Infinite complexity? (Score:4, Insightful)
The human brain is composed of one hundred billion or so neurons. Looks like it's pretty much finite to me. I have ten times as many bytes of information in my hard disk.
Yet while you were typing (presumably not saving anything other than in RAM), was the content of your hard disk changing? (Yes, perhaps a bit, but play along for this example.)
The neurons are continuously 'remapping' in your brain. Even while some may be static, others are making new connections in ways which we currently can't predict, or really understand why one connected to 'this' neuron instead of 'that' one.
Not that the brain functions in any quantum manner, but it's one of those things where, if you were to KNOW the exact mapping of neurons, the very next instant the mapping would be incorrect, and it would quickly become ever more inaccurate (100 billion or so items making new connections along multiple paths).
I suppose it would be something like trying to map the water vapor droplets in a cloud. There is a finite number of droplets there too, but predicting the shape/behavior of a cloud with any precision after only a single second would be very, very difficult.
Re:ahh, the "singularity"... (Score:4, Insightful)
After one reads a comment about the supposed wackiness of biological evolution, one has to wonder whether people are taught biology in school these days at all.
So, you believe in a planned economy, then? (Score:5, Insightful)
Obviously, by your logic, a free market economy is impossible. Our economy is too complex to have evolved on its own. In fact, it is far more complex, with far more different parts, than a human being. It must have had a creator. If most any part of the economy, like the steel industry, say, were removed, the economy would not function. How did the economy function before there was a steel industry? Obviously, it couldn't, and therefore we have demonstrated irreducible specificated complexification or something.
All this free market talk is obvious bullshit, and we actually DO have a centrally planned economy because it is impossible for something so complex to have evolved without a central planner.
Re:So, you believe in a planned economy, then? (Score:5, Funny)
Obviously, by your logic, a free market economy is impossible. Our economy is too complex to have evolved on its own. In fact, it is far more complex, with far more different parts, than a human being. It must have had a creator. If most any part of the economy, like the steel industry, say, were removed, the economy would not function. How did the economy function before there was a steel industry? Obviously, it couldn't, and therefore we have demonstrated irreducible specificated complexification or something.
All this free market talk is obvious bullshit, and we actually DO have a centrally planned economy because it is impossible for something so complex to have evolved without a central planner.
The Illuminati control the free market. Point Intelligent Design.
Re:ahh, the "singularity"... (Score:4, Funny)
It would be nice.. (Score:5, Informative)
Namely, that we'll be able to reverse engineer the human brain in the next 10 years.
Re: (Score:3, Funny)
Yeah, the article quotes some pretty funny statements.
Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.
You know, the program they had set up in Jurassic Park supposedly had MILLIONS of lines of code, and look how well THAT turned out.
Re:It would be nice.. (Score:5, Funny)
Re: (Score:2)
Yeah, but then Jaimie would have to read (and copy) more than the first paragraph.
To be fair, PZ obviously slept through the class in high school where they teach you that the first paragraph of a news article should act as an abstract.
Because the Article Breaks Down the Claim Fully (Score:5, Insightful)
Would be nice if the summary even hinted at what the ridiculous claim actually WAS... Namely, that we'll be able to reverse engineer the human brain in the next 10 years.
It's a little more complicated than that. You see, the article actually breaks down the logic behind that statement and points out how poor it is. Here's the initial part of Kurzweil's argument:
Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.
Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
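Kurzweil's arithmetic above can be reproduced step by step. A caveat: the bytes-per-line figure at the end is an assumption implied by his numbers, not something he states outright.

```python
# Kurzweil's back-of-envelope math, reproduced for inspection.
base_pairs = 3_000_000_000          # human genome
bits = base_pairs * 2               # 2 bits per base pair (4 possibilities)
bytes_uncompressed = bits // 8      # 750 million bytes ("about 800 million")
bytes_compressed = 50_000_000       # his claimed lossless-compression figure
brain_share = bytes_compressed // 2 # "about half of that is the brain"
lines_of_code = brain_share // 25   # assumed ~25 bytes/line -> "a million lines"

print(bytes_uncompressed, brain_share, lines_of_code)
```

Note that every step after the first two is a claim, not a calculation: the 50 MB compression figure and the "half is the brain" split are asserted, not derived.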
I have only taken high school biology, but I know that the genome doesn't magically become the brain. It goes through a very complex process: it is transcribed and translated into amino acids, which fold into proteins, which in turn make cells, which in turn make tissues, which in turn comprise the human brain. To say we fully understand this transformation is a complete and utter falsity, as demonstrated by our novice understanding of the twisted beta amyloid protein that we think leads to Alzheimer's. Which proteins a given chain of amino acids folds into is, I believe, largely an unsolved search problem (hence efforts like Folding@Home). And he claims that in ten years not only will we understand this process but we will ... reverse engineer it?
The man is insane. I've posted about this same biologist criticizing him before [slashdot.org] and it looks like P.Z. Myers just decided to take some extra time to point out how imprudent Kurzweil's statements are becoming. Kurzweil will show you tiny pieces of the puzzle that support his wild conclusions and leave you in the dark about the full picture and pieces that directly contradict his statements. This is a dangerous and deceptive practice that -- despite my respect for Kurzweil's work in other fields -- is rapidly turning me off to him and his 'singularity.' He's becoming more Colonel Kurtz than Computer Kurzweil.
Re:Because the Article Breaks Down the Claim Fully (Score:5, Insightful)
There are a few more major flaws.
The proteins/cells that make up the brain are only part of the story. The protein/cell level is roughly what a newborn can do. The rest of brain development is creating and tearing down billions of interconnections between neurons. It's those interconnections that turn the brain from a pile of goo into something interesting, and we have no understanding of how that mechanism works.
Secondly, 3 billion base pairs does not mean 6 billion bits. First, DNA is base-4, not base-2. Second, the pairs are the units of information, not 2 nucleotides that make up the pairs.
3rd, source code isn't compressed.
4th, there isn't much redundancy in a gene sequence. There is redundancy in that we have 2 copies of our genome, but that's already accounted for by the '3 billion base pairs' number. While there's a lot of 'junk' DNA, there isn't much (if any) redundant DNA.
Re:Because the Article Breaks Down the Claim Fully (Score:4, Informative)
The "3 billion base pairs are 6 billion bits" isn't because each pair has two parts; it's because each pair has four possibilities. 3 billion digits in base 4 are equivalent to 6 billion digits in base 2.
For instance, decimal 15 is "33" in base 4 and is "1111" in base 2. You could think of it as one bit for which basepair is at this point in the chain, and one bit for which orientation it's in.
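The base conversion above is easy to check directly: each base pair carries log2(4) = 2 bits, so n base-4 digits hold exactly 2n bits.

```python
from math import log2

# Each base pair has 4 possibilities (A-T, T-A, C-G, G-C) = 2 bits each.
bits_per_base_pair = log2(4)                   # 2.0

# The decimal-15 example: "33" in base 4 equals "1111" in base 2.
assert int('33', 4) == int('1111', 2) == 15

genome_pairs = 3_000_000_000
print(genome_pairs * int(bits_per_base_pair))  # 6 billion bits
```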
Re:Because the Article Breaks Down the Claim Fully (Score:4, Insightful)
There is no such thing as 'junk DNA'; I wish people would stop saying that.
Just because we don't know what it affects doesn't make it junk DNA.
Re: (Score:3, Insightful)
I forgot a few points. A few years back I went to a "singularity talk" by some people doing silicon design, trying to cram denser neural nets onto chips.
Even at the time, it struck me that by the time you've made a "human equivalent" hardware simulator in some sort of neural net, you've got a newborn. Let's assume you're "at" the singularity, with your brand new AI...
I have experience with this. I've participated in the creation of two NIs.
They can't do spit at initialization. Actually they can do 2 thi
Re: (Score:3, Informative)
Actually, I'd disagree with the "roughly what a newborn can do". By the time a baby is born, it has a non-zero number of neural connections. These are not coded for anywhere in the DNA and the exact dynamics of how they do form isn't clear to me (if it's known at all). A newborn has roughly twice the number of connections than an adult brain, according to some estimates I've seen. Some will die off as new ones form, but the net result is a die-back. There is then a massive construction phase in the brain be
Re:Because the Article Breaks Down the Claim Fully (Score:5, Insightful)
I'd probably take an intermediate point of view.
The genome of a creature, plus the cytoplasm contents of an egg, plus a complete understanding of the laws of physics should in fact be all that you need in order to fully simulate a human being. Granted, you'd need to simulate it sequentially from conception to adulthood before you get anything useful out of it, which might take more or less than the biological time required depending on the power of your simulator.
Humans are deterministic, after all - we're just a bunch of atoms and molecules. Granted, there is the effect of random quantum effects, so three simulations with the same input might not come up with the same output if this is genuinely taken into account. However, all three would be plausible outcomes if we were talking about a real person with a real brain.
The part that is being left out is the little caveat: "plus a complete understanding of the laws of physics."
Here is an illustration. A jpeg of a rendition of the Mandelbrot set might take 20k of space. A mathematical description might take well under 1kb of code. That description might even be enough to fully simulate its behavior. That description is certainly not sufficient to UNDERSTAND its behavior.
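The Mandelbrot point can be made concrete: the complete "description" below is well under 1 kB of source and fully simulates the set's membership test, yet it says nothing about why the boundary is so intricate.

```python
# Minimal Mandelbrot membership test: iterate z -> z^2 + c and see if it escapes.
def mandelbrot(c, max_iter=100):
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped: c is outside the set
    return max_iter           # never escaped: assume c is inside

# Crude ASCII rendering of the region [-2, 0.5] x [-1, 1].
for y in range(-10, 11):
    print(''.join('#' if mandelbrot(complex(x / 20, y / 10)) == 100 else ' '
                  for x in range(-40, 11)))
```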
Also, don't discount the cytoplasm. Proteins don't fold the same in buffer as they do in a cell, and simply adding non-specific protein doesn't always do the trick either. Gene regulation doesn't work without epigenetics, and epigenetics doesn't happen without regulatory proteins, and those proteins don't get there without translation from gene transcripts. DNA alone without capturing the initial state of the machine is as useful as a memory dump without the CPU status dump on a CPU with 43 million registers. The last I heard things like centrioles can't be replicated except in the presence of another centriole.
The bottom line is that there is nothing "magical" about human cells. However, to estimate their total information content at only 2GB or so is probably a gross underestimate.
Re:Because the Article Breaks Down the Claim Fully (Score:4, Interesting)
There is a major flaw in the article too: The author apparently believes that you need to simulate the proteins and the exact chemical method for interaction in order to simulate the result of the interaction. It is the result that is important, not the method. I won't say that it is an easy matter of determining how the cells in the brain interact with one another, nor will I say that the chemical interactions are entirely precise, but if there is a finite number of possible outcomes to all possible interactions between two cells in the brain, it can be simulated.
...
It is not a flaw. He was explaining what one would have to do to derive "the brain" from the genome, which was Kurzweil's contention.
One could indeed simply look at the complete brain and model it, true, but then you are looking at 10^10 neurons, each connected (not at random) to some 10,000 other neurons to produce a net of 10^14 synapses.
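Just storing a raw map at those numbers is already daunting. A rough estimate, where bytes-per-synapse is a loose assumption (a target-neuron index plus a connection weight), gives the order of magnitude:

```python
# Storage estimate for a raw human connectome map at the figures above.
neurons = 10**10
synapses_per_neuron = 10**4
synapses = neurons * synapses_per_neuron   # 10^14 synapses

bytes_per_synapse = 8                      # assumed: ~5-byte index + weight
total_bytes = synapses * bytes_per_synapse
print(total_bytes / 10**15, "petabytes")   # just to store it, before simulating
```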
To understand the challenge of modelling a system this vast and complex, consider the state of research on the model organism Caenorhabditis elegans (a tiny worm). Its nervous system has been (almost) exactly mapped: it contains 302 neurons, 6393 chemical synapses, 890 gap junctions, and 1410 neuromuscular junctions. Imagine now the difficulty of reaching this level of precision in a system 10^7 times larger.
But the good news is that with this level of neuro-mapping precision we can now completely simulate the neural network ("brain") of a tiny worm, right? Right?
Wrong. Not by a long shot. We are still struggling with characterizing the behavior of this primitive neural net, and making efforts at simulating some aspects of that behavior. The 302 neuron "brain" is far beyond our abilities to simulate at present.
Re:Because the Article Breaks Down the Claim Fully (Score:4, Insightful)
it looks like P.Z. Myers just decided to take some extra time to point out how imprudent Kurzweil's statements are becoming. Kurzweil will show you tiny pieces of the puzzle that support his wild conclusions and leave you in the dark about the full picture and pieces that directly contradict his statements.
He staked his reputation on a timeline that everyone but him knew was impossible and now he tries to find little pieces of evidence to support the idea that we are still on that timeline. As reality and his predictions diverge further from each other his claims and evidence become weaker, until the day he predicted the singularity would happen passes by and he is forced to revise his proph-... er, prediction. Even assuming his basic premise is correct (an idea which I feel there isn't enough evidence to say either way) it should be obvious by now that his time scales are way, way off, probably by at least an order of magnitude. He'd better serve himself and his causes by admitting his mistake and reevaluating his predictions.
Re:Because the Article Breaks Down the Claim Fully (Score:5, Insightful)
Kurzweil hasn't just staked his reputation on this barmy timeline, but his life too. I mean, seriously, the guy is popping vitamin pills like crazy thinking that if he can just extend his life a decade or so, the nerd rapture will finally happen and he'll get to be absorbed into the giant galactic Googlebrain.
But, no, this isn't religious enthusiasm gone too far. No, this is SCIENCE. I mean, the man has graphs, so it has to be science, right?
Re:Because the Article Breaks Down the Claim Fully (Score:5, Informative)
Moore's law, the basis of his argument that technology is evolving exponentially, is pretty much on schedule. We are now in the petaflop (10^15) range, with the transistor count following the predicted exponential [readwriteweb.com].
Cost of DNA sequencing, another of his examples, is today at 0.000008(USD) per base pair [scienceblogs.com]. Fits the curve.
RAM cost is now at 28000kB/USD, also fitting the curve.
GDP per capita is also within schedule [www.bit.ly] (note that the scale is logarithmic), even with the wealth transfer east (which is bound to be limited to ten more years, give or take).
And, lastly, the core of all attacks on Kurzweil: life expectancy [www.bit.ly] is on track too.
You may still believe these exponentials will hit some kind of ceiling somehow. That might be true. The numbers, however, support Kurzweil's theory. And judging from the number of times the death of Moore's law has been announced in the last twenty years, I'd put my money on Kurzweil.
Re:Because the Article Breaks Down the Claim Fully (Score:5, Informative)
Test your theory against US data. The infant mortality rate in '80 was 13 per thousand births. In '06 it was 6 per thousand births. In a set of a thousand people, you had seven data points that lived zero years (worst-case scenario) and now show up as living 80 years (again, worst-case scenario). The effect of better infant mortality rates comes down to 80*7/1000 years = 7 months (average scenarios produce 5 months). Meanwhile, in the same period, life expectancy went from 74 to 78 years.
Better infant mortality rates explain 14% of the increase in life expectancy. Where does the rest come from? Better car safety? Perhaps, but certainly with lower effect than infant mortality. The rest? Medical technology.
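The arithmetic above is easy to verify: moving 7 deaths per 1000 from age 0 to age ~80 shifts the mean lifespan by 80*7/1000 years.

```python
# Check the infant-mortality contribution to life expectancy, worst case.
deaths_avoided_per_1000 = 13 - 6               # 1980 vs 2006 US infant mortality
years_gained = 80 * deaths_avoided_per_1000 / 1000
print(round(years_gained * 12, 1), "months")   # ~7 months, as claimed

total_gain = 78 - 74                           # life expectancy change, same period
share = years_gained / total_gain
print(round(share * 100), "% of the increase") # ~14%, as claimed
```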
Re: (Score:3, Informative)
Life expectancy continues to go up only because infant mortality goes down. Among those who reach adulthood, life expectancy has barely moved in the past 50 years. Among those who reach elderly age (70+) life expectancy has been nearly constant for all of human history.
According to this site [infoplease.com], for white males, it's gone up from 10 years life expectancy at age 70 in 1949-1951 to 13 years life expectancy in 2004. In absolute terms, that's not a big difference, but it's a 30% increase in life expectancy. The improvement in life expectancy at age 10 for the same demographic group has gone up from 59 more years to 66 years, an improvement of 7 years. Either figure indicates your assertion is incorrect since the overall life expectancy from birth has gone up from 66 to almost 76
A half million lines? I can do it in 1 (Score:3, Insightful)
The code is simple.
Simulate_Brain();
Now just find the compiler with the right set of libraries that can compile it. And yes, I am NOT just being anal. Half a million lines of code is MEANINGLESS. Quickly, how many lines do you need for a "Hello World" program? In assembly? C? Java? PHP?
If one day someone designs a cpu with a built in Hello World function, then it would require what? 2 instructions in assembly? Meanwhile the java guy will be pounding out yet another page of code.
Re: (Score:3, Informative)
Just for reference, the GCC compiler is pushing 1.5 million lines of code. Windows XP supposedly had 40 million lines of code.
Kurzweil is literally saying that the human brain is 2/3 as complex as a C compiler, and 1/40th the complexity of Windows XP.
Complete lunacy.
Re: (Score:3, Informative)
If you can program with any programming language without understanding every sublayer beneath it, I don't see why you couldn't do the same with DNA without understanding all the physics and chemistry that makes it work.
I'm going to go out on a limb here and guess that you're one of the computer scientists with little or no background in neurobiology that Kurzweil has convinced we will magically live forever starting ... now! Listen, unless you're writing science fiction, you should probably stop drawing analogies between two unrelated fields and start reading about our limitations in understanding the human brain.
Besides, if you read Kurzweil's statement, he's saying we'll reverse engineer the various inputs that can be given to the brain, not the brain in its entirety.
We can't even do a brain transplant [wikipedia.org] and you're telling me we just need to reverse engineer the 'various input
Re:It would be nice.. (Score:5, Insightful)
Re:It would be nice.. (Score:4, Insightful)
But, but, but... (Score:3, Insightful)
Comment removed (Score:5, Interesting)
10 years?! (Score:4, Funny)
His latest claim is that we'll be able to reverse engineer the human brain within a decade
Amateur. I could put something together to simulate the human brain in about 8 months.
(Plus another 3 minutes at the start)
Re:10 years?! (Score:5, Funny)
More like half an hour. It doesn't take Jello all that long to set up.
(Just finished an hour drive in Seattle - my current impression of the human brain isn't particularly complimentary.)
A biologist doesn't understand programming (Score:3, Insightful)
FTFA: The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins.
Likewise, the end result of a computer is much, much more than simply the sum of the commands that encode a CPU's instruction set.
Re:A biologist doesn't understand programming (Score:4, Insightful)
No not really.
A computer is a fixed system. If you tell it to do A (via software), you know you will get B, based upon knowledge of how the circuits are hardwired. The same cannot be said of the human brain, because it has the ability to change its hardware (by growing new connections between neurons).
Re:A biologist doesn't understand programming (Score:4, Interesting)
Obviously you've never heard of FPGAs: http://en.wikipedia.org/wiki/Field-programmable_gate_array [wikipedia.org] While you can't add new connections in the strictest sense, you could conceivably create a chip with a whole bunch of generic unused hardware, and in the rest of the hardware program an algorithm that allows new connections to be made with that raw material.
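The idea of a fixed pool of spare hardware that gets recruited into new connections at runtime can be sketched in software; everything here (class and method names, the relay scheme) is illustrative, not any real FPGA toolchain API.

```python
# Toy model: a net with active nodes plus uncommitted "raw material" nodes
# that can be recruited to form new connections on demand.
class RewirableNet:
    def __init__(self, used, spare):
        self.edges = {n: set() for n in used}   # active adjacency list
        self.spare = list(spare)                # uncommitted spare nodes

    def grow_connection(self, a, b):
        """Route a new connection a -> b through a freshly recruited spare node."""
        if not self.spare:
            raise RuntimeError("out of raw material")
        relay = self.spare.pop()
        self.edges[relay] = {b}                 # relay forwards to the target
        self.edges[a].add(relay)                # source now reaches the relay
        return relay

net = RewirableNet(used=["n1", "n2"], spare=["s1", "s2"])
relay = net.grow_connection("n1", "n2")
print(relay, net.edges["n1"])
```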
Re:A biologist doesn't understand programming (Score:4, Insightful)
Bad compsci (Score:5, Insightful)
Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.
Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
Idiot. The design of the brain is encoded in the genome in the same way that the design of a 4KiB program is encoded in its load module: useful for running the program on its original hardware.
But then you have architectural issues. That 4KiB of information does not run unless it's supported by a complex operating system, which itself is supported by complex logic in an ISA and a memory management architecture backing it up. And all that is implemented on a specific design in a specific physics model.
Translating that program to SPARC takes work, and it comes out roughly the same size. Translating that program to a progression of chemical reactions produces something vastly different, especially since you need new middleware (a chemical environment) running on top of different physics (chemistry).
Translating a physical architectural design from chemistry to computer logic on top of a given ISA is the same problem. You now have odd issues that are messy, and then the program running on the brain needs to be built again. That program is even more complex and less known.
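For reference, the arithmetic in the quoted passage is at least internally consistent, whatever one thinks of the conclusion. A quick sketch (the bytes-per-line figure is my own assumption, chosen to reproduce his million-line count):

```python
# Reproducing Kurzweil's back-of-the-envelope arithmetic from the quote
# above. The bytes-per-line figure is an assumption, not from the quote.
base_pairs = 3_000_000_000          # human genome
bits = base_pairs * 2               # 2 bits per base pair (4 symbols)
raw_bytes = bits // 8               # 750 MB -- he rounds to "about 800 million"
compressed = 50_000_000             # his claimed lossless-compressed size
brain_share = compressed // 2       # "about half of that is the brain"
bytes_per_line = 25                 # assumed average line length
lines_of_code = brain_share // bytes_per_line

print(raw_bytes)      # 750000000
print(brain_share)    # 25000000
print(lines_of_code)  # 1000000
```

The numbers multiply out; the objection in this comment is that they measure the wrong thing.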
Whaaa... (Score:2, Insightful)
I'm not even sure what to say about this statement
Man (Score:3, Funny)
I wish I could get a job as a futurist....think about it:
"What do you think is going to happen in the future???"
"Um...dogs will bring soda to you when you whistle a Cradle of Filth song?"
"OMFG THATZ BRILLIANT. HERE R MONIES, PLZ HAS MAH BABBIES!"
Laughable (Score:4, Insightful)
Let's see. On another recent article it was stated that the average car has several million lines of code running in it. I haven't come across a sentient Prius yet.
And there's that pesky parallel processing the brain does. I don't think that a rack full of Nvidia Tesla cards can approach the average two year old's parallel processing capability.
I agree, Kurzweil is smoking something and not sharing.
Re: (Score:3, Funny)
Of course not. No self-respecting Autobot would be caught dead disguised as a Prius.
50 Megabytes is WAY too much ! (Score:5, Funny)
Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.
Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
Dude, the equations of quantum mechanics can be written on one page. General Relativity can be written on a second page. What more do you need ? Clearly, a few hundred lines of code (and a few do loops) should be enough to simulate the entire universe, brains and all.
Glad we cleared that up. All you physicists and astronomers can go home now and work on your resumes.
Re: (Score:3, Informative)
PZ Myers does not understand computers ... (Score:4, Interesting)
To simplify it so a computer-science-ignorant biologist with a tendency to inane rants can possibly get it: you don't need to simulate electrons in a semi-conductive material at specific temperatures in order to build a complete working emulator for an old computer.
Re:PZ Myers does not understand computers ... (Score:5, Insightful)
you don't need to simulate electrons in a semi-conductive material at specific temperatures in order to build a complete working emulator for an old computer.
Maybe not, but you do need to understand the fundamental laws and rules that govern the systems of a computer. The fellow who wrote this article is asserting that we actually don't know the fundamental laws and rules that govern the systems of the human brain, or, at least, that Kurzweil doesn't. In other words, Kurzweil oversimplifies the problem by stating that, since the brain is organically grown from a base set of information, it should be trivial to emulate a brain once we can emulate that base set of information.

Myers is asserting that the fundamental laws that govern the functions of the human brain are far more complex and tend to derive from things other than that base set of information. The human brain appears to function under a set of laws and rules different from the set that Kurzweil assumes it does. That is the fallacy that Myers is pointing out in Kurzweil's logic.

Myers may not understand computers very well, but he certainly does seem to have some insight into what rules and laws (biochemistry, protein folding, etc.) at least partially govern the human brain. Similarly, anyone writing a computer emulator needs an understanding of the fundamental laws and rules that govern the computer (binary logic, architectural pathways, memory addresses, etc.). Myers goes on to say that our understanding of the fundamental laws of the human brain is incomplete at best and downright ignorant at worst. That's how he derives his argument.
Re: (Score:3, Insightful)
That's not really how Kurzweil is arguing. He's looking at the genome, then saying you can build a working brain from that info alone. It may be theoretically possible, but it's so difficult that we shouldn't even bother trying. It's akin to trying to understand the behavior of a volume of a gas by looking at how just two molecules bounce off each other; it looks very straightforward, but you're actually missing some hugely complicated behavior going on.
A prediction of my own: if the brain is ever simulated
Re:PZ Myers does not understand computers ... (Score:4, Insightful)
you don't need to simulate electrons in a semi-conductive material at specific temperatures in order to build a complete working emulator for an old computer
You do, if you have no idea what the higher levels are all about. Our knowledge of how the brain works (hell, even of the biochemistry of a single cell) is so poor that we cannot yet discard "lower details" if we want to get a working system. So finding upper bounds by looking at the lower level of the picture is not such a bad idea.
Myers does not raise any objections to code or data "quantity" -- the big hurdle is that a vital part of the system is outside the DNA, and we are only beginning to explore it. Read up on epigenetics [wikipedia.org].
No one understands The Brain... (Score:4, Funny)
...except, maybe, Pinky.
R'ing TFA? Heresy! (Score:5, Funny)
Not ridiculous at all (Score:3, Insightful)
I'm not sure what it is about his claims that is supposed to be so ludicrous. For example, a million lines of code seems at least plausible, as long as we bear in mind the following:
1. We're not trying to mimic the brain at the protein level, but rather at the broader, inter-neuron level (plus whatever complex intra-neuron behaviour we discover).
2. The million lines of code don't need to encompass the capacity of the brain, just its general neural architecture and adaptation rules - there will no doubt be many gigabytes (terabytes?) of working memory, which would actually store the neural connections and whatever parameters they may have.
To be honest, the authors of this article seem rather too cocksure in dismissing all this. Even the apparent agreement of Terry Sejnowski (co-inventor of the Boltzmann machine) doesn't give them pause. I'm not that familiar with Kurzweil's predictions, but this one seems fairly reasonable to me.
There is a Google tech talk by Geoff Hinton on restricted Boltzmann machines (a sort of stochastic neural network) that's well worth a watch, for those who are interested. They are considered biologically plausible, and he mostly applies them to machine vision tasks.
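For the curious, here is a minimal sketch of a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). It illustrates the model class Hinton's talk covers, not his implementation; the layer sizes, learning rate, and toy data are all made up for the demo:

```python
# Minimal RBM with one step of contrastive divergence (CD-1).
# A toy sketch: sizes, learning rate, and data are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0):
        """One CD-1 update on a batch of binary vectors."""
        ph0, h0 = self.sample_h(v0)      # positive phase
        pv1, v1 = self.sample_v(h0)      # reconstruction
        ph1, _ = self.sample_h(v1)       # negative phase
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))   # reconstruction error

# Train on a trivial repeated pattern; reconstruction error should fall.
data = np.tile([1, 0, 1, 0, 1, 0], (20, 1)).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
errors = [rbm.cd1(data) for _ in range(200)]
print(errors[0], errors[-1])  # error should shrink as it trains
```

Nothing here is biologically faithful, but it shows how small the core learning rule is relative to the learned weights, which is exactly the parent's point about code versus working memory.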
Kurzweil documentary on The Singularity (Score:5, Interesting)
Ray was interesting in person during a film-makers' Q&A. He reminded me of Woody Allen, but more confident and intelligent. He graduated from M.I.T. about a decade before I did. I personally believe in the Singularity, but more likely in centuries rather than decades.
Sejnowski 4 academic disciplines (Score:3, Insightful)
There are currently four academic disciplines working on the reverse engineering of a human mind: linguistics, psychology, computer science, and philosophy. You can count neurology too if you want to start talking about the *actual* brain. Several tens of thousands of individuals are directly and indirectly working on this problem. We've come a long way in the last few decades. Unfortunately, we have a pretty long way to go. For the moment we lack a model which accurately describes how mental processes work. There isn't even a consensus on how the processing is done.
"modeling the brain" is not even really the hard part. One only needs sufficient computing power to model what they *think* is going on logically (there isn't even a consensus here). The trick is modeling the mind. We are very, very far away from that.
A fun number to throw around is how many synaptic connections are present in the brain. Synaptic connections are widely believed to be the best indicator of overall memory storage and processing speed (to an extent). There are about 10^15 (peta-, I believe) synaptic connections in a normal human brain. A significant number of these are active at any given time. In other words, the brain is performing a HUGE number of "calculations" simultaneously at all times. Modeling just the hardware is obviously not easy... modeling the software is currently not possible. I doubt it will be in the next 50 years.
For a good read on what many cognitive scientists think is going on, though it is clearly not an accurate model but rather a best guess, go read up on "connectionism".
A mild defense of Kurzweil (Score:4, Insightful)
First off, Ray Kurzweil doesn't want to die. That's a preoccupation that a lot of people have (including one of his critics, Rudy Rucker, who has written whole books hoping to find immortality in the fourth dimension), and it leads them to some pretty fantastic conjectures from time to time. It's not necessarily a bad thing, as long as you keep the proverbial grain of salt handy. Modern chemistry and its not insignificant contributions to our vastly expanded lifespans arose from the alchemical search for immortality. Alchemy was bullshit, of course, but the incidental discoveries of alchemists on the way to their illusory elixir of life paved the way for the real science to follow and build upon after it had ejected the dross.
And secondly, I don't think it's entirely implausible that we can eventually design hardware and software that will match and exceed the performance of the human brain. Our brains, after all, are the end product of evolution, and like pretty much every other part of our bodies, an accumulation of kludges that were just good enough to get passed to the next generation (or not bad enough not to get passed on). It's also implemented using hardware so unreliable that it wouldn't function at all if it wasn't constantly repairing itself, and even then, no matter how well you treat it, it irreparably craps out after about 75 years. And it still doesn't work all that well -- ever seen the long chain of train wrecks that is the history of human civilization? We might be able to engineer something that works a lot better. Granted, it's not going to be by deriving simulated human brains from a copy of the human genome. More likely, it will be very much unlike the way biological brains work.
The fundamental problem, which I think smart and optimistic guys like Ray Kurzweil are particularly prone to forgetting, is that it may not be possible for a mind to understand a mind of equal complexity, i.e., humans may lack the necessary intelligence to duplicate their own intelligence. That will force us back on genetic algorithms to evolve AI, leading to an end product that will likely be just as badly undesigned as natural brains. Worse, it will do little to advance our understanding of how minds work: if we can't reverse-engineer our own brains, we probably won't be able to reverse-engineer even more sophisticated artificial minds, nor will they be able to reverse-engineer themselves. (We can hope that they could reverse-engineer us, and then explain it to us in terms we can understand, if such terms exist, but that takes us so far out on a conjectural limb that I can see Ray Kurzweil from here.)
Anyway, there's room for bold conjectures. That doesn't mean that when Kurzweil completely fails to understand the way molecular biology works that we shouldn't call bullshit on it, but we shouldn't be entirely hostile to futurist speculation. By nature, most of it will be bullshit, but a lot of progress in unexpected areas has been made in the pursuit of mirages (alchemy leading to chemistry, astrology leading to astronomy), and explaining (or discovering) why a conjecture is bullshit is a beneficial exercise in and of itself.
Oh come on how. (Score:3, Insightful)
Is this a slashdot story, or someone's twitter page? At least some kind of objective summary would be nice, other than "Lul Kurzweil, here, a link, he stoopid!"
But before I just hit preview and go, let's take a look at the article itself. Aaand, holy crap, the post is verbatim from the article.
Kurzweil's effective claim is "There's only so much data in the DNA. The brain is about 50 million bytes. If we can reverse engineer the process used to turn those 50 million bytes into a brain, we can then reverse engineer the brain."
Seems logical - and even though the endpoint might not be "brain on a chip" it might be "oh, there's a flaw in the DNA here that's causing the hypothalamus to be malformed, let's start checking for that and maybe fixing it in the womb." There are many, many scientists trying to puzzle out this "source code" for that very reason. It's a perfectly valid point of study.
Kurzweil is a futurist. His scientific area of study is not "You should do X, Y, and Z to get to points A, B, and C." His area of study is "Scientists are working on X, which may lead someday to Z, and might bring us technology C." There's an important difference there, which I always find amusing when scientists and the anti-singularitarians start hooting, "he forgot Y, A and B!"
His math all points to Technology C and beyond being really amazing [wikipedia.org], but that's beside the point. His area of study is not "every technology field ever", but rather "this is where things are trending". People mix the two up, sometimes intentionally, and hoot hoot hoot, Y A B.
Anyway. Back to the article. The rebuttal in the article is "We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently."
In other words, It's too complex to do. It's maaagiiic. (Feel free to insert hand wiggling here.)
He forgot Y, A and B!
See, the brain might have a source code, one that's remarkably small and turns into something really complex, but that doesn't mean anything cause... maaaagic. And you can't understand magic, right? Everyone knows that something that's so complex that it seems impossible to understand [wikipedia.org] should never be attempted. Worthless endeavor. Everyone knows that. Right? ... Maagggiiiiccc~~~
The fact of the matter is, DNA is source code. For a system we don't fully understand, one that's remarkably complex, but ultimately, DNA, even our DNA, is just data. We can understand, change, manipulate, and create data.
To treat it all as magic -- as something that we will just never be able to understand -- is to do a disservice to centuries of scientists, of the past and the future.
Forget 100 Billion, Try 302 Neurons First (Score:4, Insightful)
A model of the human brain would need to model 10^10 neurons, each connected (not at random) to some 10,000 other neurons to produce a net of 10^14 synapses.
To understand the challenge of modelling a system this vast and complex, consider the state of research on the model organism Caenorhabditis elegans (a tiny worm). After many years of work its nervous system has been (almost) exactly mapped: it contains 302 neurons, 6393 chemical synapses, 890 gap junctions, and 1410 neuromuscular junctions. Imagine now the difficulty of reaching this level of precision in a system 10^7 times larger. Unlike the genome, we have no clues about how to automate mapping of an intact brain.
But the good news is that with this level of neuro-mapping precision we can now completely simulate the neural network ("brain") of a tiny worm, right? Right?
Wrong. Not by a long shot. We are still struggling with characterizing the behavior of this primitive neural net, and making efforts at simulating some aspects of that behavior. The 302 neuron "brain" is far beyond our abilities to simulate at present.
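To make the scale concrete, here is a toy spiking network with 302 leaky integrate-and-fire neurons and roughly 6,400 random synapses. Everything about it (wiring, weights, dynamics) is invented for illustration; the whole point of the comment above is that the real connectome's specific, non-random wiring is what we can't yet capture:

```python
# Toy leaky integrate-and-fire network at C. elegans scale: 302 neurons
# and ~6,400 random synapses (302*302*0.07 is close to the 6393 chemical
# synapses quoted above). The wiring here is RANDOM, which is exactly
# what a real connectome is not -- that mapping is the hard part.
import numpy as np

rng = np.random.default_rng(42)

N = 302
mask = rng.random((N, N)) < 0.07             # sparse connectivity
W = rng.normal(0, 0.5, (N, N)) * mask        # synaptic weights (invented)
v = np.zeros(N)                              # membrane potentials
threshold, decay = 1.0, 0.9
spike_counts = np.zeros(N)

for step in range(500):
    external = rng.random(N) * 0.2           # noisy external drive
    spikes = (v >= threshold).astype(float)  # who fires this step
    spike_counts += spikes
    # Reset fired neurons, decay the rest, add synaptic + external input.
    v = decay * (v * (1 - spikes)) + W @ spikes + external

print(int(spike_counts.sum()))  # total spikes over the run
```

Writing this takes minutes; it produces activity, but it tells you nothing about worm behavior, which is why "we simulated N neurons" headlines deserve skepticism.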
Re: (Score:2, Informative)
Re: (Score:3, Insightful)
"Read it. Other than the solid date he predicts, it's pretty plausable."
No it's not. If it was possible to do in a million lines of code, it would have been done by now. Windows XP had something like 40 million lines of code. While we can agree it was probably coded relatively inefficiently, there is no way that any OS even comes close to the complexity of the brain.
Re: (Score:3, Interesting)
I don't think the claim is entirely implausible; 25MB of code may well suffice to simulate the human brain if it was written in something like brainfuck [wikipedia.org].
I do however disagree with the assertion:
The difficulty in truly understanding the genome is that it's both program and data.
Re: (Score:3, Interesting)
And the huge hole in his theory is the execution environment, i.e., the CPU that the brain is running on is REALITY itself. So be sure to add that to your cost of simulating the brain.
Re: (Score:3, Interesting)
I don't think it's all that big a leap. There are lots of very smart people actively trying to simulate human intelligence. While a million lines of code is a fairly large undertaking, it's not an unmanageable amount. If anyone actually believed it could be done in a million lines of code, it would have been done, because the profit potential is huge and undeniable. Indeed, why isn't Kurzweil working on it right now?
Even creating just the part that could find interactions between proteins based upon the
Re:Uh (Score:5, Insightful)
if you rtfa you'll see that the million lines of code only gives you the proteins that make up the brain - i.e., it gives you a parts list and a delivery schedule, not a set of assembly instructions. The genome doesn't tell you how the proteins interact, in unusually complex ways (i.e., three or more proteins interacting simultaneously), in billions of cells in parallel, over the course of 9 months, to give us an infant brain (even leaving aside the tremendous amount of development that takes place in the brain during childhood).
As the author of tfa writes: To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data.
IOW, the program is the developing organism itself: the complex protein interactions and its (uterine) environment, none of which are encoded in the genome. The organism uses the data encoded in the genome to produce proteins, which interact with each other, the organism, and its environment to grow cells which eventually form a brain.
The mistake in Kurzweil's thinking is the typical mistake engineers make when dealing with biology: the environments into which engineers place their designs do not typically cooperate, spontaneously, in the construction of those designs. When an engineer designs a circuit board, his lab bench doesn't spontaneously start soldering connections, adding components, and automatically completing parts of the design without his explicit instructions. But the organism does precisely this with proteins synthesised from the genome.
As a result, the genome alone cannot possibly tell you how to "make" an organism, because the genome only gives you the parts list and delivery schedule for the organism, not the assembly instructions. The assembly instructions are not explicit anywhere in the system; they are implicit in the combination of the complex behavior of the cells of the developing organism, the uterine environment, and the very complex ways the proteins synthesized from the genome interact.
In order to extract the actual assembly instructions we'd need a full-blown molecular biology simulator that could correctly simulate:
1. protein folding (still unsolved)
2. complex multi-protein interaction (still unsolved)
3. the simultaneous behavior and development (i.e., in parallel) of billions of living cells, each undergoing trillions of chemical reactions per second (computationally prohibitive)
IOW, it's not going to happen in the next 10 years.
Re:Uh (Score:5, Funny)
Get Kurzweil a slashdot account, stat!
Re:Uh (Score:5, Informative)
Re:Uh (Score:5, Funny)
Which account is it?
PS: Go fuck yourself.
Re:Uh (Score:5, Funny)
Re:Sounds reasonable (Score:5, Insightful)
Yes. Well done. Did you try reading the article that you are criticising? It rips your point apart fairly easily. The thing about an upper limit is that it should be at least as large as the thing you are estimating. The article shows quite conclusively that Kurzweil's "upper limit" is far too small, because he knows nothing about brains and pulled his numbers out of his arse.
That "tangent" that Myers went off on was a reasonable argument for why the amount of information described is not sufficient to simulate a brain, not least because it is a highly compressed description of a process that builds a brain. It is not a description of the brain itself. Furthermore, to use that description to build an actual model of the brain, you need to understand all of the biological processes relevant to executing that construction code, and the environment they run in.
Oh the irony, it's burning my eyes. You're defending somebody who was caught babbling about something they don't understand by repeating the trick. Well done you.
Re: (Score:3, Insightful)
You also lack an understanding of what is involved in the functioning brain.
Biochemistry is incredibly important. The brain is not just a neural network; it is an electrochemical organ and the chemicals floating around in there greatly affect the operation of neurons. There is no distinction between "hardware" and "software" in the brain--every new thought or stimulus causes the physical structure to change: neurons form new pathways, areas get flooded with neurotransmitters, etc. This shit is way more c
Re: (Score:3, Interesting)
From the number of neurons in the human brain, considering how many interconnections there are and how fast neurons can fire, I think a machine with one million processing cores at 1 GHz would have approximately the same data-handling capacity as a human brain.
We are not sure yet whether the equation:
"human brain" = "some current technology" * "some large number"
has merit or not.
I wish we were there, but the vast majority of neuroscientists currently think this is NOT the case. There is likely some qualitative difference that we still fail to understand. Assuming the equation above to be true is largely responsible for the clear failure of AI over the last few decades.
PS: to avoid misunderstandings - this does NOT mean that there is something m
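The grandparent's capacity estimate can be made explicit with rough arithmetic. Every figure below is an order-of-magnitude assumption, and ops/sec is not intelligence, which is precisely this comment's point:

```python
# Order-of-magnitude check of the "million cores at 1 GHz" estimate.
# Every figure is a crude assumption; this compares raw event rates only.
synapses = 1e14          # ~10^14 synapses (the thread's own figure)
rate_hz = 10             # assume ~10 synaptic events per synapse per second
brain_ops = synapses * rate_hz               # ~1e15 "ops"/sec

cores = 1e6              # one million cores...
clock_hz = 1e9           # ...at 1 GHz
machine_ops = cores * clock_hz               # 1e15 ops/sec (1 op/cycle assumed)

print(brain_ops / machine_ops)  # 1.0 -- rough parity, on these assumptions
```

So the raw throughput comparison is defensible within an order of magnitude; the unresolved question is whether "human brain = some current technology * some large number" holds at all.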
Re:Sounds reasonable (Score:5, Informative)
PZ Myers threw in a red herring there. What Kurzweil says is pretty reasonable: he used the total amount of information in the genome to get an upper-limit estimate of the amount of library code needed to simulate a brain. I say "library" to differentiate from data, since a lot of our brain information comes from our experiences, i.e. library == instincts.
Actually he's right. The statement is pure bullshit.
Or maybe that's too much. Kurzweil just doesn't understand how Kolmogorov complexity works.
Let's say the brain as a machine is the output of a process. How complicated is that process? The Kolmogorov complexity of a string (or whatever) is the minimum size of the data that you have to give to a machine in order for it to produce the string. E.g. a string of 100 0s is simpler than a string of alternating 0s and 1s, which is simpler than an encoding of the first 100 digits of pi. Write code for each of those and you'll see the measure works (and it's actually a lower limit, but it's the closest concept...)
But the crucial point is that the size of this string depends on the kind of machine. The size of the input (program) for a Turing machine is very different than that for an actual computer.
So, yes. 800MB of code. But that's not the library code. The library that interprets that program is the egg that grows those 800MB of data into a human, together with all the laws of physics and chemistry involved in the process.
Take all the chromosomes encoding a whole human genome and drop them into a test tube of distilled water. Does it grow a brain? What if you put them into a chicken egg? What grows out? Putting those 800 MB into a computer doesn't do anything if you don't provide the equivalent of the egg. The bootstrap structure and the underlying architecture are as important as the code in understanding the whole system.
Myers is right. In order to understand the human brain directly from the genes, you have to understand all the chemistry that interacts with it and all the self-replicating machinery provided by the mother, and simulate that at a molecular level.
So the upper bound is NOT 800 MB. It's 800 MB plus the size of a codebase good enough to simulate every interaction at an atomic level plus a full 3D scan at an atomic level of the egg provided by the mother. Or simplified models of all those things, provided by the chemists and biologists out there, as Myers points out. (Plus data equivalent to a few years of training like we do with children)
Not saying that simulating the brain is necessarily that hard; it's just that Kurzweil's pseudo-scientific measurement is bullshit.
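Incidentally, the three example strings from the Kolmogorov discussion above can be checked empirically, using compressed size as a crude upper-bound proxy (true Kolmogorov complexity is uncomputable; zlib is just a stand-in):

```python
# Compressed size as a crude upper-bound proxy for Kolmogorov complexity,
# applied to the three example strings from the comment above.
import zlib

def approx_complexity(s: bytes) -> int:
    """zlib output size: an upper-bound stand-in, not the real measure."""
    return len(zlib.compress(s, 9))

zeros = b"0" * 100          # 100 zeros: maximally regular
alternating = b"01" * 50    # alternating 0s and 1s: slightly less so
pi_digits = b"1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679"

print(approx_complexity(zeros),
      approx_complexity(alternating),
      approx_complexity(pi_digits))  # sizes should be non-decreasing
```

The same intuition applies to the genome: its compressed size bounds the description of the generator, not the cost of the machine (egg, chemistry, physics) that runs it.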
Re:worst article ever (Score:4, Informative)
I haven't read the article yet, but Ray Kurzweil is a technology speculator - like a sci-fi writer, except that he doesn't make up a story to go with his ideas and tries harder to convince people they're actually going to happen. He wrote "The Age of Intelligent Machines" and "The Age of Spiritual Machines", where he takes a hard-AI stance that computer thought can become indistinguishable from human thought. He is also a proponent of the technological singularity.
Generally his ideas aren't taken very seriously in academic computer science, or at least that has been my experience. The philosophy department at my university sometimes enjoyed going over his ideas; but then, the philosophy department at my university was very fond of pseudoscience.
Re: (Score:3, Funny)
Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.
If you accept that, then the real problem to solve becomes: in what language do you write the code?
Why, Brainfuck [wikipedia.org] of course!
Re: (Score:3, Interesting)