Kurzweil on the Future 300
dwrugh writes "With these new tools, [Kurzweil] says, by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves.
This serene confidence is not shared by neuroscientists like Vilayanur S. Ramachandran, who discussed future brains with Dr. Kurzweil at the festival. It might be possible to create a thinking, empathetic machine, Dr. Ramachandran said, but it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. 'My colleague Francis Crick used to say that God is a hacker, not an engineer,' Dr. Ramachandran said. 'You can do reverse engineering, but you can't do reverse hacking.'"
Obfuscation (Score:5, Interesting)
Re: (Score:3, Informative)
Re:Obfuscation (Score:4, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
Re:Obfuscation (Score:4, Interesting)
Apply that to the brain, and we're worlds behind. Considering we're still on the binary system, while DNA uses a four-letter alphabet for its instruction code, I think Crick grasped the complexity of the human body much better than Kurzweil seems to. Comparing the brain to a computer chip is, I think, a grossly unbalanced parallel.
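To make the density comparison concrete, here's a rough back-of-envelope sketch in Python. It's illustrative only: it treats DNA as a plain 2-bit-per-base code, the ~3.2 billion base genome size is just the commonly cited figure, and it ignores everything biology actually does with the sequence.

```python
import math

# Each DNA base (A, C, G, T) can take 4 values, so it carries log2(4) = 2 bits.
bits_per_base = math.log2(4)

# A codon (3 bases) selects one of 64 possibilities, mapped onto ~20 amino acids
# plus stop signals -- a redundant, error-tolerant encoding.
codon_values = 4 ** 3

# Rough size of the human genome in bases (commonly cited figure, assumption).
genome_bases = 3.2e9
genome_bits = genome_bases * bits_per_base

print(f"bits per base: {bits_per_base}")            # 2.0
print(f"codon possibilities: {codon_values}")       # 64
print(f"genome ~= {genome_bits / 8 / 1e6:.0f} MB")  # ~800 MB, uncompressed
```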
Re:Obfuscation (Score:4, Insightful)
Also note how "quickly hacked together" usually implies that conceptual and code-level cleanliness were forgone in favor of development-time savings. A dynamic webpage that consists of PHP and HTML happily mixed together, with constructs like <?php if($d = $$q2) { ?> <30-lines-of-html
The human brain consists of patch after randomly applied patch. I'd think that it would be the equivalent of a byzantine system with twenty different coding styles, three different OOP implementations and a dependency tree that includes four versions of the same compiler because parts of the code require a specific version. The code works, it's reasonably fast but it's miles from being clean and readable.
* Yes, it is supposed to have an assignment and a variable variable in the if block.
Re: (Score:2)
Or lacked sufficient organizational authority to get it placed as top priority...
On the upside, I think we've definitively resolved the question of whether, once someone brilliant produces hard AI, there'll be a bunch of also-rans complaining, "Well, yeah, but his code sucks..."
Re: (Score:2)
Or maybe just the service contract with earth expired, dunno.
Re: (Score:2)
I personally see hackers as codifying something even more beautiful, logical, and well-articulated than the mundane corporate programmer, delivering a much higher level of intelligence and complexity than most could understand
As someone who used to hack away at stuff rather than designing it first (and I usually still do, though if I'm going to do something fairly complex I sometimes plan it out on paper a bit first), and who once had my code referred to as "twisted-hacked" by another coder (I think he was Brazilian or German, can't remember, it was 8 years ago).. I'm still not sure whether to be proud of the fact that my twisted-hacked code works, or ashamed of the fact that I am a bit of a self-taught cowboy when it comes to
Re:Obfuscation (Score:5, Interesting)
After that, the researchers took the control stream and mapped it back to find out which logic gates were activated / deactivated / multiplexed to route to one another. What they found was that there was no direct data path from the input to the output! So how on earth were the output pins being generated?
What we were then told was that, somehow, the FPGA control pattern had created loops in certain parts of the circuit that were inducing current in the neighboring bus lines, like little radios and receivers. Totally non-intuitive, but mathematically it worked.
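For anyone curious what "evolving" a circuit configuration looks like in practice, here's a deliberately minimal Python sketch of the general idea. The bitstring length, fitness function, and parameters are all made up for illustration; a real FPGA experiment would score fitness by loading the bitstream onto actual hardware and measuring the circuit's behavior.

```python
import random

CONFIG_BITS = 256      # pretend FPGA configuration bitstream (illustrative size)
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.01

def fitness(config):
    """Placeholder: a real experiment would program the FPGA with `config`
    and measure how well the resulting circuit performs the target task."""
    return sum(config)  # stand-in objective so the sketch runs

def mutate(config):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in config]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(CONFIG_BITS)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]          # keep the best half
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```

The point of the analogy: the search only rewards whatever scores well, so the winning configuration is free to exploit physical quirks (like the parasitic coupling described above) that no human designer would reach for.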
That is what I expect to see when we finally decode the human brain -- an immensely complex network of nodes whose linkages to one another were created in real time using whatever resources were available to the "trainer" proteins at the time. No two individuals will encode the same event the exact same way: not the same locations in the brain, not the same method of linkages, nor the same number of linkages.
This is why I see the "singularity" not as a machine that we can walk into, have our brains scanned, and, bam, our consciousness is copied to a computer. I think that every individual who "transcends" will have to do it incrementally, gradually replacing and extending the nodal functions that make up the brain. The brain needs to replace its neural network mesh with electronic blocks, and do it in such a way that the mesh's functionality is maintained while the material that makes up the mesh is replaced.
Over a period of time, there will be no more "meat mesh", and your consciousness will have transcended into a medium that we know can be copied, backed up and restored. And when that happens, well, our whole concept of what makes up a "person" would need to be redefined.
-- Scott
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
Re: (Score:2)
The comparison is not to intentionally obfuscated code, but to organized and documented code. Haphazard code is quite a bit more difficult to work with than clearly written code.
Re: (Score:2)
Re:Obfuscation (Score:5, Insightful)
Nonetheless there is something to what Kurzweil says, futurist (or in my language 'bullshit-artist') though he is.
The brain is probably impossible to 'reverse-engineer', not because of its evolution but because to come up with a brain you need 9 months of in-utero development followed by years of environmental formation, nurturing and so forth, by which time the brain is so complex and fragile that analyzing it adequately becomes practically impossible.
I mean, take the electroencephalogram (EEG). It gives us so little information it's laughable. Electrical signals from the cortex mask those of deeper structures, and we still just end up with an average of countless signals. Every other form of brain monitoring is also fuzzy. Invasive monitoring of brain function is problematic because it damages the brain, and the brain adapts (probably) in different ways each time. Sure, we can probably get some of the information we are after, but the entire brain is, I would suggest, too big a task.
But we can use the same principles that exist in the brain to mimic its functionality. It would ultimately be a new invention, though, and not a replica of a brain, even if it does manage to demonstrate consciousness.
Re: (Score:2)
Humans are formed from the coding in DNA, so the function of a brain is also contained within DNA. Therefore, in time, it can be entirely understood.
The thing I find interesting is the rapid rate of progress in learning to sequence DNA.
e.g. http://www.fiercebioresearcher.com/story/biotechs-use-nanotech-in-pursuit-of-100-sequencing-bill/2008-04-22 [fiercebioresearcher.com]
Up until now it's been hugely expensive and time-consuming to sequence just one human. But one human is
Re: (Score:3, Insightful)
He's not trying to obfuscate the code, he's just not trying not to. Purposeful obfuscation implies organization. Hacking
Re:Obfuscation (Score:4, Funny)
So you're saying the human brain is Open Source?
poverty of expectations (Score:5, Funny)
Re: (Score:3, Informative)
Re: (Score:2)
--
Cretin - a powerful and flexible CD reencoder
Re: (Score:2)
Re: (Score:2)
I hold no hope for AI for a long time.
Re: (Score:2)
Re: (Score:2)
While I consider anyone who watches a car drive around an oval for 6 hours to have questionable hobbies, NASCAR fans in particular aren't so bright.
It is one thing if the race course itself is a variable, o
Re: (Score:2)
Re: (Score:2)
Then again, we also usually don't understand how Baseball can be exciting.
Maybe not THAT low hanging (Score:5, Informative)
Unfortunately,
1. A neuron isn't a transistor. Even the inputs alone would need a lot more transistors to implement at our current technology level.
A brain neuron takes its inputs from an average of about 7,000 other neurons, with the max being somewhere around 10k, IIRC. The vast majority of synapses are one-way, so an input coming through input 6999 can't flow back out through inputs 0 to 6998. So even just to implement that kind of isolation between inputs, you'd need an average of 7,000 transistors per "silicon neuron" just for the inputs.
Let's say we build our silicon neuron to allow for 8k inputs, so we have only one module repeated ad nauseam, instead of custom-designing different ones for each number of inputs between 5,000 and 10,000. Especially since, as we'll see shortly, the number of inputs doesn't even stay constant during the life of a neuron, it must accommodate a bit of variation. That's 2^13 transistors per neuron just for the inputs, or enough to push those optimistic predictions back by 13 whole Moore cycles. Even if you believe those cycles are still only 1.5 years each, that pushes the predictions back by almost 20 years. Just for the inputs.
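The Moore-cycle arithmetic in that paragraph is easy to sanity-check; here's the back-of-envelope version in Python (the 1.5-year doubling period and the 8k-input figure are the parent's assumptions, not measurements):

```python
import math

inputs_per_neuron = 8 * 1024                 # 8k inputs, rounded up to a power of two
transistors_for_inputs = inputs_per_neuron   # ~1 transistor per input, per the parent

# Each Moore cycle doubles the transistor budget, so recovering a factor of 8192
# costs log2(8192) = 13 cycles.
moore_cycles = math.log2(transistors_for_inputs)

years_per_cycle = 1.5                        # optimistic doubling period (assumption)
delay_years = moore_cycles * years_per_cycle

print(moore_cycles)   # 13.0
print(delay_years)    # 19.5 years, i.e. "almost 20 years", just for the inputs
```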
2. Here's the fun part: neurons form new connections and give up old ones all the time. Your brain is essentially one giant FPGA, that gets rewired all the time.
Biological neurons do it by physically growing dendrites which connect to an axon terminal. A "silicon neuron" can't physically modify traces on the chip. You have to include the gates and buses that can switch an input over to another nearby source, chosen from the thousands of available outputs of other "neurons". Somehow. E.g., a crossbar kind of architecture. For each of those thousands of inputs.
Now granted, we'll probably figure out something smarter and save some transistors on that reconfiguration, but even that only goes so far.
There go a few more Moore cycles.
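As a toy illustration of what "rewirable inputs" means in software terms, here's a minimal Python sketch of a crossbar-style connection table. It's purely illustrative (all numbers and names invented) and says nothing about how you'd pay for this in actual gates.

```python
import random

class CrossbarNeuron:
    """Toy 'silicon neuron' whose inputs can be re-pointed at other neurons'
    outputs at runtime, standing in for dendrites growing and retracting."""

    def __init__(self, n_inputs, n_sources):
        self.n_sources = n_sources
        # Each input slot holds the index of the source neuron it listens to.
        self.connections = [random.randrange(n_sources) for _ in range(n_inputs)]
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def rewire(self, input_slot, new_source):
        # The biological version grows a new dendrite; here it's just an index write.
        self.connections[input_slot] = new_source

    def activate(self, source_outputs):
        total = sum(w * source_outputs[src]
                    for w, src in zip(self.weights, self.connections))
        return 1.0 if total > 0 else 0.0   # crude threshold unit


outputs = [random.random() for _ in range(10_000)]   # outputs of 10k other neurons
neuron = CrossbarNeuron(n_inputs=8192, n_sources=10_000)
neuron.rewire(input_slot=42, new_source=7)           # "grow" a new connection
print(neuron.activate(outputs))
```

The catch, as the parent notes, is that every one of those index writes has to exist as physical multiplexers and routing on the die, which is where the transistor count blows up.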
4. And that was before we even got to the neuron body. That thing must be able to do something with all those inputs, plus stuff like deciding by itself to rewire its inputs, or even (yep, we have documented cases) one area of the brain deciding to move to a whole other "module" of the brain or take over its function. It's like an ALU in a CPU deciding to become a pipeline element instead, because that element broke. In the FPGA analogy, each logic block is complex enough to also decide by itself how it wants to rewire its inputs, and what it wants to be a part of.
There are some pretty complex proteins at work there.
So frankly, even for the neuron body itself, imagining that one single transistor is enough to approximate it is plain old dumb.
5. And that's before we even get to the way we waste transistors nowadays. It's not like the old transistor radios, where you thought twice about how many you needed, and what else you could use instead. Transistors on microchips are routinely used in place of resistors, capacitors, or whatever else someone needed there.
And then there are a bunch wasted because, frankly, no one ever designs a 100-million-transistor chip by lovingly drawing and connecting each one by hand. We use libraries of whole blocks, and software that calculates how to interconnect them.
So basically look at any chip you want, and it's not a case of 1 transistor = 1 neuron. It's more like a whole block of them would be equivalent to one neuron.
I.e., we're far from approaching a human brain in silicon. We're more like approaching the point where we could simulate the semi-autonomous ganglion of an insect's leg in silicon. Maybe.
6. And that's before we get to the probl
Re: (Score:3, Informative)
Purkinje cells in the cerebellum sum up about 100k inputs. Purkinje cells are the sole output of the cerebellar cortex. And the cerebellum has as many neurons as the rest of the brain (~10^10). So your point is very valid, just an order of magnitude or so short on the estimate.
Ok, let's consider it (Score:4, Insightful)
Ok, let's consider it.
Which is basically a non-factor, since the wiring gets it back to the right orientation anyway.
And it does saccades that not only allow it to see in any direction anyway, but also greatly increase resolution. [wikipedia.org]
It's got a low-light mode, unlike most modern cameras, which become 100% useless in low light. Most cameras you can buy need a flash even in relatively well-lit rooms, and become freaking useless at the light levels where the eye goes predominantly B/W. So, hmm, between going monochrome and going blind, it seems to me that the eye wins, hands down.
Only in as much as any other piece of biology is. Even so, it can withstand a lot of things which would render a cheap camera useless. And it can self-heal from most things.
But it's wired to something which can do a reasonable job even with an unfocused image. Try an OCR or, better yet, image recognition in the same conditions, and you'll see some epic fail.
But let's talk about some other advantages:
- better resolution than almost any digital camera
- saccades help increase the effective resolution even more
- some image processing and compression is built right into the retina, so it needs _far_ less bandwidth on the optic nerve than a modern camera would (rough numbers in the sketch below)
- takes up less space than a camera able to focus over the same range of distances, and get similar image quality. (Hint: it doesn't need to move the lens waay forward and back to focus.)
- can deal with a wider range of brightness in the same image (most cameras need postprocessing so that if the bride looks OK, the groom doesn't look like a light-sucking black hole, or vice versa)
- it can even rewire itself to deal with stuff it wasn't designed to deal with. E.g., you can get a camera-style photo-receptor as an implant against blindness, and the neurons in the eye and brain will rewire themselves to work with the fundamentally different image it gives. (That's one amazing thing about neurons: they can essentially reverse-engineer almost any kind of body, and learn to use it.)
Etc.
Now I'm not saying it's _perfect_, nor "proof of creation". But it's a lot better than you seem to assume, anyway. We're not quite at the point where we can equal it. Yet. We will be eventually, but not yet. We can do better in _some_ aspects, but often at the price of doing something else worse.
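Here's a very rough Python sketch of the bandwidth point from the list above; every number in it is a commonly cited ballpark figure rather than anything from the parent post, so treat it as an order-of-magnitude illustration only.

```python
# Rough, commonly cited ballpark figures (assumptions, not measurements):
photoreceptors = 120e6          # rods + cones in one human retina
frames_per_second = 25          # pretend the eye were a naive frame-based camera
bits_per_sample = 8

raw_bits_per_second = photoreceptors * frames_per_second * bits_per_sample

optic_nerve_fibers = 1e6        # roughly a million ganglion-cell axons
estimated_nerve_rate = 10e6     # ~10 Mbit/s, a published order-of-magnitude estimate

print(f"naive raw stream  : {raw_bits_per_second / 1e9:.0f} Gbit/s")
print(f"optic nerve       : {estimated_nerve_rate / 1e6:.0f} Mbit/s")
print(f"compression ratio : ~{raw_bits_per_second / estimated_nerve_rate:.0f}x")
```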
Adding computers to our brains? (Score:3, Insightful)
Re: (Score:2, Funny)
Re:Adding computers to our brains? (Score:4, Funny)
Re: (Score:2)
- If that is possible, there will be master brain hackers.
- If there is a master brain hacker, there will be Naked Female Robots jumping from buildings [wikipedia.org]
For the average Slashdot Id, the trade off is probably worth it.
wetware security (Score:5, Insightful)
Without the ability to install properly open code, I suggest a good security patch, like zen, or some other semi-mystical skepticism.
Re: (Score:2)
Re: (Score:3, Funny)
Wouldn't that *help*? (Score:3, Interesting)
Furthermore, looking at the broader picture, I was reading an artificial intelligence textbook that was available online (sorry, can't recall it offhand) which said that current AI researchers have shifted their focus from "how do humans think?" to "how would an optimally intelligent being think?" and therefore "what is the optimal rational inference you can make, given data?" In that paradigm, the brain is only important insofar as it tracks the optimal inference algorithm, and even if we want to understand the brain, that gives us another clue about it: to improve fitness, it must move closer to what the optimal algorithm would get from that environment.
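The "optimal rational inference, given data" idea the textbook is gesturing at is usually formalized as Bayesian updating. A minimal Python sketch (the numbers are invented just to show the mechanics):

```python
# Bayes' rule: P(H | D) = P(D | H) * P(H) / P(D)
# Toy example: how strongly should a rational agent believe hypothesis H
# after seeing a piece of evidence D?

prior_h = 0.01          # P(H): prior belief in the hypothesis (invented)
p_d_given_h = 0.95      # P(D | H): how likely the evidence is if H is true
p_d_given_not_h = 0.05  # P(D | not H): how likely the evidence is otherwise

p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)   # total probability
posterior_h = p_d_given_h * prior_h / p_d

print(f"posterior P(H | D) = {posterior_h:.3f}")   # ~0.161, up from 0.01
```

In that framing, a brain is "good" to the extent its shortcuts approximate this kind of update cheaply enough to be worth the metabolic cost.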
Re:Wouldn't that *help*? (Score:5, Interesting)
It all boils down to the result that an intelligent organism capable of building a social, technological civilization could have been quite different from us. Even if it looked like us, the details as well as the overall layout of the brain could supposedly have been quite different while still giving equivalent fitness. A simulation reproducing everything is not feasible, so how do we find out which elements are really relevant?
Re: (Score:2)
Re: (Score:2)
Kurzweil Talk in Cambridge, MA (Score:4, Interesting)
Re: (Score:2, Interesting)
Kurzweil thinks of the brain as a massively parallel system, one that has very low signaling rate (neuron firing) compared to a CPU which it overcomes by the massive number of interconnections.
Mod me offtopic or whatever, I don't care, but I've been thinking about this for a few weeks. If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through and destroys a few of those connections? We can shoot bullets through most parts of a computer and more than likely only a piece of it will be damaged. (I have never done this, but in theory we could just reroute the processing through the non-damaged parts, correct?)
How is it that we can have brain damag
Re: (Score:3, Interesting)
Re:Kurzweil Talk in Cambridge, MA (Score:4, Informative)
1) You don't die "instantly" unless the damage is very extensive. In particular, you can have autonomous functioning in the brainstem persist after massive "upper" brain damage.
2) You can damage large parts of the brain and have the processing rerouted, sometimes. There are large, apparently "silent" parts of the brain that you can remove without (apparent) problems. Read some of Oliver Sacks's stuff for some interesting insights into how the brain works on a macroscopic level.
Have you accepted Google as your personal search engine?
Re:Kurzweil Talk in Cambridge, MA (Score:5, Informative)
Have you seen the mess that a bullet going through a skull makes?
It's not the bullet itself that's the problem, it's the shockwave generated by its passage that does all the damage. It's called "cavitation"; this video [youtube.com] should help you understand it. If you watch the last part of that video carefully, you'll see that it causes the entire melon to explode outwards. Now imagine what that kind of force does to brain tissue confined inside a skull.
You can't compare that to shooting at computer components - they react completely differently, and are not affected at all by the shockwave. When you shoot a computer, only the part you hit is affected.
Re: (Score:3, Informative)
mid-age life crisis (Score:5, Informative)
Re:mid-age life crisis (Score:4, Interesting)
IMO, a very controversial prediction of his is that, starting around 15 years from now, we will increase our life expectancy by one year every year, a rate he also expects to grow exponentially...
Re:mid-age life crisis (Score:5, Insightful)
Biomedical advances will never increase at the same rate as computer technology, simply because experimenting with silicon (or whatever) doesn't have any health and safety issues tied to it, much less any potential moral backlash (barring real AI, which I think is farther away as well).
It sometimes takes 15 years for a useful drug with no proven side effects to make it to market. Even if we made theoretical breakthroughs today, it'd be a decade or more before they could be put into practice on a meaningful scale, assuming they were magically perfect and without flaws/side effects/whatever.
It's very dangerous to look at our advances in computer technology and try to apply those curves to other disciplines. It's equally ridiculous to assume that the rate of increase will remain the same with no compelling evidence to support the assertion. In terms of computers and biotech, we're still taking baby steps, and while they seem like a big deal, we still have a long way to go.
exponential curves, s-curves, and bell-curves (Score:3, Insightful)
Re: (Score:2, Informative)
I suggest you take a look at his actual research before you say such things. Here's a link to a presentation he recently did at TED:
http://www.ted.com/index.php/talks/view/id/38 [ted.com]
Re:mid-age life crisis (Score:4, Insightful)
We're not even CLOSE to anything resembling some magical singularity.
Re:mid-age life crisis (Score:4, Insightful)
Which predictions? I'm looking at Wikipedia [wikipedia.org] and so far his track record doesn't seem good.
Robotic limbs are in research, but we don't have anything that's production-ready soon. We don't have translating telephones. We do have software that can transcribe speech into text. Poorly, but I'd still chalk that one up to him. Cybernetic chauffeurs have not materialized. Unless you count phone menus as intelligent (which I don't), his intelligent answering machine hasn't materialized either. About the best I've seen are ones that can, poorly, parse an English sentence and look for key words. Most have trouble with "Yes" or "No."
Moving on to the next section which, as the text indicates, is supposed to happen before 2010... The classroom is not dominated by computers in any real sense, much less generating course-work tailored to students. The production sector is not dominated by a small number of highly skilled people; just look at how things are in China. That is, unless you mean economic domination (and substitute "highly skilled" with "rich"), which he didn't. Tailoring for individuals is not common by any reasonable measure. We are just beginning to use simulations for things like protein folding, and we are far away from using them for drug testing. About the only one that could be considered accurate is handheld image recognition for blind people, which I would imagine is technically feasible. But since this tech is not widespread (I haven't seen a single blind person using it), it's only a half-win.
So no, his predictions are not good, and he is wrong at least as often as he is right.
Moving on to your next statement... Sure he has doctorates, works with a nice-sized team of people, and does a lot of research. None of these things make him right. People can be highly educated and hard working, and still be wrong.
And yeah it is easy to spout negativity, but that doesn't mean the negativity is wrong either. Likewise, wishing really hard for something to be true doesn't make it true.
And seriously, the singularity has always seemed like a lot of wishful thinking. There are real theoretical upper limits to computational ability and information storage. Granted, we are nowhere near those theoretical upper limits, but there are practical upper limits as well, and we have no idea how close we are to those. Humanity cannot break the laws of physics, and thus we will, at some point, plateau in our development. Singularity advocates always seem to ignore this in favor of predicting unbounded exponential growth, which makes me question their conclusions. A plateau seems just as likely to me as exponential growth.
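One example of the kind of hard physical limit being alluded to is Landauer's principle, which sets a minimum energy cost for erasing a bit at a given temperature. A quick Python sketch (the 20 W power budget is a rough, commonly quoted ballpark for the brain, not anything from the parent post):

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300                  # room temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
landauer_joules_per_bit = k_B * T * math.log(2)
print(f"min energy per bit erased: {landauer_joules_per_bit:.2e} J")  # ~2.9e-21 J

# How many irreversible bit operations could 20 W buy at that absolute floor?
power_watts = 20
ops_per_second_limit = power_watts / landauer_joules_per_bit
print(f"Landauer-limited ops at 20 W: {ops_per_second_limit:.1e} per second")
```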
Re: (Score:3, Insightful)
But putting aside credentials (for reals this time), you're not addressing my main point that there are theoretical and practical limitations to things. There are physical laws. You can't go faster than the speed of light. You can't get more energy out of a system tha
Re: (Score:3, Funny)
Re:mid-age life crisis (Score:5, Interesting)
Which is pretty funny, given that dietary supplements haven't been found to be very useful on the whole.
He doesn't really seem to look any younger, or even to have stopped aging. He does look a bit better than smokers of his age, but not by a whole lot, in my opinion.
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
According to him, his Type II diabetes appears to have gone away for the time being... at least the symptoms part of it.
Re: (Score:3, Informative)
Actually, according to several sources, and as mentioned on his Wikipedia page [wikipedia.org], his biological age is about 20 years younger than his chronological age (only two biological years older than when he changed his health habits 20 years ago).
Silliness (Score:5, Insightful)
You can reverse engineer anything. Whether it has a well-thought out design or not, its functions can be analyzed and documented and re-implemented and/or tweaked.
If anything, the timetable may be in question, but not the question of whether or not it can be done. I have no doubt it can be done, it's just a matter of how long it'll take given the right resources, the right talent, the right technology, and the right knowledge.
Granted, I'm just an idiot posting on slashdot, and not an inventor or neuroscientist, but I still think I'm right on this.
Re: (Score:2)
I've met many a hack I couldn't figure out; a massive tangle of irreplicable crap. But I can still make something new that is functionally identical, and that is what would need to be accomplished to replicate the brain.
I see wishful thinking on two fronts. One, he wants to be immortal and touch the "weak godhood" of the Singularity. But two, he still wants to be "unique" in having this wonderful un-hackable brain. I think those two ideas contradict eac
Re: (Score:2)
We can understand "engineering" in this context, as opposed to "hacking", as the careful planning of a system so that we use "well-behaved" functions, thereby maximizing our future ability to refactor said system, and minimizing the losses of potentially useful information throughout the various operations...
For instance, a properly engineered graphics application will let you zoom in and out of a picture without permanently transfo
Re: (Score:2)
A function maps each input to exactly one output; an invertible one is a one-to-one mapping.
Nah (Score:5, Insightful)
The future will be like that: something people aren't really predicting. Something neat, but not flashy.
Alternatively, the future will be the "inverse singularity" -- you know, instead of the Vinge godlike AI future singularity of knowledge, there could be a singular event that wipes out civilization. We certainly have the tools to do that today.
Re:Nah (Score:4, Insightful)
I don't know. I think AI is economically easier and more desirable to achieve than a flying car.
The internet was predicted around the same time, but no one really paid attention; because it was economically viable and actually desirable (no drunk drivers or grandmas driving 300 mph into buildings like a missile), it came about.
Secondly, AI in any form is desirable. From something as simple as filtering data, to more advanced tasks like picking stocks, to the final goal of actually being a company's CEO: that is what many companies are investing in right now.
Of course no one is building a super-intelligent CEO in a box as of now, but many companies, especially the larger financial firms and those who manage mutual funds, are developing programs that are borderline AI for choosing their best investments.
Now, they don't call them AI at this point, but they are approaching it, and I would wager that when it becomes viable, people will be building MBAs in a box to determine strategic decisions.
Re: (Score:2)
Re: (Score:3, Informative)
I think you are talking about "black box" trading at quant
Why must you piss on my parade, sir? (Score:2)
Re: (Score:2)
The difference from most other predictions is that AI doesn't have to become economically viable. You only need one, somewhere, to cause a technological disruption.
Regarding the singularity (Score:2)
The primary thing influencing the outcome is our current state of mind. Considering the drive humans have to prove themselves superior by wiping out anyone even slightly different from them
Deja vu (Score:2)
Where's my f'ing flying car dip$%^* (Score:2, Funny)
Re: (Score:3, Informative)
Re: (Score:2)
Huh? It's easy to find data on the fuel usage of various vehicles, including airplanes. There's a rough summary at this wikipedia page [wikipedia.org]. Airplanes generally have roughly the same fuel consumption per passenger per 100 km as autos. This includes the general observation that the bigger the vehicle, the smaller the fue
Will people move data like in Johnny Mnemonic in ? (Score:2)
Re: (Score:3, Funny)
Adding computers to my brain? (Score:2)
One of the great things about current technology is the ability to unplug, and sometimes I want to do just that.
Sounds a little like.... (Score:2)
Re: (Score:2)
Adaptation (Score:5, Insightful)
So, as haphazardly as the brain's structures, memory storage, sensory input, etc. might have evolved, it might still be flexible enough to figure out a sufficiently simple interface with anything you might connect to it. Given smart training for finding the newly connected "hardware", it might be possible to interface with some really interesting brain extensions.
The complexity and abstractness of the extension might be limited by the very pragmatic learning approach of the brain, making it more and more difficult to learn the interface if the learning target is too abstract or too far away for the brain to "stay interested". Though maybe with sufficiently long or "intense" sensory deprivation that could be extended a bit.
My problem with the argument of the "haphazard" structure of the brain is that it could have been used to deny the possibility of artificial limbs or artificial eyes, which both seem to work pretty well. Sure, these make use of already pre-existing interfaces in the brain, but as far as I know (not very far) the brain is incredibly malleable at least during the first 3 years of childhood.
So, as ethically questionable as that may sound to us, it might make sense to implant such extensions in newborn babies and let them interface to them in the same way they learn to use their eyes, coordinate their limbs and acquire language.
Good times
Re: (Score:2)
"how did he die?"
"his spacial awareness co-processor run out of battery, and he run into a wall"
Adding extra-human capabilities without turning a human into a human-machine hybrid, each depending on each other for survival, sounds like the true challenge. And that's not even looking in the ethical challenges of preventing borg-like control via the add-ons. "Don't fret about the elections, Mr. President. The new citizen 2.1 firmwa
My biggest problem with Kurzweil (Score:5, Insightful)
He not only makes predictions about technology (which is a feasible endeavor, though fraught with difficulties), but also about the universe that the technology will interact with. Predicting that brain scan technology will improve is (pardon the pun) a no-brainer. Predicting that we will map out hundreds of specialized areas within the brain is a prediction that is completely off the wall, because we don't know enough about brain function to know if all areas are specialized.
Balls of crystal (Score:4, Insightful)
As a cyborg [reference.com] myself, I don't see any sane person adding a computer to his brain for non-medical uses.
I was going to say that sane people don't undergo surgery for trivial reasons, but then I thought of liposuction and botox for rich morons, and LASIK for baseball players without myopia. Still, I don't see any ethical surgeons doing something as dangerous as brain surgery for anything but the most profound medical reasons, like blindness or deafness.
As to "as smart as ourselves", the word "smart" [reference.com] has so many meanings that you could say computers already are, and have been since at least the 1940s: "1. to be a source of sharp, local, and usually superficial pain, as a wound." Drop ENIAC on your foot and see how it smarts. "7. quick or prompt in action, as persons." By that definition a pocket calculator is smarter than a human.
Kurzweil has been saying this since the 1970s, only then it was "by the year 2000".
We don't even know what consciousness is. How can you build a machine that can produce something you don't understand?
Re: (Score:2)
The ultimate in interface design (Score:2, Interesting)
For me, the huge thing will be when I'm abl
Worst summary in a long time (Score:2)
What new tools? What festival? Why doesn't Kurzweil get a first name while Ramachandran gets not only a first name but an initial? Has Ray dropped the Ray?
You don't really need to reverse engineer it... (Score:2, Insightful)
No, the monkey's brain spontaneously created the neural network to control it. Sentient beings aren't computers, at least not in the conventional sense, because they reformat themselves to process new data (learning), and even to process new types of inputs. One might be able to bui
Changing ideas (Score:2)
From TFA:
Two decades ago he predicted that "early in the 21st century" blind people would be able to read anything anywhere using a handheld device. In 2002 he narrowed the arrival date to 2008. On Thursday night at the festival, he pulled out a new gadget the size of a cellphone, and when he pointed it at the brochure for the science festival, it had no trouble reading the text aloud.
I'm guessing that 20 years ago he was thinking of a handheld device that would actually allow blind people to literally
Re: (Score:2)
Blind people navigate and read text using machines that can visually recognize features of their environment.
Which is not quite the same as what he has, but no neural interfaces are mentioned either. He also promised us robot cars and viable automatic translators in the same book, so I'm not terribly impressed with his accuracy.
Kurzweil also makes a lot of assumptions about our ability to keep improving computers, despite the limits of silicon technology and the current lack of a viable replacement.
The future... (Score:2, Informative)
PS Anyone having trouble getting their rightful Karma bonuses despite still having 'excellent' Karma?
8th World Wonder? (Score:2, Insightful)
I don't think so. Computers are now faster and "bigger" (not physically) than 30 years ago. Programs have more functions than 30 years ago.
But essentially, they do exactly the same thing as 30 years ago, just MORE of it. And they don't do it BETTER; they still have the SAME BUGS, and the SAME NUMBER of bugs per lin
Interpretation by someone from both fields (Score:2)
"God" (and I presume he means Einstein's God here) isn't the hacker that creates the human brain, the human behind the wheel is. We have varying genetic predispositions, but even after initial conditions are set, our brains develop differently based on our experiences. It's well k
improve the interface, not combine the components (Score:2)
I think that the man-machine interface in general is vastly overlooked, and I'm going to get specific. A senior partner of a mid-sized architecture firm keeps asking me why it takes so long to produce a drawing, given that the parts can be created quickly on the command line. What if you could just think of it, and it appeared? Sure, there would have to be some scaling algorithms involved, and it would probably take some practice, but it would ultimately
Will not happen... (Score:3, Informative)
Never mind all the scientific and technical obstacles that even a non-scientific person could think of, let alone the philosophical issues once we get into those (things we don't even have words to talk about yet).
Yet there is still a very simple reason why the prediction will not happen: does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?
I've said it before and I will say it again: this is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.
But one thing is for sure, Kurzweil will die just like every other "prophet" before him.
Re:Will not happen... (Score:5, Interesting)
A scientist's job is not to criticize results because one doesn't "like" them. Instead, one interprets results, and this is what he did. The math works, and he stands behind that result.
---Never mind all the scientific and technical obstacles that even a non-scientific person could think of, let alone the philosophical issues once we get into those (things we don't even have words to talk about yet).
Let me pose this question to you then: What happens when capitalism and high technology blend? The people who matter want this tech he describes: genetic tailoring, nanotech, and robotics.
Gene manipulation and immune resets are possible, but only for the elite few who can afford them. Genetic diseases are eliminated by tailoring disease-free marrow and injecting it. I remember when the Human Genome Project started... almost 15 years to complete. SARS took 29 days.
We already use nanotech in our everyday life. Exactly how do they rate CPUs and GPUs? That's right, by the process used to create them. Nanotech doesn't mean nano-robots, though it will eventually; nanotech means we can manipulate matter at the scale of 10^-9 m. We also see nanotech in the labs-on-a-chip the military is currently using for biowarfare detection. It's there, hiding in plain sight.
Robotics is the hardest right now, but that's only because we don't have a powerful enough vision-system processor. Once we have that, things will get scary crazy (as in Story of Manna crazy). And along with "robotics" we have plenty of software to run some interesting things right now. Some guy in his garage built a fully automatic heart-targeting gun turret a la Team Fortress Classic using homemade gear and COTS parts. Two webcams with parallax were all he needed for basic motion/depth detection.
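For anyone wondering how two webcams get you depth, the core of stereo parallax is one line of geometry: depth = focal length x baseline / disparity. A minimal Python sketch (all numbers invented for illustration; a real turret would get the disparity from feature matching between the two camera images):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.
    focal_length_px : camera focal length expressed in pixels
    baseline_m      : distance between the two camera centers, in meters
    disparity_px    : horizontal shift of the same feature between the images
    """
    if disparity_px <= 0:
        return float("inf")      # zero disparity means the point is "at infinity"
    return focal_length_px * baseline_m / disparity_px

# Invented example values: 700 px focal length, cameras 12 cm apart,
# and a target whose image shifts 35 px between the left and right views.
print(depth_from_disparity(700, 0.12, 35))   # -> 2.4 meters away
```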
---Yet there is still a very simple reason why the prediction will not happen: does he know how long it takes for the FDA to approve a brain implant of the kind he is suggesting (even if we had one)?
Which is why the USA will stagnate. We need fewer of these BS laws, and allowances for scientists to experiment with willing subjects who qualify as sane. We could have a working bionic plug-in eye if it weren't for the stranglehold the FDA and AMA have on the USA. I'd say the FDA needs to be neutered to a "recommend/do not recommend" role, if it exists at all.
---I've said it before and I will say it again: this is nothing more than a religion posing as pseudo-science, from a guy who takes 200 anti-aging pills hoping to reach immortality through technology.
My parents started taking supplements after hearing about them from news broadcasts and Dr. Oz on Oprah. I chose not to, while observing what happens. One such supplement is resveratrol, along with L-lysine and massive dosages of vitamin C (4 g a day). My mom took glucosamine and chondroitin sulfate for her back after a friend (who is a veterinarian) recommended it to her; animal clinics use that complex specifically for severe arthritis in animals. No company can make money after paying the required FDA fees, so it stays a "supplement". My mom's back healed 100% with the glucosamine/chondroitin.
Well, back to the new drugs... My mom and dad started the cocktail. My dad was bald on top, or used to be; one of those supplements is actually regrowing his hair. We're not sure if it's the C, the resveratrol, or the L-lysine, but it's something. Rogaine(?) never worked. My mom's knees also used to crack and grind in the joints; now she can feel her knees healing.
They took Linus Pauling's recommendation on massive C dosages, and it seems to work as he claimed. Considering he won two Nobels in two fields, we respect him, even if the medical establishment does not. I have thought about following this same regimen and recording my progress, as it does seem to work for them (placebo is not strong enough to grow hair that hasn't grown in 20+ years).
---But one thing is for sure, Kurzweil will die just like every other "prophet" before him.
Maybe.
Re: (Score:2)