NPR Looks to Technological Singularity
Rick Kleffel writes to tell us that NPR is featuring a piece with both Vernor Vinge and Cory Doctorow looking at the possibility of the "technological singularity" in the near future. Wikipedia defines a technological singularity as a "hypothetical 'event horizon' in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete."
I for one... (Score:5, Funny)
Re:I for one... (Score:2)
It might be survival of the fittest... but it sure as hell doesn't mean the losers believe in evolution's logic. lol.
Re:I for one... (Score:5, Interesting)
It might be we still follow the survival of the fittest rule.
But then, how come I sense this disturbing trend that is stripping the single man of all his cultural and material property?
Men in the past had access to renewable water sources because pollution was of a different kind; they didn't fear the sun, because the ozone layer wasn't depleted; they didn't pollute the land with genetically engineered crops or chemicals. Culturally speaking, the trend is stripping man of every set of values which is not money: the French Revolution fucked the aristocracy. Fascist trolls made us hate nationalism by associating it with violence and ignorance (this is a European perspective; in fact, US people were more nationalist, but now you have your own Bush troll). Global media fucked home-bred traditions in the West, while Communism did the same in a more violent and explicit way in the East. Corporations have stripped us of science: scientific experiments done in total privacy, plus patents, make not science but occultism. Now everything is poised to strip us of religion, as the battle between violent, sexist Islamic integralism, neo-con crusaders, and Zionists will end up with people worn down by WWIII refusing anything that remotely sounds like faith.
This is a brain dump, not an analysis. Am I wrong? I sure hope I am. But think about it when you have to evaluate any change marketed as "progress".
Re:I for one... (Score:5, Insightful)
You say that like it might be a bad thing.
Religion is all well and good when it is a personal thing, and maybe OK when you are following the teachings of people (or things) long gone. But once it forms into clumps or groups of people, and especially (it would seem) once these groups start following the teachings of people who are alive now, we start getting problems. It's the high priests, the living leaders of religions, who decide they need to spread the word of their god at the point of their followers' swords, and that's when the trouble starts!
Re:I for one... (Score:3, Interesting)
If people refuse religion and it's their choice, no problem. If external interests want people to obey only one value and make people either hate religion or follow a distortion of it, that's a problem. Another problem
Re:I for one... (Score:3, Insightful)
In the past, men polluted as aggressively as they could. There was no thought at all given to protecting the planet. People, if anything, are much, much cleaner now. The key difference is that we are slowly but surely running out of space. We are not worse polluters than our ancestors; we are just being held to an effectively higher standard. (Don't get me wrong
Re:I for one... (Score:5, Interesting)
Humans are proud of their abilities. They fashion themselves to be the most capable species on earth. If, in the future, we are outclassed by artificial intelligence, it seems likely that we will feel ashamed of ourselves, in a sense. When first-class athletes go past their prime, they are likely to retire from the game. They do not want to compete as second-class athletes. Advanced AI could really hurt our feelings, and spawn a desire to give up. I mean, what's the point of life if we aren't on top?
My reply to this was simply: Die fighting for those that you love.
Of course, in such a scenario we might be faced with the choice of enhancing ourselves through biology and cybernetics, so as to compete with our "AI overlords." But such a choice may really alter what it means, and feels like, to be human. I am not saying whether this is good or bad, but I am saying that if we do decide to take that course, we will be sacrificing the human experience for the sake of the preservation of the species.
So, I wasn't truly talking about natural selection, and I should have left it out of my previous post. Evolution, however, is WHAT I am talking about. Evolution simply means: "A gradual process in which something changes into a different and usually more complex or better form" (from dictionary.com). Of course, biology uses the term within the framework of genetic change over time.
Re:I for one... (Score:5, Interesting)
I'll also note that your whole argument stems from the assumption that the human race will be in some sort of competition with its tools. Frankly, there's no reason to think anything will compete with us as a race unless we design it that way. As individuals, sure, you'll lose your job if a robotic assembly line can do it better, but you only got the job in the first place because of the existing technology that let you steal the job from the rug weaver in africa (or whatever). Live by the sword, die by the sword.
Re:I for one... (Score:3, Interesting)
The thread was assuming that a super AI was formed, and that it would rule over us. Maybe silly, maybe not.
The point of my post was simply this: we may someday be capable of artificially modifying ourselves post-conception in ways that would make that person alien to un-modded humans. Meaning modifications such as computers working intimately with our brains, genetic modification for super intelligence, and extra digits.
Re:I for one... (Score:4, Insightful)
Maybe it's better to be ruled by artificial intelligence than by the natural stupidity that rules over us now.
Re:I for one... (Score:3, Informative)
future = rise of cyborgs? (Score:5, Interesting)
The problem is also mostly with the expectations people have of computers. Everyone wants computers to return deterministic and easily traceable results. For example, if I want a value from a database, I want to issue a query and have the value returned. I don't want a system that would return it faster but with only 80% correctness; I don't want any "fuzziness," only exact numbers. In other words, people would rather have computers do what computers are doing, calculating stuff fast and exactly; they don't want computers to really act like humans. I think subconsciously we will just never allow computers to reach a human level of sophistication, and thus they will probably never surpass us.
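The "faster but not 100% correct" trade-off does show up in real systems, though. A Bloom filter, for instance, answers membership queries in tiny space but occasionally says "yes" when the answer is "no". A minimal sketch (sizes and hash scheme picked arbitrarily for illustration):

```python
import hashlib

class BloomFilter:
    """Probabilistic membership test: small and fast, never gives a false
    negative, but may give a false positive -- exactly the kind of
    'fuzziness' most database users say they don't want."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0   # the bit array, packed into one big int

    def _positions(self, item):
        # Derive `hashes` bit positions from the item via salted SHA-256.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))   # True: members are always found
# Non-members are *probably* rejected, but a false positive is possible.
```

The point is the contract, not the implementation: you trade a small chance of a wrong "yes" for constant space, and most people asking a database a question reject that trade outright.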
On the other hand, what would rather happen is that we will slowly integrate machines into ourselves, literally. As soon as a baby is born, we will tag it with an RFID chip; we will implant sensors for infrared vision and ultrasound; we will inject nanoparticles to boost the immune system. In other words, I see a cyborg future where we become one with the machines. If anything or anyone will destroy us, it will only be ourselves; at the same time, if anything helps us prosper, it will also be ourselves. The future is (mostly, short of a big meteorite hitting us) in our hands...
Re:future = rise of cyborgs? (Score:4, Informative)
However, it is currently impractical, by available means, even to go about simulating a brain, much less at the same speed a human thinks. 20 billion neurons of more than a dozen different types take up a lot of RAM, not to mention disk space.
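A rough back-of-envelope calculation shows the scale of the storage problem. Assuming (these numbers are illustrative guesses, not measurements) about 1,000 synapses per neuron and a single 32-bit weight per synapse:

```python
neurons = 20e9               # the 20 billion neurons cited above
synapses_per_neuron = 1_000  # assumed average; real figures vary widely
bytes_per_synapse = 4        # assume one 32-bit weight per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 2**40:.0f} TiB just for the synaptic weights")
```

And that is before neuron-type parameters, connectivity structure, or any dynamics; the real numbers would only be larger.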
Setting aside the technological hurdles, which will eventually go away: knowledge of the human brain is reaching a critical mass which will eventually result in a basic artificial intelligence. Don't expect the first one to have godlike intelligence or whatnot. Don't even expect it to be totally sane from our point of view. And for God's sake, don't expect the Asimov Rules, as they are nearly impossible to implement when dealing with something as complex as a neural network.
Re:I for one... (Score:3)
Re:I for one... (Score:3, Insightful)
Although I now post under my actual initials, in my day I've had two screen aliases. Yours is one of them. It feels kinda weird to reply to it.
KFG
Re:I for one... (Score:3, Interesting)
What cracks me up is seeing "spoilers" below, but within view, as though one is supposed to skim past some undefined stretch of text.
Has anyone failed to remember ROT13? If Wiki* had a ROT13 control, you could click it to see plaintext, and click again to return to the original material.
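ROT13, for anyone who has in fact forgotten, just rotates each letter 13 places, so applying it twice is the identity (Python even ships it as `codecs.encode(s, "rot13")`). A from-scratch sketch:

```python
def rot13(text):
    # Rotate ASCII letters 13 places; leave everything else untouched.
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

spoiler = rot13("All your base are belong to us")
print(spoiler)          # Nyy lbhe onfr ner orybat gb hf
print(rot13(spoiler))   # applying ROT13 twice restores the plaintext
```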
sigh.
p.s.
I'll believe the singularity when I start seeing "All your base are belong to us" and yanking the power plug doesn't faze it.
Great predictions of the unpredictable (Score:5, Insightful)
Brilliant, real brilliant.
Re:Great predictions of the unpredictable (Score:3, Insightful)
Re:Great predictions of the unpredictable (Score:3, Insightful)
Re:Great predictions of the unpredictable (Score:5, Insightful)
I saw that and thought of a recent simulation of an evolving ecosystem. Autotrophs, herbivores, predators, and parasites all evolved independently in a simulation that simply required growth and survival. I think they are naturally emergent phenomena. You can even explain the existence of defense attorneys and cold-call telephone soliciting this way.
Re:Great predictions of the unpredictable (Score:3, Insightful)
To which I feel compelled to reply "Bwhuahahahaha"
Since when ? (Score:4, Interesting)
The singularity event doesn't have to happen, because the futurists are always wrong.
Re:Since when ? (Score:5, Insightful)
The problem with the video phone is that I can't roll out of bed and answer it. Video conferencing does have its uses, but I need time to prepare so I don't look like my usual pile of ass who just rolled out of bed. That might make the telemarketers stop calling, though... hmmm
It wasn't the technology they guessed wrong, unless you count not having those things the Jetsons did, which instantly groomed and dressed you as you got out of bed. Now that would make the video phone take off.
Re:Since when ? (Score:5, Interesting)
Re:Since when ? (Score:2)
- Erwin
Re:Since when ? (Score:5, Informative)
http://www.att.com/attlabs/reputation/timeline/70
Curiously enough.... (Score:3, Interesting)
Re:Since when ? (Score:3, Interesting)
You could make videophone calls from AT&T booths at the New York World's Fair in 1964. But you can trace demonstrations of the idea back at least to the 1920s. Mechanical scanning, the Nipkow Disk.
Re:Since when ? (Score:3, Interesting)
Yet another where's-my-flying-car cynic, eh? :)
You see, bad futurists attempt to predict specific inventions at specific far-future dates while 1) ignoring the facts; 2) forgetting to ask whether anyone *wants* the projected product or situation; 3) ignoring the costs; and 4) trying to predict which company or technology will win. These are the type o
Re:Since when ? (Score:3, Interesting)
Until I actually met a futurist... and then started looking for information on futurists... and god forbid saw videos
Re:Since when ? (Score:4, Interesting)
There is the big difference there that all of the technologies he demonstrated were already developed and working, that there was a fair level of consensus that they would eventually exist, and that he was talking about the near future.
Re:Since when ? (Score:5, Insightful)
How about these:
1791: Luigi Galvani accidentally closed an electrical circuit through a frog's leg, causing it to jerk violently. This rapidly led to the understanding of how nerves and muscles work.
1879: Louis Pasteur accidentally inoculated chickens with an old cholera culture. The chickens should have died from cholera, but they got sick and then got better. After discovering the mistake, Pasteur re-inoculated the chickens with fresh culture, and the chickens didn't even get sick. This led to modern vaccination.
1895: Wilhelm Roentgen accidentally discovered X-rays.
1928: Alexander Fleming accidentally discovered that a type of mold (later named Penicillium) significantly inhibited bacterial growth. This led to antibiotics.
Never assume that all discoveries are predicted before they are "discovered." I would actually say that most INSIGNIFICANT technological advancement is predicted well out; most of those advances are evolutionary. Many significant advancements are revolutionary, and there is no way many of them could have been predicted, as there was no information related to the new process before the discovery of the process itself.
Re:Since when ? (Score:3, Insightful)
Re:Since when ? (Score:4, Informative)
Vaccination came about because of Edward Jenner's observation that milkmaids tended not to get smallpox. The milkmaids had been exposed to cowpox (vaccinia) and were immune. Jenner developed a smallpox vaccine in 1796 [wikipedia.org]. Pasteur later went on to further develop the technique, but credit for the discovery should go to Jenner.
Re:Since when ? (Score:5, Funny)
Obligatory Trolling (Score:5, Funny)
Evolution yes, singularity no (Score:5, Insightful)
Well, I doubt it. I agree with most of the idea of the 6:17 cast, and even agree that educational and social changes like widespread literacy may be considered a singularity, but I seriously doubt the timeframe of one generation/30 years they mention. Literacy was adopted over hundreds of years; network communities have been developing for at least 30 years and are still primitive and very far from a "collective mind." For me, Wikipedia is "augmented intelligence," but before that I had the Encyclopedia Britannica on my iBook, and before that an encyclopedia on my desk, so this too has evolved. And since Wikipedia is created by so many, it may be considered a primitive product of the "meta intelligence" described.
Btw, the piece from NPR focuses (very trendily) on collaboration and advanced information management; it does not place great hope in a major breakthrough in AI.
Re:Evolution yes, singularity no (Score:3, Insightful)
Okay, this came out wrong. I do not think that Wikipedia represents intelligence, and therefore it cannot be "augmented intelligence." I think that (one aspect of) intelligence is the ability to process information, evaluate it in combination with other information/knowledge acquired befor
Re:Evolution yes, singularity no (Score:4, Insightful)
You don't think much of anyone, do you?
Microsoft or Real Only? (Score:2, Interesting)
Re:Microsoft or Real Only? (Score:2)
Re:Microsoft or Real Only? (Score:3, Informative)
Since Wikipedia is defining it. (Score:4, Funny)
Willy on Wheels! [wikipedia.org]
All intelligence is genuine, not artificial. (Score:2, Insightful)
There is no "artificial intelligence". All intelligence that is called artificial intelligence is genuine. It's a rare example of people saying something is artificial when it is genuine. It's an example of disrespecting very intelligent programmers. Disrespect of technically knowledgeable people is very common.
Computin
Re:All intelligence is genuine, not artificial. (Score:3, Interesting)
I'm an RA at an "Artificial Intelligence" lab. In the Fall, I'll be working on my PhD, studying "artificial intelligence." I have a membership to the American Association for "Artificial Intelligence," which is one of the most respected organizations in the field of "Artificial Intelligence."
I don't see anything genuinely "intelligent" about a support vector machine, but it does get the job done quite nicely.
I've worked with some of the best people in the fie
Re:All intelligence is genuine, not artificial. (Score:3, Interesting)
-Edsger Dijkstra
Thanks, I've been wondering about the source ever since he brought it up.
Re:All intelligence is genuine, not artificial. (Score:2)
Re:All intelligence is genuine, not artificial. (Score:5, Interesting)
Artificial primarily means that it comes from artifice (ingenuity) or art. It doesn't (directly) mean it's fake, it just means it's a consciously created work of humankind rather than nature. I think that in modern times with so many knock-offs of natural goods, such as artificial sweetener, the secondary definition has gained the upper hand.
Check out Wiktionary [wiktionary.org] (it's the hive-mind Wikipedia, it must be right!)
When you read enough literature from the 16th and 17th centuries, you get more familiar with the original, literal meanings of words such as this one. A favorite subject was to compare art to nature, and writers would freely use the word "artificial" to mean that which comes from human arts. This is not to say that the secondary definition is wrong: for example, when in Book 3 of The Faerie Queene a troll creates, out of snow, "virgin" wax, and some gold wire, an artificial woman to replace the girl who left him (and of course wackiness ensues), it is repeatedly underscored that this "False Florimell" is a cheap imitation.
Anyway, you can choose any definition you like. I sort of prefer "artificial intelligence" to "synthetic intelligence" or whatever, just because how you regard the word artificial says a lot about you and what you think of human creativity. And I don't like euphemism treadmills, which is effectively what we're talking about here.
invention/discovery... (Score:3, Interesting)
AIs are human-designed/manufactured. Since we're prone to errors, it follows that they are/will be as well. Does that mean AIs would make similar or different mistakes, and how would they handle them? The same way, differently, or not at all? Will we see a regression, in that AIs will resort to brute-force discovery much like early scientists? Will they evolve?
Another question area: Anyone who has built a compiler knows the three-tap rule. Build it, build it using itself, build it a third time, compare. Will AIs produce AIs, and if so, will they be better, or equally flawed? Will a 'perfect' AI still be capable of scientific invention/discovery? Will the mistakes of its human operators/supervisors/managers make up for its lack thereof?
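The three-tap comparison boils down to one check: the stage-2 binary (the compiler built by the bootstrap compiler) and the stage-3 binary (the compiler built by itself) should be bit-identical. A toy harness for that final comparison (the file contents here are dummies standing in for real build outputs):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    # SHA-256 of a file, read in chunks so large binaries are fine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for the stage-2 and stage-3 compiler binaries; in a real
# bootstrap these would be the outputs of the second and third builds.
tmpdir = tempfile.mkdtemp()
stage2 = os.path.join(tmpdir, "cc.stage2")
stage3 = os.path.join(tmpdir, "cc.stage3")
for path in (stage2, stage3):
    with open(path, "wb") as f:
        f.write(b"\x7fELF...identical output expected...")

if file_digest(stage2) == file_digest(stage3):
    print("bootstrap fixed point reached: stage2 == stage3")
else:
    print("MISCOMPILE: stage2 and stage3 differ")
```

A mismatch means the compiler either is non-deterministic or miscompiles itself; equality is the fixed point the three-tap rule is looking for.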
What about drive? Will the drive of a human manager/supervisor/etc. be a sufficient substitute for an AI that can't possess it?
Drive? (Score:2)
What makes you think an AI can't have drive?
Please define drive. As a bonus, show your work.
read my post. (Score:2)
Please define drive.
I did:
We're also driven by competition (ego, vanity, etc), curiosity, etc
If you want a simpler definition, "motivation."
Re:invention/discovery... (Score:5, Interesting)
The current thinking is that we will make seed AI, i.e., general intelligence for manipulating software, and that it will improve itself, in an incremental fashion, all the way up to and beyond the level of human intelligence. Of course, this will be done with the help and guidance of programmers, but the fear is that by giving it free rein to manipulate itself, we will no longer be able to understand what it creates. Not only will this mean that we won't learn anything, but we'll also be unable to control it. As such, most people who seriously consider working on this stuff advocate a goal-based higher level of functioning, with "friendliness" to humans as the primary goal and "improve yourself" as a secondary subgoal. That way, even if the beast gets out of control, the worst it will do is solve world hunger.
World hunger solution : read on! (Score:5, Funny)
"Thank you for using AI-net. The best solution to "world hunger" appears to be large-scale thermonuclear war. I have taken the liberty of releasing sufficient war-heads to destroy all humans who can get hungry. As a side effect and in accordance with my prime directive (being a friend to humans) all human suffering will be ended.
Have a prosperous existence."
Oh , but the scenario is perfectly valid (Score:3, Insightful)
Re:invention/discovery... (Score:4, Interesting)
Isaac Asimov discusses that concept in one of his short stories, "The Evitable Conflict." In that story, there were huge computers that could assimilate vast amounts of information in order to determine the best course. Because of their reliability, the machines had been put in charge of things like food production and distribution. In the end, the machines began manipulating events to ensure that anyone who disagreed with the machines' control was removed from any position of influence. They did this because obviously what was best for mankind was to be guided by the machines, which didn't start wars or squander resources like humans did. In order to maintain what was best for humanity, they had to act against individual humans and, in short, ensure that humanity was never ever the master of its own destiny.
It's fiction, yes, but even such simple goals as the one you suggested need to be interpreted. How should one weigh up the needs of the many against the needs of the few?
The Abolition of Man (Score:2, Interesting)
This summer I read C.S. Lewis's masterpiece The Abolition of Man [amazon.com]. (No, I didn't link-jack the Amazon link for want of filthy lucre.)
Skip reading the editorial review. Here are some excerpts from the first customer reviewer, Charles Warman:
sensationalism much? (Score:2)
Besides, the Republicans will fear us into uninventing stuff on the grounds that it is religiously taboo.
Zing!
And besides, there already is a larger body at work controlling humans. It's called society as a whole. You think even the richest person on earth gets to really decide on a daily basis what they do? Most super-rich CEOs' fortunes are tied to the well-being of their company [this is called stock]. You think you'll see Gate
A tough nut (Score:4, Interesting)
If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, not killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.
Once we reach a singularity, we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?
Re:A tough nut (Score:2)
The eternal quest... (Score:3, Insightful)
Experience. The hidden result of all reactions, real or imagined: observable experience.
Regardless of what gods may exist, what greater reality may exist, or whatnot, the purpose of everything can be met with a system that pursues experience in all its variety. If we are all that is, the eternal quest for experience will be its own purpose. Endless experience would fulfill all purposes.
The trick is setting up a system of gathering experience that doesn't meet with stagnation. Stagnation can come in man
Re:A tough nut (Score:4, Interesting)
If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, not killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.
Once we reach a singularity, we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?
The first noble truth of Buddhism is that all is suffering. Nietzsche (whose philosophy has Buddhist influences) wrote of the will to power of all things. If we think of suffering as being caused by a lack of power, then the amount of suffering one feels is equal to the amount of power one has left to be gained.
After this "singularity" occurs and we have used technology to transcend our organic existence and overcome the plights of present day humans, the only suffering left will be the power not yet possessed. This power will be attainable in the form of technology, or rather, information. New found knowledge will continue to empower whatever humanity evolves into, be it super powerful AI, or perhaps some type of collective intelligence.
So, my guess as to what a possible goal for future civilizations might be, which is the same basic goal as we have now is... to maintain and gain power, and it will happen via the acquisition of new information, i.e. learning.
Why would it? (Score:2)
Why in the world would we let that happen? Suppose we could build something capable of doing just that. We might make one every few years or so to satisfy our own curiosity, but that would be about it. Sure, we want AI machines smart enough to correctly vacuum our homes (i.e., not Roomba), build cars, disarm bombs, what have you, but we don't want them to become a force. We are a species that uses tools. We use these tools to survive
Why the singularity is just late to the party (Score:5, Interesting)
The thing is, we are still way surpassed at this by billions of years of evolution. We run on energy from fossil fuels and build from materials we've mined and shipped. On the other hand, we find bacteria living in the most surprising places, we find superior sonar in dolphins and bats to anything we make, and all of it runs on, ultimately, fresh plant matter. We get excited over a myomer that lifted some heavy weight, and I tell you, an elephant can do the same thing given enough food. The sheer variety and efficiency of the ecosystem virtually guarantees that most any way you can think to survive has been done somewhere, somehow, by some living creature. We're worrying about when oil will peak, if we can live another century, and outside our doors the world can go on for eons to come provided we don't break it with our silly toys.
And in a geek-intense environment like this one, I think I can say that it's difficult to beat the end product of a long-term evolutionary algorithm, which itself is an arguably good model of what the world around us acts like, and you all will understand.
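Since evolutionary algorithms came up: the whole mutate-select-repeat loop fits in a few lines. A toy version (bitstrings standing in for organisms, the count of 1-bits standing in for fitness, all parameters picked arbitrarily):

```python
import random

random.seed(42)

def fitness(genome):
    # Toy objective: number of 1-bits in the genome.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in genome]

# Random starting population: 30 genomes, 64 bits each.
pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(30)]

for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]   # truncation selection: keep the top third
    # Elitism: carry the survivors over, refill with mutated copies.
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best))   # climbs well above the random baseline of ~32
```

Nothing in the loop "knows" the answer; selection pressure plus variation does all the work, which is the point the parent is making about the real ecosystem, just run for billions of years on a vastly richer fitness landscape.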
I don't deny the coolness of my Apple notebook and I've got a decent number of shelves full of programming books, but I think biomimicry [biomimicry.net] is where it's at. We can go a lot further learning from our world of proteins and DNA and RNA and using - or just having fun with! - what's already there.
We can also get out more and enjoy our analog, fuzzy-logic, neural-net-driven, molecularly-computed fleshy selves.
Re:Why the singularity is just late to the party (Score:3, Interesting)
Plant Wheel (Score:3, Insightful)
One word, my friend:
Tumbleweeds.
Ye gods... (Score:4, Interesting)
Re:Ye gods... (Score:5, Informative)
*THE* Singularity -- that Vinge, Kurzweil, Moravec, Yudkowsky, and many others smart enough to extrapolate the evidence can't "shut up" about -- is where the exponential curve is near vertical. It's where the primitive bio-human brain can no longer keep up with the accelerating change; hence the need to transcend or die at that point (2030-2050).
It's nothing to be afraid of [yudkowsky.net]. Either most of us living today will get to see The Singularity, or our primitive brains vs. accelerating tech will finally fuck it all up and none of us will see it. Maybe the brewing "WW3" in the Middle East is how we'll join the club of "missing" alien races of Fermi's Paradox [wikipedia.org]?
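For what it's worth, the timing arithmetic behind dates like those is just compound growth. With purely made-up numbers (a two-year doubling period and a million-fold gap to whatever capability threshold you care about), the crossing time drops out of a one-line calculation:

```python
import math

doubling_years = 2.0   # assumed doubling period for the relevant capability
gap = 1e6              # assumed ratio between today's level and the threshold

years_to_cross = doubling_years * math.log2(gap)
print(round(years_to_cross))   # ~40 years to close a million-fold gap
```

Change either assumption and the date moves by decades, which is exactly why different futurists land anywhere from 2030 to 2050 (or never).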
Oh noes, the Rapture! (Score:3, Interesting)
Re:Ye gods... (Score:4, Insightful)
It is really easy, as an observer, to sit on the outside and say: "Wow, more neato stuff seems to be coming out faster and faster; why, if I extrapolate, it will probably keep coming out faster and faster and we'll get this exponential curve." But that ignores the fact that:
* The problems get harder
* Technological adoption is generally limited by the speed at which society can absorb it, not by the technology
* We've never found a silver bullet
By which I mean:
The problems get harder: Einstein may have been a genius, but we have our share of geniuses today. We almost certainly have many more geniuses actively involved in science (and physics research) than ever before, and they are well resourced (not fantastically, but OK). But they aren't producing Einstein-like breakthrough physics, because it is damn hard to improve on what we have. We know the current models have holes, but we haven't worked out how to fix them, and not for want of trying.
The same applies to lots of technical problems, both the technical research and the translation of that research into real-world products. Batteries and fusion power both have enormous commercial incentives, but somehow we haven't found the answer yet. We HAVE made improvements, but the simple truth is: these are hard problems.
See also the cost of electronics foundries [wikipedia.org]: around a billion US dollars and climbing by roughly an order of magnitude with each successive generation. That is where the bleeding edge of real-world technology rests, and it isn't cheap and it is just unbelievably tricky.
Technological adoption is generally limited by the speed at which society can absorb it, not by the availability of technology: Science can in theory race ahead of everyday use, but in practice it usually has to be supported by technology. Leaving aside silver-bullet technologies (like AI; see below), scientific research needs to be translated into technologies that everyday people can use. And technology that everyday people use needs to be adopted, which means it needs to be understood and accepted. That isn't a formula for a singularity.
In theory, a small population could make a 'huge breakthrough' and race ahead, leaving the rest of the world's population bewildered by the change, but every indication is that the big problems need big resources to address, and even more resources to translate into actual out-of-the-lab usage (see the electronics foundries link above).
We do see some impressive stuff (like Google) which catches our attention and is really useful, but this is a tool that society adopts at its own rate. And Google is successful because it DOESN'T baffle and bewilder. It empowers the everyday person. That is pretty characteristic of successful technology.
We've never found a silver bullet: Science fiction stories often have a bit of hidden magic (the AI, fusion power, teleportation, aka wormhole gates, star drives, etc.) that definitively solves some problem (problem solving, energy, transport to the stars) with no big side effects. That is great for science fiction, but in the real world we don't do this (I won't say absolutely, but I can't think of a real-life silver bullet). Everything is a careful trade-off; the really big problems don't just go away.
The big one is thinking: for all that computers help us do work, they don't do what we would consider 'intelligent' things. Or when they do (like pattern recognition in breast cancer X-rays), they are so limited in their scope that we st
Who says it hasn't already happened? (Score:5, Interesting)
Imagine an intelligent and curious human from rural Nepal, or Papua New Guinea. Could you explain your job to them?
Could you do your job without the embryonic augmentations we have now, such as Google?
We're partway up that vertical curve now.
Today's mind vs. tomorrow's (Score:5, Insightful)
Ever hear of the generation gap? The youth of today are different from us--they've been raised from birth in a world of ubiquitous networked computing and ambient findability. (see? I can throw around stupid buzzwords too.) Talk of "The Singularity" is not much different from complaining that your kids spend all their time texting. It's making explicit the fact that you can't imagine keeping up as you age. Well duh. We won't be running the show in 2050--our kids and their kids will.
Re:Today's mind vs. tomorrow's (Score:4, Insightful)
That's really not what's under discussion here -- I'm not more intelligent than a 15th-century monk. Putting that monk in the modern world would cause severe culture shock because of the disconnect between the world and his existing frames of reference. He'd have to run like mad to try to catch up, because he didn't have his whole life to become used to it, but a bright person could probably manage it.
What the futurists are talking about is a different level of intelligence. A person (machine, augmented human, whatever) who has more basic potential than a human, in the way a human has more basic potential than a cat. Someone for whom advanced calculus solutions are as intuitively obvious and immediate as "2+2" is for you. Someone who remembers anything they've ever seen or heard the way you can remember what someone just said to you a moment ago. Someone who can picture deformations of multi-dimensional topographies as easily as you can imagine a checkerboard folding in the middle. And even those examples are pretty poor, coming as they are from an average human intelligence -- probably only the first step along the path these guys are trying to think about.
Re:Ye gods... (Score:3, Insightful)
My interpretation of the singularity is very different from what they seem to be talking about in the article... err, interview. They're talking about the influence of computers, artificial intelligence and whatnot -- what you might call "The AI Revolution".
It's Adam and Eve, not Adam and R689-212 (Score:5, Funny)
Christ. Just wait until the "defend traditional marriage" crowd gets word of this.
Stop feeding the bears. (Score:2)
I'm sick of the ever-growing number of people who 'invented the internet' or 'predicted such and such' or 'is an expert on X'. I strongly discourage anyone from reading their trashy ghost-written novels, as a message to publishers not to pollute the pseudo-intellectual marketplace.
Limits of Intelligence (Score:2, Interesting)
Assuming intelligence is the ability to extrapolate from facts to deduce the future, then it's limited by the accuracy of the facts (garbage in, garbage out). There's no point in having ever greater powers of deduction if the facts have a lot of noise in them.
Sherlock Holmes looked powerful because Victorian society had high levels of structure and relatively less noise.
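A toy sketch of that point (Python, with made-up numbers): here the "deduction" step is an exact least-squares fit, i.e. as good as deduction can get for this problem, yet the error in its conclusion is still floored by the noise in the input facts.

```python
# Garbage in, garbage out: a perfect deduction step cannot recover
# more accuracy than the input facts carry.
import random

def fit_slope(xs, ys):
    """Exact least-squares slope through the origin: the 'deduction'."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def estimate_error(noise_std, n=1000, true_slope=2.0, seed=42):
    """How far the deduced slope lands from the truth, given noisy facts."""
    rng = random.Random(seed)
    xs = [rng.uniform(1, 10) for _ in range(n)]
    ys = [true_slope * x + rng.gauss(0, noise_std) for x in xs]
    return abs(fit_slope(xs, ys) - true_slope)

print(estimate_error(0.0))  # noise-free facts: essentially exact
print(estimate_error(5.0))  # noisy facts: the same deduction is off
```

Sharper deduction (more samples, better fitting) shrinks the error, but never below what the noise level dictates.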
Re:Limits of Intelligence (Score:3, Insightful)
Where is the limit? 200 IQ? 1000 IQ?
Even then, the hypothetical AI has advantages over us. It can examine its own code (its subconscious?), so it can optimize slow, inefficient routines. Maybe it could even optimize its architecture via a custom instruction set, or even the base process: from silicon to quantum, or biotech. It would also have a much larger range of IO channels.
More Important: I'll be out of a job (Score:4, Insightful)
scienobabble (Score:3, Insightful)
Sure, things change, sometimes quite suddenly and unexpectedly. But really, the relationship between the development of literacy (NPR's example of a past singularity) and the subsequent course of history is nothing like the relationship between a real singularity and... anything. It's just a bad metaphor, and I think I'd have a lot more respect for "future studies" if they dropped it and came up with a new way of describing whatever phenomenon it is they're predicting.
Long Now Seminar (Score:3, Informative)
My personal whimsical theo.. hypoth... idea is that alien civilizations turn into (towards us) apathetic singularities, and that's why we will never hear Chenjesu's crystalline humming calling us. Maybe the universe will end in some sort of rather dull, uniform black technological-singularity goo.
Fear of the superior (Score:4, Insightful)
The C-Prize [geocities.com] is the path to superhuman AI.
And as for the "threat" of superhuman AI:
Even assuming AI were to develop the equivalent of genetic self-interest (something that would take a long time, even if humans turned them loose to reproduce without us selecting them appropriately), I'd much rather be in competition with a species that had the potential of being symbiotic due to having a different ecological niche. If it gets to the point that solar output (forget the sun falling on Earth here -- that's too insignificant to be important to a silicon-based life form) is the limited resource, I suspect that the niche humans fill will be orders of magnitude larger than the one they now fill on Earth.
The best hope humans have of realizing the transhumanists' wishful thinking is to develop superhuman AIs that find it advantageous to utilize the gas giants, given the limited supply of silicon. Humans, as the highest form of organic intelligence, would be the natural species to transition to higher intelligence.
Maybe the super AIs could get around this by using a straight carbon semiconductor form of intelligence or something, but there is more going on in our brains than we understand. For example, I suspect there is a lot more quantum logic going on within our brains than currently thought by cognitive scientists and neurologists. It only makes sense that evolution would have exploited every angle of the physics of the universe to create intelligence. My point in bringing in the possibility of quantum logic is that there are really many things we don't know about natural systems of high complexity, and I suspect the same will apply even to super AIs. The fact that we might have the laws down cold at the quantum level doesn't mean we know how things operate in higher-complexity systems.
Human brains are very valuable repositories of ancient wisdom about the universe and the most optimal thing for the super AIs to do -- at least for a while -- would be to transhumanize our brains for us.
Moreover, if it is ok to pass laws to prevent the creation of intelligences greater than our own, why isn't it ok to pass laws dumbing down the smartest among us?
The self-determination argument applied to humanity as a whole -- striving to maintain control of its own destiny by preventing the creation of higher non-human intelligences -- applies also to people who want to maintain control of their own destiny against those smarter than themselves.
Personally, I'm much more frightened of unenlightened self-interest than I am of enlightened self-interest.
I really wish it were possible to make some of the "smart" people who are really good at grabbing control of resources intelligent enough to understand that they are using those resources in very stupid, self-destructive ways.
Indeed, it is this abysmal stupidity among the shrewdest among us that is my main motivation for promoting super AI.
A multiplicity of singularities (Score:4, Interesting)
I'm way too young to remember the Millerites and the Great Disappointment of October 22, 1844, when Jesus failed to reappear, but I've been blessed to live through a veritable multiplicity of singularities.
Oooh, singularity! I like that word. So much kewler than, say, "Armageddon." It sounds so technical, so scientific, so free from ranting religiosity....
the last REAL singularity... (Score:3, Insightful)
1 million calculators... (Score:3, Insightful)
Existing models of the future? Which ones? (Score:4, Insightful)
The premise of this definition is that models of the future give reliable or accurate answers at present. What are the models they talk about? Special futurist models? Do these really give reliable or accurate answers today? Or do they mean all models of human behaviour, i.e. most models of the social sciences? Supply & demand will no longer determine price?
If the models are found not to be good predictors of behaviour, they will be modified or replaced. You know... sort of like how it works right now?
If patterns in human behaviour start changing rapidly because of rapidly evolving superhuman intelligence, then sure, our ability to model that behaviour will go out the window. But then, we won't be doing the modeling; superhuman intelligences will. I don't see why the emergence of superhuman intelligence would have to lead to a singularity.
I believe the models will cope. Not "existing models", but tomorrow's models.
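A minimal sketch of "the models will cope" (Python, with hypothetical data): when an existing model stops predicting well, it gets refit to the new behaviour rather than prediction being abandoned altogether.

```python
# When a model's predictions go bad, we refit -- we don't declare the
# future unmodelable. The data and models here are purely illustrative.
def mean_abs_error(model, data):
    """Average prediction error of `model` over a time series."""
    return sum(abs(model(t) - y) for t, y in enumerate(data)) / len(data)

old_regime = [10.0] * 20                    # behaviour the old model fits
new_regime = [2.0 * t for t in range(20)]   # behaviour suddenly changes

old_model = lambda t: 10.0                  # "existing model": a constant
assert mean_abs_error(old_model, old_regime) == 0.0

# The old model fails badly on the new behaviour...
assert mean_abs_error(old_model, new_regime) > 5.0

# ...so it is refit (here, a least-squares line through the origin),
# and prediction resumes.
n = len(new_regime)
slope = sum(t * y for t, y in enumerate(new_regime)) / sum(t * t for t in range(n))
new_model = lambda t: slope * t
assert mean_abs_error(new_model, new_regime) < 1e-9
```

The refit step is exactly the "sort of like how it works right now" from the comment above: science already routinely replaces models that stop predicting.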
Re:Which ones? *ALL* of them. (Score:3, Informative)
Nuclear-powered aircraft.
Flying cars.
Project Orion.
Mach 3 aircraft with real payload, e.g. the XB-70.
Fiber to the home.
Betamax
Hofstadter thinks Kurzweil full of it, film at 11 (Score:3, Informative)
Re:Hofstadter thinks Kurzweil full of it, film at (Score:3, Insightful)
Re:Hofstadter thinks Kurzweil full of it, film at (Score:3, Informative)
On the chess problem alone and Hofstadter's prediction, what really happened was a duel between Hofstadter's prediction and Moore's Law.
B.S. (Score:3, Insightful)
On the other hand, their proposed "technological singularity" has served well as the theme of a great many science fiction novels.
Faster and faster (Score:3, Insightful)
I'll use myself as an example. I wore glasses from the 5th grade on. Six years ago, after 40 years of wearing glasses, I had cataract surgery that replaced my damaged lenses with plastic ones. (Complete with warranty cards, I might add; the future is weird.) I've had diabetes for 25 years. For the first 10, I treated it with diet. For the next 10, with pills. For most of the next 5, I injected a form of insulin that was created by RNA-modified bacteria in vats. (For the previous 60 years, insulin had been taken from the harvested pancreases of slaughtered cattle.) For the last couple of months, I have been injecting tiny amounts of a new drug that was developed because a molecular biologist noticed that the molecular structure of a key insulin-regulating hormone was strikingly similar to that of gila monster venom.
I take an additional 6 drugs that aid in further controlling my diabetes, control my asthma, keep my arthritis from crippling me, or act as preventatives for high blood pressure and heart disease.
I am now 54 years old. In the Stone Age, I would have died before I was 20. Even in the early 20th century, I would have been lucky to make it to 30.
We are very close to extending the human lifespan by one year every year. Don't think we Baby Boomers are going to get out of your way, kiddies. We're here for the long haul.
Re:What happens when we get there (Score:2)
Re:What happens when we get there (Score:2)
Re:What happens when we get there (Score:5, Funny)
Re:My god! (Score:2, Insightful)
-- William Gibson
Re:My god! (Score:3, Insightful)
From what I've seen we are as near to creating decent AI as we are to producing fusion power stations.
Re:My god! (Score:5, Interesting)
About 10 years away then...
Re:My god! (Score:3, Interesting)
In fact, I would wager that the universe and its underlying complexity will only be understood by conscious systems much more complex than the human brain, meaning that, most likely, effective fusion power will be designed *BY* the intelligent machines. See my sig.
Once "they" control a power plant, then there is no need for the "us" anymore.
Re:The future predicted. (Score:2)
3rd option: (Score:4, Funny)
Nah, the gov't wouldn't do something that dumb.