IEEE Special Report On the Singularity
jbcarr83 writes "The IEEE Spectrum is running a special issue on the technological singularity as envisioned by Vernor Vinge and others. Articles on both sides of the will it/won't it divide appear, though most take the it will approach. I found Richard A.L. Jones' contribution, 'Rupturing The Nanotech Rapture,' to be of particular interest. He puts forward some very sound objections to nanomachines of the Drexler variety."
I for one welcome our (Score:4, Funny)
Re:I for one welcome our (Score:5, Funny)
Re:I for one welcome our (Score:5, Funny)
Re: (Score:3, Funny)
Re: (Score:3, Informative)
As for long-distance travel, many animals are capable of traveling much farther without refueling/maintenance than any machine (outside of a spacecraft) can.
We do 'great speeds' better than nature, granted.
Any process based on plant energy conversion cannot beat the efficiency of plants themselves. However, plants convert less than 1% of sunlight into usable energy. With PV, we have them beat by an order of magnitude in efficiency. As far as to
Re:I for one welcome our (Score:5, Insightful)
Is it just me, or does all this poorly-reasoned "singularity" crap have a religious feel to it?
Re:I for one welcome our (Score:5, Funny)
If you assume that:
* Technology will continue to improve exponentially (it is right now - see Moore's law)
* The brain is a fully deterministic computer.
Then it is a fair assumption that we will eventually design a superbrain. The superbrain will design a super-duper brain, and the chain reaction (singularity) will be upon us. I can't wait!
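The chain reaction that comment describes is easy to sketch as a toy model; the starting point and improvement factor below are made up purely for illustration:

```python
def generations(start=1.0, factor=2.0, n=10):
    """Toy model of recursive self-improvement: each generation
    designs a successor `factor` times smarter than itself."""
    history = [start]
    power = start
    for _ in range(n):
        power *= factor  # the smarter brain designs an even smarter one
        history.append(power)
    return history

print(generations(n=5))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Of course, this only blows up because the model *assumes* each generation can apply the same multiplier — which is exactly the assumption challenged further down the thread.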
Re:I for one welcome our (Score:4, Interesting)
Understand, I'm not claiming that progress in creating smaller and cheaper circuits will suddenly come to a halt. There's too much economic demand for that to happen. But development will have to switch to new technologies, and progress almost certainly will be slower.
Every kind of growth has limits. If it didn't bacteria would still be the most conspicuous life form on the planet. Science and technology are no different. They're dependent on natural resources, an educated pool of skilled workers, and a lot of other factors.
Re:I for one welcome our (Score:5, Insightful)
When relays reached max density, tubes appeared. Tubes were shrunk, and at about the time when they couldn't get much smaller, transistors appeared. Those shrank for a while, then ICs appeared. ICs have been shrinking for a while, with various technologies, each able to go smaller than before, driving that change. Now ICs are within reach of maximum density in 2D, but the 3rd dimension beckons, especially to low-power (hence low-heat) technologies. Two layers give a doubling in the same 2D space; four do it again... that's probably good for quite a few doublings before the 3rd dimension becomes unwieldy.
In the meantime, can we anticipate what might come next? Biologicals are one possibility; look at the brain: 3D, and it fits in a funny shape. Brains come in all sizes, and who is to say that the one we have is the best design, or the largest, or the fastest? What if materials that work 2x as fast as our neurons are found?
Look at the recent development of memristors; how many people saw that coming? Not many! And they're not even in hardware yet. They have the potential to spike memory density up, power down, speed up, and more... because they aren't transistors at all. And they're small. In fact, the smaller they are, the better they seem to work. There's a limit in there somewhere, but still, how cool is that?
Furthermore, Moore's law is just one aspect of technology; we are also experiencing doublings along many other paths (see Ray Kurzweil's observations for details), and some of them aren't about materials or hardware — they're about knowledge leveraging next steps. For instance, in the late 1970s, we had microprocessors that were very capable, but we didn't have many kinds of software. If we had had it at the time, we could have done more, earlier... nothing but knowledge. Instead, many of these software technologies didn't show up for years. Yet we could take one of those microprocessors (a 6809 or a Z80, for instance) and program all *manner* of cool things on them today, were it called for. And give them huge memory spaces, too. To put it another way: with what I know after 40 years of programming, if I could go back in time to 1979, what I now know how to do with microprocessors would make me a very rich man. Technology has come a long way regardless of Moore's law. Technology multiplies itself.
Honestly, there is nothing that falls so flat on my ears as doomlike predictions of technology reaching an unbreachable wall. Not going to happen. What's going to happen is technology will continue to double. The consequences of that are shrouded in mystery, but the one thing that is clear is that there will be extremely significant consequences.
Here's an observation for you: when you have projects that are dependent upon technologies experiencing doublings in a particular time period, those projects will typically get half of the total work done in the last time period.
For example, take four doublings over five periods: 1, 2, 4, 8, 16... a total factor of 31, of which 16 occurred in the last period. It is because of this that projects like the Human Genome Project look stalled at first; half of the work required to get them done occurs in the last doubling period (about a year in that case). I suspect that's exactly what we're looking at with fusion as well; we're just not far enough up the curve yet.
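The arithmetic in that example checks out with a one-liner (the number of periods here is illustrative, not genome-project data):

```python
# Work delivered per doubling period: 1, 2, 4, 8, 16, ...
work_per_period = [2 ** i for i in range(5)]   # [1, 2, 4, 8, 16]
total = sum(work_per_period)                   # 31
last = work_per_period[-1]                     # 16
print(total, last / total)                     # the last period is just over half
```

The pattern holds for any number of periods: the last term of 1 + 2 + ... + 2^n is always one more than the sum of everything before it.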
Re:I for one welcome our (Score:4, Informative)
I am reminded of an article published in Analog around 1970 showing the exponential curve of the speed of human travel. It plotted very nicely: horse, locomotive, aeroplane, rocket. At the time human speed had recently hit a new high: the Apollo moon missions with humans traveling at 11.09 km/sec (Apollo 10 in 1969 was the fastest). The author projected we would be traveling close to the speed of light and be prepared to colonize the galaxy before 2000.
Almost 40 years later the fastest any human has ever travelled is (drum roll) 11.09 km/sec on Apollo 10. It looks like, with luck, humans may again travel about as fast in another 20 years.
Re:I for one welcome our (Score:4, Insightful)
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
However, this will not and cannot continue ad nauseam, because eventually the laws of thermodynamics catch up with you.
* The brain is a fully deterministic computer.
This is an unproven article of faith, and contrary to a fair amount of evidence suggesting that the brain uses at least some semi-chaotic processes. Most natural systems do.
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
"from the you-will-have-already-read-this dept." (Score:2)
The singularity already happened (Score:3, Insightful)
Re:The singularity already happened (Score:5, Informative)
Re: (Score:3, Interesting)
I suspect the root doesn't have locality enforcement (which can be observed by measuring the upper limit on the speed of light) or entropy like we do. Imagine how powerful a computer could be if its elements could communicate instantaneously over light years o
Re: (Score:3, Funny)
An infinite loop? Please no! (Score:4, Funny)
Re:The singularity already happened (Score:5, Interesting)
As I remember, the conversation threads then devolved into whether or not it would be possible for one of those virtual systems to, within the simulation, build a virtual simulation with the same resolution
Re: (Score:3, Informative)
Re:The singularity already happened (Score:4, Interesting)
At laaaasstt !!! (Score:4, Funny)
The what? (Score:3, Interesting)
Re:The what? (Score:5, Informative)
Mankind has been progressing technologically in steps that seem to get closer and closer together. The theory is that at some point, technological advances will begin to happen all at once, with the emergence of things like sentient AI and usable quantum engineering. Basically, technological transcendence.
It's a pretty silly idea, but everyone has their own vision of nirvana.
Re:The what? (Score:5, Informative)
I was listening to a talk on hypercanes quite some time ago, and the lecturer was using the term singularity to describe the point beyond which the weather system became self-sustaining, a situation for which the predictive equations could not account. Once the predictive systems are expanded the 'singularity' is 'pushed back' to the point where the systems break down again.
Re:The what? (Score:4, Informative)
Re:The what? (Score:4, Interesting)
Re: (Score:2, Insightful)
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
* Assume the Earth can sustain 1000 times as many people as it does now.
* Assume the solar system can sustain 1000 times that many people (Dyson sphere, whatever, who knows).
* Assume the nearby galaxy is fantastically rich in resources, with a system that can support this same high population every 100 cubic light years.
* Assume that we can colonize all systems within the light cone of "now" - that is, we have the resources of every system we can reach at the s
Re:The what? (Score:5, Interesting)
I think you're missing the point of the singularity. Mankind has progressed at a rate limited by his brain, which is determined by genetics. Our brains have a bounded capacity and rate of operation, and our brain can evolve at only a very slow rate. Therefore, our rate of advancement has been bounded.
On the other hand, if we develop beings with an artificial intelligence equal to the smartest scientists, they could potentially develop a second generation that would be improved. That generation could operate more quickly and be smarter, and develop a third generation even more quickly. Essentially, the limit to our rate of advancement would be removed, and that would cause technological advances to happen very quickly. In a short period of time, we could find ourselves surrounded by beings that seem like gods to us. I think it's less a matter of whether it will happen, and more a matter of when and how it will happen.
Skip the AI part (Score:5, Funny)
Re:The what? (Score:4, Insightful)
You are assuming that each generation of intelligences can not only create an intelligence smarter than themselves, but one that is as much smarter than themselves as they are smarter than their predecessors. That is definitely not something which is guaranteed to be true, and I would go so far as to say it is most likely false.
It doesn't matter if your infinite series is always strictly increasing, it's not necessarily going to get to infinity.
For instance, say you manage to create an intelligence twice as intelligent as you. This one puts all its intelligence into creating another intelligence, and manages to create one which is 2.5 times as intelligent as you. That one manages 2.75. And so on, until you top out at three times. No runaway evolution happens, because intelligence turns out to be really hard.
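That kind of bounded series is easy to model. A sketch with made-up numbers matching the example above — each generation closes half the remaining gap to a hard ceiling of 3x, so the sequence runs 2.0, 2.5, 2.75, 2.875, ...:

```python
# Strictly increasing, yet bounded: no runaway, just convergence.
ceiling = 3.0
iq = 2.0          # first AI: twice as smart as its maker
for _ in range(20):
    iq = ceiling - (ceiling - iq) / 2   # next generation halves the gap
print(iq, iq < ceiling)   # approaches 3.0 but stays below it
```

Whether real intelligence improvement looks like this series or like the exponential one is exactly the open question; the math alone guarantees neither.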
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Insightful)
We're still stuck with primitive programming languages, defective (by design?) platforms, unimplementable document formats (OOXML anyone?), carbon-based power plants, software patents....
There CANNOT be any singularity. The chilling effect of Mankind's stupidity is a factor too great to ignore.
Re: (Score:2)
You raise a good point.
Many proponents of the singularity suggest a 'critical mass' type scenario, where the option not to proceed with these developments is effectively removed from man's control in the very near future.
This point of view seems more than a little optimistic right now, but if we succeed in surviving another couple of decades... who knows.
Re: (Score:2)
If we are approaching an asymptote, that asymptote must cross the time axis at some point. That would, in theory, be the date of the singularity. When is it predicted for?
Re:The what? (Score:4, Informative)
Re: (Score:2)
Re:The what? (Score:4, Interesting)
It is suggested that once the intelligence and the medium are one, then force will simply be an expression of 'thought,' and could only be instructive, and not destructive.
Just a thought, and not my own at that.
Re: (Score:2)
Compared to the likelihood that we blow ourselves up in a nuclear war or an ecological catastrophe, I'm not too worried.
Re:The what? (Score:5, Funny)
Re: (Score:2)
In case you missed it, recent research showed that carbon nanotubes, despite earlier reports to the contrary, carry quite a cancer risk. So making self organizing molecules with lethal potential sounds like as safe an idea as splitting the atom. Someone will do it, but where it leads only (enter deity/superhuman ai/advanced life form of c
Re: (Score:2)
Not that it would happen. It's silly on the surface and doesn't hold up. It's the same failing of the human mind, just dressed up with mystical technology instead of a security guard in the sky.
Re: (Score:3)
...So what happens when it determines that the biggest problem with this planet is the ugly bags of mostly water...?
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Most singulitarians expect the first human-equivalent AIs circa 2025 and the singularity circa 2045.
hmmm. (Score:5, Interesting)
It will not be you. It will be a copy. You will still be the one that dies afterwards.
It would be you if a progressive upgrade path could be found from biochemical to mechanical/electrical system.
The copy, however, will believe that he is you, as he will have no memory of any existence after the "transfer" — unlike poor fleshy you in the xerox machine.
Who has legal rights until/after fleshy death?
Even then the copy will be subject to mechanical breakdowns, loss of sensation, and other issues interacting with the real world.
Would they want to interact with normal world? Would they prefer a virtual world?
As a society I feel that we are nowhere near ready for such questions, and in any case I strongly suspect individual sanity would not survive transfer.
For a good fictional account of this (there are many) I still hold up the Gateway books by Frederik Pohl — and the death of Robinette Broadhead and the society of electronic people stored after death.
In the books, to interact with us really slow and boring humans, he creates an electronic avatar and animates it while having a fun time in a virtual fantasy world, checking on it every so often to see if anything interesting has been said and instructing it on what to say next.
Re:hmmm. (Score:4, Insightful)
When you copy a Linux binary, it is a Linux binary, as well as a copy of one. This whole thing touches on the essence of what "being" means. If you were to instantly copy yourself right now, you would have one instance of you thinking "Well, the copy is not me!" and another one thinking "Whee, I am the digital one, I am the one who gets immortality, yay!"
Thinking of people as instantiable things requires a little time to adapt to.
Re:hmmm. (Score:4, Interesting)
Why?
Consciousness will not be split across the two new instances, and if a non-destructive reading has taken place there is no magic that will make your consciousness jump to the computer. You, in meatspace, will still have a continuous existence, and you, in meatspace, are not suddenly immortal.
It would be like giving birth, perhaps: you spawn off a part of yourself. To me it would feel utterly futile. Where's the benefit to me (by which I mean my internal monologue, my continuous experience of life), other than in terms of vanity?
Re: (Score:2)
Consciousness, if defined as an unbroken chain of remembered events, isn't singular. It will indeed split. Both beings — you in your flesh, and you in the silicon — will have exactly the sam
Re: (Score:2)
Re:hmmm. (Score:5, Funny)
Surely that's not even a serious question. If I could choose between hanging out with you meatsicles, or living in a perfect copy of meatspace but with a flawless copy of the flawless Alyson Hannigan oiled up and duct-taped to a water-bed, it's not a matter of if, it's a matter of when and how much.
Re:hmmm. (Score:5, Interesting)
Re: (Score:2)
Re: (Score:3, Insightful)
In fact, every moment of the day I die and am recreated again as a slightly different individual.
I am the phoenix and you can too!
Re: (Score:3, Interesting)
Moreover, even if we are merely the sum of our ac
Re: (Score:2)
It will not be you. It will be a copy. You will still be the one that dies afterwards.
What if you slowly replace each brain cell that dies with a synthetic replica? Eventually, your mind will be synthetic — a machine — but if that machine is not you, at what point do you lose your consciousness?
We all have brain cells die all the time and grow new ones without (at least observably to ourselves) losing our consciousness (more so than other aft
Re: (Score:2)
On the plus side, it would be comforting for your friends and family to still have 'you' around, even if it is just 'You' restarting from a point in the past.
If your child died and you could reactivate him from yesterday, most people, if not all, would do so. It wouldn't matter that it's not actually the same person.
There would be a lot of cool stuff as well, could you imagine backing up and activating a dozen Hawkings? Or having 'Hawking' in a computer so he can d
Light Speed Rule (Score:5, Insightful)
Re: (Score:2)
There are really only a very small number of sources of energy on
Re: (Score:2)
Re:Light Speed Rule (Score:4, Interesting)
Short term: that's what inflation does on its own.
Long term: technology actually saves costs and increases productivity. A single scientist today with a desktop computer and the internet is more productive than 100 were in 1908 with slide rules and a large library.
If nothing else, those scientists in 1908 had to spend time looking up materials in their reference sources, do very complex calculations by hand, and, if they needed to correspond with their peers, deal with the postal service — and transatlantic journeys if their letters needed to reach friends across the pond.
So while the costs appear to increase (probably due to inflation and energy costs), productivity increases just as fast, if not faster.
And speaking of the energy crisis... I believe the current crisis will actually benefit alternative technologies and force companies to really consider more efficient ways of using, and eventually creating, their own energy.
Imagine, if you would, a world where solar cells are so efficient you don't even need to bother with a real power grid. In reality, I don't think the singularity will be created by a bunch of nerds with fancy algorithms but by corporations who create technologies out of competitive necessity.
nothing to worry about (Score:3, Interesting)
Re: (Score:3, Interesting)
Re:nothing to worry about (Score:4, Insightful)
Re: (Score:3, Funny)
Just-like-human intelligence (Score:4, Interesting)
Consider that by most classic sci-fi books we should already have humanoid robots walking among us, have colonized most of the solar system's planets (even visited and returned from other stars/galaxies), sent manned probes to Jupiter, and have flying cars and/or Mr. Fusion (not as exceptions, but as something everyone has), etc. Some "practical" issues have delayed all that a bit: no way was found to travel safely faster than light, antigravity wasn't discovered, duplicators just aren't there, neither is teleportation (with flies in it or not), and even getting a full-grown clone with my memory and consciousness is a little hard to do.
Worse than that, the practical issues aren't just technical ones. Economic, ethical, social, and safety issues are as effective at stopping us from reaching some utopian sci-fi society as the lack of FTL travel.
Into this category falls any kind of machine that talks and in fact thinks like a human, including handling context and perceiving reality like a human. It's something very common in movies and sci-fi stories, but afaik it's still a fair way off.
Re: (Score:2)
I've heard estimates that we will experience 20,000 years of progress (at the current rate) during the next 100 years. It sounds insane, but if you think back to the year 1900... no polio vaccine, no relativity/quantum mechanics, no airplanes, no electronics, no radar, no X-rays,
Re:Just-like-human intelligence (Score:4, Interesting)
Of course, people will complain that it's not real intelligence if it can't be mistaken for a human. To them I say: a plane cannot be mistaken for a bird; does it not really fly?
Real Singularity... (Score:2, Interesting)
Debug time (Score:4, Interesting)
Soon, if not already, biotech will be able to create genetically modified humans. But it will take a century or so to tell if a given mod was an improvement. It's going to be a very slow development cycle.
This is ridiculous (Score:5, Interesting)
From another angle, this is really no different from predictions of rayguns and flying cars decades ago. Have you seen the state of AI and nanotech? It hasn't progressed qualitatively for quite some time. We've got microscopic gears and shitty speech recognition. What makes everyone think that we aren't going to hit some serious physical limits, or that human civilization is stable enough to support this kind of continued advance?
It's just religion. Nerd religion, but still religion.
LS
Re:This is ridiculous (Score:4, Interesting)
Also, a lot of the singularity talk does have a religious cast to it. "The Singularity will work in mysterious ways" and all that.
I'm just sayin'
In self-aware internet... (Score:2, Troll)
Qin Shi Huang (Score:4, Insightful)
Humans can make small machines, but that completely ignores the fact that we have very limited knowledge about the workings of our cells and we really don't even know what sentient life is.
In the grand scheme of things, we are only a few steps down the road from Qin Shi Huang. Every generation talks up unlimited life spans, and it is always BS.
In other words, be prepared to die like everyone else.
Sound objections? (Score:2)
He puts forward some very sound objections to nanomachines of the Drexler variety.
Really? I found his objections to be fairly imprecise. For instance:
All well and good (Score:4, Insightful)
The slavery-based imperialist economies of the past relied on captive, expendable human labour and looting. There was no compelling need for mechanical transport when slaves could carry you, no need for extensive infrastructure when the roads were primarily intended to enforce the rule of the empire through the rapid movement of armies. Nor was there any extensive profit in consumer retailing when the majority of the population, locked into feudalism, did not have the surplus income to spend. The Romans had an extensive and often surprising level of technology that the traditional teaching of classical history fails to address at the high school level. They had fast food similar to burgers, but no extensive empire-encompassing franchise with the motto "Id amo"; nor did their technological abilities extend much past properly constructed water and sewer systems and roads for the majority of the populace. They had all the resources, both physical and intellectual, to develop into a technologically advanced society, but they did not and could not.
It was not until much later — long after the system that was the Roman Empire had vanished, after the Black Death devastated the populations of Europe — that feudalism ended and human labour became a valuable resource. It was at this point that the cost-effectiveness of machines became apparent and people were willing to invest time and money in their development and make a profit. The profit part doesn't necessarily appear as a direct result of new knowledge or research. On the contrary, some of the finest examples of our technological advancements — antibiotics and anti-malarials, for example — are a direct result of military strategic planning and had nothing at all to do with either venture capitalism or pro bono publico development.
So yes, The Singularity, just like The End of History (or dare I suggest even the Flying Car!), might be very pleasant but is equally difficult to either pin down precisely or predict accurately.
Re: (Score:3, Insightful)
Re: (Score:2)
Re:Faith in the Singularity (Score:5, Informative)
Minor nitpick: Futurists make a distinction between "strongly" and "weakly" Godlike AI. Strongly Godlike AI refers to an intelligence that is for all meaningful purposes God - effectively unbounded control over space and time. Weakly Godlike AI refers to a being that intellectually transcends us in ways we can't imagine, but is still bound by the laws of the universe. Most talk about the Singularity focuses on a weakly Godlike scenario.
Re: (Score:2, Informative)
Re: (Score:2)
In a sense... Had Christianity not come about and caused the downfall of the Roman Empire, we would still be using slave labor for most tasks today and would not have needed technological advancements.
Re:Faith in the Singularity (Score:5, Insightful)
I already am a god compared to someone living 200 years ago. I'm not afraid of infection. My children and wife survived the childbirth process easily. Name a topic, any topic in the world, and I can talk intelligently about it (all of us here are pretty much augmented beings, backed by the internet). I've seen the Earth from on top of the clouds. I've seen the sun come up over the Bay in Annapolis in the morning, and watched it go down over the bay in San Francisco in the evening of the same day.
Few people of the past would have thought such things were possible.
Sure, there's some faith, but there's a lot of carefully considered fact involved in the belief as well.
you vs. primitive man (Score:2, Funny)
Re:you vs. primitive man (Score:4, Insightful)
Re: (Score:3, Informative)
Re:Faith in the Singularity (Score:5, Insightful)
That's the height of irony.
Re: (Score:3, Insightful)
Yes, such a trivial difference. Tell you what, I'll try to boil a pot of water using technological means, while you try to do the same using mystical means. We'll see who gets to drink their tea first.
Re:Faith in the Singularity (Score:5, Insightful)
Precisely. The only difference from religious people is that the coming of the singularity is something that can be predicted from observable facts, instead of old texts written by self-serving priests of long ago interpreted by self-serving priests of today.
Re: (Score:2)
a point at which the derivative of a given function of a complex variable does not exist but every neighborhood of which contains points for which the derivative does exist [merriam-webster.com]
doh?
Geekism is as much faith-based as any other religion.
Re: (Score:2)
No, really? It happens that I *do* know what a singularity is, and it's not hard at all to predict. Want an example? The function f(x) = 1 / (3 - x) has a singularity at x = 3. It's trivial to see where a singularity is, if you know a little bit of math.
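For what it's worth, the blow-up is easy to see numerically by just evaluating that same f close to the pole:

```python
def f(x):
    """f(x) = 1 / (3 - x): diverges as x approaches the singularity at 3."""
    return 1 / (3 - x)

for x in (2.9, 2.99, 2.999):
    print(x, f(x))   # roughly 10, 100, 1000 -- unbounded growth near x = 3
```

The singularitarian analogy is that some measure of progress behaves like f as t approaches a finite date; whether any real-world quantity actually follows such a curve is, of course, the whole argument.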
Re: (Score:2)
As a Christian, I find it humorous to see the tone people (atheists, I presume?) use when talking about this. It seems very similar to a "rapture" mentality, coming from people who claim to be 100% rational. It's like:
Meanwhile, 3rd world countries become dominated by the technology-driven, post-singularity borgs, and end up hiding in caves and losing all their knowledge. That is, until a solar storm of apocalyptic proportions causes enough damage to render the systems unusable. All artificial life is lost, and the few survivors go back to the dark ages.
Re:Faith in the Singularity (Score:5, Insightful)
Religion, on the other hand, does not do this. The most religion can claim is providing government-like structures and psychotherapy-like benefits. It's sure not moving along the path to curing all diseases and increasing mankind's power over the universe.
So, yeah, there is a rational, historically-supported reason to be excited about one but not the other.
Re: (Score:2)
A Strong AI isn't just something people want; it's economically the most logical route for corporations to pursue (in order to save money), so it would happen anyway even if the nerds didn't say anything about it.
Otherwise, I argue that if a singularity doesn't happen, we'll all eventually die anyway, so we won't be arguing about the issue.
And by
Re: (Score:3, Informative)