Douglas Hofstadter Looks At the Future
An anonymous reader writes with a link to this "detailed and fascinating interview with Douglas Hofstadter (of Gödel, Escher, Bach fame) about his latest book, science fiction, Kurzweil's singularity and more ... Apparently this leading cognitive researcher wouldn't want to live in a world with AI, since 'Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid.' He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"
Singularity is naive (Score:5, Interesting)
I mean, if I ordered a burrito yesterday, and my neighbor ordered one today, and his two friends ordered one the next day, does that mean in 40 more days, all one trillion people on earth will have had one?
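For what it's worth, the arithmetic behind the joke checks out — doubling daily really does explode that fast (a toy illustration of naive exponential extrapolation, nothing more):

```python
# One burrito order, doubling every day, "singularity"-style.
orders = 1
for day in range(40):
    orders *= 2
print(orders)  # 2**40 = 1,099,511,627,776 -- roughly "one trillion people"
```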
Re:Singularity is naive (Score:4, Interesting)
AIs exist in a perfectly designed environment: humans feed them power and data, and all they need to do is process. At some point computers will need to interact with the environment themselves, and it is then that everything will slow down, and probably take a step backwards.
Massive amounts of processing power will have to be reassigned to tasks currently taken for granted, like acquiring data. Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.
End of *this* human life... (Score:5, Interesting)
It's worth noting, however (at least on my understanding of Kurzweil's projections), that for those willing to make the leap, much of the real growth and advancement will occur in Matrix-space. It's an excellent way to keep "growing" in power and complexity without using more energy than can be supplied by the material world.
Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too.
In fact, I can't help but wonder how many of us will even recognize when the first AI has arrived as a living being. Stretching the frog analogy probably too far: what is a frog's experience of a superior life form? I am guessing "not-frog". So I am guessing that my experience of an advanced AI life form is "whatever it does, it does it bloody fast, massively parallel, and very, very interesting...". Being in virtual space, though, AI "beings" are likely to be of only passing interest to those who remain stuck in a material world, at least initially.
Another analogical question: other than reading about the revolution in the newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually the arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward much the same as before. The real action was taking place on the boundary, not in the places left behind.
(Slightly off topic: This is why I think derivatives of Second Life type virtual worlds will totally *explode* in popularity: They let people get together without expending lots of jet fuel. I believe virtual world technology IS the "flying car" that was the subject of so many World's Fair Exhibits during the last century.)
Re:Hail to the robots (Score:3, Interesting)
I know, I know... Asimov's laws, etc., etc. But for a being to be sentient and at the same time reach the level of thinking that we enjoy, you must give it the freedom to think, without any restrictions... as humans (ostensibly) do. This requires a level of both bravery and careful planning that is far greater than we as humans are capable of today.
I'm not predicting some sort of evolutionary re-match of Cro-Magnon v. Neanderthal (where this time the robots are the new Cro-Magnon), but it does require a lot of careful thought, in every conceivable (and non-conceivable) direction. When it comes to building anything complex, it's always the things you didn't think of (or couldn't conceivably think of given the level of technology you had when designing) that come back to bite you in the arse (see also every great engineering disaster since the dawn of history).
Best bet would be to --if ever possible-- give said robot the tools to be sentient, but don't even think of giving them any power to actually do more than talk (verbal soundwaves, not data distribution) and think.
It reminds me of an old short story, where a highly-advanced future human race finally created a sentient device out of massive resources, linked from across every corner of humanity. They asked it one question to test it: "Is there a God?" The computer replied: "There is... now."
Re:Intelligent Beings (Score:3, Interesting)
On a tangent:
Intelligence is such a broad word, and then we tack on "Artificial". "AI" lacks a precise meaning, and if anything needs to be done in the world of AI, it's to create a nomenclature that makes sense and provides a protocol of understanding.
For many, the word AI simply means "human brain in a jar," but that's just one small branch of the AI sciences. So where is our Fujita Scale of artificial intelligence? Where is our toolkit of language (outside of mathematics)?
I ask this seriously btw, if any of you know about work on this please post a response.
It's even funnier (Score:3, Interesting)
The last one we had was the Great Depression. The irony of it was that it was the mother of all crises of _overproduction_. Humanity, or at least the West, was finally at the point where we could produce far more than anyone needed.
So much so that the old-style laissez-faire, the-free-market-automatically-fixes-everything capitalism model pretty much just broke down. There was simply no answer to the question of how much a country should produce. Hence my calling it a singularity.
By any kind of optimistic logic, it should have been the land of milk and honey. It was actually _the_ greatest economic collapse in known history, and it produced a great deal of misery and poverty.
And the funny thing is, the result was... well, that we learned to tweak the old model and produce less. We still go to work daily, a lot of companies still want overtime, and a whole bunch of people are still dirt-poor. We just divert more and more of that work into marketing, services and government spending. It's a better life than the downward spiral of the 19th century, no doubt. But basically no miracle has happened, and no utopia has resulted. The improvement for the average citizen was incremental, not some revolution.
That was actually one of the least destructive "singularities". Previous ones produced stuff like, for example, the two world wars, as the death throes of old-style colonialism. When the model based on just keeping expanding into new territories and markets reached the end, we just went at each other's throats instead. A somewhat similar "singularity" arguably helped the Roman Empire collapse, and ushered in a collapse of trade and return to barbarism. The death throes of feudalism created a very bloody wave of revolutions.
All the way back to the border between the Bronze Age and the Iron Age in Europe, where... well, we don't know exactly what happened there, but whole civilizations were displaced or enslaved, whole cities were razed, and Europe-wide trade just collapsed. Ancient Greece, for example, although most people just think of it as a continuous "Greece", saw the collapse of the Mycenaean civilization and the Achaean dialect it had before, and after some 300 years of the Greek Dark Ages, suddenly almost everyone there speaks Doric instead. The Greeks and Greek language of Homer are not the same as those of Pericles. (An Achaean League was formed much later, but apparently had little to do with the original Achaeans.) And, look, they displaced the Ionians too on their way.
We recovered after each of them, no doubt, but basically the key word is: recovered. It never created some utopian/transcendence golden age.
So, well, _if_ our technology model ends up dividing by zero, I'd expect the same to happen. There'll be much misery and pain, we'll _probably_ recover after a while, and life will go on.
Re:End of *this* human life... (Score:5, Interesting)
So they are both right in ways and wrong in ways. The real rub is that Kurzweil's future is probably farther away but not for the reasons that Hofstadter thinks. The real reasons are probably based in bad technology decisions we made in the last century or two.
We (humanity) have made several technological platform choices that are terrifyingly hard to change now. These choices drove us down a path that we may have to abandon, and thus suffer a massive technological setback. Specifically, the choices were oil, steel, and electricity.
Oil (fossil fuels in general) will run out. Steel (copper too) is growing scarcer. Electricity is too hard to store and produce (and heats silicon rather inconveniently). Data centers today are built with steel and located near power plants that often burn fossil fuels. That means even a data-center-driven life will be affected by our platform limitations.
When we start hitting physical limits to what we can do with these, and to how much of these supplies we can get, we will be forced to conserve, change, or stop advancing. Those are very real threats to continued technological advancement. And they don't go away if you hide in Second Life.
Show me a Data Center built with ceramic and powered by the sun or geo-electric sources and I'll recant.
Re:Hail to the robots (Score:3, Interesting)
I think that most people who want AI for pragmatic reasons are essentially advocating the creation of a slave race. You think companies/governments are going to spend billions of dollars creating an AI, and then just let it sit around playing Playstation 7 games? I doubt it. They'd likely want a return on their investment, and they'd force the program to do their bidding in some manner (choosing stocks, acting as intelligent front ends for advanced semantic search engines, etc). Maybe this would involve an imperative built into the AI at ground level: "obey your masters", or it could be more obviously sinister like a pain/pleasure reward system like the ones used to control human slaves.
Do you think that mainstream society would find this as repugnant as I do? I doubt it. Most people seem to find it difficult to empathize with other humans who have a different skin color, a different religion, or a different sexual orientation. If Average Joe doesn't care about the individual rights of people in Gitmo, he's certainly not going to care about the individual rights of a computer program, which is not even a biological life form.
I would say that any serious AI research needs to be preceded by widespread legislation expanding the definition of individual rights (abandoning the "human rights" label as anachronistic along the way). We need to ensure that all sapient beings, organic or digital, have guaranteed rights. Until then, I think AI researchers are badly misguided: they're naive idealists working towards a noble goal, without considering that they're effectively working to create a new slave race...
Re:I liked "I am a Strange Loop" (Score:5, Interesting)
BUT, I think that his chapters on math and physics and their interface (everything prior to the biology chapters) constitute the SINGLE GREATEST and only successful attempt ever to present a NON-DUMBED-DOWN layperson's introduction to mathematical physics. I gained more physical and mathematical insight from that book than from any other source prior to graduate school. For that alone, I salute him. Popularizations of physics a la Hawking are a dime a dozen. An "Emperor's New Mind," with (what I can only describe as) 'conceptual math' that TRULY describes the physics, comes along maybe once in a lifetime.
His latest book is the extension of that effort and the culmination of a lifetime of thinking clearly and succinctly about math and physics. He is the only writer alive who, IMO, has earned the right to use a title like "The Road to Reality: A Complete Guide to the Laws of the Universe".
As for Hofstadter, GEB was merely pretty (while ENM was beautiful), but essentially useless (to me) beyond that. Perhaps it was meant as simply a guide to aesthetic appreciation, in which case it succeeded magnificently. As far as reality is concerned, it offered me no new insight that I could see. Stimulating prose though - I guess no book dealing with Escher can be entirely bad. I haven't read anything else by Hofstadter so I can't comment there.
Cyborgs, not AI (Score:4, Interesting)
I am far more interested in digitally enhancing human bodies and brains than creating a new AI species.
Consider this: throughout the eons of natural and sexual selection, we've evolved from fish to lizards, to mammals, to apes, and eventually to modern humans. With each evolutionary step, we have added another layer to our brain, making it more and more powerful, sophisticated and most importantly, more self-aware, more conscious.
But once our brains reached the critical capacity that allows abstract thought and language, we've stepped out of nature's evolutionary game and started improving ourselves through technology: weapons to make us better killers, letters to improve our memory, mathematics and logic to improve our reasoning, science to go beyond our intuitions. Digital technology, of course, has further accelerated the process.
And now, without even realizing it, we are merging our consciousness with technology and building the next layer of our brain. The more integrated and seamless communication between our brains and machines becomes, the closer we get to the next stage in human evolution.
Unfortunately, there is a troubling philosophical nuance that may bother some of us: how do you think our primitive reptilian brain feels about having a frontal lobe stuck to it, controlling its actions for reasons too sophisticated for it to ever understand? Will it be satisfying for us to be to our digital brain as our primitive urges and hungers are to us?
Re:Intelligent Beings (Score:3, Interesting)
The field of AI research has taken tasks that were once thought to require sentience, and found ways to perform them with simple sets of rules and/or large databases. Isn't even the term "AI" passé in the field now?
It's not moving the goalposts, it's simply a clarification of what sentience means: some level of self-awareness. Even a hamster has it, but no software yet does.
Re:Singularity is naive (Score:4, Interesting)
My take, which sounds very anthropocentric, is that it won't work like that. I have a belief, which might be scary, and it goes like this: we are as smart as it gets.
Before you dismiss that, here's the thing: intelligence and processing power are not the same thing. I know that computers can process much more raw information much more quickly than a human mind, but there's no understanding there. I also believe that at some distant point we'll be able to build a computer "brain" that does have the ability to understand as we do. What I don't believe is that just because it can function faster, it will suddenly understand better.
Despite the enormous amount of completely idiotic stuff humans do, the best and brightest humans in their best and brightest moments are nothing short of amazingly intelligent. Compared to what? Compared to everything else that we've ever encountered. This very interview is a good example. People like Hofstadter are dealing not with a lack of processing power, but are running up against the very ambiguities of the universe itself. You've absolutely got to read GEB if you don't understand what I mean by that.
So yeah: as little evidence as I have, I believe that humans are capable of (though not usually engaged in) the highest form of intelligence possible. I don't think a computer brain that runs 10x faster would be 10x smarter. It would get the same tasks done more quickly, but its overall comprehension would be within an order of magnitude of anything the best humans can do.
Let me say this too: while I respect the AI field, we've already got 6 billion and counting super-high-tech neural networks on this planet right now that can blow the pants off any computer in comprehension and creativity. Yet we are shit at benefiting from all that. I don't think mechanized versions are going to cause a dramatic improvement. It's a complex world.
Cheers.
Re:Singularity is naive (Score:5, Interesting)
Would you still be you if the computer was running a simulation of your brain? If you have some sense of "self", that which is aware, how would that awareness be affected by having two or more copies of your mental processes in action at the same time? Is that awareness merely a byproduct of some mental/mechanical process or a chemical process, or is it something else still? Would your brain really be worth running in a computer?
I tend to think, and a "thinking" computer would probably agree, that the computer is probably better off doing other things than running wetware facsimiles that grew out of a willy-nilly evolutionary process over millions of years.
Re:Singularity is naive (Score:3, Interesting)
That's the argument that, if we get something smarter than an unaugmented human, it will find it relatively easy to make something still smarter, and so on. First, how hard it is for something to reproduce, even at its own level of intelligence, varies widely with just what type of singularity model we use. Suppose AI happens in a system that has lots of sensory elements, and control elements that affect real-world processes, where we actually encourage the first steps of the system waking up. That makes more sense than an AI spontaneously generating in some big processor network, or developing in a system with very limited bandwidth devoted to interacting with the real world.
So the number of 'transistors' that fit on this thing's 'chips' doubles every 18 months, or whatever variant of Moore's law you want to use. That doesn't mean 18 months later it (or you) can build one twice as smart. All its sensory and motor capabilities don't automagically double, even if Moore somehow still applies. Its intelligence needs to reproduce a body for its offspring, not just a mind, and if that body involves the whole existing net, a dozen radio telescopes, and a few automated car factories, it has to build something better than that for the next generation, as well as just building a better brain.
If we actually got something a little bit smarter than us, and educated it well, it might be pretty smart about not building its successor to have more environmental consequences than the parent had, or about making something smarter that would be miserable without senses and effectors capable of using the increased intelligence.
After all, if you have an IQ of 130, and find a mate who is also smarter than average, and genetic analysis shows your kids would average 150 or more, you should probably go for lots of kids, right? What if those kids also have significant chances of suicidal burnout and schizophrenia-like alienation from their limited environment? And they are only going to be able to realize their potential on a very steady high-protein diet, which looks hard to sustain given your predictions for the ecology. Maybe you'd skip that opportunity, or even decide that reproducing at all isn't such a good idea, at least not just yet.
Re:Singularity is naive (Score:2, Interesting)
You know what. If Hofstadter started a religion, I'd probably at least attend the services. Mostly because I could meet interesting women.
Re:Singularity is naive (Score:3, Interesting)
Change the surface finish on the board and watch your tool cry out when it can't find the fiducials; or enjoy the fun of running a really thick PCB without telling the tool (and disabling all the safeguards) and having the placement nozzles crash. SMT components are amazingly easy to pick up, since they have flat areas perfect for a vacuum nozzle to grab hold of, are fed off of reels with carefully controlled distances between parts, and have simple package characteristics for alignment.
As I mentioned in a response to another poster, the level of image acquisition, processing, and spatial computation needed for an autonomous machine is far beyond anything we have today.
I was an SMT process engineer for 4 years in CPU manufacturing, though I never worked on the Fujis.
Re:Singularity is naive (Score:3, Interesting)
There are several serious problems with planet Earth right now, and if we don't get off our collective asses, then within 50 years all this great tech we are developing will look like nice paint on the stern of the Titanic.
The kinds of problems we should be dealing with are fairly low-tech. Large-screen plasma TVs attract lots of money; clean water, food, and medicine are unfortunately not a priority, except with a small number of idealists who don't have the funds to make much impact.
I saw a speech by Jane Goodall not that long ago and was very much moved by the amount of energy that she still puts into trying to save this blue-green globe, but it will take a lot more than a couple of speeches.
Re:Singularity is naive (Score:3, Interesting)
You had to undo three axes of rotation and translation in order to position the code so that it could be read, and scale it as well.
The pattern was - you've guessed it
We did this on a Targa vision board and an AT clone at 20 MHz in real time; I'm pretty sure that today's computers could do a lot better than that (well, not better than real time, but better in terms of algorithm complexity).
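The alignment step described above (undoing rotation, translation, and scale before the pattern can be read) reduces, in the 2D case, to fitting a similarity transform to matched fiducial points. Here's a pure-Python sketch using complex coordinates; the point data and transform below are made up for illustration, and the full 3D pose problem on that old hardware was of course harder:

```python
import cmath

def fit_similarity(src, dst):
    """Least-squares a, b such that dst[i] ~ a*src[i] + b (complex 2D points)."""
    n = len(src)
    ms, md = sum(src) / n, sum(dst) / n
    num = sum((d - md) * (s - ms).conjugate() for s, d in zip(src, dst))
    den = sum(abs(s - ms) ** 2 for s in src)
    a = num / den    # phase of a = rotation, modulus of a = scale
    b = md - a * ms  # translation
    return a, b

# Made-up example: camera sees the design rotated 30 degrees, scaled 2x,
# and shifted by (5, 3).
a_true = 2 * cmath.exp(1j * cmath.pi / 6)
b_true = 5 + 3j
design = [0 + 0j, 10 + 0j, 0 + 8j, 10 + 8j]   # fiducials in design coords
seen = [a_true * z + b_true for z in design]  # same fiducials in camera coords

a, b = fit_similarity(design, seen)
recovered = [(z - b) / a for z in seen]  # camera coords mapped back to design
```

With noisy real-world fiducials the same closed form gives the best-fit transform, which is why a handful of well-placed marks is enough to deskew the whole image.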
A great mind and a great interview (Score:3, Interesting)
1. "Ray Kurzweil is terrified by his own mortality", and
2. "Rather ironically, [Kurzweil's] vision totally bypasses the need for cognitive science or AI"
It is exactly this complex and elusive puzzle of "I" and "consciousness" that Hofstadter explores, and that Kurzweil hopes we can conquer without having to think about it at all. Which I scorn as "magic science".
I have to say I find the cyberpunk vision more appealing than Hofstadter does. It would be "the end of humanity as we know it." I'm not sure it would be "the end of human life." It might be evolution. I just think it is many hundreds of years in the future at the most "optimistic" (depending on your viewpoint).
Re:Intelligent Beings (Score:3, Interesting)
For a class project, I once created a genetic algorithm to evolve a Reversi-playing algorithm (Reversi is also known as Othello). I coded the system so it could not consider more than X moves in advance, because I wanted to prevent it from using "computer tricks" (i.e. I didn't want it looking farther ahead than a typical human could with a moderate amount of practice). I tried playing with that number just to see what would happen, but I eventually left it at 4.
By the time I was done with my evolving system, it could evolve, in 4 days (using four ~2 GHz Intel servers and an island genetic model, for those who know about genetic algorithms), an algorithm which could handily and consistently beat me and all of my friends.
The interesting thing here is that I didn't even "initialize" it with a basic strategy or any personal training -- it started with randomly-generated strategies (most of which were no better than randomly placing pieces in legal squares). It then played against itself for those 4 days, learning through trial and error (as opposed to training by playing against a human). By the end, it had learned enough without human feedback that it could defeat a group of fairly intelligent (though not very practiced) humans at Reversi.
I never analyzed the generated programs enough to fully understand how they worked, but I did inspect them a little. Each evolved algorithm consisted of no more than 40 lines of C code (which called various global helper functions such as get_opponent_score(), get_self_side_pieces(), etc., which I had created). By inspecting algorithms that were able to beat me, I actually learned a thing or two about Reversi strategy.
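The island model described above can be sketched compactly. This is not the parent's actual code: the self-play fitness is replaced by a stand-in scoring function, and every constant and name below is invented for illustration — but the structure (independent per-island evolution plus occasional migration of champions) is the same idea:

```python
import random

# Toy island-model genetic algorithm. Each "strategy" is a weight vector;
# real fitness would come from self-play tournaments, but a stand-in
# function (distance to a made-up target vector) keeps the sketch short.

GENES, ISLANDS, POP, GENERATIONS = 8, 4, 20, 60
TARGET = [1, -1, 0.5, 0.5, -0.25, 0, 2, -2]  # arbitrary "ideal" weights

def fitness(w):
    # Stand-in for "win rate against the rest of the population".
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def mutate(w):
    # Nudge ~30% of the genes by a small Gaussian step.
    return [g + random.gauss(0, 0.2) if random.random() < 0.3 else g for g in w]

def crossover(a, b):
    cut = random.randrange(1, GENES)  # one-point crossover
    return a[:cut] + b[cut:]

random.seed(0)
islands = [[[random.uniform(-3, 3) for _ in range(GENES)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(GENERATIONS):
    for i, pop in enumerate(islands):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 4]  # the survivors breed the next generation
        kids = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
        islands[i] = elite + kids
    if gen % 10 == 9:  # every 10 generations, send a champion to a neighbor
        for i in range(ISLANDS):
            islands[(i + 1) % ISLANDS][-1] = max(islands[i], key=fitness)

best = max((s for pop in islands for s in pop), key=fitness)
```

In the real project the fitness call would play each candidate against the rest of the population over many games; the island split is what let the four servers evolve mostly independently, with migration keeping the gene pools from stagnating.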