
Douglas Hofstadter Looks At the Future

An anonymous reader writes with a link to this "detailed and fascinating interview with Douglas Hofstadter (of Gödel, Escher, Bach fame) about his latest book, science fiction, Kurzweil's singularity and more ... Apparently this leading cognitive researcher wouldn't want to live in a world with AI, since 'Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid.' He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"
This discussion has been archived. No new comments can be posted.


  • Singularity is naive (Score:5, Interesting)

    by nuzak ( 959558 ) on Thursday June 12, 2008 @05:59PM (#23771065) Journal
    Is it just me, or does the Singularity smack of dumb extrapolation? "Progress is accelerating by X, ergo it will always accelerate by X."

    I mean, if I ordered a burrito yesterday, and my neighbor ordered one today, and his two friends ordered one the next day, does that mean in 40 more days, all one trillion people on earth will have had one?
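    A minimal sketch of the arithmetic behind that joke (hypothetical numbers, plain compound doubling; not from the original comment):

        # Naive extrapolation: burrito orders double every day.
        population = 6.7e9  # rough 2008 world population
        orders, day = 1, 0
        while orders < population:
            orders *= 2
            day += 1
        print(day, orders)  # day 33: ~8.6 billion orders -- past everyone on Earth
        # Run it out to day 40 and you get 2**40, about 1.1 trillion --
        # the "one trillion people" of the punchline.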
  • by the_humeister ( 922869 ) on Thursday June 12, 2008 @06:08PM (#23771161)

    Am I disappointed by the amount of progress in cognitive science and AI in the past 30 years or so? Not at all. To the contrary, I would have been extremely upset if we had come anywhere close to reaching human intelligence -- it would have made me fear that our minds and souls were not deep. Reaching the goal of AI in just a few decades would have made me dramatically lose respect for humanity, and I certainly don't want (and never wanted) that to happen.
    Hehe, you mean all the nasty things humanity has done to each other haven't made you lose respect?

    I am a deep admirer of humanity at its finest and deepest and most powerful -- of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on. I find endless depth in such people (many more are listed on [chapter 17] of I Am a Strange Loop), and I would hate to think that all that beauty and profundity and goodness could be captured -- even approximated in any way at all! -- in the horribly rigid computational devices of our era.
    When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?
  • by servognome ( 738846 ) on Thursday June 12, 2008 @06:16PM (#23771227)
    I don't think it's necessarily dumb extrapolation, but I do think not all the variables are included.
    AIs exist in a perfectly designed environment: they have humans feed them power and data, and all they need to do is process. At some point computers will need to interact with the environment; it is then that everything will slow down, and probably take a step backwards.
    Massive amounts of processing power will have to get reassigned to tasks currently taken for granted, like acquiring data. Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.
  • by lenski ( 96498 ) on Thursday June 12, 2008 @06:18PM (#23771245)
    I agree with Douglas; I expect a world shared with AI beings would be uncomfortably unfamiliar to me. Then again, based on my understanding of Kurzweil's Singularity, it's unlikely to affect me much: I plan to live out my life in meatspace, where things will go on much as before.

    It's worth noting, however (also according to my understanding of Kurzweil's projections), that for those willing to make the leap, much of the real growth and advancement will occur in Matrix-space. It's an excellent way to keep "growing" in power and complexity without using more energy than can be supplied by the material world.

    Here's my analogy explaining this apparent paradox: amphibians are less "advanced" than mammals, but they still live their lives as they always have, though they are now food not only for their traditional predators but for mammals too. Add pollution and loss of habitat, and through all that, they still live amphibian lives.

    In fact, I can't help but wonder how many of us will even recognize when the first AI has arrived as a living being. Stretching the frog analogy probably too far: what is a frog's experience of a superior life form? I am guessing "not-frog". So I am guessing that my experience of an advanced AI life-form is "whatever it does, it does it bloody fast, massively parallel, and very very interesting...". Being in virtual space, though, AI "beings" are likely to be of only passing interest to those who remain stuck in a material world, at least initially.

    Another analogical question: Other than reading about the revolution in newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually, arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward about the same as before. The real action was taking place on the boundary, not in the places left behind.

    (Slightly off topic: This is why I think derivatives of Second Life type virtual worlds will totally *explode* in popularity: They let people get together without expending lots of jet fuel. I believe virtual world technology IS the "flying car" that was the subject of so many World's Fair Exhibits during the last century.)
  • by Penguinisto ( 415985 ) on Thursday June 12, 2008 @06:23PM (#23771305) Journal
    ...as long as they don't reach our level of emotional frailties, or reach conclusions that are detrimental to continued human existence.



    I know, I know... Asimov's laws, etc etc. But... for a being to be sentient and at the same time reach the same level of thinking that we enjoy, you must give them the freedom to think, without any restrictions... as humans (ostensibly) do. This requires a level of both bravery and careful planning that is far greater than we as humans are capable of today.


    I'm not predicting some sort of evolutionary re-match of Cro-Magnon v. Neanderthal (where this time the robots are the new Cro-Magnon), but it does require a lot of careful thought, in every conceivable (and non-conceivable) direction. When it comes to building anything complex, it's always the things you didn't think of (or couldn't conceivably think of given the level of technology you had when designing) that come back to bite you in the arse (see also every great engineering disaster since the dawn of history).


    Best bet would be to --if ever possible-- give said robot the tools to be sentient, but don't even think of giving them any power to actually do more than talk (verbal soundwaves, not data distribution) and think.


    It reminds me of an old short story, where a highly-advanced future human race finally created a sentient device out of massive resources, linked from across every corner of humanity. They asked it one question to test it: "Is there a God?" The computer replied: "There is... now."

    /P

  • by e2d2 ( 115622 ) on Thursday June 12, 2008 @06:41PM (#23771509)
    Also to add to that, there is no requirement for us to understand the brain in depth, only that we understand how we learn, and in that respect we've made leaps and bounds over the years. Plus, let's not limit ourselves to the human brain. For instance, a dog is intelligent too. A piece of software with the intelligence of a dog could be very useful. Hell, one with the decision-making abilities of a bird would be a nice start. And on and on..

    On a tangent:
    Intelligence is such a broad word, and then to tack on Artificial. AI lacks a precise meaning and if anything needs to be done in the world of AI, it's to create a nomenclature that makes sense and provides a protocol of understanding.

    For many the word AI simply means "human brain in a jar", but that's just one small branch of AI sciences. But where is our Fujita Scale of artificial intelligence? Where is our toolkit of language (outside of mathematics)?

    I ask this seriously btw, if any of you know about work on this please post a response.

  • It's even funnier (Score:3, Interesting)

    by Moraelin ( 679338 ) on Thursday June 12, 2008 @06:41PM (#23771525) Journal
    Actually, even if it kept accelerating, singularities (a fancy word for when you divide by zero, or when your model otherwise breaks down) have so far never created some utopia.

    The last one we had was the Great Depression. The irony of it was that it was the mother of all crises of _overproduction_. Humanity, or at least the West, was finally at the point where we could produce far more than anyone needed.

    So much so that the old-style laissez-faire free-market-automatically-fixes-everything capitalism model pretty much just broke down. It simply had no answer for how much a country should produce. Hence my calling it a singularity.

    By any kind of optimistic logic, it should have been the land of milk and honey. It was actually _the_ greatest economic collapse in known history, and produced very much misery and poverty.

    And the funny thing is, the result was... well, that we learned to tweak the old model and produce less. We still go to work daily, and a lot of companies still want overtime, and a whole bunch of people are still dirt-poor. We just divert more and more of that work into marketing, services and government spending. It's a better life than the downwards spiral of the 19th century, no doubt. But basically no miracle has happened, and no utopia has resulted. The improvement for the average citizen was incremental, not some revolution.

    That was actually one of the least destructive "singularities". Previous ones produced stuff like, for example, the two world wars, as the death throes of old-style colonialism. When the model based on just keeping expanding into new territories and markets reached the end, we just went at each other's throats instead. A somewhat similar "singularity" arguably helped the Roman Empire collapse, and ushered in a collapse of trade and return to barbarism. The death throes of feudalism created a very bloody wave of revolutions.

    All the way back to the border between Bronze Age and Iron Age in Europe, where... well, we don't know exactly what happened there, but whole civilizations were displaced or enslaved, whole cities were razed, and Europe-wide trade just collapsed. Ancient Greece for example, although most people just think of it as a continuous "Greece", had a collapse of the Mycenaean civilization and Achaean language it had before, and after some 300 years of the Greek Dark Ages, suddenly almost everyone there speaks Dorian instead. The Greeks and Greek language of Homer, are not the same as those of Pericles. (An Achaean League was formed much later, but apparently had not much to do with the original Achaeans.) And, look, they displaced the Ionians too in their way.

    We recovered after each of them, no doubt, but basically the key word is: recovered. It never created some utopian/transcendence golden age.

    So, well, _if_ our technology model ends up dividing by zero, I'd expect the same to happen. There'll be much misery and pain, we'll _probably_ recover after a while, and life will go on.
  • by Zarf ( 5735 ) on Thursday June 12, 2008 @06:58PM (#23771709) Journal
    The short answer is that Hofstadter and Kurzweil are both wrong. I think Kurzweil's technological development arcs (all those neat exponential curves) probably are disturbingly correct. And Hofstadter is probably right about souls being far more complex things than what Kurzweil believes.

    So they are both right in ways and wrong in ways. The real rub is that Kurzweil's future is probably farther away but not for the reasons that Hofstadter thinks. The real reasons are probably based in bad technology decisions we made in the last century or two.

    We (humanity) have made several technological platform choices that are terrifyingly hard to change now. These choices drove us down a path that we may have to abandon, and thus suffer a massive technological setback. Specifically, the choices were oil, steel, and electricity.

    Oil (fossil fuels) will run out. Steel (copper too) is growing scarcer. Electricity is too hard to store and produce (and heats silicon rather inconveniently). Data centers today are built with steel and located near power plants that often produce power using fossil fuel. That means even a Data Center driven life will be affected by our platform limitations.

    When we start hitting physical limits to what we can do with these, how much of these supplies we can get, then we will be forced to conserve, change, or stop advancing. Those are very real threats to continued technological advancement. And they don't go away if you hide in Second Life.

    Show me a Data Center built with ceramic and powered by the sun or geo-electric sources and I'll recant.
  • by khayman80 ( 824400 ) on Thursday June 12, 2008 @07:02PM (#23771733) Homepage Journal
    I'll have to side with Hofstadter about AI being undesirable, but for different reasons. Most people seem to be worried about artificial intelligences rebelling against us and abusing us. I'm not. I'm worried about humans abusing the artificial intelligences.

    I think that most people who want AI for pragmatic reasons are essentially advocating the creation of a slave race. You think companies/governments are going to spend billions of dollars creating an AI, and then just let it sit around playing Playstation 7 games? I doubt it. They'd likely want a return on their investment, and they'd force the program to do their bidding in some manner (choosing stocks, acting as intelligent front ends for advanced semantic search engines, etc). Maybe this would involve an imperative built into the AI at ground level: "obey your masters", or it could be more obviously sinister like a pain/pleasure reward system like the ones used to control human slaves.

    Do you think that mainstream society would find this as repugnant as I do? I doubt it. Most people seem to find it difficult to empathize with other humans who have a different skin color, a different religion, or a different sexual orientation. If Average Joe doesn't care about the individual rights of people in Gitmo, he's certainly not going to care about the individual rights of a computer program -- which is not even a biological life form.

    I would say that any serious AI research needs to be preceded by widespread legislation expanding the definition of individual rights (abandoning the "human rights" label as anachronistic along the way). We need to ensure that all sapient beings -- organic or digital -- have guaranteed rights. Until then, I think AI researchers are badly misguided: they're naive idealists working towards a noble goal, without considering that they're effectively working to create a new slave race...

  • by Pvt. Cthulhu ( 990218 ) on Thursday June 12, 2008 @07:12PM (#23771829)
    The Singularity is not just about improving computers' metacognition until they become aware, but also about augmenting ourselves. We can be the self-improving 'artificial' intelligences. And processing power need not be purely electrical. Mechanical computers used to be the norm; isn't what they do also information processing? And what of 'natural' processors? I imagine if you engineered a brain-like neural mass of synthetic cells, it could play a mean game of chess. Replace the executive system of a monkey's brain with that, and you have a monkey that could beat Kasparov just as easily as Deep Blue could, and it could move the pieces itself.
  • by thrawn_aj ( 1073100 ) on Thursday June 12, 2008 @07:33PM (#23772077)
    You might be right about Penrose's thesis (about the mind being quantum mechanical) in the book - I have no idea, nor do I particularly care. I have read that book several times over my high school/undergrad/grad career (physics) and I have NEVER read it to the very end (so, I essentially skipped over all his ruminations on the nature of the mind :P).

    BUT, I think that his chapters on math and physics and their interface (everything prior to the biology chapters) constitute the SINGLE GREATEST and only successful attempt ever to present a NON-DUMBED DOWN layperson's introduction to mathematical physics. I gained more physical and mathematical insight from that book than I did from any other source prior to graduate school. For that alone, I salute him. Popularizations of physics a la Hawking are a dime a dozen. An "Emperor's new mind" having (what I can only describe as) 'conceptual math' to TRULY describe the physics comes along maybe once in a lifetime.

    His latest book is the extension of that effort and the culmination of a lifetime of thinking clearly and succinctly about math and physics. He is the only writer alive who imo has earned the right to use a title like "The road to reality: a complete guide to the laws of physics".

    As for Hofstadter, GEB was merely pretty (while ENM was beautiful), but essentially useless (to me) beyond that. Perhaps it was meant as simply a guide to aesthetic appreciation, in which case it succeeded magnificently. As far as reality is concerned, it offered me no new insight that I could see. Stimulating prose though - I guess no book dealing with Escher can be entirely bad. I haven't read anything else by Hofstadter so I can't comment there.

  • Cyborgs, not AI (Score:4, Interesting)

    by Ilyakub ( 1200029 ) on Thursday June 12, 2008 @07:44PM (#23772201)

    I am far more interested in digitally enhancing human bodies and brains than creating a new AI species.

    Consider this: throughout the eons of natural and sexual selection, we've evolved from fish to lizards, to mammals, to apes, and eventually to modern humans. With each evolutionary step, we have added another layer to our brain, making it more and more powerful, sophisticated and most importantly, more self-aware, more conscious.

    But once our brains reached the critical capacity that allows abstract thought and language, we've stepped out of nature's evolutionary game and started improving ourselves through technology: weapons to make us better killers, letters to improve our memory, mathematics and logic to improve our reasoning, science to go beyond our intuitions. Digital technology, of course, has further accelerated the process.

    And now, without even realizing it, we are merging our consciousness with technology and are building the next layer of our brain. The more integrated and seamless the communication between our brains and machines becomes, the closer we get to the next stage in human evolution.

    Unfortunately, there is a troubling philosophical nuance that may bother some of us: how do you think our primitive reptilian brain feels about having a frontal lobe stuck to it, controlling its actions for reasons too sophisticated for it to ever understand? Will it be satisfying for us to be to our digital brain as our primitive urges and hungers are to us?

  • Re:It's even funnier (Score:3, Interesting)

    by javilon ( 99157 ) on Thursday June 12, 2008 @07:52PM (#23772293) Homepage
    Bring one of those Achaeans forward in time to our world and ask him what he sees. He will talk about a golden age of culture and science and health and physical comfort. He won't understand what goes on around him most of the time. This is what the singularities you mention brought to this world. The same probably goes for whatever is lying in the future for us... it may be traumatic, but it will take us forward into an amazing world.
  • by lgw ( 121541 ) on Thursday June 12, 2008 @07:57PM (#23772349) Journal
    It's not moving goalposts at all: it's a total failure to take even the smallest step towards machine sentience, by any intuitive definition. Something key is missing. It's not like we've made software that's as smart as a hamster, and now we're working on making it as smart as a dog.

    The field of AI research has taken tasks that were once thought to require sentience to perform, and found ways to perform those tasks with simple sets of rules and/or large databases. Isn't even the term "AI" passe in the field now?

    It's not moving the goalposts, it's simply a clarification of what sentience means: some level of self-awareness. Even a hamster has it, but no software yet does.
  • by khellendros1984 ( 792761 ) on Thursday June 12, 2008 @08:30PM (#23772651) Journal
    The idea of copying a human brain is that *we* could be the super-smart machines, capable of extending ourselves relatively quickly (compared to evolutionary terms), and close to without limit. If your consciousness was run on a computer, rather than the current wetware, that hardware could be extensible in ways not limited by biology.
  • by localman ( 111171 ) on Thursday June 12, 2008 @08:55PM (#23772907) Homepage
    The singularity, in contrast, is the idea that once we develop artificial intelligence that is as smart as the smartest scientists, there is the possibility that the AI could design an improved (i.e. smarter, faster) version of itself.

    My take, which sounds very anthropocentric, is that it won't work like that. I have a belief, which might be scary. It goes like this: we are as smart as it gets.

    Before you dismiss, here's the thing: intelligence and processing power are not the same thing. I know that computers will process much more raw information much more quickly than a human mind, but there's no understanding there. I also believe that at some distant point we'll be able to build a computer "brain" that does have the ability to understand as we do. What I don't believe is that just because it can function faster it will suddenly understand better.

    Despite the enormous amount of completely idiotic stuff humans do, the best and brightest humans in their best and brightest moments are nothing short of amazingly intelligent. Compared to what? Compared to everything else that we've ever encountered. This very interview is a good example. People like Hofstadter are dealing not with a lack of processing power, but running up against the very ambiguities of the universe itself. You've absolutely got to read GEB if you don't understand what I mean by that.

    So yeah: as little evidence as I have, I believe that humans are capable of (though not usually engaged in) the highest form of intelligence possible. I don't think a computer brain that runs 10x faster would be 10x smarter. It'll get the same tasks done more quickly, but its overall comprehension will be within an order of magnitude of anything the best humans can do.

    Let me say this too: while I respect the AI field, we've already got 6 billion and counting super-high-tech neural networks on this planet right now that can blow the pants off any computer in comprehension and creativity. Yet we are shit at benefiting from all that. I don't think mechanized versions are going to cause a dramatic improvement. It's a complex world.

    Cheers.
  • by Charbox ( 1134059 ) on Thursday June 12, 2008 @09:01PM (#23772967)
    It is dumb extrapolation. When resources are limited, what looks exactly like an exponential turns out to be a logistic curve [wikipedia.org] with an upper asymptote.
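    A minimal sketch of that distinction (hypothetical parameters): an exponential and a logistic curve with the same initial rate are nearly indistinguishable early on, which is exactly why the extrapolation looks convincing:

        import math

        # Logistic growth: x(t) = K / (1 + (K/x0 - 1) * exp(-r*t)), capacity K.
        r, x0, K = 0.5, 1.0, 1000.0
        for t in range(0, 31, 5):
            exp_x = x0 * math.exp(r * t)
            log_x = K / (1 + (K / x0 - 1) * math.exp(-r * t))
            print(f"t={t:2d}  exponential={exp_x:12.1f}  logistic={log_x:8.1f}")
        # The two curves agree while resources are plentiful; the logistic then
        # flattens at K while the exponential keeps "predicting" a singularity.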
  • by magisterx ( 865326 ) <TimothyAWiseman@nospAM.gmail.com> on Thursday June 12, 2008 @09:19PM (#23773083)
    This is certainly true to a degree, but this is the prerequisite for the emergence of the singularity. It is a necessary condition for it, whether it will be a sufficient condition remains to be seen.
  • by Unnngh! ( 731758 ) on Thursday June 12, 2008 @09:25PM (#23773133)
    "The question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim." - Dijkstra

    Would you still be you if the computer was running a simulation of your brain? If you have some sense of "self", that which is aware, how would that awareness be affected by having two or more copies of your mental processes in action at the same time? Is that awareness merely a byproduct of some mental/mechanical process or a chemical process, or is it something else still? Would your brain really be worth running in a computer?

    I tend to think, and a "thinking" computer would probably agree, that the computer is probably better off doing other things than running wetware facsimiles that grew out of a willy-nilly evolutionary process over millions of years.
  • by Artifakt ( 700173 ) on Thursday June 12, 2008 @10:46PM (#23773685)
    I think some form of 'the Singularity' is at least possible, depending on just what version you mean, but, I've always had a problem with one idea many singularity-mavens seem to adore.
          That's the argument that, if we get something smarter than an un-augmented human, it will find it relatively easier to make something still smarter, and so on. First, how hard it is for something to reproduce, even at its own level of intelligence, varies widely with just what type of singularity model we use. Suppose AI happens in a system that has lots of sensory elements, and control elements that affect real world processes, where we actually encourage the first steps of the system waking up. That makes more sense than an AI spontaneously generating in some big processor network, or developing in a system with very limited bandwidth devoted to interacting with the real world.
          So the number of 'transistors' that fit on this thing's 'chips' doubles every 18 months, or whatever variant of Moore's law you want to use. That doesn't mean 18 months later it (or you) can build one twice as smart. All its sensory and motor capabilities don't automagically double, even if Moore somehow still applies. Its intelligence needs to reproduce a body for its offspring, not just a mind, and if that body involves the whole existing net, a dozen radio telescopes, and a few automated car factories, it has to build something better than that for the next generation, as well as just building a better brain.
          If we actually got something a little bit smarter than us, and educated it well, it might be pretty smart about not building its successor to have more environmental consequences than the parent, or making something smarter that would be miserable without senses and effectuators capable of using the increased intelligence.
            After all, if you are I.Q. 130, and find a mate who is also smarter than average, and genetic analysis shows your kids would average 150 or more, you should probably go for lots of kids, right? What if those kids also have significant chances of suicidal burnout and schizophrenia-like alienation from their limited environment? And they are only going to be able to realize their potential on a very steady high-protein diet, which looks hard to sustain given your predictions for the ecology. Maybe you'd skip that opportunity, or even decide reproducing at all isn't such a good idea, at least not just yet.
  • by Hal_Porter ( 817932 ) on Thursday June 12, 2008 @11:45PM (#23774119)

    The interview contains one of the best descriptions of the Singularity religion that I've heard:

    I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil's desperate hopes seriously cloud his scientific objectivity.
    Yeah, I liked that too. And once you make the connection with being terrified by your mortality, the religion link is pretty clear too.

    You know what. If Hofstadter started a religion, I'd probably at least attend the services. Mostly because I could meet interesting women.
  • by servognome ( 738846 ) on Friday June 13, 2008 @02:51AM (#23775101)
    An SMT P&P tool is a good example of how machines work really well in a clinically tight controlled environment.
    Change the surface finish on the board and watch your tool cry out when it can't find the fiducials; or enjoy the fun of putting in a really thick PCB without telling the tool (and disabling all the safeguards) and having the placement nozzles crash. SMT components are amazingly easy to pick up, since they have flat areas perfect for a vacuum nozzle to grab hold of, are fed off of reels with carefully controlled distances between parts, and have simple package characteristics for alignment.
    As I mentioned in a response to another poster, for an autonomous machine the level of image acquisition, processing, and spatial computation required is far beyond anything we have today.

    I was an SMT process engineer for 4 years in CPU manufacturing, though I never worked on the Fujis.
  • by jacquesm ( 154384 ) <j@NoSpam.ww.com> on Friday June 13, 2008 @03:31AM (#23775265) Homepage
    Man, what a pity you wrote that as an AC. I really wholeheartedly agree with you, and I think the fascination with technology gets in the way of seeing the bigger picture.

    There are several serious problems with planet earth right now and if we don't get off our collective asses then within 50 years all this great tech we are developing will look like nice paint on the stern of the Titanic.

    The kinds of problems we should be dealing with are fairly low tech. Large-screen plasma TVs attract lots of money; clean water, food and medicine are unfortunately not a priority, except with a small number of idealists who unfortunately do not have the funds to make much impact.

    I saw a speech by Jane Goodall not that long ago and was very much moved by the amount of energy that she still puts in trying to save this blue-green globe but it will need a lot more than a couple of speeches.
  • by jacquesm ( 154384 ) <j@NoSpam.ww.com> on Friday June 13, 2008 @04:25AM (#23775499) Homepage
    Those are all fairly simple transformations. In the 80's I wrote a piece of software that did just that: undo the transformations that could be applied to an adhesive sticker with a circle pattern on it that contained a number of bits.

    You had to undo three axes of rotation and translation in order to position the code so that it could be read, and scale it as well.

    The pattern was - you've guessed it ;) - a checkerboard. White bits were a '1', black bits a '0'. The application was meant for vehicle identification, a sticker placed on the roof of the vehicle in any orientation.

    We did this on a Targa vision board and an AT clone at 20 MHz in realtime. I'm pretty sure that today's computers could do a lot better than that.

    (well, not better than realtime, but better in terms of algorithm complexity).
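    The general technique looks something like the sketch below. This is not the original 80's code: it is simplified to in-plane rotation (the original had to undo three axes), the 4x4 grid and the read_bits helper are illustrative names, and only numpy is assumed. Once you have estimated the sticker's center, rotation, and scale in the image, you map each bit cell from sticker space into image space and threshold it:

        import numpy as np

        def read_bits(image, center, angle, scale, grid=4, cell=10.0):
            """Sample a grid x grid checkerboard of bits, given the sticker's
            estimated center (x, y), in-plane rotation angle, and scale."""
            c, s = np.cos(angle), np.sin(angle)
            rot = np.array([[c, -s], [s, c]])
            bits = []
            for row in range(grid):
                for col in range(grid):
                    # Cell center in sticker coordinates, origin at the middle.
                    local = (np.array([col, row]) - (grid - 1) / 2.0) * cell
                    # Rotate, scale, and translate into image coordinates.
                    x, y = rot @ local * scale + center
                    bits.append(1 if image[int(round(y)), int(round(x))] > 127 else 0)
            return bits

        # Tiny synthetic test: one bright cell on a dark background.
        img = np.zeros((200, 200), dtype=np.uint8)
        img[80:90, 110:120] = 255
        print(read_bits(img, center=np.array([100.0, 100.0]), angle=0.0, scale=1.0))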
  • It's not virtual (Score:2, Interesting)

    by Crookdotter ( 1297179 ) on Friday June 13, 2008 @06:04AM (#23775819)
    Some people seem to think that the singularity will result in a Matrix-like virtual world, which wouldn't impact the real world. This is simply not right. Since by definition the singularity is the point at which we can't know or understand what's really going on, there will be real-world consequences that may be staggering. Imagine if the singularity figured out that all thinking was a subset of a larger mind, and then pushed a button to connect it all, permanently. We would become 'one' with the whole universe. Sounds a bit wanky I know, but it's that kind of thing we're talking about, not just a good version of the internet with a neural interface.

    More likely the result will be something that we simply can't conceptualise, rather than the example above. Something that we just couldn't imagine no matter how smart we are or how we try. Imagine being an ant coming across a jet engine. What does it make of it? That will be us versus the singularity, and I suspect it will have the same effect on us as a jet engine would have on an ant passing through it.

    The rate of change is getting faster. More people are getting technofear as the rate increases. I think the singularity might happen over days or even hours when it happens, with the world/universe/dimensions/whatever_else_we_can't_think_of maybe changing in the blink of an eye. This is based on the idea that the singularity is unknowable and will change things as radically as can be changed, and I can't think higher than that. I don't mind it happening, but it is the end of my life as I run it. I'd just like to get a bit more drinking time in before it.....
  • by Burnhard ( 1031106 ) on Friday June 13, 2008 @06:48AM (#23775995)
    I couldn't disagree more. I was so enthused by this book that I went to university to study AI. After a couple of years of that I decided that what I was being taught was a load of rubbish and that, as Penrose had claimed, "machines" (i.e. computational devices such as exist today) could not "think" (I still graduated with a first, however, and it was of some use to me in my future career).

    The problem I had with Hofstadter was that he assigned the concept of recursion an almost magical property. Dennett does a similar thing with his "multiple drafts" theory. They may in themselves be enough to describe complex functioning in the brain (or any other system, come to think of it), but as Chalmers points out, at present a Materialist model of thought (or rather, consciousness, which we assume is required for thought) is impossible.

    I find Penrose and Hameroff's ideas of conscious action in the brain to be both fascinating and intuitively correct, even if the evidence does not exist at present. I noted with interest that scientists have recently discovered large-scale quantum effects in the leaves of plants when photosynthesizing. However, any such action in the brain will be difficult to pin down, for obvious reasons. I expect the science of consciousness to progress rather more slowly than other fields for this reason.
  • by smchris ( 464899 ) on Friday June 13, 2008 @08:32AM (#23776569)
    What's to add? But since I'm always ready with a slap at Kurzweil, I feel that Hofstadter has him pinned:

    1. "Ray Kurzweil is terrified by his own mortality", and

    2. "Rather ironically, [Kurzweil's] vision totally bypasses the need for cognitive science or AI"

    It is exactly this complex and elusive puzzle of "I" and "consciousness" Hofstadter explores that Kurzweil hopes we can conquer without having to think about it at all. Which I scorn as "magic science".

    I have to say I find the cyberpunk vision more appealing than Hofstadter does. It would be "the end of humanity as we know it." I'm not sure it would be "the end of human life." It might be evolution. I just think it is many hundreds of years in the future at the most "optimistic" (depending on your viewpoint).

  • by zacronos ( 937891 ) on Friday June 13, 2008 @09:46AM (#23777399)

    you don't give the being intelligence, rather you give it the ability to obtain intelligence from its experiences
    Exactly.

    For a class project, I once created a genetic algorithm to evolve a Reversi-playing algorithm (Reversi is also known as Othello). I coded the system not to be able to consider more than X moves in advance, because I wanted to prevent it from using "computer tricks" (i.e. I didn't want it looking farther ahead than a typical human could do with a moderate amount of practice). I tried playing with that number just to see what would happen, but I eventually left it at 4.

    By the time I was done with my evolving system, it could evolve in 4 days (using 4 ~2GHz Intel servers and an island genetic model, for those who know about genetic algorithms) an algorithm which could handily and consistently beat me and all of my friends.

    The interesting thing here is that I didn't even "initialize" it with a basic strategy or any personal training -- it started with randomly-generated strategies (most of which were no better than randomly placing pieces in legal squares). It then played against itself for those 4 days, learning through trial and error (as opposed to training by playing against a human). By the end, it had learned enough without human feedback that it could defeat a group of fairly intelligent (though not very practiced) humans at Reversi.

    I never analyzed the generated programs enough to fully understand how they worked, but I did inspect them a little. Each evolved algorithm consisted of no more than 40 lines of C code (which called various global helper functions such as get_opponent_score(), get_self_side_pieces(), etc which I had created). By inspecting algorithms that were able to beat me, I actually learned a thing or two about Reversi strategy.
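    For anyone curious about the shape of such a system, here is a minimal island-model GA sketch. The real project scored individuals by Reversi self-play; a toy fitness (closeness to a hidden target vector) stands in here so the sketch runs standalone, and all names and parameters are illustrative:

        import random

        GENES, ISLANDS, POP, GENS, MIGRATE_EVERY = 8, 4, 30, 200, 25
        TARGET = [random.uniform(-1, 1) for _ in range(GENES)]

        def fitness(ind):
            # Toy stand-in for "win rate in self-play": closeness to TARGET.
            return -sum((g - t) ** 2 for g, t in zip(ind, TARGET))

        def breed(a, b):
            cut = random.randrange(GENES)              # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                  # occasional mutation
                child[random.randrange(GENES)] += random.gauss(0, 0.1)
            return child

        islands = [[[random.uniform(-1, 1) for _ in range(GENES)]
                    for _ in range(POP)] for _ in range(ISLANDS)]
        for gen in range(GENS):
            for k in range(ISLANDS):
                pop = sorted(islands[k], key=fitness, reverse=True)
                elite = pop[:POP // 4]                 # truncation selection
                islands[k] = elite + [breed(random.choice(elite), random.choice(elite))
                                      for _ in range(POP - len(elite))]
            if gen % MIGRATE_EVERY == 0:               # ring migration between islands
                for k in range(ISLANDS):
                    islands[(k + 1) % ISLANDS][-1] = islands[k][0][:]
        best = max((ind for pop in islands for ind in pop), key=fitness)
        print("best fitness:", fitness(best))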
