Douglas Hofstadter Looks At the Future 387

An anonymous reader writes with a link to this "detailed and fascinating interview with Douglas Hofstadter (of Gödel, Escher, Bach fame) about his latest book, science fiction, Kurzweil's singularity and more ... Apparently this leading cognitive researcher wouldn't want to live in a world with AI, since 'Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid.' He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"
  • Singularity is naive (Score:5, Interesting)

    by nuzak ( 959558 ) on Thursday June 12, 2008 @05:59PM (#23771065) Journal
    Is it just me, or does the Singularity smack of dumb extrapolation? "Progress is accelerating by X, ergo it will always accelerate by X".

    I mean, if I ordered a burrito yesterday, and my neighbor ordered one today, and his two friends ordered one the next day, does that mean in 40 more days, all one trillion people on earth will have had one?
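
    For the record, the joke's arithmetic is daily doubling: 2^40 is about 1.1 trillion, comfortably more burrito-eaters than the planet has people. A rough sketch in Python (the doubling rate is the toy assumption from the example):

        # Naive extrapolation: burrito orders double every day (the toy assumption).
        orders = 1
        for day in range(40):
            orders *= 2
        print(orders)          # 1099511627776 -- about 1.1 trillion
        print(orders > 6.7e9)  # True: more burrito-eaters than people on Earth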
    • by servognome ( 738846 ) on Thursday June 12, 2008 @06:16PM (#23771227)
      I don't think it's necessarily dumb extrapolation, but I do think not all the variables are included.
      AIs exist in a perfectly designed environment: humans feed them power & data, and all they need to do is process. At some point computers will need to interact with the environment; it is then that everything will slow down, and probably take a step backwards.
      Massive amounts of processing power will have to get reassigned to tasks currently taken for granted, like acquiring data. Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.
      • Re: (Score:2, Interesting)

        The Singularity is not just about improving computers' metacognition until they become aware, but also about augmenting ourselves. We can be the self-improving 'artificial' intelligences. And processing power need not be purely electrical. Mechanical computers used to be the norm; isn't what they do also information processing? And what of 'natural' processors? I imagine if you engineered a brain-like neural mass of synthetic cells, it could play a mean game of chess. Replace the executive system of a monkeys
      • by sjhs ( 453964 ) * on Thursday June 12, 2008 @09:11PM (#23773037)

        Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.
        Actually, those two tasks are fairly easy compared to the task of winning a game of chess. This is because chess boards all have a fairly consistent look, and even if presented with a strange, unfamiliar-looking chess board [turbulence.org], a decent "chess vision" algorithm should have relatively little trouble inferring what the pieces are etc. Similarly, good robotics coupled with a good 3D world model should take care of moving the pieces relatively easily. So, your home computer with a webcam and a nice USB robotic arm [lynxmotion.com] attached could take care of those two tasks. Now, to deal with the 10^123 game-tree complexity...
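
        That 10^123 figure is essentially Shannon's back-of-the-envelope estimate: around 35 legal moves per position over a game of roughly 80 plies. A quick check in Python (the branching factor and game length are the usual rough assumptions):

            import math

            # Shannon-style estimate of the chess game tree: ~35 legal moves per
            # position, over a typical game of ~80 plies (half-moves).
            branching, plies = 35, 80
            exponent = plies * math.log10(branching)
            print(f"game tree ~ 10^{exponent:.1f}")  # ~10^123.5, the figure quoted above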
        • by SirSlud ( 67381 ) on Thursday June 12, 2008 @11:06PM (#23773859) Homepage
          The point is still valid within the context of the idea that whatever input is required for a human to develop healthily goes far beyond staring at a chess board and moving some pieces for 100 years.

          We'd need robots who could design equipment for themselves, to scale mountains. To invent instruments. To scour the depths of the ocean. The point is still valid in the sense that people are a product of their environment, and what makes the human experience so unique is that we're constantly attempting to gain access to more input. Presumably, any old brain in a box placed in a single room, unable to move, would cease being healthy after a while, and would probably cease being recognizably human after years, because I have to imagine that some part of the programming of the human mind requires, or at the very least implies, the ability to alter and modify our environment to a satisfactory degree.
        • Re: (Score:3, Informative)

          by servognome ( 738846 )

          a decent "chess vision" algorithm should have relatively little trouble inferring what the pieces are etc. Similarly, good robotics coupled with a good 3D world model should take care of moving the pieces relatively easily.

          I disagree. Most vision systems I've seen use very specific models to compare images to.
          A decent "chess vision" system would need to on it's own create 3D models by examining the pieces and interpret what those models are. The computer would have to be able to capture images of the boar

      • Re: (Score:3, Insightful)

        by joto ( 134244 )

        AI's exist in a perfectly designed environment, they have humans feed them power & data and all they need to do is process.

        I'm not arguing against this point, I just thought you had a silly example of the difficulties involved.

        Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.

        Yeah, it would add another $139 to the cost, like this device [amazon.com]. If you were thinking about a device that can recognize and move the pieces of any "normal" chess board, then it wou

    • by bunratty ( 545641 ) on Thursday June 12, 2008 @06:26PM (#23771333)
      You're not understanding what the singularity is about. What you're describing is a dumb extrapolation. The singularity, in contrast, is the idea that once we develop artificial intelligence that is as smart as the smartest scientists, there is the possibility that the AI could design an improved (i.e. smarter, faster) version of itself. Then that version could design a yet more improved version, even more quickly, and so on. That would mean the rate of scientific progress could outstrip what humans are capable of, and we could find ourselves surrounded by technology we do not understand, or perhaps cannot possibly understand. The idea behind the singularity is feedback, such as the recursion that can be created by the Y combinator in your sig.
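
      For anyone who hasn't met it, the Y combinator builds exactly that kind of feedback: it hands a function to itself, so recursion appears out of pure self-application. A minimal sketch in Python (factorial is just the stock demo):

          # Y combinator (applicative-order): recursion from self-application alone.
          Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

          # A non-recursive "step" function; Y ties the knot for it.
          fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

          factorial = Y(fact_step)
          print(factorial(5))  # 120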
      • Re: (Score:3, Insightful)

        by smallfries ( 601545 )
        The interview contains one of the best descriptions of the Singularity religion that I've heard:

        I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil's desperate hopes seriously cloud his scientific objectivity.
        • by bunratty ( 545641 ) on Thursday June 12, 2008 @06:59PM (#23771723)
          I believe that something like the singularity will come to pass, in the sense that super-smart machines will quickly develop. On the other hand, the whole idea of copying human brains just strikes me as silly. I'm really not sure what the interaction between humans and super-smart machines will be. That's one of the key points of the singularity; things will change so much so rapidly that we cannot predict what will happen.
          • Re: (Score:3, Interesting)

            The idea of copying a human brain is that *we* could be the super-smart machines, capable of extending ourselves relatively quickly (compared to evolutionary timescales), and almost without limit. If your consciousness were run on a computer, rather than the current wetware, that hardware could be extensible in ways not limited by biology.
            • by Unnngh! ( 731758 ) on Thursday June 12, 2008 @09:25PM (#23773133)
              "The question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim." - Dijkstra

              Would you still be you if the computer was running a simulation of your brain? If you have some sense of "self", that which is aware, how would that awareness be affected by having two or more copies of your mental processes in action at the same time? Is that awareness merely a byproduct of some mental/mechanical process or a chemical process, or is it something else still? Would your brain really be worth running in a computer?

              I tend to think, and a "thinking" computer would probably agree, that the computer is probably better off doing other things than running wetware facsimiles that grew out of a willy-nilly evolutionary process over millions of years.
          • by Anonymous Coward on Thursday June 12, 2008 @11:34PM (#23774057)
            It's easy to see what will happen. Just look back at the development of technology. The Industrial Revolution changed the way people work, but it did not change the goal people were working towards.

            We will always create tools to accomplish specific work and our tools (assuming they become aware) will do the same.

            Quite frankly, I don't care if some CEO can pay to upload himself into some AI construct. I will believe that the singularity has created true advancement when "the other 85% of humanity" has adequate access to clean water, nutritious food, and medical care.
            • Re: (Score:3, Interesting)

              by jacquesm ( 154384 )
              man what a pity you wrote that as an AC, I really wholeheartedly agree with you and I think the fascination with technology gets in the way of seeing the bigger picture.

              There are several serious problems with planet earth right now and if we don't get off our collective asses then within 50 years all this great tech we are developing will look like nice paint on the stern of the Titanic.

              The kind of problems we should be dealing with are fairly low tech; large-screen plasma TVs attract lots of $, clean wate
      • by flnca ( 1022891 )
        Hey did you read the Culture series of science-fiction novels by Iain M. Banks? :-)

        I think the concept of self-evolving AI is very awesome! :-)

        Writing a mind program is very interesting... if we understand how the human mind works in abstract terms, then we should be able to design AIs. I'm baffled at the apparent stagnation in AI development. Weren't we supposed to have this already in the '80s? ;-)
        • Re: (Score:3, Informative)

          by VdG ( 633317 )
          Something more like the Singularity can be found in a lot of Ken MacLeod's books, particularly The Stone Canal and The Cassini Division.

          MacLeod's an excellent writer and well worth a look if you haven't already come across him.
      • by magisterx ( 865326 ) <TimothyAWiseman@nospAM.gmail.com> on Thursday June 12, 2008 @07:38PM (#23772119)
        Just to clarify this excellent post slightly, the concept of a singularity does not entail AI per se. It requires an intelligence capable of enhancing itself in a recursive fashion, but this could in principle be achieved in a number of ways: genetic engineering which then permits the development of better genetic engineering, or the direct merging of biological and computer components in a fashion which permits developing better mergers, or, taken to the extreme, even simply ever better tools for use in developing technology to make yet better tools.

        If a singularity does occur, it will likely emerge from multiple paths at once.
        • by servognome ( 738846 ) on Thursday June 12, 2008 @08:33PM (#23772693)

          It requires an intelligence capable of enhancing itself in a recursive fashion, but this could in principle be achieved in a number of ways.
          I would argue this already exists. If you look at humans as a single social entity, since the start of specialization & trade, human intelligence has enhanced itself recursively.
          • Re: (Score:3, Interesting)

            by magisterx ( 865326 )
            This is certainly true to a degree, but it is merely the prerequisite for the emergence of the singularity. It is a necessary condition; whether it will also be a sufficient one remains to be seen.
      • by William Baric ( 256345 ) on Thursday June 12, 2008 @07:47PM (#23772233)
        I am already surrounded by technology I do not understand. What would be the difference for me?
      • by localman ( 111171 ) on Thursday June 12, 2008 @08:55PM (#23772907) Homepage
        The singularity, in contrast, is the idea that once we develop artificial intelligence that is as smart as the smartest scientists, there is the possibility that the AI could design an improved (i.e. smarter, faster) version of itself.

        My take, which sounds very anthropocentric, is that it won't work like that. I have a belief, which might be scary. It goes like this: we are as smart as it gets.

        Before you dismiss, here's the thing: intelligence and processing power are not the same thing. I know that computers will process much more raw information much more quickly than a human mind, but there's no understanding there. I also believe that at some distant point we'll be able to build a computer "brain" that does have the ability to understand as we do. What I don't believe is that just because it can function faster it will suddenly understand better.

        Despite the enormous amount of completely idiotic stuff humans do, the best and brightest humans in their best and brightest moments are nothing short of amazingly intelligent. Compared to what? Compared to everything else that we've ever encountered. This very interview is a good example. People like Hofstadter are dealing not with a lack of processing power, but running up against the very ambiguities of the universe itself. You've absolutely got to read GEB if you don't understand what I mean by that.

        So yeah: as little evidence as I have, I believe that humans are capable of (though not usually engaged in) the highest form of intelligence possible. I don't think a computer brain that runs 10x faster would be 10x smarter. It'll get the same tasks done more quickly, but its overall comprehension will be within an order of magnitude of anything the best humans can do.

        Let me say this too: while I respect the AI field, we've already got 6 billion and counting super-high-tech neural networks on this planet right now that can blow the pants off any computer in comprehension and creativity. Yet we are shit at benefiting from all that. I don't think mechanized versions are going to cause a dramatic improvement. It's a complex world.

        Cheers.
    • It's even funnier (Score:3, Interesting)

      by Moraelin ( 679338 )
      Actually, even if progress kept accelerating, singularities (a fancy word for when you divide by zero, or when your model otherwise breaks down) have so far never created some utopia.

      The last one we had was the Great Depression. The irony of it was that it was the mother of all crises of _overproduction_. Humanity, or at least the West, was finally at the point where we could produce far more than anyone needed.

      So much that the old-style laissez-faire free-market-automatically-fixes-everything capitalism model prett
      • Re: (Score:3, Interesting)

        by javilon ( 99157 )
        Bring forward in time one of those Achaeans to our world and ask him what he sees. He will talk about a golden age of culture and science and health and physical comfort. He won't understand what goes on around him most of the time. This is what the singularities you mention brought to this world. The same probably goes for whatever is lying in the future for us... it may be traumatic, but it will take us forward into an amazing world.
      • Re:It's even funnier (Score:5, Informative)

        by scottyokim ( 898934 ) on Thursday June 12, 2008 @08:22PM (#23772577)
        The current Federal Reserve chairman, Ben Bernanke, would disagree with you about the causes of the Great Depression. He says that it was the fault of the Federal Reserve. http://www.federalreserve.gov/BOARDDOCS/SPEECHES/2002/20021108/default.htm [federalreserve.gov] Check mises.org for other economic information.
      • by joebob2000 ( 840395 ) on Thursday June 12, 2008 @08:32PM (#23772669)
        It was caused by a shortage of money. The Fed tightened, causing a deflationary collapse. Without a certain critical mass of money, the economy will not function. The speculative excesses of the '20s were caused by a loose monetary policy that was then whipsawed to an overly tight policy. Ironically, the entity responsible for these actions, the Fed, was supposedly created to "smooth over" business cycles, not exacerbate them.
    • by pitchpipe ( 708843 ) on Thursday June 12, 2008 @06:43PM (#23771555)
      Ho ho ho, those silly AI researchers. Anyone with a brain (not their AI, haha) knows that evolution has produced the pinnacle of intelligence with its intelligent design. Along with the best flying machines, land traveling machines, ocean going machines, chess players, mining machines, war machines, space faring machines, etc, etc, etc. Will they never stop trying to best evolution only to be shown up time and again?
    • Re: (Score:2, Insightful)

      by nbates ( 1049990 )
      I think that the reason the extrapolation is not that naive is that there is already existing intelligence (not just us, but also many species), so saying "one day we'll develop an artificial intelligence" is just saying one day we'll reproduce what is already existing.

      If you use the Copernican principle (i.e. we are not special) it is easy to assume we, as species, are not specially intelligent nor specially stupid. So the statement that there could be AI more intelligent than us is not that hard to believ
      • Re: (Score:3, Insightful)

        by flnca ( 1022891 )
        The scientists have not produced a viable AI so far because they focus on the brain rather than on the mind. Brain function is poorly understood, as brain scientists often admit, and hence there's no way to deduce an AI from brain function. The right thing to do would be to focus on abstract things, namely the human mind itself, as it is understood by psychology, perhaps. Even spirituality can help. If god existed, how would s/he think? What would a ghost be like? What is the soul? What does our soul feel?
        • by khellendros1984 ( 792761 ) on Thursday June 12, 2008 @08:35PM (#23772713) Journal
          And the philosophers have been working on all of those questions for far longer than we've been systematically trying to understand the brain.

          I believe that we'll gradually come to understand the brain better, and from that, how the mind arises from its physical functioning. *That* is where an artificial intelligence can be designed, when we understand the cognition provided by the brain.
      • Re: (Score:3, Insightful)

        by nuzak ( 959558 )
        Oh I fully believe that one day we'll create a machine smarter than us. And that eventually it will be able to create a machine smarter than it. I do disagree with the automatic assumption that it'll necessarily take a shorter cycle each iteration.

        Usually the "singularity" is illustrated by some graph going vertical, where I can only assume that X=Time and Y="Awesomeness". The fact that I didn't commute to work on a flying car makes me a bit skeptical.
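
        The disagreement fits in one line of arithmetic: the graph only goes vertical if each design cycle shrinks geometrically. A toy model in Python (the starting cycle time and shrink factors are made-up numbers):

            # Each design cycle takes k times as long as the one before.
            # Infinitely many improvements fit in finite time only if k < 1:
            # t0 + t0*k + t0*k^2 + ... = t0 / (1 - k).
            t0 = 10.0  # years for the first AI to design its successor (made up)
            for k in (0.5, 0.9, 1.0):
                total = t0 / (1 - k) if k < 1 else float("inf")
                print(f"k={k}: all improvements arrive within {total:.0f} years")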
    • Re: (Score:3, Interesting)

      by Artifakt ( 700173 )
      I think some form of 'the Singularity' is at least possible, depending on just what version you mean, but, I've always had a problem with one idea many singularity-mavens seem to adore.
      That's the argument that, if we get something smarter than an un-augmented human, it will find it relatively easier to make something still smarter, and so on. First, how hard it is for something to reproduce, even at its own level of intelligence, varies widely with just what type of singularity model w
  • Hail to the robots (Score:4, Insightful)

    by oever ( 233119 ) on Thursday June 12, 2008 @06:01PM (#23771085) Homepage
    Perhaps Hofstadter has no need for AI or robots, but I would love to see robots reach our level of thinking within my lifetime. Work on AI shows us how we think, and that is very fascinating. The rise of the robots will be *the* big event in our lives.

    • Re: (Score:2, Funny)

      by Anonymous Coward
      I, for one, welcome... oh nevermind.
      • You might as well do it... at the rate that AI development is proceeding, the earth is more likely to be hit by a life-ending asteroid than be overtaken by AI lifeforms, unless they show up in spaceships and demand to see MacGyver.
    • Re: (Score:3, Interesting)

      by Penguinisto ( 415985 )
      ...as long as they don't reach our level of emotional frailties, or reach conclusions that are detrimental to continued human existence.

      I know, I know... Asimov's laws, etc etc. But... for a being to be sentient and at the same time reach the same level of thinking that we enjoy, you must give them the freedom to think, without any restrictions... as humans (ostensibly) do. This requires a level of both bravery and careful planning that is far greater than we as humans are capable of today.

      I'm not predi

      • Best bet would be to -- if ever possible -- give said robot the tools to be sentient, but don't even think of giving them any power to actually do more than talk (verbal soundwaves, not data distribution) and think.

        And if that's a good idea, you can bet there will be some who will not do it that way, precisely because not doing it that way is a bad idea, and some will not do it that way because, of course, they're smarter than everyone else.

        I don't see technology rendering evil or ego obsolete. I see it m

      • by flnca ( 1022891 )
        The old short story you mentioned was written by Fredric Brown. :-)

        If AI development focuses on abstract concepts, like the mind, then it can be written like a normal software application. It would have the speed of the computer system it runs on. If such an AI were capable of learning, it would quickly outperform humans, perhaps within minutes. (Did you see the movie Colossus? When the American and Russian machines begin to synchronize with each other, develop their own common language, and soon outsmart th
      • by AnyoneEB ( 574727 ) on Thursday June 12, 2008 @10:08PM (#23773439) Homepage

        I know, I know... Asimov's laws, etc etc.

        A lot of people seem to misunderstand Asimov's Laws of Robotics. They are not a suggestion for what laws real robots should follow. They are used to demonstrate that no simple set of rules could possibly make robots "safe". See the Wikipedia article [wikipedia.org], which mentions that.

    • by ady1 ( 873490 )
      This question has been bothering me for some time. What if AI can reach the level of human intelligence? And is human intelligence even a static target for it to achieve? We have evolved from very stupid creatures to our present state, and I imagine that we continue to do so, making our brains more sophisticated and us more intelligent. Just look at the people a century ago and today's modern society. Even a child today is more mentally capable than an adult was 50 years ago.
      • Re: (Score:2, Funny)

        by maxume ( 22995 )
        That's because you're drunk.

        Or would you like to show us all a group of children that could produce an atomic weapon given a few years and a few billion dollars?
      • Uhm, 50 years ago we had already discovered nuclear weapons, relativity, cars, airplanes, etc. Children now are no more or less mentally capable than in the past. It's just that we have more information about the world around us.
    • Re: (Score:3, Interesting)

      by khayman80 ( 824400 )
      I'll have to side with Hofstadter about AI being undesirable, but for different reasons. Most people seem to be worried about artificial intelligences rebelling against us and abusing us. I'm not. I'm worried about humans abusing the artificial intelligences.

      I think that most people who want AI for pragmatic reasons are essentially advocating the creation of a slave race. You think companies/governments are going to spend billions of dollars creating an AI, and then just let it sit around playing Plays

      • Re: (Score:3, Insightful)

        by glittalogik ( 837604 )
        I suspect you may have read one too many Arthur C. Clarke short stories - artificial intelligence and artificial emotion are far from mutually inclusive by default. However, I agree with you to the extent that humans should maintain some level of compassion/respect even for inanimate objects, if only because we need the practice.

        There is hope though, check out Wired's R is for Robot [wired.com] for some interesting insights into human/machine interaction.
  • The discussion of souls, soul shards, etc. was not what I expected, but I enjoyed this material anyway, and I enjoyed the entire book.

    I like and more or less agree with Hofstadter's general take on AI also: I have been very interested in AI since the mid-1970s when I read "Mind Inside Matter", but I also appreciate the spiritual side of human life and I still look at human consciousness as a mystery although attending one of the "human consciousness and quantum mechanics" conferences sort of has me thinkin
    • by abigor ( 540274 ) on Thursday June 12, 2008 @06:13PM (#23771199)
      That's what mathematician Roger Penrose thinks also, in case you weren't aware. You may want to read his book "The Large, the Small, and the Human Mind".
      • I read Penrose's "The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics" (with its foreword by Gardner) a long time ago - at the time I did not agree with him, but now I mostly do think that non-quantum digital computers "lack something" required for consciousness and Real AI(tm).

        I will look at "The Large, the Small, and the Human Mind" - I just added it to my Amazon shopping cart - thanks for the recommendation.
        • Re: (Score:2, Informative)

          Neurons are far too large to be affected by QM effects.
          • by lgw ( 121541 )
            Neurons and the as-yet-fictional quantum computers are both quite different from current computers in that they are both (effectively) massively parallel. It's not unreasonable to suspect, given the complete failure to take even one step towards "real" AI (machine sentience), that this difference might matter.
      • by Angostura ( 703910 ) on Thursday June 12, 2008 @06:25PM (#23771327)
        I found The Emperor's New Mind a remarkably irritating book. As far as I could tell, the whole tome basically boiled down to 'Consciousness is spooky and difficult to explain, quantum effects are spooky and difficult to explain, ergo human consciousness probably has its basis in quantum effects'. I didn't read any of his books after that one.

        I like Hofstadter a *lot* though. His book of essays from SciAm, Metamagical Themas, is still worth grabbing if you ever see a copy.
        • by thrawn_aj ( 1073100 ) on Thursday June 12, 2008 @07:33PM (#23772077)
          You might be right about Penrose's thesis (about the mind being quantum mechanical) in the book - I have no idea, nor do I particularly care. I have read that book several times over my high school/undergrad/grad career (physics) and I have NEVER read it to the very end (so, I essentially skipped over all his ruminations on the nature of the mind :P).

          BUT, I think that his chapters on math and physics and their interface (everything prior to the biology chapters) constitute the SINGLE GREATEST and only successful attempt ever to present a NON-DUMBED DOWN layperson's introduction to mathematical physics. I gained more physical and mathematical insight from that book than I did from any other source prior to graduate school. For that alone, I salute him. Popularizations of physics a la Hawking are a dime a dozen. An "Emperor's new mind" having (what I can only describe as) 'conceptual math' to TRULY describe the physics comes along maybe once in a lifetime.

          His latest book is the extension of that effort and the culmination of a lifetime of thinking clearly and succinctly about math and physics. He is the only writer alive who imo has earned the right to use a title like "The road to reality: a complete guide to the laws of physics".

          As for Hofstadter, GEB was merely pretty (while ENM was beautiful), but essentially useless (to me) beyond that. Perhaps it was meant as simply a guide to aesthetic appreciation, in which case it succeeded magnificently. As far as reality is concerned, it offered me no new insight that I could see. Stimulating prose though - I guess no book dealing with Escher can be entirely bad. I haven't read anything else by Hofstadter so I can't comment there.

    • Re: (Score:3, Funny)

      by e2d2 ( 115622 )
      Soul shards?

      NERF LOCKS!
  • Intelligent Beings (Score:2, Insightful)

    by hawkeye_82 ( 845771 )
    I personally believe that AI will never happen with us humans at our current level of intelligence.

    To build a machine that is intelligent, we need to understand how our own intelligence works. If our intelligence were simple enough to understand and decipher, we humans would be too simple to understand or decipher it.

    Ergo, we humans will never ever build a machine that is intelligent. We can build a machine that will simulate intelligence, but never actually make it intelligent.
    • What's the difference between simulating intelligence and "actual" intelligence? If you can't tell the difference via your interactions, then for all intents and purposes there is no difference.

      Also, it's fairly "simple" to build an intelligence without understanding how intelligence works. You can either make a whole human brain simulation, or you can go have children.

      • I cannot prove that I have consciousness; a computer could probably simulate my failure at witty repartee on Slashdot with ease. But I do have consciousness.

        I put it to you that when people talk about "actual" versus "simulated" intelligence, this is what they mean. And it certainly matters to the one who is experiencing it!

        • I cannot prove that I have consciousness; a computer could probably simulate my failure at witty repartee on Slashdot with ease. But I do have consciousness.
          Well there's the problem. How do I know that you have consciousness? And how do you know an artificial intelligence wouldn't have consciousness as well?
      • by Knara ( 9377 )

        It's the "moving goalposts" problem with AI. Any time something reaches some threshold of intelligence, people (for whatever reason) decide "well, that's not really intelligent behavior, what would be intelligent behavior is ".

        I tend to agree that so long as the output seems intelligent, the system that produced it can also be considered reasonably intelligent.

        • Re: (Score:3, Interesting)

          by lgw ( 121541 )
          It's not moving goalposts at all: it's a total failure to take even the smallest step towards machine sentience, by any intuitive definition. Something key is missing. It's not like we've made software that's as smart as a hamster, and now we're working on making it as smart as a dog.

          The field of AI research has taken tasks that were once thought to require sentience to perform, and found ways to perform those tasks with simple sets of rules and/or large databases. Isn't even the term "AI" passé in the fi
    • That would be true, except for the fact that your intelligence is only possible because your brain is constructed of billions of cells with discrete, knowable functions and mechanisms. Ultimately, because it is made of a finite number of components, components that are getting better understood over time, it is only a matter of time before humans like us engineer something of equal if not superior function and design compared to our own intelligences.
      • Re: (Score:3, Interesting)

        by e2d2 ( 115622 )
        Also, to add to that, there is no requirement for us to understand the brain in depth, only that we understand how we learn, and in that respect we've advanced by leaps and bounds over the years. Plus, let's not limit ourselves to the human brain. For instance, a dog is intelligent too. A piece of software with the intelligence of a dog could be very useful. Hell, one with the decision making abilities of a bird would be a nice start. And on and on..

        On a tangent:
        Intelligence is such a broad word, and then to tack on
        • by lgw ( 121541 )
          I prefer the term "machine sentience" for this thing we've so far failed to create, or even just "self awareness". We can write software to do all sorts of difficult tasks, but nothing so far that's smart in the way that even a dog is smart.
        • by lelitsch ( 31136 )

          For instance, a dog is intelligent too. A piece of software with the intelligence of a dog could be very useful. Hell one with the decision making abilities of a bird would be a nice start. And on and on..
          I prefer living in a world where my laptop doesn't chase cats. Or a world where my laptop doesn't try to fly away when it sees a cat.
    • AI is simple, really. All you need to do is get over the vision problem. The vision problem is digitizing the world into a 3D imagination space. Once you have sensors that can digitize the world, all sorts of robots will pop up. Arguably someone may raise the point that programming a computer isn't AI, but programming is sort of like how robots learn, like Neo in the Matrix. If you want a robot to learn on its own, you'll have to give it trust algorithms to know if it is being told the truth or not. You
    • To build a machine that is intelligent, we need to understand how our own intelligence works.
      I don't think so, any more than we had to understand how birds flew to make airplanes, or how our muscles work to make an internal combustion engine. I don't know why so many people believe copying humans is the shortest path to AI.
    • True...

      the problem is that we're buggy and we make buggy things.

      The most advanced game for the PC right now:
      http://youtube.com/watch?v=GNRvYqTKqFY [youtube.com]
    • To build a machine that is intelligent, we need to understand how our own intelligence works. If our intelligence was simple enough to understand and decipher, we humans would be too simple to understand it or decipher it.

      It is a fallacy to think that humans cannot create something more intelligent than ourselves. The creation of AI is analogous to the creation of another human: you don't give the being intelligence, rather you give it the ability to obtain intelligence from its experiences. You don't even need to know how it works!

      That's the beauty of programs that can adapt/self modify.

      • Re: (Score:3, Interesting)

        by zacronos ( 937891 )

        you don't give the being intelligence, rather you give it the ability to obtain intelligence from its experiences

        Exactly.

        For a class project, I once created a genetic algorithm to evolve a Reversi-playing algorithm (Reversi is also known as Othello). I coded the system not to be able to consider more than X moves in advance, because I wanted to prevent it from using "computer tricks" (i.e. I didn't want it looking farther ahead than a typical human could do with a moderate amount of practice). I tried playing with that number just to see what would happen, but I eventually left it at 4.

        By the time I was done
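
        For anyone curious, the skeleton of such a system looks roughly like the sketch below. This is not the actual class project: the fitness function here is a stand-in (distance to a hidden target weight vector) where the real thing would score win rates over Reversi games with lookahead capped at 4.

            import random

            # Minimal genetic-algorithm skeleton. An individual is a weight vector
            # for a board-evaluation function; fitness is a placeholder for
            # "win rate with lookahead capped at 4" from the project described above.
            GENES, POP, GENERATIONS = 8, 30, 50
            TARGET = [random.uniform(-1, 1) for _ in range(GENES)]  # stand-in for "good weights"

            def fitness(ind):
                return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

            def mutate(ind, rate=0.1):
                return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in ind]

            def crossover(a, b):
                cut = random.randrange(1, GENES)
                return a[:cut] + b[cut:]

            pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
            for gen in range(GENERATIONS):
                pop.sort(key=fitness, reverse=True)
                elite = pop[: POP // 3]  # keep the best third, breed the rest from it
                pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                               for _ in range(POP - len(elite))]
            print(round(fitness(max(pop, key=fitness)), 4))  # climbs toward 0 as weights converge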

    • by tnk1 ( 899206 )
      The human brain is a finite set of matter with a finite, albeit large, number of possible states and interactions. Given a finite state, a computer of sufficient design and "size" should be able to replicate it.

      Bear in mind, the human brain is a wonder, but it's not a wonder so much because it is unreplicable. I mean, we replicate the brain every day in new children. The brain is not just intelligent, but also highly efficient, because it does all of that in a few pounds of brain matter.

      So, no, I don't thi
  • by the_humeister ( 922869 ) on Thursday June 12, 2008 @06:08PM (#23771161)

    Am I disappointed by the amount of progress in cognitive science and AI in the past 30 years or so? Not at all. To the contrary, I would have been extremely upset if we had come anywhere close to reaching human intelligence -- it would have made me fear that our minds and souls were not deep. Reaching the goal of AI in just a few decades would have made me dramatically lose respect for humanity, and I certainly don't want (and never wanted) that to happen.
    Hehe, you mean all the nasty things humanity has done to each other haven't made you lose respect?

    I am a deep admirer of humanity at its finest and deepest and most powerful -- of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on. I find endless depth in such people (many more are listed on [chapter 17] of I Am a Strange Loop), and I would hate to think that all that beauty and profundity and goodness could be captured -- even approximated in any way at all! -- in the horribly rigid computational devices of our era.
    When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?
    • When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?

      Thinking by the numbers. Comparing specs. That's the same dispassionate thinking that insists a PC is a better value than a Mac and an iPod is lame because...well, you know the joke. The same thinking that values speed and power and low cost and cool and

    • Hehe, you mean all the nasty things humanity has done to each other haven't made you lose respect?

      and

      When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?

      Humans are tempered by their huge vulnerabilities. It does not take much at all to turn us "off". I don't have much faith in a human-created intelligent, sentient robot with very few vulnerabilities. You can still stop and kill a human criminal.

      I don't expect much, but my hope is that most of these new robots will want to work with us (and protect us) rather than enslave or exterminate us. Humans throughout history have ruled through the ultimate threat of violence. Why would robots be different?

      Think

      • Think about it. If there were few or no repercussions for your bad actions, would you stop doing them?
        Of course not! If there's anything that MMORPGs have shown us, well that's it.
    • by naoursla ( 99850 )

      When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?

      Be nice to the person with future shock. It takes some people a little time to come around to the idea that the only thing humans are ultimately good for is serving as a daemon in some obscure subsystem of a weakly godlike intelligence.

      And then it takes even longer

    • by hypomorph ( 1305401 ) on Thursday June 12, 2008 @09:20PM (#23773089)

      ... I would hate to think that all that beauty and profundity and goodness could be captured -- even approximated in any way at all! -- in the horribly rigid computational devices of our era.
      When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?
      I believe this misses Hofstadter's idea. That the "horribly rigid computational devices of our era" are currently implemented in silicon is immaterial. He means that our minds and consciousnesses are such beautifully complex machines that the crude computational devices and formal languages we have so far developed are insufficient to model them. Hofstadter shares your sentiment that the medium in which his `strange loops of consciousness' are realized makes no difference at all -- it is the pattern that matters, and only the pattern that matters.
  • by lenski ( 96498 ) on Thursday June 12, 2008 @06:18PM (#23771245)
    I agree with Douglas; I expect a world shared with AI beings would be uncomfortably unfamiliar to me. Then again, based on my understanding of Kurzweil's Singularity, it's unlikely to affect me much: I plan to live out my life in meatspace, where things will go on much as before.

    It's worth noting, however (also according to my understanding of Kurzweil's projections), that for those willing to make the leap, much of the real growth and advancement will occur in Matrix-space. It's an excellent way to keep "growing" in power and complexity without using more energy than can be supplied by the material world.

    Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too. ...And pollution and loss of habitat, but through all that, they still live amphibian lives.

    In fact, I can't help but wonder how many of us will even recognize when the first AI has arrived as a living being. Stretching the frog analogy probably too far: What is a frog's experience of a superior life form? I am guessing "not-frog". So I am guessing that my experience of an advanced AI life-form is "whatever it does, it/they does it bloody fast, massively parallel, and very very interesting...". Being in virtual space though, AI "beings" are likely only to be of passing interest to those who remain stuck in a material world, at least initially.

    Another analogical question: Other than reading about the revolution in newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually, arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward about the same as before. The real action was taking place on the boundary, not in the places left behind.

    (Slightly off topic: This is why I think derivatives of Second Life type virtual worlds will totally *explode* in popularity: They let people get together without expending lots of jet fuel. I believe virtual world technology IS the "flying car" that was the subject of so many World's Fair Exhibits during the last century.)
    • by Zarf ( 5735 ) on Thursday June 12, 2008 @06:58PM (#23771709) Journal
      The short answer is that Hofstadter and Kurzweil are both wrong. I think Kurzweil's technological development arcs (all those neat exponential curves) probably are disturbingly correct. And Hofstadter is probably right about souls being far more complex things than what Kurzweil believes.

      So they are both right in ways and wrong in ways. The real rub is that Kurzweil's future is probably farther away but not for the reasons that Hofstadter thinks. The real reasons are probably based in bad technology decisions we made in the last century or two.

      We (humanity) have made several technological platform choices that are terrifyingly hard to change now. These choices drove us down a path that we may have to abandon, and thus suffer a massive technological setback. Specifically, the choices were oil, steel, and electricity.

      Oil (fossil fuels) will run out. Steel (copper too) is growing scarcer. Electricity is too hard to store and produce (and heats silicon rather inconveniently). Data centers today are built with steel and located near power plants that often produce power using fossil fuel. That means even a Data Center driven life will be affected by our platform limitations.

      When we start hitting physical limits on what we can do with these, and on how much of these supplies we can get, we will be forced to conserve, change, or stop advancing. Those are very real threats to continued technological advancement. And they don't go away if you hide in Second Life.

      Show me a Data Center built with ceramic and powered by the sun or geo-electric sources and I'll recant.
    • Another analogical question: Other than reading about the revolution in newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually, arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward about the same as before. The real action was taking place on the boundary, not in the places left behind.

      It depends on who you were and where you were. If you were Louis XVI, the effects were pretty radical and immediate, and the experience was shared to a variety of degrees by quite a lot of the people of France.

      The further impact of the French Revolution was felt quite acutely by much of Europe, and was for decades (The Napoleonic Wars, major political upheaval, military drafts, injuries, death, etc.) Remember, the American Revolution can be very clearly implicated as a major causative factor in the

    • Re: (Score:3, Insightful)

      by timeOday ( 582209 )

      Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too. ...And pollution and loss of habitat, but through all that, they still live amphibian lives.
      How do you know we're analogous to amphibians instead of dinosaurs?
    • To say that the advent of powerful AI will spell the end of human life is to misunderstand humanity.

      Humanity is an evolving entity, not a static and stagnant one. It has been evolving beyond its mere animal origins for a long time now, and the rate of change is increasing exponentially in step with our mastery of technology. In fact, through our intelligence, we have taken control of evolution away from nature (a very haphazard director at best), and are beating a path towards a very engineered and steadi
    • by lgw ( 121541 )

      It's worth noting, however, that for those willing to make the leap, much of the real growth and advancement will occur in Matrix-space. It's an excellent way to keep "growing" in power and complexity without using more energy than can be supplied by the material world.

      For computational ability to continue to increase exponentially, power consumption must increase exponentially. The current exponential curve is limited by this, and not too many decades out at that. Even the most starry-eyed singularity embracers expect the growth to stop once we're using the entire energy output of the Sun for computation. I suspect the curve will flatten long before that, but with exponential growth even that limit is surprisingly close.

      If the energy needed for computation doubles every N
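
      Finishing that arithmetic with round numbers: humanity currently uses on the order of 10^13 W, the Sun emits about 3.8 x 10^26 W, so demand that doubles every couple of years closes the gap in about 45 doublings, i.e. on the order of a century. A sketch in Python (the starting power and doubling time are round-number assumptions):

          import math

          # How long until computation consumes the Sun's entire output?
          # Assumptions: all of today's ~1e13 W of human power use goes to
          # computing, demand doubles every 2 years, the Sun emits ~3.8e26 W.
          start_watts, sun_watts, doubling_years = 1e13, 3.8e26, 2.0

          doublings = math.log2(sun_watts / start_watts)
          print(f"{doublings:.1f} doublings ~= {doublings * doubling_years:.0f} years")
          # 45.1 doublings ~= 90 years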

  • Overlords? (Score:5, Funny)

    by CarAnalogy ( 1191053 ) on Thursday June 12, 2008 @06:19PM (#23771263)
    Hofstadter, for one, does _not_ welcome our new AI overlords.
  • In the impending robot wars, this guy will be hailed as a champion of humanity. Or just be the guy who said "I told ya so!".

    Obligatory xkcd plug. [xkcd.com]

  • Cognitive science is something many AI people don't consider. What makes up the human mind? Are emotions really needed? I've recently started a blog dealing with these sorts of things that a few of you may find interesting. http://www.eatheists.com/2008/05/the-challenge-of-mind-duplication-and-transfer/ [eatheists.com]
  • Cyborgs, not AI (Score:4, Interesting)

    by Ilyakub ( 1200029 ) on Thursday June 12, 2008 @07:44PM (#23772201)

    I am far more interested in digitally enhancing human bodies and brains than creating a new AI species.

    Consider this: throughout the eons of natural and sexual selection, we've evolved from fish to lizards, to mammals, to apes, and eventually to modern humans. With each evolutionary step, we have added another layer to our brain, making it more and more powerful and sophisticated and, most importantly, more self-aware, more conscious.

    But once our brains reached the critical capacity that allows abstract thought and language, we've stepped out of nature's evolutionary game and started improving ourselves through technology: weapons to make us better killers, letters to improve our memory, mathematics and logic to improve our reasoning, science to go beyond our intuitions. Digital technology, of course, has further accelerated the process.

    And now, without even realizing it, we are merging our consciousness with technology and building the next layer of our brain. The more integrated and seamless the communication between our brains and machines becomes, the closer we get to the next stage in human evolution.

    Unfortunately, there is a troubling philosophical nuance that may bother some of us: how do you think our primitive reptilian brain feels about having a frontal lobe stuck to it, controlling its actions for reasons too sophisticated for it to ever understand? Will it be satisfying for us to be to our digital brain as our primitive urges and hungers are to us?

  • He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"
    Well, duh! If it's the end of human life, then he won't be around!
  • by Genda ( 560240 ) <mariet@go[ ]et ['t.n' in gap]> on Thursday June 12, 2008 @08:39PM (#23772741) Journal

    This topic seems to make the nerdy and the not-so-nerdy alike a little crazy. Let's see if we can't illuminate this conversation just a wee bit, eh?

    1. A singularity is completely unpredictable... "'X' divided by '0'" has no sane meaning... you can't understand it "by definition"; at best you can talk about it. So those speaking of utopia, dystopia, and autopia are simply clue-free. The question of whether it will be "good" or "bad" for humanity will be answered "yes" for some souls and "no" for others; it will be a great cosmic crap shoot, and speaking generally, I would recommend you listen to Ford and just keep your towel close by.
    2. The rate of "All Human Knowledge" now doubles in just over four years. That rate is accelerating and has been for some time now. This is the fundamental driver behind all other asymptotic trends in human evolution. So as our knowledge grows, our ability to create ever more powerful tools grows; the tools expose more knowledge, and the probability that we'll either significantly reengineer ourselves or create a new sentience on the planet becomes a simple numbers game. Not if, but when. Subsequently, if you give a human-level intelligence the necessary tools to build its successors, it will be a very short matter of time before you are confronted with a world of unrecognizably smart babies indeed. At that point history is pointless, and the future gets really fuzzy.
    3. "Careful Analysis", shows these trends have been at work since simple amino acids and sugars joined to make the earliest life. You can trace all the way back from where we are today all the way to the very beginning, and looking at it from multiple contexts... as information, biological diversity, complexity, intelligence, the growth of sentience, autonomy, the ability to go futher and futher from the planet, sentience and cognitive capacity, as many different points of view as you like. You can see a clear trend, a predictable process of accelerating evolution, ultimately reaching the point at which the information density meets and possibly exceeds the capacity for quantum space time hold it. That would be by definition a singularity. Human beings (as we konw them) would have been gone for a very long time (computationally speaking) before that happened.
    4. As our technology blossoms and accelerates at ever greater velocity, we will enhance ourselves, augment ourselves, and reengineer ourselves to keep up with our toys and reap the benefits of more personal time and greater productivity. By virtue of this pressure, as in any approaching singularity, tidal forces will quickly begin to smear out the flavors of humanity until they are unrecognizable from one another. First adopters will evolve faster and faster, while Luddites will fall further and further behind, and those in the middle will fall into the multitude of discrete states within the growing gulf formed by the two disparate entities at either end. Do not worry: if you don't want to become a godling, there will be all kinds of human variations you can play with, consistent with the degree to which you've chosen to reengineer yourself.
    5. So it is primarily ignorance, xenophobia, and hubris that speak when most folks express fear and concern about a singularity. We are already at the verge, and we are no less human than our great-great-grandparents (and you'd just have to take my word for it when I say the elder ones still walking the world are intrigued by the odd folks that now people the planet). To the caterpillar, the butterfly looks like dying. Giving up your humanity to become something more doesn't sound the least bit like bad news to me. You just want to begin looking now at how and where you want your trajectory to run you through this process. You might not want to end up with a huge artificial brain. Maybe a quasi-human super-sentient who overcame death, skipping around the universe with a few close friends, sounds more like your heart's desire. Like I said, just know where the towel is, and keep it close.
  • by RexDevious ( 321791 ) on Thursday June 12, 2008 @09:00PM (#23772961) Homepage Journal
    being killed by a super-intelligent robot, if I had some hand in creating it? Think how awesome that would be - you build something intelligent enough to not only not need you anymore, but that also determines the world is actually better off without you. Maybe it's just because I figure, if I helped create it, I'd be pretty damn far down on the list of people the robots figured the world would be better off without.

    And don't give me any of that, "Oh, it'll kill coders *first* because they represent the biggest threat" nonsense. Do you know how hard it is to get a machine to exhibit anything *remotely* resembling intelligence? If you created something capable of even *reasoning* that you were a threat, you'd have created something smart enough to deal with that deduction in better ways than killing you. And if it's not really smarter than you, but just more dangerous - like those automated border guard robots they had to turn off because they turned their guns on the engineers during the demo - well, the world probably *is* better off without you. *First* you make it intelligent, *then* you install the guns. Jeez - how hard is that to figure out?

    Or maybe it's just that running from Terminator-style robots would be far more exciting than sitting at this freakin' desk all day. But to me, dying at the hands of a creation that surpassed your intelligence would be right up there with dying of a heart attack during your honeymoon with Jessica Alba. The kind of death where the epitaph on your tombstone could be: "My work here is done!"
  • Hofstadter vid (Score:4, Informative)

    by religious freak ( 1005821 ) on Friday June 13, 2008 @04:28AM (#23775505)
    I was looking forward to hearing a coherent rebuttal of the singularity, because the theory seemed to make so much sense to me once I heard it completely laid out. This is Hofstadter's response. I was not impressed by his argument or rationale; in fact, I don't recall seeing either in his presentation... just an "it's not possible" attitude.

    http://singinst.org/media/tryingtomuserationally [singinst.org]
  • by smchris ( 464899 ) on Friday June 13, 2008 @08:32AM (#23776569)
    What's to add? But since I'm always ready with a slap at Kurzweil, I feel that Hofstadter has him pinned:

    1. "Ray Kurzweil is terrified by his own mortality", and

    2. "Rather ironically, [Kurzweil's] vision totally bypasses the need for cognitive science or AI"

    It is exactly this complex and elusive puzzle of "I" and "consciousness" Hofstadter explores that Kurzweil hopes we can conquer without having to think about it at all. Which I scorn as "magic science".

    I have to say I find the cyberpunk vision more appealing than Hofstadter does. It would be "the end of humanity as we know it." I'm not sure it would be "the end of human life." It might be evolution. I just think it is many hundreds of years in the future at the most "optimistic" (depending on your viewpoint).
