Douglas Hofstadter Looks At the Future

An anonymous reader writes with a link to this "detailed and fascinating interview with Douglas Hofstadter (of Gödel, Escher, Bach fame) about his latest book, science fiction, Kurzweil's singularity and more ... Apparently this leading cognitive researcher wouldn't want to live in a world with AI, since 'Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid.' He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"
  • Hail to the robots (Score:4, Insightful)

    by oever ( 233119 ) on Thursday June 12, 2008 @06:01PM (#23771085) Homepage
    Perhaps Hofstadter has no need for AI or robots, but I would love to see robots reach our level of thinking during my lifetime. Work on AI shows us how we think, and that is very fascinating. The rise of the robots will be *the* big event in our lives.

  • Intelligent Beings (Score:2, Insightful)

    by hawkeye_82 ( 845771 ) on Thursday June 12, 2008 @06:08PM (#23771159) Journal
    I personally believe that AI will never happen with us humans at our current level of intelligence.

    To build a machine that is intelligent, we need to understand how our own intelligence works. But if our intelligence were simple enough to understand and decipher, we humans would be too simple to do the understanding and deciphering.

    Ergo, we humans will never ever build a machine that is intelligent. We can build a machine that will simulate intelligence, but never actually make it intelligent.
  • by Angostura ( 703910 ) on Thursday June 12, 2008 @06:25PM (#23771327)
    I found The Emperor's New Mind a remarkably irritating book. As far as I could tell, the whole tome basically boiled down to 'Consciousness is spooky and difficult to explain; quantum effects are spooky and difficult to explain; ergo, human consciousness probably has its basis in quantum effects'. I didn't read any of his books after that one.

    I like Hofstadter a *lot* though. His book of essays from SciAm, Metamagical Themas, is still worth grabbing if you ever see a copy.
  • by smallfries ( 601545 ) on Thursday June 12, 2008 @06:47PM (#23771597) Homepage
    The interview contains one of the best descriptions of the Singularity religion that I've heard:

    I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil's desperate hopes seriously cloud his scientific objectivity.
  • by bunratty ( 545641 ) on Thursday June 12, 2008 @06:59PM (#23771723)
    I believe that something like the singularity will come to pass, in the sense that super-smart machines will quickly develop. On the other hand, the whole idea of copying human brains just strikes me as silly. I'm really not sure what the interaction between humans and super-smart machines will be. That's one of the key points of the singularity: things will change so much, so rapidly, that we cannot predict what will happen.
  • by nbates ( 1049990 ) on Thursday June 12, 2008 @07:10PM (#23771795)
    I think the reason the extrapolation is not that naive is that intelligence already exists (not just us, but also many other species), so saying "one day we'll develop an artificial intelligence" is just saying that one day we'll reproduce what already exists.

    If you use the Copernican principle (i.e., we are not special), it is easy to assume that we, as a species, are neither especially intelligent nor especially stupid. So the idea that there could be an AI more intelligent than us is not that hard to believe.

    All this, of course, assuming you don't believe in the soul, god, ghosts and those things.
  • by timeOday ( 582209 ) on Thursday June 12, 2008 @07:19PM (#23771935)

    Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too. ...And pollution and loss of habitat, but through all that, they still live amphibian lives.
    How do you know we're analogous to amphibians instead of dinosaurs?
  • by flnca ( 1022891 ) on Thursday June 12, 2008 @07:21PM (#23771949) Journal
    The scientists have not produced a viable AI so far because they focus on the brain rather than on the mind. Brain function is poorly understood, as brain scientists often admit, and hence there's no way to derive an AI from brain function. The right thing to do would be to focus on abstract things, namely the human mind itself, as it is understood by psychology, perhaps. Even spirituality can help. If God existed, how would s/he think? What would a ghost be like? What is the soul? What does our soul feel? Those questions, not the functional elements of a device we don't fully understand, are the key to artificial intelligence.
  • by magisterx ( 865326 ) <TimothyAWiseman@nospAM.gmail.com> on Thursday June 12, 2008 @07:38PM (#23772119)
    Just to clarify this excellent post slightly: the concept of a singularity does not entail AI per se. It requires an intelligence capable of enhancing itself in a recursive fashion, but this could in principle be achieved in a number of ways: genetic engineering that then permits better genetic engineering; the direct merging of biological and computer components in a fashion that permits developing better mergers; or, taken to the extreme, even simply ever-better tools for developing technology to make better tools yet.

    If a singularity does occur, it will likely emerge from multiple paths at once.
  • by nuzak ( 959558 ) on Thursday June 12, 2008 @07:59PM (#23772369) Journal
    Oh I fully believe that one day we'll create a machine smarter than us. And that eventually it will be able to create a machine smarter than it. I do disagree with the automatic assumption that it'll necessarily take a shorter cycle each iteration.

    Usually the "singularity" is illustrated by some graph going vertical, where I can only assume that X=Time and Y="Awesomeness". The fact that I didn't commute to work on a flying car makes me a bit skeptical.
  • by glittalogik ( 837604 ) on Thursday June 12, 2008 @08:08PM (#23772453)
    I suspect you may have read one too many Arthur C. Clarke short stories - artificial intelligence and artificial emotion are far from mutually inclusive by default. However, I agree with you to the extent that humans should maintain some level of compassion/respect even for inanimate objects, if only because we need the practice.

    There is hope though, check out Wired's R is for Robot [wired.com] for some interesting insights into human/machine interaction.
  • by joebob2000 ( 840395 ) on Thursday June 12, 2008 @08:32PM (#23772669)
    It was caused by a shortage of money. The Fed tightened, causing a deflationary collapse. Without a certain critical mass of money, the economy will not function. The speculative excesses of the '20s were caused by a loose monetary policy that was then whipsawed to an overly tight policy. Ironically, the entity responsible for these actions, the Fed, was supposedly created to "smooth over" business cycles, not exacerbate them.
  • by servognome ( 738846 ) on Thursday June 12, 2008 @08:33PM (#23772693)

    It requires an intelligence capable of enhancing itself in a recursive fashion, but this could in principle be achieved in a number of ways.
    I would argue this already exists. If you look at humans as a single social entity, since the start of specialization & trade, human intelligence has enhanced itself recursively.
  • by khellendros1984 ( 792761 ) on Thursday June 12, 2008 @08:35PM (#23772713) Journal
    And the philosophers have been working on all of those questions for far longer than we've been systematically trying to understand the brain.

    I believe that we'll gradually come to understand the brain better, and from that, how the mind arises from its physical functioning. *That* is when an artificial intelligence can be designed: when we understand the cognition provided by the brain.
  • by Genda ( 560240 ) <mariet@go[ ]et ['t.n' in gap]> on Thursday June 12, 2008 @08:39PM (#23772741) Journal

    This topic seems to make the nerdy and the not-so-nerdy alike a little crazy. Let's see if we can't illuminate this conversation just a wee bit, eh?

    1. A singularity is completely unpredictable... " 'X' divided by '0' " has no sane meaning... you can't understand it "by definition"; at best you can talk about it. So those speaking of utopia, dystopia, and autopia are simply clue-free. The question of whether it will be "good" or "bad" for humanity will be answered "yes" for some souls and "no" for others; it will be a great cosmic crap shoot, and speaking generally, I would recommend you listen to Ford and just keep your towel close by.
    2. The rate of "All Human Knowledge" now doubles in just over four years (some quick arithmetic on what that implies follows this list). That rate is accelerating and has been for some time now. This is the fundamental driver behind all other asymptotic trends in human evolution. As our knowledge grows, our ability to create ever more powerful tools grows; the tools expose more knowledge, and the probability that we'll either significantly reengineer ourselves or create a new sentience on the planet becomes a simple numbers game. Not if, but when. Consequently, if you give a human-level intelligence the necessary tools to build its successors, it will be a very short matter of time before you are confronted with a world of unrecognizably smart babies indeed. At that point history is pointless, and the future gets really fuzzy.
    3. "Careful Analysis", shows these trends have been at work since simple amino acids and sugars joined to make the earliest life. You can trace all the way back from where we are today all the way to the very beginning, and looking at it from multiple contexts... as information, biological diversity, complexity, intelligence, the growth of sentience, autonomy, the ability to go futher and futher from the planet, sentience and cognitive capacity, as many different points of view as you like. You can see a clear trend, a predictable process of accelerating evolution, ultimately reaching the point at which the information density meets and possibly exceeds the capacity for quantum space time hold it. That would be by definition a singularity. Human beings (as we konw them) would have been gone for a very long time (computationally speaking) before that happened.
    4. As our technology blossoms and accelerates at ever greater velocity, we will enhance ourselves, augment ourselves, reengineer ourselves, to keep up with our toys and reap the benefits of more personal time and greater productivity. By virtue of this pressure, as in any approaching singularity, tidal forces will quickly begin to smear out the flavors of humanity until they are unrecognizable from one another. Early adopters will evolve faster and faster, while Luddites will fall further and further behind, and those in the middle will fall into the multitude of discrete states within the growing gulf formed by the two disparate groups at either end. Do not worry: if you don't want to become a godling, there will be all kinds of human variations you can play with, consistent with the degree to which you've chosen to reengineer yourself.
    5. So it is primarily ignorance, xenophobia, and hubris that speak when most folks express fear and concern about a singularity. We are already at the verge, and we are no less human than our great-great-grandparents (and you'd just have to take my word for it when I say the elder ones still walking the world are intrigued by the odd folks that now people the planet). To the caterpillar, the butterfly looks like dying. Giving up your humanity to become something more doesn't sound the least bit like bad news to me. You just want to begin looking now at how and where you want your trajectory to run you through this process. You might not want to end up with a huge artificial brain. Maybe a quasi-human super-sentient who overcame death, skipping around the universe with a few close friends, sounds more like your heart's desire. Like I said, just know where the towel is, and keep it close.
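Taking the four-year doubling figure in point 2 at face value (purely illustrative arithmetic; the figure is the poster's claim, not an established number), a constant doubling time T_d means plain exponential growth:

    K(t) = K_0 * 2^(t / T_d)

With T_d = 4 years, that is about a 32-fold increase over 20 years (2^5) and roughly a 1000-fold increase over 40 years (2^10). Note that a constant doubling time, however short, never produces a finite-time singularity on its own; for that, the doubling time itself has to shrink, as in the geometric-series sketch earlier in the thread.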
  • by RexDevious ( 321791 ) on Thursday June 12, 2008 @09:00PM (#23772961) Homepage Journal
    Being killed by a super-intelligent robot, if I had some hand in creating it? Think how awesome that would be: you build something intelligent enough not only to no longer need you, but also to determine that the world is actually better off without you. Maybe it's just because I figure that, if I helped create it, I'd be pretty damn far down on the list of people the robots figured the world would be better off without.

    And don't give me any of that, "Oh, it'll kill coders *first* because they represent the biggest threat" nonsense. Do you know how hard it is to get a machine to exhibit anything *remotely* resembling intelligence? If you created something capable of even *reasoning* that you were a threat, you'd have created something smart enough to deal with that deduction in better ways than killing you. And if it's not really smarter than you, but just more dangerous - like those automated border guard robots they had to turn off because they turned their guns on the engineers during the demo - well, the world probably *is* better off without you. *First* you make it intelligent, *then* you install the guns. Jeez - how hard is that to figure out?

    Or maybe it's just that running from Terminator-style robots would be far more exciting than sitting at this freakin' desk all day. But to me, dying at the hands of a creation that surpassed your intelligence would be right up there with dying of a heart attack during your honeymoon with Jessica Alba. The kind of death where the epitaph on your tombstone could be: "My work here is done!"
  • by hypomorph ( 1305401 ) on Thursday June 12, 2008 @09:20PM (#23773089)

    ... I would hate to think that all that beauty and profundity and goodness could be captured; even approximated in any way at all! in the horribly rigid computational devices of our era.
    When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?
    I believe this misses Hofstadter's idea. That the "horribly rigid computational devices of our era" are currently implemented in silicon is immaterial. He means that our minds and consciousnesses are such beautifully complex machines that the crude computational devices and formal languages we have so far developed are insufficient to model them. Hofstadter shares your sentiment that the medium in which his 'strange loops of consciousness' are realized makes no difference at all -- it is the pattern that matters, and only the pattern.
  • by khayman80 ( 824400 ) on Thursday June 12, 2008 @10:12PM (#23773459) Homepage Journal
    As long as it isn't an intelligent robot, go ahead and enjoy your fuckbot.

    By the way, if you manage to find one at a reasonable price, let me know so I can buy one too.

  • by Anonymous Coward on Thursday June 12, 2008 @10:51PM (#23773737)

    ... the idea that once we develop artificial intelligence that is as smart as the smartest scientists ...

    Sorry, but this part of your post is basically just semantic nonsense. It sounds like you're saying something that could be possible - but in fact there is no question of whether or not it's possible, because you don't even know what you've said. Just like all of Vinge's and Kurzweil's books. The entire theory of the singularity is predicated on this ill-defined statement of nothing. That's why the singularity will never amount to anything more than the far-off, distant figment of imagination that it is now. Again, sorry - I don't mean to sound like an asshole and burst your bubble, I'm just offering a small dose of reality. The truth is no more complicated than this.

  • by SirSlud ( 67381 ) on Thursday June 12, 2008 @11:06PM (#23773859) Homepage
    The point is still valid within the context of the idea that whatever input is required for a human to develop healthily goes far beyond staring at a chess board and moving some pieces for 100 years.

    We'd need robots who could design equipment for themselves, to scale mountains. To invent instruments. To scour the depths of the ocean. The point is still valid in the sense that people are a product of their environment, and what makes the human experience so unique is that we're constantly attempting to gain access to more input. Presumably, any old brain in a box placed in a single room, unable to move, would cease being healthy after a while, and probably cease being recognizably human after years, because I have to imagine that some part of the programming of the human mind requires, or at the very least assumes, the ability to alter and modify our environment to a satisfactory degree.
  • by Anonymous Coward on Thursday June 12, 2008 @11:34PM (#23774057)
    It's easy to see what will happen. Just look back at the development of technology. The Industrial Revolution changed the way people work, but it did not change the goal people were working towards.

    We will always create tools to accomplish specific work and our tools (assuming they become aware) will do the same.

    Quite frankly, I don't care if some CEO can pay to upload himself into some AI construct. I will believe that the singularity has created true advancement when "the other 85% of humanity" has adequate access to clean water, nutritious food, and medical care.
  • by aproposofwhat ( 1019098 ) on Friday June 13, 2008 @02:36AM (#23775061)

    The future computers/robots better keep on functioning when 40% of their brain is destroyed

    I don't know what the record is for the longest uptime of a computer system, but it's surely less than a normal human lifetime - hardware wears out, and without infrastructure to support it, the 'singularity' will die through disk/memory/processor/whatever failure in fairly short order (some rough numbers follow this comment).

    I think Hofstadter's spot on when he refers to it as 'the nerds' rapture' - it's bollocks on the scale of Drexler's imaginary nanorevolution, and should be treated as such.

    AI in itself is a noble field of research, but pointless speculation such as Kurzweil's makes the whole field poorer.
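Rough numbers on the hardware-mortality point above (illustrative failure rates, not data from the post): if a single disk fails with probability p in any given year, the chance it survives t years with no maintenance is about (1 - p)^t. With p = 0.03 and t = 80 years:

    0.97^80 ≈ 0.09

That is under a 10% chance of survival, and that is one disk, before counting fans, power supplies, and capacitors. A long-lived machine intelligence would need self-repair or a supporting industrial base, which is precisely the infrastructure point.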

  • by lenski ( 96498 ) on Friday June 13, 2008 @07:54AM (#23776313)

    But we are all operating on the premise that the economic and social freedom we have today to pursue these new technologies will continue to exist. This is not true. Today's freedom is an aberration of history that is fragile and must be protected.
    From your keyboard to God's monitor...

    Throughout history, and I expect throughout the future, the battle between good and evil will continue wherever life exists, material or virtual. That battle is, in my opinion, the same in all places and for all time: Between those who use others and those who would not be used.

    I don't see Kurzweil describing post-singularity existence as utopian, however. Merely way different from the material existence we have today. It's as if he is simply warning us of changes to come, and to make a best effort to prepare for them.
  • by JymmyZ ( 655273 ) on Friday June 13, 2008 @08:53AM (#23776735)
    Why on earth would the "brain in the box" need to move around? If it's part of some large system, it could have specialized subsystems that did all that data collection for it. We already have autonomous robots that excel at collecting data. If they all pump this data back to some collective mind, then that should easily satisfy any data requirements it has. Why these AIs of the Singularity need to resemble us in any way is beyond me.
  • by joto ( 134244 ) on Friday June 13, 2008 @10:19AM (#23777741)

    AIs exist in a perfectly designed environment; they have humans feed them power & data, and all they need to do is process.

    I'm not arguing against this point; I just thought you had a silly example of the difficulties involved.

    Imagine the size of Big Blue if it had to actually see the board and physically move the pieces.

    Yeah, it would add another $139 to the cost, like this device [amazon.com]. If you were thinking of a device that can recognize and move the pieces of any "normal" chess board, then it would be a bit harder: a robotic arm, a camera, and some image-recognition software, but still probably at a cost below $1,000,000, including development. Most likely somebody has already built one as part of a robotics thesis.

    If, on the other hand, you are thinking of a device that has to go to the library/bookstore and borrow/buy books, and then read them, in order to extract and encode the knowledge in its database of opening moves and endgames, then it would be a tad more difficult. If it also had to learn the rules of chess this way (and how the chess pieces looked), it would be even more difficult. And if it also had to go to the library to learn about alpha-beta pruning, to learn how computers efficiently play chess, and reprogram itself accordingly, even more so (a bare-bones sketch of alpha-beta appears after this comment). If it also had to design its own hardware for chess playing, even more so. All of these problems would probably require full AI capability/human-equivalent thought (something we do not know how to make).

    On the other hand, it could also be the case that these problems eventually become "easy" once they are finally solved. Circuits that could add and multiply seemed pretty much like magic when they first appeared. Today they are viewed as "dumb". Face-recognition software is rapidly becoming mainstream, even though just a few years ago it was viewed as extremely difficult, and thirty years ago people like me would probably have said it required human-equivalent thought. Natural language processing and computer learning could take a similar leap, but that wouldn't necessarily mean that computers would be able to do everything else we do.

    Computers don't recognize faces the way we do; they do it another way, but it still works. Similarly, a breakthrough in natural language processing or computer learning could mean that computers understood natural language (or learned) as well as we do (just as they currently recognize faces as well as we do), but still in a different way. Eventually the frontiers of AI move. Once it's a solved engineering problem, it's no longer AI.

    I don't know what the definition of AI is, but when humans are no longer needed, I guess we have it.
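Since alpha-beta pruning came up above, here is a bare-bones sketch of the algorithm (Python; the game interface legal_moves(), apply(), and evaluate() is hypothetical, not any particular library):

    # Minimal alpha-beta minimax sketch. The game interface
    # (legal_moves, apply, evaluate) is hypothetical; any two-player,
    # zero-sum, perfect-information game would fit.
    def alphabeta(state, depth, alpha, beta, maximizing):
        # Leaf: out of search depth, or no legal moves remain.
        if depth == 0 or not state.legal_moves():
            return state.evaluate()  # heuristic score, from the maximizing side's view
        if maximizing:
            value = float("-inf")
            for move in state.legal_moves():
                value = max(value, alphabeta(state.apply(move),
                                             depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:   # opponent will never allow this line,
                    break           # so prune the remaining moves
            return value
        else:
            value = float("inf")
            for move in state.legal_moves():
                value = min(value, alphabeta(state.apply(move),
                                             depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break
            return value

The pruning is why real chess engines can search deep: in the best case it roughly doubles the reachable search depth for the same number of positions examined, without changing the result.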

Say "twenty-three-skiddoo" to logout.

Working...