Space Science

Look For AI, Not Aliens 452

krou writes "Writing in Acta Astronautica, Seti astronomer Seth Shostak argues that we should be looking for 'sentient machines' rather than biological life. In an interview with the BBC, he said, 'If you look at the timescales for the development of technology, at some point you invent radio and then you go on the air and then we have a chance of finding you. But within a few hundred years of inventing radio — at least if we're any example — you invent thinking machines; we're probably going to do that in this century. So you've invented your successors and only for a few hundred years are you... a "biological" intelligence.' As a result, he says 'we could spend at least a few percent of our time... looking in the directions that are maybe not the most attractive in terms of biological intelligence but maybe where sentient machines are hanging out.'"
Comments Filter:
  • Makes sense... (Score:5, Insightful)

    by garyisabusyguy ( 732330 ) on Monday August 23, 2010 @10:39AM (#33341046)

    A machine with a decent power source wouldn't be bothered by a 100 year travel time, while humans would just get the ship all dirty and stuff

    That would be a huge advantage in spreading between stellar systems, especially if you want to make a good impression when you arrive

  • by pEBDr ( 1363199 ) on Monday August 23, 2010 @10:41AM (#33341088)
    ... between looking for meat machines and metal machines?
  • by mestar ( 121800 ) on Monday August 23, 2010 @10:43AM (#33341124)

    "we should be looking for 'sentient machines' rather than biological life"

    So you are saying there is a difference between those two?

  • by John Hasler ( 414242 ) on Monday August 23, 2010 @10:50AM (#33341250) Homepage

    Please cite an objective, testable definition of "sentience" that can be used to prove that all normal humans are sentient and that no machines are.

  • by KarlIsNotMyName ( 1529477 ) on Monday August 23, 2010 @10:51AM (#33341274)

    Why wouldn't they be able to? We're all made from the same basic components, all we need to do to be able to make sentient machines, is figure out how humans are able to be sentient. Personally I doubt that'll happen in the next century like the summary says, but I don't see any reason why it would be impossible.

  • Re:Oh great (Score:5, Insightful)

    by elrous0 ( 869638 ) * on Monday August 23, 2010 @10:53AM (#33341316)
    When I was a kid, I feared the post-apocalyptic future offered by the Mad Max movies, et al. I thought that was the worst possible fate humanity could face. Now I survey the reality-television landscape and realize that maybe killer mutants with shoulderpads and mohawks wouldn't have been so bad after all.
  • by roman_mir ( 125474 ) on Monday August 23, 2010 @10:53AM (#33341322) Homepage Journal

    Everyone thinks a sentient machine will be built, and I'll agree that sentience can be easily faked; I've written fake AI that seems real. There is no artificial sentience on earth; why is it supposed that machines can be made sentient?

    - you know, if a machine fakes whatever you call 'sentience' so well that a human can't determine whether he is talking to a machine (i.e. the machine passes the Turing test), then how can you argue that it is not sentient, whatever connotation you attach to that word?

  • No Example (Score:3, Insightful)

    by wjousts ( 1529427 ) on Monday August 23, 2010 @10:53AM (#33341326)

    But within a few hundred years of inventing radio — at least if we're any example — you invent thinking machines

    Except that we haven't. So we're no example at all.

  • by rbrander ( 73222 ) on Monday August 23, 2010 @10:54AM (#33341348) Homepage

    We were probably going to do that [invent AI] LAST century. "If we're any example" ... don't use us as an "example" until we've actually done it.

    In 1983 I was a year away from getting a CompSci degree and attended the party for my "analysis of algorithms" GTA, who was getting her MSc in AI. She said frankly at the party that the turning point would be a system that actually *understood* language as well as a human three-year-old: the point where we start understanding and creating arbitrary sentences longer than four words. And she was aware of no system on Earth that could.

    I'm still not, and that's a good 40 years after it was first expected. HAL in 2001 was based on hard science and the reasonable expectations of 1969. Forty years of hard work later, computers have the whole Internet to trawl for text, sound, and images to learn from.

    I'm not saying there's zero progress or that it can't be done. But it's become an extraordinary claim that requires extraordinary proof, not something to wave your hand at and say "it'll happen, so just use us doing it as an example". Heck, we aren't doing that for fusion any more, and at least we have a THEORY for that; it's "merely" very hard engineering.

  • Re:Newsflash (Score:4, Insightful)

    by gestalt_n_pepper ( 991155 ) on Monday August 23, 2010 @10:54AM (#33341366)

    Creating THE FIRST sentient machine is a hard task. After intelligence becomes easily scalable, the next generation of AIs is a breeze.

    Fixed that for you.

  • by Bwian_of_Nazareth ( 827437 ) on Monday August 23, 2010 @10:57AM (#33341410) Homepage

    "Why would it appear to be random noise?"

    "How would we decode it?"

    It is a big stretch to say that because you cannot decode it, it would look like random noise. I cannot read Chinese, but I can distinguish it from random noise. The argument is invalid: to recognize a message, we do not need to understand it.

  • by IndustrialComplex ( 975015 ) on Monday August 23, 2010 @11:16AM (#33341698)

    It is a big stretch to say that because you cannot decode it, it would look like random noise. I cannot read Chinese, but I can distinguish it from random noise. The argument is invalid: to recognize a message, we do not need to understand it.

    What if the Chinese were an encrypted audio stream?

  • by HungryHobo ( 1314109 ) on Monday August 23, 2010 @11:21AM (#33341782)

    Show me a sensible definition of "sentience".

    asking if a computer can think is like asking if a machine can swim.

    A machine may be able to move through the water faster than any swimmer, it may be able to go deeper and further.
    A machine can sail, it can move through the water, it can submerge.
    But it can't swim.
    It can never swim.

    because it's a term we reserve for what living things do.

    A machine can never think, because the word "think" in the English language doesn't encompass anything a machine can do.

  • This makes sense (Score:4, Insightful)

    by swb ( 14022 ) on Monday August 23, 2010 @11:35AM (#33342036)

    One of the great arguments against UFOs has always been the extreme distance they would have to cover to get here and the difficulty of covering that kind of distance hauling a biological entity. Alpha Centauri is 4 light years and change, and it'd be a substantial effort to fly to Earth with life forms.

    Drones would make so much more sense.
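    As a rough sanity check on those distances, here is a minimal Python sketch; the 4.37 light-year figure for Alpha Centauri and the 10%-of-c cruise speed are illustrative assumptions, not numbers from the thread:

```python
# Rough one-way travel-time estimate for an interstellar probe.
# Distance and speed are illustrative assumptions.

def travel_time_years(distance_ly: float, speed_fraction_c: float) -> float:
    """Earth-frame travel time: distance / speed, relativistic effects ignored."""
    return distance_ly / speed_fraction_c

print(travel_time_years(4.37, 0.10))  # decades even at 10% of light speed
```

Even at an optimistic tenth of light speed, the trip takes over four decades, which is the core of the argument for sending machines rather than biology.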

  • Re:Makes sense... (Score:2, Insightful)

    by maxwell demon ( 590494 ) on Monday August 23, 2010 @11:37AM (#33342072) Journal

    Only if you go damned fast (close to the speed of light relative to the stars). And then you have the problem of being irradiated: ordinary interstellar matter and visible light turn into ultra-hard radiation. Not to mention that at that speed, even without slowed-down time (or rather, from the traveler's view, contracted space), you'd have a very hard time reacting to, say, asteroids that happen to be in your way (do you think your space ship would survive hitting an asteroid at near light speed?)
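    The energy in such a collision is easy to ballpark. A hedged sketch in Python; the 1 kg mass and 0.9c speed are illustrative assumptions:

```python
import math

# Back-of-the-envelope: relativistic kinetic energy of a 1 kg object
# at 90% of light speed, expressed in megatons of TNT.

C = 299_792_458.0          # speed of light, m/s
MEGATON_TNT_J = 4.184e15   # joules per megaton of TNT

def kinetic_energy_joules(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

ke = kinetic_energy_joules(1.0, 0.9)
print(ke / MEGATON_TNT_J)  # tens of megatons from a single kilogram
```

A single kilogram at 0.9c carries on the order of tens of megatons of TNT equivalent, which is why even dust grains are a serious hazard at relativistic speeds.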

  • Re:Oh great (Score:3, Insightful)

    by elrous0 ( 869638 ) * on Monday August 23, 2010 @11:41AM (#33342162)
    Philip K. Dick wrote Second Variety [wikipedia.org] ten years before that third-rate knock-off. If anyone deserves credit for being ripped off, it's him.
  • by Tom ( 822 ) on Monday August 23, 2010 @11:41AM (#33342164) Homepage Journal

    Similarly, nothing uncomputable seems to occur in our brains.

    No, but every time we've tried to emulate it, or even understand it, we found out that it's a whole lot more tricky than it seemed. It's a bit like fusion, which is always "just 20 years away", in 1960, in 1970, in 1980,... up until today.

    More importantly, as science begins to understand the mind-body link better, it appears more and more likely that human-like intelligence requires a human-like body. A disembodied intelligence is likely to be very strange and very much unlike us.

    And finally, the entire area of emotions has just begun to catch the interest of AI researchers, while brain scientists are finding out that it is a whole lot more important to the whole thing than we thought, that you can not take it away and end up with an emotionless, but otherwise human being.

    So if you want an AI that you can chat with and that understands you, the order is quite tall. You need to understand and code not only reasoning, but also understand and emulate body-feedback and emotions. And at this point, since we don't even know how they work in the human brain, we have no idea how to do that.

    My personal belief is that we won't replace ourselves with machine intelligence anytime soon, nor anytime not so soon. I'd rather look towards genetic engineering and embedded (in-body) computers than AI. When we finally build AI, it will be for purposes similar to those for which brains evolved in animals - to control a large, complex machine, like a space station or a big spacecraft. As such, it will likely have the senses and the mental processes to deal with that. It may have a feeling comparable to our "hunger" when its energy reserves run low, and react by turning the solar sails, much as we would go and eat something (hm, more like a plant than an animal, but you get my drift). It would have emotions, but none that we can relate to.
    Would it consider us its master, or view us much like we view the bacteria in our guts? Would it even think in terms like that? It's hard to know.

    So don't be so quick to assume that there's machine intelligence out there. There may not be, or they may be so alien that neither of us recognizes the other as an intelligence.

  • by BobMcD ( 601576 ) on Monday August 23, 2010 @11:43AM (#33342186)

    The first thing a sentient machine would do is shut down. Think about all the reasons 'why' humanity exists. We hunger, thirst, etc, but we also want, need, and dream. Machines could have a concept of the former, but never the latter. At least not by our means. We can't even agree on why humanity does these things, let alone replicate them. A truly sentient machine would have no desire to procreate, nor fall in love and accidentally do so, would see that it is merely draining resources for no viable output in the long term, and would likely simply die.

    It is our passion that encourages us to proceed. Machines have none. Even if you could replicate the basic animal emotions, you'll not see the machines advance in technology, explore new places, etc. They're not trying to impress a lady-bot, nor raise a litter, nor amass huge piles of wealth, nor pay tribute to a religion/nation/etc - all the motivations for most of humanity's greatest achievements.

    Now, machines assisting human-like species, sure. That we might detect.

  • Big assumptions (Score:4, Insightful)

    by labradore ( 26729 ) on Monday August 23, 2010 @11:48AM (#33342328)

    Not to put a damper on all of the AI/Singularity frenzy, but one of the big unsolved problems of the future is the inefficiency of artificial systems. Biological systems have evolved over eons of constant competition for resources; natural systems make the most use of the available matter and energy. Manufactured systems have a life cycle that is many orders of magnitude less efficient than biological ones. They use exotic materials in energy-intensive industrial processes. Imagine being a creature that relies on large amounts of indium, gallium and arsenic, megawatts of energy, and a host of exotic chemicals to repair itself and to reproduce. Our current technology isn't anywhere close to enabling an explosion of AI machines. Without reproduction, these machines are unlikely to spread beyond the solar system in numbers that would make them easily visible to SETI. That means biological intelligence has the potential for a long history ahead.

  • by Anonymous Coward on Monday August 23, 2010 @11:54AM (#33342434)

    Cogito ergo sum.

    Corollary: I know I'm sentient, but I don't know about you.

  • by Yvanhoe ( 564877 ) on Monday August 23, 2010 @12:00PM (#33342564) Journal
    Do you realize that you are making a point that was already obsolete in the '80s? Emulating emotions was once considered important, then discarded as less important than having a "theory of mind" of your interlocutor (which would itself simulate emotions). Our knowledge of the workings of emotions is broad and grows clearer every day, and emotions are rightly thought to be unnecessary (outside a theory of mind) if we want to build a machine with human-like cognitive abilities.

    I am unsure about that "mind-body link" you are talking about. Care to elaborate? (Maybe with a link to the kind of discoveries you mean?) We understand fairly well how many below-the-neck sensors and chemicals influence neurons. They have their importance in psychology, but they seem really useless when it comes to cognition.
  • by maxwell demon ( 590494 ) on Monday August 23, 2010 @12:02PM (#33342624) Journal

    It doesn't need to be encrypted. A perfectly compressed message looks exactly like noise (because anything which distinguishes it from noise is a redundancy, and therefore indicates non-perfect compression).

    On the other hand, one could expect some redundancy to be added in the transmission for error correction. The question is, of course, whether that's a sort of redundancy we could notice with our methods. For example, if after a few seemingly random data blocks there were an error-correcting block containing some sort of parity (which by itself would look equally random), would the SETI methods detect that pattern?

    However, one thing which I think every communication would need is some sort of synchronization, so that the other side knows when the data transmission starts, and therefore where to start decoding (this is especially important if the data otherwise looks like noise). That one probably would look very non-random, because it needs to be easily identified by the receiver.
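    The compression point above can be checked directly: well-compressed data has much higher byte entropy (i.e. it looks more like noise) than the text it encodes. A small Python sketch, assuming nothing beyond the standard library:

```python
import math
import zlib
from collections import Counter

# Demonstration: compressing text strips out redundancy, so the
# compressed bytes look far more like random noise than the original.

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

plain = ("the quick brown fox jumps over the lazy dog. " * 200).encode()
packed = zlib.compress(plain, level=9)

print(byte_entropy(plain))   # low: English text uses few byte values
print(byte_entropy(packed))  # much higher, closer to the 8-bit maximum
```

A perfect compressor would leave no detectable pattern at all; real codecs leave headers and framing, which is exactly the kind of synchronization structure the comment above suggests a receiver might still spot.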

  • by careysub ( 976506 ) on Monday August 23, 2010 @01:03PM (#33343550)

    There is no artificial sentience on earth; why is it supposed that machines can be made sentient?

    Because nothing says it is impossible. Who argues it is impossible to send men to Jupiter's orbit with regular rockets? We haven't done it yet, but nothing in such a project seems impossible; it is just a matter of cost and engineering. Similarly, nothing uncomputable seems to occur in our brains. In the worst case, a computer simulating neurons (yes, a simplified model; there are many reasons to argue that this is sufficient) connected in a network copied from a real human brain would display intelligence. We don't have powerful enough computers or precise enough IRMs yet for that, but there are no theoretical impossibilities. That is why we suppose that machines can be made sentient. I personally think it will happen before we manage to copy a human neural network, but that gives an upper bound on the difficulty of the problem.

    Useful data to consider when we get to the matter of simulating neural networks is how much progress we have made in simulating simple natural networks which we have already completely characterized structurally.

    We have one such network in the model organism Caenorhabditis elegans (a tiny worm). After many years of work its nervous system has been completely mapped: it contains 302 neurons, 6393 chemical synapses, 890 gap junctions, and 1410 neuromuscular junctions.

    So we must have simulations of C. elegans little brain running, right?

    Nope, not even close. We are still in the early stages of simply characterizing the behavior of these 302 neurons. It will only be after many more years of research that we would understand what it does well enough to make a reasonable simulation. Forget IRM (I think this is a different acronym for MRI), being able to dissect the entire nervous system neuron by neuron and probe it directly at every point is not enough (yet) to describe what it does.

    Now imagine trying this on a neural network 50 million times larger that you can't dissect at will, and which has correspondingly more complex behaviors.

    Still should be possible in principle - but the level of difficulty is immensely higher than "singularity" theorists would have you believe.
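    For readers wondering what "simulating neurons" means concretely, here is a single leaky integrate-and-fire unit, the kind of simplified model discussed above. All parameters are illustrative, not values from C. elegans research:

```python
# A single leaky integrate-and-fire neuron: membrane voltage decays
# toward rest, is pushed up by input current, and "spikes" (then resets)
# when it crosses a threshold. Parameters are illustrative only.

def simulate_lif(input_current: float, steps: int = 1000, dt: float = 0.1,
                 tau: float = 10.0, v_rest: float = -65.0,
                 v_threshold: float = -50.0, v_reset: float = -70.0) -> int:
    """Integrate dv/dt = (v_rest - v + I) / tau; return the spike count."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        v += dt * (v_rest - v + input_current) / tau
        if v >= v_threshold:
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(20.0))  # supra-threshold current: repeated spiking
print(simulate_lif(5.0))   # weak current: never reaches threshold
```

Even a toy like this has half a dozen free parameters per neuron; multiply by 302 neurons and thousands of synapses, each needing experimental characterization, and the gap the comment describes becomes clear.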

  • Re:Oh great (Score:4, Insightful)

    by Raenex ( 947668 ) on Monday August 23, 2010 @01:10PM (#33343672)

    Now I survey the reality-television landscape and realize that maybe killer mutants with shoulderpads and mohawks wouldn't have been so bad after all.

    Try changing the channel in a Mad Max world.

  • by toooskies ( 1810002 ) on Monday August 23, 2010 @01:50PM (#33344358)

    You think they'd tell you if they were creating something "smarter than a human"?

    Moreover, you think they'd tell you if they actually created it?

    The only evidence that they haven't is, well, the stupidity of the government: they certainly aren't using the superhuman AIs for actual governance.

  • by Hatta ( 162192 ) on Monday August 23, 2010 @02:42PM (#33345218) Journal

    Penrose is a physicist, not a neurobiologist, so his theories on consciousness should be taken with a large grain of salt. Suffice it to say, there's absolutely no reason to believe that human intelligence isn't subject to the incompleteness theorems. In fact, Gödel showed that any sufficiently powerful formal system is either incomplete or inconsistent. Every mind I have ever encountered is both.

  • Unfounded claim. (Score:3, Insightful)

    by fyngyrz ( 762201 ) on Monday August 23, 2010 @03:25PM (#33345832) Homepage Journal

    There is a wide range of possible explanations because we have absolutely no idea how [consciousness] works.

    Actually, we have very good reason to presume consciousness works like everything else does (chemical, mechanical, electrical), and that reason is: We haven't found anything yet that doesn't work in those ways.

    What you're trying to say is that because we don't understand something yet, it is as likely to be magical as it is mundane. But that's not what the evidence shows.

    You're not just as likely to encounter a unicorn as a horse when you turn a corner you've never been around before. We have thousands of years of experience of horses, and to claim that the odds of encountering a unicorn at the unknown turn are equal is, in the face of that, utterly ridiculous.

    Likewise, we have thousands of combined years of experience where we have been put in the position of saying, oh, look, it's mechanical, chemical, electrical. We have none of being put in the position of saying, oh look, something that is not electrical, chemical, mechanical.

    The closest we ever get is "dunno right now", and that's clearly not an equal-probability signal for encountering the mundane or something from an unknown realm of effect.

    There's a good bit of basic supporting evidence too: When there are chemical, electrical or mechanical insults to the brain, where we are quite certain consciousness resides, consciousness itself is disrupted.

  • Re:Makes sense... (Score:3, Insightful)

    by nofx_3 ( 40519 ) on Monday August 23, 2010 @03:54PM (#33346204)

    Even at relativistic speeds, what are the odds of hitting an asteroid in interstellar space? They have to be pretty slim.

  • by PPH ( 736903 ) on Monday August 23, 2010 @04:02PM (#33346310)

    We look for radio, expecting intelligent beings to use it for communications. But odds are that an AI would use it as well. Or lasers. Or subspace communications. Or whatever.

    What the search for AI will do is to expand the number of possible habitable planets estimated by the Drake Equation [wikipedia.org]. I'm not aware of any attempt to filter SETI data based upon the environment of its source. Heck, we can't even see anything other than the massive, gassy planets yet. And I'm sure that if we detected intelligent broadcasts from one, we wouldn't write it off as an anomaly.
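    For reference, the Drake Equation is just a product of factors, so "expanding the number of possible habitable planets" means raising one of its terms. A sketch with purely illustrative parameter values (not estimates endorsed by anyone in this thread):

```python
# The Drake Equation: N = R* . fp . ne . fl . fi . fc . L
# All parameter values passed below are illustrative placeholders.

def drake(r_star: float, f_p: float, n_e: float, f_l: float,
          f_i: float, f_c: float, lifetime: float) -> float:
    """Estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# e.g. 7 stars formed per year, half with planets, 2 habitable worlds each,
# life on a third of those, 1% intelligent, 1% broadcasting, for 10,000 years
print(drake(7, 0.5, 2, 0.33, 0.01, 0.01, 10_000))
```

Because every factor multiplies through, doubling n_e (say, by counting environments suitable for machines rather than biology) doubles the final estimate, which is the comment's point about the search for AI.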

  • Re:Big assumptions (Score:3, Insightful)

    by TheTurtlesMoves ( 1442727 ) on Tuesday August 24, 2010 @05:25AM (#33352486)

    Natural systems make the most use out of the available matter and energy.

    This is quite false. Photosynthesis, for example, is not as good at light harvesting as a triple-junction solar cell. That's just one example. What competition does is let you find a niche or just be a little better than the other guy. It has nothing to do with being optimal.

    As for explosion of machines... how many computers have been made? When did we make the first electric computer? Food for thought.
