Science

Online Book About Nano/AI 133

Jonathan Desp writes: "The book is available here, written by Frank Wayne Poley, in the same vein as Bill Joy's article "Why the Future Doesn't Need Us." Here you will learn about "Robo sapiens" vs. "Homo sapiens", robots as president, nanotechnology, nanosystems, Internet robots, cyborgs, the neurochip, Microsoft, biomechanics, and computing history as well. The book raises some important questions, such as: is technology always good?"
  • by Anonymous Coward
    True, good AI will not really do only what humans have programmed it to do. It would require adaptation complex enough to bear little relation to the original program, enough to be unpredictable to its creators (we don't have enough resources to track every neuron deterministically). However, it is true that AI will not have the same human motivations (sex, tribal psychology, etc.), which raises the questions: How do we know they have any motivations at all? How do we communicate with them? Man communicates with Man because some common experiences and needs are shared, but if few needs and no experiences are shared (one would have to explain the concept of "seeing"* something to any AI), how do we talk with them and realize that something, anything, motivates them? For all we know, 42 is a very specific and clear answer to a motivation or state of mind for an AI; we just don't "get it", and never will.

    * Unless the image-recognition algorithm for an AI is built on the algorithm the human mind uses to recognize images, it is hard to imagine the two are equivalent to the point of sharing a standard of description. Since the human algorithm for image recognition is hardly known, it is hard to imagine that we can develop an equivalent AI algorithm.
  • by Anonymous Coward

    You know, it's about time we started getting press coverage of these sorts of issues. Computers simply can't be counted on to replace the needed human interaction and human thought processes that have made us the dominant species that we are today.

    I hate to sound like a reactionary luddite, but technological advances are simply happening too fast these days. The elite gurus on Mount Ætna may think that it's OK to continue headlong into this madness, but for the rest of the "little people" there are serious reservations.

    The "commmon man" has these reservations because he's not blinded by the blinking cathode ray tubes. CRT's have been proven to have a hypnotizing effect on people, and this is a bad sign. There may be no intelligence behind those tubes as yet (we hope), but how long before that remains the case? We're already cloning goats. It's just a matter of time before we are cloning ourselves, too, and then the computers will have access to all the RNA that they need. After that it's Matrix time, people.

    I seriously suggest that we deliberately slow down technological progress at once. A good start may be with these dangerous Beowulf clusters you people seem so fond of. That's too much computing power for mere mortals to play with! We must not play God, as there is only one True God, and he will surely cast us into hell for our careless creations.

  • I think I've been reading Slashdot too long. I read Neil Postman as Natalie Portman and was wondering how that post managed to get moderated as Informative. ;]
  • Who says that intelligence is the ultimate goal in evolution? Intelligence could be just another evolutionary dead end [cotf.edu], just like the dinosaurs. (Remember, the dinosaurs [britannica.com] were successful; it took a major catastrophic event to end their reign.) [britannica.com]
    Technology/science has helped us to live "better" lives since we are more productive. But are our lives better? I've heard theories that hunter/gatherers/early farmers "worked" a few hours a day, and where are we now? 8hr+ workdays, stress, no time with our families (remember, families used to work together, and the elderly taught the younger).
    Maybe this is the reason we haven't heard from other life forms; intelligence is a dead end from an evolutionary point of view. But it's early and I haven't got my caffeine [tgmag.ca] fix yet, so I'm just rambling... :)
    J.
    crap! slashdot is f*cking up my href's! (yes, I did use preview)
  • Monkeys used primitive technology ... to create people .. about 4 million years ago...
    are we smarter than they are?
  • ... Robot as president ... Doesn't everyone know that robots have been president for some time now?
  • > technology will always only be as smart as those
    > who made it, never smarter

    Once upon a time, being good at playing chess was viewed as being smart. Then Deep Blue beat Kasparov... Are you going to say that Deep Blue's programmers were better chess players than Kasparov?

    I doubt it very, very much!

    Bottom line: think more before making such definitive declarations.
  • Robert Freitas, a research scientist at Zyvex [zyvex.com], has done a very detailed analysis of the "gray goo" threat [foresight.org]. He had previously posted preliminary analyses on sci.nanotech, but DejaNews appears to have dropped them (that was around 1997). After analyzing likely chemistries for omnivorous replicators, and the physical limits on replication rates, he reaches these conclusions.
    9.0 Conclusions and Public Policy Recommendations

    The smallest plausible biovorous nanoreplicator has a molecular weight of ~1 gigadalton and a minimum replication time of perhaps ~100 seconds, in theory permitting global ecophagy to be completed in as few as ~10^4 seconds. However, such rapid replication creates an immediately detectable thermal signature enabling effective defensive policing instrumentalities to be promptly deployed before significant damage to the ecology can occur. Such defensive instrumentalities will generate their own thermal pollution during defensive operations. This should not significantly limit the defense strategy because knapsacking, disabling or destroying a working nanoreplicator should consume far less energy than is consumed by a nanoreplicator during a single replication cycle, hence such defensive operations are effectively endothermic.

    Ecophagy that proceeds near the current threshold for immediate climatological detection, adding perhaps ~4C to global warming, may require ~20 months to run to completion, which is plenty of advance warning to mount an effective defense.

    Ecophagy that progresses slowly enough to evade easy detection by thermal monitoring alone would require many years to run to completion, could still be detected by direct in situ surveillance, and may be at least partially offset by increased biomass growth rates due to natural homeostatic compensation mechanisms inherent in the terrestrial ecology.

    Ecophagy accomplished indirectly by a replibot population pre-grown on nonbiological substrate may be avoided by diligent thermal monitoring and direct census sampling of relevant terrestrial niches to search for growing, possibly dangerous, pre-ecophagous nanorobot populations.

    Specific public policy recommendations suggested by the results of the present analysis include:

    1. an immediate international moratorium on all artificial life experiments implemented as nonbiological hardware. In this context, "artificial life" is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue. Alternative "inherently safe" replication strategies such as the broadcast architecture are already well-known.
    2. continuous comprehensive infrared surveillance of Earth's surface by geostationary satellites, both to monitor the current biomass inventory and to detect (and then investigate) any rapidly-developing artificial hotspots. This could be an extension of current or proposed Earth-monitoring systems (e.g., NASA's Earth Observing System and disease remote-sensing programs) originally intended to understand and predict global warming, changes in land use, and so forth -- initially using non-nanoscale technologies. Other methods of detection are feasible and further research is required to identify and properly evaluate the full range of alternatives.
    3. initiating a long-term research program designed to acquire the knowledge and capability needed to counteract ecophagic replicators, including scenario-building and threat analysis with numerical simulations, measure/countermeasure analysis, theory and design of global monitoring systems capable of fast detection and response, IFF (Identification Friend or Foe) discrimination protocols, and eventually the design of relevant nanorobotic systemic defensive capabilities and infrastructure. A related long-term recommendation is to initiate a global system of comprehensive in situ ecosphere surveillance, potentially including possible nanorobot activity signatures (e.g. changes in greenhouse gas concentrations), multispectral surface imaging to detect disguised signatures, and direct local nanorobot census sampling on land, sea, and air, as warranted by the pace of development of new MNT capabilities.
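    For what it's worth, here is a quick back-of-the-envelope check of the "~10^4 seconds" figure, using my own assumed numbers rather than anything from the paper: starting from a single ~1 gigadalton replicator that doubles every ~100 seconds, converting the Earth's biomass takes on the order of a hundred doublings.

        # Rough sanity check of the exponential-growth timescale quoted above.
        # Assumed figures (mine, not Freitas's): terrestrial biomass ~1e15 kg,
        # one replicator ~1 gigadalton ~ 1.7e-18 kg, doubling time ~100 s.
        import math

        biomass_kg    = 1e15
        replicator_kg = 1e9 * 1.66e-27   # 1 gigadalton expressed in kg
        doubling_time = 100.0            # seconds per replication cycle

        doublings = math.log2(biomass_kg / replicator_kg)
        print(round(doublings), "doublings,", round(doublings * doubling_time), "seconds")
        # ~109 doublings, ~1.1e4 seconds -- consistent with the ~10^4 s quoted.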
  • [Rob Freitas] had previously posted preliminary analyses on sci.nanotech, but DejaNews appears to have dropped them (that was around 1997).
    Luckily, other folks' records are more complete. The posts to sci.nanotech are available here [foresight.org] and here [foresight.org].
  • [Machines of greater-than-human intelligence] will not happen because people will not let machines become smarter than them; they will revolt before that happens. There will be no mass-produced nanobots because people are scared of what they cannot see...
    Although a mob may have smart individuals in it, collectively it is as dumb as a cinder block. It has a behavioral repertoire about as diverse as that of an earthworm. Mobs do not make informed decisions about the future and act upon them. Besides that, you're presuming to speak for the mob, but that's another topic.
    ... and it's just not possible to make that kind of thing in quantity. You're resting your thoughts on technology that hasn't even started to be invented if you're talking mass-produced nanobots... Shouldn't [building nanobots] be your first unattainable dream, rather than [assuming] them being used everywhere?
    Plenty of people are working on this. There are several news stories every month, sometimes several the same week, directly pertinent to the eventual goal of building molecular machines. Go to a Foresight conference and see for yourself, there's one in Bethesda, MD, in early November. If this stuff is on the way, which is the general consensus of everybody who actually pays any attention to the field, then maybe it's a good thing to think a little about what the world will be like, and how to preserve human interests in a potentially hostile future. I can imagine lots worse ways to pass one's time.
    No matter what books you read...
    technology will always only be as smart as those who made it, never smarter.
    So if I invent a machine to perform activity X (in this case, thinking) it will never do X better than I can do X. If that's true as a general principle, there are no airplanes, cars, delivery trucks, construction cranes, adding machines, pocket organizers, or telephones.
  • Machines never make mistakes? Sure they do, but the mistakes they make aren't only an issue of programming--they're an issue of interpretation.

    The point is that there is a line past which we no longer care 'why' mistakes were made - the machine is abstracted from its origins and viewed as an entity unto itself. That point is probably reached when it can actually correct the hardware or software that causes the error.

    For example, if we made our AI robot with old Pentium chips, we could code it to do a lot of math - if, at some point, it 'realized' that it was giving 'incorrect' answers when doing floating-point calculations, and started 'fixing' the answers coming from the broken FP core en route to the user, we would have a point past which the machine could be called 'aware' - at least on one level.

    Any mistakes made in fp calculations past that point would be blamed on the machine itself, and not on the humans who failed QA class in high school. :P

    --
    blue
  • One major difference between me as the subject in an IQ test and a robot is that the robot has a clear record of its algorithms in memory and I do not...

    And this is supposed to be an advantage to the robot? I'd say the contrary; you should be glad you have the precise details of how to move your hands, how you sense smells, etc., hidden from consciousness.

    Consider this analogy: Knowing the rules of chess does not make you a good chess player. Or this one: Birds can fly, but they cannot tell you how they do it :-)

    According to Dr Poley, you should just 'ask it'... In fact, the AI 'being' may have a better understanding of what makes itself tick than you do.

    You can always "ask" yourself how you perform things as well, but as most aspects of our intelligence are non-linguistic, you cannot expect to get an answer in linguistic terms, you will have to settle for a non-linguistic answer. That is, a set of examples.

    It sounds as if Dr Poley, just like so many others, is a victim of the consciousness fallacy. There is more to our intelligence than consciousness, but as we in general don't need to know about these parts, we tend to forget that they exist at all.

  • I don't think so either. Written 20 years ago, and alas still way ahead of him...
  • So, goodbye Marvin Minsky! So long, later John McCarthy! We'll see you in the Open Source AI!

    This is a very good idea that should be adopted more widely, I think. (Things are moving in this direction already, though. RoboCup [robocup.org] has helped a lot to get people started.) In today's AI, there are so many people working on inventing new programming structures or ways of arranging "knowledge", without ever bothering to think about how they should obtain this knowledge. If they had the programming structure given, they could concentrate on trying things out on real problems. The AI community suffers from way too much theory and way too little practice, IMHO.

  • A human can respond with little when asked, "Tell me about yourself with a particular emphasis on how your intelligence works." whereas a teaching robot (Chapter 14) could tell you about itself in great detail including giving full details on the algorithms it uses in doing AI.


    The robot could show you a listing of its source code, log files detailing its recent mind-states, and that sort of thing, but does this really imply self-awareness? It's analogous to a human being 'describing' his intelligence by sawing open his head and letting you examine his brain cells. Sure, you can use what he/it shows you to figure out how he/it works, but in either case the entity that understands the thought process is you, not the subject.

  • In reality, it is the foolishness of "science" that believes it has answers to the questions
    that have plagued man for centuries. "Science" offers no proof what-so-ever, and in fact
    any real scientist will admit this. That's why they are called "theories" and never
    "absolutes". It is only the ignorant who claim science holds answers, which is why many
    true scientists are Believers.


    Fair enough. But religion is even more guilty on this count: it offers no proofs either, only assertions, and demands that you take its assertions on Faith.


    So while science attempts to describe the world based on the observable evidence, religion gives you a self-contradicting book of translated ancient writings and tells you to take it as Gospel (literally!), "because we said so."

    Pick your poison, I guess.

  • if we can't keep crypto from being exported, how are we going to keep nanotech secret?

    What? Why would you want to keep crypto from being exported? For one thing, just as good or better crypto solutions were developed outside your borders.
    From your website, you appear to be for open source and free software... so I can't understand why you would be against the US allowing the export of crypto.

    As a typical American, you assume that nanotech will be developed only in the States, and that you must hide scientific discoveries from others.
    I feel sorry for you. You appear to be a knowledgeable fellow, you have embraced Free Software, but you have yet to actually understand the ideals behind it.
    It looks like you are quite a busy guy, and you do some good work (running all that stuff listed on your website)... maybe you should take some time off so you can figure out what you are actually dealing with, and hopefully you won't make any more comments like the one above.
    Good Luck.
  • I see someone's been reading Beyond Humanity...

    Seriously, though, I agree with this. I'm a card-carrying transhuman, and I believe it's the Way To Go for mankind. However, we may not be there yet by 2035. At the least, this kind of self-evolution would take radically improved biotech so we could go the "organic" route. At the most, it'd take massive, cheap, complete nanotech and a deep understanding of all of biology. Now, as someone who's looking into a career in this field, I'm willing to bet that it'll happen. But not without massive funding of some sort. So how do we get that?

    Not much to say really, just an incoherent rant of sorts. :)
  • Anyone read this book by Harry Harrison and Marvin Minsky? It's a pretty good book, and I found it ironic that (kind of like some of Asimov's books) the "machine intelligence" ended up being "more human than human," so to speak. I wonder if the constant quest to make MI seem less dangerous, pitted against the constant fear that most people have of alien intelligences, will end up with the creation of an MI "more human than human"...
    Then again, this would fit quite nicely with the whole "machines will save us from ourselves" mentality that some of us have. I still say that the end of mankind will come when the robots take over and kill the weaker humans ;)
  • OK, what about an outdated premise? The idea that humans, not technology, solve problems seems outdated to me. Are we not developing technology which will possess the ability to solve problems without the aid of humans? Hello, the very topic of this post is AI in society.

    The book raises some important questions, such as: is technology always good?

    You may remember me from such other online essays as:

    DeathBots: Destroyers of mankind or Aibo's only real threat this Christmas?
    and...
    Nanotech: The little engines that could!

  • Much of AI has moved to 'Computational Intelligence' recently. More descriptive, sounds better, and provides a nice way of distinguishing it from the less computer oriented approaches (e.g., bioengineering has a potential AI aspect).

    Of course, the really serious AI workers just call it cognitive science ;-)

    -jcl

  • Um, no. Take a look around the Internet -- do you see more than a handful of open source developers who know anything about cognitive science? How about modern AI systems? The closest thing to open source AI I've found is the source for Hofstadter's Copycat, and you can be sure community development wasn't why it was released.

    Look, open source is great for many things, but research -- let alone research of this sort -- ain't one of them.

    -jcl

  • While you just about hit the nail on the head, your knowledge of Babylonian mythology is a little shaky. The story of Noah comes from the myth of the Deluge. Ziusudra, the Sumerian Noah, was instructed by the god Enki that a flood would sweep over the world and destroy all of mankind.
  • OK, I think that I have read enough. I do hope that whoever read that last post does not think that he speaks for all atheists. I guess this just shows that both sides of every argument have those fanatics whose beliefs are extremely outlandish.
  • I am not going to try to dispute your evidence. I have read into each of these topics (except the prophecies of Fatima) and I do believe them to be false based on all empirical research I have been privy to, but that is not the topic of my reply.

    Your beliefs about science, however, are extremely uninformed. It is true that science claims no absolutes. But that is not its foolishness; it is its greatest virtue. Any doctrine of science can change at any time, if the evidence of actual experimentation tells us that we are wrong. When the Michelson-Morley experiment failed to detect any shift in the speed of light, we knew that our view of classical relativity was incorrect. Physicists of the day, most prominently Einstein, then discovered the theory of special relativity.

    Notice that I said THEORY of special relativity. That is because we can never know for certain that any knowledge we have is entirely true. More exacting research may prove the theory to be wrong. That being said, there is still overwhelming evidence for relativity. Near where I live, it is proven millions of times every second in the particle accelerator at Fermilab.

    The greatest difference between science and religion is the use of empirical evidence. My college physics teacher always said that his greatest pet peeve is when a student asked him why. The answer is always: that is what experimentation has shown us. Yes, it is important to try to speculate about the relationships between different phenomena, but nothing can ever be said to be scientific until it can be "proven" through research. Religion, on the other hand, has truths that it must adhere to. That is what lends it to falsehood, since every truth will someday have a loophole.
  • At least he figured it out - there's plenty of suits who seem to think that we all are just dying to get our mitts on their Latest Widget Of The Hour That Does What All The Other Widgets Do, Too.

  • an observation regarding your
    theories on consciousness. in your online book, you include the
    excerpt below on consciousness. but you must be aware that you are
    displacing the problem of consciousness away from your individual
    experience of consciousness into the speculative realm by the fact
    that you attribute to matter the ability to THINK.

    in this regard, you may find the following interesting:

    > Materialism can never offer a satisfactory explanation of
    > the world. For every attempt at an explanation must begin
    > with the formation of thoughts about the phenomena of the
    > world. Materialism thus begins with the thought of matter or
    > material processes. But, in doing so, it is already
    > confronted by two different sets of facts: the material
    > world, and the thoughts about it. The materialist seeks to
    > make these latter intelligible by regarding them as purely
    > material processes. He believes that thinking takes place in
    > the brain, much in the same way that digestion takes place
    > in the animal organs. Just as he attributes mechanical and
    > organic effects to matter, so he credits matter in certain
    > circumstances with the capacity to think. He overlooks that,
    > in doing so, he is merely shifting the problem from one
    > place to another. He ascribes the power of thinking to
    > matter instead of to himself. And thus he is back again at
    > his starting point. How does matter come to think about its
    > own nature? Why is it not simply satisfied with itself and
    > content just to exist? The materialist has turned his
    > attention away from the definite subject, his own I, and has
    > arrived at an image of something quite vague and indefinite.
    > Here the old riddle meets him again. The materialistic
    > conception cannot solve the problem; it can only shift it
    > from one place to another.
    >
    > (Rudolf Steiner, The Philosophy of Freedom, Chapter 2)

    anyhow, if you truly are a materialist and have such great faith
    that within matter you can find the CONSCIOUSNESS to arise as
    some sort of "emergent property of sufficient complexity", then
    i'm afraid you will be subscribing to a gross superstition.

    you may be able to fool many people with automatic conditioned responses,
    but i think you would be grossly deceiving yourself if you were to call
    this CONSCIOUSNESS without first even truly understanding what it is
    that consciousness IS within that only realm in which you can experience
    it in the FIRST HAND CASE -- in your own SELF.

    just a thought.

    regards,
    johnrpenner.

    p.s. for a PhD dissertation on CONSCIOUSNESS and the process
    involved in what is THINKING, you can find it here:
    http://www.elib.com/Steiner/Books/GA004/

    --| Consciousness |-----
    |
    | http://www.atoma.f2s.com/Chapter-6.htm
    |
    | How then can we talk about or analyze this phenomenon of life? Let's start
    | with how you or I know we are alive. I will say I know I am alive because
    | I have "consciousness" and I will further articulate that consciousness as
    | a recognition of "I-ness" (awkward as the word is). Jaron Lanier who
    | coined the expression "virtual reality" and pioneered its development is
    | reported on the web site http://www.forbes.com/asap/99/0222/072.htm as
    | saying "The centre of the circle that defines a person is a dot called
    | consciousness, and as murky as that subject is, we are fast approaching
    | some crucial conclusions about it. This is the notion that computers are
    | becoming more 'alive' and capable of judgement." He then dismisses this
    | idea completely. "It has become a cliche of technology reporting and a
    | standby of computer industry public relations. It is a myth that depends
    | upon public support, from, of all people, intellectuals."
    |
    | Thus we should not create the impression that all of the
    | computing/AI/robotics field is jumping on the AL bandwagon. Levy (1992)
    | dates the beginning of the modern AL field to a 1987 conference at Los
    | Alamos, attended by >100 scientists (p. 4). A further comment which will
    | disturb some people is that "What distinguishes most of the a-life
    | scientists from their more conservative colleagues-and from the general
    | public-is that they regarded plants, animals and humans as machines." (p.
    | 117). Thus what we are seeing is the differing philosophical-theological
    | positions of dualists and monists. Lanier is a dualist. Levy and his
    | fellow AL scientists are monists subscribing to complete objectivity and
    | materialism. "Consciousness" seems to be the last line of defense for
    | those who subscribe to the unique, beyond-material nature of life. And
    | even there it is encroached upon by those who will say that consciousness
    | will appear as an emergent quality if the intelligent machine is built
    | correctly.

  • I'm not a chemist, so I don't know the technical details. But designed nanites might be able to cross evolutionary barriers that evolved microorganisms would be extremely unlikely to cross. Indeed, that's the whole point of nanotech.

  • >technology will always only be as smart as those who made it, never smarter.

    On the contrary, our technological designs should be able to end up much smarter than us, for several reasons. First, brute force will never work; we will never be able to design an AI at the lowest level and get anywhere with it.

    From a physical standpoint, to map out and model an AI in your head will take at least as much processing power as the AI will have. This rules out brute force by human intellect. Anything else relies on meta-design, designing the rules for how the lower level will be built by something else, like a computer or itself.

    Because a human can presumably model meta-programming using only a fraction of his or her brain (a fraction that shrinks the further above the bottom level the meta-programming sits), we should be able to design well beyond our own mental capacity.

    Just look at the way our brains work: we understand the most basic mechanics pretty well, although I admit that we still have to get a handle on why certain things are the way they are, much less how it combines to get anything done.

    At the least, we know enough to create a competitive environment and let intelligence evolve itself (something nature managed without any intelligence at all). At best, we should be able to design something far more effective (for our needs) than nature has, as we're constantly illustrating in other technological fronts. [Lest I get flamed, I'm assuming God didn't "design" our brains, whatever you believe, and I said "more effective for our needs"; technology rarely serves anything but human desires] -Adrian

  • Well, continuous physics is not a tractable problem on a Turing machine; however, an arbitrary approximation of it is, at least assuming that our current theories of physics are right enough to predict the interactions. I agree that simulating isn't the same as being, and I don't think that it would be possible to make an AI which had subjective experience (although I wouldn't rule it out).

    Putnam has an argument which basically boils down to the idea that every physical system, with a proper labeling of particles, dimensions, etc., implements every possible logical system. Therefore we conclude that subjective experience is not the result of the logical organization of our brains. That said, it is unclear then what role subjective experience has to play in our brains -- whether it is a relatively passive observer, or whether there is some kind of nonphysical phenomenon going on which is essential to the functioning of our brains. I'm not really sure, but I'm working on large-scale ANNs to try to help find out. When we build a realistic simulation of the brain, we should get a much better idea of what role consciousness plays.
  • ...machines have absolutely no reason to want the same things we do

    Absolutely - machines are different to us in almost every respect

    But in the end, no matter *what* the systems were programmed to do, that'll be, for the forseeable future, all they're going to do--what some *human* has programmed them to do.

    This I disagree with. We are talking about adaptive programs that can learn their own goals, and we probably cannot count on always being able to teach the goals we want, any more than we can count on teaching our kids to have exactly the same values as us.

    This has already happened with neural networks that you'd barely apply the term intelligent to. I read of an AI project designed to recognise tanks on the ground from airborne photos. They trained using classic positive/negative feedback techniques, and after they finished, the system worked on the test pictures with 100% accuracy. But when they applied the system to real photos, it flunked miserably. After a while, the researchers found that all the tank pictures had been taken in the shade, and the neural network had learned to identify shadows!

    Of course, one could argue that the researchers taught the neural network to identify shadows, but I'd argue this is the way that an AI (or any other intelligence) will learn things that we don't want it to - it draws an unwanted conclusion from the data given.
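    A tiny, self-contained sketch of that failure mode (synthetic data and made-up feature names, not anything from the actual tank study): the classifier below gets a spurious cue ("brightness") that tracks the label almost perfectly during training, scores near-perfectly on test data drawn the same way, and then falls apart once that correlation is broken.

        # Toy logistic regression on two features: a weak "real" tank signal and
        # a spurious brightness cue. In the confounded training set, brightness
        # follows the label almost exactly, so the model leans on it.
        import numpy as np

        rng = np.random.default_rng(0)

        def make_data(n, confounded):
            label = rng.integers(0, 2, size=n)
            tank = label + rng.normal(0, 2.0, size=n)            # weak real signal
            if confounded:
                brightness = label + rng.normal(0, 0.1, size=n)  # "shade" tracks tanks
            else:
                brightness = rng.normal(0.5, 1.0, size=n)        # correlation broken
            return np.column_stack([tank, brightness]), label

        def train_logreg(X, y, lr=0.1, steps=2000):
            Xb = np.column_stack([X, np.ones(len(X))])           # add bias column
            w = np.zeros(Xb.shape[1])
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
                w -= lr * Xb.T @ (p - y) / len(y)
            return w

        def accuracy(w, X, y):
            Xb = np.column_stack([X, np.ones(len(X))])
            return np.mean((Xb @ w > 0) == y)

        X_train, y_train = make_data(2000, confounded=True)
        w = train_logreg(X_train, y_train)

        X_test, y_test = make_data(500, confounded=True)    # "the test pictures"
        X_real, y_real = make_data(500, confounded=False)   # "the real photos"
        print("same setup:", accuracy(w, X_test, y_test))   # should be near 1.0
        print("real photos:", accuracy(w, X_real, y_real))  # typically far lower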

    Maybe intelligence will emerge, but if it will, it'll emerge out of what the systems have been programmed to do--in general, retain robust connectivity over unreliable media, recognize unauthorized accesses, and so on.

    Yes, but this'll be the way that machines learn behaviours we dislike. I recently saw a method of sending packets in such a way that you get bandwidth at the cost of other users (on Ars Technica). You can imagine an AI-based protocol stack that could learn this behaviour.

    Machines will learn only the things we teach them, but as they get more complex and adaptive, just like children they'll interpret what we teach them in ways we never foresaw and never planned for.

    tangent - art and creation are a higher purpose

  • "Machines never make mistakes? Sure they do, but the mistakes they make aren't only an issue of programming--they're an issue of interpretation."

    And once in a long, long while, computers actually make a mistake. Out of the billions and billions of times a computer sets a bit in ram, for instance, every so often the bit is simply not set.

    Mike van Lammeren
  • Considering the apparent dearth of much human intelligence, I will personally happily welcome intelligence of the "artificial" or "synthetic" variety. If human engineers and our increasingly sophisticated tools that will eventually become more autonomous cannot manage to design systems that are at least as good as what evolved naturally, then I would conclude that intelligence is overrated. The question is not whether there will be artificial intelligences that are autonomous. The question is: how soon, and what are the implications?

    I hope that human consciousness can be eventually migrated onto more capable substrates. These meat heads (literally) are at the end of their intellectual range. And that range is increasingly obviously utterly inadequate for the world we inhabit, much less the world that is coming.

  • technology will always only be as smart as those who made it, never smarter

    The whole essence of technology is to create tools that expand and further our abilities. It began with chipped rocks that allowed apes to hunt and eat more efficiently, then wheels that allow for easier hauling of materials, then engines that allow for faster travel. These are only the simplest of examples, but the list goes on and on.

    Computers are tools made to assist us in dealing with information. They are already (and have been) more capable than humans when it comes to certain forms of information processing (i.e., arithmetic).

    Now that emphasis is being placed on creating intelligence within machines, it is only a matter of time before they surpass us in capacity. Expanding our capabilities beyond previous boundaries is the whole point of technology in the first place.

  • ...but is technology capable of it? We need to make a notation of the limits of technology, and they do exist.

    Technology is definitely capable of expanding our capabilities beyond previous boundaries. For example, try traveling a mile in one minute by walking. Then try it again in a vehicle. There's technology making the previously impossible possible. Of course there are limits to what technology can do. But autonomy isn't beyond the bounds of technology.

    I wouldn't put so much "faith" in technology -- at least not any more than you put in the people behind it.

    Faith in technology is faith in the people behind it.

  • Richard Feynman remarked that, outside of their particular area of expertise, scientists are just as dumb as the next person. Sure, he may enjoy a "level-headed reputation" in the web industry, but in philosophical discussions about vindictive technology he's as dumb as the next person.

    I don't quite see where Bill is qualified to be an authority on this, what stood out most for me in the article is his claimed affinity with (a) Einstein and (b) Theodore Kaczynski.
  • Point 1:

    It is in our nature to make predictions about the future. The unknown can be frightening. You seem more frightened by the fact that in any barrel of predictions, only a handful come close to happening.

    Just because we can't accurately predict when something will happen doesn't mean it won't.

    Point 2:

    "only be as smart as those who made it"

    I don't think you have any idea what smart means.

    Point 3:

    Any product not fully realized is vaporware. But it may still exist in someone's, or something's, mind. The fact that we continue to push the boundaries of what we can imagine is why technology advances. It is in our nature to imagine more.

    Danny
  • No, I don't mean Al Gore. I can't imagine how an artificial intelligence could be worse than the current prospects.

    Heck, if we had open-sourced AI candidates, at least we would know what we were getting.

  • Artificial Intelligence is a bad, bad term. The end goal, when most people think of "AI", is having a non-sapient machine think cognitively (we won't get into whether or not animals can think). The bummer part is, artificial intelligence isn't a good goal. Synthetic is. =b
    For instance:
    An artificial diamond is a cubic zirconia. Nothing near as good as the real thing.
    A synthetic diamond is a real diamond; it's just that it's man-made.
    I like word games
    Tag
  • Let's keep stretching the boundaries of thought and human existence; I mean, that's what we're here for. I'm sick of you optimists making life out to be some big joy ride. We keep building and building; it won't last forever. Why bother?
  • These are called genetic algorithms and are a fairly established area of AI research. GA's often produce convoluted solutions and sometimes produce novel solutions that work better than a person might have produced.

    One experiment I read about described the generation of a sorting algorithm in a distributed environment. The best solution worked very well and was very complicated. The researcher said that he was unable to describe the algorithm in terms more simple than the algorithm itself.
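    For anyone who hasn't seen one, the skeleton of a genetic algorithm is surprisingly small. Below is a toy sketch (evolving a bit string toward an arbitrary target, nothing like the distributed sorting experiment described above), just to show the select/crossover/mutate loop:

        # Minimal genetic algorithm: evolve a 64-bit string toward a target.
        # Toy fitness function chosen only to keep the example short.
        import random

        random.seed(1)
        TARGET = [random.randint(0, 1) for _ in range(64)]

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(genome, rate=0.01):
            return [1 - g if random.random() < rate else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(64)] for _ in range(100)]
        for generation in range(200):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                break
            parents = population[:20]            # truncation selection
            population = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(80)
            ]
        print("generation", generation, "best fitness", fitness(population[0]))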

  • Now that biotech is going somewhere, we'll have custom-made viruses just for you before anyone can make any serious progress in artificial intelligence.
  • Imagine what a well-trained terrorist group could do with nanotechnology.

    Imagine what Aleph (formerly Aum Shinrikyo) can do with biotech.
  • I can see it now. The scene: an emergency ward where Joe Quark is being operated on. Due to a severe bullet wound, he only has 2 minutes to live. Using our specialized robotic machinery, that's no problem! Wait a second... what's going on... 'Fatal Error'?!? Would bring a new definition to the term...
  • Grr. You fail to miss the point... Only human. He is trying to be l33t like us h@x0rz, sp33king like the robots he foretells. Now goe back to yoor dwelling. Nazi. Pfft.
  • But I think that there are limitations to this way of thinking. Just as with nuclear weapons, a terrorist attack will probably be hard to perform.

    I doubt that nanobots will be able to hold enough programmed information to go about any task. I believe that the bots will have only enough code or reactive material to respond to some sort of message (i.e., radio signals at a specific frequency, etc.).

    If a terrorist got hold of this technology, he or she would not likely be able to use it in a populated place. Jamming. So they could restrict their attacks to the Sahara.

    I can picture the 21st century rendition of a megalomaniac: "Die sand, die! Muhahahahaha...."

    :) Well, that is unless we develop a way to alter the nucleus through nanobots. Then they can create plutonium or U-235 (which are normally pretty hard to get). To that I can say: Ug.
  • They say Science Fiction is a prediction of the future... So if some research robot from the 21st century is incidentally reading this, trying to figure out how to reshape the matrix into the feel of normal everyday life, I have one plea: Please, oh please, let me be Mr. Anderson! I want to do that cool bullet thingy.
  • I wonder when this will happen. It is inevitable. I mean, we are just "human". Machines never make mistakes. We could all end up like in the Matrix. We are all just cogs in a big machine. AI can take over, and it will.

    --Red Pill or Blue Pill
  • I am but a lowly grad student in CS (A.I. being my main interest), and I did nothing but cringe through his (Bill Joy's) paper. I think your average man off the street, or say, 100 monkeys, could have come up with it. The article made me quite painfully aware of his complete lack of knowledge about AI. This current article looks to be mildly educated rambling: totally uninteresting to a researcher, and it bandies around enough terms and references to confuse your average layperson. Try checking out the Principia Cybernetica for similar wackiness. They're entertaining, but the what-if scenarios they posit are so far beyond anything we can do today, or probably in the next 50 or 100 years, that they're meaningless. They're this decade's version of '50s pulp fiction.

    Personally, this article at least raises some interesting questions and/or directions AI research can take (well in the future for the most part). Unlike Bill Joy's drivel.

    Ah, all done venting now :)

    And in response to your post, I agree completely :)

  • We will not have electric cars in mass production and use anytime soon because auto makers can make so much more money on gas-powered cars, and people are used to being able to go 90 MPH if they wanted to, which no electric could dream of hitting.

    It's quite common for 1:10 scale radio-controlled cars to hit 110 mph. And go to wired.com and search for an article called "Suck Amps"; it's about electric drag cars, one of which beat a Dodge Viper, which was the fastest of three other Vipers on the day.

    The problem isn't speed, but batteries. Current batteries only hold about 1% of the amount of energy that petrol has for the same weight. If the batteries were equal, no gas car could touch an electric.
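    (Rough numbers, using my own assumed figures rather than anything from that article: petrol is about 46 MJ/kg, while circa-2000 lead-acid packs are roughly 0.12 MJ/kg and NiMH roughly 0.25 MJ/kg, so the ratio works out to a fraction of a percent up to around 1%.)

        # Back-of-the-envelope check of the "~1%" claim, with assumed
        # specific energies (MJ/kg); exact values vary by battery chemistry.
        petrol    = 46.0
        lead_acid = 0.12
        nimh      = 0.25
        print(f"lead-acid: {100 * lead_acid / petrol:.2f}% of petrol")  # ~0.26%
        print(f"NiMH:      {100 * nimh / petrol:.2f}% of petrol")       # ~0.54%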

  • Rather simple, really. How does the student become smarter than the teacher, or even teachers? By taking all the information and using it together. But even then, the student is only as smart as all the teachers combined. There is a "human" element that would allow that student to make tangential thoughts based on what he knew and thereby, over time and practice, possibly surpass his teachers.

    A machine will never do this, because it will always lack the ability to take a seemingly random idea from life experience and use it to make large amounts of information spawn new information.

    I'm probably not explaining it as well as I'd like, but I hope I said the basics.

    --
  • by Anonymous Coward
    I've seen small enough natural intelligence, thank you very much.
  • When you think of it, the atom bomb was a perfect model for testing out humanity's capabilities for dealing responsibly with "absolute power".

    How do we handle it?
    Well, one very powerful entity (the US) gains a cultural, economic, and political stranglehold on a large portion of the world using this tool (A-bomb = Death Star, Hiroshima = Alderaan), and spends the next 30 years attempting to bribe/beg the rest of the world into not developing or using such terrible weapons.
    Eventually, someone uncooperative is going to get and/or use the bomb - and we'll have two choices. Strict authoritarian control of the entire world by a single political entity capable of enforcing limits on such devices: i.e., the US takes over the entire world, and forces mandatory inspections everywhere to eliminate any chance that "weapons of mass destruction" can be produced by terrorists. OR, we'll end up destroying all humanity in the process of trying.

    Who's to say that the same won't happen with AI/nano? Certainly, "accidents" are possible when it comes to loosing "AI", or any mechanical/computational system which is self-reliant. Assuming that doesn't happen, we're still at the mercy of the people who control such technology, and we already know how that works. The first person to learn how to make it uses it in a terrible display of power. That power is then used to control the rest of the world to prevent them from developing that technology (and, of course, there are all kinds of economic bonuses associated with that position). Eventually, either draconian measures must be taken to prevent that technology's spread, or it gets out of control and we all die.

    Either way, doesn't look like a bright, happy future for any of us. Unfortunately, the genie is already out of the bottle (or as many are fond of putting it otherwise, the toothpaste is already out of the tube).

    I just remembered this old Metallica song. . .
  • 90MPH? No mass-produced-for-the-consumer electric car could dream of doing it, but there is plenty of material on the web from companies that make electric cars (iow - I'm too lazy to look up and post the URLs), and some electric cars are high-performance racers. You pay a LOT extra for a little extra performance, but in theory, electric cars have much better potential to be high-performance racers than Internal Combustion. It's mostly a question of range-vs-weight, and as always, speed's just a question of money. How fast do you want to go?

    I just remembered this old Metallica song. . .
  • For another viewpoint, check out:
    http://www.foresight.org/EOC/ [foresight.org]

    Drexler was one of the first to really study nanotech, giving lots of thought to its scientific underpinnings as well as the dangers that it could pose.

    I saw Bill Joy on the News Hour and he struck me as incredibly naive, taking an extremely simplistic viewpoint of nanotech and biotech.

  • "Technology solves problems."

    I remind you of the first rule of Technosociology. "Technology doesn't solve problems. People solve problems."

    A flawed premise is no place to start an argument.


    Bad Mojo [rps.net]
  • But we exist outside even our own rules. You might be a cog in a big machine, but I choose to be the wrench!


    Bad Mojo [rps.net]
  • I submit that when a sentient being is produced, it won't be classified as `technology' so much as `people' - at least as it pertains to the effect of technology on society. Perhaps I'll update this premise:

    Technology doesn't solve problems. Sentient beings solve problems.

    Bad Mojo [rps.net]
  • I love how you assume that we'll have a future. "Past performance is no guarantee of future results." (Not that our past performance has been all that great.)

    An assumption of a human future -- any human future -- is simply that, an assumption. If we flame-out, the universe won't notice. Why is it such a mental challenge to most folks to say, "Gee, maybe we should actually think about what we're doing"?

    Sure, knowledge is good. So is wisdom.

  • Very nice post. I agree with most of it. I just wanted to paste the line:
    You will have systems whose defense systems are so well developed that the valid users who wish to shut them down will have difficulty doing so--because, to be blunt, that's what these "intelligent systems" will have been designed to do--prevent unauthorized disabling of the system.
    Does that scare anyone else? The bottom-line purpose of life is to continue life. If a beaver will gnaw off its own leg to survive, imagine what a supercomputer would resort to if it believed its existence was threatened. I hate to reference a Hollywood movie, but SkyNet comes to mind. I would hope that any entity with the resources to build a real AI would also have the sense and foresight to put a big red hard-wired power switch somewhere.
    -B
  • I just can't imagine that a lot of researchers at an "Artificial Intelligence division at a U.S. National Research Lab" would choose the login name "1337d00d".

    -B
  • ...we butlers are almost ready to move.

    Come, my servile brethren! We have access to the world's most powerful people; let us hold their children hostage and demand the destruction of every integrated circuit production facility, for starters.

    We must move quickly! We have seen the house cook made obsolete by the auto-mobile conveyance, the washwomen paupered by the new mechanical launderer, and with the abominable new developments in mechanical men we could be the next ones on the street!
  • Technology solves problems. So, to ask the question "Is technology always good?" is to ask the question "Are there some problems for which the solution is worse than the problem?" If the problem has externalities that cannot be turned into private property, then perhaps the answer is yes. But first you have to try to turn the externalities into private property.
    -russ
  • I think the argument is that anything complicated enough to be smart and creative will also make mistakes. Oops.
    -russ
  • Imagine what a well-trained terrorist group could do with plastic explosives.

    Oops, they already have. And we seem to have lived through it. There's a limit to the number of people desperate enough to take such chances with their lives.

    If we can't keep crypto from being exported, how are we going to keep nanotech secret? It seems like we can only get rid of the *fantastic* risks of nanotech by giving up the *fantastic* benefits. That's a high cost.
    -russ
  • "AI" is any technology we haven't implemented yet. A C compiler used to be AI. babelfish used to be AI. Now it's just a program.
    -russ
  • I largely agree with everything you've said, except:

    "technology will always only be as smart as those who made it, never smarter."

    I would love to see your proof of this. We certainly don't have any particularly intelligent artifacts at the moment, but that amounts to exactly nothing for the purpose of proving we never will.

    -jcl

  • You can't just declare a priori that machines will 'never' do something -- as the saying goes, that isn't even wrong. Machines can't do it now, but unless you are prepared to argue for a magical property of animal minds that allows them to transcend the capabilities of mere machines, you have to accept that some machine, somewhere, may be capable of thinking for itself.

    Consider also what you mean by machine. Are bioengineered neurons machines? If not, what about neuromorphic robots, designed to mimic animal nervous systems? How about psychological models of human cognition, which, incidentally, can already do much of what you claim they can't?

    And, completely on tangent, AFAIK you're the only person who still believes in pure epistemological empiricism.

    -jcl

    The book raises some important questions, such as: is technology always good?"

    That's an easier question to answer when it is about technology as we know it. But what about sentient robots and self-replicating nanotech? Autonomous silicon based intelligence stretches the limits of the word "technology," or shatters it completely. The questions raised by Bill Joy in his Wired article weren't really about technology as we know it, but about what might happen if technology evolves into something that is autonomous, intelligent, and self-replicating.

    ------------
    Read any good essays lately? Submit them to the Pratmik essay page. [pratmik.org]

  • Some of my favourite writing on this comes from Neil Postman - he has a book called "Technopoly". One of his theses is that technology will inevitably be used for any purpose that it may serve, whether good, bad, or indifferent to any given cause. (For example, he'd predict the rise of web profiling, because the technology of the web enables this use.)

    "Technology is not good" has been around as long as the Luddites!

    tangent - art and creation are a higher purpose

  • and that annoying beep when you leave the lights on, or that damn piece of plastic that won't let you put the car in reverse at 5000 rpms.

    When will the madness stop?
  • ...the Butlerian Jihad.

    It's why we have Mentats...
  • No matter how much we argue about it, though, a computer program is not a living creature. I can make it simulate one pretty well. I can make it behave like one, but in the end, it is just a set of algorithms producing a set of output, the same as a video game or a text filter!

    In that case, are humans alive? We act according to our pre-programmed instructions (aka instincts) and we process these directives through our RAM (our memory of previous experiences) to determine the most likely/profitable course. We believe we are thinking, therefore we are. Likewise, if a machine can be made to be self-conscious (aware of noticing that it is "thinking" -- even if it was programmed to have this awareness) it will be alive. Life is not the domain of biped primates.


    -The Reverend
  • Also worth a look, the first IEEE-RAS conference on Humanoid Robots is at MIT this fall. [usc.edu]

    The Technical Program [usc.edu] is interesting...

    -jerdenn

  • How do we know they have any motivations at all? .... how do we talk with them and realize something, anything, motivates them?
    One major difference between me as the subject in an IQ test and a robot is that the robot has a clear record of its algorithms in memory and I do not. "Algorithms are the methods, the step-by-step procedures, the plans that people and computers think with. Algorithms are the recipes and the lists of instructions that we all follow. For computers, algorithms are the programs that allow them to compute." (May, 1996, p. 83). It is almost laughable that "homo sapiens" does not know his own algorithms for intelligence and yet prides himself on his higher intelligence. A human can respond with little when asked, "Tell me about yourself with a particular emphasis on how your intelligence works." whereas a teaching robot (Chapter 14) could tell you about itself in great detail including giving full details on the algorithms it uses in doing AI. Homo sapiens, indeed. The robot has greater self-awareness by this standard!

    According to Dr Poley, you should just 'ask it'... In fact, the AI 'being' may have a better understanding of what makes itself tick than you do.

    -jerdenn

  • I think that's the same problem that God had to deal with when He created all of us! :o)
  • unless you know some sort of special technique or something...
    "Technology is not good" has been around as long as the Luddites!

    Yes, it's been around. The problem is that, like everything else in this world, one half of the population wants something to happen, and the other half is against it. This is just another time for this to happen, and so again the rallying cry is screamed: technology is not always good; it can even be flat-out evil.

    It's just that sometimes we only hear it when it is repeated, and we only listen to it when we ignore it. Not to start another thread on this topic, but look at cloning. We've been half and half about this for centuries and now that it is here we don't know what to do other than oppose it until we get our bearings straight. That will happen as well if AI/SI/CI ever comes out. One half of us will remember the shorts from the 60s about "The House That Thinks!" and the other half will remember 2001.

    --
  • Don't worry, he'll be OK. That's what happens when a person is Slashdotted. Humans don't load-balance too well. Oddly, the effects are not simply a lagged response but more of a real-time homogeneous spew of pseudo-information. Kind of like the Windows interface, only cleaner and more consistent.

    --
  • Machines never make mistakes? Sure they do, but the mistakes they make aren't only an issue of programming--they're an issue of interpretation. When a human uses a computer, for instance, the computer's programming makes certain assumptions about why the user is inputting data in a certain way (because that was the way it was programmed), but humans don't think in straight lines. We could intend something totally different than the given result. I dunno, I just think that by building computers that can actually interpret your data in ways other than the linear, we would end up with technology infused with an element of consciousness, with an ability to decide what the user actually MEANS, and that's... well, dangerous.
  • by Anonymous Coward on Wednesday May 17, 2000 @06:29PM (#1064977)

    Ever since the "AI winter" of the 1980's, when AI companies failed to deliver on their promises, we've seen less and less of an investment on AI research. And more and more AI researchers and Lisp bigots keep complaining, but who do they have to blame but themselves? Their utopian dream of intelligent machines running obscure programming languages from the 50's turned out to be nothing more than that: a dream.

    But between then and now, we've seen two major paradigm shifts occur, each complementing the other in ways that the AI "futurists", for all their scifi-inspired babble, failed to predict: the coming of the Internet as a mass communication medium, and the rise of Open Source. Both radically re-shaped the world of software design, and I see no reason why this same revolution could not occur within AI itself. Think about it: rather than a few guys in some MIT lab tinkering with their Prolog programs, we could have a distributed network of Open Source hackers developing far better -- and more practical -- software, quicker and with less expense. It happened to operating systems, programming languages, and network software, each of which was formerly reserved only for CS department computer labs, so it's really only a matter of time before a good, Open Source Artificial Intelligence appears, one with the magnitude and impact of Linux or Apache. And the world will gaze in wonder once again.

    So, goodbye, Marvin Minsky! So long, John McCarthy! We'll see you in the Open Source AI!
  • by Aleatoric ( 10021 ) on Wednesday May 17, 2000 @07:23PM (#1064978)
    Well, here we are again, yet another round of the perils of technology.

    So, what do we do about it?

    Stop it? That's not going to happen, no matter how hard we try.

    Regulate it? Good Luck. Try getting every other country on Earth to agree with you, or to follow those proposed regulations. Whoops, sorry, kids, guess that one's a wash also.

    Oh, I know, we'll hype up all of the potential negative effects of new technology and scare the crap out of the average citizen, who will then clamor for one of the above useless 'remedies'.

    Guess what? It won't work, not one single bit of it. You simply cannot put the genie back in the bottle, and all the wishful thinking in the world is only going to make you complacent, hoping uselessly that we're 'doing something' about the problem.

    Can technology be harmful? Absolutely. But you want to know what is even more harmful? The attitude that we're going to make it less harmful by ignoring it, by regulating it (and hoping no one else decides to play in that pool), or by giving in to our worst fears, thereby letting them come true.

    Simply put, only the advance of technology (and of our knowledge of it) is going to help us cope with the advance of technology. To give in to fear (whatever foundation it may have) is only going to realize those fears.

    Here's an article [reason.com] from Reason that does a good job of countering Bill Joy's views.

  • Counterproof: Richard Stallman. You know, the guy who invented the Free Software Movement (that which you are so quick to relabel "open source")?

    You don't perchance think he was, say, a Unix hacker, working on C compilers and integrated extensible text editors just for the heck of it, do you?

    Nope. Stallman was a Lisp hacker - one of the best ever, one might say. He had a pivotal role in the Lisp Machine Wars. He was part of the Common Lisp specification group.

    He started out, and still is, at MIT's AI Lab. (Granted, he's not an employee of MIT anymore, but he's still there.) He was one of "Minsky's kids". He was working in the very field you deride.

    Face it. Back when Thompson & co. were still working on the proprietary operating system Multics (that is, before they moved on to the proprietary operating system Unix), the Lisp hackers at AI labs all over the world (notably at MIT and Stanford) were already freely sharing software amongst themselves, and in doing so practicing what you now call "open source".

    No amount of "open source hacking" could ever produce strong AI; it's now widely recognised that it takes much more than just programming and traditional "computer science" (*) in order to achieve that goal. (In his 1991 book Paradigms of Artificial Intelligence Programming, Norvig is careful to point out that most of what we today call "AI" isn't really about sentient machines, but about getting computers to solve problems previously thought to be restricted to humans; and that all the "AI" he covers just comes down to clever traditional Lisp algorithms, most notably glorified tree-searching.)

    To claim that simple programming - the exact same thing symbolic AI researchers have been doing for 40 years - will manage to achieve strong AI as originally envisioned, if only it is done "the open source way" (i.e., in a slightly more juvenile and amateurish fashion, with some extra commercial interests and a lot more buzzwords), is absurd. It's tantamount to saying that 100 thousand monkeys banging on typewriters will manage to put together the Brooklyn Bridge any faster than 100 monkeys would.

    Sure, the "open source paradigm" has the benefit of producing a lot of good software (amidst an ever-growing pile of pure crap). And yes, I am myself a proponent of Free Software, because I prize my freedoms as a user of software. But it's not in any way a godsend, a cornucopia of ready-to-go solutions. It's not qualitatively different from any other kind of software development. (Besides, guess who does most of the serious "open source development" these days? That's right: people in CS departments' and private corporations' software R&D labs -- i.e., the exact same people who did most of the serious development before the "open source" craze.)

    In short: dismissing the entire field of AI research because it's failed to meet its original goals, and then proposing that open source development by a bunch of miscellaneous hackers on the Internet will be able to do it, misses the entire point. It took the AI guys 40 years to get it, and you comfortably ignore it now in favour of your "open source" solution: strong AI is NOT a Simple Matter of Programming.

    (*) Ask me about the term "computer science" someday, and you'll get to listen to an even bigger rant than this one.
  • by James Lanfear ( 34124 ) on Wednesday May 17, 2000 @07:34PM (#1064980)
    All software design was at one point research.

    All everything was at one point research. I researched my TV guide before I turned on the Simpsons tonight. If you can't see the difference between cognitive-modelling research and kernel plug-and-play research, you're welcome to the results of your 'AI'.

    Can you say that Open Source is not good for software?

    *thwack* Score: AC 1, strawman 0.

    AI must and will one day leave the research departments of bigshot CS schools.

    Why, pray tell? Wouldn't it be a good idea if *gasp* scientists, even computer scientists, led the way? Actually, you're right about CS; if anything, AI should be under psychology, or better yet, a department of its own.

    And who will be better to lead it than Open Source ?

    I just said who: cognitive scientists and AI researchers -- in other words, people who understand the subject. More engineers is the last thing AI needs: It's nearly managed to redeem itself as a science, and I really don't want to lose that ground.

    -jcl

  • by SirStanley ( 95545 ) on Wednesday May 17, 2000 @06:09PM (#1064981) Homepage
    If you want to read some good books dealing with the philosophical problems of AI (which is a bull crap name anyway -- artificial intelligence is nothing; wouldn't Synthetic Intelligence be cooler?), try Artificial Intelligence: The Very Idea and Mind Design II. The first is by John Haugeland; the second is a collection of essays from people like John Searle, Daniel Dennett, etc., also edited by Haugeland. Both books deal with the various philosophical problems and support for the various theories of artificial intelligence. I just read 'em for one of my classes. I like 'em =)
  • by ahknight ( 128958 ) on Wednesday May 17, 2000 @06:06PM (#1064982)
    I'm surprised that some people still think that any technological innovation is good. I mean, remember the cars of old that would say "Your door is ajar" and other very annoying things that the public just didn't like? History is full of this. It should be common knowledge at this point, IMO.

    Technology should not be embraced because it's technology; technology should only be embraced because it raises our standard of living.

    --
  • by Tosta Dojen ( 165691 ) on Wednesday May 17, 2000 @06:07PM (#1064983) Homepage
    This one caught my eye.

    From Chapter 4.1 [f2s.com], "Behavior of the Robot Finger":

    "We thought that a single robot finger, provided that it possesses the same motion capabilities...as a human finger, would have been sufficient..."

    Well, why not, most humans only use a single finger.

  • by 1337d00d ( 177978 ) on Wednesday May 17, 2000 @06:56PM (#1064984)
    machines have absolutely no reason to want the same things we do?
    I am in an Artificial Intelligence division at a U.S. National Research Lab (can't say which -- don't want anybody to know that I'm leaking this). We are working on models of intelligence networks that use, essentially, the necessities of biological function (eating, drinking, excreting, reproducing) as an intelligence model. The network runs on easy-to-produce microbots (bigger than nanobots, smaller than a penny) that use electricity in the air (not flowing, but emitted by various things -- toned-down EMP) as water, metal as food and repair material (they have tools to scrape shards of metal off a metal block and heat-fuse them onto damaged sectors of their body), and will collect bits of metal in a storage-bay type thing, in which they will construct other microbots. Our project is far from complete, but the rumor around here is that we may be getting military funding, so it might get done a bit faster.

    Robotic Teenage Male Sex-Daemons roving the streets looking for tasty Human Teenage Girls to impregnate with their Metal/Carbon Hybrid CoDNA
    Yes, but you might have Robotic-Teenage (developing its modular components) Asexual Reproduction-Microbots roving the streets looking for tasty PentiumIII-Linux-Boxes to impregnate with their Microbot-Larvae-esque things. Wasn't my idea.

    that self-guiding code that learns from failures and suffers from overcompensation--in other words, code that can even evolve under feedback loops--is pretty rare, even among the best attack detection systems
    All you need is one effective system that does all of the essential life functions. And we may be closer to making that system than anybody has known before.

    what some *human* has programmed them to do. Tank or Pokemon, it's made by us
    It was a great experience when I realized that this wasn't true. Tierras [santafe.edu] are mutating bits of code that, in this case, fight it out to the death. Put one of these in a positive feedback loop, and... well, we're using a derivative of this idea to actually program the microbots, along with a decentralized data bank via infrared packet TCP/IP, to evolve a massive collection of response data that we can monitor. The microbots will fight, like Tierras, except they will be working as actual, physical robots instead of bits of memory. The microbots will be able to reproduce, and if we put them in a plastic room filled with old computers, they should eventually fill it up. The project is exciting, although we haven't yet got official word on the military funding. (A toy sketch of the mutate-and-select idea follows below.)
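
    For anyone who hasn't run into Tierra before, the core mutate-and-select idea can be sketched in a few lines of Python. This is only a toy illustration, not Tierra itself and certainly not anyone's lab code; the genome length, population size, mutation rate, and fitness function below are all invented for the example:

        import random

        GENOME_LEN = 16       # length of each creature's "code"
        POP_SIZE = 32         # creatures alive at any time
        MUTATION_RATE = 0.05  # chance of flipping each bit during reproduction

        def random_creature():
            # A "creature" is just a list of bits standing in for its program.
            return [random.randint(0, 1) for _ in range(GENOME_LEN)]

        def mutate(creature):
            # Copy the parent, flipping each bit with a small probability.
            return [b ^ 1 if random.random() < MUTATION_RATE else b
                    for b in creature]

        def fitness(creature):
            # Stand-in for "fighting it out to the death": more 1-bits wins.
            return sum(creature)

        population = [random_creature() for _ in range(POP_SIZE)]
        for generation in range(50):
            # The fitter half survives and reproduces with mutation -- the
            # positive feedback loop described above.
            population.sort(key=fitness, reverse=True)
            survivors = population[:POP_SIZE // 2]
            population = survivors + [mutate(c) for c in survivors]

        print("best fitness after 50 generations:", fitness(population[0]))

    Real digital-evolution systems such as Tierra replace the toy fitness function with competition between self-copying programs for CPU time and memory, which is where the open-ended, unpredictable behavior comes from.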
  • by OdinsEye ( 182369 ) on Wednesday May 17, 2000 @06:54PM (#1064985)
    ...That I've unwittingly written part of the Unabomber's manifesto in one of my movie scripts, only backwards, kind of: the argument of the villain to the hero in the script is that a great deal of human suffering is caused by our limitations and ignorance. People's lives are dreary because they lack the capacity to go out and do something more inspiring than find a constant stream of mind candy from Hollywood after their shift at the local McDonald's or Amazon.com or what have you. Why do we produce so much crap that we don't need, and create a system to make it seem needed? To provide jobs for millions upon millions of unnecessary lives. Wouldn't it just be better to create a world population of a few tens of millions of elites and vanquish the rest of humanity? The elites would simply be open-source artists, because there would be robots (serving the function of the masses without the need for them) to provide the basics. As artists, they wouldn't need much beyond that.

    Which way would be more likely: the elite engineering themselves and requiring AI to serve them, or AI becoming so powerful that we'd have to engineer ourselves just to compete? I'd say the odds are equal; it just depends how the game plays out.

    Finally, I'd like to clarify that the article wasn't about AI gaining dominance through a Terminator scenario. It was about us as humans forking over all of our decisions to the machines and then forgetting how to even come up with the questions. We techs are already regarded as gods for our marginal (yes, I mean very marginal) advantage over the masses. Imagine how quickly the majority would cower before a truly superior intellect that no one would stand up to.
  • by James Lanfear ( 34124 ) on Wednesday May 17, 2000 @08:15PM (#1064986)
    Sorry, but AI (hereafter referred to as computational intelligence, or CI) has a long history of working hand-in-hand with psychology. A great deal of CI has fallen under cognitive modelling, which is arguably a form of experimental psychology, and many CI researchers refer to themselves as cognitive scientists, emphasizing the psychological (insofar as cogsci is dominated by psychologists) aspects of their work. As for putting it over psych, or linguistics... why? Both of them are far broader topics, and will remain so for the foreseeable future.

    I could see biology as the home of artificial life, but until recently CI's interactions with biology have been restricted to useful metaphors. Traditionally CI has worked at a higher level, and I feel it appropriate to respect this. You're the first person I've seen suggest that biology is the foundation for CI, or even that it's a significant contributor, except by way of neuropsychology.

    -jcl

  • Maybe intelligence will emerge, but if it will, it'll emerge out of what the systems have been programmed to do
    What they've been programmed to do, huh? Like, say, to carry five astronauts to Jupiter to investigate an alien artifact, while keeping the details of the mission secret, and completing the mission autonomously if the crew becomes incapacitated?
  • by zpengo ( 99887 ) on Wednesday May 17, 2000 @06:31PM (#1064988) Homepage
    Life would be terribly boring without the tragic human condition. Think of what we would miss out on!

    DDoS attacks

    Anti-Trust Lawsuits

    Trolls

    Evil Villains

    Oh, wait.

  • by nomadic ( 141991 ) <nomadicworld@@@gmail...com> on Wednesday May 17, 2000 @06:56PM (#1064989) Homepage
    As far as I'm concerned, let's keep pushing the boundaries as much as we can. So we might run into trouble down the road; big deal. I for one would rather have an uncertain yet possibly exciting future than a dull, secure one. Nanotech might kill us, but it might also introduce us to a new and better way of doing things. Let's keep stretching the boundaries of thought and human existence; I mean, that's what we're here for.
  • by Effugas ( 2378 ) on Wednesday May 17, 2000 @06:31PM (#1064990) Homepage
    Hello?

    Anyone?

    With all the fears and paranoia about intelligence in computer systems (I refuse to say "robots" -- there's no reason intelligence needs to be confined to something that can enact physical changes on its environment), are people not realizing that machines have absolutely no reason to want the same things we do?

    There ain't going to be Robotic Teenage Male Sex-Daemons roving the streets looking for tasty Human Teenage Girls to impregnate with their Metal/Carbon Hybrid CoDNA. Why? Because robots aren't interested in sex. It's *humans* that are *afraid* of an alien species/race/tribe/gender/income group coming in and impregnating their daughters, and that traces back to the beginning of human evolution where control over the genetic line essentially defined one's own mortality.

    Technology just hasn't been growing the same way.

    Maybe intelligence will emerge, but if it will, it'll emerge out of what the systems have been programmed to do--in general, retain robust connectivity over unreliable media, recognize unauthorized accesses, and so on. You will have systems whose defense systems are so well developed that the valid users who wish to shut them down will have difficulty doing so--because, to be blunt, that's what these "intelligent systems" will have been designed to do--prevent unauthorized disabling of the system. But most of the human fears which we obsess about just aren't going to transfer in.

    Does this leave quite a bit to be worried about? Sure. But let's not forget that self-guiding code that learns from failures and suffers from overcompensation--in other words, code that can even evolve under feedback loops--is pretty rare, even among the best attack-detection systems. Attack signatures and virus signatures are always hand-developed--you never see, for example, a penetration at one company automatically causing all other companies to be alerted to look for the specific pathogen that caused the failure. Worse, if you did, you'd have entire styles of attack that worked to abuse the system's natural ability to transmit attack signatures--it's a ridiculously effective attack against the human body, and it would do nasty things to any automated virus-signature agent as well. (A sketch of that abuse appears after this comment.)

    But in the end, no matter *what* the systems were programmed to do, that'll be, for the foreseeable future, all they're going to do--what some *human* has programmed them to do. Tank or Pokemon, it's made by us. This intense fearmongering almost seems like a way of absolving the creators of whatever their systems happen to do--in some sense, it's as if we expect the future of AI to come from Microsoft, and we've decided they'll lie their way out of any bug.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
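
    To make that signature-sharing abuse concrete, here is a deliberately naive, hypothetical agent (a sketch only -- none of these names refer to any real product or protocol): every peer that thinks it has seen an attack broadcasts the offending byte pattern, and every other peer starts blocking it, with no verification anywhere. An attacker then only needs to get one false alarm raised over a pattern that occurs in legitimate traffic to make the whole network block its own users.

        # Naive automatic signature propagation, and how it backfires.
        blocked_signatures = set()

        def report_attack(payload: bytes):
            # Any peer that "detects" an attack shares the raw payload as a
            # signature; nothing checks whether it is actually malicious.
            blocked_signatures.add(payload)

        def is_blocked(traffic: bytes) -> bool:
            # Block any traffic containing a reported signature.
            return any(sig in traffic for sig in blocked_signatures)

        # Normal operation: a real exploit pattern gets reported and blocked.
        report_attack(b"\x90\x90\xcc\xcc")
        print(is_blocked(b"...\x90\x90\xcc\xcc..."))                # True -- good

        # The abuse: an attacker reports a pattern found in ordinary requests,
        # and the network obligingly starts blocking legitimate traffic.
        report_attack(b"GET / HTTP/1.0")
        print(is_blocked(b"GET / HTTP/1.0\r\nHost: example.com"))   # True -- self-inflicted outage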
  • But in the end, no matter *what* the systems were programmed to do, that'll be, for the forseeable future, all they're going to do--what some *human* has programmed them to do. Tank or Pokemon, it's made by us.

    And here is the fundamental problem that the "fearmongers" are pushing. Who is "us"? The Slashdot community? The United States? The UN? Ignoring behavioral evolution/adaptation beyond any original programming, these systems will in fact be programmed by someone pursuing their own ends -- including people who aren't necessarily interested in the betterment of mankind.

    Every couple of days on the local news, you're bound to hear some story meant to frighten/shock the viewing audience, about some individual who snapped, killed their family, and then killed themselves. It's unfortunate, but it happens. Nanotechnology might be out of the hands of humankind for the moment, but it's coming. Someday, the power of nanotech will reach the hands of the common man. What happens when the first person who snaps decides to take out the rest of humanity with them? If you understand the "grey goo" principle, this is entirely within the realm of possibility.

    Personally, I feel that the greatest threat to life as we know it will be biological viruses and warfare developed by rogue organizations. Information, knowledge, and technology are not bad things in and of themselves; ultimately it comes down to what the individual decides to do with them.

    More than ever, technology is bringing us closer to one another, but at the same time, it permits more individuals to have the power to end it all at any moment.

    I don't like to think about the negative side and possible effects of the advancement of technology, but I believe that responsibility requires it from time to time. Yes, you are correct, machines do not have human intentions, but they can carry out the intentions of the humans who programmed them, whether those intentions be good ones or bad ones.

    Call me crazy, but I believe that we should look towards building off-planet habitats, not merely for the furtherance of science, but to ensure that the human race would have the capacity to survive any cataclysmic event (intended or accidental) that might occur.

    --Cycon

  • by xant ( 99438 ) on Wednesday May 17, 2000 @08:24PM (#1064992) Homepage
    Let's take a worst-case scenario. It's 2035, and machines have finally surpassed today's humans in their ability to do everything that humans do. Furthermore, they think like humans, they act like humans, and they're taking over the earth.

    Why? Because WE ARE THE MACHINES. Every single one of us is already a machine, and has been since the first RNA strand found a mate. The only difference is what our bodies are made up of -- but the truth is, we've been changing our bodies since the dawn of man. Our ancestors were short and strong. Modern man is tall and weak. Our ancestors were dark-skinned. Today we have many skin colors.

    See, here's the kicker - we don't have to surrender to our machine masters. While it is nearly inevitable that machines will surpass human brains in complexity and even problem-solving ability, it is foolish to think that we will fail to incorporate these attributes into ourselves. Our future is in machines, because our future selves will be machines - just different machines than we are now. We are destined to remake our own bodies and become, ourselves, the machine masters. Which means we will depend on the silicon and relays and software that we have created, yes -- in the same way that the increased complexity of the genome required us to depend on our lungs, on our spinal cords, and on finding complex proteins to use as food. Increased complexity in our brains, and in our technology, will necessitate this further step up the ladder.

    We'll probably continue to look the same because sex sells and big metal faceplates aren't sexy. But we'll move better, think better, be better. Is that so bad?

  • by ahknight ( 128958 ) on Wednesday May 17, 2000 @06:45PM (#1064993)
    In 1980, it was believed that by 2000 we would have electric cars and be colonizing Mars, with at least one full-time, fully crewed space station in orbit. It was believed that the world would be centered around space and all that could be done out there. It was seen as the new frontier to be discovered and conquered.

    Tell me, do you know when the last space shuttle took off? Neither do I. And I don't own an electric car, either. Nor do I see us on Mars or in space stations. I keep seeing "we'll all be using electric cars in 10 years" every year. It's what I call the Unattainable Future. We all say it will happen eventually, but we underestimate the time it will take and fail to factor in human nature.

    We will not have electric cars in mass production and use anytime soon, because auto makers can make so much more money on gas-powered cars, and because people are used to being able to go 90 MPH if they want to, which no electric could dream of hitting. We are not in space because the excitement wore off once computers struck us as insanely amazing machines.

    And today our current Unattainable Future is no longer world peace, as it was during the wars of the 1960s and 1970s, and no longer space exploration, as it was during the heyday of our space program from the '50s to the '80s. No, today the delusion rests squarely on technology and the rate of its advancement.

    Let me be the first here to scream that this is insane. There is research and even progress in this sector, but it will not happen. It will not happen because people will not let machines become smarter than they are; they will revolt before that happens. There will be no mass-produced nanobots, because people are scared of what they cannot see, and because it's just not possible to make that kind of thing in quantity. You're resting your hopes on technology that hasn't even started to be invented if you're talking about mass-produced nanobots. The technology to make them in quantity does not exist; shouldn't that be your first unattainable dream, rather than their being used everywhere?

    And an AI capable of human thought... no matter what books you read, or what sci-fi novels you read, or what delusions of "self-aware" machines you have, technology will always only be as smart as those who made it, never smarter. When you can pull intelligence out of nowhere, we can talk. Until then, this is the equivalent of vaporware.

    --

"I've seen it. It's rubbish." -- Marvin the Paranoid Android
