Guidelines For Nanotech Safety

aibrahim writes "The Foresight Institute has released its guidelines on molecular nanotechnology. Background information on the dangers can be found in Engines of Creation, in the chapters Engines of Destruction and Strategies of Survival. The document describes how to deal with the dangers of the coming nanotech revolution. Among the recommendations: making the nanodevices dependent on external factors, such as artificial "vitamins," and industry self-regulation. The guidelines were cosponsored with the Institute for Molecular Manufacturing. So, is it enough? Is it too much? What measures should be taken to secure our safety during the nanotech revolution?" The Foresight Institute sounds like hubris, but it's got a masthead that fairly drips with smart people, like Stewart Brand and Marvin Minsky. Remind anyone of Asimov's Three Laws of Robotics?
  • Technology rox.

    World Wars are a human problem.
    The proliferation and invention of missiles is a human problem.
    The fact that people are INTIMIDATED (as in worship) by EPCOT Center instead of INSPIRED (as in building) by it is a human problem.

    The fact that we want everything AUTOMATED is also a human problem. (Ironically, this flaw gave us Unix, so I'm not too pissed.) No one wants to create a thing from beginning to end. They just want everything EASY. Convenience is fine; spoon-feeding on demand, no thanks.

    The fact is we don't want to be involved in the world we live in. That's how nanomachines will be a problem. Our incessant need to postpone and procrastinate will cause us to ignore important details, the kind of details that blew up Challenger.

    What's important is that a tool be researched as a whole. Put the knowledge into everyone's hands, not just a few. Let everyone be aware of technology.

    As for the wars...
    Every development begins as a solution to a problem. If it does not become used by many people in many DIFFERENT ways, it never stops being a bargaining chip in a power play. The way to kill the danger is to give people reasons to use it for something other than power.

    Once you control something, only those seriously interested would care to get one.

    How do you explain that no one buys semi-automatic disassembled safety-pin guns to kill people?

  • I was referring to terrorists. They don't have access to the nuclear capabilities that most other countries do.

    And as for nations that are not in the United States Fan Club, just go watch Dr. Strangelove. [filmsite.org]

    Wargames, too.

    Face it, man. None of these nations are crazy enough to let loose The Bomb.

    Hopefully...
  • Should nanotech be open-sourced... :)
  • Too often, technologists embark on a new project thinking the same way: "Let's make it work and let someone else figure out how to secure it."

    Thus we end up with useful protocols like SMTP that are also prone to abuse. These guys have the right idea, i.e., let's talk about security FIRST, so that the right measures will be built right into whatever mechanism does end up working.

  • The trick would be preventing self-replicating nano-bots from evolving around their dependence.
  • You're probably thinking of

    "How to Mutate and Take Over the World: An Exploded Post-novel"
    by R.U. Sirius, St. Jude, and the Internet 21. I can't tell you the ISBN because it's out of print, and my copy has gone to the Great Box'O'Books In the Back Closet somewhere. It was lots of fun, partly due to the collaborative writing process (they took donations), but, well, it wasn't all that good outside the "you had to be there at the time" context.
  • by billstewart ( 78916 ) on Saturday June 17, 2000 @11:25AM (#996108) Journal
    Yes, there may be Hubris at Foresight, but the Masthead Name you need to mention is Eric Drexler, author of "Engines of Creation" (ISBN 0385199732), the seminal 1986 book in the field, and the main promoter of nanotech for the last N years.
  • Not really a bug, per-se, but an "evolution".

    First off, it wasn't the robots who did it!

    Daneel altered his laws by placing humanity as a whole above any single person, a.k.a. the Zeroth Law of Robotics. Giskard couldn't, which is why he failed after hurting what's-his-name telepathically (but not before making Daneel telepathic).

    Daneel saw the toxification of Earth as necessary to save humanity, and therefore higher than the First Law, which would have forced him to kill/maim the "Spacers" who did, in fact, contaminate the Earth (at Three Mile Island no less!).
  • Exactly what points would those be? That nanotech has the possibility to be dangerous? Took a real genius to figure that one out, didn't it?
  • So after writing my soon-to-be-famous nano-bio-techno-paranoid thriller "Acts of the Apostles" (www.wetmachine.com) (see review by Hemos, May/00), in which I basically said (in the words of other /. posters) that the genie cannot be put back in the bottle and that people should maybe be concerned about it, I sent a few notes to the Foresight Institute. Most of these notes were ignored, but I had one "funny" exchange with one of the dudes on the Foresight roster, who told me, basically, that I wasn't qualified to have an opinion. He reached this conclusion because (a) my website violated generally accepted website design principles, (b) I think Ayn Rand is a crackpot Nazi, and (c) I didn't show proper respect for the wise old graybeards of the Foresight Institute.

    The Foresight Institute and their nonprofit, tax-free "everything is just peachy" ilk really believe that they've got everything under control, and that the rest of us should shut up and let Eric Drexler figure the future out for us. Their belief in the goodness of all this stuff is basically a religious belief. Since they consider themselves eminently rational scientists, there's nothing that riles them more than to be called religious cultists. Which is why my book is called what it is: it's a pie in the face to the techno-idolators. Call me the Roger Moore of the neo-luddites.

    John Sundman, soon-to-be-famous novelist.
  • I think the original reference to drugs was to the attempts to control illegal drugs, which have been so phenomenally successful. The US has almost gone to war trying to prevent Saddam from researching nuclear weapons, and it's quite questionable how successful they've been. Nanotech research can be carried on in a basement lab, and it doesn't require access to very rare materials to be successful.
  • My 386/33 never found a way to adapt and evolve.

    What you're actually suggesting is that nanotechnology = artificial intelligence, which may be true to some degree, but even the most intelligent AI nanobot is still not as smart as, say, a mouse.

    "...life will find a way to adapt."

    Robots don't count as "life," last time I checked.

  • "Terrorists will have access to nanotech, and the question is will they use it, and will the masses have any defence from it."

    I disagree. I suspect that nanotechnology, in any form that poses a genuine threat to the population at large, will probably cost a great deal more than terrorists are willing to pay.

    Also, I wonder if EMP is a viable defense against nanotech?

  • The facts of nanotechnology are very scarce at the present time. Even informed discussions of the subject rely heavily on speculation about what may or may not be possible. Presented below are some assumptions that often crop up in these discussions, along with my $.02 about each one.

    1. We will one day be able to produce self-replicating nanites capable of destroying life on earth; possibly all life.
    This is unknown at the present time. But it's better to assume the worst than to get surprised by it. So, let's assume this for the time being.

    2. We will be able to control the propagation of these nanites by some technological means.
    Also unknown. But, I'm willing to buy this for now. I think that, given enough time and talent, we're clever enough to pull this off.

    3. The nanites, themselves, might circumvent our controls by a process of mutation and natural selection.
    This seems likely to me. Everything that reproduces also evolves. The ability to reproduce without limit would afford the affected mutants a huge survival advantage. I think that such species would be highly favored.

    4. At least some people will be able to circumvent these controls deliberately.
    Sounds right.

    5. It will be possible to control access to the technologies needed to produce deadly, self-replicating nanites.
    This is completely unknown, and probably the most worrisome aspect of the present inquiry. Nuclear proliferation is relatively easy to control due to the difficulty of producing fissionable material of sufficient quality to make a weapon. Biological weapons are easier to produce, but harder to deploy effectively. Chemical weapons lie somewhere between nuclear and biological weapons in terms of ease of manufacture and deployment.

    If the technical challenges of producing nano-weapons are very great, then it will be feasible to limit access to the technology. But it is extremely dangerous to assume that they will be. Indeed, the phrase "cheap and ubiquitous" is often used in connection with nanotechnology, and universal access is touted by its proponents as a key advantage of the technology.

    If someone like Kip Kinkel [cnn.com] has the ability to produce this type of weapon, then we really do have something to worry about. Looking at Drexler's own scenarios, this is not at all inconceivable.

    ***

    In his novel "3001: The Final Odyssey", Arthur C. Clarke introduces the idea of a nanotechnological device used for virtual reality and knowledge acquisition. The device also detects mental illness. In the book, every person, upon coming of age, has this device applied. If the person is found to be psychologically fit (i.e., unlikely to act out in dangerous, destructive ways), then s/he gets access to all known knowledge. Otherwise, presumably (though Clarke never states this explicitly), the person would be treated.

    Although the privacy issues raised by such a regimen are pretty disturbing, I think that I would favor this type of prophylactic. Having considered this at some length, I have concluded that the privacy problems would be easier to solve than the nano-weapons proliferation problem.

    Just for the record, I am a great proponent of nanotech. I want the benefits. That's why I think that we should take the risks seriously.

    --

  • I'm sure you had an excellent reason for not putting spoiler warnings in that posting. I can't wait to hear what it was.

    The fact that the book is 15 years old is a pretty good reason.

    --
  • That sudden eruption of madness from Bill Joy had a very obvious cause: he was sitting right next to John Searle! The madness of illogic is highly contagious.

    (Ray Kurzweil was there as well, but his mind is at least 20 years ahead of those of the other two, so their Neanderthal pack rantings were just "Ughh, Ughh, Grunt!" to him.)
  • Huh? They're robots, following a program. If your Windows program comes up against one of Windows' many bugs, does it evolve round it? Has your PC evolved so that a 4-second press on the power button doesn't stop it? I could come up with any number of examples against this, but there's no point.

    Maybe, conceptually, in several decades' time we'll be good enough at nanotech and have sophisticated enough strategies to start making evolutionary nanobots. But it strikes me that trying to make plans for that now is like a society based on the horse-and-cart making rules for right-of-way on spaceships. It's all so far in the future that we've got no concept of how it'll affect us. Think about the Internet. You reckon there are any laws which cover it properly and fairly? Copyright, trademark and other laws were drafted in an era of physical objects. The whole area of information-based technology was maybe an interesting philosophical discussion, but there was no way anyone could make laws to cover it, 'cos there was no way of doing it then. And just like the Internet has thrown up all sorts of new ways of screwing people over, nanotech could well do the same, and we just can't think of all the ways it could be used now.

    I'd recommend reading Neal Stephenson's "The Diamond Age" for some thoughts on a society based on nanotech. It's one possible outcome - there's any number of other ways it could go.

    Grab.

  • If you insist... :)

    There once was a Nanotech fair
    For which people weren't quite prepared
    The boffins were skilled
    Such that they could build
    Machines that were almost not there!
  • Eh... guess I should stop posting drunk. No matter how many times I learn that, it never sinks in... :)
  • I guess I should stop posting sober ;-)

  • The fact that the book is 15 years old is a pretty good reason.

    Oh, I see. Yes, I suppose you're right, if it's that old I'm sure everyone on Earth has read it by now.

    </sarcasm>

    ------

  • Oh, I see. Yes, I suppose you're right, if it's that old I'm sure everyone on Earth has read it by now.

    A spoiler isn't necessary in perpetuity. I've never seen any credible guide to netiquette that required a spoiler for any revelation of any plot in any story, no matter how old.

    Do I have to put a spoiler to say Romeo and Juliet die? Or even to say that Tony dies and Maria lives?

    No. That would just be silly.

    It's been 15 years. If you didn't read it yet, nobody cares.

    --
    A spoiler isn't necessary in perpetuity. I've never seen any credible guide to netiquette that required a spoiler for any revelation of any plot in any story, no matter how old.

    Maybe not, but it should be in there. I agree with YASD.

    It's been 15 years. If you didn't read it yet, nobody cares.

    Maybe I've only been reading SF for 6 months, and haven't gotten to that book yet?

    I've always wanted to sock the guy who wrote the essay I read years ago which gave away what "Rosebud" is in Citizen Kane before I saw it--even though it was 50+ years after the movie came out.

    I always cringe when I see the Simpsons episode where, in a flashback, Homer comes out of a movie theater showing The Empire Strikes Back and spoils the Big Secret for everyone standing in line...and everyone watching the Simpsons episode too. Just think that each time that episode airs, the Big Secret is ruined for some eight-year-old who hasn't seen TESB yet, but will.

  • The real problem is with nano-terrorism

    Nano-terrorism is a real issue. But just as with bio-terrorism and nuclear terrorism, which we were (and still are) concerned about: although there is a potential for total destruction of the Earth and all surrounding planets, it won't happen.

    My thinking is this: the reason terrorists don't build nuclear weapons is the danger of either accidental detonation (ouch!) or, more likely, radiation leakage. The reason they don't build (large-scale) biological weapons is the danger of building the things. The reason they won't create nano-bot-evil-killing-machines is that they will have a (highly justified) fear of what would happen if one misfired. Anyway, wouldn't nanotech bots be able to be put out with a powerful EMP? The U.S. military is in the process of testing new EMP weaponry, and if nano-terrorism becomes a problem, it can be put to a quick end.

    In addition, terrorists are not usually the most intelligent people. Many fear technology. The ones that don't really aren't the kind of people who would be putting together nanobots. Maybe in fifty or a hundred years, but not now. If you want to fear something about nanotechnology, think about when it's used as a weapon by a hostile government. The United States, Russia or China could turn nanotechnology into something very evil if the military needed to.
  • Somebody a long time ago gave this example of a useful nanobot: you would have thousands of them on your head, one for each hair, and the nanobots would trim the hairs as they grew so that you'd never need a haircut.

    Of course, the dangers of this are obvious. You might mistakenly apply punk nanobots to your hair, and end up walking around with a blue mohawk instead of the yuppie look you had in mind. Or if you put punk nanobots on one side of your head, and hippie nanobots on the other, they might start a war. The subsequent arms race would presumably end when one side or the other (probably the punks, being more violent and less passive) invented nano-nukes and blew your head to nanosmithereens.

  • The only use for an item only a small percentage can access is power. To destroy the power association you have to trivialize it. Put it in cereal packaging material to make sure it all stays fresh forever.

    The only way to fight the power drive is to put technology into the right hands for harmless uses faster than it slips into the hands of "terrorists".

    A perfect example is lipstick containers. Those things made good material for bullets when they were needed, yet there was no slowly escalating arms race. Deal with war when you have it, but make sure you don't instigate it in the first place over a stupid fear.

  • ...before we know what the reproduction mechanisms are.

    The solutions will have to depend on the actual machines we make. We're not 100% clear on what we can make yet, so the feasibility of (and need for) any safety system can't be established.

    It might not even be a problem. The energy requirements of reproduction are likely to be high, and I imagine that it would take a lot of hard design work to make ones that can reproduce without perfectly controlled conditions and pure chemical feedstocks. Building the gray goo seed might very well be one of the hardest things to do with nanotechnology.

    I personally think that gray goo breakouts are likely to be as annoying as mould and computer viruses are today.

    "Damn it, Tim, the little buggers have gotten into the acetone feedstock again! We'll have to plas-burn the whole batch."
    "Okay, Harry, but we should get a sample. Some idiot kid probably did it on purpose, and they usually don't know enough to cover their tracks."
  • The Terminator series was a bit wiser than these apocalyptic morons.
  • My stand on the nano revolution is that we will end up having to be somehow separated from the outside environment for this to work. OK, I think that sounds too vague, so let's look at the situation: in 50 to 200 years it will be relatively easy to make a tiny automaton that could, for instance, navigate itself inside one's body and destroy parts of the brain's lobes; an internal lobotomy, so to speak. Almost anybody who wants to do this will be able to.

    The only solution? Well, this may sound ridiculous, but consider that a mid-18th-century English gentleman would probably consider ridiculous the notion of boob implants or video games. My solution is basically for every compact group (a family, a group of families, a small community united by religious belief) to live in a spacecraft that is protected by an impenetrable shield of steel (or whatever else they use; iridium?).

    Sure, you will say that's too extreme and that simply having a spaceship-like underground complex will be protection enough, but I seriously doubt it, because in a spacecraft you can't "go out for a quick stroll down the valley, 'cause the grass and trees look so inviting..."

    Please do not take it as a joke - I'm quite serious.
  • I'd say that atomic technology has been controlled rather well.

    Is that why North Korea, Iraq, Iran, Pakistan, and India have nuclear programs that are either rapidly approaching or have already attained the ability to create an atomic ICBM?

    The American government sees nuclear proliferation as one of the highest threats to national security. Your lax attitude is severely out of line with current thinking in international affairs.

  • silly responder
    knows not how to make haiku
    looks like an asshole
  • I was referring to terrorists. They don't have access to the nuclear capabilities that most other countries do.

    Who do you think sponsors and supports these terrorists?

    The relationship between "rogue" states and terrorists is very interesting: they sponsor them, yet have very little control over them. It's a highly volatile situation.

    And yes, many of these terrorists would love to see an atomic device detonated in Central Park.

  • If I recall my sci-fi correctly, the main worry about nanotech is that it's by definition accessible to all - only a few very dedicated terrorist organizations can make nuclear weapons ;), but it's impossible to keep tabs on all the carbon, hydrogen, oxygen, and nitrogen atoms. Books like Neal Stephenson's The Diamond Age talk a lot about nano, and the main thing seems to be that you need to be rich enough to be part of a large group, one that develops its own anti-nano nano. Any angry group or person, if versed enough in low-level physics, can design a malicious, self-replicating bug and set it free.

    In the meantime, I've just realized how much fun it is to say "nano" over and over again...

    nano nano nano nano nano nano nano nano nano nano nano nano nano nano nano... thanks, all done.

  • by 575 ( 195442 )
    Nanotech safety:
    No science fair for Wesley
    He'll destroy the ship
  • Don't get me wrong - I don't want my note to come off as a luddite rant, and I am very much an advocate of extremely rapid technological advancement. What I believe differentiates me from other people is that I understand that this rapid advance is going to cost lives and could very well jeopardize our society. My only concern is with people who live under the illusion that we can or will control this stuff.
  • and what about industry self-regulation?

    Isn't that the same thing that was supposed to take care of our privacy? When will they learn that there's no such thing as "industry self-regulation"?
  • What I'd like to see, in a world swarming with potential nanotech viruses, is an analogous nanotech immune system to take care of them, nanites which can be set to recognize and rip apart other nanites which meet certain parameters. Got a rogue oil-spill cleaning nanite ripping up asphalt in San Francisco? Get the standby security nanites in Oakland to kill it.

    Yes, absolutely. This is the only way to combat nanotech-gone-bad. Think about it: we have already decided that nanotech will be more powerful than anything else we have invented; how could anything but nanotech defeat it?

    Your scheme also has a nice self-regulating aspect: if one "nanophage" in a million mutates, the other 999,999 ought to eat it immediately.
    --
    Patrick Doyle

  • I'm sure you had an excellent reason for not putting spoiler warnings in that posting. I can't wait to hear what it was.

    ------
  • I believe the biggest threat to human safety, when nanotechnology becomes available, will be who is in charge of developing it. If the US government gets its hands on nanotechnology, the first use they will have for it will be as a weapon of war. Nanotechnology is also the CIA's dream come true... Think about it: now that the US can kill anyone, anywhere, and without any evidence, what will happen to Marxists, Communists and anti-US organizations? As much as I don't agree with these people, I value freedom of speech above all, and the thought that the US could kill this freedom by killing people with opposing viewpoints is a scary consideration. How about political assassinations? I have no trouble seeing a lot of politicians in the future dying suddenly of cardiac arrest, brain aneurysms and many other problems which can be caused by sheer physical manipulation - say, an attack by Utility Fog (Utility Fog is one perceived application of nanotechnology... very interesting idea).

    Nanotechnology has the potential to eliminate the risk of many physical ailments, including cancer and AIDS - all without any genetic modification or drugs. It is one of the greatest potential tools to achieve standard-of-living parity across all peoples. It must be watched, and it must not be in the hands of special interests, such as any one government. I propose that nanotechnology be put into the hands of scientists and watchdogs to ensure there is constant public disclosure of all developments, and that it is never put into the hands of groups that have exhibited aggressive tendencies, such as military and government.
  • Ever heard of the Bacteriophage?

    Or seen a picture [phage.org] of one?

    Isn't its appearance machine-like? The first time I ever heard about the phage was from a UFO freak who suggested that in reality phages are MNT devices that have been planted in our ecological system by extraterrestrial intelligence. To what purpose, would you say? I don't know.

    But I do know that the phage is a virus that attacks and feeds off bacteria, and that they are found in many flavours and designs. Its primary objective is to reassemble and spread. More about the phages [britannica.com]

    Today phages are increasingly popular in biology, as they are believed to hold the secrets of bacteria and might provide us with knowledge to battle resistant bacteria.

    In the end we might find out that they were created by somebody like us, maybe in a galaxy far far away, a long time ago.

    Anyway, here's some more pictures [phage.org]

    resource pages: www.phage.org [phage.org] and the foresight page [foresight.org]

    NanoBot

  • . . . Microsoft nanobots. They wouldn't have to be susceptible to viruses anymore. They are them. I wouldn't let those things into my body if my life depended on it. Constantly trying to connect to the Internet without my asking by plugging me into a wall outlet via a fork. Ouch.

    Ethan Jewett
    E-mail: Now what spa I mean e-mail site does Microsoft run again?

  • Well, industry self-regulation doesn't work especially well in competitive consumer enterprises (though there are examples like UL and the ASME standards for boilers, etc., which were spectacularly successful). But if you look at the kind of self-regulation among bio researchers after Asilomar in the early 70s it's been very successful -- certainly at least as successful as any government-imposed regulation could have been.
  • As the development of nanotech is quite possibly inevitable, assuming it is possible [and as it doesn't seem to violate any laws of physics, I would say it is], I am pretty sure it is only a matter of time. The notion of a central state having control of it is truly terrifying, but that may well be the way it happens, due to the massive R&D and extremely precise [and new] manufacturing processes that will be needed to make NT a reality [although you really need only one functional programmable nano-assembly system to start making others].

    NT will be so unbelievably powerful, I hope and pray [figuratively speaking] that it will come to be available to private citizens as part of their everyday lives. [I mean, you think Internet privacy is an issue today; imagine self-replicating surveillance agents across the landscape, invisibly within your home, your body, even perhaps your brain.] If only one or a few interests control NT, those without it will be powerless to oppose their power. I prefer the notion of an organic approach wherein everybody has their own personal NT systems to protect them from other malicious systems (which of course will sometimes win, as measures and counter-measures evolve together [life has never been without risk]). I doubt any privacy or such freedoms will survive otherwise. Imagine getting a McAfee vaccination from your doctor!
  • by Marce1 ( 201846 )
    Nutters willing to go outside guidelines:

    1. Madmen (à la "Fools! I will destroy you all!")
    2. Madwomen (see above)
    3. Ambitious corporate types ("Embrace and extend")
    4. World powers
    5. Teenagers told not to by their parents.

    Get rid of the first 4, use contraceptives, or try reverse psychology, and we'll be OK.
    I know where I'm starting..

    NB - only number 5 is the real joke. Even then, be careful.

  • And I have a bridge at a very reasonable price..I think you may be interested..
  • stuffy! [large huff] Harumph! [/large huff]
  • Although we do not have any truly intelligent robots as of yet, the laws are already broken. Weapons anyone?
  • Last I heard the nanotech genie was still in the bottle. Nanotech DOES NOT EXIST. The technology to create it is expensive and highly advanced. You think that will be hard to regulate?
  • If the danger is as great as it is being made out to be, then shouldn't a research facility be built on the moon, or better yet at one of the Lagrange Points, where this technology can be studied in relative safety? After all, it's usually in the early stages of experimentation where all of the "oops--*KABOOM!!*" type mistakes are made.
  • Almost true. Though it's much less the nature of humanity than the nature of things.
    Things don't change to accommodate the greater good. A tree remains a tree, and it does far more than is necessary. It cares not if it cuts off the sunlight to other trees; it comes first.
    Dirt is dirt; it does not yield to more useful materials. It may be made into other things by plants, etc., but in the end it all returns to dirt.

    Human nature is not really to be lax or selfish but to be social and cooperative. We loathe those who act out of pure selfishness or put others in harm's way for themselves.
    But however contrary to human nature it may be, and as much as we aspire to great things, we do not overcome the simple fact that we operate as a singular self, and must defend self and function as self.
    It is the nature of this singularity that produces a lack of responsibility. No matter how much we are a group, we will never overcome our singularity and will always do what is best for self.

    Anyone who thinks we can be otherwise without being omnipotent is fooling themselves. Anyone who thinks an omnipotent mankind could use nanites is being silly.
  • This reminds me of Jurassic Park.

    Remember, the dinosaurs were dependent on lysine, so they couldn't leave the park. However, they just found some chicken in the surrounding area and had a lysine feast!

    It almost seems that crypto is the only way to ensure they don't go haywire; you could have nanotech antibodies go around checking the MD5 sum of some characteristic of the nanobots.

    It's an interesting idea, and certainly one that must be addressed.
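
    In that spirit, here's a toy sketch of the audit step in Python (the blueprint bytes and the KNOWN_GOOD digest are made up for illustration; a real scheme would hash some physical signature of the device):

        import hashlib

        # Digest of the blueprint the designer actually shipped (hypothetical bytes).
        KNOWN_GOOD = hashlib.md5(b"BUILD ROTOR; BOND C-C; REPLICATE").hexdigest()

        def audit(reported_blueprint: bytes) -> bool:
            """True if a device's reported blueprint matches the shipped one."""
            return hashlib.md5(reported_blueprint).hexdigest() == KNOWN_GOOD

        print(audit(b"BUILD ROTOR; BOND C-C; REPLICATE"))  # True: faithful copy
        print(audit(b"BUILD ROTOR; BOND C-N; REPLICATE"))  # False: mutated copy

    Note that a bare hash only catches accidental corruption; a bot that has gone bad could simply lie about its blueprint, so you'd also want signatures and some trusted way to read the device's actual state.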
  • by skullY ( 23384 ) on Saturday June 17, 2000 @09:36AM (#996154) Homepage
    To avoid having runaway nanotechs, you make them highly oxidizable, so that exposure to O2 would rust them quickly. Then they can only work in inert gases.

    Sure, it limits a lot of the practical uses of nanotechs, but since this is a new technology, proceed carefully. Give them 20 years of testing and using nanotechs in inert gas before you think about deploying them in environments containing oxygen; that way you have real-world tests of how well nanotechs work and how likely they are to run away.

    It seems like a good compromise to me.....

  • by Syberghost ( 10557 ) <.syberghost. .at. .syberghost.com.> on Saturday June 17, 2000 @09:20AM (#996155)
    Remember that in Asimov's universe, a bug in the implementation of a single robot's Laws resulted in him slowly rendering the entire Earth unfit for human habitation because he thought it was "for their own good."

    The fact that he was right in the story matters not a whit in this cautionary tale.

    The same care should be exercised in working with nanotechnology as with any other potentially dangerous technology.

    And, as with every other potentially dangerous technology, that shouldn't prevent us from working with it.

    --
  • I think they seek to prevent something similar to what happened in Ben Bova's book "Moonrise"

    Fairly early on in the book a leak occurs, and some carbon-decomposing nanites break loose and proceed to decompose spacesuits and the people themselves. Had there been a kill switch or external dependency, they might not have gotten that far, which I think is a valid concern.

    (unfortunately, earth goes mass neo-luddite, and, well... go read it)


  • Nanotech is an incredible thing. If you read The Age of Spiritual Machines, by Kurzweil, you'll get an awesome insight into how nano will change our world.

    The real problem is with nano-terrorism. Think about it: Will governments *not* make self-replicating (non-restricted-growth) nano-bots, when terrorists will likely be able to for near no-cost?

    And... if you trust governments to handle this tech, you should trust even more the scientists researching it in the more open public sector.

    With nanotech, the human experience of work, health, death, knowledge... can all change for the better (and not just from Kurzweil's influence).

    With terrorists and a trend of open technology, I can see that nanotech will never be able to be controlled the way the US has done such a secure job with, say, drugs. Terrorists will have access to nanotech, and the question is: will they use it, and will the masses have any defence from it?

    Now, back to science. While it will be important to wire in some of the "precautions" from the published guidelines, they are mostly obvious, and most likely useless for any real university lab with actual scientists (i.e., not just a bunch of techies reading Slashdot...)


    All that said - even with infinite computation power, the largest problem in creating useful nanobots will be programming them to do anything *useful*.

  • And, as with every other potentially dangerous technology, that shouldn't prevent us from working with it.

    Well, Bill Joy [wired.com] seems to disagree with you, and I would tend to agree with him.

  • Assuming we do indeed get nanobots, they must have some way to kill them that cannot be altered without their destruction. If a nanobot ever gains sentience, that could be very bad unless it's programmed not to harm people. The artificial "vitamins" are a good idea, provided that you make it impossible for the nanobots to reproduce them or substitute something else.

    Anyone remember that one Outer Limits episode where the guy gets nanobots to kill his cancer, and they end up evolving him? The scientist tries to kill them with a kill switch, but they modified themselves so it didn't work. Then he tried jamming their radio signals so they couldn't communicate; they switched to chemical communication in the body. A general rule of thumb: "Never give anything a brain that you wouldn't want an enemy to have." I figure if we do create robots that can think independently, we might have a little problem, as they could become our masters.

    Or, another example is that Star Trek: Voyager episode with the robots designed to fight the other robots. They killed their masters out of self-preservation, and when they tried to replicate themselves they failed, because everything was the same except for the power supplies, which were unique to each robot.
  • The next fifty years are going to be a wild ride. Be prepared for technologies that defy regulation. You're already living through the first such wave: the Internet.

    The idea that nanotech can be regulated is quaint, but if we can't regulate the proliferation of atomic technology, what makes us think we can control nanotech?

    Biotech, long the best-controlled "viral" (no pun intended) technology, is about to change drastically, as powerful tools and techniques manifest themselves in the private sphere before the public sphere. Celera beating the HGP to the punch is but one example.

    Can't put the genie back in the bottle, folks. Deal with it.

  • I can remember hearing about a story from some Extropian I'm related to, where the Earth is transformed into Betty Crocker pudding by deranged nanobots, while the Extropians are having a party.

    Anyone remember this one?
  • Secondly, that is something to be wary of (the neo-luddite part, not the carbon-monster nanobugs). With every new technology, there is new opposition. Look at all the steam coming from the people against genetic engineering... Personally, I think that the next evolution in the human race is technological telepathy, so I'm all for this stuff. However, as people use nanotechnology for biological purposes, the "luddites," if you will, are going to start screaming. These issues hopefully will be addressed before use, by education and perhaps mass hypnosis, if necessary :)

    All that I have to give, I've given
    All that I have to do, I've done
    Now I have to be
  • I read their rules. Too complicated. This is better:

    The goo shall not be gray. "Goo of colour" is acceptable. Red, orange, purple... but not gray.

    No gray goo. Problem solved.

  • Here we see the real Slashdot Effect(tm). People making comments without actually reading articles... Did you notice that the words "Bill Joy" were a link to an essay?

    Thanks for coming out.

  • Even if Joy were still a relevant figure in the computer industry nowadays, he still wouldn't be qualified to talk about "the dangers of nanotech" and what we should do about them. His ridiculous article (published on Wired, even!) made that painfully evident.

    Richard Feynman once said something to the effect that a scientist is usually just as wrong on non-scientific matters as a non-scientist. The same applies here. The idea that we should mind Bill Joy's crazed rant on nanotech (as opposed to Joe Q. Public's crazed rant on nanotech) just because he's Bill Joy (as opposed to being Joe Q. Public) is a logical fallacy: a clear case of the argument from authority gone haywire.

    Ah well.

  • Of course, the link was to that ridiculous article in Wired. You just don't have anything intelligent to say today, do you?
  • by Anonymous Coward
    All the more reason to push towards the Singularity [aleph.se] before a "Grey Goo" nano-disaster... or a nuclear one... or a comet-strike... etc. Safeguarding the only known bastion of sapience in the Universe is of prime importance.
  • "Ottos Mops" is a poem by Ernst Jandl, a rather famous german (language, not nation) author, who died recently.
  • What nanotech does is blur the distinction between software and "real" things.

    Nanotech monsters of various descriptions will be produced by script kiddies as soon as the technology advances far enough.

    Foresight's document is about as useful as a reminder that it's not a Good Thing to create "I LOVE YOU" thingies or to orchestrate denial-of-service attacks.

  • After reading this paper, I realized how lucky the world is that there are people around right now who are worrying about this and thinking about the ethical/biological/political/insert-your-favorite-'al'-word-here implications. One of the best things about this period of history is that so much has happened so fast that a few visionaries are thinking about where we could be 10 or 20 years down the road and doing something about it.

    If the Foresight Institute, stuffy and weird as they are, weren't at least thinking about where we could be a few years down the road and bringing it into the light, we'd end up being stuck with a nanotech world run by corporations as soon as their researchers could scale it into production.

    As alarmist, or strange, or wanky, as you may think this group is, we'll probably all be appreciative a few years from now that someone thought to come up with a code of ethics for nanotech.

  • Back at the dawn of modern sci-fi, Isaac Asimov wrote several stories in which he mentions the Three Laws of Robotics. These were rules hardwired deep into every subsystem of "intelligent" robots that govern their actions even in the most dire circumstances. This set of guidelines looks like an attempt to implement the same concept for a different kind of technology, one that, once built, won't be easily controlled or halted (the way a pulled key can halt a car).

    BTW: Those infamous laws ran something like this:
    1. Do not harm a human, or through inaction allow one to be harmed.
    2. Obey orders from a human.
    3. Prevent your own destruction.

    Nanorobots *need* this kind of guideline more than other stuff, because once they are deployed there really is no way to disable them, except maybe by using an EMP blast. That, however, is not the Matrix connection. The Matrix comes into the picture when you think of why the machines would keep humans around for a power supply when they have fusion. Could it be that somewhere deep inside they can't bring themselves to render the species extinct?
  • IMHO there is no way to prevent evolution, barring extinction. Retard it, yes, but life will find a way to adapt. More than likely, nanobots would find a way around any preventative measures implemented. It might take a while (in nano-time) but eventually, Darwinism wins.
  • I can see the VBScript now:

        Sub ILOVEYOU()
            WScript.Echo "Hello World, I Love You!!!"
            Dim bot
            Set bot = New LittleNanoWarrior
            bot.Locate Human(atRandom)
            bot.EnterBody Human
            bot.MoveTo Human.Brain
            bot.NeuronClipper.Activate
            Do
                bot.ClipNeurons
            Loop Until Human Is Nothing
            WScript.Echo "Goodbye World."
            ILOVEYOU()
        End Sub

  • Nanotechnology is also the CIA's dream come true... Think about it - now that the US can kill anyone, anywhere and without any evidence, what will happen

    What makes you think they can't do this already, w/o the aid of nanotechnology?
    ---
  • But a one-time error in interpreting the blueprint probably won't carry over into the next generation. One misbehaving nanobot isn't a big problem if it can't create other misbehaving bots.
  • I've read a number of posts that equate mutations with evolution. At this point in time, nanobots are intended to be very basic creatures; their effectiveness will be in mass numbers, not brainpower or AI. Looking into the future, I don't think you could make a nanometer-sized CPU capable of sufficient AI, given the thermodynamics required for that kind of computation at such a small scale.

    However, the metaphor of DNA mutation = evolution probably won't apply for quite some time, based on this thought: change a random bit in a Linux kernel binary. Do you think flipping one bit would make Linux run better, or crash it?

    IMHO, for a good long time, nanobots will be nothing but really tiny embedded machines that would simply fail if there was a slight modification to their code. And as a corollary, given the number of atoms in a nanobot, I doubt they will ever be capable of computational AI, even when it is available on the desktop.
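
    A toy version of that thought experiment, mutating a small Python program instead of a kernel binary (the SOURCE snippet is illustrative):

        import ast, random

        SOURCE = "def greet(name):\n    return 'hello ' + name\n"

        def flip_random_bit(text):
            """Model one replication error: flip a single bit of the 'genome'."""
            data = bytearray(text, "ascii")
            i = random.randrange(len(data) * 8)
            data[i // 8] ^= 1 << (i % 8)
            return data.decode("ascii", errors="replace")

        trials, still_valid = 10000, 0
        for _ in range(trials):
            try:
                ast.parse(flip_random_bit(SOURCE))
                still_valid += 1
            except (SyntaxError, ValueError):
                pass
        print("%d/%d single-bit mutants still parse" % (still_valid, trials))

    The overwhelming majority of single-bit mutants don't even parse, let alone run better, which is exactly the point about fragile, non-evolvable code.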

    my $0.03.


    ---
  • I express-ordered this training video that will guide us on the issue of volcano safety: Duck and cover. :)
  • Just curious: did it mention what the carbon was being decomposed into? Also, if anyone has played Alpha Centauri, there is some nanotechnology shown off where it decomposes people and buildings and creates new weapons and such for the military. It was somewhat cool-looking to see dead bodies go up and tanks grow out.
  • Sounds like a good starting plan to me.
  • Uh, how do you figure? Please let me know about your witty IP-address tracing scheme, so I can come back and tell you in public how full of shit you are.
  • Damn, all this talk about nanobots... seems like they will one day be pretty badass mofos. Too bad I'm only just a human; I think it would be pretty cool to be some advanced sentient technological machine. It would be pretty cool if in the future we could transfer our minds into machines; that way we wouldn't have to worry so much about them taking over, as long as we had enough beings with some form of humanity in charge of the whole operation.
  • Imagine trying to figure out how to design a good firewall in the days before the internet became popular.

    Security measures must be tested. Who knows what works well? A few weak molecular bonds which break when exposed to sunlight? Something that crumbles when exposed to oxygen? Or will they simply be built to only consume a specific sort of 'food', normally found only in laboratories?

    Having nanites more powerful than nature's bacteria would be an interesting problem to have. But bacteria have been evolving for a long, long time. I think it'll be a while before we need to start building antibodies.

  • The agro genmodders promised that their stuff would never get into the wild, either. Ooopsies! And then they came up with the brill idea of a killer gene to prevent self-replication, er, well, that was so the farmers couldn't use the seeds without paying first. Hmmm. What happens when *that* one gets into the wrong gene code? And what happens when the standby security nanopups run wild? Let's just hope they've been paper-trained. We'll make 'em all snap to at the first crinkle of a restraining order.
  • Safeguards are meaningless if the industry is to regulate itself. I know there is a certain faction of people who believe that big business (or small business, for that matter) can regulate itself, but the simple fact of the matter is that profit always wins out over self-control. If self-regulation were actually effective, industry would fight it tooth and nail. It is embraced by industry precisely because it's a joke designed to lull the public -- through the medium of naive, middle-class libertarians -- into a false sense of security.

    We don't attempt to prevent murder through a system of self-regulation, we make it illegal. Knowing that it is always easier for an institution to act unethically than for an individual, why place fewer restrictions on institutions?

  • Actually, I'd say that atomic technology has been controlled rather well. If you still disagree, then why haven't there been any nuclear terrorist attacks?
  • What happens if one of the nano dudes isn't built right, or some rogue developer makes a polymorphic nanobot that can build itself differently and adapt to its environment? ...Reminds me of humankind.

    We adapted by destroying other species.... We expanded into the territory of other species and let that species attack us, so we killed it.

    I don't want these nanodudes in my backyard, swimming in my pool and drilling into my dog's brain looking for intelligent life. Sorry, nanobots aren't for me until I know they are safer.

    x-empt
    Free Porn! [ispep.cx] or Laugh [ispep.cx]
  • ". . .Any replication information should be error free. . ." hmmmmmmmm.

    Every living organism is dependent on resources that can only be found in a specific region. Organisms living today are dependent on resources, such as oxygen, that weren't available in Earth's past. I hope no nanotech researcher honestly believes that replication information can be transmitted without error. We have to assume that there will be error; with that assumption, any self-replication will result in evolution. If errors are one in a million, then there will be a thousand mutations every time a billion nanobots are produced. If all but one in a million are deleterious, it will still take only a trillion nanobots before one has a happy (for it) accident.
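
    Spelling that arithmetic out (the rates are the poster's assumptions, not measured values):

        population = 1e9     # nanobots produced
        error_rate = 1e-6    # copy errors per replication (assumed)
        beneficial = 1e-6    # fraction of errors that help the mutant (assumed)

        print(population * error_rate)        # 1000.0 mutants per billion bots
        print(1e12 * error_rate * beneficial) # ~1.0 "happy accident" per trillion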

    I never agreed with my biology texts when they made the arbitrary decision to consider only cellular organisms as living. When we create a self-replicating system, the best model we have to predict how it will behave is organic life. And the life form most similar to the proposed nanobots is the virus. Nanobots and viruses will share a number of common attributes:
    1. Biologists don't consider them to be living.
    2. They are precocious molecules.
    3. They can reproduce.

    Well, I still think we should take the risks, but this document is a little too optimistic in my opinion. We need stiffer controls, and I DON'T like the idea of purposefully making nanobots evolve, even in "controlled" conditions.
  • Remember, the dinosaurs were dependent on lysine, so they couldn't leave the park. However, they just found some chicken in the surrounding area and had a lysine feast!

    One solution would be if the nanoassemblers were dependent on an artificial element, one that does not exist in nature. Technetium or something like that.

    Figure out how many nanobots you need, figure out how much element will be required, and pour that much into the vat.

    Also, if the element has a short half-life, you can limit the lifespan of the nanobots.

    The trick would be keeping the facilities where the element is produced free of nanobots.
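
    The lifespan limit follows from ordinary exponential decay. A quick sketch, with a made-up "vitamin" isotope half-life:

        def fraction_remaining(days, half_life_days):
            """Fraction of the original isotope supply left after `days`."""
            return 0.5 ** (days / half_life_days)

        # Assume, hypothetically, a 3-day half-life for the artificial element.
        for d in (0, 3, 9, 30):
            print(d, "days:", "%.3f%%" % (100 * fraction_remaining(d, 3.0)))

    After ten half-lives, less than 0.1% of the supply remains, so an escaped population dependent on it starves on roughly that timescale.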

  • Here's the core set of recommendations:
    Specific Design Guidelines
    1. Any self-replicating device which has sufficient onboard information to describe its own manufacture should encrypt it such that any replication error will randomize its blueprint.
    2. Encrypted MNT device instruction sets should be utilized to discourage irresponsible proliferation and piracy.
    3. Mutation (autonomous and otherwise) outside of sealed laboratory conditions should be discouraged.
    4. Replication systems should generate audit trails.
    5. MNT device designs should incorporate provisions for built-in safety mechanisms, such as: 1) absolute dependence on a single artificial fuel source or artificial "vitamins" that don't exist in any natural environment; 2) making devices that are dependent on broadcast transmissions for replication or in some cases operation; 3) routing control signal paths throughout a device, so that subassemblies do not function independently; 4) programming termination dates into devices; and 5) other innovations in laboratory or device safety technology developed specifically to address the potential dangers of MNT.
    6. MNT developers should adopt systematic security measures to avoid unplanned distribution of their designs and technical capabilities.

    Most of these look like reasonable ideas, but what the heck is an "Encrypted instruction set"? They should be clear whether they're talking about code signing (i.e., any program must be signed with a private key kept by the original designer) or mere obfuscation.

    I particularly like #1--- it'll be an interesting research problem to come up with a "genetic code" specifically designed to make evolution hard. I imagine the hope here is to ensure a sort of "fail-stop" property for the devices, similar to catching programs that have gone bad through the virtual memory system.
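
    One way to read guideline #1 is as an all-or-nothing encoding of the blueprint. Here's a minimal sketch along the lines of Rivest's package transform; this construction is my own illustration, not anything the guidelines specify. Because the decoding key is derived from a digest of the whole package, one flipped bit anywhere garbles the entire decoded blueprint:

        import hashlib, os

        def _stream(key, n):
            """Expand a key into n pseudorandom bytes (counter-mode SHA-256)."""
            out, ctr = b"", 0
            while len(out) < n:
                out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
                ctr += 1
            return out[:n]

        def package(blueprint):
            """Encode so that any single-bit error randomizes the whole decoding."""
            key = os.urandom(32)
            body = bytes(a ^ b for a, b in zip(blueprint, _stream(key, len(blueprint))))
            tail = bytes(a ^ b for a, b in zip(key, hashlib.sha256(body).digest()))
            return body + tail

        def unpackage(pkg):
            body, tail = pkg[:-32], pkg[-32:]
            key = bytes(a ^ b for a, b in zip(tail, hashlib.sha256(body).digest()))
            return bytes(a ^ b for a, b in zip(body, _stream(key, len(body))))

        pkg = bytearray(package(b"BUILD ROTOR; BOND C-C; REPLICATE"))
        print(unpackage(bytes(pkg)))  # intact package decodes cleanly
        pkg[5] ^= 1                   # one replication error, anywhere...
        print(unpackage(bytes(pkg)))  # ...and the decoding is uniform noise

    A copy is either perfect or useless, which gives the "fail-stop" property mentioned above: there are no near-miss mutants for selection to act on.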

  • You have waaaaaay too much time on your hands.
  • Please do not take it as a joke - I'm quite serious.

    the last line in every love letter ever written by an insecure teenager. brings back memories.

    ouch.
  • There is a real danger in letting your desire for perfection interfere with taking reasonable safety precautions. Drug safety in the US is imperfect, but a whole lot better than in the "good old days" when patent medicines were severely toxic, drug toxicity reactions not publicized, etc.

    Similar efforts to provide guidelines for working with Recombinant DNA, with hazardous materials, etc. have made significant improvements. The vast majority of the players want to behave responsibly and will follow reasonable guidelines. The fact that there are a few percent who will ignore the safety procedures does not make safety procedures irrelevant. They reduce the level of irresponsible behavior, and we can hope that the remaining hostile behavior is infrequent enough that we can deal with the resulting problems.

    And you will find that most of the government players will also try to follow good safe practices, though there is always reason to doubt the wisdom of the TLAs.
  • "Bastion of sapience"? Where? Certainly nowhere on this planet.
  • by tilly ( 7530 ) on Saturday June 17, 2000 @10:14AM (#996194)
    People theoretically see the need for lots of nice protections. Then they go ahead and cut corners unless someone has been burned and the memory is fresh.

    I cannot think of any area of technology from automobile design to nuclear power plants to office suites where this principle of human nature has not been operational. I can personally list examples from NASA to genetics research to the SNMP spec. (It was nicknamed Security - Not My Problem for a reason!)

    IMNSHO anyone who thinks that nano has the potential to be any different is just kidding themselves about human nature...

    Cheers,
    Ben
  • Nanotech will never gain popular acceptance, in the same way that nuclear fission power plants are shunned by the general population despite their stellar safety record. This is why research funding for fusion power technologies has dried up, and I predict that nanotech will suffer a similar fate.

    (hint to moderators +4 Insightful)
  • Spaceships are made of aluminum or titanium and such. Steel is too heavy (density 7.85 g/cm3), and iridium has a higher density than gold, IIRC about 22 g/cm3. That means a spacecraft made of iridium would weigh several times what it would weigh if it were made from aluminum. This would increase launch costs enormously, and you would need a Saturn V just to launch a small commsat.
  • The boy Wesley can
    Destroy the mighty Borg fleet
    But cannot get girls.

    :)

  • You're absolutely right about biotech; it's a science that's progressing so rapidly that soon we'll see some advancements that will really need to be examined and tested thoroughly before being released publicly.

    Certainly, some of them could be used for the better, and some would argue that releasing them sooner rather than later would be ideal. And it would, but the fact is that there are too many unknowns in this field. The technology is progressing faster than even scientists can deal with. This, I believe, is a result of technologisation. Let me explain: in the early days of science, scientists had a lot more control over their research than we do today. Mass production, computerization and high technology have enabled lightning-fast research and development, but they've also sped it up to a point where it's hard to keep track of.

    Therein lies the danger. We need to put procedures in place for even stricter testing and analysis of new technologies before they hit the marketplace, or we may find ourselves in grave danger of catastrophe. A man I know well, Kevin Warwick, predicts that technology will be the end of humanity. This is extreme, but not laughable. We should be wary of new technology and use it wisely.

  • Actually it points out a different problem: the "leak" was initiated on purpose, by a person who wanted the nanites to be leaked and had designed them specifically to cause the damage they caused.

    Thus it wasn't the nanites themselves which were the problem, but the madman who used them as assassins.

    Also they did have a kill switch (Hard UV) the people affected just didn't have access to the "kill switch" when they were being eaten. Later in the book they do go out and clean up the nano-mess and destroy the responsible nanites.
  • It looks like everyone has already brought up the point that there is danger in putting a "self-destruct" mechanism in a nanite. With millions or billions of nanites, even if the odds of one of them surviving that self-destruction are one in a million, those odds are too high. And if that nanite is designed to construct other nanites (or, worst case, copies of itself), then you have a problem on your hands.

    If nanotechnology ever reaches the total control of matter, self-replicating machine, Diamond Age "Seed" level (I don't have enough information to argue either way, but it seems to me that it'd be easier to create macroscopic Von Neumann machines than microscopic ones, and we haven't even done that yet) we're going to need more protection than a self destruct mechanism.

    What I'd like to see, in a world swarming with potential nanotech viruses, is an analogous nanotech immune system to take care of them, nanites which can be set to recognize and rip apart other nanites which meet certain parameters. Got a rogue oil-spill cleaning nanite ripping up asphalt in San Francisco? Get the standby security nanites in Oakland to kill it.

    There was an interview with a somewhat apocalyptic tech giant (a veep at Sun? I forget) who believed that the ever increasing technological power available to humanity (nanotech, biotech, and AI being three examples I remember) would cause the world to be ripped apart by terrorism in the coming century. He likened it to an airplane in which every passenger had a "Crash" button in front of their seat, and only one psycho was necessary to bring everyone down with him.

    I don't think it will be that way. With nanotechnology specifically, if our available defenses are kept up to the level that our potential offenses would require, then having a small set of nanites go rogue wouldn't be a concern; they would be overwhelmed by their surroundings. Going back to that analogy, if everybody had a "Crash" button in front of their airplane seat, but the plane was guaranteed to survive unless 50% of the passengers voted to crash, that would be the safest flight in history.
  • The vitamin-dependent nanobot is a good idea, but what about the flipside? What if, for example, Kool-Aid Corp. decides to put nanobots in their packages? The bots are dependent on Kool-Aid, and if they don't get any, the person gets headaches/feels bad/whatever until they drink more Kool-Aid. I'm glad that the Foresight Institute is looking ahead into regulating these things, because being addicted to something truly horrible, like health food, would be very, very bad =).
    -Antipop
