Foresight Taking Advanced Nano Discussion to DC
An anonymous reader writes "Looks like the Foresight Institute, the nanotechnology public policy think tank founded way back in 1986, is heading to Washington DC this October with its new event, the 1st Conference on Advanced Nanotechnology. The government's original motivation for funding nanotechnology was in large part due to Foresight's leading educational role and vision for molecular manufacturing. That vision, led by co-founder K. Eric Drexler, has now become extremely politicized, as Ed Regis discusses in this month's Wired Magazine feature on Drexler."
I actually bought Drexler's book ... (Score:2, Interesting)
His view of the atom is best described as magic legos, and his grasp of s
Re:I actually bought Drexler's book ... (Score:3, Insightful)
At minimum, you can postulate that artificial nanoconstructs can do anything biological constructs can - because biology nicely demonstrates that it's possible to build devices with the performance characteristics of bacteria, and at worst we can tweak nature's existing designs.
Building-eating viruses? We already have dry rot and
Re:I actually bought Drexler's book ... (Score:1)
CO2 plus hydrocarbon atmosphere, I know...
Re:I actually bought Drexler's book ... (Score:4, Interesting)
It doesn't have to be a macrostructure like a termite - single-celled organisms are capable of eating things too, just less efficiently (usually).
The barrier to digestion speed is available energy. An organism optimized solely for digesting organics and producing copies of itself could do so surprisingly quickly. Not in an eyeblink, but fast enough to be a significant problem. The thing that prevents this from happening in the natural world (at least more quickly than it currently does) is predation and competition. If you engineer something that's far enough off the beaten path that it has no natural predators and attacks a niche that there are no efficient competitors for, it could conceivably grow unchecked (until we decide to do something about it; the usual array of chemical agents should kill engineered bacteria-like nanomachines as easily as natural bacteria).
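A quick back-of-envelope sketch makes the point about replication speed. All the numbers below are illustrative assumptions (a 1-gram starter culture, an E. coli-like 20-minute doubling time, roughly 10^12 kg of reachable biomass), not measurements:

```python
import math

def time_to_consume(initial_mass_kg: float, target_mass_kg: float,
                    doubling_time_hours: float) -> float:
    """Hours of uninterrupted exponential doubling needed to grow
    initial_mass_kg into target_mass_kg."""
    doublings = math.log2(target_mass_kg / initial_mass_kg)
    return doublings * doubling_time_hours

# 1 g starter culture, 20-minute doubling time, ~10^12 kg of biomass.
hours = time_to_consume(1e-3, 1e12, doubling_time_hours=1 / 3)
print(f"~{math.log2(1e12 / 1e-3):.0f} doublings, "
      f"{hours:.0f} hours (~{hours / 24:.1f} days)")
# ~50 doublings, roughly 17 hours in this idealized model -- which is why
# the real-world limits (energy supply, transport, predation, competition)
# matter far more than the raw replication arithmetic.
```

In other words, the ecology and the energy budget are the bottleneck, not the math.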
If you allow macrostructures of arbitrarily intelligent design (i.e., someone's engineered a plague, and it isn't evolving naturally), you can postulate that your layer of lichen- or mould-like digestive goo would form a tissue structure that allows rapid transfer of energy within itself. That would allow digestion in any given region to proceed far more quickly than it would otherwise, at the expense of digestion elsewhere. Doesn't provide much consolation if the thing being eaten is something important.
Disassembly of inorganic materials is harder for a nanobot based on carbon biology, so I'm doubtful it would happen any time soon. If inorganic nanobots were created, they'd be at a competitive disadvantage, because their chemistry is lower-energy than that of carbon nanobots, so I think "green goo" is a more plausible scenario on earth than "grey goo". Unless you invoke the "complex macrostructures" argument and let the grey goo build power plants, but that's a far more difficult type of goo to design.
And in genetic engineering, yeah, we can make things that are competitive, since all the challenging design problems have been solved. But we're nowhere near the capability of making something so competitive it would reduce the surface of the earth to an ooze made up of just that organism.
We're not too far off. How hard would it be to tweak lichen or mould strains to be more aggressive, and secrete toxins that would make it harder for other life to survive in the same area? Nobody (reasonable) is postulating goo-grade nanotech tomorrow. "Within 50 years" is the most optimistic estimate I'd call plausible.
Self-replicating Nanobots.. (Score:2)
"You and people around you have scared our children," Smalley fairly shouted in print. "I don't expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams."
Isn't it called DNA?
Re:Self-replicating Nanobots.. (Score:4, Funny)
Does it have to be in that order? Can't we just get it to eat Will Smith?
Grey goo.. (Score:2)
I mean, another civilisation somewhere would have made it, and it would have eaten.. well.. everything..
Re:Grey goo.. (Score:3, Insightful)
It's been pointed out that it already is.
It's called the biosphere.
Scare our children? (Score:1)
What Smalley doesn't seem to get (Score:4, Interesting)
Let's try a comparison to explain the difference between Smalley and Drexler in a way that should be easy for everyone here: "math formulas vs. computers".
Math formulas correspond to the Smalley approach (nanotechnology):
A formula does only one thing, or at most a few things when constrained in different ways. If you want to do something else, you have to construct a new formula or combination of formulas to reach your goal. This approach degrades "nanotechnology" into just another buzzword for advanced chemistry, as it uses exactly the same old methodology and is limited to the natural properties of atoms and molecules in groups.
Computers correspond to the Drexler approach (zettatechnology):
A computer is a tool for solving not just one or a few problems, but anything that can be represented digitally. All of this is built from a few fairly simple components that form the core of a computer: logic gates. This is analogous to the basis of Drexler's idea, which is to find ways to manipulate individual atoms (as IBM did when they crafted their logo out of xenon atoms [ibm.com]). The more ways you find to manipulate individual atoms, and the more combinations of such tools you build, the more powerful zettatechnology becomes.
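To make the logic-gate half of the analogy concrete, here is a minimal Python sketch (purely illustrative; the mapping to atom-manipulation tools is the analogy's, not a physical claim) showing how a single simple primitive composes into every other Boolean function:

```python
# A single primitive gate...
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# ...composes into every other Boolean gate, the way the analogy suggests
# a small set of atom-manipulation primitives could compose into arbitrary
# structures.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustively check the reconstructions against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print("AND, OR, and XOR all rebuilt from NAND alone.")
```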
The analogy extends to other characteristics as well:
Computers took far more technology to create than math formulas, but once achieved they showed themselves to be immensely more powerful - also for computing math formulas. This will not seem strange at all to the average
The big deal with both computers and zettatechnology is their general nature. Zettatechnology doesn't defy the laws of chemistry or physics, it's a general tool which enriches the way one can use said laws.
There are other likely similarities. The first computers were enormous, and the first "zettamachine" is likely to be enormous (compared to later generations) as well. In computers, anything that can be represented digitally is feasible; there are no limits on what is possible except lack of time. In zettamachines, anything possible within the laws of nature will be possible, again limited only by time. And just as computers advanced the science of mathematics, zettamachines will advance the science of chemistry, by virtue of being generalized, multi-function tools.
In XX years' time, Drexler (and Feynman) will be as historic for their ideas on nano/zettatechnology as Turing and Babbage are for modern computers. His direction of attack on the overall problem is (as I hope I've made abundantly clear) basically the same as theirs. No amount of attack from Smalley on the specifics of one proposed method of atom manipulation will change that (it will only ensure that the USA loses to China, or possibly Europe, in this technology, since Smalley only influences US nanotechnology aims).
There is one other big difference between the two:
Drexler advocates a high le
Re:Smalley Drexler PLEASE MOD UP PARENT (Score:3, Insightful)
The fact that Eric Drexler was one of the first (after Feynman) to write about the dream of nanotechnology is widely recognized. But as the parent post notes, Drexler ignores chemistry. His critics are all experimental chemists. Not exactly a group to be ignored.
All accomplishment starts with a dream. But the path from dream to reality requires something more than dreaming. Here Drexler falls down.