Science

Computing With Molecules 50

ruppel writes: "Scientific American has an interesting article on molecular computing. The article is quite extensive, covering several technological issues and visions for the future. It also lists references for further reading and some interesting links. " The article is a great technical overview of what's actually going in nano/molecular/x computing.
This discussion has been archived. No new comments can be posted.

Computing with Molecules

Comments Filter:
  • this article seems to imply that the primary issues confronted by cognitive researchers (including AI folks) are size/speed related. i'm fairly certain that the real problems stem from Gödel's incompleteness theorem and the limitations implied in any given formal system. And since all algorithms are deterministic formalisms, they cannot ever know their own meaning (this is the syntax vs. semantics debate). also, there is what philosophers of mind call 'the problem of other minds': that is, when do we "believe" that an artificial cognitive agent is telling us what it *really* desires? How do we safely attribute beliefs to it? how do we know it's not simply a super fast Turing machine, no more intelligent than any other Turing machine, including my PC?
  • by Anonymous Coward

    Don't get me wrong -- I've got nothing against faster processors. But my computer is already fast enough to play movies and do 99% of anything that I really want it to. If I'm not doing numerical integration, do I really need it to get any faster?

    (If you can't tell, I'm one of those people who thinks Quake 3 is not so great a step above Quake 2.)

  • ...from the May issue of MIT's Technology Review:
    http://www.techreview.com/articles/may00/rotman.htm [techreview.com]

    Like their competitors at Yale and Rice, a West Coast collaboration of chemists and computer scientists from Hewlett-Packard and the University of California, Los Angeles, has recently characterized molecules capable of acting as electronic switches and memory (see past issue: "Computing After Silicon," TR September/October 1999). R. Stanley Williams, who heads the effort at HP, says his team expects to build a prototype of a logic circuit that integrates a small number of nanoscale molecular devices within 18 months. "We have the switches and wires - the components to actually make true nanocircuitry," says Williams.

  • Please add a Nanotechnology section so that I don't have to look at this crap.

    Thank you.
  • what I really need to simulate are emergent behavior algorithms

    There is some interesting work along these lines at MIT's Amorphous Computing [mit.edu] web site. What these people are doing will be of very great importance when nanocomputing hardware starts to exist. Luckily, they will be able to apply it long before then, since their assumptions also apply to very-inexpensively-manufactured silicon.

  • Moore's law in silicon devices operates by (1) making things small, (2) working with more things on (2a) a chip or (2b) more chips. Obviously #1 wouldn't apply here, but there is probably a lot of room for #2.

  • WTF are you talking about...

    If you can't stay on topic, go away! You're not impressing anyone with your "diligence".

    Farging 31337 ....
  • My 128 bit keys are looking less secure these days. Actually, it would still take a lot of these little computational engines to crack a key, but just think of an NSA building built out of them :-) (damn, did I just give them the idea?) Bruce Schneier [counterpane.com] talks briefly about what these types of breakthroughs would mean for encryption in his book on encryption...
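    A rough back-of-the-envelope sketch (Python; both throughput figures below are purely assumed for illustration) of why a 128-bit keyspace stays out of reach even for an absurdly large pile of tiny computational engines:

        # Brute-force time for a 128-bit keyspace, with made-up throughput numbers.
        keyspace = 2 ** 128                       # possible keys
        engines = 10 ** 15                        # hypothetical nano-engines in that NSA building
        keys_per_engine_per_second = 10 ** 9      # hypothetical rate for each engine

        seconds = keyspace / (engines * keys_per_engine_per_second)
        years = seconds / (60 * 60 * 24 * 365)
        print(f"about {years:.1e} years to sweep the whole keyspace")   # roughly 1e7 years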
  • Why not analog? *grin*

    Seriously, there are people working on non-volatile analog memory (insofar as that's possible), and analog computers seem to be catching on again. That gives you an infinite number of bits per bit! ;-)

    -jcl

  • Can we use nanotubes, single-walled structures of carbon with diameters of one or two nanometers and lengths of less than a micron, as the next generation of interconnects between molecular-scale devices?

    Now that's just silly. Why use nanotubes as interconnects between molecular transistors when you can use the nanotubes as transistors themselves? Nanotubes come in metallic and semiconducting flavors and consequently it's not difficult to make diodes and transistors out of them (and the popular single-electron transistor is also doable with nanotubes). One can lay down nanotubes in patterns to form gates and whatnot, which imnsho seems easier than trying to twist strange molecules with fields between substrates. Oh well.
  • What happens when we get molecular computing and then have to double the complexity in a year and a half?
  • that crash windows link didn't crash mine
  • Well, one of the ideas mentioned in the article, if I'm not mistaken, was about self-organizing molecules. If we can come up with some sort of sequence that will, say, create a transistor, or even a whole AND gate, on every silicon-atom-tipped needle on which the process is executed, we could follow by devising processes to join these components to each other. Considering the microscopic scales on which these are happening, billions of devices could be fabricated at once. It's microscopic processes happening on macroscopic scales.
  • When you get down to the scale on which molecules are interacting, and start building up the superstructures that could be formed by endless combinations of these structures, you'll probably conceptually end up with something like a cluster of clusters of clusters and things like that. I think that's the way the brain works anyway. No single CPU, just each part doing its job (based on the many clusters of parts doing their jobs beneath it), the sum total of which ends up being the consciousness of the system.
  • Diamond IS a molecule - essentially an 'infinitely large' molecule. It is not a crystal - it does not melt. It irreversibly thermally decomposes. Each carbon atom is covalently bonded to four other carbon atoms. This is what makes it extremely hard.
  • Consider that plastics, proteins, and cellulose are large molecules; so is diamond. You can make a single molecule that you can see with your eye; I've seen one organic molecule many centimeters on a side.
  • well, if we want nanorobotics (and I'm sure that I do), we need some computers to power them. would you like to try to stick a pentium in my nanorobot that's the size of a blood cell?

    Why do we need to put computers into a nanobot? Granted, nanobots need some logic circuits but they really don't need a fully programmable computer on board. Why not use remote control instead of on-board computational power? A transceiver and some simple control logic would be more useful than trying to build a fully programmable computer using nanologic.

  • I've always wondered this... why don't we use 2, 4, etc. bit bits? As I've learned more about processing, I've been able to reason it out, but it still doesn't make sense why we don't do it for storage. Say, have a magnetic disk that writes/reads 16 levels of magnetic strength per cell, so each cell holds four bits instead of one... true, this would require conversion to binary, so it'd be a little slower (probably less than a picosecond per bit, but things add up...), but it'd be denser... as we get down to smaller and smaller media, this would become more difficult...

    This idea is actually used in data communications. You have two different things that affect the bandwidth.

    1. Baud Rate - Number of Symbols per Second
    2. Number of Symbols in Alphabet.

    For example, a 28.8Kbps modem might send 2,400 symbols per second using an alphabet of 4,096 symbols (12 bits per symbol), for a total of 28.8Kbps. NOTE: Baud rate equals bit rate only for binary alphabets.
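    A tiny sketch of that arithmetic (Python; the modem figures are just the illustrative numbers from above):

        import math

        def bit_rate(symbols_per_second, alphabet_size):
            # bits per second = baud rate * bits carried per symbol
            return symbols_per_second * math.log2(alphabet_size)

        print(bit_rate(2400, 2))      # binary alphabet: 2,400 bps (baud rate == bit rate)
        print(bit_rate(2400, 4096))   # 4,096-symbol alphabet, 12 bits each: 28,800 bps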

  • The following book describes the implementation of a Turing machine in the Game of Life:

    Winning Ways (vol. 1 or 2), by E. R. Berlekamp, J. H. Conway and R. K. Guy, Academic Press, 1983.
  • The best reason to use bits is that they can represent the switch and the truth table: the essence of computing.
  • So, who's going to be the first one to port Linux to this new platform?
  • I think we're missing the point with all these posts about miniature computing and hacking the future i-opener molecule edition to run Linux. What if we hacked this thing to meld its controlled molecules into the universe's molecules? Imagine the possibilities: weather control, creation of new worlds, blowing up the MS campus...

    Of course, we're gonna have billy try to control all the ions in our minds...

    (it's not being paranoid if people really are out to get you)

  • You failed to mention that there were aeons between these comet and asteroid impacts in which most life was dead, unless you're okay with that...
  • It won't really matter, it'll be a winmol that'll suck at cracking RC5-64 [distributed.net].
  • This is all cool, but it's still pretty far out. I mean, how long is it going to be until you can go to CompUSA and buy your molecular iOpener?

    Just wait until /. starts posting the latest molecular hack...

    kwsNI

  • besides the obvious "your computer already IS made out of molecules, and so are you" types of remarks i could make... has a really small computer ever been a problem? i wouldn't mind wearing a GHz computer...
  • I've always wondered this... why don't we use 2, 4, etc. bit bits? As I've learned more about processing, I've been able to reason it out, but it still doesn't make sense why we don't do it for storage. Say, have a magnetic disk that writes/reads 16 levels of magnetic strength per cell, so each cell holds four bits instead of one... true, this would require conversion to binary, so it'd be a little slower (probably less than a picosecond per bit, but things add up...), but it'd be denser... as we get down to smaller and smaller media, this would become more difficult...
    from my (extremely limited) knowledge of how these things work, it seems that it would be possible for these molecular processors to process a two-bit 'bit': depending on the voltage level, the electrons would be raised to different quantum levels... anyone know? Does this sound reasonable? Am I completely out of my gourd? (not a first by any means)
  • I think the reason we still use the good old 0 and 1 system is that it's the hardest to misread.

    When you get into 0 through 7 or even 0 through 2 instead of 0 through 1, you get an increased chance of blurring because all these different values have to fit in the same magnetic/electrical/optical range. An error that would cause a 4 to be read as a 5 probably wouldn't even affect a binary system.

    Of course, there's also the minor matter of backwards compatibility...
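    A little sketch of that blurring argument (Python; the Gaussian noise model and the numbers are just assumptions for illustration). With the same signal range and the same noise, a 16-level cell misreads far more often than a 2-level one:

        import random

        def misread_rate(levels, noise_sigma=0.04, trials=100_000):
            # Store a random level in a 0..1 signal range, add noise, read it back.
            step = 1.0 / (levels - 1)
            errors = 0
            for _ in range(trials):
                stored = random.randrange(levels)
                seen = stored * step + random.gauss(0.0, noise_sigma)
                read = min(levels - 1, max(0, round(seen / step)))
                errors += (read != stored)
            return errors / trials

        print(misread_rate(2))    # binary: essentially no misreads at this noise level
        print(misread_rate(16))   # 16 levels in the same range: a large fraction misread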
  • Interesting reading on the problems of creating AI is In Our Own Image, by Maureen Caudill. The book discusses the technological requirements, from the very basic to the advanced, necessary to create a functional being: sight, language, neural networks, autonomous learning, etc. Fascinating. There are also some interesting social ramifications presented - would the being have the same right to life? etc.
  • sure, but don't forget creating more efficient molecules to execute your instructions.
  • mind you, if you didn't make it out of molecules, it'd be a miracle.
  • Kool A Beowulf cluster of clusters of clusters of clusters...

  • Wow computing with a molecule

    Can you imagine a beowulf of these

    Sorry, had to be said

  • "if you made a computer out of molecules"

    Hmm .. my computer is already made out of molecules ..

  • As far as I can see, biology has already pipped the hardware bods to the goal post. ...now we only need Intel/Microsoft headhunters to make the hardware revolution truly complete. Ollie
  • well, if we want nanorobotics (and I'm sure that I do), we need some computers to power them. would you like to try to stick a pentium in my nanorobot that's the size of a blood cell?

    probably the best solution to the heat dissipation problem is reversible computing (a tiny sketch of a reversible gate follows at the end of this comment). I looked on google and found some links:

    • Zyvex [zyvex.com] has a bit on the subject
    • a group [snu.ac.kr] who say they've made a reversible-architecture CPU
    • Mike's BibTeX file [mit.edu]. I have no idea who the guy is, but there are some papers on the concept.
    quantum computing is very good for many tasks (bye-bye Mr. NP-complete!), but not necessarily as good for others. however, my guess is that nanotech will allow quantum computing to become practical

    Lea
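    For the curious, a minimal sketch of what "reversible" means at the gate level (Python; a toy model only, not any group's actual architecture). A Fredkin (controlled-swap) gate never erases information, which is why in principle it can sidestep the Landauer heat cost that ordinary irreversible gates pay:

        def fredkin(c, a, b):
            # Controlled swap: if c is 1, swap a and b; every input is recoverable.
            return (c, b, a) if c else (c, a, b)

        # AND computed reversibly: fredkin(x, y, 0) -> (x, (not x) and y, x and y)
        for x in (0, 1):
            for y in (0, 1):
                c_out, p, q = fredkin(x, y, 0)
                assert q == (x & y)                         # the AND we wanted
                assert fredkin(c_out, p, q) == (x, y, 0)    # applying it again undoes it
        print("Fredkin gate is its own inverse; no bits are erased.")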

  • it's a good idea for some applications, especially for self-replicating types, but there's no real way to stream instructions to several billion running around in a spacesuit (for example) without some getting a huge lag. this /is/ one of those things on my todo list, but I'm a little busy :)

    and in any case, you need logic circuits to run this thing, no matter what, and there aren't any other viable options other than using nanocomputing. mechanical, electrical, whatever. you have to have something on the other end, or else the instructions aren't going to help at all.

    another problem that people are beginning to run into is that nanotech won't interface with what we have now very smoothly. there's going to have to be a huge changeover from one type of tech to another. it's not quite as bad as quantum, but think that sort of a change.

    in any case, there is a lot of promise to nanocomputing in just about every application, especially embedded or portable stuff (can you IMAGINE the mp3 players? :) ). I'd suggest taking a look at some of the more popular works of Drexler (mostly) and Merkle. there are all sorts of reasons to want nanotech in general, nanorobotics more specifically, and nanocomputing most specifically -- and not only because we'll need it to run all the other stuff.

    Lea
  • very close to the stuff I'm doing right now, though I'm only doing it because it's sitting in the way of what I really want to do :)

    if you remember the "nasa" snakebot article, that robot was actually a copy of PARC's PolyBot [xerox.com]. They have another robot (not completely built yet) called Proteo [xerox.com] which is exactly the embodiment of something like this.

    Lea
  • oddly enough, my nanotech research could use some of the molecular computers that I've been designing (that's just a sideline)

    what I really need to simulate are emergent behavior algorithms -- then I need to explain them to mechanical engineers, which is the /really/ hard part. I'm thinking pretty pictures :)

    Lea
  • We need new computing models (i.e. Quantum Computing) not just smaller/faster versions of what we have.

    You can have new models of computation without busting your noggin against exotic new physics. One easy way to do that is with new algorithms. Public-key cryptography opened up all kinds of interesting opportunities, running on tedious conventional hardware.

    A couple years ago, I heard a talk at MIT about amorphous computing [mit.edu]. It is basically a way of thinking about algorithms and communication so that we can successfully program low-reliability hardware (ordinary lithography/silicon stuff) to get reliably high performance. It's approximately the art of coordinating behavior in the presence of noise and unpredictability.

    As a side benefit, this work is applicable to a lot of different scenarios for nanocomputing. They assume systems where processors or the communication pathways between them may be unreliable, or where there isn't a regular geometry, so this is the kind of thinking you'd need to program plaque-cleaning bots wandering around your arteries.
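    A toy sketch of that "reliable answers from unreliable parts" idea (Python; the pairwise gossip-averaging scheme and the loss rate are just illustrative assumptions, not the MIT group's actual algorithms). Cells repeatedly average with random partners over a lossy channel, and the population still converges on the global mean:

        import random

        random.seed(0)
        cells = [random.random() for _ in range(200)]   # each cell holds a noisy local reading
        DROP_RATE = 0.3                                  # 30% of messages simply vanish

        for _ in range(20000):
            a, b = random.sample(range(len(cells)), 2)
            if random.random() < DROP_RATE:
                continue                                 # message lost; nobody retries
            cells[a] = cells[b] = (cells[a] + cells[b]) / 2   # pairwise gossip: adopt the average

        print("spread:", max(cells) - min(cells))        # tiny, despite the unreliable links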

  • Actually in AI and machine learning, most algorithms are basically search algorithms that look for a particular function/hypothesis. The problem is that usually the problem they are trying to solve is NP complete, i.e. the search space of possible hypotheses is exponential. To get around this, learning algorithms make horrible (and I mean ridiculously bad and wrong) assumptions and simplifications. An example would be the naive Bayes classifier, which is one of the best learning algorithms and yet the assumption it is based on is just plain wrong, but it makes computation faster. Also, neural networks are usually made only 2 or 3 layers deep because of speed considerations. Having more layers would improve the "intelligence" of the algorithm a lot, but is computationally intractable. So speed is currently a very important issue, while Gödel's theorem is almost irrelevant (in practice, philosophy put aside).
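    A minimal sketch of the trade-off being described (Python; toy data and a deliberately bare-bones Bernoulli naive Bayes). The "naive" part is the line that multiplies per-feature probabilities as if the features were independent - almost always false, but it turns an intractable search over joint hypotheses into one linear pass:

        from collections import defaultdict

        def train(samples):
            # samples: list of (tuple of 0/1 features, label)
            counts = defaultdict(lambda: [1, 1])   # Laplace-smoothed (label, feature) counts
            label_counts = defaultdict(int)
            for feats, label in samples:
                label_counts[label] += 1
                for i, f in enumerate(feats):
                    counts[(label, i)][f] += 1
            return counts, label_counts

        def predict(feats, counts, label_counts):
            best, best_score = None, float("-inf")
            total = sum(label_counts.values())
            for label, n in label_counts.items():
                score = n / total
                for i, f in enumerate(feats):
                    c = counts[(label, i)]
                    score *= c[f] / (c[0] + c[1])   # the "naive" independence assumption
                if score > best_score:
                    best, best_score = label, score
            return best

        data = [((1, 1, 0), "spam"), ((1, 0, 0), "spam"), ((0, 1, 1), "ham"), ((0, 0, 1), "ham")]
        model = train(data)
        print(predict((1, 1, 0), *model))   # -> "spam", despite the (wrong) independence assumption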
  • Diamond is not a molecule -- its chemical formula is just C, i.e. one carbon atom. It is a crystal with its lattice organized in such a way that it is really, really hard. You are right about big single molecules though; in plastic production plants, when something goes wrong all the liquid in the "reactor" polymerizes and you get one big molecule meters across. Those are extremely hard compared to normal plastics and extremely tough to break up.
  • They mention this in the article but I don't really think that they emphasised it enough. This is just some work building basic building blocks, not entire systems.

    Building systems is where the real challenge lies. It's the difference between having a transistor or a simple logic gate and having VLSI circuits. There are heat dissipation problems that were not even mentioned in the article.

    Transistors work well for what they do. We need new computing models (i.e. Quantum Computing), not just smaller/faster versions of what we have.

  • MIT's Technology Review this time around is a special issue dedicated to alternative/future computing methods. One of the sections is dedicated to molecular computing, as well as sections discussing possible uses of DNA and other technologies. Very interesting and accessible to those without any particular specialized knowledge.
  • actually, you might want to take a look at some of the more popular works of Merkle [merkle.com] and Drexler [foresight.org], describing (very accurately) both the advantages and disadvantages of nanotechnology. the environmental consequences are actually one of the best things, in my opinion.

    self-replication and self-assembly mean factories turn into tanks, without spewing toxic chemicals all over the place. we would probably almost entirely stop using roads for shipping (and transportation in cities) in favor of extremely fast underground subways. we can smear the roads with an extremely tough substance which essentially acts as a solar panel that you can drive on and lasts for quite a long time. it's a very good way to get power -- you take otherwise useless radiating heat from the roads outside, and you release it out your roof, and on the way it's done a little work. toxic waste? that's one of the easiest of all to take care of. think of the bacteria that scientists are developing to "eat" oil slicks. it's more than possible to break down, molecule by molecule, entire toxic waste dumps into basically whatever you would like.

    disadvantages: grey goo. if something eats up the entire earth, the environment will go, along with everything else. there are quite a lot of people worrying about this -- we anticipated it, so it's likely we can take care of the risk (through blue goo or similar)

    there are other advantages: perfect recycling at a molecular level, basically an end to cancer and many other lethal and debilitating diseases, a chance to explore our galaxy... there are disadvantages as well -- but the environmental condition is not likely to be one of them.

    Lea

  • by FascDot Killed My Pr ( 24021 ) on Wednesday May 10, 2000 @05:32AM (#1081254)
    I need to get on the bandwagon. Everyone is computing with molecules and here I am like a dolt, using pure energy.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • by spiralx ( 97066 ) on Wednesday May 10, 2000 @05:40AM (#1081255)

    Although the advances were encouraging, the challenges remaining are enormous. Creating individual devices is an essential first step. But before we can build complete, useful circuits we must find a way to secure many millions, if not billions, of molecular devices of various types against some kind of immobile surface and to link them in any manner and into whatever patterns our circuit diagrams dictate. The technology is still too young to say for sure whether this monumental challenge will ever be surmounted.

    They've shown that it's possible to build molecular-level electronic components like an AND gate and a memory analogue, but it'll be connecting them together to form circuits that'll be the real challenge.

    Think of how many transistors comprise even the simplest of processors nowadays, and the technology it takes to fabricate them. And then think of doing the same, but with individual components and connections consisting of only a few molecules (a toy gate-counting sketch at the end of this comment gives a feel for how quickly the numbers add up). There are going to have to be some real advances made in the ability to manipulate matter on this scale before this sort of thing becomes feasible on a large scale where economies of scale can apply.

    But at least the proof of concept is there, and work will advance quickly with the threat of the end of Moore's Law approaching. And yet again, so much for all of the "end of computing" doomsayers :)
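    To give a feel for the gap between "we have a device" and "we have a circuit", here is a toy sketch (Python; ordinary gate-level logic, nothing molecular about it) of a 1-bit full adder built from nothing but NAND - nine devices and a dozen connections just to add two bits:

        def nand(a, b):
            return 0 if (a and b) else 1

        def full_adder(a, b, cin):
            # Sum and carry from NAND gates only (the textbook 9-gate construction).
            t1 = nand(a, b)
            t2 = nand(a, t1)
            t3 = nand(b, t1)
            axb = nand(t2, t3)       # a XOR b (4 NANDs)
            t4 = nand(axb, cin)
            t5 = nand(axb, t4)
            t6 = nand(cin, t4)
            s = nand(t5, t6)         # (a XOR b) XOR cin (4 more NANDs)
            cout = nand(t1, t4)      # carry out (1 more NAND)
            return s, cout

        for a in (0, 1):
            for b in (0, 1):
                for cin in (0, 1):
                    s, cout = full_adder(a, b, cin)
                    assert s + 2 * cout == a + b + cin   # exhaustive truth-table check
        print("9 NANDs per bit of addition; a 32-bit adder alone is hundreds of devices.")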

  • by spiralx ( 97066 ) on Wednesday May 10, 2000 @06:14AM (#1081256)

    Don't get me wrong -- I've got nothing against faster processors. But my computer is already fast enough to play movies and do 99% of anything that I really want it to. If I'm not doing numerical integration, do I really need it to get any faster?

    You don't and I don't (I was running a P133 until it broke) but there are a lot of requirements out there for fast computers. Look at this [slashdot.org] story for an example of a problem domain where superfast computers are required. There are a huge number of simulation tasks out there which can always use more power in order to use better models. And that can lead to any number of new technologies for us.

    There'll always be a need for better computers. After all, since the smallest computer required to simulate the Universe is the Universe itself, we can always build a better machine to better simulate things.

  • by zpengo ( 99887 ) on Wednesday May 10, 2000 @05:42AM (#1081257) Homepage
    I remember reading a long time ago about small computers built using Conway's Game of Life. People with way too much free time on their hands actually sat down and started constructing logic gates, and then put those on a huge playing field where they could interact with each other and actually crunch numbers. I was impressed. (A minimal Life stepper is sketched after this comment, for the curious.)

    Molecular computing is a similar phenomenon. At this point it's not really feasible, but who knows what it could turn into? Once computers reach the microscopic level, then we can begin to see some cool things happening. It will be like the transition from vacuum tubes all over again!
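    A minimal sketch of the substrate those Life computers run on (Python; just the plain birth-on-3, survive-on-2-or-3 rule pushing a glider along - the logic gates are built by colliding streams of gliders like this one):

        from collections import Counter

        def step(live):
            # One generation of Conway's Life; `live` is a set of (x, y) cells.
            neighbours = Counter((x + dx, y + dy)
                                 for (x, y) in live
                                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                                 if (dx, dy) != (0, 0))
            return {cell for cell, n in neighbours.items()
                    if n == 3 or (n == 2 and cell in live)}

        glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
        g = glider
        for _ in range(4):
            g = step(g)
        assert g == {(x + 1, y + 1) for (x, y) in glider}   # moved one cell diagonally
        print("after 4 generations the glider has shifted by (1, 1)")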

  • by streetlawyer ( 169828 ) on Wednesday May 10, 2000 @05:47AM (#1081258) Homepage
    Yeah, yeah, great idea, molecular computing. Never mind that the phenolacytithin molecules involved are proven carcinogens. Never mind that the manufacture of these horrifically toxic chemicals spreads huge amounts of di-ethyl-chloroplectythides (DEPs) over whatever luckless Third World country ends up having the plants sited there. Just so long as we get our new toys.

    If you guys would just leave the computer room for a while, you might get some sense of proportion about what you're destroying. I happen to own a small ngwa in the limitless plains of Tanzania, bought with my share of the fees on a biggish corporate real estate settlement. It brings tears to my hard face to think of the despoliation that will be wreaked somewhere just as lovely, simply in order to produce more toxic shit so that somebody can play Quake a little bit faster.

"One day I woke up and discovered that I was in love with tripe." -- Tom Anderson

Working...