DNA Solves Million-Answer NP-Complete Problem

cybrpnk writes: "A 'DNA computer' has been used for the first time to find the only correct answer from over a million possible solutions to a computational problem. Leonard Adleman of the University of Southern California in the US and colleagues used different strands of DNA to represent the 20 variables in their problem, which could be the most complex task ever solved without a conventional computer. Details to be published in Science."
  • Where is the promised press release here?
  • now, one needs to ask: "can this same technology be used to break the codes/algorithms used within security?" The article is a bit lacking in details of how exactly they achieved this, but chaos theory is in action here.
    • Well, any computer can be used to attack crypto algorithms; it's just a matter of whether the computer can do so in a reasonable amount of time (like, say, the lifetime of the universe :) ). With current technology it is not possible to break these algorithms, because the problems they are based on are believed to be hard. You can view this DNA computer as simply being a new architecture, like Intel or SPARC. It's cool, but it has no effect on the difficulty of problems. The computing models that computational complexity theory is based on are mathematical, and do not depend on any physical implementation of a machine.
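
      To put a number on that "lifetime of the universe" remark, here is a quick back-of-the-envelope sketch (the 128-bit keyspace and the guess rate below are illustrative assumptions, not figures from the article):

```python
# Illustrative only: time to exhaust a hypothetical 128-bit keyspace
# at an optimistic 10^9 guesses per second.
guesses_per_second = 10 ** 9
seconds_per_year = 3600 * 24 * 365
years = 2 ** 128 / (guesses_per_second * seconds_per_year)
print(f"{years:.2e} years")  # ~1.1e22 years, vs ~1.4e10 years since the Big Bang
```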

      Quantum computers are a slightly different story. They are theoretically non-deterministic machines that can solve all problems in NP in polynomial time. In theory, with a big enough quantum computer you could break all currently used encryption schemes, but in practice, no one has really come close to building a quantum computer to actually do anything non-trivial let alone useful. I've heard it suggested (I don't really know anything about quantum mechanics, so maybe the physics geeks in the crowd could comment on this and let me know if I'm full of crap) that it may be the case that while a quantum computer may be able to solve NP-hard problems in polytime, the difficulty and complexity of building such a machine might be proportional to the difficulty of solving the problem in P.

      If this is the case, then even if it's theoretically possible to build quantum machines to break known crypto algorithms, in practice it would be infeasible to do so. If the complexity of building the machine doubles when you add a qubit then you may as well just use a deterministic computer and wait a few billion years for your answer.
      • The interesting feature of DNA computing is that it is massively parallel, potentially much more so than anything we have built electronically so far. Also, it is not restricted to a specific clock cycle, which makes it hard to define the "computing power" of a bowl of strands. Still, they are not equivalent in power to Nondeterministic Turing Machines in general, so DNA computers will not necessarily be the end of all NP-complete problems. Probably only those of a limited size.

        Any instance of an NP-complete problem can be solved by a deterministic computer in a fixed time. Often, throwing more computers at the problem lowers the time linearly, so for that specific problem the time can be as low as wanted. It is just that the number of computers you have to throw at the problem grows very fast as the problem size grows. What they have shown here is that DNA computing works at somewhat large problems, but 2^20 could be brute-forced just as easily.
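
        To make "2^20 could be brute-forced just as easily" concrete, here is a minimal sketch; the random instance below is hypothetical, since the story doesn't give Adleman's actual clauses:

```python
import random

random.seed(0)
n_vars, n_clauses = 20, 24

# Hypothetical random 3-SAT instance: each clause is three
# (variable index, is_negated) literals.
clauses = [
    [(random.randrange(n_vars), random.choice((False, True))) for _ in range(3)]
    for _ in range(n_clauses)
]

def satisfies(bits, clauses):
    # `bits` is an int whose i-th bit is the truth value of variable i.
    # A clause is satisfied when at least one of its literals is true.
    return all(
        any(((bits >> v) & 1) != neg for v, neg in clause)
        for clause in clauses
    )

# Exhaustively check all 2^20 = 1,048,576 assignments.
solutions = [bits for bits in range(1 << n_vars) if satisfies(bits, clauses)]
print(len(solutions), "satisfying assignments out of", 1 << n_vars)
```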

        Now, you claim that Quantum Computing can solve ALL NP-complete problems. That is not known. The group of problems that are polynomially quantum-computable is a subset of the NP problems which is not known to include any NP-complete problem.
        (We don't know that it doesn't, since that would mean that we knew that P != NP, and we would have heard about that :).

        Factoring can be performed efficiently by a quantum computer, but is not NP-complete.
        (It is in NP intersect coNP, incidentally).

        /RS
    • well I guess Leonard Adleman is the guy to know.
      he would be the 'A' in RSA.
  • But with only a million different combinations - surely this is no match for today's computers yet! How long will it be before it can be used for more practical problems?
    • This is an important breakthrough. They have shown that DNA can solve their test problem, and that is a big and necessary step on the way to creating DNA computers more powerful than conventional computers for some special problems.

      I look forward to when this will find a practical use. It is certainly possible. Hopefully this experiment generates some interest, not to mention good ideas for future DNA computers.

  • Which problem? (Score:3, Interesting)

    by cperciva ( 102828 ) on Saturday March 16, 2002 @09:03AM (#3172866) Homepage
    The article is a bit garbled (exponential time != NP-complete!) so it's a bit hard to work out what problem they were looking at.

    I'm guessing 2-SAT; can anyone confirm/deny this?
    • But 2-SAT is not NP-complete, nor does it need exponential time. Maybe you're thinking of 3-SAT or just plain SAT?
    • 2-SAT is in P. 3-SAT, 4-SAT and so on are all NPC. In fact, 3-SAT was the first problem which was proved to be NPC (and, AFAIK, the only problem proved to be in NPC without using a Karp reduction proof).
      • Re:Which problem? (Score:2, Informative)

        by allanj ( 151784 )

        If memory serves me correctly, the problem described in the REAL article [usc.edu] is a 3-SAT, but algorithmics class was years back and beginning to fade from memory...

        • Yep, the article describes 3-SAT - so you're remembering your algorithms class just fine. Also, the article linked from the slashdot story describes 3-SAT in a nice, easy-to-understand but not too dumbed-down way (I posted before reading the article). I just looked it up and 3-SAT was proved to be NPC in 1974 by Cook and Levin.
          • I just looked it up and 3-SAT was proved to be NPC in 1974 by Cook and Levin.

            The first problem proved NPC was general CNF SATISFIABILITY, by Cook in 1971 [epita.fr]. SATISFIABILITY can be shown to polynomially reduce to (CNF) 3-SAT (proving that 3-SAT is NPC), and 3-SAT reduces to lots of other problems, which themselves reduce to even more problems (this is the typical way to prove a new problem NPC).

            I somewhat doubt that 3-SAT wasn't proven NPC until 1974, because SAT->3-SAT is a pretty easy reduction. (You just introduce a bunch of dummy variables, e.g. (a or b or c or d) becomes ((a or b or z) and (~z or c or d)) and so on; the reduction is linear time in the number of variables in the original expression.)
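
            The dummy-variable trick is easy to mechanize. A minimal sketch (literal and dummy names are illustrative):

```python
from itertools import count

fresh = count()  # supplies indices for dummy variables z0, z1, ...

def to_3sat(clause):
    """Split one long clause into 3-literal clauses with fresh dummies,
    as in the reduction above: (a|b|c|d) -> (a|b|z) & (~z|c|d).
    Literals are strings; a leading '~' marks negation."""
    out = []
    while len(clause) > 3:
        z = f"z{next(fresh)}"
        out.append(clause[:2] + [z])     # first two literals, OR the dummy
        clause = ["~" + z] + clause[2:]  # NOT dummy, OR the rest
    out.append(clause)
    return out

print(to_3sat(["a", "b", "c", "d"]))  # [['a', 'b', 'z0'], ['~z0', 'c', 'd']]
```

            The loop runs once per extra literal, so the reduction is linear in the clause length, matching the claim above.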
      • The first problem proved to be NP-complete is SAT (that's Cook Theorem). 3-SAT is the first problem proved NP-complete by Karp in his seminal paper.
    • Adleman and co-workers expressed their problem as a string of 24 'clauses', each of which specified a certain combination of 'true' and 'false' for three of the 20 variables. The team then assigned two short strands of specially encoded DNA to all 20 variables, representing 'true' and 'false' for each one.

      Clearly the problem was 3-SAT.

    • Re:Which problem? (Score:3, Informative)

      by Lictor ( 535015 )
      It was definitely 3-SAT; I just spoke with one of the experimenters yesterday.
      • Re:Which problem? (Score:1, Informative)

        by Anonymous Coward
        It was definitely 3-SAT; I just spoke with one of the experimenters yesterday.

        Well tell them that a 24-clause, 20-variable 3-SAT problem isn't very hard. You can number the variables in an arbitrary way, sort the clauses by "smallest index variable it uses" and start assigning values from the end. You only need very little guesswork. (Yeah, I'm contesting the claim that "A DNA-based computer has solved a logic problem that no person could complete by hand".)

        You need about 5.1909 times more clauses than there are variables to get a "tough" 3-SAT instance. That means 104 clauses, not 24.
        • >Well tell them than a 24-clause 20-variable 3-SAT problem isn't very hard.

          I don't think anyone would argue with you on this point. These are still toy examples... proof-of-principle type stuff. Remember that this result came out of Len Adleman's lab; if you're aware of who Adleman is, then I assume you'll agree it's likely he was quite aware of this fact. If you don't know who Adleman is... well, I assume anyone mathematically literate enough to come up with the (quite correct) argument that you made would have to be at least passingly familiar with him.

          If you're interested in attacks on DNA computing, I highly recommend reading:

          Juris Hartmanis. On the weight of computations. Bulletin of the European Association For Theoretical Computer Science, 55:136--138, 1995.

          It's short, but sweet.

          As for the "no person could complete by hand" bit; I have to agree with you. The concept of "doable by hand" is pretty ill-defined. In any case, that's not the *real* point of the paper. The point is that the lab tech who implemented the computation did not know the solution a priori. Seems like a pretty obvious control, but so far all previous experiments have been so trivial that a human (or moderately skilled monkey) could trivially see the answer just by looking at them.
  • Nice, but... (Score:1, Informative)

    by Anonymous Coward
    this doesn't make NP-complete problems polynomial, of course.

    Still, impressive!
    • Indeed. The advantage of DNA computing is massive parallelism, but consider the 3-Satisfiability problem (the classic NP-complete problem) with n variables. Regardless of the number of parallel processors (be they silicon or DNA), the best known general algorithms have worst-case complexity O(2^n), which means:

      • For n=10, a few thousand operations are required.
      • For n=30, a few billion operations are required.
      • For n=250, around 2e+75 operations are required, within a few orders of magnitude of the roughly 1e+80 atoms in the observable universe.

      So imagine a DNA computer consisting of every atom in the universe -- even for this computer, a solution to 3-Satisfiability for n>250 would be basically hopeless.
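
      The growth above is easy to check directly:

```python
# Checking the growth quoted above for a brute-force 2^n search.
for n in (10, 30, 250):
    print(f"n = {n}: about 2^{n} = {2 ** n:.3e} operations")

# 2^10 is about a thousand, 2^30 about a billion, and 2^250 about 1.8e75:
# within a few orders of magnitude of the ~1e80 atoms usually quoted
# for the observable universe.
```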

      • Re:Nice, but... (Score:2, Informative)

        by gkatsi ( 39855 )
        These numbers are nice, but they ignore the fact that current algorithms for solving SAT can routinely solve random 250-variable problems in subsecond times, and even larger structured problems. zChaff, currently the fastest solver, has even solved some 1-million-variable problems (it probably took a few days, but that's still better than the multiple-universe-lifetimes prediction of 2^n).
      • Yes, but it turns out that not all instances of 3-SAT (or SAT) actually require that much time. Do a search on Google for "phase transitions in SAT" for more information.
  • Our old binary digit computers have been around for AGES (50+ years in tech time ;-)

    It is really time to search for new and more complex computer architectures if we ever want to have true AI (a la HAL 9000). I always wondered why people insist on building neural nets on ordinary computers rather than building them on dedicated neural hardware (expeeeensive, but it needs to be tried more often).

    Of course, today's computers do a good job of simulating them, but there are too many architectural limitations. Brains are a hella lot older than computers, and a hella lot more advanced (but very fragile).

    Way to go dudes.
    • This isn't new research. In fact Adleman has published materials on his results before.

      The big setbacks of DNA computing are

      1. Slow to setup
      2. Takes a while to get results
      3. Has a high rate of error [compared to a typical computer]

      What people mistake is that while a DNA computer could test 2^n cases simultaneously it still takes a while to get the results back. During that time errors can creep in (this is real life...)

      Tom
  • Real article (Score:5, Informative)

    by mnordstr ( 472213 ) on Saturday March 16, 2002 @09:08AM (#3172881) Journal
    Here is the article [usc.edu] at USC which covers the subject, including an interesting picture!
  • by Ozan ( 176854 ) on Saturday March 16, 2002 @09:12AM (#3172887) Homepage
    ...it's not a bug it's a mutation.
  • Which OS? (Score:3, Funny)

    by Gangis ( 310282 ) on Saturday March 16, 2002 @09:12AM (#3172888) Journal
    Which OS did they use? Microsoft DNA? [extropia.com]

    *rimshot*
  • I must resist my urge for an inane beowulf cluster comment here....so instead...can you imagine a beowulf cloned with one of these?
    • Actually, the beowulf comment MIGHT actually be applicable this time. Because that is what clustering is all about. So really... a beowulf cluster of digital machines is an attempt at making one huge DNA machine.

      Now if only they could stop making the damn DNA computer morph into Bill Gates. And I thought sheep were baaaad.
    • please forgive me, for i know not what i do...plus i'm really sleepy...
    • If you won't make an inane beowulf comment, I will.
      Imagine a beowulf cluster of these!
      It's easy to imagine, since that is basically what DNA computers are: clusters of DNA molecules.

      They can put a lot of DNA molecules together and have them compute in parallel, which is why this has so much potential.

    • A Beowulf cluster of DNA?

      I think it's called the human body...
  • by Alien54 ( 180860 ) on Saturday March 16, 2002 @09:21AM (#3172911) Journal
    It would be interesting to find out what the comparative time for finding an answer would be for common conventional computers vs. this process.

    at least we would have a benchmark of sorts.

    I imagine that the problems of creating a truly AI computer will be solved using a DNA based computer.

    ;-)

    • that is the biggest illusion people are running under. Everyone seems to think the only problem with, as you call it, a "truly AI computer" (which doesn't even make much sense in itself, but I'll give you the benefit of the doubt)

      is computing power and parallel processing vs. the inherently serial nature of today's conventional computing solutions. The benchmark idea is nice, but the BIGGEST problem with AI is trying to create a model that can handle every kind of input (even one to start with would be nice) and be able to generalize concepts as well as learn independently (without predefined associations between internal structure and external input).

      computing power is nice, but we can't even create the correct models yet. Even if it's slow, at least it'll work. Sure, DNA sounds nice... because it's biological a la humans, but beyond that it's just another computing method.

      QED
      • ah, I see.

        You took it very seriously because it hit a hot button.

        There is a technical term for the comment I made about a DNA based AI computer.

        It is called a "joke".

        This is because of the cognitive dissonance between the usual attempts at AI using wires and circuits and such, and DNA from living creatures, where some form of awareness and intelligence can occasionally be found.

        go have yer morning cup of coffee, and relax. perhaps you didn't see the smiley face in the original post.

    • On, say, a P3 1GHz... I guess it would take around 0.000000 sec, if you use some of the older algorithms. That is, times() is not accurate enough to measure how long it would take.

      That would be even faster for newer and better algorithms.
  • Each of the possible 1 048 576 solutions were then represented by much longer strands of specially encoded DNA, which Adleman's team added to the first cell

    So they had to go through and find all the possible answers and put them in the mix? I don't think this will ever be a viable solution when, to find the correct answer, you have to know all the wrong ones as well. Part of problem solving is starting out with 0 answers. I bet they couldn't use the DNA computer to figure out all those 1048576 wrong solutions. Although I think it's an interesting concept, until they can figure out a way to solve problems without knowing all the answers, I do not see how it could be used for serious calculations. I think it could possibly be the next great search engine, or maybe an RC5 engine. But how long would it take to encode 18,446,744,073,709,551,616 strands of DNA?

    • Actually the correct answer should stand out like a beacon.

      In early Adleman experiments the correct solution is one where the opposing strands completely bind together. Then they use a comb + gel + electrode + glow_in_dark_radioactive_dye. The shorter strands will make it further through the gel towards the electrode; the longer [complete answer] strands will not move as far.

      Obviously new DNA computers use different filtering techniques...

      Another thing you should read up on is PCR, or Polymerase Chain Reaction. It's a technique that won a Nobel prize. I don't recall the details, but it involves making each strand duplicate itself. So to get the 2^64 strands you alluded to, you apply the PCR technique 64 times.
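
      The doubling arithmetic is simple. A sketch, assuming ideal, error-free amplification (and note that PCR makes identical copies, so this amplifies one strand into many copies rather than producing distinct strands):

```python
def pcr_cycles_needed(target, start=1):
    """Cycles of (ideal, error-free) doubling needed to amplify `start`
    identical strands to at least `target` copies."""
    cycles, count = 0, start
    while count < target:
        count *= 2
        cycles += 1
    return cycles

print(pcr_cycles_needed(2 ** 64))  # 64
```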

      [no mention of errors...]

      Tom
      • PCR would really not apply because, as you said, 'it involves making each strand duplicate itself'. So if you have 2^64 duplicate strands you would have 2^64 copies of 1 answer. You would need to create 2^64 unique strands to get that many answers.
  • The REAL article [usc.edu] states that

    "It would take a breakthrough in the technology of working with large biomolecules like DNA for molecular computers to beat their electronic counterparts".
    So while the article also mentions code-breaking as an obvious application for this ... device ..., electrons will be our computing friends for the foreseeable future.


  • LSD (Score:2, Insightful)

    by GigsVT ( 208848 )
    And to think, LSD made this all possible [thegooddrugsguide.com]

    "Would I have invented PCR if I hadn't taken LSD? I seriously doubt it,"

    -- Dr Kary Mullis, Nobel Prize Winning inventor of the Polymerase Chain Reaction that allows pretty much all the modern research in DNA technology.

    And to think that on the same Google search where I found that, there was a sponsored link to "How drugs support terrorism" from the government.
  • Two years ago I bought

    ISBN 3-540-64196-3

    which discussed Adleman's experiments.

    How could this article describe it as the "first" dna computer used to solve a problem?
    • Re:Is this new? (Score:3, Insightful)

      by Lictor ( 535015 )
      Well it certainly isn't the "first" DNA computer to solve a problem. However, it *is* the first one to solve a problem that would be seriously non-trivial for a human to do by hand.
    • Precisely my question. I would swear that I saw an article about this exact experiment using eight variables in SciAm three or four years ago. My first reaction when I saw this piece on Slashdot was actually "this is news?".
  • by guttentag ( 313541 ) on Saturday March 16, 2002 @09:29AM (#3172935) Journal
    before we hear:

    It looks like you're attempting mitosis. Now would be a great time to sign up for a Passport account. WARNING: Are you sure you want to attempt mitosis without a Passport account? Your ancestors may regret it!

    And on the other side of the coin, the Open Source DNA advocates will be saying:
    You don't need a Passport account to have kids, honey. Yes, it's perfectly safe. Support? Who the hell told you Microsoft was going to support our child?! A free PC?!

  • by hyrdra ( 260687 ) on Saturday March 16, 2002 @09:46AM (#3172964) Homepage Journal
    According to Adleman and co-workers, their demonstration represents a watershed in DNA computation comparable with the first time that electronic computers solved a complex problem in the 1960s. They are optimistic that such 'molecular computing' could ultimately allow scientists to control biological and chemical systems in the way that electronic computers control mechanical and electrical systems now.

    Huh? What they did here is use the self-assembly property of DNA, whereby the nucleotides A, T, C, and G form bonds only with their complements.

    That means you can have millions of candidate solutions and the whole thing will solve itself, because only the correct solutions 'fit' into the problem, which you have represented in the ATCG language. You can do the same thing when you add more variables, and it's just as easy. That is something very hard to do with electronic computers, because they deal with information at the quantity level, whereas DNA is able to solve a problem at the problem "abstraction" level itself.
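
    The complementarity rule itself is easy to write down. A minimal sketch:

```python
# Watson-Crick pairing as described above: A bonds with T, C with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    """The strand that would anneal to `strand` (antiparallel, base-paired)."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ACGTTG"))  # CAACGT
```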

    However, conventional *serial* problems are something very hard to do with DNA, because they involve the manipulation of a single strand, whereas you would be working in parallel with millions, even billions of strands for NP-complete problems. A DNA strand is infinitesimal compared to today's Si processes, where we measure things in micrometers. DNA is in the single-nanometer range. That's several hundred times smaller than a single wavelength of light.

    I don't think DNA will be viable for most standard computational tasks, or for a practical Turing machine. Biological systems don't use DNA to do logical operations (that I know of), and the only thing they use it for is for data storage (instructions for building proteins). The only operations (under normal circumstances) an organism does with DNA is copy. Mutations (reversals, transpositions, etc.) occur because of chemical errors. That is the only operation it does really.

    This all seems very interesting albeit limited to the lab.
    • That means you can have millions of solutions and the whole thing will solve itself because only the correct solutions 'fit' into the problem which you have represented in the ATCG language.

      Well, the way you stated the problem, all problems are solvable (on a digital computer) in time linear in the problem representation. The most amazing order I've seen so far.

      Hint: char alphabet[]="AGCT";

    • However, conventional *serial* problems are something very hard to do with DNA, because it involves the manipulation of a single strand whereas you would be working in parallel with millions, even billions of strands for NP complete problems.

      You are missing the point: NP complete problems (like 3-SAT, which is what they are solving with this DNA computer) are incredibly difficult for serial computers to solve. Nobody seems to be suggesting that DNA computers should be used for serial problems that we can already solve with conventional computers. However, being able to solve SAT (and hence other NP-hard) problems more efficiently is incredibly useful, because SAT problems exist that we can't (and might never be able to) solve using existing techniques.

      • You are missing the point: NP complete problems (like 3-SAT, which is what they are solving with this DNA computer) are incredibly difficult for serial computers to solve.

        Not quite. If you have a class of problems, characterized by some parameter n, then for large enough n, the problems in a class that is NP will get harder with increasing n faster than they would if the class was P.

        But for any particular instance of a class of problems, it doesn't really matter what the class is--in fact, you can construct example problems that would be NP if generalized in one way, P if generalized another, or even constant time if you choose a perverse "generalization" (e.g., n=date on which the question is posed).

        Saying that the problem was NPC is a red herring; what they are actually doing is making a time/space tradeoff which would be hard on conventional computers, and then solving a particular example problem (not the class of problems).

        -- MarkusQ

        • Not quite. If you have a class of problems, characterized by some parameter n, then for large enough n, the problems in a class that is NP will get harder with increasing n faster than they would if the class was P.
          True as far as it goes. That's the whole point of complexity: to talk about the growth in the difficulty of solving a problem as the problem size grows. The reason we spend so much time studying NP hard problems is because all known general methods for solving them grow exponentially in the worst case.
          Saying that the problem was NPC is a red herring; what they are actually doing is making a time/space tradeoff which would be hard of conventional computers, and then solving a particular example problem (not the class of problems).

          The reason I replied to your original comment was that you implied that the work wasn't useful, or wasn't as much of a breakthrough as Adleman claims. Saying that they only solved a single instance isn't relevant: they have a method that works on any 3-SAT problem for which they can construct a long enough DNA chain to represent an assignment, and they have an implementation that actually finds the solution.

          Don't forget that the first practical computer algorithm for SAT (Davis and Putnam, A Computing Procedure for Quantification Theory, Journal of the ACM, 1960) didn't even have a computer implementation: they demonstrated its usefulness by working an example out by hand!

          • The reason I replied to your original comment was that you implied that the work wasn't useful, or wasn't as much of a breakthrough as Adleman claims

            I assume you are referring to the post by Hydra; my only prior post on this topic was a reply to you, not vice versa.

            Saying that they only solved a single instance isn't relevant: they have a method that works on any 3-SAT problem for which they can construct a long enough DNA chain to represent an assignment, and they have an implementation that actually finds the solution.

            But (I claim) they have only distributed the work, adjusting the time required by factors that depend on things like:

            • The number of DNA strands they use (which goes up roughly as the cube of the linear scale of their reaction chamber(s), or linearly with the cost of reagents, and goes down linearly with n).
            • The length of time that it takes two strands to bind (linear with length, IIRC).
            • The chance of a mismatch (best-case coding, linear with length, but could easily turn into a birthday problem).
            • Etc.

            Note that these are all polynomial factors, and some of them work against you; you haven't really changed the NP-ness of the problem. Yes, this is cool work, and I'm glad they are doing it; no, it doesn't affect the math, any more than faster CPUs (for which I am also glad) do. To me, a "breakthrough" would be something like cracking NPC or proving that impossible. Everything else is just technology.

            -- MarkusQ

    • the only thing they use it for is for data storage (instructions for building proteins). The only operations (under normal circumstances) an organism does with DNA is copy. Mutations (reversals, transpositions, etc.) occur because of chemical errors. That is the only operation it does really.

      oh god......Microsoft is going to use this for their next generation Office File format!!!!!

      now we will have to pay royalties to reproduce!!!!
    • "Limited to the lab" is right. The title of this article is rather misleading, since it indicates that the answer had one million components, instead of one million choices. It's far less impressive for a DNA computer to pick one right answer out of a million when you consider that's the only advantage that DNA computers are really said to have.

      When I first started reading the article, I thought this was a big breakthrough for security and encryption (knee-jerk reaction, I know). When I saw "million-answer" and "Adleman", I thought he'd found a way to factor an enormous composite number into a million factors. Correct me if I'm wrong, but he is the "A" in RSA, right? I was very quickly disappointed.

      This article is a total letdown and, dare I say it, neither news for nerds, nor stuff that matters. The sheer theoretical limitations of this project demand that this sort of thing will never, ever make it into common and public use until every Joe on the street has at least a BS in biogenetic engineering.

      There are so many constraints on this kind of work that it will only ever be useful to academics with too much free time. Is it interesting? Sure, for some. Is it useful? I don't see how. When every Linux-loving 14-year-old out there has his own little genetics lab in his bedroom next to his homebrew Apache server, maybe this article will start to stir real interest on this site. This subject is so far removed from "real" computing, by which I mean conventional transistors and whatnot, that it's virtually useless to the computing community. Bravo, Dr. Adleman.
    • by Nagash ( 6945 ) on Saturday March 16, 2002 @12:34PM (#3173472)
      I don't think DNA will be viable for most standard computational tasks, or for a practial turing machine.

      No (respected) person in the field of DNA computing thinks that DNA computing will be practical for everyday tasks. It's just too slow. (For that matter, no Turing machine is practical. Ever try to program one?)

      For the record, I (Geoff Wozniak) am a graduate student of Dr. Lila Kari [csd.uwo.ca], a well known member of the field of DNA computing. Incidentally, Lila was involved in the project talked about in the article.

      However, what DNA computing could be useful for in the future is solving problems that can take electronic computers far too long to figure out. Consider the SAT problem that was solved in this article. Suppose we are able to get DNA to solve SAT problems with hundreds of variables. Sure, it might take a week to do it, (maybe even a month), but it sure beats waiting for millions of years.

      Quantum computers, however, could change the whole spectrum. But they are not as far along as "DNA computers" are right now, and I suspect they may take longer to become viable.

      Biological systems don't use DNA to do logical operations (that I know of), and the only thing they use it for is for data storage (instructions for building proteins). The only operations (under normal circumstances) an organism does with DNA is copy. Mutations (reversals, transpositions, etc.) occur because of chemical errors. That is the only operation it does really.

      Biological systems do a lot more than just copy. Look up work by Landweber and Kari on ciliates and gene rearrangement, for starters. In addition to copying, biological systems also do extracting/cutting, filtering, and pasting/annealing.

      You mentioned data storage. Here is where the real benefit of DNA could come into use. The way genes are expressed using only A, C, G, T is quite remarkable. The real advantage of DNA computation lies, imo, in the encoding properties of DNA. The language of DNA has incredible error-detecting/correcting capabilities. Our work is focusing on learning more about this language and using it for the computational process in some way. I/O to DNA would be slow, but if it can store huge quantities of information, it's worth the effort, especially if better ways for long-term storage can be found (of which there is a good chance).

      You have to think outside of the conventional computing process to see why DNA computation is so interesting. The problem is that "computers" and "electronic" seem to be synonymous, which they are most certainly not.

      Woz
    • A computer capable of increasing its memory size by literally multiplying its memory cells - or growing (of course, I would have to feed it!). A computer that will solve tasks in parallel and will increase its solving power by growing more and more CPUs (brain cells, if you will). A computer that has every brain cell connected with every other brain cell (the power is not the number of cells but the number of connections between cells). A computer capable of abstract thinking. A computer aware of its own existence. A computer capable of real language communication. A computer that you can communicate with directly, with no other input system than your own brain. A computer capable of reproducing itself with various mutations that may or may not improve the next computer generation's powers. A computer consisting of trillions of living organisms that combine the computing and storage capacities of every separate organism into one huge organic computer. A computer capable of figuring out who I am without me having to remember any passwords. A computer capable of winning at Go. And finally, me being a part of this enormous computing system, able to know everything worth knowing, all while writing this goddamn web-based application I am currently working on. This is the computer I want.
    • The only operations (under normal circumstances) an organism does with DNA is copy. Mutations (reversals, transpositions, etc.) occur because of chemical errors. That is the only operation it does really

      Actually, we do a lot more than that. How do you think our immune system works? Every immunoglobulin is coded in DNA. Do you think we have DNA to produce an immunoglobulin for every possible molecule? Nope. We rearrange our DNA to produce matching immunoglobulins (i.e., stereochemical gloves that fit a specifically shaped and charged molecule). There may be similar machinery behind our sense of smell.

      It's a lot more complex than that. Just as true neural computing has to do more than just weigh up a series of electrical inputs to a nerve - there are local and systemic hormonal messengers.

      It's easy to make a mistake by interpreting a technology only in terms of your current understanding of technology. See my .sig with regard to this.

      Michael
  • by rtos ( 179649 ) on Saturday March 16, 2002 @09:50AM (#3172972) Homepage
    Perhaps you are wondering what an NP-complete problem is or what this P vs. NP stuff is all about. You might want to check out the comp.theory FAQ [cs.unb.ca] and scroll down to 7. P vs NP. It gives a bit of history and a decent description.

    Or check out The P versus NP Problem [claymath.org] at Clay [claymath.org] for a really good description (unfortunately too long to quote here). And lastly, you might want to check out Tutorial: Does P = NP? [vb-helper.com] at VB Helper for a little more info.

    Ok, but what is it good for? The Compendium of NP Optimization Problems [nada.kth.se] is a great place to look for real world examples of NP problems. Including everything from flower shop scheduling [nada.kth.se] to multiprocessor scheduling [nada.kth.se].

    Hopefully that helps. I was very clueless when it came to P vs. NP stuff that always seems to be mentioned on Slashdot. So I took the time to look it up. Now I'm clueless but I have links to share. :)
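
One way to see the P vs. NP asymmetry in a few lines (a sketch of mine, not from any of the links above): checking a proposed SAT assignment is fast, but the obvious way to find one tries every assignment:

```python
from itertools import product

# Literal convention: positive int i means "variable i is true",
# negative means its negation. A small 3-SAT instance:
formula = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]

def satisfied(assignment, formula):
    """Verifying a candidate answer takes polynomial time..."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)

# ...but the naive way to *find* an answer tries up to 2**n assignments.
n = 3
for bits in product([False, True], repeat=n):
    candidate = {i + 1: bits[i] for i in range(n)}
    if satisfied(candidate, formula):
        print("satisfying assignment:", candidate)
        break
```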

    • I already knew something about P vs NP but some of these links have really non-technical easy explanations.
      I am particularly enjoying the one on the link between Microsoft Minesweeper and NP complete problems
      (Yes I remember when it was reported here, but it only linked to the guys academic paper which was too deep for me)
  • It's amazing the things possible with DNA computers, little tiny ones implanted in the human brain.

    Of course, it won't be long until people are overclocking. Or worse, adding transparent case mods. *shudders*

  • by Baldrson ( 78598 ) on Saturday March 16, 2002 @10:18AM (#3173018) Homepage Journal
    the first time that electronic computers solved a complex problem in the 1960s.

    Although the author may be responding to Seymour Cray's first supercomputers circa 1960 [digitalcentury.com] it is untrue that complex computations weren't being performed electronically until the 1960s.

    The History of Unisys [unisys.com] shows the earliest milestones with the following one almost certainly qualifying as "complex computation":

    1952 UNIVAC makes history by predicting the election of Dwight D. Eisenhower as U.S. president before polls close.

    • That history is quite amusing to read from bottom to top. It shows how highly the Unisys marketroids rate their own importance.

      So we have in 1946 'ENIAC developed', and in 1997, with about the same amount of emphasis: 'Lawrence A. Weinbach named chairman, president, and CEO. Unisys Windows NT servers lead industry in price/performance.'.
  • by isenguard ( 14308 ) on Saturday March 16, 2002 @10:18AM (#3173022) Homepage
    According to Adleman and co-workers, their demonstration represents a watershed in DNA computation comparable with the first time that electronic computers solved a complex problem in the 1960s.

    The description of the problem they are solving corresponds to a 3-SAT (propositional satisfiability with clauses of length 3) instance. In 1962 Davis, Logemann and Loveland published a paper entitled "A Machine Program for Theorem-Proving", in which they described a computer program which could solve SAT problems of a similar size, extending earlier work by Davis and others published in 1960. (You can read the paper in Communications of the ACM if you have a library that goes back that far.) So it looks like their comparison is correct.

    The method they are using for the DNA computer is rather crude compared to that proposed by Davis et al, whose procedure is still in use today for solving SAT problems. We can now solve problems with thousands of variables, and actually do useful things in the process (e.g. verify hardware specifications).
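
For the curious, the Davis-Logemann-Loveland procedure mentioned above can be sketched in a few lines (a minimal, unoptimized version of my own; real SAT solvers layer clever heuristics on top of this skeleton):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch: simplify, unit-propagate, then split.
    `clauses` is an iterable of collections of int literals (positive
    int i = variable i true, negative = false). Returns a satisfying
    {var: bool} dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        remaining, is_sat = set(), False
        for lit in clause:
            value = assignment.get(abs(lit))
            if value is None:
                remaining.add(lit)
            elif (lit > 0) == value:
                is_sat = True
                break
        if is_sat:
            continue
        if not remaining:
            return None                      # empty clause: conflict
        simplified.append(remaining)
    if not simplified:
        return assignment                    # every clause satisfied
    for clause in simplified:                # unit propagation
        if len(clause) == 1:
            lit = next(iter(clause))
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    lit = next(iter(simplified[0]))          # split on some variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None

print(dpll([(1, -2), (-1, 2), (2,)]))        # -> {2: True, 1: True}
```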

  • The A in RSA (Score:1, Informative)

    by Anonymous Coward
    This is Leonard Adleman .. the A in RSA. Difficult to tell from his homepage unless you dig. I guess he keeps a low profile.

    Ironic: by showing how to do computation with DNA, he may be undermining RSA!
  • by lars ( 72 )
    Imagine a Beowulf cluster of these!

    But seriously, I imagine these things must be pretty slow, since they are powered by chemical reactions. I don't foresee one of these babies supplanting my desktop any time soon. Of course, the whole point is that this shows the power of genetics, and it is amazing in that sense, to think that life is basically created by computer programs. Fortunately Microsoft hasn't gotten into the DNA software market yet; I'd hate to have bugs growing out of my skin and stuff. :)
    • I imagine these things must be pretty slow, since they are powered by chemical reactions.

      DNA base pairs are not held together by covalent chemical bonds - that would make them really, really difficult to separate. They're held together by hydrogen bonds. The breaking and formation of hydrogen bonds takes on the order of picoseconds (it's the same timescale as rotations in the medium, since that's the basic mechanism for their breaking). Furthermore, they can break and form in parallel.

      You can read more about the mechanism for base pairing here [utexas.edu] or if you want to see google render the pdf here [google.com].

    • That's like saying that the brain is slow because it's based on chemical reactions...

      The whole point of this "DNA computer" is that it's massively parallel - it evaluates all possible solutions in parallel. Increase the size of your problem .. no biggie - just add another pint of "computer".
      • That's like saying the brain is slow because it's based on chemical reactions...

        Yes, exactly -- what would you use if you wanted to factor a 256-bit number, your brain or a computer? Similarly, this technology isn't going to be useful for solving general computational problems. I imagine it would be very useful for controlling biological systems, but it's not something I'm going to pull out when I want to do some number crunching! That's my point.

        it evaluates all possible solutions in parallel.

        Which means it's limited by the number of solutions you can represent in parallel. You're not going to be able to use this to evaluate 2^1024 possible solutions in parallel, unless there's funky quantum mechanics stuff involved. There probably aren't that many particles in the universe.

        Increase the size of your problem .. no biggie - just add another pint of "computer".

        Not exactly. For a problem like 3-SAT (the problem they solved here), a linear increase in the size of the problem input = an exponential increase in the number of solutions they need to evaluate. If they doubled the input size from 20 variables to 40, they'd need to evaluate 1 TRILLION solutions in parallel instead of 1 million. So they'd need the equivalent of 1 million of these DNA computers running in parallel. I'd consider that more than "just another pint".
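
The arithmetic above is easy to check with Python's exact integers (digit counts rather than scientific notation, since a number like 2^1024 overflows a float):

```python
# Check the scaling claim: an n-variable SAT instance has 2**n candidate
# assignments, so a linear bump in n blows up the pool exponentially.
for n in (20, 40, 1024):
    print(f"{n:>4} variables -> {len(str(2 ** n))}-digit candidate count")

# Going from 20 to 40 variables multiplies the pool by 2**20 (~a million).
assert 2 ** 40 // 2 ** 20 == 1_048_576
```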

  • What if a byproduct of solving these problems becomes a monster (some cells actually incorporated the problem DNA)? Wow!
    • Odds are it would do nothing. Without a promoter, a strand of DNA isn't transcribed and translated into protein, and therefore does nothing. The odds of it being inserted into a coding sequence are less than one in ten, and even if it were, the odds of it producing some sort of constructive mutation that would do anything interesting are vanishingly small.
  • It seems to me that the amount of preparation that went into constructing the long strands and each of the truth sets is extensive. For this type of computing to become useful, it would appear to me that this construction of the parts necessary to carry out the computation would need significant work. I am very, very hopeful that DNA based computing as well as quantum molecular based computing will begin to make rapid gains in the very near future. The potential to astronomically increase the available computing power is there, I've read all the theories. It just needs to be made to happen :)

    Just imagine ... quantum computing devices become common, as do DNA based computing devices - each in their own niche, possibly (neither device may be promising as a 'general computing' device, but used in conjunction they may complement each other) and the article previously posted about table-top fusion from the collapse of bubbles provides us with practically limitless and clean energy to drive the energy needs of all our computing devices. I can't wait :)

    Add to that the possibilities for human augmentation by cybernetic implants and I feel that the day of Gibson's Neuromancer may be soon approaching. As the geek I am, I can't wait :)
    • Each of the possible 1 048 576 solutions were then represented by much longer strands of specially encoded DNA, which Adleman's team added to the first cell.

      Granted I'm not a bioengineer, but it's very difficult for me to believe that DNA will ever solve a problem faster than a computer can.

      For this problem, millions of DNA strands were generated. I guarantee you the researchers didn't do it by hand. In the time the computer was instructing the hardware on how to make any given solution, it could have checked a thousand solutions. And the speed is accelerating.

      Maybe, just maybe, if the strands could be generated automatically by a gene-gen'ed life form, except for the query strand, this could keep up with conventional silicon. But that's a tough nut itself.

      I'm more intrigued by the QM computers... and I don't really believe in them, either.

      (Biological computing may someday take off. But I don't think it's going to involve DNA. DNA is not meant for computing in the sense most of us find interesting. The gymnastics necessary to answer this simple question demonstrates that, and unlike conventional computers, where there has always been a clear path of progress, exactly what part of this computation is going to be streamlined? Millions upon millions of unique strands must be generated; what's the equivalent of shrinking the transistor by a factor of 2? I'm not holding my breath.)
  • This is interesting, but would it have worked at all if the scientists did not know the answer already? I mean, they set up all the variables, and then they set up all the possible answers, and the DNA just matched up with the one that they already knew to be correct. What if they didn't know which one was correct? What if there were a billion possible answers? Would they have to program them all in? At what point is it actually faster to use a computationally inefficient conventional PC instead because the compiler does the brunt work for you? Is this DNA method ever faster?

    Obviously, there are a lot of questions to be asked about this, but it will be very interesting to see what happens when somebody develops some kind of compiler for the DNA computer, and you can just input an equation and some parameters and in several minutes you have the answer... But until then, this doesn't seem all that useful. On the other hand, it is fascinating research, and by all means, they should continue research on it.
  • 42 (Score:2, Funny)

    by edoug ( 66662 )
    As a testament to the parallel computing power of DNA computers, an entire planet of DNA computers has produced the answer to the universe. However, now we must wonder what question this answer applies to....

  • The details are a repost, this technology is "old" as in they did this same problem with travelling salesmen 2 years ago. Details were posted in SCIAM.com
  • The following are results on 100 variable, 460 clause, "hard" random 3-SAT problems running on a PIII 750. The results are averaged over 100 repetitions of 1000 variables (some solvers are stochastic which is why multiple runs are used). The result shown is the average number of seconds required to solve a single problem once.
    Note, these problems are 5 times larger than the one solved by the DNA computer and likely much harder (the DNA one sounds very underconstrained).
    Both complete and incomplete solvers are shown:
    DLM 0.00406513
    Novelty 0.00560927
    Novelty+ 0.00325692
    WalkSAT 0.00759083
    Satz 0.00557251
    zChaff 0.00800162

    Note: Things don't really get interesting until you get up over 500 variables for hard, random 3-SAT.
  • ...that DNA computation offers no theoretical advantage over Turing machines. That is, problems that are thought to take an exponentially increasing number of steps on a regular computer still do with the DNA computation model. The DNA method requires that every combination be exhaustively generated, and therefore the number of molecules required grows exponentially with the input size. Of course, the DNA method offers some practical advantages over today's computers (when it comes to this type of combinatorial problem.) The computational elements (DNA molecules) are very small and can "swim around" in three dimensions to make many more combinatorial "decisions" than could be hardwired onto a piece of silicon today.
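
The generate-then-filter model described above can be mimicked in software (my own sketch; the difference is that the test tube does the filtering massively in parallel):

```python
from itertools import product

def generate_and_filter(n_vars, clauses):
    """Software mimicry of the DNA approach: materialize every candidate
    assignment (one 'strand' each), then discard any that violates a
    clause. Time and space are exponential in n_vars, just as the
    molecule count is in the test tube."""
    survivors = [dict(enumerate(bits, start=1))
                 for bits in product([False, True], repeat=n_vars)]
    for clause in clauses:
        survivors = [a for a in survivors
                     if any((lit > 0) == a[abs(lit)] for lit in clause)]
    return survivors

# 3 variables -> 8 strands; each clause prunes the surviving pool.
pool = generate_and_filter(3, [(1, 2, 3), (-1, -2, -3)])
print(len(pool))  # 6 of the 8 assignments survive both clauses
```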
  • This is not exactly news. The second edition of "Applied Cryptography" by Bruce Schneier already mentions DNA Computing in the discussion of key size.

    According to Schneier, Adleman was able to solve a directed Hamiltonian path problem with 7 vertices as early as 1994. It is interesting that it took 8 years to get from 7 vertices to 20 variables. The difficulties seem to be enormous.

    Although this seems to be excellent research, I still doubt that this is significant for solving real-world problems. After all, this is a brute-force search (although a massively parallel one). And there are physical limits to consider: there are ~10^50 atoms on this planet, but that is only on the order of 42! (read: the factorial of 42). So 42 may not be the answer after all, but the size of the problem...

    Modern algorithms for discrete optimisation can do much better than this. Travelling Salesman Problems in the order of several thousand cities have already been solved.
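
The atom-count comparison a couple of paragraphs up checks out, and Python's exact integers make it easy to verify:

```python
from math import factorial

# Sanity-check the estimate: ~10**50 atoms on Earth versus the number
# of orderings of a mere 42 items.
atoms_on_earth = 10 ** 50    # the rough figure quoted in the comment
print(factorial(42) > atoms_on_earth)   # brute force runs out of matter
print(len(str(factorial(42))))          # 42! is a 52-digit number
```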

  • I remember reading an article in Dr. Dobb's Journal from around 1993 about how Adleman used DNA computation to solve an instance of the travelling salesman problem. How is this work really revolutionary, then, aside from demonstrating the use of molecular computation techniques on a more complex problem? As far as I can tell this isn't 'news' so much as something that has already been done, published, and publicized before, by the same physicist/mathematician/biochemist.

  • by areiosoltani ( 566922 ) on Saturday March 16, 2002 @04:01PM (#3174366)
    I don't think DNA will be viable for most standard computational tasks, or for a practial turing machine. Biological systems don't use DNA to do logical operations (that I know of), and the only thing they use it for is for data storage (instructions for building proteins). The only operations (under normal circumstances) an organism does with DNA is copy. Mutations (reversals, transpositions, etc.) occur because of chemical errors. That is the only operation it does really.

    this is not entirely true. nucleic acids are responsible for quite a few things in the cell. yes, DNA is not very reactive, designed to be a stable archival form of genetic material, but RNA (ribonucleic acid) is a different story. derived from DNA, it has a 2' hydroxyl (-OH) group on its sugar (hence the name ribo- instead of deoxyribo-, which has just a hydrogen (-H) at the 2' position) and is much more reactive, enabling cleavage, ligation, and other enzymatic modifications. there are programmed errors and very regular processes in cells, things like SOS DNA repair, nonhomologous end-joining, and crossing over, that can result in modification. there is so much here to study it can make your head spin! so don't count any of it out =)
  • There is one serious flaw in DNA computing that people are sweeping under the rug, and that's that although the amount of time needed to solve the problem does not increase exponentially, the amount of DNA does. Off the top of my head, I don't know the amount of DNA needed to perform a given calculation, but one of my professors who works on this sort of thing showed us once that a calculation that could be done in a reasonable amount of time on a standard desktop computer would require more DNA than there is on earth. I'm sure there are ways to increase the efficiency of the process, but this is still a fundamental limitation of this type of computing.
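
To put rough numbers on the exponential-DNA problem (all constants here are my own ballpark assumptions, not from the professor's calculation):

```python
# Ballpark the mass of DNA needed for exhaustive search (assumptions:
# one strand per candidate solution, 50 bases per strand, ~325 g/mol
# per nucleotide; none of these figures is from the article).
AVOGADRO = 6.022e23
GRAMS_PER_MOL_BASE = 325.0
BASES_PER_STRAND = 50

def grams_of_dna(n_vars):
    """Grams of DNA to represent all 2**n_vars candidate assignments."""
    strands = 2.0 ** n_vars
    return strands * BASES_PER_STRAND * GRAMS_PER_MOL_BASE / AVOGADRO

for n in (20, 40, 80, 120):
    print(f"{n:>3} variables: {grams_of_dna(n):9.2e} g")
# 20 variables needs femtograms; 120 needs tens of billions of tonnes.
```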
  • What would happen if NP-complete problems were solved?

    Many responses (IIRC) were regarding encryption.

    Several days after that article I ran across something that suggested the potential benefits would far outweigh the risks - something regarding space, as in outer space, and our safety in it.

    Anyone have links to such information? I seem to have forgotten it and lost the link from my browser cache.
  • Printable version of the article found here [howstuffworks.com].

    (This is a good starting place if you want to know the basics.)
