3D Microfluid Computers Used To Solve NP Problems
Sergio Lucero writes "The Proceedings of the National Academy of Sciences (PNAS) just published an article on the use of 3D microfluidic networks as computers for the solution of very hard (NP) mathematical problems. So far it looks like they've solved a specific case of the maximum clique problem, but of course this also means they can do other stuff."
Dear moderators: (Score:1)
Re:"Other Stuff" (Score:1)
I believe this applies to the class of NP-complete problems, not NP problems.
Re:Paid subscriptions only! (Score:1)
Here's the abstract and the first couple of paras of the introduction, for those who can't get access to the article. (I'll get into shit if I set up a mirror.)
Using three-dimensional microfluidic networks for solving computationally hard problems
Daniel T. Chiu, Elena Pezzoli, Hongkai Wu, Abraham D. Stroock, and George M. Whitesides
Abstract
This paper describes the design of a parallel algorithm that uses moving fluids in a three-dimensional microfluidic system to solve a nondeterministically polynomial complete problem (the maximal clique problem) in polynomial time. This algorithm relies on (i) parallel fabrication of the microfluidic system, (ii) parallel searching of all potential solutions by using fluid flow, and (iii) parallel optical readout of all solutions. This algorithm was implemented to solve the maximal clique problem for a simple graph with six vertices. The successful implementation of this algorithm to compute solutions for small-size graphs with fluids in microchannels is not useful, per se, but does suggest broader application for microfluidics in computation and control.
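The exhaustive search the abstract describes running in parallel can be sketched sequentially. A minimal sketch in Python, assuming a made-up 6-vertex graph (not the graph from the paper):

```python
from itertools import combinations

# Hypothetical 6-vertex graph (vertices 0-5) as an undirected edge set.
# This is NOT the graph from the paper, just an illustration.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}

def is_clique(vertices):
    # A clique requires every pair of its vertices to be joined.
    return all((a, b) in edges or (b, a) in edges
               for a, b in combinations(vertices, 2))

# Examine all 2^6 = 64 vertex subsets -- the search the fluidic
# network performs in parallel, done here one subset at a time.
best = max((s for r in range(7) for s in combinations(range(6), r)
            if is_clique(s)), key=len)
print(best)  # (0, 1, 2)
```

The fluidic device trades this exponential loop for an exponential amount of hardware, which is exactly the weakness the paper's conclusion admits.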
Introduction
Intuitively, a problem is in the class NP (i) if it can be solved only by an algorithm that searches through exponentially many candidate solutions, and (ii) when the correct solution has been identified, verification of the solution can be performed easily (ref. 4). No feasible sequential algorithm is known to solve NP problems. If one can, however, search all candidate solutions simultaneously in parallel, solution of NP problems becomes feasible. The potential for parallel searches is the foundation for interest in DNA-based computation (refs. 5-8) and in part for interest in quantum computation (refs. 9-12). In addition to sequence-specific recognition (the basis of DNA computation) and manipulation and resolution of entangled quantum states (the basis for quantum computation), other physical phenomena or systems might also be suited for solving computational problems. In this paper, we demonstrate the use of moving fluids in a microfluidic system to solve an NP complete problem.
Microfluidic systems are providing the basis for new types of rapid chemical and biological analyses (refs. 1-3). With the increasing complexity of microfluidic systems, there is also a growing need to carry out relatively complex processes (logic operations, fluidic control, and detection) on the chip. Although these processes can be (and usually are) controlled by electronic systems, it would be more natural and compatible to use the same medium (that is, a fluid) to carry out some or all elements of the required computation and control/decision making. Here, we explore the potential of microfluidic systems in computation by using a representative "hard" problem: a parallel solution of a nondeterministically polynomial (NP) complete problem [the maximal clique problem (MCP)] for a graph with six vertices. The method we describe uses moving fluids to search the parallel branches of a three-dimensional (3D) microfluidic system simultaneously, exploits parallel fabrication of relief patterns by photolithography, and detects solutions in parallel with fluorescence imaging.
Re:Nothing new... (Score:1)
Local minima my ass. Jesus says not to be greedy, so the United States government should go and beat up anybody that doesn't listen to him. Blow up Iraq, blow up Microsoft. Praise Jesus!
Re:Before a million people get this wrong... (Score:4)
Your definition of NP is correct, provided you mention that the verification takes place on a deterministic turing machine, and that verification is done using a polynomial size certificate. If you give me a computational problem X, then I can show you it is in NP by (1) demonstrating that there is a way to express the proper solution x' to an instance x of X using O(f(|x|)) characters, where f(|x|) is a polynomial function, and (2) that given both the instance x and the solution x', you are able to verify using a DTM that x' really is a solution to x in polynomial time.
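A concrete (if toy) illustration of that certificate idea, using SUBSET-SUM, a standard NP-complete problem; the instance below is made up:

```python
# SUBSET-SUM instance: does some subset of `numbers` sum to `target`?
# Finding a subset may take exponential search, but checking a claimed
# one (the certificate: a list of indices) takes linear time.
numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(certificate):
    return (len(set(certificate)) == len(certificate)            # no repeats
            and all(0 <= i < len(numbers) for i in certificate)  # valid indices
            and sum(numbers[i] for i in certificate) == target)

print(verify([2, 4]))  # True: 4 + 5 = 9
print(verify([0, 1]))  # False: 3 + 34 = 37
```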
Your definition of P is correct. You are also correct in saying that P is a subset of NP. Mathematicians are still uncertain whether P is a proper subset of NP -- this is a huge open problem.
Let's tackle your definition of NP-complete next. The correct definition of an NP-complete problem is that it is both a member of NP *and* a member of NP-hard. Note that this does *NOT* mean that NP hard problems cannot be solved in P time. That is a BIG open question, and if you have a proof for it, I'd love to see it.
A problem is defined to be NP-hard if it is polynomial time reducible (on a DTM) to Circuit-SAT. This is the very famous Cook-Levin theorem. (The theorem also demonstrates that Circuit-SAT happens to be in NP as well, and is hence NP-complete. But that isn't the main point of the theorem -- something people often miss.) Generally when people define the NP-hard set, they say "any problem which is polynomial time reducible to another problem in NP-hard" but of course that is problematic in that it is a circular definition. The genius of the Cook-Levin theorem is that it gave us a base problem for defining the set, and gives us a very illuminating method for looking at the meaning of NP-hard.
Okay. Now that we've got that squared away, I'd like to address two final inaccuracies that I hear a lot. Myth number one: if you can efficiently factorize large integers, you've broken today's best encryption schemes. This is not true. In fact, you must solve the *discrete logarithm* problem in polynomial time in order to break today's encryption schemes. The reason that integer factorization is often substituted here is that usually when you've solved this, you've also solved the discrete logarithm problem. But *not always*. (Witness the L3 algorithm or other lattice reduction algorithms.)
Myth number two: if you have solved integer factorization (or discrete logarithm) then you've proven that P=NP. This is *not* necessarily true. Unfortunately, computer scientists have never been able to classify these problems in the strict P/NP/EXPTIME hierarchy. It is *not* known if either of these problems is NP-complete (or even weaker, simply NP-hard.)
A final variant and twist on both these myths: if you can show that P=NP, then today's modern encryption schemes fail. This is only partially true. There are some schemes which rely on the computational intractability of NP problems. These schemes would break. However, there are other schemes (such as RSA, and one I just heard about called NTRU) which rely on problems which are *not* understood to be NP-complete (discrete logarithm, and in NTRU's case I think it is change of algebraic rings, which I don't understand so well). These encryption schemes would *still stand*, though most mathematicians would agree that they fall on shakier ground.
Okay, that's my theory lesson for today. I hope everyone has found this useful. If anyone has spotted inaccuracies or blatant lies, please let me know.
love,
the anonymous theory coward
Re:first things first... (Score:2)
Some things better left unsolved (Score:4)
If humans solve everything, then we'll grow lazy and ungrateful. Goedel showed that there are infinitely many unsolvable problems in the world (and Turing showed there are infinitely many uncomputable ones), but there's no guarantee that this infinity of problems consists of interesting problems. In fact, they might all be dull ones! And where would that leave us?
Mathematics is where it is today because bygone generations left problems for us to solve. They may have wanted to solve them all, but for whatever reason, they were unable. Are we really better than they? Doesn't conservatism dictate that we should look to our nation's history and traditions when determining what current approach to take?
Where will we be in twenty years if all the interesting mathematical problems are solved? There'll be anarchy in the streets. Unemployed mathematicians and computer scientists will be roaming the nation's highways, raping our women and defiling our basilicas.
I'm not so sure this is a good thing. Certain things man was not meant to know. Certain things are beyond our comprehension by design. If we seek this path, we may bring down divine vengeance upon the National Academy of Sciences and all who are complicit in their proceedings. It is a mark of human hubris, and Xenu won't look favorably upon it.
Friends don't let friends solve NP problems.
Re:And in software news (Score:1)
#define X(x,y) x##y
Re:Nope (Score:1)
NP stands for non-deterministic polynomial, which means that you can solve the problem in polynomial time, but with a non-deterministic machine. Another way to say it is that, given a supposed solution to the problem, you can test whether it really is a solution in polynomial time with a deterministic machine.
Re:Paid subscriptions only! (Score:2)
Best solution not guaranteed (Score:2)
-
Re:No, he gets the cigar (Score:2)
Quite true, also. I was being careful to state the Euclidean nature of the method he was describing, because there are, as you undoubtedly know, other graph problems where it does make quite a difference.
Nice try, but no cigar (Score:4)
The difference between a problem with a straightforward polynomial-time solution and an NP-hard problem is often very small. Be careful!
Re:Some things better left unsolved (Score:1)
NP-hard (Score:1)
NP-complete: NP and NP-hard.
There DO exist NP-hard problems which are not in NP. In fact, there exists an infinite ascending tower of classes that extend beyond NP (all of which are NP-hard problems.)
From the conclusion (Score:1)
Excerpts from the conclusion (emphasis mine):
"The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices."
Re:Before a million people get this wrong... (Score:1)
Myth number one: if you can efficiently factorize large integers, you've broken today's best encryption schemes. This is not true. In fact, you must solve the *discrete logarithm* problem in polynomial time in order to break today's encryption schemes.
The RSA cryptosystem relies on factorization being a hard problem. If you find an efficient algorithm for the factorization problem, you have broken RSA. RSA may not be the best cryptosystem, but it is widely used.
From the RSA Laboratories FAQ 4.0, Question 3.1.3: The most damaging would be for an attacker to discover the private key corresponding to a given public key; this would enable the attacker both to read all messages encrypted with the public key and to forge signatures. The obvious way to do this attack is to factor the public modulus, n, into its two prime factors, p and q. From p, q, and e, the public exponent, the attacker can easily get d, the private exponent. The hard part is factoring n; the security of RSA depends on factoring being difficult.
Re:"Other Stuff" (Score:1)
Re:Nothing new... (Score:1)
Of course there are practical difficulties using these types of analog computer as you scale up, but I don't know how much serious effort has been taken to look for analog approaches that would scale.
Re:Nothing new... (Score:1)
Quantum may well be the practical way to go for NP complete problems, and I believe the general approach is the same as the DNA method you linked to - you figure out how to make your quantum system simultaneously represent all solutions to the problem, then reject the ones you don't want - pretty much the exact opposite of conventional methods.
Re:Nothing new... (Score:1)
O(X) only applies to stepwise problem solving as done by digital computers (and Turing machines). If you can use an analog/parallel approach, then you have circumvented the complexity entirely.
See the DNA and spaghetti examples in this thread, or my comment about quantum solutions.
Re:Nothing new... (Score:1)
Not necessarily. You can consider the analog soap bubble computer as a parallel solution, since it is "computing" the solution, but the DNA/quantum computer approaches are radically different... With that paradigm you're not relying on the solution method being parallelizable because you're not trying to derive the solution!.. instead you're assuming that you can create a non-computed set that contains the solution, then remove the non-solutions. It puts the emphasis on verification rather than solution generation, and regardless of the problem you can always represent multiple potential solutions in parallel.
Re:Isn't this... (Score:2)
Nothing new... (Score:5)
Want to find the shortest set of roads to build to interconnect a bunch of cities? You need a couple of pieces of glass, some round plastic rod and some soapy water...
Cut the rod into 1" lengths, so that you have one per city, then glue them as separators between the two pieces of glass so that the positions of the rods represent the locations of the cities. Dip the completed "computer" into soapy water, and let surface tension and energy minimization do its job - the soap bubbles that form between the rods will have edges that join where you should build your roads.
Isn't this... (Score:2)
Boy, this sounds exciting. Too bad it'll probably turn out to be nothing so cool if I get to read the full article...
-grendel drago
Computer size? (Score:2)
If you are allowed to limit the problem size, then you can solve it on a normal computer. All you need to do is make the computer big enough. For instance, you could make a travelling-salesman computer out of a circuit that builds all possible paths in parallel and then uses pairwise comparison to find the lowest-cost path.
This assumes the circuit is large enough to hold the whole search space, which is the fatal flaw. Classification of a problem as NP-complete is based on its asymptotic complexity: how the computation time grows as the problem size grows without bound. Put a bound on the problem size, and you haven't solved anything.
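Counting candidate tours makes that asymptotic wall concrete; `tour_count` below is just an illustrative helper, not anything from the article:

```python
import math

# Number of distinct round trips through n cities: fix the start city
# and divide by 2 for direction, giving (n-1)!/2.
def tour_count(n):
    return math.factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tour_count(n))
# 5 cities give only 12 tours, but 20 cities give ~6.1e16 -- any
# fixed-size circuit that enumerates tours in hardware is exhausted
# almost immediately as n grows.
```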
It seems to me that the microfluid computer has the same flaw.
Does anyone know more about this? Please straighten me out.
--
Patrick Doyle
Re:Before a million people get this wrong... (Score:3)
A problem X is in the set NP-hard if all problems in NP can be transformed into X in polynomial time. Thus, every problem in NP-hard is at least as hard as every problem in NP.
A problem is in NP-complete if it is in NP and NP-hard. Thus, NP-complete is a "representative" set of NP problems, such that solving one of these in polynomial time would also mean solving all other NP problems.
Here [ic.ac.uk] is the FOLDOC [ic.ac.uk] entry for NP-hard. It explains all this.
--
Patrick Doyle
Modeling (Score:1)
The 'DNA Computers' we've been making such a fuss over lately, are essentially just modeling again. If you can sift well, and your modeling parameters are correct, you can run a lot (10^38 anyone?) in a few minutes. (Of course, it may take you weeks or even years to set it up correctly, and the old GIGO rule still applies.)
This seems to be just modeling a class of hard computing problems, and letting reality show us what the answer is. Sometimes, analog is best.
Re:Nothing new... (Score:1)
A long time ago, I concluded that the biggest difference between a communist-style centralized economy and an end-game capitalistic economy is how obvious it is who's pulling the strings. I've come across precious little that would cause me to change my mind.
--
Re:Some things better left unsolved (Score:1)
This has nothing to do with undecidability, which deals with functions (or sets), not statements. An undecidable function is one that no machine can be built to calculate (an undecidable set is one with no decidable function that determines membership of the set).
Also, you forget that when trying to prove a statement, there are more options than proving it is true or proving it is false. You can also prove that it can't be proven either way. This has not been done with the P=NP problem.
Re:Before a million people get this wrong... (Score:2)
ALL key-based encryption is in NP (it is easy to verify a suggested key). So it matters little whether it is based on factorisation, points on an elliptic curve, rotor positions, or S-boxes.
BTW I do believe that while nobody knows whether FACTOR is in P, it is commonly held NOT to be NP COMPLETE.
I've GOT it! (Score:1)
What we want to show is that P = NP, right? Simple! P = NP <=> P/P = NP/P <=> 1 = N. There you have it!
Re:Paid subscriptions only! (Score:2)
I couldn't read the article (paid subscription), but this excerpt looks like it's still brute force. They were only able to solve in P time by using parallelism.
Does this really count? I'm curious.
Re:Before a million people get this wrong... (Score:1)
Link to proprietary (paid suscription) page? (Score:1)
IT DOESN'T WORK! (Score:1)
You are very correct.... (Score:1)
For this problem picture a three city setup, where each city is the vertex of a triangle. Then the solution to this problem is to have each city connect a road towards a fourth, unknown node somewhere in the middle of the triangle. The NP-edness of this problem comes from the fact that you do not know where the fourth node belongs, and the only way to prove where it belongs is to test all available placements.
The soap bubble will always yield the optimal solution to this because (as you state) of surface tension and energy minimization.
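One way to compute the point the soap film settles on is Weiszfeld's iteration for the geometric median (the Fermat point of the triangle); a sketch with made-up city coordinates:

```python
import math

# Cities at the corners of a triangle (coordinates made up).
cities = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def fermat_point(points, iters=200):
    # Start at the centroid, then repeatedly take the distance-weighted
    # average of the cities (Weiszfeld's iteration).
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d == 0:
                return px, py  # landed exactly on a city
            wsum += 1 / d
            wx += px / d
            wy += py / d
        x, y = wx / wsum, wy / wsum
    return x, y

fx, fy = fermat_point(cities)
total = sum(math.hypot(fx - px, fy - py) for px, py in cities)
print(round(total, 3))  # 6.464: total road length through the hub
```

When all three angles of the triangle are under 120 degrees, routing roads through this hub beats using any city as the junction.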
--
"A mind is a horrible thing to waste. But a mime...
It feels wonderful wasting those fsckers."
Re:Before a million people get this wrong... (Score:1)
Re:Dear moderators: (Score:2)
Now, this post is Offtopic.
Of course, I could throw in a comment about the article to change that, but I can't seem to actually see the article.
You have requested an item that requires a subscription to PNAS Online.
Free, shorter version of the article (Score:3)
if you trust me [pnas.org]
Oops... (Score:2)
Before a million people get this wrong... (Score:4)
NP: A problem which can be *verified* in polynomial time. This even includes constant-time problems, like: is 5*6=42?
P: problems in NP that can be solved in polynomial time (ie quickly)
NP-hard: those problems in NP that can't be solved in P-time. This is probably what all Slashdotters mean when they say "NP".
NP-complete: a class of problems that all other problems in NP reduce to. One of the biggest open questions in computer science is determining if any NP-complete problem does not reduce to any problem in P (then any NP-hard problem would have an easy solution...)
Pseudo-computer science (Score:3)
Graph Isomorphism Factoring? (Score:1)
One problem I see with trying to show them equivalent is that one problem takes two graphs, and the other takes a number and an upper bound on the size of the factor (or just a number if you are checking for primality). This makes it difficult to imagine a reduction.
Thoughts?
Re:Nothing new... (Score:3)
I thought that soap can only guarantee a local minimum [curvefit.com] for travelling salesman, not necessarily the global shortest path. Also, if the problem is sufficiently complicated (millions or more nodes, multiple dimensions, etc) it becomes prohibitive to construct a physical representation.
That said, it's likely that organic [warwick.ac.uk] or quantum solutions are the right approach due to their built-in massive parallelisms.
I disagree on a fine point... (Score:1)
No, in order to solve the classic TSP, you need to make it the shortest round trip, not be cheap on the construction materials! To really do a TSP, you need a permutation generator(got one!), and start trying solutions, prune larger nodes off, and skip, and you'll finally have it.
Granted, it's messy, long, and not too elegant (program the PG to skip bad sequences..), but it's the only way I've found to get there. If the soap array generated the shortest routes, instead of optimized materials, I'd agree/apologise. Still, alternative resources must always be investigated, you never know what's out there. =)
ya whatever (Score:1)
-Jon
i wish i had a link so i could get a +1...
Streamripper [sourceforge.net]
A.K. Dewdney & Comp Recreations (Score:1)
Check out his book "The New Turing Omnibus" for further examples. Incidentally, I recommend this book as one that should be in every hacker's library.
"Other Stuff" (Score:1)
Yes. In fact it means that they can do every single other NP problem. If you can do one, you can do them all. All you have to pay is a measly polynomial time conversion cost. (Of course this polynomial could have HUGE constants in it, but theoretical CS people only care about the O() of a problem.)
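One classic example of such a conversion, with a toy graph made up for illustration: INDEPENDENT-SET reduces to CLIQUE by complementing the graph, at polynomial cost.

```python
from itertools import combinations

def complement(n, edges):
    # An independent set in G is exactly a clique in G's complement,
    # so a CLIQUE solver answers INDEPENDENT-SET questions after this
    # O(n^2) conversion -- the "measly polynomial" cost.
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

# Toy 4-vertex graph: the path 0-1-2-3.
h_edges = complement(4, [(0, 1), (1, 2), (2, 3)])
print(sorted(h_edges))  # [(0, 2), (0, 3), (1, 3)]
```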
Doesn't seem to be a Turing machine.. (Score:1)
Re:Before a million people get this wrong... (Score:1)
It's the other way round: A problem is NP-hard if SAT can be reduced to it.
Re:Nothing new... (Score:1)
--
Peace,
Lord Omlette
ICQ# 77863057
oops, i posted too quickly :( (Score:1)
--
Peace,
Lord Omlette
ICQ# 77863057
Re:Isn't this... (Score:2)
For instance, an iterative loop counting numbers from 1 to some limit is not NP - you know the maths to work it out, so given the loop count, you can bypass the whole iteration process (eg. counting the numbers 1 to 10: the total is (1 + 10) * (10/2) = 55).
But an iterative loop to find the value for which x = sin(x) _is_ NP (as far as I can remember from my fairly basic maths) - you can't solve this any way other than iteratively. The only way to solve it is to take the sine of a value, see what the error is (either side), add on a correction factor and try again, and repeat this process until you've got the result to the accuracy you want. And there isn't any equation you can use to short-cut this process - if you want the result, you just have to keep working through it.
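The guess-check-correct loop described here is a fixed-point iteration. A sketch in Python; note that x = sin(x) has only the trivial root x = 0, so x = cos(x) (which has a nontrivial fixed point near 0.739) is substituted for a more interesting run:

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:  # error small enough: done
            return nxt
        x = nxt                 # otherwise correct and try again
    return x

root = fixed_point(math.cos, 1.0)
print(round(root, 6))  # 0.739085, the Dottie number
```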
Grab.
Re:Nothing new... (Score:1)
The trick here lies in the fact that analog methods inherently support parallel computations.
And what is even more important, analog parallel computations scale easily - their parallelism expands infinitely to meet your needs. Well, almost infinitely :)
Re:Some things better left unsolved (Score:1)
Not enough space in the margin?
Re:Nothing new... (Score:2)
Soap bubbles solve local minimization problems... the whole trouble with NP complete problems is that local optimization doesn't seem to help with the global optimization problem. I.e. nothing better than brute-force works.
Re:Nothing new... (Score:1)
Re:Nothing new... (Score:2)
Isn't it rather the Steiner tree problem? The difference from the minimum spanning tree problem is that we're allowed to select intermediate points to reduce the cost of the tree. And the Steiner tree problem actually is NP-hard.
To answer both replies (Score:1)
You're funny (Score:1)
Nope (Score:2)
I tell you what (Score:2)
With all due respect (Score:4)
ACM (Association of Computing Machinery) charges hundreds of dollars in professional dues for membership, but it's money well spent if you are seriously interested in the profession or, in many cases, NEED truly up to date information.
I don't see any problem with pointing to an article on NAS (proceedings are proceedings, people)... Just don't point to one on Tabloidfor$50.com
Re:No, he gets the cigar (Score:1)
But what about minesweeper (Score:1)
Also, it will never validate the educational postulate that the famous philosopher Wesley Snipes offered in the critically acclaimed movie 'Passenger 57' that we should "Always bet on black."
I want my tax money back.
Re:Nope (Score:1)
To be NP-complete it needs to be in NP as well. It may seem like a silly thing, and it'll be simple in many cases (guess... try). Otherwise you might drag in things from outside NP.
Re:Nothing new... (Score:3)
The new BubbleServer GS320 is the industry's fastest NP Solution server, with up to thirty two Glass EV67 plastic rods, the highest levels of availability, and an innovative architecture for managing massive systems in record time. A modular structure allows you to purchase only currently needed glass and rods -- and still be prepared for explosive growth. With upgrades, expect a 20-fold increase in application performance over the lifetime of the system.
The BubbleServer GS320 is ideal for NP-business and other critical enterprise applications with high or unpredictable growth rates -- and as a replacement and growth extension for BubbleServer GS140 customers. A system configured with as little as one glass plate and one rod will be able to grow to a fully loaded 32-way system without operational disruption. Single system management lets you add capacity and applications without adding people. Unique partitions allow mixed NP systems on the same server, facilitating workload management and server consolidation.
[From the Compaq AlphaServer [compaq.com] Site]
Re:Some things better left unsolved (Score:2)
Is it just me, or doesn't this seem a bit non-logical?
Not to worry, since as pointed out there are infinitely many unsolvable problems, etc. This means that the methods of mathematics are not adequate to that class of problems. Since this class is infinite, it means that any simple subdivisions of this group are infinite as well. Therefore, not only are there an infinite number of unsolvable problems, but there are an infinite number of solvable problems. Furthermore, these are divisible into an infinite number of other types of problems, such as an infinite number of interesting problems vs an infinite number of boring problems.
So therefore it is impossible for humans to solve all interesting mathematics problems using the methods of mathematics, because these are infinite.
You could use the method of Alexander the Great, who was a very good practical mathematician in his own right. [He was taught by Aristotle, who certainly was not a slouch.]
Alexander is famous for solving the mathematical problem of the Gordian Knot. He used a sword.
NP hard and microfluidics (Score:1)
Re:Some things better left unsolved (Score:1)
Cubem autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere. Cuius rei demonstrationem mirabilem sane detexi hanc marginis exiguitas non caperet.
It is impossible for a cube to be written as a sum of two cubes or a fourth power to be written as the sum of two fourth powers or, in general, for any number which is a greater power than the second to be written as a sum of two powers. I have a truly marvellous demonstration of this proposition which this margin is too narrow to contain.
Re:Before a million people get this wrong... (Score:1)
Hehe, I suppose the first thing to do is to reduce all the computer science babbling down to a minimal set of irreducible words but open-ended enough to handle all the CS babbling before it got reduced. Once you get the Abstraction Manipulation Machinery (computer) straightened out then you'll have a correct foundation (digital tool) upon which to solve NP-complete in the digital environment.
Yes, NP-complete is solvable, once you get the tool right.
3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
Re:some considerations (Score:1)
So for an award based more on acceptance than on solution, it sounds to me that the Clay Institute will have plenty of ways to say no. And in the event they ultimately cannot say no, then they can change the rules and awards at will, it seems, given their "rules".
3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
Re:Before a million people get this wrong... (Score:1)
Myth number two: if you have solved integer factorization (or discrete logarithm) then you've proven that P=NP. This is *not* necessarily true. Unfortunately, computer scientists have never been able to classify these problems in the strict P/NP/EXPTIME hierarchy. It is not known if either of these problems is NP-complete (or even weaker, simply NP-hard.)
Uh, finding a factor of a number (as opposed to finding all the factors) and finding the discrete log of a number are both clearly in NP. If P=NP, then cryptanalysis of all standard public and private key cryptosystems I'm aware of is in P (i.e. relatively easy).
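The membership claim is easy to see for FACTOR's decision version ("does n have a nontrivial factor at most k?"): the certificate is the factor itself, checked with a single division. Toy numbers below, made up for illustration:

```python
# A claimed factor is a certificate checkable with one division,
# which is what places the FACTOR decision problem in NP.
def verify_factor(n, k, certificate):
    return 1 < certificate <= k and n % certificate == 0

n = 3551  # = 53 * 67, a toy stand-in for an RSA modulus
print(verify_factor(n, 60, 53))  # True: 53 divides 3551
print(verify_factor(n, 60, 59))  # False: 59 does not divide 3551
```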
There is a recent thesis out there (I don't have the cite handy, but Springer-Verlag published it) on constructing private-key cryptosystems whose cryptanalysis is C-hard for an arbitrary class C. I haven't read it, however, so I can't vouch for its practicality.
Just to keep this result in perspective... (Score:1)
My colleagues routinely solve hard max-clique instances involving several hundred vertices on PCs. See, for example, the now 8-year-old 2nd DIMACS Challenge [cmu.edu] for more details including performance on specific instances [rutgers.edu].
obligatory Beowulf (Score:1)
Re:Some things better left unsolved (Score:2)
Which, IMHO, is how the whole Florida-ballots conundrum should've been solved. Bush vs. Gore, and may the best man win.
Tongue-tied and twisted, just an earth-bound misfit, I
Re:"Other Stuff" (Score:1)
Cliques, Subgraphs & NP-complete problems (Score:1)
Re:Nope (Score:1)
Re:Before a million people get this wrong... (Score:1)
Re:Some things better left unsolved (Score:1)
Same as Apollo (Score:1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ the real world is much simpler ~~
Re:Nothing new... (Score:1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ the real world is much simpler ~~
some considerations (Score:1)
I don't know much of the specifics, but this doesn't seem to be an incredibly interesting development. Since "three-dimensional microfluidic networks" are not quantum-mechanical in nature, at best whatever they can do is to more
What does this mean for you? That this development is not interesting and does not shed new light on anything in the physical or mathematical world: nowhere does the article say that this system will solve the maximum clique problem in polynomial time. NP doesn't mean a problem is unsolvable: just that it becomes more and more difficult to solve as its size increases. Here is an introduction [busygin.dp.ua] to the idea of NP. The Clay Institute is offering $1m [claymath.org] for anyone that can solve NP [claymath.org], so I doubt this article claims to do anything of the sort, although, as we've all by now noticed, I can't actually see the article itself. Not worth $5 if you ask me.
Here is an article that already proposes DNA computing [uidaho.edu]. (.gz, and probably not worth a d/l)
And here are some other NP problems [nada.kth.se]
Re:some considerations (Score:1)
Second, showing that 0.999999999999... = 1:
Let x = 0.999999999999...
10x = 9.99999999...
10x - x = 9.99999... - 0.9999...
10x - x = 9.0 (since there are infinite 9's after the decimal point, they all cancel out)
9x = 9
x = 1, qed.
Paid subscriptions only! (Score:1)
Anyone feel like paying their one-time fees and setting up a mirror?
Cyclopatra
"We can't all, and some of us don't." -- Eeyore
Leonard Adleman [Re:ya whatever] (Score:1)
I believe that 'dude' was Leonard Adleman [usc.edu] ('A' of RSA infamy)
Check out the papers at the USC Laboratory for Molecular Science [usc.edu]
It's a statistical technique, which is never guaranteed to find a solution (I believe) but takes advantage of the huge number of molecules available to speed up the brute-force search. Unfortunately, I have no idea what its time complexity is, nor how it compares to this technique (not paying for a subscription today :)
This is important because... (Score:2)
"And thus the Holy Grail of the Hollywood InfoTainment Digeratti was found to have yet another dent upon its glistening face. And it was good."
"And the Open Source Angels did sing with rejoicing in their hearts, and lo, their song carried upon the net... 'YOU CANNOT SECURE DIGITAL CONTENT'"
shutdown -now
"A microprocessor... is a terrible thing to waste." --
Hand in hand... Math on the chemical level. (Score:1)
God first, wife second, math third, and physics fourth. Wait
Re:And in software news (Score:1)
Re:Nothing new... (Score:3)
If you love God, burn a church!
And in software news (Score:2)
Not quite yet (Score:3)
The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices. This space-time tradeoff is reminiscent of the limitations of using DNA for solving large NP problems (refs. 5-7). We estimate that the largest graph that might be solved with our algorithm, by using 12-inch wafers (commercially available) and 200-nm channels (within the range of photolithography), is 20 vertices. If we use space more efficiently by encoding subgraphs in a plane and use the third dimension for fluid flow, we might solve 40-vertex graphs. By using a computer capable of performing 10^9 operations per second, a 40-vertex graph can be solved in about 20 min, which makes this microfluidic approach (in its current form) impractical to compete with traditional silicon computers for solving large search problems.
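The paper's back-of-envelope numbers are easy to verify: a graph with n vertices has 2^n candidate subgraphs, so 40 vertices means about 1.1 trillion candidates, which at a billion operations per second is roughly 18 minutes. (Assuming one operation per candidate, which is my simplification, not the paper's exact cost model.)

```python
# Sanity-check the scaling claims in the conclusion.
subsets_20 = 2 ** 20   # candidate subgraphs for a 20-vertex graph
subsets_40 = 2 ** 40   # candidate subgraphs for a 40-vertex graph
ops_per_sec = 10 ** 9  # the paper's conventional sequential computer

print(f"20 vertices: {subsets_20:,} candidates")   # about 1 million
print(f"40 vertices: {subsets_40:,} candidates")   # about 1.1 trillion
minutes = subsets_40 / ops_per_sec / 60
print(f"sequential time for 40 vertices: {minutes:.1f} min")
# roughly 18 min, consistent with the paper's "about 20 min"
```

The same 2^n factor is what blows up the device's physical size: each extra vertex doubles the number of candidates the channel network must encode.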
Re:Paid subscriptions only! (Score:2)
Materials and Methods
Fabrication of Microfluidic System. Our method for the fabrication of a 3D microfluidic system has been described in detail elsewhere (refs. 13 and 14). Briefly, the silicon master was fabricated by first spinning a negative photoresist (SU 8-50 or SU 8-100) onto a silicon wafer, which had been cleaned by sonicating in acetone (5 min), then in methanol (5 min), and dried by baking at 180°C (10 min). The photoresist-covered wafer was soft baked (105°C for 15 min) to evaporate the solvent, let cool, then placed under a photomask in a Karl Suss mask-aligner (Zeiss) to expose the photoresist. The exposed photoresist was baked (105°C for 5 min), then developed in propylene glycol methyl ether acetate to create a master with one level of feature. Masters with two levels of feature were fabricated by repeating this procedure with a different photomask. The masters were silanized in vacuo (in a desiccator) with 0.5 ml tridecafluoro-1,1,2,2-tetrahydrooctyl-1-trichlorosilane
for 12 h. The silanized masters were then used for molding slabs and membranes of poly(dimethylsiloxane) (PDMS). To fabricate the PDMS membrane, a drop of
PDMS prepolymer was sandwiched between the master and a Teflon sheet and was allowed to cure overnight under pressure (10-50 kPa) at 70°C. The cured PDMS
membranes and slabs were aligned by using a home-built micromanipulator stage, oxidized in a plasma cleaner (model SP100 Plasma System, Anatech, Alexandria,
VA) for 40 sec at 60 W under ≈0.2 Torr (1 torr = 133 Pa) oxygen, and brought into contact to form an irreversible seal. This procedure of alignment, oxidation, and
sealing was repeated multiple times for the fabrication of a multilayer microfluidic system.
Chemicals. Negative photoresists (SU 8-50 and SU 8-100) were obtained from Microlithography Chemical (Newton, MA), propylene glycol methyl ether acetate from Aldrich, tridecafluoro-1,1,2,2-tetrahydrooctyl-1-trichlorosilane from United Chemical Technologies (Bristol, PA), PDMS prepolymer (Sylgard 184) from
Dow-Corning, fluorescent nanospheres from Molecular Probes, and silicon wafers from Silicon Sense (Nashua, NH).
[...]
Conclusions
The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices. This space-time tradeoff is reminiscent of the limitations of using DNA for solving large NP problems (refs. 5-7). We estimate that the largest graph that might be solved with our algorithm, by using 12-inch wafers (commercially available) and 200-nm channels (within the range of photolithography), is 20 vertices. If we use space more efficiently by encoding subgraphs in a plane and use the third dimension for fluid flow, we might solve 40-vertex graphs. By using a computer capable of performing 10^9 operations per second, a 40-vertex graph can be solved in about 20 min, which makes this microfluidic approach (in its current form) impractical to compete with traditional silicon computers for solving large search problems. In comparison to DNA-based computation, this microfluidic system can carry out certain logical operations, such as addition, more naturally. The z-direction flow in the four-layer microfluidic device (Fig. 4) acts as an integrator by adding the beads that arrive at the wells from all of the layers. In contrast, the implementation of an algorithm for DNA-based addition was nontrivial (ref. 8), although a far more direct method for DNA addition has been proposed (ref. 17) and partially implemented (ref. 18).
The algorithm we described here for using fluids to search the parallel architecture of a microfluidic system could also, perhaps, be implemented in a 3D microelectronic circuit. There are, however, several advantages to using microfluidic systems. (i) Fluids can carry fluorescent beads or molecules, thereby making the readout by using parallel optical systems simpler than in microelectronic circuits. (ii) Many different "color" beads/molecules can be used in microfluidic systems, whereas electrons have only one "color"; this feature permits fluidic systems to encode more information than electrical systems. (iii) Microfluidic systems might not require power (our algorithm, for example, can be implemented by using gravity). Advantages of electrical over fluidic systems include ease of use (no clogging of channels) and the high speed at which electrons travel through the circuit (which is important for implementing sequential algorithms). Although clogging is a concern in microfluidic systems, it was not a problem under our experimental conditions, because of the relatively small sizes of the beads used (400 nm or smaller) compared with the widths of the microchannels (50 μm or greater).
Another motivation for using microfluidic-based computation is the possibility of integrating fluidic components for controlling complex chip-based microanalytical devices. In addition, computation by using microfluidic systems is complementary to computation based on biological molecules (e.g., DNA) or coupled chemical reactions (refs. 19 and 20). The wells and channels in our 3D microfluidic system, for example, could compartmentalize and transport molecules and reactions for the construction of a chemical or DNA-based computer.
AI out Decryption in (Score:2)
So those of you worrying about AI can relax a bit. Also, building hardware which solves a single algorithm with no abstract constants is not particularly useful for taking over the world, simulating thought, etc.
There might be some applications to decryption, if the size of the key is fixed, etc. But they'll have to build a different "computer" as the key size grows. Won't be a problem for the NSA though...
the article is wrong (Score:2)
Such a network may solve an NP complete problem quickly under an idealized model of fluid dynamics. However, the real world does not obey idealized fluid dynamics. It doesn't obey it in theory, and it doesn't obey it in practice. Fluid dynamics is a convenient approximation that breaks down at some point.
It's an embarrassment to see this kind of nonsense come out of Harvard. There are lots of people there who should know better. This isn't a problem with some obscure point in the physics of computation; it shows a pretty fundamental misunderstanding of physical theory by a physicist.
more problems in full article (Score:2)
Well, I managed to get a hold of the full article. It turns out that their circuits are exponential in size, and the largest problem they can solve even under optimistic assumptions is of size 20 (if that). So they didn't actually solve the problem in polynomial time using fluid dynamics; they are merely confused about the meaning of the terms "NP complete" and "polynomial time" (which refer to complexity on a Turing machine, not circuit depth). In their introduction, they also make lots of mistakes in the definition of NP completeness.
These people don't know what they are doing, and they don't seem to have found anything interesting as far as I can tell. Just ignore it. As far as I know, not all articles in PNAS are peer reviewed; this one should have been.
No, he gets the cigar (Score:2)
Unfortunately, you need to infer the definition of the problem from his description of the solution. I originally thought it was an MST as well, but as SLi points out, it sounds like it is a Steiner tree problem. To be exact, a Euclidean Steiner tree problem, which is NP-hard.
you've got a solution to a particular version of the minimum spanning tree problem (the points are all located in 2-D Euclidean space, which is not necessarily true for the general problem).
So it would seem that this is not an MST problem. Additionally, the fact that the graph is embeddable in a 2D Euclidean space does not make a difference. MST is in P, regardless of the weights assigned to the edges.
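To back up the "MST is in P" point: Kruskal's algorithm finds a minimum spanning tree in O(E log E) for any edge weights. A minimal sketch on a made-up four-vertex graph:

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges by weight, then greedily keep any edge
    that joins two different components (tracked with union-find).
    Runs in O(E log E), which is why MST, unlike Steiner tree, is in P.
    """
    parent = list(range(n))

    def find(v):
        # Find the component root, with path halving for speed.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree, total = [], 0
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return tree, total

# Illustrative weighted graph: (weight, u, v) triples.
E = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree, total = kruskal(4, E)
print(tree, total)  # 3 edges, total weight 6
```

Contrast with Euclidean Steiner tree, where you may add new junction points, and no polynomial-time exact algorithm is known.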
Re:No, he gets the cigar (Score:2)
Agreed.
I guess what's most interesting about this example is that it challenges us to think about a "computer" in a way that we normally do not. Makes me wonder if someday my system will have an add-on NP-complete card for handling a specific NP-complete problem.