Science

3D Microfluid Computers Used To Solve NP Problems

Sergio Lucero writes "The Proceedings of the National Academy of Sciences (PNAS) just published an article on the use of 3D microfluidic networks as computers for the solution of very hard (NP) mathematical problems. So far it looks like they've solved a specific case of the maximum clique problem, but of course this also means they can do other stuff."
  • by Anonymous Coward
    "First Post" attempts are "Off Topic", not "Trolls". Please read the FAQ and moderating correctly. I say this because Metamod no longer allows us to correct the rampant mislabeling, as it ignores and penalizes us for disagreeing with more than 2 moderations.
  • by Anonymous Coward
    "If you can do one, you can do them all"

    I believe this applies to the class of NP-complete problems, not NP problems.
  • by Anonymous Coward

    Here's the abstract and the first couple of paras of the introduction, for those who can't get access to the article. (I'll get into shit if I set up a mirror.)

    Using three-dimensional microfluidic networks for solving computationally hard problems

    Daniel T. Chiu, Elena Pezzoli, Hongkai Wu, Abraham D. Stroock, and George M. Whitesides

    Abstract

    This paper describes the design of a parallel algorithm that uses moving fluids in a three-dimensional microfluidic system to solve a nondeterministically polynomial complete problem (the maximal clique problem) in polynomial time. This algorithm relies on (i) parallel fabrication of the microfluidic system, (ii) parallel searching of all potential solutions by using fluid flow, and (iii) parallel optical readout of all solutions. This algorithm was implemented to solve the maximal clique problem for a simple graph with six vertices. The successful implementation of this algorithm to compute solutions for small-size graphs with fluids in microchannels is not useful, per se, but does suggest broader application for microfluidics in computation and control.

    Introduction

    Intuitively, a problem is in the class NP (i) if it can be solved only by an algorithm that searches through exponentially many candidate solutions, and (ii) when the correct solution has been identified, verification of the solution can be performed easily (ref. 4). No feasible sequential algorithm is known to solve NP problems. If one can, however, search all candidate solutions simultaneously in parallel, solution of NP problems becomes feasible. The potential for parallel searches is the foundation for interest in DNA-based computation (refs. 5-8) and in part for interest in quantum computation (refs. 9-12). In addition to sequence-specific recognition (the basis of DNA computation) and manipulation and resolution of entangled quantum states (the basis for quantum computation), other physical phenomena or systems might also be suited for solving computational problems. In this paper, we demonstrate the use of moving fluids in a microfluidic system to solve an NP complete problem.

    Microfluidic systems are providing the basis for new types of rapid chemical and biological analyses (refs. 1-3). With the increasing complexity of microfluidic systems, there is also a growing need to carry out relatively complex processes (logic operations, fluidic control, and detection) on the chip. Although these processes can be (and usually are) controlled by electronic systems, it would be more natural and compatible to use the same medium (that is, a fluid) to carry out some or all elements of the required computation and control/decision making. Here, we explore the potential of microfluidic systems in computation by using a representative "hard" problem: a parallel solution of a nondeterministically polynomial (NP) complete problem [the maximal clique problem (MCP)] for a graph with six vertices. The method we describe uses moving fluids to search the parallel branches of a three-dimensional (3D) microfluidic system simultaneously, exploits parallel fabrication of relief patterns by photolithography, and detects solutions in parallel with fluorescence imaging.
  • by Anonymous Coward
    Of course exactly the same can be said of those who argue for 'market forces' to run the economy ... personal greed does indeed tend toward local optimizations - but some form of global control and insight can help find global minima (hello Mr Greenspan :-) - anyone who's used logic design tools to do chip design knows in their gut that Mr Friedman was full of it :-)

    Local minima my ass. Jesus says not to be greedy, so the United States government should go and beat up anybody that doesn't listen to him. Blow up Iraq, blow up Microsoft. Praise Jesus!
  • by Anonymous Coward on Saturday March 24, 2001 @03:27PM (#342236)
    As someone who has lived and breathed this stuff for years and years (and years), I thought I'd chime in and correct the definitions given above, which are better than most that I see, but are still (alas) a bit incorrect. There are so many subtleties to these definitions that it has taken me nearly ten years to feel that I have 'em right. And I still might not.

    Your definition of NP is correct, provided you mention that the verification takes place on a deterministic Turing machine, and that verification is done using a polynomial-size certificate. If you give me a computational problem X, then I can show you it is in NP by (1) demonstrating that there is a way to express the proper solution x' to an instance x of X using O(f(|x|)) characters, where f(|x|) is a polynomial function, and (2) that given both the instance x and the solution x', you are able to verify using a DTM that x' really is a solution to x in polynomial time.

    Your definition of P is correct. You are also correct in saying that P is a subset of NP. Mathematicians are still uncertain whether P is a proper subset of NP -- this is a huge open problem.

    Let's tackle your definition of NP-complete next. The correct definition of an NP-complete problem is that it is both a member of NP *and* a member of NP-hard. Note that this does *NOT* mean that NP hard problems cannot be solved in P time. That is a BIG open question, and if you have a proof for it, I'd love to see it. :-)

    A problem is defined to be NP-hard if it is polynomial time reducible (on a DTM) to Circuit-SAT. This is the very famous Cook-Levin theorem. (The theorem also demonstrates that Circuit-SAT happens to be in NP as well, and is hence NP-complete. But that isn't the main point of the theorem -- something people often miss.) Generally when people define the NP-hard set, they say "any problem which is polynomial time reducible to another problem in NP-hard", but of course that is problematic in that it is a circular definition. The genius of the Cook-Levin theorem is that it gave us a base problem for defining the set, and gives us a very illuminating method for looking at the meaning of NP-hard.

    Okay. Now that we've got that squared away, I'd like to address two final inaccuracies that I hear a lot. Myth number one: if you can efficiently factorize large integers, you've broken today's best encryption schemes. This is not true. In fact, you must solve the *discrete logarithm* problem in polynomial time in order to break today's encryption schemes. The reason that integer factorization is often substituted here is that usually when you've solved this, you've also solved the discrete logarithm problem. But *not always*. (Witness the L3 algorithm or other lattice reduction algorithms.)

    Myth number two: if you have solved integer factorization (or discrete logarithm) then you've proven that P=NP. This is *not* necessarily true. Unfortunately, computer scientists have never been able to classify these problems in the strict P/NP/EXPTIME hierarchy. It is *not* known if either of these problems is NP-complete (or even weaker, simply NP-hard.)

    A final variant and twist on both these myths: if you can show that P=NP, then today's modern encryption schemes fail. This is only partially true. There are some schemes which rely on the computational intractability of NP problems. These schemes would break. However, there are other schemes (such as RSA, and one I just heard about called NTRU) which rely on problems which are *not* understood to be NP-complete (discrete logarithm, and in NTRU's case I think it is change of algebraic rings, which I don't understand so well.) These encryption schemes would *still stand*, though most mathematicians would agree that they fall on shakier ground.

    Okay, that's my theory lesson for today. I hope everyone has found this useful. If anyone has spotted inaccuracies or blatant lies, please let me know.

    love,
    the anonymous theory coward
  • No. Read the sign at the graduation office:
    Someone set you up the red tape. All your diploma are belong to us.

  • by alewando ( 854 ) on Saturday March 24, 2001 @01:16PM (#342238)
    It's far better, imho, to let certain things remain unsolved. NP problems are one class of such things.

    If humans solve everything, then we'll grow lazy and ungrateful. Goedel showed that there are infinitely many unsolvable problems in the world (and Turing showed there are infinitely many uncomputable ones), but there's no guarantee that this infinity of problems consists of interesting problems. In fact, they might all be dull ones! And where would that leave us?

    Mathematics is where it is today because bygone generations left problems for us to solve. They may have wanted to solve them all, but for whatever reason, they were unable. Are we really better than they? Doesn't conservatism dictate that we should look to our nation's history and traditions when determining what current approach to take?

    Where will we be in twenty years if all the interesting mathematical problems are solved? There'll be anarchy in the streets. Unemployed mathematicians and computer scientists will be roaming the nation's highways, raping our women and defiling our basilicas.

    I'm not so sure this is a good thing. Certain things man was not meant to know. Certain things are beyond our comprehension by design. If we seek this path, we may bring down divine vengeance upon the National Academy of Sciences and all who are complicit in their proceedings. It is a mark of human hubris, and Xenu won't look favorably upon it.

    Friends don't let friends solve NP problems.
  • How appropriate: Windoze is always pretty nondeterministic. I don't know how it's polynomial, though...

    #define X(x,y) x##y
  • Not at all.

    NP stands for non-deterministic polynomial, which means that you can solve the problem in polynomial time, but only on a non-deterministic machine. Another way to say it is that, given a supposed solution to the problem, you can test whether it really is a solution in polynomial time on a deterministic machine.
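
    To make the "verify quickly" half concrete, here's a minimal Python sketch (the graph encoding and names are mine, purely for illustration): checking a claimed clique touches only the O(k^2) pairs of its vertices, while *finding* the maximum clique has no known polynomial-time algorithm.

      from itertools import combinations

      def verify_clique(edges, candidate):
          # Polynomial-time check: every pair of candidate vertices
          # must be joined by an edge.
          edge_set = {frozenset(e) for e in edges}
          return all(frozenset(pair) in edge_set
                     for pair in combinations(candidate, 2))

      # A triangle plus a dangling vertex: {1,2,3} is a clique, {1,2,4} is not.
      edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
      assert verify_clique(edges, [1, 2, 3])
      assert not verify_clique(edges, [1, 2, 4])
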
  • By "mirror" you mean pirated copy, right?
  • Soap-water films do not always settle on the best solution; they often get stuck at a local minimum. The more complex the problem, the more likely they are to find such a local minimum.

    -
  • but as SLi points out it sounds like it is a Steiner tree problem. To be exact, a Euclidean Steiner tree problem, which is NP-hard.
    Yep, that makes sense. I'm still not convinced you'll get the global minimum via the method described, though - it's not something I'll just buy with a handwave.
    Additionally, the fact that the graph is embeddable in a 2D Euclidean space does not make a difference. MST is in P, regardless of the weights assigned to the edges.

    Quite true, also. I was being careful to state the Euclidean nature of the method he was describing, because there are, as you undoubtedly know, other graph problems where it does make quite a difference.

  • If I'm following you correctly, you've got a solution to a particular version of the minimum spanning tree problem (the points are all located in 2-D Euclidean space, which is not necessarily true for the general problem). That's all fine and dandy, except that the minimum spanning tree problem is not NP-complete. In fact, there is a relatively straightforward polynomial-time algorithm for it.

    The difference between a problem with a straightforward polynomial-time solution and an NP-hard problem is often very small. Be careful!
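
    (For the curious: the straightforward polynomial-time algorithm is Prim's or Kruskal's. Here's a rough Prim's sketch in Python with made-up weights -- an illustration, not anything from the article.)

      import heapq

      def prim_mst(graph, start):
          # Grow the tree one cheapest crossing edge at a time.
          # `graph` maps vertex -> list of (weight, neighbor) pairs.
          # O(E log V): comfortably polynomial, so MST is firmly in P.
          visited = {start}
          frontier = list(graph[start])
          heapq.heapify(frontier)
          tree = []  # (weight, vertex), recorded as each vertex joins
          while frontier:
              w, v = heapq.heappop(frontier)
              if v in visited:
                  continue
              visited.add(v)
              tree.append((w, v))
              for edge in graph[v]:
                  heapq.heappush(frontier, edge)
          return tree

      # Toy example: four cities with made-up road costs.
      graph = {
          'a': [(1, 'b'), (4, 'c')],
          'b': [(1, 'a'), (2, 'c'), (6, 'd')],
          'c': [(4, 'a'), (2, 'b'), (3, 'd')],
          'd': [(6, 'b'), (3, 'c')],
      }
      print(prim_mst(graph, 'a'))  # [(1, 'b'), (2, 'c'), (3, 'd')]
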

  • Methinks you didn't read past the third paragraph...

  • NP-hard: A decidable problem such that any NP problem can be reduced to it in polynomial time.

    NP-complete: NP and NP-hard.

    There DO exist NP-hard problems which are not NP. In fact, there exists an infinite ascending tower of classes that extend beyond NP (which are all NP-hard problems.)
  • It looks like they trade off exponential time with exponential space...

    Excerpts from the conclusion (emphasis mine):

    "The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices. ... We estimate that the largest graph that might be solved with our algorithm... is 20 vertices. ... If we use space more efficiently ... a 40-vertex graph can be solved in about 20 min."

  • Myth number one: if you can efficiently factorize large integers, you've broken today's best encryption schemes. This is not true. In fact, you must solve the *discrete logarithm* problem in polynomial time in order to break today's encryption schemes.

    The RSA cryptosystem relies on factorization being a hard problem. If you find an efficient algorithm for the factorization problem, you have broken RSA. RSA may not be the best cryptosystem, but it is widely used.

    From the RSA Laboratories FAQ 4.0, Question 3.1.3: The most damaging would be for an attacker to discover the private key corresponding to a given public key; this would enable the attacker both to read all messages encrypted with the public key and to forge signatures. The obvious way to do this attack is to factor the public modulus, n, into its two prime factors, p and q. From p, q, and e, the public exponent, the attacker can easily get d, the private exponent. The hard part is factoring n; the security of RSA depends on factoring being difficult.

  • Well, yeah, but it's not free. You've still got to figure out how to map your specific problem onto some other one that you know how to solve.
  • Well in theory if you can solve any NP complete problem you can solve them all - you just need to figure out how you can map the new problem onto the solvable one.

    Of course there are practical difficulties using these types of analog computer as you scale up, but I don't know how much serious effort has been taken to look for analog approaches that would scale.
  • I'm not sure if it's a local minimum or not - it was a LONG time ago that I saw this done on TV.

    Quantum may well be the practical way to go for NP complete problems, and I believe the general approach is the same as the DNA method you linked to - you figure out how to make your quantum system simultaneously represent all solutions to the problem, then reject the ones you don't want - pretty much the exact opposite of conventional methods.

  • Well, it better be a damn fast Turing machine if you want to solve any O(N**1000) problems! ;-)

    O(X) only applies to stepwise problem solving as done by digital computers (and Turing machines). If you can use an analog/parallel approach, then you have circumvented the complexity entirely.

    See the DNA and spaghetti examples in this thread, or my comment about quantum solutions.
  • It means the problem is parallelizable.

    Not necessarily. You can consider the analog soap bubble computer a parallel solution, since it is "computing" the solution, but the DNA/quantum computer approaches are radically different... With that paradigm you're not relying on the solution method being parallelizable, because you're not trying to derive the solution!.. Instead you're assuming that you can create a non-computed set that contains the solution, then remove the non-solutions. It puts the emphasis on verification rather than solution generation, and regardless of the problem you can always represent multiple potential solutions in parallel.
  • No - strong AI is a philosophical problem (for some). There's no reason to believe it's NP complete.
  • by SpinyNorman ( 33776 ) on Saturday March 24, 2001 @01:26PM (#342255)
    NP complete problems are only hard if you insist on using a digital computer to solve them.

    Want to find the shortest set of roads to build to interconnect a bunch of cities? You need a couple of pieces of glass, some round plastic rod and some soapy water...

    Cut the rod into 1" lengths, so that you have one per city, then glue them as separators between the two pieces of glass so that the positions of the rods represent the locations of the cities. Dip the completed "computer" into soapy water, and let surface tension and energy minimization do their job - the soap bubbles that form between the rods will have edges that join where you should build your roads.
  • Sorry, no subscription, but isn't strong AI an NP problem? Eek, the implications!

    Boy, this sounds exciting. Too bad it'll probably turn out to be nothing so cool if I get to read the full article...

    -grendel drago
  • I can only access the abstract of the article, but isn't the size of the problem they can solve limited to the size of their "microfluid" computer? If so, what's the breakthrough?

    If you are allowed to limit the problem size, then you can solve it on a normal computer. All you need to do is make the computer big enough. For instance, you could make a travelling-salesman computer out of a circuit that builds all possible paths in parallel and then uses pairwise comparison to find the lowest-cost path.

    This assumes the circuit is large enough to hold the whole search space, which is the fatal flaw. Classification of a problem as NP-complete is based on its asymptotic complexity: how the computation time grows as the problem size grows without bound. Put a bound on the problem size, and you haven't solved anything.

    It seems to me that the microfluid computer has the same flaw.

    Does anyone know more about this? Please straighten me out.
    --
    Patrick Doyle
  • by p3d0 ( 42270 ) on Saturday March 24, 2001 @05:01PM (#342258)
    You have it wrong too. If you can find a problem in NP which can't be solved in polynomial time, you'll be famous.

    A problem X is in the set NP-hard if all problems in NP can be transformed into X in polynomial time. Thus, every problem in NP-hard is at least as hard as every problem in NP.

    A problem is in NP-complete if it is in NP and NP-hard. Thus, NP-complete is a "representative" set of NP problems, such that solving one of these in polynomial time would also mean solving all other NP problems.

    Here [ic.ac.uk] is the FOLDOC [ic.ac.uk] entry for NP-hard. It explains all this.
    --
    Patrick Doyle

  • I don't think we should fall into the trap of thinking that all problems need to be solved by digital computers. Way back in the 1960s and 70s, engineers were trained in modeling. If you scale the parameters properly, then you can get a (reasonably) accurate answer from a small test case. Many aircraft designs were tested on a small-scale model in water, for instance. Looking at the water flow meant that they had all those aerodynamic calculations done, in real time. The best computers on earth at the time would have taken decades to crunch the numbers to get the same result.

    The 'DNA computers' we've been making such a fuss over lately are essentially just modeling again. If you can sift well, and your modeling parameters are correct, you can run a lot of trials (10^38, anyone?) in a few minutes. (Of course, it may take you weeks or even years to set it up correctly, and the old GIGO rule still applies.)

    This seems to be just modeling a class of hard computing problems, and letting reality show us what the answer is. Sometimes, analog is best.
  • The original argument about 'market forces' presumed many small, local companies. Our current situation of large mega-companies was cited by Adam Smith as being little different than government intervention.

    A long time ago, I concluded that the biggest difference between a communist-style centralized economy and an end-game capitalistic economy is how obvious it is who's pulling the strings. I've come across precious little that would cause me to change my mind.
    --

  • Godel showed that in any consistent proof system, there are true statements that the system cannot prove. Take for example the statement: "This proof system cannot prove this statement."

    This has nothing to do with undecidability, which deals with functions (or sets), not statements. An undecidable function is one that no machine can be built to calculate (an undecidable set is one which has no decidable function that determines membership of the set).

    Also, you forget that when trying to prove a statement, there are more options than proving it is true or proving it is false. You can also prove that it can't be proven either way. This has not been done with the P=NP problem.
  • This is a bit of my favorite hobby horse, so I hope you'll excuse me repeating it:

    ALL key-based encryption is in NP (it is easy to verify a suggested key). So it matters little whether it is based on factorisation, points on an elliptic curve, rotor positions, or S-boxes.

    BTW I do believe that while nobody knows how hard FACTOR really is, it is commonly held NOT to be NP-COMPLETE.
  • What we want to show is that P = NP, right? Simple! P = NP <=> P/P = NP/P <=> 1 = N. There you have it!

  • , (ii) parallel searching of all potential solutions by using fluid flow

    I couldn't read the article (paid subscription), but this excerpt looks like it's still brute force. They were only able to solve in P time by using parallelism.

    Does this really count? I'm curious.
  • I just want to point out, since it's important, that the definitions require the reduction from one NP-Complete problem to another to be doable in P time.
  • The inner link goes to PNAS which seems to cost over $100/yr. to subscribe to. Please don't post articles referencing such material.
  • I've been using microbrew to solve my problems for years...
  • And the reason that people keep saying that this problem is a simple MST problem is that they are picturing a problem in which all nodes are known.

    For this problem, picture a three-city setup, where each city is a vertex of a triangle. Then the solution to this problem is to have each city connect a road towards a fourth, unknown node somewhere in the middle of the triangle. The NP-edness of this problem comes from the fact that you do not know where the fourth node belongs, and the only way to prove where it belongs is to test all available placements.

    The soap bubble will always yield the optimal solution to this because (as you state) of surface tension and energy minimization.

    --
    "A mind is a horrible thing to waste. But a mime...
    It feels wonderful wasting those fsckers."
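
    A quick numeric check of the triangle example (toy numbers: an equilateral triangle with unit sides, nothing from the thread itself): the extra interior node really does beat hooking two sides together directly.

      from math import dist, sqrt

      # Equilateral triangle with unit sides.
      a, b, c = (0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)

      # Spanning tree without an extra node: any two sides, total length 2.
      two_sides = dist(a, b) + dist(a, c)

      # Steiner tree: a road from each city to the interior Fermat point,
      # which for an equilateral triangle is the centroid, by symmetry.
      fermat = ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
      steiner = dist(a, fermat) + dist(b, fermat) + dist(c, fermat)

      print(two_sides, steiner)  # 2.0 versus ~1.732 (= sqrt(3))
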
  • As someone else pointed out, RSA is based on the prime factorization problem, not the discrete log problem. Diffie-Hellman and ElGamal are two public key systems based on discrete logarithms.
  • Actually, it'd be 'Redundant', wouldn't it? Since the 'by' line clearly indicates the post is #1.
    Now, this post is Offtopic.

    Of course, I could throw in a comment about the article to change that, but I can't seem to actually see the article.
    You have requested an item that requires a subscription to PNAS Online.
  • by jonnythan ( 79727 ) on Saturday March 24, 2001 @01:36PM (#342271)
    at http://www.pnas.org/cgi/content/short/98/6/2961

    if you trust me [pnas.org]
  • The article got it wrong, too. Obviously, they didn't have a computer scientist as a peer reviewer. So much for the theory that all scientists are computer scientists...
  • by adubey ( 82183 ) on Saturday March 24, 2001 @01:39PM (#342273)
    The editor got it wrong, and so did a number of posters, but here is a quick run down of some major complexity categories:

    NP: A problem which can be *verified* in polynomial time. This even includes constant-time problems, like: is 5*6=42?

    P: problems in NP that can be solved in polynomial time (ie quickly)

    NP-hard: those problems in NP that can't be solved in P-time. This is probably what all Slashdotters mean when they say "NP".

    NP-complete: a class of problems that all other problems in NP reduce to. One of the biggest open questions in computer science is determining if any NP-complete problem does not reduce to any problem in P (then any NP-hard problem would have an easy solution...)
  • by Ghazgkull ( 83434 ) on Saturday March 24, 2001 @01:42PM (#342274)
    Whew. Let's get a few things straight. First of all, NP problems are *not* hard by definition. The class of NP problems certainly includes hard problems, but it also includes all of the P problems. Second, solving any old NP problem doesn't mean you've solved all NP problems (after all, we have solutions to lots of P problems, and P is in NP). It is only NP-COMPLETE problems that have the property of solving every problem in NP via a poly-time reduction. Third, solving a specific instance of an NP-complete problem isn't a big deal. I'll make one up right now. The solution to the instance of the CLIQUE problem for n=0 (zero vertices) is false. Wasn't that easy? Solving a specific instance isn't particularly special. You have to solve the *general* problem to grab the big prize (P=NP). Conclusion: Solving a specific instance of an NP problem isn't anything to wet yourself over.
  • Graph Isomorphism and Factoring are two problems which are neither known to be NP-Complete, nor is any poly-time algorithm known for either. I know factoring is in both NP and Co-NP (a certificate can be generated to show that a number has no factors less than n, but don't ask me how). Is there any relation between these two problems? By the "what's clean and elegant is probably true" method of proof, it seems like they should be equivalent. Is graph isomorphism in Co-NP? It seems like it might be possible to generate some certificate showing a property that one graph has but the other lacks.

    One problem I see with trying to show them equivalent is that one problem takes two graphs, and the other takes a number and an upper bound on the size of the factor (or just a number if you are checking for primality). This makes it difficult to imagine a reduction.

    Thoughts?
  • by frankie ( 91710 ) on Saturday March 24, 2001 @01:42PM (#342276) Journal
    Dip the completed "computer" into soapy water, and let surface tension and energy minimization do their job

    I thought that soap can only guarantee a local minimum [curvefit.com] for travelling salesman, not necessarily the global shortest path. Also, if the problem is sufficiently complicated (millions or more nodes, multiple dimensions, etc) it becomes prohibitive to construct a physical representation.

    That said, it's likely that organic [warwick.ac.uk] or quantum solutions are the right approach due to their built-in massive parallelisms.

  • I read the Scientific American article too, but if you notice the interconnection points, there is the shortest amount of roadway, not the shortest/fastest paths. Between three points on a triangle, they intersect in the middle, but it isn't the fastest.
    No, in order to solve the classic TSP, you need to make it the shortest round trip, not be cheap on the construction materials! To really do a TSP, you need a permutation generator (got one!), and start trying solutions, prune larger nodes off, and skip, and you'll finally have it.
    Granted, it's messy, long, and not too elegant (program the PG to skip bad sequences..), but it's the only way I've found to get there. If the soap array generated the shortest routes, instead of optimized materials, I'd agree/apologise. Still, alternative resources must always be investigated, you never know what's out there. =)
  • About 4 years ago some dude figured out how to get DNA to solve the traveling salesman problem. I think it was in Nature, not sure.

    -Jon

    i wish i had a link so i could get a +1...

    Streamripper [sourceforge.net]

  • A.K. Dewdney (quondam writer of Computer Recreations in Scientific American) wrote up some stuff on "organic computing", like sorting in O(n): take a handful of uncooked spaghetti of different lengths, whack one end on the table, yank out the longest piece, then the second longest piece, etc.

    Check out his book "The New Turing Omnibus" for further examples. Incidentally, I recommend this book as one that should be in every hacker's library.
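
    Simulated on a sequential machine the trick degrades to plain selection sort, of course -- the O(n) claim counts the table-whack and each yank as constant-time physical operations. A sketch, purely for illustration:

      def spaghetti_sort(lengths):
          # The "whack" aligns all rods at once (free, physically).
          rods = list(lengths)
          result = []
          while rods:
              longest = max(rods)   # hand lands on the tallest rod: O(1) physically
              rods.remove(longest)  # yank it out of the bundle
              result.append(longest)
          return result             # longest first, as pulled from the bundle

      print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [9, 6, 5, 4, 3, 2, 1, 1]
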

  • > this also means they can do other stuff.

    Yes. In fact it means that they can do every single other NP problem. If you can do one, you can do them all. All you have to pay is a measly polynomial time conversion cost. (Of course this polynomial could have HUGE constants in it, but theoretical CS people only care about the O() of a problem.)

  • I haven't read the article (I don't have an account), but I read the part of it which someone posted. What they seem to have done is build a machine (a set of pipes and sensors, essentially) which can solve this problem. This machine is not a Turing machine, and thus most (if not all) theorems about time complexity or whatever don't apply. This reminds me of a piece in Cryptonomicon where someone was building a machine specifically for solving differential equations, and then Turing comes along and builds a machine which can solve any algorithmic problem. Are we now going back to building problem-specific machines, i.e. the Dark Ages?
  • A problem is defined to be NP-hard if it is polynomial time reducible (on a DTM) to Circuit-SAT

    It's the other way round: A problem is NP-hard if SAT can be reduced to it.

  • does that just work with the problem you described? or any np complete problem? what happens when you're dealing with thousands or millions of nodes?
    --
    Peace,
    Lord Omlette
    ICQ# 77863057
  • my last question was: "where did you hear about this, and where can i find more?" thanks in advance...
    --
    Peace,
    Lord Omlette
    ICQ# 77863057
  • An NP problem is one where, given a program, you can only find the result by running the program. This is usually the case with algorithms which can't be solved arithmetically and have to be solved by iteration.

    For instance, an iterative loop counting numbers from 1 to some limit is not NP - you know the maths to work it out, so given the loop count, you can bypass the whole iteration process (e.g. counting the numbers 1 to 10: the total is (1 + 10) * (10/2) = 55).

    But an iterative loop to find the value for which x = sin(x) _is_ NP (as far as I can remember from my fairly basic maths) - you can't solve this any way other than iteratively. The only way to solve it is to take the sine of a value, see what the error is (either side), add on a correction factor and try again, and repeat this process until you've got the result to the accuracy you want. And there isn't any equation you can use to short-cut this process - if you want the result, you just have to keep working through it.

    Grab.
  • You're right.

    The trick here lies in the fact that analog methods inherently support parallel computations.

    And what is even more important, analog parallel computations scale easily - their parallelism expands infinitely to meet your needs. Well, almost infinitely :)

  • Mathematics is where it is today because bygone generations left problems for us to solve. They may have wanted to solve them all, but for whatever reason, they were unable.

    Not enough space in the margin?
  • If that were the case it would be great. Unfortunately, NP problems are much harder: find the minimum length round-trip road between the cities.

    Soap bubbles solve local minimization problems... the whole trouble with NP complete problems is that local optimization doesn't seem to help with the global optimization problem. I.e. nothing better than brute-force works.
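
    A four-city toy instance (coordinates made up by me, just for illustration) shows the gap: the "local" nearest-neighbor tour comes out measurably longer than the brute-force optimum.

      from itertools import permutations
      from math import dist

      cities = [(0, 0), (2, 0), (4, 1), (0, 1)]

      def tour_length(order):
          return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                     for i in range(len(order)))

      # Local strategy: always drive to the nearest unvisited city.
      tour, unvisited = [0], [1, 2, 3]
      while unvisited:
          nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
          unvisited.remove(nxt)
          tour.append(nxt)

      # Global strategy: brute-force all (n-1)! round trips.
      best = min(permutations([1, 2, 3]), key=lambda p: tour_length((0,) + p))

      print(tour_length(tour))         # ~9.60 -- greedy gets stuck
      print(tour_length((0,) + best))  # ~9.24 -- the true optimum
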

  • I beg your pardon. NP-complete has nothing to do with digital computers. Only Turing machines. If you can prove that P=NP with any implementation of a Turing machine (soap bubbles and plastic rods/digital computer/whatever) it will hold for all Turing machines. That's the point of abstracting away computation to a Turing machine.
  • This is the minimum spanning tree problem. This is not an NP-hard problem.

    Isn't it rather the Steiner tree problem? The difference to the minimum spanning tree problem is that we're allowed to select intermediate points to reduce the cost of the tree. And the Steiner tree problem actually is NP hard.

  • Yeah, I was trying to express that idea in context with the question without getting too involved. I've studied big O and little O. Sorry, if my answer wasn't satisfactory. FOR A COMPUTER, you don't solve NP problems. Not classically. And STILL, you wouldn't have strong AI. It was a decent question, but I was saying... no, this won't do that. And I am pretty sure that I answered his AI problem in context, no offense.

  • Who the hell even remembers me to call me a karma whore anymore? I've been so busy with school that I can barely take the time to post and then re-explain myself to the idiots who flood this site.

  • No, an NP problem is a computational (supposed) impossibility (or really hard). As in, the formula is insanely hard. AI problems usually center around "what is it exactly that we should be doing to solve this problem," or in some cases lookahead, or scalability. A large neural net falls into the category that you are thinking of, or lookahead in a chess game. That's only solved by having more RAM and more processor speed. Passing the Turing test is just a question of finding a good system to do it with.

  • You write a computer program that solves a handful of NP problems, and call me when it finishes running, OK?

  • by Dungeon Dweller ( 134014 ) on Saturday March 24, 2001 @01:38PM (#342295)
    With all due respect... yeah, it would be kind of tacky, but this is a reputable organization; probably many readers are members.

    ACM (Association for Computing Machinery) charges hundreds of dollars in professional dues for membership, but it's money well spent if you are seriously interested in the profession or, in many cases, NEED truly up-to-date information.

    I don't see any problem with pointing to an article on NAS (proceedings are proceedings, people)... Just don't point to one on Tabloidfor$50.com


  • I think this is a well-known example of an analog computer; years ago (back when they were still a good magazine) Scientific American covered this topic in an issue. They specifically pointed out that the soap bubbles aren't really going to solve the problem, since they get into a local minimum and there's just not enough of a difference to force it to the global min.
    Vince
  • When it comes down to two equal choices in my game of minesweeper, this thing is completely worthless.

    Also, it will never validate the educational postulate that the famous philosopher Wesley Snipes offered in the critically acclaimed movie 'Passenger 57' that we should "Always bet on black."

    I want my tax money back.

  • (On your last point)
    To be NP-complete it needs to be in NP as well. It may seem like a silly thing, and it'll be simple in many cases (guess... try). Otherwise you might drag in things from outside NP.
  • by 1337d00d ( 177978 ) on Saturday March 24, 2001 @09:40PM (#342299)
    their parallelism expands infinitely to meet your needs

    The new BubbleServer GS320 is the industry's fastest NP Solution server, with up to thirty two Glass EV67 plastic rods, the highest levels of availability, and an innovative architecture for managing massive systems in record time. A modular structure allows you to purchase only currently needed glass and rods -- and still be prepared for explosive growth. With upgrades, expect a 20-fold increase in application performance over the lifetime of the system.

    The BubbleServer GS320 is ideal for NP-business and other critical enterprise applications with high or unpredictable growth rates -- and as a replacement and growth extension for BubbleServer GS140 customers. A system configured with as little as one glass plate and one rod will be able to grow to a fully loaded 32-way system without operational disruption. Single system management lets you add capacity and applications without adding people. Unique partitions allow mixed NP systems on the same server, facilitating workload management and server consolidation.

    [From the Compaq AlphaServer [compaq.com] Site]
  • It's far better, imho, to let certain things remain unsolved. NP problems are one class of such things. If humans solve everything, then we'll grow lazy and ungrateful. Goedel showed that there are infinitely many unsolvable problems in the world (and Turing showed there are infinitely many uncomputable ones), but there's no guarantee that this infinity of problems consists of interesting problems. In fact, they might all be dull ones! And where would that leave us?

    Is it just me, or doesn't this seem a bit non-logical?

    Not to worry. As pointed out, there are infinitely many unsolvable problems, etc. This means that the methods of mathematics are not adequate to that class of problems. Since this class is infinite, any simple subdivisions of this group are infinite as well. Therefore, not only are there an infinite number of unsolvable problems, but there are an infinite number of solvable problems. Furthermore, these are divisible into an infinite number of other types of problems, such as an infinite number of interesting problems vs an infinite number of boring problems.

    So therefore it is impossible for humans to solve all interesting mathematics problems using the methods of mathematics, because these are infinite.

    You could use the method of Alexander the Great, who was a very good practical mathematician in his own right. [He was taught by Aristotle, who certainly was not a slouch.]

    Alexander is famous for solving the mathematical problem of the Gordian Knot. He used a sword.

  • Happened to have read this article yesterday while browsing journals/killing time... anything (Harvard chemist) Whitesides writes is a must-read. But the authors emphasize this is a toy (n = 6) problem: don't try scaling this to (e.g.) scheduling 1,000 airliners. And if one airliner breaks down, what do you do: tell the other 999 to sit still while you whip up another microfluidics array...?
  • shoulda made at least 2, Funny, for this reference... quote follows (in both Latin and translated American English):

    Cubem autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere. Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.

    It is impossible for a cube to be written as a sum of two cubes or a fourth power to be written as the sum of two fourth powers or, in general, for any number which is a greater power than the second to be written as a sum of two powers. I have a truly marvellous demonstration of this proposition which this margin is too narrow to contain.
  • "NP-complete: a class of problems that all other problems in NP reduce to. One of the biggest open questions in computer science is determining if any NP-complete problem does not reduce to any problem in P (then any NP-hard problem would have any easy solution...)"

    Hehe, I suppose the first thing to do is to reduce all the computer science babbling down to a minimal set of irreducible words, but open-ended enough to handle all the CS babbling before it got reduced. Once you get the Abstraction Manipulation Machinery (computer) straightened out, then you'll have a correct foundation (digital tool) upon which to solve NP-complete in the digital environment.

    Yes, NP-complete is solvable, once you get the tool right.


    3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
  • The Clay Institute $1m award is based more on acceptance by the math community than it is on a solution. This reminds me of a story a math teacher told me, about a man who applied for membership in some American math society and was turned down. So he went back to Europe, and a few years later he returned and applied again, but this time he had a simple equation that showed .999_ (repeating decimal) equalled 1. I also recall the story of Newton, who was dismissed by his colleagues; there was even an attempt to steal his discovery. And there are many other such histories ...

    So for an award based more on acceptance than on solution, it sounds to me like the Clay Institute will have plenty of ways to say no. And in the event they ultimately cannot say no, then they can change the rules and awards at will, it seems, given their "rules".
    3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
  • Myth number two: if you have solved integer factorization (or discrete logarithm) then you've proven that P=NP. This is *not* necessarily true. Unfortunately, computer scientists have never been able to classify these problems in the strict P/NP/EXPTIME hierarchy. It is not known if either of these problems is NP-complete (or even weaker, simply NP-hard.)

    Uh, finding a factor of a number (as opposed to finding all the factors) and finding the discrete log of a number are both clearly in NP. If P=NP, then cryptanalysis of all standard public and private key cryptosystems I'm aware of is in P (i.e. relatively easy).

    There is a recent thesis out there (I don't have the cite handy, but Springer-Verlag published it) on constructing private-key cryptosystems whose cryptanalysis is C-hard for an arbitrary class C. I haven't read it, however, so I can't vouch for its practicality.

  • My colleagues routinely solve hard max-clique instances involving several hundred vertices on PCs. See, for example, the now 8-year-old 2nd DIMACS Challenge [cmu.edu] for more details including performance on specific instances [rutgers.edu].

  • A cluster of these things would be very useful for finding the maximum clique of Slashdot users
  • Alexander is famous for solving the mathematical problem of the Gordian Knot. He used a sword.
    Which, IMHO, is how the whole Florida-ballots conundrum should've been solved. Bush vs. Gore, and may the best man win.

    Tongue-tied and twisted, just an earth-bound misfit, I
  • If you can do one NPC problem, you can do all NP problems. Not conversely.
  • This one caught my eye because I actually worked on clique detection/maximal common subgraph and finding subgraph isomorphisms for my Ph.D. It turns out these NP-complete problems are directly applicable to the searching of chemical databases. The basic problem is this: given a query molecule A and a database molecule B, does B completely contain A? This is the problem of subgraph isomorphism. Finding a subgraph in the case of a single pair of molecules is tough, especially if the molecules are large. Now imagine running a query against a large database of patented molecules (CAS online: tens of millions of structures). The stakes are high, since patents on pharm molecules can be worth billions; the research to find them can cost similar amounts of money and take years.

    Now to take a step back. If a method was found to solve ANY NP-complete problem in polynomial time, it would mean NP=P, since the NP-complete problems form an equivalence class. NP=P? is one of the foremost unsolved questions of mathematics. My guess is someone has found a trick to solve certain NP problems quickly under specific conditions. This is nothing new; in fact, in my case, I showed that a massively parallel computer (Distributed Array Processor: 4,096 1-bit processors with a torus geometry) can be used to solve subgraph isomorphism quickly for specific cases.
  • No, no, no. NP problems are not "impossible," or even "hard," and the formulas usually aren't hard either. They just take a long (exponential) time to solve. An NP-complete problem has these features:
    • There is no known algorithm that will compute them in polynomial time on a deterministic Turing machine.
    • There IS an algorithm that will compute them in polynomial time on a nondeterministic Turing machine.
    • There is an isomorphism, constructible in polynomial time, between any NP-complete problem and any other. In other words, I can convert any instance of the Traveling Salesman problem to an instance of the Knapsack problem and back again, and the conversion will in general be easier than solving the problem itself.
    • We usually attempt to show a problem is NP-complete by constructing the isomorphism. If I can convert an instance of, say, Maximum Clique into an instance of your problem without taking exponential time, and vice versa, then your problem is NP-complete. (A toy conversion is sketched below.)
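
    The toy conversion, in Python (my own example, nothing from the article): INDEPENDENT SET converts to CLIQUE just by complementing the graph, which is plainly polynomial-time work.

      from itertools import combinations

      def complement(n, edges):
          # Keep exactly the vertex pairs that are NOT edges: O(n^2) work.
          edge_set = {frozenset(e) for e in edges}
          return [pair for pair in combinations(range(n), 2)
                  if frozenset(pair) not in edge_set]

      # Path 0-1-2: {0, 2} is an independent set, hence a clique
      # in the complement graph.
      print(complement(3, [(0, 1), (1, 2)]))  # [(0, 2)]
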
  • Thanks for posting all the definitions. They were quite precise, and that's a good thing. However, IMHO, there is a small mistake in your post. Proving NP=P would imply that you can break RSA in polynomial time.

    It goes this way. RSA relies on the intractability of factoring a product of two big prime numbers. If you can do that fast, then the rest of the breaking is straightforward. Maybe there is a more involved approach, but it does not matter here. The factorization problem I described is clearly in NP: the certificate is just the pair of prime numbers. If P turns out to be the same as NP, then you can for sure do the factorization in polytime, and the rest of the procedure is also "fast", so the overall running time is polynomial. Just before someone complains: my reasoning does not depend on the factorization problem being NP-complete (nobody knows that, BTW). Just being in NP is all we need.

    Another interesting point is that RSA could be broken in polynomial time EVEN if P!=NP. This is because breaking RSA might not be NP-complete. We just don't have a decent lower bound for its complexity.
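
    To see how little is left once the factoring is done, here's a textbook-sized toy in Python (numbers deliberately tiny and made up; real moduli run to hundreds of digits):

      from math import gcd

      # Toy RSA key: n = p*q, with p and q secret, e public.
      p, q = 61, 53
      n, e = p * q, 17
      phi = (p - 1) * (q - 1)
      assert gcd(e, phi) == 1           # e must be invertible mod phi

      # An attacker who factors n = 3233 back into 61 * 53 just repeats
      # the key-generation arithmetic to recover the private exponent:
      d = pow(e, -1, phi)               # modular inverse (Python 3.8+)

      msg = 42
      cipher = pow(msg, e, n)
      assert pow(cipher, d, n) == msg   # decrypts fine: factoring n broke the key
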
  • Oh come on, what you're saying is "maybe scientists should take a break and save some for the rest of us." Go ahead and try to find supporters of that philosophy. As a physicist-in-training, I can tell you that there are a hell of a lot of unsolved problems out there, and I think it's the height of egotism to think we as a generation have any chance of solving all the 'interesting' problems. Even if NP problems become routinely processable, they'll only open the door to more difficult, wonderful and powerful problems. Don't worry, the human race will have beautiful problems for centuries to come.
  • Simple fluid computers were used on the Apollo missions to calculate trajectories. Nice to see that this branch of computer design never really went away.


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~ the real world is much simpler ~~
  • The soap bubble approach is creative but produces only a greedy solution for local minimization. The greedy solution is not always the right solution for the traveling salesman problem.


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~ the real world is much simpler ~~
  • You know Slashdot wouldn't suck if
    1. People read the articles before posting.
    2. Slashdot editors turned 30 seconds of their time toward making sure people can read the articles.
    With that said, you can see a /summary/ at least of the article by going to pnas.org and clicking "Microfluidic networks solve computationally hard problems" near the center of the screen. (gets you here [pnas.org]).
    I don't know much of the specifics, but this doesn't seem to be an incredibly interesting development. Since "three-dimensional microfluidic networks" are not quantum-mechanical in nature, at best whatever they can do is to more /efficiently/ solve what we already can solve. Remember, people, NP stands for "non-polynomial[time]." In other words, as a given 'n' for some measure of the complexity of the type of problem (such as n=6 for the specific achievement this article heralds) increases, the amount of computation (or computational "time") increases at a rate greater than a polynomial... in other words, at exponential or greater rates, and not at something you can express in terms of O(x^n) with n fixed.
    What does this mean for you? That this development is not interesting and does not shed new light on anything in the physical or mathematical world: nowhere does the article say that this system will solve the maximum clique problem in polynomial time. NP doesn't mean a problem is unsolvable: just that it becomes more and more difficult to solve as its size increases. Here is an introduction [busygin.dp.ua] to the idea of NP. The Clay Institute is offering $1m [claymath.org] for anyone that can solve NP [claymath.org], so I doubt this article claims to do anything of the sort, although, as we've all by now noticed, I can't actually see the article itself. Not worth $5 if you ask me.
    Here is an article that already proposes DNA computing [uidaho.edu]. (.gz, and probably not worth a d/l)
    And here are some other NP problems [nada.kth.se]
  • first, the phrase "to steal his discovery" makes no sense literally.
    second, showing .9_ = 1 is easy.
    Let x = 0.999999999999...
    Then 10x = 9.99999999...
    Subtract: 10x - x = 9.99999... - 0.99999... = 9
    (the infinite tails of 9s after the decimal point cancel exactly)
    So 9x = 9
    x = 1, QED.
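
    Or, without the slightly hand-wavy cancellation step, as a geometric series (the standard textbook derivation):

      0.\overline{9} = \sum_{k=1}^{\infty} \frac{9}{10^k}
                     = \frac{9/10}{1 - 1/10} = 1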
  • ...and what's more, they want to use your credit card number as a password.

    Anyone feel like paying their one-time fees and setting up a mirror?
    Cyclopatra
    "We can't all, and some of us don't." -- Eeyore

  • I believe that 'dude' was Leonard Adleman [usc.edu] ('A' of RSA infamy)

    Check out the papers at the USC Laboratory for Molecular Science [usc.edu]

    It's a statistical technique, which is never guaranteed to find a solution (I believe), but takes advantage of the huge number of molecules available to speed up the brute-force search. Unfortunately, I have no idea what its time complexity is, nor how it compares to this technique (not paying for a subscription today :)

  • ...however small an advance this is, it does weaken encryption as a practical surety.

    "And thus the Holy Grail of the Hollywood InfoTainment Digeratti was found to have yet another dent upon it's glistening face. And it was good."

    "And the Open Source Angels did sing with rejoice in their hearts and lo thier song carried upon the net...'YOU CANNOT SECURE DIGITAL CONTENT'"

    shutdown -now


    "A microprocessor... is a terrible thing to waste." --

  • This article just goes to show how many computer science and mathematical problems can be solved with the help of physical scientists (note: not only physics, but chemistry also, as in this case!). This goes along with research efforts at the University of Chicago [uchicago.edu] to build wires that are a single strand of conductive molecules wide. They can potentially solve a lot of mathematical problems by designing circuits that accomplish what they want. (For example, you can put a "junction", where a signal gets split between two strands of molecules. Then one can flip the switch by sending photons to the junction!)

    God first, wife second, math third, and physics fourth. Wait
  • Don't forget that as an option you can get Office bundled with it... Windows NP-complete.
  • by the real jeezus ( 246969 ) on Saturday March 24, 2001 @02:21PM (#342325)
    I'm not sure, but you may be violating the EULA that came with your soap. What brand did you use? Hey, gotta be careful these days with WIPO, DMCA et cetera...

    If you love God, burn a church!
  • Microsoft announced that their newest version of Windows will be renamed to Windows NP, for Windows Very Hard.
  • by Spy Hunter ( 317220 ) on Saturday March 24, 2001 @01:28PM (#342335) Journal
    Unfortunately, this system is not (yet) practical for use in solving actual hard NP problems. From the article's conclusion:

    The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices. This space-time tradeoff is reminiscent of the limitations of using DNA for solving large NP problems (refs. 5-7). We estimate that the largest graph that might be solved with our algorithm, by using 12-inch wafers (commercially available) and 200-nm channels (within the range of photolithography), is 20 vertices. If we use space more efficiently by encoding subgraphs in a plane and use the third dimension for fluid flow, we might solve 40-vertex graphs. By using a computer capable of performing 10^9 operations per second, a 40-vertex graph can be solved in about 20 min, which makes this microfluidic approach (in its current form) impractical to compete with traditional silicon computers for solving large search problems.
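
    The quoted 20 minutes is easy to sanity-check, assuming roughly one operation per candidate vertex subset:

      # A 40-vertex graph has 2**40 vertex subsets to examine.
      candidates = 2 ** 40                      # ~1.1e12
      ops_per_second = 1e9                      # the quoted conventional CPU
      print(candidates / ops_per_second / 60)   # ~18.3 minutes -> "about 20 min"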
  • ...and a couple more chunks are here, for your reading delight. Most of the article makes heavy use of figures so wouldn't be much use posted here.

    Materials and Methods

    Fabrication of Microfluidic System. Our method for the fabrication of a 3D microfluidic system has been described in detail elsewhere (refs. 13 and 14). Briefly, the silicon master was fabricated by first spinning a negative photoresist (SU 8-50 or SU 8-100) onto a silicon wafer, which has been cleaned by sonicating in acetone (5 min) then in methanol (5 min) and dried by baking at 180°C (10 min). The photoresist-covered wafer was soft baked (105°C for 15 min) to evaporate the solvent, let cool, then placed under a photomask in a Karl Suss mask-aligner (Zeiss) to expose the photoresist. The exposed photoresist was baked (105°C for 5 min), then developed in propylene glycol methyl ether acetate to create a master with one level of feature. Masters with two levels of feature were fabricated by repeating this procedure with a different photomask. The masters were silanized in vacuo (in a desiccator) with 0.5 ml tridecafluoro-1,1,2,2-tetrahydrooctyl-1-trichlorosilane for 12 h. The silanized masters were then used for molding slabs and membranes of poly(dimethylsiloxane) (PDMS). To fabricate the PDMS membrane, a drop of PDMS prepolymer was sandwiched between the master and a Teflon sheet and was allowed to cure overnight under pressure (10-50 kPa) at 70°C. The cured PDMS membranes and slabs were aligned by using a home-built micromanipulator stage, oxidized in a plasma cleaner (model SP100 Plasma System, Anatech, Alexandria, VA) for 40 sec at 60 W under ≈0.2 Torr (1 torr = 133 Pa) oxygen, and brought into contact to form an irreversible seal. This procedure of alignment, oxidation, and sealing was repeated multiple times for the fabrication of a multilayer microfluidic system.

    Chemicals. Negative photoresists (SU 8-50 and SU 8-100) were obtained from Microlithography Chemical (Newton, MA), propylene glycol methyl ether acetate from Aldrich, tridecafluoro-1,1,2,2-tetrahydrooctyl-1-trichlorosilane from United Chemical Technologies (Bristol, PA), PDMS prepolymer (Sylgard 184) from Dow-Corning, fluorescent nanospheres from Molecular Probes, and silicon wafers from Silicon Sense (Nashua, NH).

    [...]

    Conclusions

    The strength of this microfluidic system as an analog computational device is its high parallelism. Its weakness is the exponential increase in its physical size with the number of vertices. This space-time tradeoff is reminiscent of the limitations of using DNA for solving large NP problems (refs. 5-7). We estimate that the largest graph that might be solved with our algorithm, by using 12-inch wafers (commercially available) and 200-nm channels (within the range of photolithography), is 20 vertices. If we use space more efficiently by encoding subgraphs in a plane and use the third dimension for fluid flow, we might solve 40-vertex graphs. By using a computer capable of performing 10^9 operations per second, a 40-vertex graph can be solved in about 20 min, which makes this microfluidic approach (in its current form) impractical to compete with traditional silicon computers for solving large search problems. In comparison to DNA-based computation, this microfluidic system can carry out certain logical operations, such as addition, more naturally. The z-direction flow in the four-layer microfluidic device (Fig. 4) acts as an integrator by adding the beads that arrive at the wells from all of the layers. In contrast, the implementation of an algorithm for DNA-based addition was nontrivial (ref. 8), although a far more direct method for DNA addition has been proposed (ref. 17) and partially implemented (ref. 18).

    The algorithm we described here for using fluids to search the parallel architecture of a microfluidic system could also, perhaps, be implemented in a 3D microelectronic circuit. There are, however, several advantages to using microfluidic systems. (i) Fluids can carry fluorescent beads or molecules, thereby making the readout by using parallel optical systems simpler than in microelectronic circuits. (ii) Many different "color" beads/molecules can be used in microfluidic systems, whereas electrons have only one "color"; this feature permits fluidic systems to encode more information than electrical systems. (iii) Microfluidic systems might not require power (our algorithm, for example, can be implemented by using gravity). Advantages of electrical over fluidic systems include ease of use (no clogging of channels) and the high speed at which electrons travel through the circuit (which is important for implementing sequential algorithms). Although clogging is a concern in microfluidic systems, it was not a problem under our experimental conditions, because of the relatively small sizes of the beads used (400 nm or smaller) compared with the widths of the microchannels (50 µm or greater).

    Another motivation for using microfluidic-based computation is the possibility of integrating fluidic components for controlling complex chip-based microanalytical devices. In addition, computation by using microfluidic systems is complementary to ones based on biological molecules (e.g., DNA) or coupled chemical reactions (refs. 19 and 20). The wells and channels in our 3D microfluidic system, for example, could compartmentalize and transport molecules and reactions for the construction of a chemical or DNA-based computer.

  • It seems to me like they are just moving the problem into the hardware arena; i.e., the size of the hardware grows exponentially, rather than computation time. Moreover, it's not clear from the article that the computation time won't grow exponentially either. I'm assuming that it won't, but these things are pretty tough to prove, and would require building a large number of these devices, each suited for a different # of vertices, and then graphing the computation time. Even that wouldn't be conclusive.

    So those of you worrying about AI can relax a bit. Also, building hardware which solves a single algorithm with no abstract constants is not particularly useful for taking over the world, simulating thought, etc.

    There might be some applications to decryption, if the size of the key is fixed, etc. But they'll have to build a different "computer" as the key size grows. Won't be a problem for the NSA, though...


  • You don't have to know much about physics to see that this is wrong. No matter what kind of abstractions you find convenient in your discipline, computation ultimately is implemented in terms of particles and their interactions. Particles have a finite size and finite interaction times, and you can't outrun an exponential with a classical system. You probably can't even outrun it with a quantum system.

    Such a network may solve an NP complete problem quickly under an idealized model of fluid dynamics. However, the real world does not obey idealized fluid dynamics. It doesn't obey it in theory, and it doesn't obey it in practice. Fluid dynamics is a convenient approximation that breaks down at some point.

    It's an embarrassment to see this kind of nonsense come out of Harvard. There are lots of people there who should know better. This isn't a problem with some obscure point in the physics of computation; it shows a pretty fundamental misunderstanding of physical theory by a physicist.

  • The abstract claims that their system solves NP complete problems in polynomial time, which is clearly wrong even given the little information they provide in the abstract.

    Well, I managed to get hold of the full article. It turns out that their circuits are exponential size, and the largest problem they can solve even using optimistic assumptions is of size 20 (if that). So they didn't even manage to solve the problem in polynomial time using fluid dynamics; they are merely confused about the meaning of the terms "NP complete" and "polynomial time" (which refer to complexity on a Turing machine, not circuit depth). In their introduction, they also make lots of mistakes in the definition of NP-completeness.

    These people don't know what they are doing, and they don't seem to have found anything interesting as far as I can tell. Just ignore it. As far as I know, not all articles in PNAS are peer reviewed; this one should have been.


  • Unfortunately, you need to infer the definition of the problem from his description of the solution. I originally thought it was an MST as well, but as SLi points out it sounds like it is a Steiner tree problem. To be exact, a Euclidean Steiner tree problem, which is NP-hard.


    you've got a solution to a particular version of the minimum spanning tree problem (the points are all located in 2-D Euclidean space, which is not necessarily true for the general problem).

    So it would seem that this is not an MST problem. Additionally, the fact that the graph is embeddable in a 2D Euclidean space does not make a difference. MST is in P, regardless of the weights assigned to the edges.
  • I'm still not convinced you'll get the global minimum via the method described, though - it's not something I'll just buy with a handwave.

    Agreed.

    I guess what's most interesting about this example is that it challenges us to think about a "computer" in a way that we normally do not. Makes me wonder if someday my system will have an add-on NP-complete card for handling a specific NP-complete problem.
