
Polynomial Time Code For 3-SAT Released, P==NP

An anonymous reader writes "Vladimir Romanov has released what he claims is a polynomial-time algorithm for solving 3-SAT. Because 3-SAT is NP-complete, this would imply that P==NP. While there's still good reason to be skeptical that this is, in fact, true, he's made source code available and appears decidedly more serious than most of the people attempting to prove that P==NP or P!=NP. Even though this is probably wrong, just based on the sheer number of prior failures, it seems more likely to lead to new discoveries than most. Note that there are already algorithms that solve 3-SAT much faster than brute force, including a randomized one that runs in O((4/3)^n) time and succeeds with high probability. Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical."
This discussion has been archived. No new comments can be posted.

  • by eldavojohn ( 898314 ) * <eldavojohn AT gmail DOT com> on Thursday January 20, 2011 @11:40AM (#34940840) Journal

    Even though this is probably wrong, just based on the sheer number of prior failures ...

    Okay, so I'm going to agree with you that it's probably wrong. Having read the paper once last night in a sort of sleepy state, I'd say it's certainly a novel way of massaging each 3-SAT problem into an expanded memory representation (the "compact triplets" structures) and then reducing that structure to solve the problem, all within polynomial time.

    So the great thing about this paper is that it's short, it's not purely theoretical (which is a double-edged sword) since it has actually been implemented (in Java, no less), and it's incredibly falsifiable. I'm not intelligent enough to poke holes in his proofs, but people can apply the code to DIMACS-formatted [pdmi.ras.ru] problems to their heart's content, and if they find one that produces the wrong answer then the claim is falsified (see the sketch of that kind of cross-check at the end of this comment).

    I hope we find someone capable of commenting on the proof involving the transformation from the compact triplets formula to the compact triplets structure, and on the hyperstructures presented in the paper. If this is legit, it matters because 3-SAT is the problem that more complex problems are reduced to in order to show that those problems are NP-complete; its own NP-completeness follows from the Cook-Levin theorem [wikipedia.org] and is covered in the classic Computers and Intractability: A Guide to the Theory of NP-Completeness by Garey and Johnson.

    Refreshingly tangible implementation, if I may say so myself!
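
    For anyone who wants to try exactly that, here is a minimal, hypothetical sketch (the clause data and the idea of comparing against the released solver are mine, not from the paper) of how such a cross-check could look in Java. DIMACS CNF encodes each clause as signed variable numbers terminated by 0, so "1 -4 3 0" means (x1 OR !x4 OR x3); verifying a claimed satisfying assignment is just direct evaluation.

      import java.util.*;

      // Hypothetical cross-check for the claimed solver: whenever it reports a
      // satisfying assignment for a DIMACS CNF instance, verify that assignment
      // by direct evaluation.
      public class DimacsCheck {

          // True iff 'assignment' (indices 1..n) satisfies every clause.
          static boolean satisfies(List<int[]> clauses, boolean[] assignment) {
              for (int[] clause : clauses) {
                  boolean clauseTrue = false;
                  for (int lit : clause) {
                      if ((lit > 0) == assignment[Math.abs(lit)]) { clauseTrue = true; break; }
                  }
                  if (!clauseTrue) return false;  // one falsified clause sinks the formula
              }
              return true;
          }

          public static void main(String[] args) {
              // DIMACS body of a 3-clause example over 4 variables:
              //   1  4 -3 0
              //   1 -4  3 0
              //   2 -1 -4 0
              List<int[]> clauses = Arrays.asList(
                  new int[]{1, 4, -3}, new int[]{1, -4, 3}, new int[]{2, -1, -4});
              boolean[] candidate = {false, true, true, false, false}; // x1=T, x2=T, x3=F, x4=F
              System.out.println("candidate satisfies formula: " + satisfies(clauses, candidate));
              // To stress-test the paper: run the released solver on many DIMACS files,
              // verify every "satisfiable" answer this way, and double-check every
              // "unsatisfiable" answer on small instances with an exhaustive search.
          }
      }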

    • by dvdkhlng ( 1803364 ) on Thursday January 20, 2011 @01:12PM (#34942194)

      Maybe I'm overlooking something, but to me it looks like they're doing the reduction to a polynomial-time problem already at the very beginning of the paper (if there is a fault, I'd guess that's where it hides). As soon as they go to the "compact triplet" structure, the instance of 3-SAT is polynomial-time solvable using a trellis algorithm, very similar to the algorithm employed to decode convolutional codes.

      In fact they're decomposing the initial 3-SAT problem into multiple "compact triplet" 3-SAT problems intersected using an AND operation. But as these intersected 3-SAT formulas use the same variables, without any interleaving (permutation) applied, the trellis algorithm still applies (just like solving for a convolutional encoder with > 1 check bits per bit).

      Thinking once more about that: the compact triplet structure is clearly not general enough to express generic 3-SAT problems. This is like attempting to transform a quadratic optimization problem x^T*H*x involving a symmetric matrix H into a corresponding problem with a tri-diagonal matrix H.

      The only way I see that they could do the transform is by introducing exponentially many helper variables, which would blow the instance up exponentially and defeat the whole point. But it does not look like they're attempting anything like that.
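
      For what it's worth, here is a rough sketch of why such a chain-structured formula is easy; it is my own simplified model (clause i restricted to variables i, i+1, i+2), not the paper's actual compact-triplet construction, but it shows the trellis/dynamic-programming idea in miniature.

        // Sketch of why a chain-like "compact triplet" layout is easy: here clause i is
        // assumed to use only the consecutive variables x_i, x_{i+1}, x_{i+2}.  This is
        // NOT the general problem and NOT the paper's construction.  A trellis pass over
        // adjacent variable pairs then decides satisfiability in O(m) time.
        public class TrellisSat {

            // clauses[i] = {s0, s1, s2}: signs of the three consecutive variables in
            // clause i+1 (+1 = plain variable, -1 = negated).
            static boolean satisfiableChain(int[][] clauses) {
                // reachable[a][b]: can the current pair of adjacent variables take values
                // (a, b) while satisfying all clauses processed so far?
                boolean[][] reachable = {{true, true}, {true, true}};
                for (int[] c : clauses) {
                    boolean[][] next = new boolean[2][2];
                    for (int a = 0; a <= 1; a++) {            // value of the clause's first variable
                        for (int b = 0; b <= 1; b++) {        // second variable
                            if (!reachable[a][b]) continue;
                            for (int d = 0; d <= 1; d++) {    // third variable
                                boolean sat = (c[0] > 0) == (a == 1)
                                           || (c[1] > 0) == (b == 1)
                                           || (c[2] > 0) == (d == 1);
                                if (sat) next[b][d] = true;   // shift the window by one variable
                            }
                        }
                    }
                    reachable = next;
                }
                for (boolean[] row : reachable)
                    for (boolean v : row)
                        if (v) return true;
                return false;
            }

            public static void main(String[] args) {
                // (x1 or !x2 or x3) and (!x2 or x3 or !x4)
                int[][] clauses = {{1, -1, 1}, {-1, 1, -1}};
                System.out.println(satisfiableChain(clauses));   // prints true
            }
        }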

    • by EuclideanSilence ( 1968630 ) on Thursday January 20, 2011 @01:34PM (#34942460)
      This is not a P=NP paper. The paper solves a problem on a related data structure in polynomial time (quartic time), then shows that it can be used to solve some cases of 3-SAT. The three outputs the algorithm can give are "the formula is not satisfiable", "the formula is satisfiable" (with the solution given), and "failure of classification", meaning it couldn't solve the instance. The important question we're waiting on the experts for isn't "is it correct" (it probably is) but "how effective is it".
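
      One way to make that concrete: a solver that may answer "failure of classification" is an incomplete heuristic, and it only bears on P vs. NP if that answer provably never occurs. A hypothetical sketch (all names invented, not Romanov's code) of the distinction:

        // Hypothetical wrapper illustrating why a "failure of classification" outcome
        // keeps this from being a P == NP result by itself.
        enum Verdict { SATISFIABLE, UNSATISFIABLE, UNKNOWN }

        class SolverSketch {
            // Stand-in for the claimed polynomial-time procedure (not the real code).
            static Verdict polynomialAttempt(int[][] clauses, int numVars) {
                return Verdict.UNKNOWN; // may legitimately give up on some instances
            }

            static boolean decide(int[][] clauses, int numVars) {
                Verdict v = polynomialAttempt(clauses, numVars);
                if (v != Verdict.UNKNOWN) return v == Verdict.SATISFIABLE;
                // Fallback for UNKNOWN: exhaustive search over 2^numVars assignments.
                // If UNKNOWN can actually occur, the overall procedure is exponential
                // in the worst case, so nothing is settled about P vs. NP.
                for (long mask = 0; mask < (1L << numVars); mask++) {
                    if (satisfies(clauses, mask)) return true;
                }
                return false;
            }

            static boolean satisfies(int[][] clauses, long mask) {
                for (int[] clause : clauses) {
                    boolean ok = false;
                    for (int lit : clause) {
                        boolean value = ((mask >> (Math.abs(lit) - 1)) & 1) == 1;
                        if ((lit > 0) == value) { ok = true; break; }
                    }
                    if (!ok) return false;
                }
                return true;
            }
        }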
  • by gman003 ( 1693318 ) on Thursday January 20, 2011 @11:42AM (#34940872)
    I tried to look the problem up on Wikipedia, and all I got was inscrutable high-level math. From what I can gather, it seems like something that could be explained in layman's terms. Would someone be kind enough to do so?
    • Did you read the whole page, or just the 3-SAT section? Starting from the beginning of the article, I think it was explained simply enough.

    • by Anonymous Coward on Thursday January 20, 2011 @11:55AM (#34941062)

      It is indeed quite simple. Imagine a long formula built from clauses like this (! is "not"):
      (x1 or x4 or !x3) and (x1 or !x4 or x3) and (x2 or !x1 or !x4) etc.
      The question is: "Is there a way to assign true/false to all the variables such that the end result is true?"

      This is a satisfiability problem (SAT), and what makes it 3-SAT is that each clause has exactly three literals (variables or their negations).
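
      To make that completely concrete, here is a tiny sketch (assuming nothing beyond the three clauses above) that tries all 16 true/false combinations of x1..x4 and prints every one that makes the whole formula true; 3-SAT simply asks whether the loop prints anything.

        // Try every assignment of x1..x4 for:
        // (x1 or x4 or !x3) and (x1 or !x4 or x3) and (x2 or !x1 or !x4)
        public class TinySat {
            public static void main(String[] args) {
                for (int mask = 0; mask < 16; mask++) {
                    boolean x1 = (mask & 1) != 0, x2 = (mask & 2) != 0;
                    boolean x3 = (mask & 4) != 0, x4 = (mask & 8) != 0;
                    boolean formula = (x1 || x4 || !x3) && (x1 || !x4 || x3) && (x2 || !x1 || !x4);
                    if (formula) System.out.printf("x1=%b x2=%b x3=%b x4=%b%n", x1, x2, x3, x4);
                }
                // For n variables this loop has 2^n iterations, which is exactly
                // why brute force stops being an option as n grows.
            }
        }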

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Did you check the simple wiki?

      http://simple.wikipedia.org/wiki/Boolean_satisfiability_problem [wikipedia.org]

    • Re: (Score:3, Informative)

      by emt377 ( 610337 )

      I tried to look the problem up on Wikipedia, and all I got was inscrutable high-level math. From what I can gather, it seems like something that could be explained in layman's terms. Would someone be kind enough to do so?

      3-SAT: a series of 3-term clauses, of the form _clause1_ AND _clause2_ AND ... _clauseN_. Each clause[i] consists of (term1 OR term2 OR term3). Each term may be negated. The same term may appear in multiple clauses. The problem: decide whether there is an assignment of the terms that makes the whole thing evaluate to true, and find one (or the whole solution space) if so. An exhaustive search of the term space takes exponential time, and the decision problem itself is NP-complete.

      Not a complicated problem to understand, at all. And a fairly useful one to solve. Since boolean logic isn't

    • by debrain ( 29228 ) on Thursday January 20, 2011 @12:02PM (#34941188) Journal

      Sir –

      I did some graduate research on satisfiability [wikipedia.org], so perhaps I can offer some illumination.

      Satisfiability is a logic problem. You are given a set of clauses, each of which "or"s together some literals (variables or their negations), with all of the clauses "and"ed together. The question is: is there an assignment of true/false to the variables that makes the overall statement true? E.g.

      X = (A or B) & (!A or !B) & (A or !B)

      The question is: Is there a combination of A and B being true or false that would result in X being true? (In this example, X is true when A is true and B is false.)

      3-SAT is satisfiability with exactly three literals per clause, e.g.

      X = (A or B or C) and (A or !B or !C) and (!A or B or C) ...

      There's a polynomial-time mapping from any satisfiability problem to 3-SAT, as there is between all NP-complete [wikipedia.org] problems; that mapping is why they are classified together. In other words, if you solve one NP-complete problem in polynomial time (namely big-O notation O(ax^n) where n is the complexity i.e. number of literals) you solve them all in polynomial time. The most famous of these NP-complete problems is probably the Knapsack problem [wikipedia.org], though there are quite a few [wikipedia.org].

      All to say, it's a logic puzzle that has quite a few applications but no simple solution yet.

      I'm not sure if the above is in layman's terms, but I hope it is somewhat helpful.
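
      To make the polynomial-time mapping mentioned above concrete, here is a small sketch of the standard textbook clause-splitting step (nothing from Romanov's paper): an arbitrary-width clause becomes a chain of 3-literal clauses linked by fresh helper variables, and applying it clause by clause is the usual SAT-to-3-SAT reduction.

        import java.util.*;

        // Standard SAT -> 3-SAT clause splitting.  Literals are signed ints (DIMACS
        // style): +v is the variable v, -v its negation.  A clause (a or b or c or d)
        // becomes (a or b or y) and (!y or c or d) for a fresh variable y; longer
        // clauses chain more helpers.  The output is satisfiable iff the input is.
        public class ThreeSatReduction {

            static List<int[]> splitClause(int[] clause, int[] nextFreshVar) {
                List<int[]> out = new ArrayList<>();
                if (clause.length <= 3) { out.add(clause); return out; }
                int prev = 0; // 0 = "no carried-in helper literal yet"
                int i = 0;
                while (clause.length - i > 2) {
                    int fresh = nextFreshVar[0]++;
                    if (prev == 0) out.add(new int[]{clause[i], clause[i + 1], fresh});
                    else           out.add(new int[]{prev, clause[i], fresh});
                    prev = -fresh;                       // next clause must carry "not fresh"
                    i += (out.size() == 1) ? 2 : 1;      // first split consumes two literals
                }
                out.add(new int[]{prev, clause[i], clause[i + 1]});
                return out;
            }

            public static void main(String[] args) {
                int[] wideClause = {1, 2, 3, 4, 5};   // (x1 or x2 or x3 or x4 or x5)
                int[] nextFreshVar = {6};             // first unused variable index
                for (int[] c : splitClause(wideClause, nextFreshVar))
                    System.out.println(Arrays.toString(c));
                // Prints: [1, 2, 6], [-6, 3, 7], [-7, 4, 5]
            }
        }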

      • by debrain ( 29228 ) on Thursday January 20, 2011 @12:22PM (#34941512) Journal

        Correction: the big-O should be O(m*n^c), where c is a constant that does not increase with the complexity of the input, though n and m may.

        Usually the size of the input increases complexity, but I believe in the case of satisfiability that complexity is proportional to the number of Horn clauses (clauses with at most one positive literal) because of difficulties reducing Horn clauses. I seem to recall that reducing Horn clauses to non-Horn clauses is itself an NP-complete problem; everything else (which is severable) in determining satisfiability is provably polynomial.

        In Romanov's proof, the claimed solution to these NP problems runs in O(m * n^4) time.

    • You have three satellites in space, each of which broadcasts a certain number of television programs but only one at a time. Your satellite receiver can only see a maximum of 2 satellites at a given instant, but will see all three of them during one 24 hour period. Now the 3-SAT problem is determining the probability that your favorite television math program will be pre-empted by a stupid football game.

      Or something like that if I remember correctly.
  • encryption (Score:5, Informative)

    by danlip ( 737336 ) on Thursday January 20, 2011 @11:59AM (#34941126)

    Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical.

    No, it means good encryption will be much less practical. Computers will always get faster, so "too slow" is not a good argument. If P != NP you can always make encryption too hard to break by increasing the key size: the cost to encrypt and decrypt only goes up polynomially with key size, but the cost to break it goes up exponentially, which makes breaking it too hard even for those with supercomputers. If P == NP, the cost to break it only goes up polynomially. That puts the cost to break in the same realm as the cost to encrypt. So you can use much bigger keys, but it becomes more expensive, and it may stop petty criminals but won't stop people with supercomputers.

    • by pavon ( 30274 )

      Yeah, but if the order of the polynomial is, say, 2 million, then you can still pick key sizes that are so computationally expensive to break as to be secure for all practical purposes.

    • Re:encryption (Score:4, Interesting)

      by mr exploiter ( 1452969 ) on Thursday January 20, 2011 @12:13PM (#34941366)

      You really don't know what you're talking about, do you? Here is a counterexample: assume encryption/decryption is O(n^2) and cryptanalysis is O(n^3), i.e. both polynomial. You can still choose n so that the cryptanalysis is arbitrarily more expensive than the encryption, so this result alone (if it's true) doesn't mean a thing for crypto.
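
      A quick back-of-the-envelope version of that point, taking the exponents above as given (2 for legitimate users, 3 for the attacker) and with everything else invented purely for illustration:

        // If encryption costs about n^2 operations and breaking costs about n^3, the
        // attacker/defender work ratio is n, which can be pushed as high as you like
        // just by picking a bigger security parameter n.
        public class PolyGap {
            public static void main(String[] args) {
                for (long n : new long[]{1_000, 1_000_000, 1_000_000_000L}) {
                    double encrypt = Math.pow(n, 2);
                    double attack  = Math.pow(n, 3);
                    System.out.printf("n=%d  encrypt ~ %.1e ops, attack ~ %.1e ops, ratio = %.1e%n",
                                      n, encrypt, attack, attack / encrypt);
                }
                // The catch: a polynomial gap (factor n) grows far more slowly than the
                // exponential gap we get today if P != NP, so key sizes would have to balloon.
            }
        }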

      • Re:encryption (Score:4, Informative)

        by Chowderbags ( 847952 ) on Thursday January 20, 2011 @12:46PM (#34941850)
        Also big-O notation can hide arbitrarily large constants. This is why, despite the Coppersmith-Winograd algorithm [wikipedia.org] being the "fastest" matrix multiplication algorithm (in terms of having the lowest exponent), it still isn't used. The constant is so large that the only matrices that it would actually be faster for are way beyond the size that modern hardware can handle.
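
        As a purely illustrative calculation (the 1e10 constant below is invented; the real constant isn't something I can quote), a lower exponent can lose to a big constant until the inputs are absurdly large:

          // Compare a hypothetical "fast" algorithm costing C * n^2.376 against plain n^3.
          // With an invented constant C = 1e10, the fast algorithm only wins once
          // n^0.624 > 1e10, i.e. n > (1e10)^(1/0.624) ~ 1e16 -- far beyond practical sizes.
          public class GalacticCrossover {
              public static void main(String[] args) {
                  double C = 1e10;   // made-up constant, purely for illustration
                  double crossover = Math.pow(C, 1.0 / (3.0 - 2.376));
                  System.out.printf("fast algorithm wins only for n > %.2e%n", crossover);
              }
          }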
  • by Bazman ( 4849 ) on Thursday January 20, 2011 @12:12PM (#34941350) Journal

    I have code for a solution to the Goldbach Conjecture, but it doesn't fit into my free storage on github....

  • If P==NP, then any company that wants to pump 10 mil into a fab building will be able to compete on chip layout strategies, since those layout optimization problems would no longer be intractable.
  • by shadowrat ( 1069614 ) on Thursday January 20, 2011 @12:27PM (#34941574)
    I can't really comment on the validity of the claims in the article. It's unclear if he has indeed proven P==NP. I do know that this article clearly proves I am only of average or slightly sub average intelligence.
  • by johnwbyrd ( 251699 ) on Thursday January 20, 2011 @02:52PM (#34943478) Homepage

    Romanov's algorithm strongly resembles an algorithm from a debunked paper published around 2004.

    Sergey Gubin wrote a series of similarly designed proofs [arxiv.org] around 2004. Instead of Romanov's notion of "concretization," Gubin used the term "depletion." Gubin's paper was debunked [arxiv.org] by some students at the University of Rochester.

    Both reduction algorithms throw out key information about the original truth tables, which causes the final solutions to be incorrect.

    Constructive and exhaustive proofs that P == NP should never be trusted at face value. I'm not a huge fan of formality in proofs, but sometimes you really need it.

"Life sucks, but it's better than the alternative." -- Peter da Silva

Working...