Polynomial Time Code For 3-SAT Released, P==NP

An anonymous reader writes "Vladimir Romanov has released what he claims is a polynomial-time algorithm for solving 3-SAT. Because 3-SAT is NP-complete, this would imply that P==NP. While there's still good reason to be skeptical that this is, in fact, true, he's made source code available and appears decidedly more serious than most of the people attempting to prove that P==NP or P!=NP. Even though this is probably wrong, just based on the sheer number of prior failures, it seems more likely to lead to new discoveries than most. Note that there are already algorithms to solve 3-SAT, including one that runs in time (4/3)^n and succeeds with high probability. Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical."
  • by TheRaven64 ( 641858 ) on Thursday January 20, 2011 @12:52PM (#34941004) Journal

    All the experts in the field agree that P != NP

    Absolutely untrue. All experts in the field that I've spoken to think that P is probably not equal to NP, but that a proof either way is going to be very hard.

  • by Anonymous Coward on Thursday January 20, 2011 @12:55PM (#34941062)

    It is indeed quite simple. Imagine a long expression like this (! is "not"):
    (x1 or x4 or !x3) and (x1 or !x4 or x3) and (x2 or !x1 or !x4) etc.
    The question is "Is there a way to assign true/false to all the variables such that the end result is true".

    This is a satisfiability problem (SAT), and what makes it 3-SAT is that it has three variables per clause.
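    A brute-force sketch in Python (the clause encoding is invented here for illustration, in the spirit of the example above) shows both halves of the problem: the search tries up to 2^n assignments, while checking any one assignment is cheap:

```python
from itertools import product

# A literal is (variable, negated); a clause is a list of literals.
clauses = [
    [(1, False), (4, False), (3, True)],   # (x1 or x4 or !x3)
    [(1, False), (4, True), (3, False)],   # (x1 or !x4 or x3)
    [(2, False), (1, True), (4, True)],    # (x2 or !x1 or !x4)
]
variables = sorted({v for clause in clauses for v, _ in clause})

def satisfied(assignment, clauses):
    # A clause is satisfied when at least one of its literals is true.
    return all(any(assignment[v] != neg for v, neg in clause) for clause in clauses)

# Exhaustive search over all 2^n assignments -- this is the exponential part.
solution = None
for values in product([False, True], repeat=len(variables)):
    assignment = dict(zip(variables, values))
    if satisfied(assignment, clauses):
        solution = assignment
        break

print(solution)
```

    Real SAT solvers are vastly cleverer than this, but the shape of the question is exactly this loop.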

  • by Anonymous Coward on Thursday January 20, 2011 @12:57PM (#34941106)

    Did you check the simple wiki? []

  • by emurphy42 ( 631808 ) on Thursday January 20, 2011 @12:58PM (#34941114) Homepage

    If you understand Turing machines and big-O notation (and if you claim to be a programmer, you should) and polynomials (and if you claim to understand big-O notation, you should):

    • P is the set of all problems that can be solved in O(some polynomial of n) using a Turing machine. This can still be obnoxiously difficult, e.g. O(n^1000000), but even 1000000 * n^1000000 eventually beats e^n once n gets large enough.
    • NP is the set of all problems for which an alleged solution can be checked in O(some polynomial of n). The N stands for "non-deterministic" - imagine hooking up the Turing machine to another machine that spits out random maybe-solutions.
    • NP-complete problems are the hardest ones in NP, in the sense that if any of them is in P, then everything in NP is in P - because translating an instance of any other NP problem into an instance of the NP-complete problem is itself O(some polynomial of n).
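    The asymmetry in the second bullet can be made concrete with a short Python sketch (a made-up clause encoding, not tied to TFA): verifying a candidate assignment is a single pass over the clauses, even though finding one naively means trying up to 2^n assignments.

```python
def check(assignment, clauses):
    # One pass over the clauses: O(size of the formula) -- polynomial.
    return all(any(assignment[v] != negated for v, negated in clause)
               for clause in clauses)

# (A or B or C) and (!A or !B or C)
clauses = [[("A", False), ("B", False), ("C", False)],
           [("A", True), ("B", True), ("C", False)]]

print(check({"A": True, "B": False, "C": False}, clauses))   # True
print(check({"A": True, "B": True, "C": False}, clauses))    # False
```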
  • encryption (Score:5, Informative)

    by danlip ( 737336 ) on Thursday January 20, 2011 @12:59PM (#34941126)

    Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical.

    No, it means good encryption will be much less practical. Computers will always get faster, so "too slow" is not a good argument. If P != NP, you can always make encryption too hard to break by increasing the key size: the cost to encrypt and decrypt only goes up polynomially with key size, but the cost to break it goes up exponentially, which makes it too hard even for those with supercomputers. If P = NP, the cost to break it only goes up polynomially too. That puts the cost to encrypt in the same realm as the cost to break. So you can use much bigger keys, but it becomes more expensive, and while that may stop petty criminals it won't stop people with supercomputers.
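    A toy back-of-the-envelope in Python (the cost models are invented purely to show the shapes: k^3 as a stand-in for polynomial honest work, 2^k for brute-force key search):

```python
# Invented cost models: honest encryption work grows polynomially in key
# size k, while exhaustive key search grows exponentially.
for k in (64, 128, 256):
    use_cost = k ** 3          # stand-in for "polynomial" encrypt/decrypt work
    break_cost = 2 ** k        # stand-in for brute-force key search
    print(k, use_cost, break_cost // use_cost)
```

    Doubling the key size multiplies the honest work by 8 but squares the brute-force work -- that widening gap is exactly the asymmetry the parent describes, and it's the gap that vanishes if both sides are polynomial.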

  • by emt377 ( 610337 ) on Thursday January 20, 2011 @12:59PM (#34941130)

    I tried to look the problem up on Wikipedia, and all I got was inscrutable high-level math. From what I can gather, it seems like something that could be explained in layman's terms. Would someone be kind enough to do so?

    3-SAT: a series of 3-term clauses, of the form _clause1_ AND _clause2_ AND ... _clauseN_. Each clause[i] consists of (term1 OR term2 OR term3). Each term may be negated, and the same term may appear in multiple clauses. The problem: decide whether any assignment of the terms makes the whole expression evaluate to true, and find one if so. Exhaustive search of the assignment space takes exponential time; the decision problem is NP-complete.

    Not a complicated problem to understand at all, and a fairly useful one to solve. Since Boolean logic isn't linear, Gaussian reduction and elimination can't be used.

  • by bluefoxlucid ( 723572 ) on Thursday January 20, 2011 @12:59PM (#34941134) Homepage Journal

    And the vague reference to "encryption" in TFS is silly; there aren't any crypto algorithms in use whose security rests on P != NP.

    Every symmetric encryption algorithm in the field currently relies on the idea that computing the plaintext from the ciphertext and key is easy, but computing the key from the ciphertext is hard. Many can be broken if you can perform the same difficult mathematics in a short time.

    Most asymmetric encryption algorithms in the field rely on problems like factoring, which is NP hard. If P==NP, then suddenly we know the factoring problem is easy. Further, solving one NP hard problem would effectively supply new strategies to solve other NP hard problems. QED.

    Encryption relies on the theory that some things are just hard to do. Not that we don't know how, but that from what we know it's trivially doable yet takes so damn much work that it can't be finished in anything shorter than a huge length of time, even with all the resources in the world pointed at it.

  • by debrain ( 29228 ) on Thursday January 20, 2011 @01:02PM (#34941188) Journal

    Sir –

    I did some graduate research on satisfiability [], so perhaps I can offer some illumination.

    Satisfiability is a logic problem. You are given a set of clauses, each of which "or"s together some literals (variables that may be true or false), with all of the clauses "and"ed together. The question is: is there an assignment of the literals that makes the overall statement true? E.g.

    X = (A or B) & (!A or !B) & (A or !B)

    The question is: Is there a combination of A and B being true or false that would result in X being true? (In this example, X is true when A is true and B is false.)
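    The example can be checked mechanically; a short Python sketch enumerating all four assignments:

```python
from itertools import product

def X(A, B):
    # The formula from the comment: (A or B) and (!A or !B) and (A or !B)
    return (A or B) and ((not A) or (not B)) and (A or (not B))

solutions = [(A, B) for A, B in product((True, False), repeat=2) if X(A, B)]
print(solutions)   # [(True, False)] -- A true, B false, as stated above
```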

    3-SAT is satisfiability but with only 3 literals per clause, e.g.

    X = (A or B or C) and (A or !B or !C) and (!A or B or C) ...

    There's a polynomial-time mapping from any satisfiability problem to 3-SAT, as there is between all NP complete [] problems, which mapping is why they are so-classified. In other words, if you solve one NP complete problem in polynomial time (namely big-O notation O(ax^n) where n is the complexity i.e. number of literals) you solve them all in polynomial time. The most famous of these NP complete problems is probably the Knapsack problem [], though there are quite a few [].

    All to say, it's a logic puzzle that has quite a few applications but no simple solution yet.

    I'm not sure if the above is in layman terms, but I hope it is somewhat helpful.

  • by skeptikos ( 220748 ) on Thursday January 20, 2011 @01:11PM (#34941338)

    NP is short for Natalie Portman, and the car analogy follows:

    Adleman's chief scientist, Nickolas Chelyapov, offered this illustration: Imagine that a fussy customer walks onto a million-car auto square and gives the dealer a complicated list of criteria for the car he wants.

    "First," he says, "I want it to be either a Cadillac or a convertible or red." Second, "if it is a Cadillac, then it has to have four seats or a locking gas cap." Third, "If it is a convertible, it should not be a Cadillac or it should have two seats."

    The customer rattles off a list of 24 such conditions, and the salesman has to find the one car in stock that meets all the requirements. (Adleman and his team chose a problem they knew had exactly one solution.) The salesman will have to run through the customer's entire list for each of the million cars in turn -- a hopeless task unless he can move and think at superhuman speed.

    This serial method is the way a digital electronic computer solves such a problem.

  • by Anonymous Coward on Thursday January 20, 2011 @01:18PM (#34941452)


    Every NP-Complete problem traces back to the fact that satisfiability is NP-Complete (it was the problem Cook and Levin used to introduce the concept of NP-Completeness, and 3-SAT follows from it). Ever since then, getting into the NP-Complete club has meant reducing 3-SAT (or another known NP-Complete problem) to your problem of choice in polynomial time.

    The theorem is actually fairly simple, but I can't remember it off the top of my head, and the wikipedia article doesn't help at all. Anyone care to update it?

  • by debrain ( 29228 ) on Thursday January 20, 2011 @01:22PM (#34941512) Journal

    Correction: the big-Oh should be O(m*n^c), where c is a number that does not increase with the complexity of the input, though `n` and 'm' may.

    Usually the size of the input increases complexity, but I believe in the case of satisfiability the complexity is proportional to the number of Horn clauses - clauses with at most one positive literal - because of difficulties reducing Horn clauses. I seem to recall that reducing Horn clauses to non-Horn clauses is itself an NP-complete problem; everything else (which is severable) in determining satisfiability is provably polynomial.

    In Vladimir Romanov's paper, the claimed algorithm solves 3-SAT in O(m * n^4).
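    A side note on Horn clauses: a formula made entirely of Horn clauses (at most one positive literal each) is decidable in polynomial time by unit propagation, which is part of why they get singled out. A minimal Python sketch, with a clause encoding invented here for illustration:

```python
def horn_sat(clauses):
    """Unit propagation for Horn-SAT (each clause has at most one positive
    literal). Polynomial time -- a sketch, not an optimized solver."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            positives = [v for v, pos in clause if pos]
            negatives = [v for v, pos in clause if not pos]
            if any(v in true_vars for v in positives):
                continue   # clause already satisfied by a true positive literal
            if any(v not in true_vars for v in negatives):
                continue   # some negated variable is still false, so clause holds
            if positives:
                true_vars.add(positives[0])   # the positive literal is forced true
                changed = True
            else:
                return None                   # every literal falsified: unsatisfiable
    return true_vars

# (A) and (!A or B) and (!B or C)  -- forces A, B, C all true
clauses = [[("A", True)], [("A", False), ("B", True)], [("B", False), ("C", True)]]
print(sorted(horn_sat(clauses)))   # ['A', 'B', 'C']
```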

  • by Zediker ( 885207 ) on Thursday January 20, 2011 @01:33PM (#34941654)
    Best car analogy I can think of is:

    Is it easy both to plan a car trip and to check that your car trip is planned well? If yes, then P=NP; if no, then P!=NP; and if you can't determine either yes or no, then it's all wrong ;)
  • by hawkfish ( 8978 ) on Thursday January 20, 2011 @01:40PM (#34941768) Homepage

    Linguam romanae scio. Latina difficila non est; ego in annis tribus didici.

    Uh... I know the Roman language. Latin is not difficult; I learned it in three years.


  • Re:encryption (Score:4, Informative)

    by Chowderbags ( 847952 ) on Thursday January 20, 2011 @01:46PM (#34941850)
    Also big-O notation can hide arbitrarily large constants. This is why, despite the Coppersmith-Winograd algorithm [] being the "fastest" matrix multiplication algorithm (in terms of having the lowest exponent), it still isn't used. The constant is so large that the only matrices that it would actually be faster for are way beyond the size that modern hardware can handle.
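    A quick numeric illustration of that point (the constant and exponents here are invented for the sake of the example, not Coppersmith-Winograd's actual figures):

```python
# Hypothetical constants showing how big-O can hide the crossover point:
# a "fast" c * n^2.38 algorithm only beats plain n^3 once n is enormous.
c = 1e6   # made-up constant factor for the asymptotically faster algorithm

def fast(n):  return c * n ** 2.38
def naive(n): return n ** 3

n = 2
while fast(n) >= naive(n):
    n *= 2
print(n)   # first power of two where the "fast" algorithm actually wins
```

    With a constant of a million, the asymptotic winner only pulls ahead around n in the billions -- far past what the hardware can hold.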
  • by mrstrano ( 1381875 ) on Thursday January 20, 2011 @01:51PM (#34941902) Homepage

    That's exactly right!

    Please read these two blog posts about the consequences of P=NP from an expert in the field: [] []

  • by bunratty ( 545641 ) on Thursday January 20, 2011 @02:20PM (#34942296)

    You left out the most important part. It takes lots of work to find the car, but it takes a short time to check that it's the right car once it's found. In other words, if you're lucky and always guess the right car on your first try (in other words nondeterministically), you can check that it's the right solution quickly (in polynomial time). Nondeterministic polynomial time = NP.

    In 3-SAT, it takes exponential time (as far as we know) to find an assignment to the variables that satisfies all the conditions, but polynomial time to check whether a particular assignment is correct. Likewise, no known polynomial-time algorithm factors a product of two large primes, but it takes only polynomial time to check that a proposed factorization is correct.
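    The factoring half of that is easy to see in Python: verification is a single multiplication, while even naive search does work that grows exponentially in the number of digits. (999983 and 1000003 are well-known primes, used here just as an example.)

```python
p, q = 999983, 1000003   # two primes
n = p * q

# Checking a proposed factorization: one multiplication. Fast.
assert p * q == n

# Finding the factors by trial division: the loop count grows
# exponentially in the number of *digits* of n.
def trial_divide(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(trial_divide(n))   # (999983, 1000003)
```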

  • by EuclideanSilence ( 1968630 ) on Thursday January 20, 2011 @02:34PM (#34942460)
    This is not a P=NP paper. The paper solves a problem on a related data structure in polynomial time (quartic time), then shows that it can be used to solve some cases of 3-SAT. The three outputs the algorithm can give are "the formula is not satisfiable", "the formula is satisfiable" (with the solution given), and "failure of classification" -- it couldn't solve the problem. The important question to put to the experts on this paper isn't "is it correct" (it probably is) but "how effective is it".
  • by pipatron ( 966506 ) <> on Thursday January 20, 2011 @02:40PM (#34942538) Homepage

    there's a 9 at the end

    This is why your understanding fails. There is no end.

    What is really so complicated by these simple examples:

    1/3 + 1/3 + 1/3 = 3/3 = 1

    And since 1/3 exactly equals 0.333... we have:

    0.333... + 0.333... + 0.333... = 0.999...

    And since the two sums are exactly equal, 0.999... equals 1. It's not really that magic, and your normal daily life will not be affected. It is, however, true.
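    Python's exact rational arithmetic makes those sums concrete -- no rounding is involved anywhere:

```python
from fractions import Fraction

# Exact arithmetic: 1/3 + 1/3 + 1/3 really is 1.
third = Fraction(1, 3)
print(third + third + third)   # 1

# Finite truncations 0.3, 0.33333, ... fall short of 1/3 by exactly 1/(3*10^n);
# only the *infinite* expansion 0.333... equals 1/3.
for digits in (1, 5, 10):
    truncated = Fraction(10 ** digits // 3, 10 ** digits)
    print(digits, third - truncated)
```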

  • by BrotherBeal ( 1100283 ) on Thursday January 20, 2011 @02:42PM (#34942580)
    Take a look at it through a fairly simple algebraic proof.

    1.) 0.99999... * 10 = 9.99999... // decimal multiplication by 10 means we just shift to the left and the infinite decimal expansion isn't affected.

    2.) 9.99999... - 0.99999... = 9 // the infinite decimal expansion is still a number and there's no reason we can't subtract it.

    3.) 9 / 9 = 1 // if we take the difference from the above subtraction and "undo" the multiplication in step 1, we need to divide by 9 because we've just removed one of what we multiplied.

    Therefore 0.99999... = 1. Q.E.D.
  • by Surt ( 22457 ) on Thursday January 20, 2011 @03:53PM (#34943498) Homepage Journal

    The clauses in 3sat are required to be disjunctions (OR).

  • by Fuzzy Eric ( 201529 ) on Thursday January 20, 2011 @04:19PM (#34943826)

    Actually, it's unknown whether integer factorization (into primes) is NP-complete []. Although a P-time solution to an NP-complete problem (e.g. 3-SAT) would yield a P-time solution to integer factorization, the converse (polynomial-time factorization implying P=NP) isn't known to hold. (And it seems unlikely to be the case, given the number of complexity classes that would collapse together. (ibid.))

    For a long time, it was believed that even just testing an integer for primality might require super-polynomial time, which would have put primality testing outside P. But Agrawal, Kayal and Saxena produced a P-time primality test algorithm [] in 2002. (Note the one-way implication: if primality testing had been provably impossible in P-time, then integer factoring could not be in P-time either, because factoring an integer also answers the primality question -- it returns either the trivial factorization n = 1*n, i.e. "is prime", or a nontrivial factorization n = p*q, i.e. "is composite".) So a P-time primality test made it less clear which time complexity class integer factorization (into primes) lives in.
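    The AKS algorithm itself is intricate, but as an illustration that primality really can be tested quickly, here is the standard Miller-Rabin test in Python -- randomized rather than deterministic like AKS, but also polynomial in the bit-length of n:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test, polynomial in the bit-length of n."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7):
        if n % small == 0:
            return n == small
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # witness found: definitely composite
    return True                     # probably prime

print(miller_rabin(999983), miller_rabin(1000000))   # True False
```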

    In spite of this nit-picking, your basic idea is right. :-)

  • by kbolino ( 920292 ) on Thursday January 20, 2011 @05:10PM (#34944526)

    Let me try to address your questions in a manner that is not disrespectful.

    1) The assertion that 0.999... = 1 is not equivalent to or in any way related to assertions like 0.888... = 1, which are false (why? there exists a real number between 0.888... and 1, take 0.9 for example). However, you could think of it as being related to the assertion that 0.888... = 8/9.

    2) Yes, the assertion has everything to do with the infinite number of repeating digits. For comparison, the assertion that 0.999...9 = 1, where there is a finite number of 9s, is false, no matter how large that number of 9s is.

    3) There are many explanations behind this infinite nature. One of the basic ones is that 0.999... actually represents a series (infinite sum), in this case a geometric series of the form 0.9 * (0.1^0 + 0.1^1 + 0.1^2 + 0.1^3 + ...). The sum of a geometric series 1 + r + r^2 + ... with ratio |r| < 1 is 1/(1-r), so the sum here is 0.9 * 1/(1-0.1) = 0.9 * 1/0.9 = 1. I've seen some other, more elegant explanations offered here and elsewhere, but this one is sound and requires only basic calculus (the proof of the summation formula uses limits).
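    The partial sums of that series can be tracked exactly in Python; after n terms the gap to 1 is exactly 1/10^n, which shrinks to nothing as n grows:

```python
from fractions import Fraction

# Partial sums of 0.9 + 0.09 + 0.009 + ..., i.e. 0.9, 0.99, 0.999, ...
for n in range(1, 6):
    partial = sum(Fraction(9, 10 ** k) for k in range(1, n + 1))
    print(n, partial, 1 - partial)   # the gap 1 - partial is 1/10^n
```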

    4) There is no "property" that enables this assertion. Instead, this claim arises because of the disparity between the representation of numbers and the numbers themselves. Within the real numbers, 0.999... and 1 are not distinct. They appear distinct because of the decimal representation (and indeed, a representation in any base exhibits the same characteristic). This is true for all rational numbers when represented as decimals, e.g. 0.4999... = 0.5 or 0.888... = 8/9 from above.

  • by Nurgled ( 63197 ) on Thursday January 20, 2011 @05:37PM (#34944932)

    You made an error in your third line:

    .5 * 10 = 5
    5 - .5 = 4.5 = 9 * .5
    4.5 / 9 = 0.5
    0.5 = 0.5. QED.

  • by caerwyn ( 38056 ) on Thursday January 20, 2011 @06:09PM (#34945438)

    One of the first things I learned is that 0.333... (as an example) is an approximation of 1/3. A fraction such as 1/3 cannot be 100% accurately represented in a decimal system.

    This is incorrect, and is apparently the source of most of your confusion. 1/3 = 0.333...; there's no approximation going on. 1/3 is approximated by 0.3, and 0.33, and 0.333, but the infinite decimal is *not* an approximation.

    The problem here is that people are generally very ill-equipped to handle the idea of infinity, and a lot of common sense doesn't really work. You can't "tack another 5 on the end" of 0.999... to get a number halfway between 0.999... and 1, as some other poster commented, precisely because there is no end for it to be tacked onto. This is why it's ultimately equal to 1.0.

"There is no distinctly American criminal class except Congress." -- Mark Twain