Polynomial Time Code For 3-SAT Released, P==NP

An anonymous reader writes "Vladimir Romanov has released what he claims is a polynomial-time algorithm for solving 3-SAT. Because 3-SAT is NP-complete, this would imply that P==NP. While there's still good reason to be skeptical that this is, in fact, true, he's made source code available and appears decidedly more serious than most of the people attempting to prove that P==NP or P!=NP. Even though this is probably wrong, just based on the sheer number of prior failures, it seems more likely to lead to new discoveries than most. Note that there are already algorithms to solve 3-SAT, including one that runs in time (4/3)^n and succeeds with high probability. Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical."
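For context on that (4/3)^n figure: it refers to Schöning's randomized local-search algorithm for 3-SAT. Below is a minimal sketch in Python; the function and variable names are mine, not from any released code, and for the actual high-probability guarantee the number of restarts has to grow on the order of (4/3)^n rather than the fixed default used here.

    import random

    def schoening_3sat(clauses, n_vars, tries=1000):
        # Randomized local search in the spirit of Schoening's algorithm.
        # clauses: list of 3-tuples of DIMACS-style literals
        # (positive int = variable must be true, negative = negated).
        # Returns a satisfying assignment or None if none was found;
        # None is not a proof of unsatisfiability.
        def satisfied(clause, assign):
            return any((lit > 0) == assign[abs(lit)] for lit in clause)

        for _ in range(tries):
            # restart from a uniformly random assignment
            assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(3 * n_vars):
                unsat = [c for c in clauses if not satisfied(c, assign)]
                if not unsat:
                    return assign
                # flip a random variable from a random unsatisfied clause
                lit = random.choice(random.choice(unsat))
                assign[abs(lit)] = not assign[abs(lit)]
        return None

    # tiny example: (x1 or x2 or not x3) and (not x1 or x2 or x3)
    print(schoening_3sat([(1, 2, -3), (-1, 2, 3)], 3))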
  • Even though this is probably wrong, just based on the sheer number of prior failures ...

    Okay, so I'm going to agree with you that it's probably wrong. Having read the paper once last night in a sort of sleepy state, I can say it's certainly a novel way of massaging each 3-SAT problem into an expanded space of memory (compact triplets structures) and then reducing that structure to solve the problem (all within polynomial time).

    So the great thing about this paper is that it's short, it's not purely theoretical (which is a double-edged sword) because it has actually been implemented (in Java, no less), and it's incredibly falsifiable. I'm not smart enough to poke holes in his proofs, but people can apply the code to DIMACS-formatted [pdmi.ras.ru] problems to their heart's content, and if they find one that produces the wrong answer then the claim is falsified (a rough harness for doing exactly that is sketched at the end of this comment).

    I hope we find someone capable of commenting on the proof involving the transformation from the compact triplets formula to the compact triplets structure, and on the hyperstructures presented in the paper. If this is legit it's huge: 3-SAT is the problem that gets reduced to other problems in order to show that those problems are NP-complete, so a polynomial-time algorithm for it reaches everything in NP. That 3-SAT is itself NP-complete was proved via the Cook-Levin theorem [wikipedia.org] and is covered in the classic Computers and Intractability: A Guide to the Theory of NP-Completeness by Garey and Johnson.

    Refreshingly tangible implementation, if I may say so myself!
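    For anyone who wants to actually run that falsification experiment, here is a rough harness in Python. The candidate_solver hook mentioned at the bottom is a hypothetical placeholder, not Romanov's actual interface; the idea is just to parse DIMACS CNF files and cross-check small instances against brute force.

        from itertools import product

        def parse_dimacs(path):
            # Minimal DIMACS CNF reader: returns (number of variables, list of clauses).
            clauses, n_vars = [], 0
            with open(path) as f:
                for line in f:
                    tokens = line.split()
                    if not tokens or tokens[0] in ('c', '%'):
                        continue
                    if tokens[0] == 'p':              # header: "p cnf <vars> <clauses>"
                        n_vars = int(tokens[2])
                        continue
                    lits = [int(t) for t in tokens if int(t) != 0]
                    if lits:
                        clauses.append(lits)
            return n_vars, clauses

        def brute_force_sat(n_vars, clauses):
            # Exhaustive ground truth -- only usable on small instances.
            for bits in product([False, True], repeat=n_vars):
                assign = dict(enumerate(bits, start=1))
                if all(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses):
                    return True
            return False

        # candidate_solver(n_vars, clauses) stands in for the claimed polynomial-time
        # solver; any instance where its verdict differs from brute_force_sat (or where
        # a "satisfying" assignment it returns fails to check out) falsifies the claim.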

  • by dch24 ( 904899 ) on Thursday January 20, 2011 @01:11PM (#34941342) Journal
    I like what's already posted. But here's another one:

    I want to pick the lock on your car. It's one of those fancy code entry locks and I only have to press 3 buttons, but that's not really important.

    Everyone has been saying that it's very, very hard (NP) to crack the code by trying combinations. Now there's a guy saying it's not quite as hard (P) and he wants everyone on the internet to check his work.
  • by dch24 ( 904899 ) on Thursday January 20, 2011 @01:12PM (#34941362) Journal
    I should add: mathematicians have been saying that NP is much, much harder than P, and it has always seemed logical to say that.

    But if this guy can "crack the code" (that is, solve 3-SAT in polynomial time), he will have proven that NP is no harder than P.

    The debate over whether NP is harder than P has been going on for a long time.
  • Re:encryption (Score:4, Interesting)

    by mr exploiter ( 1452969 ) on Thursday January 20, 2011 @01:13PM (#34941366)

    You really don't know what you're talking about, do you? Here is a counterexample: assume encryption/decryption costs O(n^2) and cryptanalysis costs O(n^3), i.e., both are polynomial. You can still choose n so that the cryptanalysis is arbitrarily more expensive than the encryption, so this result alone (if it's true) doesn't mean a thing for crypto.
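    To put concrete (and entirely made-up) numbers on that, here is a tiny Python sketch; the exponents are illustrative and not a model of any real cipher.

        # Made-up costs, purely illustrative: legitimate use ~ n^2, attack ~ n^3.
        for n in (10**3, 10**6, 10**9):
            defender, attacker = n**2, n**3
            print(f"n = {n:>10}: the attacker does {attacker // defender} times the defender's work")
        # The ratio is exactly n, so it can be made as large as you like,
        # even though both costs are polynomial.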

  • by kiwix ( 1810960 ) on Thursday January 20, 2011 @01:31PM (#34941632)

    While you did a good job of explaining the general relation between NP-hardness and cryptography, there are several factual errors in your message. First, there are many asymmetric encryption algorithms that are not based on factorization: many are based on the discrete logarithm, and other hard problems are used in a few constructions. Second, we don't know whether factoring is NP-hard, and it is conjectured not to be NP-hard (which does not mean we think it's polynomial!).

    Also, you seem to have missed the point of NP-hardness and NP-completeness: if we can solve one NP-hard problem in polynomial time, then by definition this proves that P=NP and we can solve all NP problems in polynomial time. (A problem is said to be NP-complete if it is NP-hard and is itself in NP.)
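    Spelled out in standard textbook notation (my notation, not from the thread; \le_p denotes a polynomial-time reduction), the definitions in play here are:

        \begin{align*}
        X \text{ is NP-hard} &\iff \forall Y \in \mathrm{NP}:\ Y \le_p X \\
        X \text{ is NP-complete} &\iff X \in \mathrm{NP} \text{ and } X \text{ is NP-hard} \\
        X \in \mathrm{P} \text{ and } X \text{ is NP-hard} &\implies \mathrm{NP} \subseteq \mathrm{P}
        \end{align*}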

  • by Anonymous Coward on Thursday January 20, 2011 @01:40PM (#34941754)

    "Every asymmetrical encryption algorithm in the field relies on the factoring problem, which is NP hard. If P==NP, then suddenly we know the factoring problem is NP easy. Further, solving one NP hard problem would effectively supply new strategies to solve other NP hard problems. QED."

    No. You're neglecting the discrete log problem, which underpins Diffie-Hellman. There are probably other, more esoteric algorithms that rest on other hard problems, like the knapsack problem. And I believe you also misstated the RSA assumption.

    The RSA assumption is that computing the Euler phi/totient function for n=p*q where p,q are prime is HARD*. The RSA assumption is RELATED to but NOT EQUIVALENT to factoring. The relationship is that RSA can be reduced to factoring, i.e., factor n into p,q and return phi(n)=(p-1)(q-1). That's a single direction. The reverse, a reduction from factoring to RSA/totient is NOT KNOWN to exist.

    So you're correct in the sense that if factoring is easy, RSA is easy. If RSA is easy, factoring may still be HARD*. Reductions in both directions would prove "necessary and sufficient/if and only if".

    *HARD has a technical definition that's not necessary for this comment.
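    A toy illustration of that single direction (if you can factor n, you can break RSA), with textbook-sized numbers that are nowhere near real key sizes:

        from math import gcd

        # Toy numbers only -- real RSA moduli are hundreds of digits long.
        p, q, e = 61, 53, 17          # secret primes and a public exponent
        n = p * q                     # public modulus
        phi = (p - 1) * (q - 1)       # Euler totient, available once n is factored
        assert gcd(e, phi) == 1
        d = pow(e, -1, phi)           # private exponent e^-1 mod phi(n) (Python 3.8+)

        m = 42                        # a message
        c = pow(m, e, n)              # encrypt with the public key (e, n)
        assert pow(c, d, n) == m      # factoring n was enough to decrypt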

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Thursday January 20, 2011 @01:44PM (#34941816) Homepage Journal

    I predict the same thing; however, how awesome would it be if the paper is correct!

    I wouldn't use the term "debunked." "Shown to be incorrect" is better, and being shown incorrect is fine and expected in science. To me, "debunked" implies some sort of fraud is taking place.

  • by dvdkhlng ( 1803364 ) on Thursday January 20, 2011 @02:12PM (#34942194)

    Maybe I'm overlooking something, but to me it looks like they're doing the reduction to a polynomial-time problem already at the very beginning of the paper (I guess if there is a fault, that's where it hides). As soon as they go to the "compact triplet" structure, the 3-SAT instance is polynomial-time solvable using a trellis algorithm, very similar to the one employed to decode convolutional codes (a toy version of that trellis sweep is sketched at the end of this comment).

    In fact they're decomposing the initial 3-SAT problem into multiple "compact triplet" 3-SAT problems intersected using an AND operation. But since these intersected 3-SAT formulas use the same variables, without any interleaving (permutation) applied, the trellis algorithm still applies (just like solving for a convolutional encoder with more than one check bit per input bit).

    Thinking once more about that: the compact triplet structure is clearly not general enough to express generic 3-SAT problems. This is like attempting to transform a quadratic optimization problem x^T*H*x involving a symmetric matrix H into a corresponding problem with a tri-diagonal matrix H.

    The only way I can see to do the transform in general is by introducing exponentially many helper variables, which would make the transformed instance exponentially large and the overall algorithm no longer polynomial. But it does not look like they're attempting anything like that.
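    To see why the compact-triplet ("chain") structure is the easy case: when clause i only touches variables i, i+1 and i+2, a trellis sweep over the four possible values of each adjacent variable pair decides satisfiability in time linear in the number of clauses. A rough Python sketch of that dynamic program (my own formulation, not Romanov's code):

        from itertools import product

        def chain_3sat_satisfiable(clauses):
            # Trellis/dynamic-programming satisfiability check for chain-structured 3-SAT.
            # clauses[i] is a triple of signs (+1 or -1) for variables i, i+1, i+2:
            # +1 means the variable appears positively in clause i, -1 means negated.
            # Work per clause is constant (at most 8 transitions), so the whole sweep
            # is linear in the number of clauses instead of trying 2^n assignments.
            states = set(product([False, True], repeat=2))   # reachable (x_i, x_{i+1}) pairs
            for signs in clauses:
                nxt = set()
                for a, b in states:
                    for c in (False, True):                  # value of the newly exposed variable
                        if any((s > 0) == v for s, v in zip(signs, (a, b, c))):
                            nxt.add((b, c))                  # slide the window by one variable
                if not nxt:                                  # no assignment survives this clause
                    return False
                states = nxt
            return True

        # Example: (x0 or x1 or not x2) and (not x1 or x2 or x3) -> True
        print(chain_3sat_satisfiable([(+1, +1, -1), (-1, +1, +1)]))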

  • by johnwbyrd ( 251699 ) on Thursday January 20, 2011 @03:52PM (#34943478) Homepage

    Romanov's algorithm strongly resembles an algorithm from a debunked paper published around 2004.

    Sergey Gubin wrote a series of similarly designed proofs [arxiv.org] around 2004. Instead of Romanov's notion of "concretization," Gubin used the term "depletion." Gubin's paper was debunked [arxiv.org] by some students at the University of Rochester.

    Both reduction algorithms throw out key information about the original truth tables, which causes the final solutions to be incorrect.

    Constructive and exhaustive proofs that P == NP should never be trusted. I'm not a huge fan of formality in proofs, but sometimes you really need it.
