Forty Years of P=NP?

An anonymous reader writes "In the afternoon of May 4, 1971, at the Stouffer's Somerset Inn in Shaker Heights, Ohio, Steve Cook presented his STOC paper proving that Satisfiability is NP-complete and Tautology is NP-hard. 'The theorems suggest that Tautology is a good candidate for an interesting set not in [P] and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would be a major breakthrough in complexity theory.' And thus Cook formulated what was soon to be called the P versus NP problem. The rest is history. Here's the 1971 STOC Program (there were 143 attendees) and what that sacred ground looks like today."
  • by Anonymous Coward

    Alan Cobham wrote a paper defining the complexity classes L and L+. These are exactly P and NP. He may also have originated the concept of defining the complexity of an algorithm as a function of the input size.

    Alan did not, however, show that there were a large number of problems which were reducible to L+.

  • P=NP (Score:3, Interesting)

    by jjohn ( 2991 ) on Wednesday May 04, 2011 @09:39AM (#36024004) Homepage Journal

    15 years of developing software and I still don't know what P vs. NP means.

    Sad, sad old hacker.

    • Re: (Score:2, Insightful)

      by war4peace ( 1628283 )
      Make that two of us. And guess what, I don't even care :)
      • Re:P=NP (Score:5, Informative)

        by Yold ( 473518 ) on Wednesday May 04, 2011 @09:56AM (#36024218)

        My computer science is rusty, but essentially the question asks whether polynomial-time algorithms (n^2, n^3, ..., n^c, where c is a constant) exist for EVERY problem whose solutions are verifiable (checkable) ALSO in polynomial time.

        Classic example: the traveling salesman problem. Imagine you have to visit 5 cities and must find the ordering of visits that yields the lowest total distance traveled. This problem is NP-hard, thus it requires exhaustive search (5! solutions => n! time) to find an optimal solution. Verification of an optimal solution can be done in polynomial time (i.e. you already have the answer).

        The cool part about P=NP is that if ONE algorithm is found that solves an NP-hard problem in polynomial time, ALL NP problems become solvable in polynomial time. You can map one sort of NP problem (e.g. traveling salesman) to another sort (e.g. 3-SAT) and have it remain NP. So if for one NP-hard problem you find a solution in P, it follows that ALL NP problems are solvable in P.

        So basically it boils down to finding a holy grail of algorithms.

        P.S. Apologies in advance, I haven't touched my Sipser book in 3 years.
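
        To make that concrete, here's a minimal Python sketch of the brute-force search (the 5x5 distance matrix is made-up example data, and the tour is "open", with no return leg):

          # Brute force: try all n! orderings of the cities.
          from itertools import permutations

          dist = [[0, 2, 9, 10, 7],
                  [2, 0, 6, 4, 3],
                  [9, 6, 0, 8, 5],
                  [10, 4, 8, 0, 6],
                  [7, 3, 5, 6, 0]]  # dist[i][j] = distance from city i to city j

          def tour_length(order):
              # Total distance of visiting the cities in this order.
              return sum(dist[a][b] for a, b in zip(order, order[1:]))

          best = min(permutations(range(len(dist))), key=tour_length)
          print(best, tour_length(best))  # checks all 5! = 120 orderings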

        • by blair1q ( 305137 )

          So basically it boils down to finding a holy grail of algorithms.

          Oh, is that all?

          I think Mark Twain sussed it, then, and wrote it in allegorical form.

          Just look up Tom Sawyer and whitewashing the fence.

        • NP-hard is different from NP-complete.
          • by kasperd ( 592156 )
            NP-complete is by definition the intersection of NP and NP-hard. There are certainly NP-hard problems which are not in NP, so you are right that the sets are not identical. I believe the halting problem is NP-hard, but it certainly isn't NP-complete.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          First of all, your example is flawed. The optimization problem of TSP is _not_ polynomial-time verifiable. However, if the question is "is there a path of length less than k visiting every city", for some k, then the problem is polynomial-time verifiable. Also, there are algorithms solving TSP in less than n! time by dynamic programming, in O(2^n * n^2) or something like that.

          But yes, it is mainly the question: if a solution to a problem P is _verifiable_ in polynomial time, can we make an algorithm solving P in polynomial time?
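
          For reference, a sketch of that dynamic program (Held-Karp). It assumes a dist matrix like the grandparent's example and an open tour starting at city 0; the table has O(2^n * n) entries, each filled in O(n) time, hence O(2^n * n^2) overall:

            def held_karp(dist):
                # best[(S, j)] = shortest path starting at city 0, visiting
                # exactly the cities in bitmask S (city 0 included), ending at j.
                n = len(dist)
                best = {(1, 0): 0}
                for S in range(1, 1 << n):
                    if not S & 1:
                        continue  # every partial tour must include city 0
                    for j in range(1, n):
                        if not S & (1 << j):
                            continue
                        prev = S ^ (1 << j)  # the same set, minus city j
                        cands = [best[(prev, k)] + dist[k][j]
                                 for k in range(n) if (prev, k) in best]
                        if cands:
                            best[(S, j)] = min(cands)
                full = (1 << n) - 1
                return min(best[(full, j)] for j in range(1, n))

            print(held_karp([[0, 1, 4, 3],
                             [1, 0, 2, 5],
                             [4, 2, 0, 6],
                             [3, 5, 6, 0]]))  # -> 9 (the path 0-1-2-3)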

        • Re:P=NP (Score:4, Informative)

          by razberry636 ( 601469 ) on Wednesday May 04, 2011 @10:29AM (#36024644)
          You're really close!

          For NP-complete you need a slight modification of the traveling salesman problem. An example would be that you need to visit 5 cities, but you need to travel less than 50 miles. Is there a route that will get you through all 5 cities in less than 50 miles?

          To find a solution you need to search through all the permutations (factorial time), but verifying that you have a solution takes polynomial time.

          However, your original traveling salesman problem is NP-hard because there is no known way to verify that you have the shortest route without calculating all of the routes.

          I probably have an advantage here because I read the Sipser book less than a year ago. Let's see what I remember in another three years. ;)
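
          That verification step is the easy half, and it's what puts the decision version in NP. A quick sketch of a polynomial-time certificate checker (the route is the certificate; the names here are just for illustration):

            def route_is_good(dist, route, budget):
                # Certificate check 1: the route visits every city exactly once.
                if sorted(route) != list(range(len(dist))):
                    return False
                # Certificate check 2: its total length is under the budget.
                # Both checks are cheap -- no factorial search anywhere.
                total = sum(dist[a][b] for a, b in zip(route, route[1:]))
                return total < budget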

        • Re:P=NP (Score:5, Informative)

          by Missing.Matter ( 1845576 ) on Wednesday May 04, 2011 @10:32AM (#36024694)

          I think you're using NP, NP-hard, and NP-complete interchangeably, when they are very different.

          B is NP-hard if every A in NP polynomially reduces to B, even if B isn't in NP.

          B is NP-complete if B is in NP and also every A in NP is polynomially reducible to A.

          B is in NP if B can be decided in polynomial time by a nondeterministic Turing machine.

          • B is NP-complete if B is in NP and also every A in NP is polynomially reducible to A.

            And of course I mean "polynomially reducible to B" for anyone confused.

        • So basically it boils down to finding a holy grail of algorithms.

          Or alternatively, proving that there exists a single NP-hard problem for which no such algorithm exists. At least, if I read your post right.

          • That should read: no such algorithm exists to solve an NP-complete problem, rather than an NP-hard problem.

            NP-hard problems are at least as hard as everything in NP but might be harder than anything in NP. For example, the halting problem is NP-hard.

            NP-complete problems are at least as hard as everything in NP and, additionally, are in NP; thus they are the hardest possible problems in NP.

            Tim.

        • by mog007 ( 677810 )

          Spot on, but I believe you meant NP-Complete every time you said NP-Hard.

        • Sounds right to me. It's best known as the Knapsack Problem [wikipedia.org]. It's also known as the mail carrier problem, finding the most efficient route to deliver mail.

          Little did they know in 1971 that the answer was "Hire cheap labor from foreign countries". For P=NP, if P=customer service, NP = India.

        • Actually, if you have to visit 5 cities that is O(1). That's because in your case "n" is a constant. Nothing like nitpicking to start off the day right.

    • Re:P=NP (Score:4, Informative)

      by betterunixthanunix ( 980855 ) on Wednesday May 04, 2011 @09:46AM (#36024100)
      Perhaps you should read a textbook on computational complexity, or an algorithms text, or just take a course on theoretical computer science?

      In simple terms, problems in P can be solved in a polynomial number of operations on a computer with one processor (or with a constant number of processors), and problems in NP can be solved in a polynomial number of operations on a computer with an unbounded number of processors (or in other words, a computer which can explore an unbounded number of solutions at the same time). This is not the most rigorous definition of the classes, but it is one of the more intuitive.

      The problem is this: is P a proper subclass of NP (i.e. are there problems in NP which are not in P), or does P = NP? The Cook-Levin theorem established the existence of "NP-complete" problems, which are those which are in P if and only if P = NP.
      • Perhaps you should read a textbook on computational complexity, or an algorithms text, or just take a course on theoretical computer science?

        Why? I'm honestly curious. If the GP has been fine without much complexity theory, and has been out of academics for presumably over a decade, it doesn't seem worth it to bother. From the very little complexity theory I've encountered, it doesn't seem terribly useful to actual programmers unless they have a specific problem that other people have worked on. I don't mean to beat up on complexity theory; I'm a pure math person, so I'm fine with theory having a low amount of practical value, where it's studied...

        • Re:P=NP (Score:4, Informative)

          by betterunixthanunix ( 980855 ) on Wednesday May 04, 2011 @10:46AM (#36024842)
          Except that complexity theory does actually matter in the real world, and a solution to the P=NP problem would have broad implications in applied CS. There are quite a few situations in which we use heuristics where exact solutions would be significantly better (register allocation comes to mind, as does the place and route step for FPGAs), simply because finding an exact solution involves solving an NP-complete problem.

          That most programmers do not really need to deal with complexity is a result of most programmers not working on interesting programs.
          • It might not really make much of a difference actually; what if P=NP resulted in the routing problem you mention being solvable in O(n^1000), while a probabilistic algorithm giving an acceptable result was O(n^4)?

            • by doshell ( 757915 )
              Quite true. However, I do not know of a single problem in P whose optimal known algorithm is O(n^c) with c bigger than 4 or 5. (That, of course, is no proof that none exists.) Does anyone know if there is any mathematical insight for why it should be this way?
              • by hubie ( 108345 )
                Coincidentally, I have discovered a truly marvelous proof of this, which this Slashdot box is too narrow to explain.
              • Quite true. However, I do not know of a single problem in P whose optimal known algorithm is O(n^c) with c bigger than 4 or 5.

                Primality, c = 6.

          • My question was why the above specific programmer should study complexity theory. As I mentioned, complexity theory can certainly be useful to actual programmers when they encounter a specific problem that other people have worked on. In that case it seems fine to research the relevant problem on demand, especially considering most programmers are "not working on interesting programs" anyway. To be clear, given that the OP has been fine without complexity theory and is a ways out of academics, why should they bother?

          • by TheLink ( 130905 )
            Sure it matters, but not directly. So it still does not answer the "why".

            Research on nuclear fusion power plants might matter in the real world, but since the solution does not exist yet, most power plant designers or builders do not need to take up courses on it.

            If a very smart person solves the P=NP problem and publishes the result, the programmers will soon be able to look up the resulting solutions on Google.

            No need to "read a textbook on computational complexity, or an algorithms text, or just take a course on theoretical computer science".
        • The GP has probably only been working on problems that are in P, and in fact there are a lot of those. However, there are also a LOT of problems which are in NP that we would like to solve.

          Scenario 1: The classic example. You work for a presidential political campaign and your candidate asks you to plan the best possible trip to 1000 cities for the next year. The optimal trip will visit each city only once and cover the least total distance. This is known as the traveling salesman problem, which is NP...

          • I suppose I've always assumed programmers have a feel for when their problems are particularly hard. As a parallel, in math I would have expected Hilbert's 10th problem to be impossible just from hearing it. (It is in fact impossible.) I answer questions on the math sub-board of a programming forum sometimes and have gotten a few NP(-complete) problems, though they were all easily recognizable as such even with my poor background in complexity theory.
            • by nebaz ( 453974 )

              Some problems are deceptive though. Think Fermat's Last Theorem.

              • FLT wasn't terribly deceptive, though. The original conjecture was proven true, and Fermat's intuition borne out as correct. Before its proof, numerous other mathematicians agreed with it over the years. A great many mathematicians proved special cases (Euler, Gauss, Lebesgue, and Legendre, to name a few of the most famous from the Wikipedia page; perhaps Fermat himself, though it's hard to say). I've never heard of someone who thought a counterexample would be found. The only deceptive cases I can think of...
              • My favorite example of this is Shortest Path vs. Longest Path.

                Shortest path is in P; even a simple algorithm like Dijkstra's solves it in O(V^2), for example. So you would think finding the longest path in a graph can't be much harder, right? Actually it's NP-complete.
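
                A sketch of that simple O(V^2) Dijkstra, assuming the graph is an adjacency matrix with None for "no edge" (for longest simple path, nothing remotely this easy is known):

                  def dijkstra(graph, source):
                      n = len(graph)
                      dist = [float("inf")] * n
                      dist[source] = 0
                      done = [False] * n
                      for _ in range(n):
                          # O(V) scan for the nearest unfinished vertex;
                          # doing this V times makes the whole thing O(V^2).
                          u = min((v for v in range(n) if not done[v]),
                                  key=lambda v: dist[v], default=None)
                          if u is None or dist[u] == float("inf"):
                              break
                          done[u] = True
                          for v in range(n):
                              w = graph[u][v]
                              if w is not None and dist[u] + w < dist[v]:
                                  dist[v] = dist[u] + w
                      return dist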

            • I answer questions on the math sub-board of a programming forum sometimes and have gotten a few NP(-complete) problems, though they were all easily recognizable as such even with my poor background in complexity theory.

              Right, that's probably reasonable: as long as you are well versed in maths, you can probably infer a lot of complexity theory (which in the end is evaluating and comparing the asymptotic behavior of functions). Where I think complexity theory would help is in understanding just why a problem is hard and how it relates to other problems, which could lead you to a very nice way to relax or approximate the problem.

          • by Novus ( 182265 )

            if you know your problem reduces to TSP or SAT or CLIQUE, you can tell your boss this is not feasible for our input size, so we either need to relax the problem or approximate the solution. If you don't know this you're going to waste your time writing a program that takes 4 years to calculate the answer.

            I think you mean: "you know TSP or SAT or CLIQUE reduces to your problem". By reducing one problem to another (quickly enough; polynomial time is good for NP-completeness proofs), you can show that the problem you have reduced to is at least as hard as the one you reduced from. Lots of easy problems reduce to SAT, but SAT doesn't reduce to them. In other words: "I can reduce SAT to my problem in polynomial time. Hence, if I can solve my problem in polynomial time, I can solve SAT in polynomial time, proving that my problem is NP-hard."

          • by TheLink ( 130905 )
            All that is still mostly irrelevant.

            People write imperfect programs to imperfectly solve problems all the time, and often the customers are happy enough with the imperfect results to pay good enough money. And at least some of the time they aren't really being swindled ;).

            For example, the famous "halting problem" is actually an easier problem than detecting that a given program is malware (yes, I know the halting problem is impossible to solve over Turing machines, but bear with me).

            The halting problem can be...
    • P is the set of problems that can be solved quickly. NP is the set of problems for which a correct answer can be checked quickly. Multiplication is in P and NP. Factoring is in NP, because to check that a number was factored correctly you can multiply the numbers together quickly. The question is: Is factoring (and the other problems in NP for which we don't have a quick algorithm) in P? If someone could find a quick algorithm for factoring, some cryptosystems could be easily broken.
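
      The asymmetry is easy to see in code. A rough sketch, with naive trial division standing in for "slow" and a single multiplication for "quick":

        def check_factors(n, p, q):
            # Verifying a claimed factorization: one multiplication. Fast.
            return 1 < p < n and 1 < q < n and p * q == n

        def factor(n):
            # Finding a factorization by trial division: exponential in the
            # number of digits of n, hopeless at cryptographic sizes.
            for p in range(2, int(n ** 0.5) + 1):
                if n % p == 0:
                    return p, n // p
            return None  # n is prime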
      • by vlm ( 69642 )

        P is the set of problems that can be solved quickly.

        I'd take issue with that, in that P is the set of problems that scale polynomially, and for simplicity people like to call that "fast" or "quick". But it can still be computationally infeasible in the real world.

        My gut-level assumption from being "into" this stuff for decades is that we're probably going to find out that theoretically, yes, P=NP, but all the practical P solutions of NP problems unfortunately end up with crazy coefficients, such that they're useless in the real world.

        An example might be a typical NP...

        • Generally, once you have any polynomial-time algorithm for a problem, a polynomial-time algorithm with small coefficients and small powers is found soon afterwards. This is part of the justification for splitting problems into P and NP. Also, the smart money is on P != NP; we just don't know how to prove it yet.
        • But it can still be computationally infeasible in the real world.

          I think the best example that made me realize this was that O(n^100) is technically in P, but it is intractable. Still, O(n^100) is asymptotically better than O(100^n). Algorithms that run like this are very rare, though; there's a large gap between O(n^k) for small k and O(k^n). I've always wondered why that is, but I guess if people knew, we'd have a better understanding of the relationship between P and NP.

          For most practical problems, O(n^3) is often intractable.
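
          The crossover is easy to check in Python; n^100 and 100^n happen to meet exactly at n = 100:

            for n in (10, 100, 1000):
                # n**100 is polynomial, 100**n exponential; the exponential
                # overtakes the polynomial for every n above 100.
                print(n, n ** 100 > 100 ** n)  # True, False (equal), False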

        • it's just the polynomial constant or linear coefficient might be like a billion times the age of the universe or something like that.

          That would strike me as extremely strange. There are few large constants floating about in proofs. I should be clear that I'm imagining a polynomial-time solution for some NP-complete problem, wherein one of the coefficients is at best "like a billion times the age of the universe or something like that". It would physically be a bit strange, too, seeing as a change in scale to our universe (say, much, much older) would make the constant unremarkable. Human-scale inconveniences just have no place in such...

    • That reminds me of a random annoyance I have with complexity theory. Why use the word "reducible" -- as in "exact cover reduces to knapsack"? It's more direct to just say knapsack solves exact cover. Most definitions of "reducible" I've seen use "solves" or a variant of the word. I just looked at 3 I found online and two used "solves"; the other used symbols and so avoided the word. Why include an unnecessary reversal? Maybe it's just me, but I always have to translate to make sure I'm not reversing the implication...
      • Because saying "solves" removes an important implication of reducibility. When I say "exact cover reduces to knapsack" what I mean is that there exists a function f(x) that maps every instance of exact cover to knapsack in polynomial time. The thing is, when we're talking about any theory, it's important to be very precise with language. What exactly does "solve" mean? Can we use knapsack to decide exact cover? Or can we only use it to recognize exact cover? The difference between these two things in computability theory is significant...
        • To me, "solves" definitely implies the decidability version. That is, given two languages A and B, "A solves B" means that there is a map f from B to A where f(x) is in A if and only if x is in B, for all possible x (not just those in B). As ever there are context-dependent restrictions on f--like it must be computable--but those are present regardless. It doesn't seem to me like "reduces" is any more or less precise than "solves".
          • That is, given two languages A and B, "A solves B" means that there is a map f from B to A where f(x) is in A if and only if x is in B, for all possible x (not just those in B).

            You pretty much just stated the definition for a reduction. I mean, if all you have is a problem with the word then I guess all you risk is confusing people who aren't familiar with your terminology.

            I remember being introduced to the concept of reductions with the following analogy: Say I want to go to my friend's house. This is easy if I have his address; otherwise I would need to visit every house until I found his. So the problem of going to my friend's house reduces to knowing his address. But just by...

      • by doshell ( 757915 )
        Ultimately it's a matter of semantics. But it doesn't sound right to say that a problem "solves" another. Problems don't solve anything; algorithms do (on a tangent: show me a problem, and I will present you with two different algorithms for it with dramatically different run times). On the other hand, "reducible" captures the notion that you can represent the reduced problem as an instance of the problem it reduces to.
        • by doshell ( 757915 )
          Nitpick: when I said "show me a problem" above, I mean of course a problem solvable with a known algorithm. Otherwise the GP might demand that I present him with two different algorithms for the Halting Problem, and I'll be stumped. :)
          • No, it's alright, I was able to coerce it into a true statement regardless. The restriction that you use a Turing machine, while common, is only implicit. Since that doesn't make sense here, you clearly must not have been working with that restriction. So, given an unrestricted algorithm to solve a problem, you can just assume it's running on a machine capable of instantly solving that problem. You can then take the first algorithm to be trivial. The second algorithm with the drastically different run time could...
            • by doshell ( 757915 )

              Yes. If anyone were to challenge me to prove my assertion, prepending a useless long loop depending on input size at the beginning of a valid algorithm would be my answer. :)

              Though I still have a problem with the other part of your post. Suppose I am allowed to assume the trivial algorithm exists; I cannot however assume anything about its complexity (that would be cheating). So, assuming that by "drastically different run time" I mean that they're not big-O equivalent, I am stumped when deciding how...

        • Problems don't solve anything; algorithms do

          I'd say that, similarly, problems don't reduce anything; algorithms do. I expand "knapsack solves exact cover" to "if we can solve knapsack, we can turn that solution into a solution for exact cover". The mental picture that goes with that expansion is a long page of code representing knapsack, the apparently more complicated problem, and a shorter page of code representing exact cover, with a function call to the knapsack code. That image and phrase seem to capture your notion of reducibility. Of course it is just...

          • by doshell ( 757915 )

            I agree that "A reduces to B" is equally nonsensical; it's an artifact of a growing (and sometimes irritating) tendency to suppress passive clauses in English. The passive version, "A is reducible to B", is more in agreement with my reasoning, since it doesn't convey the idea that the reduction is performed by A.

            My problem with viewing problems as "pages of code" is that there is no one-to-one correspondence between problems and programs for solving them. I would rather view an algorithm as a machine where...

    • by Yvanhoe ( 564877 )
      I recommend looking it up. It is a notion from algorithmics that has useful practical applications even outside the realm of theoretical computer science. It can help you notice quickly which problems just require more hardware to solve, and for which problems you should give up on finding the optimal solution.
    • Forty years of fornication is better.
    • Because programming and computer science are two barely related endeavors. You could learn complexity theory without ever learning to code in anything but BASIC.

  • by JoshuaZ ( 1134087 ) on Wednesday May 04, 2011 @09:53AM (#36024180) Homepage

    So, what's happened since then? We're still a long way from resolving P ?= NP. There's been a lot of success showing some techniques won't work. The fact that P^A != NP^A for some oracle A and P^B = NP^B for some other oracle B shows that a lot of possible paths simply won't work. Thus for example, there can't be any clever but essentially straightforward reduction of NP problems to P because then we would have P^A=NP^A for all oracles. Similar remarks apply to the so-called natural proof barrier.

    There's been more success with related problems. In the last few years, the apparent size of P has grown. Thus, Agrawal et al. showed that primality testing is in P http://en.wikipedia.org/wiki/AKS_algorithm [wikipedia.org], and there's been a lot of success with taking randomized algorithms that lie in BPP and producing non-randomized versions that are in P. This has led many to suspect that in fact P=BPP. Most recently there's been some very good work on circuit bounds. Ryan Williams has done very recent technical work showing that NEXP does not equal ACC. This work seems to get around the natural proof barriers. There's also been some work relating complexity questions to algebraic geometry and the permanent of a matrix (no, that's not a typo: http://en.wikipedia.org/wiki/Permanent [wikipedia.org]). So there's a lot of interesting work going on, but it seems that we're pretty far from actually resolving whether P != NP. I suspect that we'll prove the weaker result that P != PSPACE before we resolve the stronger claim that P != NP.

  • Wow so close to history TWICE that day.
    Since I was only ten years old at the time, I remember the shootings but not this bit o' history. I did live around that area, and I do know that hotel turned really sleazy just a few years after that symposium.

    Coincidence?...

  • There was a question that, in a roundabout and not obvious way, asked us to prove P = NP. Dirtiest, most evil trick question I've ever encountered.
    • When I took a graduate-level compilers course, the professor gave a quiz early in the semester on writing CFGs. One of the questions was essentially to write a CFG for this language:

      a^n b^n c^n

      Quite a few people had trouble finding the right answer...
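
      (For anyone puzzled: that was the trick. The language a^n b^n c^n is not context-free at all -- the pumping lemma for CFLs rules it out -- so no CFG is a right answer. A program, of course, checks membership trivially; a quick sketch:)

        import re

        def in_language(s):
            # Accept exactly the strings a^n b^n c^n (equal counts of each).
            m = re.fullmatch(r"(a*)(b*)(c*)", s)
            return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))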
  • by DNS-and-BIND ( 461968 ) on Wednesday May 04, 2011 @10:21AM (#36024532) Homepage
    I looked into this discussion just because I wanted to see what the hardcore science/math nerds were saying, and instead I get a dozen posts asking, "What the hell is P=NP, LOL." Time was, you could look into math threads on Slashdot and see graduate students talking shop. Even if I didn't understand half of what they said, it was still enlightening.
    • Back when I started reading Slashdot (1999), being a "nerd" meant that science/math and "zomg computers are magic!" were your life, so it made sense that the articles catered to that crowd. Nowadays, being a "nerd" has come to mean so many different things, most of them not as nerdy as us old guys remember from when we had to punch cards uphill both ways, so it makes sense that this site would change accordingly.
    • by slyborg ( 524607 )

      That was probably back when the US had graduate students in math and science. They're off discussing CDOs and other advanced variations on the old shell game now.

      • Are you an idiot? The U.S. dominates every other country in the world for math research output (which is closely correlated to the number and quality of Ph.D. students), and I assume the same is true in most of the sciences.

    • "What the hell is P=NP, LOL."

      maybe LOL = OL^2, but I'm not sure what O and L stand for in this context...

  • What the hell? It was Stephen Cook when I took his classes back in the nineties.

    Hey, /. editors, how is that guy, what's his name, Commander TacoBell doing?
