## Forty Years of P=NP?

An anonymous reader writes

*"In the afternoon of May 4, 1971, at the Stouffer's Somerset Inn in Shaker Heights, Ohio, Steve Cook presented his STOC paper proving that Satisfiability is NP-complete and Tautology is NP-hard. 'The theorems suggest that Tautology is a good candidate for an interesting set not in [P] and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would be a major breakthrough in complexity theory.' And thus Cook formulated what was soon to be called the P versus NP problem. The rest is history. Here's the 1971 STOC Program (there were 143 attendees) and what that sacred ground looks like today."*
## Actual origins are even earlier (Score:2, Informative)

Alan Cobham wrote a paper defining the complexity classes L and L+. These are exactly P and NP. He may also have originated the concept of defining the complexity of an algorithm as a function of the input size.

Alan did not, however, show that there were a large number of problems which were inter-reducible within L+.

## P=PN (Score:3, Interesting)

15 years of developing software and I still don't know what P vs. NP means.

Sad, sad old hacker.

## Re: (Score:2, Insightful)

## Re:P=PN (Score:5, Informative)

My computer science is rusty, but essentially it asks whether polynomial-time solution algorithms (n^2, n^3, ..., n^c, where c is a constant) exist for EVERY problem that is verifiable (solution checkable) ALSO in polynomial time.

Classic example: the traveling salesman problem. Imagine you have to visit 5 cities; find the ordering of visits that yields the lowest total distance traveled. This problem is NP-hard, thus it requires exhaustive search (5! solutions => n! time) to find an optimal solution. Verification of an optimal solution can be done in polynomial time (i.e. you already have the answer).
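A brute-force search like the one described can be sketched in a few lines of Python; the 5-city distance matrix below is made up purely for illustration:

```python
from itertools import permutations

# Hypothetical symmetric distances between 5 cities (made up for illustration).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def brute_force_tsp(dist):
    """Exhaustive search: try all n! orderings of the cities, keep the shortest."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for tour in permutations(range(n)):
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```

Five cities means only 120 orderings; at 20 cities the same loop would run roughly 2.4 * 10^18 times.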

The cool part about P=NP is that if ONE algorithm is found that solves an NP-hard problem in polynomial time, ALL NP problems become solvable in polynomial time. You can map one sort of NP problem (e.g. traveling salesman) to another sort (e.g. 3-SAT) and have it remain NP. So if for one NP-hard problem you find a solution in P, it follows that ALL NP problems are solvable in P.

So basically it boils down to finding a holy grail of algorithms.

P.S. Apologies in advance, I haven't touched my Sipser book in 3 years.

## Re: (Score:2)

So basically it boils down to finding a holy grail of algorithms.

Oh, is that all?

I think Mark Twain sussed it, then, and wrote it in allegorical form.

Just look up Tom Sawyer and whitewashing the fence.

## Re:P=NP (Score:2)

## Re: (Score:2)

## Re: (Score:2, Informative)

First of all, your example is flawed. The optimization version of TSP is _not_ polynomial-time verifiable. However, if the question is "is there a route of length less than k visiting every city", for some k, then the problem is polynomial-time verifiable. Also, there are algorithms solving TSP in less than n! time by dynamic programming, in O(2^n * n^2) or something like that.
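The dynamic-programming algorithm alluded to is Held-Karp; here is a rough sketch (closed tour starting and ending at city 0, illustrative rather than optimized):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program, O(2^n * n^2) time: far better than n!.
    Returns the length of the shortest closed tour starting/ending at city 0."""
    n = len(dist)
    # dp[(mask, j)]: shortest path from city 0 through the cities in `mask`,
    # ending at city j (mask always includes bit 0).
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(
                    dp[(prev, k)] + dist[k][j] for k in subset if k != j
                )
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

It trades the factorial for memory: an exponential table of O(2^n * n) entries, but 2^20 * 20 is still vastly smaller than 20!.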

But yes, it is mainly the question that "if a solution to the problem P is _verifiable_ in polynomial time, can we make an algorithm solving P in polynomial time?"

## Re:P=PN (Score:4, Informative)

For NP-complete you need a slight modification of the traveling salesman problem. An example would be that you need to visit 5 cities, but you need to travel less than 50 miles. Is there a route that will get you through all 5 cities in less than 50 miles?

To find a solution you need to search through all the permutations (factorial time), but verifying that you have a solution takes polynomial time.
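That asymmetry between finding and checking is easy to make concrete; a sketch of a polynomial-time verifier for the decision version (the distance matrix and mileage budget are assumed inputs):

```python
def verify_route(dist, route, budget):
    """Check a certificate for the decision version of TSP: does `route`
    visit every city exactly once with total distance under `budget`?
    Runs in polynomial time in the number of cities."""
    n = len(dist)
    if sorted(route) != list(range(n)):
        return False  # not a permutation of all the cities
    total = sum(dist[route[i]][route[i + 1]] for i in range(n - 1))
    return total < budget
```

Checking a candidate route is a single linear pass; finding one is the factorial-time part.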

However, your original traveling salesman problem is NP-hard because there is no way to verify that you have the shortest route without calculating all of the routes.

I probably have an advantage here because I read the Sipser book less than a year ago. Let's see what I remember in another three years. ;)

## Re:P=PN (Score:5, Informative)

I think you're using NP, NP Hard, and NP Complete interchangeably, when they are very different.

B is NP-hard if every A in NP polynomially reduces to B, even if B isn't in NP.

B is in NP Complete if B is NP and also, every A in NP is polynomially reducible to A.

B is in NP if B can be decided in polynomial time by a nondeterministic Turing Machine.

## Re: (Score:2)

B is in NP Complete if B is NP and also, every A in NP is polynomially reducible to A.

And of course I mean "polynomially reducible to B" for anyone confused.

## Re: (Score:2)

Or alternatively, proving that there exists a single NP hard problem for which no such algorithm exists. At least, if I read your post right.

## Re: (Score:2)

no such algorithm exists to solve an NP complete problem rather than NP hard problem.

NP hard problems are at least as hard as everything in NP but might be harder than anything in NP. For example, the halting problem is NP hard.

NP complete problems are at least as hard as everything in NP and, additionally, are in NP, thus they are the hardest possible problems in NP.

Tim.

## Re: (Score:2)

Spot on, but I believe you meant NP-Complete every time you said NP-Hard.

## Re: (Score:3)

Sounds right to me. It's most well known as the Knapsack Problem [wikipedia.org]. It's also known as the mail carrier problem, finding the most efficient route to deliver mail.

Little did they know in 1971 that the answer was "Hire cheap labor from foreign countries". For P=NP, if P=customer service, NP = India.

## Re: (Score:2)

Actually, if you have to visit 5 cities that is O(1). That's because in your case "n" is a constant. Nothing like nitpicking to start off the day right.

## Re: (Score:2, Informative)

NP-complete means that a problem is NP-hard and is also known to be in NP. While finding a polynomial time solution to an NP-complete problem would also give a polynomial time solution to everything in NP it is a weaker result, although sufficient to show P=NP.

## Re:P=PN (Score:4, Informative)

In simple terms, problems in P can be solved in a polynomial number of operations on a computer with one processor (or with a constant number of processors), and problems in NP can be solved in a polynomial number of operations on a computer with an unbounded number of processors (or in other words, a computer which can explore an unbounded number of solutions at the same time). This is not the most rigorous definition of the classes, but it is one of the more intuitive.

The problem is this: is P a proper subclass of NP, or are there problems in NP which are not in P? The Cook-Levin theorem established the existence of "NP complete" problems, which are those which are in P if and only if P = NP.

## Re: (Score:2)

Perhaps you should read a textbook on computational complexity, or an algorithms text, or just take a course on theoretical computer science?

Why? I'm honestly curious. If the GP has been fine without much complexity theory, and has been out of academics for presumably over a decade, it doesn't seem worth it to bother. From the very little complexity theory I've encountered, it doesn't seem terribly useful to actual programmers unless they have a specific problem that other people have worked on. I don't mean to beat up on complexity theory. I'm a pure math person so I'm fine with theory having a low amount of practical value, where it's studied

## Re:P=PN (Score:4, Informative)

That most programmers do not really need to deal with complexity is really a result of most programmers not working on interesting programs.

## Re: (Score:2)

It might not really make much of a difference actually; what if P=NP resulted in the routing problem you mention being solvable in O(n^1000), while a probabilistic algorithm giving an acceptable result was O(n^4)?

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Primality, c = 6.

## Re: (Score:2)

My question was why should the above specific programmer study complexity theory. As I mentioned, complexity theory can certainly be useful to actual programmers when they encounter a specific problem that other people have worked on. In that case it seems fine to research the relevant problem on demand, especially considering most programmers are "not working on interesting programs" anyway. To be clear, given that the OP has been fine without complexity theory and is a ways out of academics, why should th

## Re: (Score:2)

Research on nuclear fusion power plants might matter in the real world, but since the solution does not exist yet, most power plant designers or builders do not need to take up courses on it.

If a very smart person solves the P=NP problem and publishes the result, the programmers will soon be able to look up the resulting solutions on Google.

No need to: "read a textbook on computational complexity, or an algorithms text, or just take a course on theoretical computer science."

## Re: (Score:3)

Scenario 1: The classic example. You work for a presidential political campaign and your candidate asks you to plan the best possible trip to 1000 counties for the next year. The optimal trip will visit each county only once and take the least total distance. This is known as the traveling salesman problem, which is NP

## Re: (Score:2)

## Re: (Score:2)

Some problems are deceptive though. Think Fermat's Last Theorem.

## Re: (Score:2)

## Re: (Score:2)

My favorite example of this is Shortest Path vs. Longest Path.

Shortest path is in P. Even a simple algorithm like Dijkstra's will solve it in O(V^2), for example. So you would think finding the longest path in the graph can't be much harder, right? Actually, it's NP-complete.
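The O(V^2) behavior comes from the simple array-based version of Dijkstra's algorithm; a minimal sketch (adjacency-matrix input, None meaning no edge, weights assumed non-negative):

```python
def dijkstra(adj, src):
    """Array-based Dijkstra: shortest distances from `src` in a graph given
    as an adjacency matrix. The O(V) scan for the nearest unvisited vertex,
    done V times, is what makes this O(V^2)."""
    n = len(adj)
    dist = [float("inf")] * n
    dist[src] = 0
    visited = [False] * n
    for _ in range(n):
        # Pick the nearest unvisited vertex (linear scan).
        u = min((d, v) for v, d in enumerate(dist) if not visited[v])[1]
        visited[u] = True
        # Relax all edges out of u.
        for v, w in enumerate(adj[u]):
            if w is not None and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Swap "shortest" for "longest" in the problem statement and no trick like this is known; longest simple path is NP-complete.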

## Re: (Score:2)

I answer questions on the math sub-board of a programming forum sometimes and have gotten a few NP(-complete) problems, though they were all easily recognizable as such even with my poor background in complexity theory.

Right, that's probably reasonable: as long as you are well versed in maths, you can probably infer a lot of complexity theory (which in the end is evaluating and comparing the asymptotic behavior of functions). Where I think complexity theory would help is in understanding just *why* the problem is hard, and how it relates to other problems, which could lead you to a very nice way to relax or approximate the problem.

## Re: (Score:2)

if you know your problem reduces to TSP or SAT or CLIQUE, you can tell your boss this is not feasible for our input size, so we either need to relax the problem or approximate the solution. If you don't know this you're going to waste your time writing a program that takes 4 years to calculate the answer.

I think you mean: "you know TSP or SAT or CLIQUE reduces to your problem". By reducing one problem to another (quickly enough; polynomial time is good for NP-completeness proofs), you can show that the problem you have reduced to is at least as hard as the one you reduced from. Lots of easy problems reduce to SAT, but SAT doesn't reduce to them. In other words, "I can reduce SAT to my problem in polynomial time. Hence, if I can solve my problem in polynomial time, I can solve SAT in polynomial time, proving

## Re: (Score:2)

## Re: (Score:2)

People write imperfect programs to imperfectly solve problems all the time, and often the customers are happy enough with the imperfect results to pay good enough money. And at least some of the time they aren't really being swindled

For example, the famous "halting problem" is actually an easier problem than detecting that a given program is malware (yes I know the halting problem is impossible to solve over turing machines, but bear with me).

The halting problem can b

## Re: (Score:2)

## Re: (Score:2)

P is the set of problems that can be solved quickly.

I'd take issue with that, in that P is the set of problems whose solution time scales polynomially, and for simplicity people like to call that "fast" or "quick". But it can still be computationally infeasible in the real world.

My gut level assumption from being "into" this stuff for decades is we're Probably going to find out that theoretically, yes P=NP, but all the practical P solutions of NP problems unfortunately end up with crazy coefficients, such that its useless in the real world.

An example might be, a typical NP

## Re: (Score:2)

## Re: (Score:2)

But it can still be computationally infeasible in the real world.

I think the best example that made me realize this was that O(n^100) is technically in P, but it is intractable. Even so, O(n^100) is still asymptotically better than O(100^n). Algorithms with running times like that are very rare, though; there's a large gap between the O(n^k) for small k that we see in practice and O(k^n). I've always wondered why that is, but I guess if people knew, we'd have a better understanding of the relationship between P and NP.

For most practical problems, O(n^3) is often intractable.
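For a concrete feel for that gap, a quick brute-force check of where O(100^n) finally overtakes O(n^100), using Python's exact big integers:

```python
def crossover():
    """Find the smallest n >= 2 where 100**n finally exceeds n**100.
    Asymptotically the polynomial n**100 is 'better', but only past
    this point does the exponential actually become the larger number."""
    n = 2
    while n ** 100 >= 100 ** n:
        n += 1
    return n
```

So the exponential only pulls ahead at n = 101 (they tie exactly at n = 100); for every smaller input size, the "intractable" polynomial is actually the bigger of the two.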

## Re: (Score:2)

it's just that the polynomial constant or linear coefficient might be like a billion times the age of the universe or something like that.

That would strike me as extremely strange. There are few large constants floating about in proofs. I should be clear that I'm imagining a polynomial time solution for some NP-complete problem, wherein one of the coefficients is minimized by "like a billion times the age of the universe or something like that". It would physically be a bit strange, too, seeing as a change in scale to our universe (say, much, much older) would make the constant unremarkable. Human-scale inconveniences just have no place in su

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

That is, given two languages A and B, "A solves B" means that there is a map f from B to A where f(x) is in A if and only if x is in B, for all possible x (not just those in B).

You pretty much just stated the definition for a reduction. I mean, if all you have is a problem with the word then I guess all you risk is confusing people who aren't familiar with your terminology.

I remember being introduced to the concept of reductions with the following analogy: Say I want to go to my friend's house. This is easy if I have his address; otherwise I would need to visit every house until I found his. So the problem of going to my friend's house reduces to knowing his address. But just by

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Yes. If anyone were to challenge me to prove my assertion, prepending a useless long loop depending on input size at the beginning of a valid algorithm would be my answer. :)

Though I still have a problem with the other part of your post. Suppose I am allowed to assume the trivial algorithm exists; I cannot however assume anything about its complexity (that would be cheating). So, assuming that by "drastically different run time" I mean to say that they're not big-O equivalent, I am stumped when deciding how

## Re: (Score:2)

Problems don't solve anything; algorithms do

I'd say similarly problems don't reduce anything; algorithms do. I expand "knapsack solves exact cover" to "if we can solve knapsack, we can turn that solution into a solution for exact cover". The mental picture that goes with that expansion is a long page of code representing knapsack, the apparently more complicated problem, and a shorter page of code representing exact cover, with a function call to the knapsack code. That image and phrase seem to capture your notion of reducibility. Of course it is ju

## Re: (Score:2)

I agree that "A reduces to B" is equally nonsensical; it's an artifact of a growing (and sometimes irritating) tendency to suppress passive clauses in English. The passive version, "A is reducible to B", is more in agreement with my reasoning, since it doesn't convey the idea that the reduction is performed by A.

My problem with viewing problems as "pages of code" is that there is no one-to-one correspondence between problems and programs for solving them. I would rather view an algorithm as a machine where

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

Because programming and computer science are two barely related endeavors. You could learn complexity theory without ever learning to code in anything but BASIC.

## What has happened since then (Score:4, Informative)

So, what's happened since then? We're still a long way from resolving P ?= NP. There's been a lot of success showing that some techniques won't work: the fact that P^A != NP^A for some oracle A while P^B = NP^B for some other oracle B rules out a lot of possible paths. For example, there can't be any clever but essentially straightforward reduction of NP problems to P, because any such relativizing argument would give P^A = NP^A for all oracles. Similar remarks apply to the so-called natural proofs barrier.

There's been more success with related problems. In the last few years, the apparent size of P has grown. Agrawal et al. showed that primality testing is in P http://en.wikipedia.org/wiki/AKS_algorithm [wikipedia.org], and there's been a lot of success taking randomized algorithms that lie in BPP and producing derandomized versions that are in P. This has led many to suspect that in fact P=BPP. Most recently there's been some very good work on circuit bounds: Ryan Williams has shown that NEXP is not contained in ACC. This work seems to get around the natural proofs barrier. There's also been some work relating complexity questions to algebraic geometry and the permanent of a matrix. (No, that's not a typo: http://en.wikipedia.org/wiki/Permanent [wikipedia.org].) So there's a lot of interesting work going on, but it seems that we're pretty far from actually resolving whether P != NP. I suspect that we'll prove the weaker result that P != PSPACE before we resolve the stronger claim that P != NP.
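To make the BPP remark concrete, Miller-Rabin is the textbook example of a randomized polynomial-time primality test (error probability at most 4^-rounds on a composite input); AKS later showed how to decide primality deterministically, putting it in P outright. A sketch:

```python
import random

def miller_rabin(n, rounds=20):
    """Randomized primality test: returns False for composites with
    probability >= 1 - 4**(-rounds), True for primes always."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation, polynomial time
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # `a` witnesses that n is composite
    return True  # probably prime
```

This kind of algorithm is exactly what the derandomization program tries to convert into deterministic polynomial time, as AKS did for this particular problem.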

## Four Dead In O-HI-O (Score:2)

Wow so close to history TWICE that day.

Since I was only ten years old at the time, I remember the shootings but not this bit o' history. I did live around that area, and I do know that hotel turned really sleazy just a few years after that symposium.

Coincidence?...

## Re: (Score:2)

Probably. It's NP-hard out there for a pimp.

## Just finished a final exam on Theory of Automata (Score:3)

## Re: (Score:3)

a^n b^n c^n

Quite a few people had trouble finding the right answer...
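The language in question is presumably {a^n b^n c^n : n >= 0}, the standard example of a language that is not context-free (shown via the pumping lemma). Note that deciding membership for a given string is trivial; the hardness is about what a pushdown automaton can recognize, not about computation in general. A sketch of a membership test:

```python
def in_anbncn(s):
    """Membership test for {a^n b^n c^n : n >= 0}: equal-length runs of
    a's, then b's, then c's. Easy to decide with a general-purpose
    program, yet no context-free grammar generates this language."""
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n
```

Compare with a^n b^n, which a pushdown automaton handles fine; adding the third matched run is what pushes it out of the context-free languages.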

## Re: (Score:2)

## Re: (Score:3)

## Wow Slashdot has gone downhill... (Score:4, Interesting)

## Re: (Score:3)

## Re: (Score:2)

That was probably back when the US had graduate students in math and science. They're off discussing CDOs and other advanced variations on the old shell game now.

## Re: (Score:2)

Are you an idiot? The U.S. dominates every other country in the world for math research output (which is closely correlated to the number and quality of Ph.D students), and I assume the same is true in most of the sciences.

## Re: (Score:3)

"What the hell is P=NP, LOL."

maybe LOL = OL^2 but im not sure what O and L stand for in this context...

## It's Stephen Cook, not Steven. (Score:2)

What the hell? It was Stephen Cook when I took his classes back in the nineties.

Hey, /. editors, how is that guy, what's his name, Commander TacoBell doing?

## Re: (Score:2)

Are you the only person who couldn't be bothered to look it up?

http://en.wikipedia.org/wiki/P_versus_NP_problem [wikipedia.org]

## Re: (Score:2)

am I the only person who does not get what this means?

The most non-mathematical way to express it, probably somewhat inaccurately: the fastest way to figure out if you can learn what "P=NP" means is to figure out what "P=NP" means and then see how long it took.

It's also used as a stereotypical filter... If you know what it is, you're somewhere on the "computer science" path. If you don't, then even if you don't realize it, you're on the "IT", "IS", or "code monkey" path. It's almost as accurate as asking someone if they've heard of a programmer named "Knuth".

## Re:So? (Score:5, Informative)

P vs. NP for dummies [scottaaronson.com]

## Re: (Score:3)

Every time there is a P vs NP story on slashdot, invariably there is at least one person to make a comment like this... and every time, I see comments like this modded up as amusing. Am I the only person on slashdot who has long since ceased to find it funny?

'P' stands for "polynomial", and 'N' stands for non-deterministic. Neither side of the equals sign actually represents any single value... rather, they each represent what are typically considered to be separate classes of problems.

To assert tha

## Re: (Score:2)

Am I the only person on slashdot who has long since ceased to find it funny?

No. It was funny once, a very long time ago. Perhaps. Well, maybe slightly amusing a very long time ago. It's getting pretty annoying these days.

## Re: (Score:2)

## Re: (Score:2)

D'oh! Maybe that's why people are having such a hard time with it. Should have subtracted P from both sides getting P(N-1)=0. OK, solving P=NP took me two tries - Now I understand the debate.

## Re: (Score:2)

Do you like to have your files, HTTPS sessions, etc encrypted? Yes? You want it to be hard to crack that encryption? Yes? Then you care that P != NP.

Although note that encryption being hard is generally a much stronger claim than P != NP. All known encryption systems that rely on complexity conjectures need stronger claims than P!=NP. Thus for example many rely on the difficulty of factoring integers into primes. It is possible (although it seems unlikely) that P != NP but factoring is still in P. The reason to care about P ?= NP in an encryption context is that if P = NP then encryption definitely doesn't work. But the other direction isn't necessarily

## Re: (Score:2)

All known encryption systems that rely on complexity conjectures need stronger claims than P!=NP.

This is not technically true; some cryptosystems based on lattice problems have security reductions to NP-hard problems and ultimately rely only on P!=NP.

## Re: (Score:2)

The reason to care about P ?= NP in an encryption context is that if P = NP then encryption definitely doesn't work.

Not true at all. Encryption works if, on average, it takes too long to break it using current technology for the data to still be useful once it's finally broken. It's an engineering balance, not a theoretical process.

P = NP is a scalability question: whether a very small amount of work and a very small key can produce something that is nonetheless very hard to break.

Something like DES scaled down to 8 bit keys (uh, good luck scaling the s-boxes down, but you get the general idea) is a horrible engineering decision regardless of how scalab

## Re: (Score:2)

## Re: (Score:2)

So if I find an O(n^{100000th Ackermann number}) algorithm for factoring integers, it means encryption is suddenly unsafe? Your understanding of this issue is very flawed.

## Re:P = NP? (Score:5, Insightful)

Can anyone explain what all the fuss is about ?

It's strongly related to the "CS" vs. "IT" division among "computer people". It's hard to find a topic that more clearly shows the separation.

The "IT" guys simply don't care (look at many of the posts in this article), but for the "CS" guys a proof would be pretty close to the ultimate "super bowl win" or "moon landing" or "date with a girl" moment.

## Re: (Score:2)

## Re: (Score:3)

It's the basis of the argument that our CS people gave us as to why a natural-language recognition system could not be developed. So a couple of flight-control (mechanical) engineers sat down and wrote one.

An excellent example of science "vs" engineering.

The scientists say a theoretically perfect system cannot be made. It's also true that the abstract concept of a metal cube exactly one inch on a side cannot exist: for various quantum and thermodynamic reasons there will always be a variation somewhere deep in the decimal points.

The engineers, however, are perfectly able to make ones that "work well enough".

The hop from theory to engineering that happens with P=NP is almost as funny/interesting to talk

## Sadly the difference is more like ... (Score:3)

This is a real world example I've seen a dozen times. Given a spec that requires a parser, the CS type will come up with a complicated (to outsiders) solution that few people can understand but works perfectly. The IT type will not recognize that it is a parser, does not know what a parser is, and will implement a very buggy "solution" with regexps (cf. the MySpace XSS fiasco from a few years back).

Oh, who am I kidding; there is no such thing as a spec. I've never seen an actual one in the enterprise.

## Re: (Score:2)

I call BS.

It sounds like you're parroting material from some company's web site. Natural language recognition was an active area of research when I was a student in the early 80s, at which point everyone in the CS department would have been well versed in complexity theory. So far as I know NLP has remained an active field of academic research ever since.

In any case you'd have to assume that P != NP (as most of us do), and you'd have to show that *every possible useful natural language recognition application

## Re: (Score:2)

"date with a girl" moment.

And prostitution doesn't count.

## Re:P = NP? (Score:4, Interesting)

I've been in this business since the 1980s. When I started, there were proportionately a lot more math geeks in software than there are today, but by the early 90s there wouldn't have been enough math geeks in all the world to do the work that needed to be done. The people who came into the business had the attitude that complexity and computation theory were just a lot of ivory tower rubbish.

There was a certain validity to this attitude. There was a ton of work to be done, but almost all of it amounted to assembling endless variations of the same old kinds of applications in new contexts. It was like snapping together different Lego projects. The mathematically difficult work had already been done for the people engaged in that work, so their intellectual focus shifted to issues of craft and project management, which were by no means trivial things. Some of us kept algorithm books handy for that odd job where the library routines didn't quite do, but we didn't really need much math. We just needed a rough grasp of O notation and the ability to translate pseudocode into C. Most software guys didn't even go that far. They didn't have *any* math books or references on their shelves, just big, fat tutorial books on vendor-provided solutions or architectural philosophy.

Then came the Internet.

A lot of Internet related software falls into the Lego Architecture class, but given a world of data that are interconnected, there's more need than ever for people who can create *new* algorithms that can squeeze gems of value out of that mountain of data quicker than the Universe will perish from heat death. Companies like Google or Facebook or LinkedIn can't just slap a Java front end onto an Oracle database. They have to create new ways to structure and process volumes of data magnitudes beyond anything that existed in the early 1990s.

Fundamental (in the sense of "foundational" rather than "beginner's") CS is once again very practical knowledge to have.

## Re: (Score:2)

It's also worth a cool $1 million if you solve it, thanks to the Clay Mathematics Institute [claymath.org], because it's currently one of the top unsolved problems in all of mathematics as well as being the top unsolved problem in the more specialized computer science world.

And to give you an example of how awesome it would be if P=NP: You could (in theory at least) write a program that took a set of axioms and a hypothesis, and it could tell you reasonably quickly whether the hypothesis was something that could be proven

## Re: (Score:2)

"Can current 'strong' encryption be cracked by factoring large numbers quickly?"

## Re: (Score:2)

## Re: (Score:2)

Right. But it is known to be in NP. So proving P=NP would prove that RSA could be cracked quickly. But proving P!=NP would not prove that RSA cannot be cracked quickly.

## Re: (Score:2)

## Re: (Score:2)

I wrote that because "that's what all the fuss is about". There's more to it, but it's a catchy headline that explains the hype.

## Re: (Score:3)

## Re: (Score:2)

## Re: (Score:2)

Seriously? Cultivate a sense of intellectual curiosity and look back at that question. At that point you'll have already answered it for yourself.

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)

## Re: (Score:2)