Math

The 50-year-old Problem That Eludes Theoretical Computer Science (technologyreview.com) 113

A solution to P vs NP could unlock countless computational problems -- or keep them forever out of reach. MIT Technology Review: On Monday, July 19, 2021, in the middle of another strange pandemic summer, a leading computer scientist in the field of complexity theory tweeted out a public service message about an administrative snafu at a journal. He signed off with a very loaded, "Happy Monday." In a parallel universe, it might have been a very happy Monday indeed. A proof had appeared online at the esteemed journal ACM Transactions on Computational Theory, which trades in "outstanding original research exploring the limits of feasible computation." The result purported to solve the problem of all problems -- the Holy Grail of theoretical computer science, worth a $1 million prize and fame rivaling Aristotle's forevermore.

This treasured problem -- known as "P versus NP" -- is considered at once the most important in theoretical computer science and mathematics and completely out of reach. It addresses questions central to the promise, limits, and ambitions of computation, asking:

Why are some problems harder than others?
Which problems can computers realistically solve?
How much time will it take?

And it's a quest with big philosophical and practical payoffs. "Look, this P versus NP question, what can I say?" Scott Aaronson, a computer scientist at the University of Texas at Austin, wrote in his memoir of ideas, Quantum Computing Since Democritus. "People like to describe it as 'probably the central unsolved problem of theoretical computer science.' That's a comical understatement. P vs NP is one of the deepest questions that human beings have ever asked." One way to think of this story's protagonists is as follows: "P" represents problems that a computer can handily solve. "NP" represents problems that, once solved, are easy to check -- like jigsaw puzzles, or Sudoku. Many NP problems correspond to some of the most stubborn and urgent problems society faces. The million-dollar question posed by P vs. NP is this: Are these two classes of problems one and the same? Which is to say, could the problems that seem so difficult in fact be solved with an algorithm in a reasonable amount of time, if only the right, devilishly fast algorithm could be found? If so, many hard problems are suddenly solvable. And their algorithmic solutions could bring about societal changes of utopian proportions -- in medicine and engineering and economics, biology and ecology, neuroscience and social science, industry, the arts, even politics and beyond.
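
To make the "easy to check, hard to find" contrast concrete, here is a minimal Python sketch (an illustration only; the instance and numbers are made up, not from the article) using subset sum, a standard NP-complete problem: finding a subset of numbers that hits a target may require trying exponentially many subsets, while checking a proposed subset takes only a few additions.

    from itertools import combinations

    numbers = [3, 34, 4, 12, 5, 2]   # made-up instance for illustration
    target = 9

    def check(candidate):
        # "NP" side: verifying a proposed certificate is a handful of additions.
        return sum(candidate) == target

    def solve():
        # Brute-force search: up to 2^len(numbers) subsets in the worst case.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if check(subset):
                    return subset
        return None

    print(solve())        # (4, 5) -- found only after searching many subsets
    print(check((4, 5)))  # True  -- verified almost instantly

The million-dollar question is whether the search side can always be collapsed to something as cheap as the checking side.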

  • ... I think the disruption from some cryptography methods suddenly being cracked would be less than utopian.
    • Re:If P = NP... (Score:5, Interesting)

      by timeOday ( 582209 ) on Wednesday October 27, 2021 @05:06PM (#61933317)
      As a practical matter, entertaining thoughts about P=NP is about like hoping to discover God, or the fountain of youth, or El Dorado the City of Gold. Nobody has proved they don't exist, and wouldn't it be cool if they did?

      I think the main attraction to P=NP is the fact that something so "obviously" false is so difficult to prove, more so than the hope it's somehow true.

      • Or having something with the capabilities of a CRAY-2, broadcast camera, and the space to store the names of everyone on the planet, in your pocket. I mean, let's be realistic [youtu.be] here.
      • Yeah, the only interesting result is if P=NP. If it doesn't, then it's a mildly interesting problem. There are much more interesting problems in theoretical computer science.

        • Maybe this is Gödelian. If you could prove P != NP then the world comes to an end.

        • by godrik ( 1287354 )

          Yeah, the only interesting result is if P=NP. If it doesn't, then it's a mildly interesting problem. There are much more interesting problems in theoretical computer science.

          Maybe, maybe not.
          If we can prove that P!=NP then we will understand that complexity class much better. For instance, if P!=NP, then there are an infinite number of complexity classes separating P from NP-Complete, but we don't know for sure of a single problem that lives in any of them.
          So maybe we would be able to prove the complexity of problems whose complexity is currently open.

          That could also tell us something about how approximable NP-hard optimization problems are.

          That could tell us lots of things. Of course, P=NP would li

        • Even P=NP isn't that interesting except to theoretical computer scientists. It's not going to make practically solving the TSP or SAT any easier, all it'll do is show that in theory they could be solved if we knew how to solve them.
      • Not necessarily.

        A proof of P=NP may be non-constructive. Or the algorithm might be O(N^1000000).

        • by gweihir ( 88907 )

          Not necessarily.

          A proof of P=NP may be non-constructive. Or the algorithm might be O(N^1000000).

          Indeed. Applying the ideas of NP to _practical_ problems is _lazy_. In practice, exponents and constants matter very much. For example, if you get a one-way function that is n in one direction and n^100 in the other with somewhat similar constants, this is still completely secure for n large enough (not very large). But some intellectually lazy person required P for one direction and NP for the other. In the real world, that is just complete nonsense.

    • Re:If P = NP... (Score:5, Informative)

      by Geoffrey.landis ( 926948 ) on Wednesday October 27, 2021 @05:36PM (#61933421) Homepage

      ... I think the disruption from some cryptography methods suddenly being cracked would be less than utopian.

      Not necessarily. The polynomial P could still be very large.

      Despite what the summary suggests, merely because something can be solved in polynomial time doesn't mean it can be solved quickly.

      • by godrik ( 1287354 )

        The polynomial P could still be very large.

        While this is technically true, we know of very few practical problems that are in P whose best known algorithms have high-degree complexity. Checking whether a number is prime has an algorithm that runs in roughly the 6th power of the input size, and that is one of the largest degrees I know of for a problem that was not constructed just for the sake of having a high degree.

        • I believe the time complexity of checking whether a number is prime is O(sqrt(N)). As for problems that fall into NP, some logistics problems are (what's the fastest way to get a set of items from here to there, where you have to pass through intermediate points--think ports, train yards, etc., and where the items are countable--not for instance oil or wheat, but boxes of stuff). Other NP problems include selection of investments, and cutting up stuff (like logs into boards of various sizes, where the dif

          • I believe the time complexity of checking whether a number is prime is O(sqrt(N)).

            No; in the model (usually -- Wikipedia notwithstanding) used for the complexity of primality, N = the number of bits in the number, rather than the magnitude of the number. The naive algorithm is therefore O(2^(N/2)), which is exponential. The current best proven is O(N^6), I believe.

            • by godrik ( 1287354 )

              The paper "P is in Prime" showed an algorithm that is either of degree 6 or degree 12 depending on whether some number theory conjecture is true or not. It was proven true about 10 years ago which makes the algorithm's complexity with degree 6.

      • by gweihir ( 88907 )

        ... I think the disruption from some cryptography methods suddenly being cracked would be less than utopian.

        Not necessarily. The polynomial P could still be very large.

        Despite what the summary suggests, merely because something can be solved in polynomial time doesn't mean it can be solved quickly.

        Indeed. And for limited input size (the standard situation), something exponential _can_ be much faster than something polynomial. This whole thing is a _theoretical_ question and it is completely unclear whether it will ever have any practical impact if it turns out P=NP.

    • So what is the solution, outlaw quantum computing? Accelerate the spread of quantum-proof cryptography as official policy?
      • by gweihir ( 88907 )

        Quantum computing does not work for any meaningful size. It is unlikely to ever get faster than classical computers and may not even ever reach them. There are very hard problems that have eluded researchers for something like 50 years now. Also, AES-256 (for example) is already quantum-proof. So is RSA if you make it large enough. If I remember correctly, factoring RSA 4096 takes > 14k qubits (after error correction). After 50 years of research they have something like 30 qubits after error correction. Fo

    • by LordHighExecutioner ( 4245243 ) on Wednesday October 27, 2021 @06:16PM (#61933527)
      if P = NP then it follows that 1 = N
      • Erm: nope?

      • Cute. I'll reply with something I hope comes close.

        This was too long ago to recall the specifics, but my high school Calc teacher presented a proof that 1 = 6 (and had us all do the math, which checked out), and explained how the same technique could be used to prove that any number equals any other number. So it is possible to prove that 1 equals N, but it requires breaking 1 up into an infinite number of infinitely small parts and putting them back together again in a way that arrives at a different numbe

        • Was that one of those "proofs" that contains an implicit division-by-zero, and can therefore be used to "prove" anything, as long as your audience isn't aware that dividing by zero is an undefined operation which renders the rest of the proof invalid?

          • It was over 30 years ago, so I don't recall specific details. Naturally he proceeded to point out the flaw in that "proof" after presenting it. He liked throwing unusual aspects of Calculus at the students to try to keep them interested. His favorite was an equation that, when rotated around the x axis to make it 3-dimensional (and applying integrals to determine its surface area and volume equations), had a finite volume but an infinite surface area. He got a kick out of saying "you can fill it with paint,
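
            The shape described above is presumably Gabriel's Horn (an assumption on my part; it is the textbook example): rotate y = 1/x for x >= 1 around the x-axis. A quick check in LaTeX form:

              \[
              V = \pi \int_1^{\infty} \frac{dx}{x^2} = \pi,
              \qquad
              S = 2\pi \int_1^{\infty} \frac{1}{x}\sqrt{1 + \frac{1}{x^4}}\,dx
                \;\geq\; 2\pi \int_1^{\infty} \frac{dx}{x} = \infty.
              \]

            Finite volume, infinite surface area -- hence "you can fill it with paint, but you can never paint it."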

    • You just have to use wider keys. It would still be polynomial to break, so instead of 256-bit keys we would need 4096-bit keys.

      Besides, P = NP for vector machines.

    • by gweihir ( 88907 )

      ... I think the disruption from some cryptography methods suddenly being cracked would be less than utopian.

      You are not understanding the question. That is not going to happen. Even if we get P = NP, we may still only get a high-exponent polynomial or extreme constant factors, and that is just as secure as exponential at reasonable sizes.

  • by etash ( 1907284 ) on Wednesday October 27, 2021 @05:01PM (#61933297)
    either N = 1 or P = 0. Even Jeff Dean agrees with me.
  • by JoshuaZ ( 1134087 ) on Wednesday October 27, 2021 @05:04PM (#61933311) Homepage
    I'm having trouble seeing what the news is here. There's a lot going on in terms of progress in theoretical computer science, but this article is just a summary of the problem as far as I can tell. The rest of the article is only open to subscribers. It wouldn't surprise me if the rest of it has some new information, because the summary mentions Scott Aaronson, who is a leading complexity theorist, but it isn't clear what is supposed to be new here.

    I suppose it is worth briefly mentioning where things stand. Right now, we know of three big barriers to resolving the problem: relativizing proofs, the algebrization barrier (Scott is one of the people who discovered this barrier), and natural proofs. The last two are a bit hard to explain, but the issue of relativization doesn't take too much work. Essentially, you can imagine a computer that has access to an "oracle" which can solve some problem very efficiently for it. For a given oracle you can then extend your entire system of definitions to computers which have access to that oracle, and talk about a notion of P and NP with respect to that oracle. We write P^A to mean P but with access to an oracle for problem A, and similarly NP^A to mean NP but defined with access to an oracle for problem A. We can construct an oracle A such that P^A = NP^A. But we can also construct an oracle B such that P^B != NP^B.

    So what? Well, most basic proof techniques we have prove the same statement just as well if there's an oracle present. For example, we know that P != EXP, where EXP is exponential time. But it turns out that the proof of this works just as well with an oracle: for any oracle A, P^A != EXP^A. We say that the proof that P != EXP "relativizes" because it is true relative to any oracle A. Since P vs. NP comes out differently depending on the oracle, no technique that relativizes can by itself resolve whether P = NP. We have some techniques that fail to relativize, but we need more of them.

    Possibly the closest we have gotten to actually resolving the question is work by Ryan Williams. He proved that NEXP is not contained in ACC0 https://en.wikipedia.org/wiki/ACC0 [wikipedia.org]. Here NEXP is the analog of NP where we allow exponential time to check our proofs. (So NEXP is to NP as EXP is to P.) ACC0 is a little more technically defined, but it essentially involves logic circuits of a certain restricted type which are also allowed, for a given set of input bits, to check whether the number of inputs that are 1 has remainder 0 when divided by m. We believe that ACC0 is very small in a certain sense, and NEXP is really big, so this is still pretty far from proving that P != NP.
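
    Returning to the oracle idea a couple of paragraphs up, here is a toy sketch (my own illustration, not part of the comment): the machine is an ordinary program that may call a black-box oracle function, counting each call as one step. With an oracle for SAT, a satisfying assignment can be recovered with only one oracle call per variable (self-reducibility), which gives the flavor of how P^A gains power from A.

      from itertools import product

      def sat_oracle(clauses, num_vars, fixed):
          # Toy black-box oracle: decides whether the CNF formula (a list of
          # clauses, each clause a list of signed variable indices) is
          # satisfiable once the variables in `fixed` are pinned. Brute force
          # inside, but the caller counts each call as a single step.
          for bits in product([False, True], repeat=num_vars):
              assignment = {i + 1: b for i, b in enumerate(bits)}
              assignment.update(fixed)
              if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses):
                  return True
          return False

      def find_assignment(clauses, num_vars):
          # A "machine with oracle access": recovers a satisfying assignment
          # using only num_vars + 1 oracle queries.
          if not sat_oracle(clauses, num_vars, {}):
              return None
          fixed = {}
          for v in range(1, num_vars + 1):
              fixed[v] = True
              if not sat_oracle(clauses, num_vars, fixed):
                  fixed[v] = False   # must then be satisfiable with v = False
          return fixed

      clauses = [[1, -2], [-1, 3], [2, 3]]   # made-up 3-variable instance
      print(find_assignment(clauses, 3))     # {1: True, 2: True, 3: True}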

    • It might be news to anyone who has never heard of the problem, just like some people do not know that proving the Riemann hypothesis would get them $1M. But as a warning, the Riemann hypothesis has been attacked by many mathematicians for more than 160 years, so the answer is not easy.
    • by jd ( 1658 )

      Are there DOIs for the claim and counter-claim? If so, subscriber issues are a thing of the past.

    • The news is that if you pay for a slashvertisement, they'll even let you deny that P != NP.

    • Can some smart computer science person explain why you can't use a simple "Password-Test" example as a case study to prove that P != NP?

      Once you know the password, it's easily verifiable. But to crack it, you have no other choice than to apply brute force, no?

      • I'm probably not a smart computer science person, but I can hopefully explain this. There are two ways of interpreting your password test idea.

        The first idea is where we *don't* have access to the password database itself. In that case, yes, we can actually prove in a rigorous fashion that there's no better general method than brute force. However, this is not the situation that P ?= NP is asking about, because P ?=NP is talking about problems where we have access to all the relevant information. (Your

        • Yes, it does, thanks. I wasn't aware that having access to all the relevant information is part of the P vs NP "rule set". So your example with a hashed password nicely illustrates how this seemingly simple idea becomes a much more complex study about cryptographic algorithms, in terms of how it relates to the P vs NP problem.
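
          A minimal sketch of the hashed-password framing from the reply above (the hash function choice and the 4-letter lowercase password space are assumptions for illustration):

            import hashlib
            import string
            from itertools import product

            def verify(candidate, stored_hash):
                # Checking a candidate is one hash evaluation ("easy to check").
                return hashlib.sha256(candidate.encode()).hexdigest() == stored_hash

            def brute_force(stored_hash, length=4):
                # Finding the password means searching the whole space:
                # 26^length candidates in the worst case ("hard to solve").
                for letters in product(string.ascii_lowercase, repeat=length):
                    candidate = "".join(letters)
                    if verify(candidate, stored_hash):
                        return candidate
                return None

            stored = hashlib.sha256(b"gate").hexdigest()   # hypothetical stored hash
            print(verify("gate", stored))   # True, essentially instant
            print(brute_force(stored))      # "gate", after up to 26**4 = 456,976 tries

          As the reply notes, this only models the case where all the relevant information (here, the stored hash) is available to the solver, which is exactly the setting P vs. NP asks about.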

    • Exactly, this is the "Generalissimo Francisco Franco is still dead" of "news".
      P=NP isn't solved? You don't say!

  • by jd ( 1658 )

    There's the obvious question: how far can you approach P with such problems using quantum computers? Presumably not very far, at least for problems that are considered interesting; otherwise, even with no actual solution, mathematicians wouldn't be complaining.

    • Re:Hmmm. (Score:4, Interesting)

      by JoshuaZ ( 1134087 ) on Wednesday October 27, 2021 @05:24PM (#61933387) Homepage
      We strongly suspect that BQP https://en.wikipedia.org/wiki/BQP [wikipedia.org], the set of problems a quantum computer can solve in polynomial time, does not contain NP. But this is a strictly stronger claim than P != NP, since P is contained in BQP. Not too surprisingly, we can't prove that much in this regard. We can't even prove the reverse direction, that BQP is not contained in NP, which we're also pretty confident of. We do know that there is an oracle A where BQP^A is not contained in PH^A, where PH is the polynomial hierarchy (a very big set which contains NP). Scott Aaronson, who is mentioned in the snippet above, recently announced (although I don't think the preprint is public yet) a related set of results with William Kretschmer and DeVon Ingram. In particular, they've apparently constructed an oracle B where P^B = NP^B but BQP^B != QCMA^B. (Roughly speaking, QCMA is to BQP as NP is to P https://en.wikipedia.org/wiki/QMA#Related_classes [wikipedia.org].)
    • How far can you approach P with such problems, using quantum computers?

      That is a complicated question and there is an entire sub-field of computer science dedicated to it: Quantum complexity theory [wikipedia.org].

    • Setting technology aside and working more abstractly, I'd like to know if I can prove that an optimal algorithm for any NP problem exists, and how to prove whether a particular method is the optimal one.

      Finding algorithms for NP problems is itself an NP problem (this links to the idea of NP-complete). Because of this, lots of things are going to snowball if we figure out P vs. NP. But I'm not convinced the question is even solvable, and we're probably more likely to prove that it can't be done than to prove that P = NP.

      • by jd ( 1658 )

        That sounds reasonable enough.

        How is "optimal" defined in the above, though? There are several possible ways I can see it being defined, each of which would put a different flavour on not being able to show that such a solution exists.

        • Given that we're talking about (deterministic) polynomial time versus non-deterministic polynomial time, optimization of the number of steps is the question we usually face in computer science. It suggests the scalability of an algorithm rather than directly answering any real-world notion of run time.

          Hopefully we can hack around the halting problem too.

          • by jd ( 1658 )

            Agreed, I was thinking more in terms of whether it's a proof that a given set of algorithms has the best Big O value in all cases, a proof that a given set of algorithms that may have a variable order of complexity is best on some definition of average (so it's optimal on aggregate rather than in every case), or an existence proof that one of those two sets exists whether or not any of the members can ever be established.

      • But I'm not convinced the question is even solvable, and we're probably more likely to prove that it can't be done than to prove that P = NP.
        The question is probably not solvable.

        But I would say it is safe to assume that P != NP is true.

      • by mark-t ( 151149 )
        Even a proof that it is unprovable would be a fantastic achievement in the field.
      • Setting technology aside and working more abstractly, I'd like to know if I can prove that an optimal algorithm for any NP problem exists, and how to prove whether a particular method is the optimal one.

        Certainly an optimal algorithm for any NP problem exists; a problem in NP is by definition computable, and of all the possible algorithms, some subset will be optimal. But if you could create an algorithm that solved an NP-complete problem and prove that it is optimal, you'd have settled P vs. NP; if the algorithm r

        • by jd ( 1658 )

          It would depend a little on how optimal is defined as to whether they could be found, which I assume is what's intended.

          We can also define "problem" to mean the entire category and not just a specific case. If we then make it a strict requirement that optimal will always be better than brute-force for any case in a category of problem, in addition to any other requirement, then would we necessarily be able to show that? Or will solutions necessarily fluctuate better/worse than the baseline? If you cannot pr

  • The interesting bit (Score:5, Informative)

    by Dixie_Flatline ( 5077 ) <<moc.liamg> <ta> <hog.naj.tnecniv>> on Wednesday October 27, 2021 @05:32PM (#61933407) Homepage

    This is mostly just a rundown of the state of the research, and a bit of an excuse to venerate Cook, who certainly is deserving.

    But here's the bit on the proof that the summary alludes to:

    The result published in July presented a proof of exactly that long shot. But it was only the latest in a long tradition of proofs that don’t pass muster. Within a day of publication, in a turn of events worthy of Monty Python, the paper was removed from the online journal; then it seemed to reappear briefly before disappearing permanently. It was the most recent version of a paper that the author had posted more than 60 times to the arXiv preprint server over the last decade. The journal’s editor in chief explained on Twitter that the result had been rejected, but in a case of human error, the paper’s disposition had somehow changed from “reject” to “accept” and the proof had found its way to publication.

    • But it was only the latest in a long tradition of proofs that don't pass muster.

      Summarizing your summary. Thank you!

  • by presidenteloco ( 659168 ) on Wednesday October 27, 2021 @05:39PM (#61933429)
    we might actually be able to read TFA.
  • That is what in essence makes it harder already, regardless of the underlying problem.
    Imagine having to write that extra character before even addressing the problem itself.

  • Societal changes of utopian proportions, like atomic energy too cheap to meter and blowing people up with atomic bombs.
  • I recently read Siobhan Roberts' book, Genius at Play: The Curious Mind of John Horton Conway (2015), and enjoyed it immensely. It's not a traditional biography, in part because Conway is not a traditional subject.

    While writing the Conway biography, she was a Director's Visitor at the Institute for Advanced Study, in Princeton, and a Fellow at the Leon Levy Center for Biography, at the CUNY Graduate Center in New York City.

    I don't know how you earn that gig at her age, but mad respect for getting herself no

  • by angel'o'sphere ( 80593 ) <{ed.rotnemoo} {ta} {redienhcs.olegna}> on Wednesday October 27, 2021 @08:20PM (#61933889) Journal

    "P" represents problems that a computer can handily solve. "NP" represents problems that, once solved, are easy to check
    Complete nonsense.

    NP means - can not be done in polynomial time based on the size of input. It has absolutely nothing to do with verifying a solution - regardless of how quickly or slowly the solution was calculated or how quickly or slowly it can be verified.

    It literally means: "non/not polynomial"

    • "In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is 'yes', have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine."

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      Did nobody ever fix Slashdot for Unicode?

    • by godrik ( 1287354 )

      NP means - can not be done in polynomial time based on the size of input.

      You are wrong. NP is short for Non-Deterministic Polynomial Time. It is the set of problems that admit a polynomial algorithm on a Non-Deterministic Turing Machine.
      Being polynomial on an NDTM is equivalent to being verifiable in polynomial time on a Deterministic Turing Machine. Because of that, NP is often defined as "problems that are easy to check once solved", since that definition is easier to explain and the two are equivalent.

      In case you don't believe me, see "Computers and Intractability" by Garey and Johnson.
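
      As a concrete illustration of the "verifiable in polynomial time by a deterministic Turing machine" definition quoted above, here is a small sketch (my own example, not from the book): a verifier for graph 3-coloring checks a proposed certificate in time linear in the number of edges, even though no polynomial-time algorithm is known for finding such a coloring in general.

        COLORS = {"r", "g", "b"}

        def verify_3_coloring(edges, coloring):
            # Linear-time verification: every vertex gets one of three colors,
            # and no edge joins two vertices of the same color.
            return (all(c in COLORS for c in coloring.values())
                    and all(coloring[u] != coloring[v] for u, v in edges))

        edges = [(0, 1), (1, 2), (2, 0), (2, 3)]         # made-up instance
        certificate = {0: "r", 1: "g", 2: "b", 3: "r"}   # a proposed coloring
        print(verify_3_coloring(edges, certificate))     # True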

      • Re:Nonsense (Score:4, Interesting)

        by angel'o'sphere ( 80593 ) <{ed.rotnemoo} {ta} {redienhcs.olegna}> on Wednesday October 27, 2021 @10:02PM (#61934091) Journal

        You are wrong. NP is short for Non-Deterministic Polynomial Time. It is the set of problems that admit a polynomial algorithm on a Non-Deterministic Turing Machine.
        No, I'm not wrong. I only left out "deterministic" because that is not relevant for laymen.

        You are mixing up:
        a) NP (that is what I talked about)
        b) NP-hard
        c) NP-complete

        The article is mixing up NP-complete with NP, which are quite different things. Perhaps I should have worded that more clearly.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        • by godrik ( 1287354 )

          I haven't read the article, but the summary uses the terms P and NP correctly. I certainly do not confuse NP, NP-Complete, and NP-Hard; actually I wrote an NP-Completeness proof last month.

          NP cannot mean Non-Polynomial, because all problems in P are also in NP; they are also in co-NP, and they are also in PSPACE.

          Many problems in NP are among the most stubborn and societally impactful problems we know. This is essentially true for all the problems we know of which are in NP but may or may not be in P. (All of these are stu

    • No. NP stands for Non-deterministic Polynomial. There are complexity classes that contain NP and are believed to be even harder: PSPACE (problems solvable with a polynomial amount of memory) and EXPTIME (problems solvable in exponential time, where answers are not necessarily checkable in polynomial time).

    • NP means - can not be done in polynomial time based on the size of input.

      ITYM: NP means many people suspect it cannot be done in polynomial time based on the size of the input. Unless you can prove me wrong...

  • by 140Mandak262Jamuna ( 970587 ) on Wednesday October 27, 2021 @09:29PM (#61934003) Journal
    Inverting an nxn matrix is a common linear algebra problem. Computers are so good at it that it is used very commonly.

    Every non-singular matrix has an inverse. Usually engineering problems result in sparse matrices whose inverses are dense. Since, mathematically, there are equally many sparse and dense matrices, it should be possible to construct a random dense matrix that has a sparse matrix as its inverse.

    But that is never the case. We can never create a dense matrix, using any algorithm, that will have a sparse inverse. Why?

    Inverting a matrix is hard; verifying that two given matrices are inverses of each other is easy. Does that make it an NP problem?

    • But that is never the case. We can never create a dense matrix, using any algorithm, that will have a sparse inverse. Why?

      I assume you mean with a complexity of

      Inverting a matrix is hard; verifying that two given matrices are inverses of each other is easy. Does that make it an NP problem?

      Not in this context: in the context of complexity classes, "easy" generally means polynomial, regardless of the degree, and "hard" is, I believe, the opposite. Generally people mean exponential, since every problem in NP has a worst

    • by godrik ( 1287354 )

      Assuming your sparse matrix is full rank, computing its inverse can be done in polynomial time: the inverse has about a quadratic number of elements and takes, I would guess, a cubic number of operations (maybe it is power 4). So it is actually polynomial.

      If it is not full rank, you have to be a bit more careful about how you represent things, but it still works.

      So the problem is in P. Note that all problems in P are also in NP. The class of problems in NP that we think are hard is called NP-Complete.

      A good book on the s
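
      A small NumPy sketch of the point above (the size and tolerance are arbitrary choices): both computing an inverse and verifying one are polynomial, which is why the problem sits in P.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n))   # dense random matrix, almost surely full rank

        # "Solving": computing the inverse, roughly cubic in n.
        A_inv = np.linalg.inv(A)

        # "Verifying": one matrix multiply plus a comparison with the identity,
        # also polynomial (and cheaper in practice than inverting).
        print(np.allclose(A @ A_inv, np.eye(n), atol=1e-8))   # True, up to rounding

      Easy verification only puts a problem in NP (and P is contained in NP); it does not by itself make the problem hard.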

  • the Holy Grail of theoretical computer science, worth a $1 million prize and fame rivaling Aristotle's forevermore.

    If I were to solve P=NP, I would make sure it was not wasted for a measly million dollars. That would mean practically breaking every encryption scheme, solving the most complex route optimizations, finding the best layouts of complex circuits, and practically solving many, many hard problems.

    • If I were to solve P=NP, I would make sure it was not wasted for a measly million dollars. That would mean practically breaking every encryption scheme, solving the most complex route optimizations, finding the best layouts of complex circuits, and practically solving many, many hard problems.

      Only if your proof is (a) constructive and (b) the resulting algorithm is practical.

      Also, it wouldn't be wasted. Like how?

  • Just to be clear, NP problems aren't out of reach now and never will be; it is guaranteed-optimal solutions that we often cannot find. Stochastic and heuristic approaches have proved to work well for acceptable solutions.

    • by gweihir ( 88907 )

      Indeed. In addition, NP problems of sufficiently small size can be entirely within reach, while polynomial problems with high exponents can be completely out of reach even for small sizes. P vs. NP is a _theoretical_, _asymptotic_ question. Practical computing is different.
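
      As an illustration of the heuristic point above, here is a minimal sketch of "good enough" solving for an NP-hard problem (the random instance and parameters are illustrative assumptions, not a benchmark): nearest-neighbour construction followed by 2-opt improvement for the travelling salesman problem.

        import math
        import random

        random.seed(1)
        cities = [(random.random(), random.random()) for _ in range(60)]

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def tour_length(tour):
            return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        # Greedy construction: always visit the nearest unvisited city.
        unvisited = set(range(1, len(cities)))
        tour = [0]
        while unvisited:
            nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
            tour.append(nxt)
            unvisited.remove(nxt)

        # 2-opt: keep reversing a segment whenever that shortens the tour.
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(candidate) < tour_length(tour) - 1e-12:
                        tour, improved = candidate, True

        print(round(tour_length(tour), 3))   # a decent tour, no optimality guarantee

      There is no proof of optimality, but for many practical instances this kind of heuristic lands close to the best known tour.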

  • by clambake ( 37702 ) on Wednesday October 27, 2021 @10:15PM (#61934129) Homepage

    which this comment is too narrow to contain.

  • People like to describe it as 'probably the central unsolved problem of theoretical computer science.' That's a comical understatement. P vs NP is one of the deepest questions that human beings have ever asked.

    42

  • P vs. NP is a _theoretical_ question with no immediate practical impact. A high-order polynomial is just as intractable in practice as an exponential. At the same time, an exponential with a base close to 1 can still be practical for many actual scenarios and can even be better there than a polynomial. It really depends.

    • by godrik ( 1287354 )

      well, yeah. It is technically true.

      But whenever we get a problem that's polynomial, we usually get pretty good at minimizing its complexity. Most polynomial problems have a degree less than 4. I remember the first approximation algorithm for some 3D packing problem was around n^1000, and within a few years it dropped to n^6.

      Once we understand why something is polynomial, we usually get pretty good at driving the complexity down to something practical.
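
      A quick back-of-the-envelope comparison of the growth rates discussed in this subthread (the base 1.001 and the sample sizes are arbitrary illustrations):

        import math

        # Orders of magnitude for a degree-6 polynomial, a "mild" exponential
        # with base close to 1, and plain 2^n.
        for n in (10, 100, 1_000, 10_000, 100_000):
            print(f"n={n:>7}  "
                  f"n^6 ~ 10^{6 * math.log10(n):.0f}  "
                  f"1.001^n ~ 10^{n * math.log10(1.001):.0f}  "
                  f"2^n ~ 10^{n * math.log10(2):.0f}")

      Up to n in the tens of thousands the "mild" exponential stays far below n^6, which is the sense in which an exponential with a base close to 1 can be perfectly practical, while 2^n is hopeless almost immediately.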

  • >Many NP problems correspond to some of the most stubborn and urgent problems society faces.
    No, society's biggest problem is how we are going to get along with each other. Let's keep things in proportion here, silly computer scientists.

  • Existing workarounds are good enough that I wouldn't notice the slight improvement.
  • there are technical solutions. It just happens that the technical solutions are not socially acceptable.
