Science

Does P = NP?

Morabbin writes: "A paper claiming to present a polynomial time algorithm for the minimal clique partition problem has been put up on Stas Busygin's Repository for Hard Problems Solving. It seems to be a genuine attempt (i.e. non-crackpot). Since the minimal clique partition problem is NP-hard, the existence of a polytime algorithm would imply that P = NP. The jury is still out." All right, I'm super skeptical, but it's been 2 years since I messed with NP-complete, NP-hard, and all that stuff. But I do know enough to doubt it. Where are the CS grad students in the audience?
This discussion has been archived. No new comments can be posted.
  • Convex hull is polytime; basically it's 3D sort.

    NP does not stand for NON polynomial, it stands for nondeterministic polynomial.

    Every problem in P is in NP.

    Decryption brute force by definition could not be accomplished in polytime; the problem is if you can always guess right in polytime, then it's a different story (essentially how a nondeterministic turing machine works).

    </rant>

  • As everyone else has pointed out, way too obvious. I mean, come on - divide by 0?? Not to mention the deliberate (sp.?) and shallow misunderstanding of the subject.

    But, seeing as you changed the subject to phoney math, how about this:

    1 = 1 (right?)

    1 = 1 * 1
    1 = -1 * -1

    so:

    1 * 1 = -1 * -1

    now x = y => sqrt(x) = sqrt(y),

    so:

    sqrt( 1 * 1 ) = sqrt( -1 * -1 )

    and as we all know, sqrt( x * x ) = x,

    so:

    1 = -1!

    Much more satisfying - no divide by 0! Just the cavalier assumption that sqrt is a single-valued function and some blithe disregard for the domain of the values discussed.
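
    You can even watch a machine refuse to play along; a quick sketch in Python (cmath's sqrt is the principal branch, which is exactly the single-valuedness being abused):

    import cmath

    # The principal square root is single-valued: sqrt(x * x) is NOT always x.
    print(cmath.sqrt(1 * 1))     # (1+0j)
    print(cmath.sqrt(-1 * -1))   # also (1+0j), not -1
    # The "proof" needs sqrt(-1 * -1) to come out as -1; the principal branch
    # never does, which is where the fake proof dies.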

  • My intelligent way of solving traveling salesman involved slapping a regular grid on the area such that an average of seven stops were in each gridbox (cube, n-cube, ...). Use an identical basic plan, basic circle then across the diameter, with orthogonal distance from the paths determining which veered to pick up the odd stops. Start in one corner and work in one basic direction. It gives you a path in P time, that is moderately close to optimal. By virtue of the transformation it can work on all NP problems. Is it conventional? No. Is it algorithmic? No.
  • "Minimum clique partition" refers to the minimum number of complete subgraphs into which the subject graph can be partitioned, not the smallest subgraphs (which is trivial).
  • All algorithms that run on modern computers have runtimes that can be described in terms of polynomial functions. Some examples of these are:
    N, N^2, log N, N log N, N^N

    The "N" in all of these describes in some fashion the size of the input. Generally when one discusses algorithm speed, the terminology O(N) or O(logN) is used as opposed to just stating the polynomial. You should read "O(N)" as "Big-Oh of N" or "Order N". There is also a "little-oh of N" which describes lower limits on the algorithm's runtime and there is "Omega of N" which is a waste of time.

    Now you may be asking what is the f*ing point... I know I did in my sophomore year of Computer Science classes...

    Basically the point is only apparent when dealing with difficult problems. Before you go and code a solution to a difficult problem, you want to ensure that you won't be wasting your time. So you formalize your algorithm into polynomial form. (This can be difficult...) You can then tell by the form of the polynomial just how long the algorithm will take to run as the size of the input increases. So, if I formalize my algorithm to be O(NLogN)... (incidentally, most sort algorithms are O(NLogN), in case you wanted an example...) Anyhow, I know my algorithm is O(NLogN), which means as N, or the size of the input, increases, my runtime increases according to this function.
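
    To see that concretely, here's a rough sketch (Python, untested) that counts the comparisons a merge sort actually makes and prints them next to N*log2(N), so you can watch the two track each other:

    import math, random

    def merge_sort(a, count):
        # Sorts a copy of list a, tallying element comparisons in count[0].
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left = merge_sort(a[:mid], count)
        right = merge_sort(a[mid:], count)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count[0] += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    for n in (100, 1000, 10000):
        count = [0]
        merge_sort([random.random() for _ in range(n)], count)
        print(n, count[0], round(n * math.log2(n)))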

    In case you are wondering how the hell logarithms get into computer algorithms, it's a by-product of recursion.

    Back to the point. Any algorithm described as having the form O(x^N) is considered to be "NP" which stands for Non-Polynomial... which you can see is plainly the case. The "N" is in the exponent, and therefore the runtime grows exponentially, not polynomially.

    Algorithms that have NP forms suck because they just take too damn long with inputs above certain thresholds. NP algorithms are grouped into two separate groups that have differing properties. One is NP-complete, and the other is NP-hard. Let's leave it at: "NP-complete problems have better solutions that are more practical than NP-hard problems". There are of course more formal definitions, but this should suffice for now.

    Basically NP problems are some of the most infamous in computer science and were there polynomial solutions to these problems the world would be a better place. Some of these are: Minimum Spanning Trees, The Travelling Salesman Problem, Convex Hulls (Somebody check me on that one ???), and many more including decryption brute forcing and some of the problems that go with it such as large number factoring and prime-tests...

    Some scientists believe that the set of NP-complete problems can all be mapped into polynomial algorithms, others do not. Some scientists believe that _some_ of the NP-complete problems can have a polynomial map and others don't and that therefore not all the algorithms that are considered to be NP complete actually are. Basically at this point the subject digresses into a bunch of academic hooey.

    You should come away with this: There are some problems that people would like to solve that today's computers can't solve with practical input sizes in a million years. Computer Science has developed a formalism for working with these problems, and it is of interest to some mathematically minded programmers to read about these things and play with the solutions. It helps programmers keep their minds sharp and usually helps us keep our own algorithms efficient.

    I hope that I have sated your curiosity to some degree. I have glossed over and simplified some very complex subjects, partly because I would embarrass myself and get things wrong if I went into more detail. I am a software engineer, not a theoretician. The difference is: theoreticians are smarter, but software engineers make more money. :-)
  • Actually it is even more general than that. Any NP complete problem can be written as a special case of any other NP complete problem; this is why it is such a big deal to do NP in P time. Once someone finds an algorithm for a single NP problem, they have solved *ALL* NP problems in polynomial time.

    This is also why proving an isomorphism proves x in {NP}.
  • Fscking Submit button is way too close to the "No Score +1 Bonus" check box!

    --
  • Oh we're out here in the audience, but know better than to get involved with that stuff. I learned enough to get past my PhD comprehensives and will be happy to leave P/NP at that.
  • I take it that you're saying you don't believe in Turing's thesis. Could you specify a bit more what you mean by "intelligent way" as opposed to "analytical way"? Specifically, do you see this 'intelligent way' as being specifiable by a good old formal-language algorithm?
  • So the statement "A problem is said to be within the class of complexity P if ... Otherwise the problem is said to be in complexity class NP." is not strictly accurate.

    True. Thanks.
  • by PollMastah ( 174649 ) on Tuesday October 10, 2000 @05:49AM (#718614) Homepage
    FYI, I just got an email reply from Prof. Stephen Cook who, after looking at the URL I sent him, said:
    I doubt that the algorithm performs as claimed. I'll send it on to some graph theorist; perhaps Mike Molloy.

    Prof. Cook is a respected authority on complexity theory, so if he is doubtful about this paper, I'd take an extra large grain of salt with it...

  • It has been a while since I did the complexity course, and I have not read the complete paper, but I found a small mistake (but it might be a big one).

    quote: The running time of the algorithm is equal to O(n^6), where n is the number of graph vertices.

    n should be the number of graph vertices and graph edges together (think of Turing Machines).
  • > My intelligent way of solving traveling salesman involved [...]

    Isn't that what usually is called "a heuristic" in algorithmics?
  • RSA is based on the difficulty of factoring.

    Factoring is provably in NP.

    If P=NP, then there is a polynomial time factoring algorithm.

    A polynomial time factoring algorithm would significantly reduce the computation required to break RSA.

    The point you're missing is that if there is a polynomial time algorithm for an NP-complete problem, then there is a polynomial time algorithm for every problem in NP.
  • by fish ( 6585 )
    Those kinds of articles appear every 3 months on sci.math...
  • by adubey ( 82183 ) on Tuesday October 10, 2000 @04:51AM (#718634)
    If this does, in fact, prove that P=NP, perhaps the biggest open question in computer science, then why is it being published on a website and not in a peer-reviewed academic journal?

    There are problems when "science" is reported in the media before it is peer-reviewed: Fleischmann and Pons, anyone?

    Then again, perhaps I shouldn't speak so quickly... the site is being Slashdotted and I can't download the paper to see for myself!
  • In most branches of CS no-one really cares about journals, the action is all at conferences.

    Ah, I wish that were true. Unfortunately if you are seeking tenured positions, publications in peer-reviewed journals are still the 'gold standard'.

  • by slickwillie ( 34689 ) on Tuesday October 10, 2000 @08:40AM (#718640)
    It's been so long since I was a CS grad student, even Hello World looks NP-complete to me.
  • by slickwillie ( 34689 ) on Tuesday October 10, 2000 @08:47AM (#718644)
    If you can solve that, you can solve them all and you're THE MAN. Or - preferably - THE WOMAN, of course.

    Or the slime mold.

    I just tried an experiment where I laid out a maze in the classic Traveling Salesman problem. Then I put my smartest slime mold "Silly Green Putty" on it and it was solved in about 18 hours.
  • When discussing independent vertex sets on a graph, Plotnikov seems to say that the set of all points in the graph is such a set. (An independent vertex set is a set of vertices in the graph with no edges among the vertices in the set; thus, the whole graph is an independent set if and only if there are no edges in it.)

    This is on page 7, when he talks about partitioning the vertex set X.

    When I saw this, plus all the minor errors in the paper up to that point, I gave up on reading the rest. Maybe he doesn't use that result later on or something, but I have other things to do today.
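
    For what it's worth, the definition above is trivial to check mechanically; a quick sketch in Python (my own convention: the graph is a set of 2-element frozensets):

    def is_independent_set(edges, vertices):
        # True iff no edge of the graph joins two vertices of the candidate set.
        return not any(frozenset((u, v)) in edges
                       for u in vertices for v in vertices if u != v)

    edges = {frozenset(e) for e in [(1, 2), (2, 3)]}
    print(is_independent_set(edges, {1, 3}))     # True
    print(is_independent_set(edges, {1, 2, 3}))  # False: the whole vertex set is
                                                 # independent only in an edgeless graph
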
  • There is no recursive solution to the halting problem. It may be that there is a physically possible, non-recursive solution, which would invalidate Church's Thesis, but no mathematics.
  • I am of the opinion that NP = P (or not) is one of the hardest problems in mathematics...

    It's not just your opinion: there's a sense in which you can prove it's one of the hardest problems in computer science.

    In one of my complexity theory textbooks (I'll post title/author after I get off work), it's proved that a proof of P!=NP cannot be done with diagonalization, which is the traditional means of proving two classes distinct (think of the proof that there are more real numbers than rational numbers, or the proof that the halting problem cannot be solved by a Turing machine).

    So if you're going to prove that P!=NP, you're going to have to come up with a whole new method of proof. (Note: this was around four years ago, and complexity theory is a rapidly-moving field, so there may be some advances I'm not aware of.)

    Of course, proving P=NP will be easy, if it's true. Just find a polynomial time solution to an NP-complete problem. :) Which is what this paper claims to do.

  • I have discovered a solution to the stationary salesman problem that takes polynomial time, however it is too stupid to fit in this post.
  • by SIGFPE ( 97527 ) on Tuesday October 10, 2000 @04:58AM (#718657) Homepage
    Check out the article in Scientific American this month on that great game minesweeper. Someone has proved that the problem of determining whether a position in the game is consistent is NP-complete and it gives an idea of how the proof goes. Neat!
    --
  • Some years ago Cook showed how ANY problem in NP can be reduced to SAT. SAT is the problem to show if for a given boolean formula there is an assignment of values to the variables in it which makes the formula true.
    For the gory details see chapter 7 of
    Michael Sipser "Introduction to the Theory of Computation" (PWS, Boston 1997)
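
    To make SAT concrete, here's a brute-force sketch in Python (CNF clauses, each literal a variable index, negated by sign; exponential in the number of variables, which is of course the whole problem):

    from itertools import product

    def sat(clauses, n_vars):
        # Try every assignment; a clause is satisfied if some literal in it is true.
        for bits in product((False, True), repeat=n_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                return bits
        return None

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(sat([[1, -2], [2, 3], [-1, -3]], 3))   # (False, False, True)
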
  • Like my fellow slashdotters wrote, P=NP has wide implications for the difficulty of solving NP-complete problems (or those in NP ∩ co-NP, such as primality, which is used for cryptography) and has a serious effect on encrypted data.

    HOWEVER, during all those dozens of years computer scientists have been trying to show P!=NP (with no success), they achieved success in showing another thing: they showed that if P=NP, everything (well, almost everything) is easy.
    In (not so) short, there is an infinite "pyramid" of complexity classes built one on top of the other called the Polynomial Hierarchy (or PH). This includes problems starting from P (easiest) and going higher and higher to "infinitely hard" problems. I should say they are not really "infinitely hard" because you can solve them all in Polynomial Space (if you don't know what that is, just ignore this sentence) but they are very hard (polynomial space is also known as the class of "all interesting problems").

    Now, what they showed is that if you "collapse" one layer of the pyramid, it all collapses to that layer. And showing P=NP is collapsing the very first layer.

    What's my point? Wait a second, there is one on the way(assuming I don't forget it).

    All right, says Mr. Slash D. Wiz, so what if P=NP makes all current cryptography obsolete? They can now (with the new computation power) come up with new cryptographic tools and such. Well, if P really equals NP, this means that it is going to be very hard (I don't want to say impossible) to find a cryptographic model that won't be easy to crack.
    You see where I was getting with all that math mumble? Not only is current cryptography obsolete if P=NP, the whole field of cryptography might go the way of alchemy...

    So, you'd better hope this isn't true. It might put an end to all those nasty complexity courses in the Computer Science department, but you could use the freed class time to show your privacy the way out of your life.

    ---
    Oh yeah???

  • by e_lehman ( 143896 ) on Tuesday October 10, 2000 @06:26AM (#718669)

    As far as I can tell, his first algorithm (p. 5-6) contains a simple array-indexing error. Here's my paraphrase:

    INPUT: M, an n x n array of booleans.

    1. Let N = n, i = 1.
    2. If N = 0, halt.
    3. If (some condition on the ith row of M)
      • Let i = i + 1, N = N - 1.
      • Go to step 2.
    4. else
      • Let i = i + 1.
      • Go to step 2.

    If the "else" branch is ever taken, then index i will reference a location outside of array M before the procedure halts, right? Here he is just reviewing a 40-year-old algorithm for bipartite matching, but it is disturbing that his very first algorithm contains a basic error.

    Or am I missing something? :-)
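
    For the skeptical, here is that paraphrase rendered directly in Python, with a made-up row condition; the moment the else branch fires, i eventually runs past the end of M:

    def run(M, condition):
        # Direct transcription of the paraphrase above: N counts down only in
        # the "if" branch, but i advances in both branches.
        n = len(M)
        N, i = n, 1
        while N != 0:
            row = M[i - 1]                 # blows up once i > n
            if condition(row):
                i, N = i + 1, N - 1
            else:
                i = i + 1

    M = [[True, False], [False, True]]
    run(M, lambda row: row[0])             # second row fails the condition, so i
                                           # reaches 3 and M[2] raises IndexError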

  • Actually, aside from your math mistake that someone else pointed out :), the ability to get a polynomial time algorithm for an NP-complete problem could be quite problematic for cryptography, even if the terms are in the range you specified. The reason is that a polynomial time algorithm can often be transformed into a parallel algorithm with lower coefficients. So, it could be possible, ideally, to take your O(n^1260) and transform it into an O(n) algorithm running on 1260 processors. Of course, that's the absolutely best case, but the fact is that polynomial time algorithms can be made to benefit from parallel processing, which is quite cheap these days (ie, "We could make a Beowulf cluster of these!" ;), making less efficient algorithms useful.
  • Take graph isomorphism, for example. It is provably NP hard, however there is a linear time algorithm which works well in practice. This is used, for example, in a CAD system where you are comparing the electrical circuit network of your schematic or procedural description with an extracted network from the VLSI mask, to verify that your physical layout implements the same circuit as your higher level description.

    In practice, a hashing algorithm will give you the answer in linear time, with almost any degree of certainty. There is a small probability of a false positive, however.

    It seems like in many cases a probabilistic algorithm like this gives practically certain results. I don't know about factoring numbers, but I sometimes think that a probabilistic algorithm such as the graph isomorphism one could be applied to the factoring problem.
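
    A toy version of the hashing idea (Python, untested): give each vertex a signature built from its degree and its neighbours' degrees, and compare the sorted signatures. Agreement can be a false positive, as noted, but disagreement proves the graphs are not isomorphic:

    def invariant(adj):
        # adj: dict mapping vertex -> set of neighbours.
        deg = {v: len(ns) for v, ns in adj.items()}
        return sorted((deg[v], tuple(sorted(deg[u] for u in ns)))
                      for v, ns in adj.items())

    def maybe_isomorphic(adj1, adj2):
        return invariant(adj1) == invariant(adj2)

    path = {1: {2}, 2: {1, 3}, 3: {2}}          # the path 1-2-3
    tri = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}     # a triangle
    print(maybe_isomorphic(path, tri))          # False: signatures differ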

  • I studied CS (graduated 2 years ago) and my first thought upon reading this post was along the lines of "NP-complete .. that sounds familiar" .. :)

  • Exactly...one would think that a TRUE discovery of such a proof would be published in a more well recognized journal.

    But what do I know
  • Computer scientists such as Mr. Plotnikov are going to very complex lengths to prove that P=NP, but their approach is much too circuitous.

    A true mathematician would realize that we only need to establish that P=0, or that N=1.

    Corby
  • Ok, we prove that 0.999... (with an infinite number of nines) equals 1.

    The problem here is that this is generally accepted as true, unless you're an extremely cantankerous constructivist by the name of Fred Richman. [fau.edu] :-)

    Actually, reading the above-linked text is guaranteed to cause 9 out of 10 mathematicians to be blinded with rage...but the guy is really a math prof.
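
    For anyone who hasn't seen it, the standard argument goes:

    Let x = 0.999...
    Then 10x = 9.999...
    Subtracting: 10x - x = 9.999... - 0.999... = 9
    So 9x = 9, hence x = 1.

    (The quibbles, as I understand them, target steps like that subtraction on infinite decimals, not ordinary arithmetic.)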

  • I'd like to mention a few facts about the problem, the papers, maths, and all those posts in general:

    ->Thousands of false proofs of either P=NP or P!=NP have been submitted.
    ->The fact that it hasn't been either proved or disproved doesn't mean it is false. In fact, one of the two answers has to be true, and both are unproved!!!
    ->P and NP are sets of algorithms, not real or natural numbers, so P and NP cannot be 0 or 1, and besides, even if that were the case, then it could be many other values (the correct answer would be 0 and x).
    ->Earth is populated by 90% morons who don't know what they're talking about, and most of them post on Slashdot. Conversely, 90% of posts are from morons.
    ->One must have a lot of time to lose to reply here instead of just reading the paper, and besides, it is full of obvious errors.
  • Ever since I fumbled with NP completeness and NP hard problems, I've believed that it should be possible to solve several of these problems (including the one in the paper) in a P method, but only by being intelligent about the problem and not solving it cold analytically. This paper seems to present an algorithm that can do it. It's very interesting if it would work but I am skeptical as well.

  • The size of the graph description is quadratic in the number of vertices, not exponential. That's O(n^2), not O(2^n). Big difference. By the way, this particular mistake makes a good pet peeve -- it's in the same class as saying that RSA encryption depends on the difficulty of factoring large prime numbers. I think I've even seen Brooks' Law justified by the argument that "the number of communication links between people grows exponentially with the number of people". NO! QUADRATICALLY! NOT EXPONENTIALLY!

    This is even worse, though, because it indicates a real misunderstanding of the concepts. A lot of people seem not to appreciate the difference between the equations y = x^2 and y = 2^x: they seem to think the two are almost the same because their graphs look so similar (they both curve sharply upward, right?). Well, that's true if you only look at them for small y, like y < 10 or 20 (and, of course, positive x). But try relabelling your axes to use x values from 0 to 33 and y values from 0 to 1000 and graph the two functions again: y = 2^x looks a lot steeper, doesn't it? Now, try x values to 1000 and y values to 1,000,000: y = x^2 still looks about the same, but you can hardly draw y = 2^x as anything but a (nearly) vertical line, since, if the width of the paper is 1000, 20 is so close to zero. See the difference yet?
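
    If you'd rather not draw, the same demonstration as a quick table (Python):

    print("  x         x^2          2^x")
    for x in range(0, 34, 3):
        print(f"{x:3d} {x**2:11d} {2**x:12d}")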

    David Gould
  • Start in one corner and work in one basic direction. It gives you a path in P time, that is moderately close to optimal.
    Unfortunately, moderately close to optimal isn't good enough. There's an entire field that's devoted to coming up with algorithms that are approximate solutions to NP-complete problems. These algorithms are usually expressed in terms of bounds: this algorithm is guaranteed to find a path that is no more than 2 times the length (or 1.5 times, or 1.1 times...) of the optimal path. So this kind of thing is definitely worth exploring, but an approximate solution to an NP-complete problem gets us no closer to knowing if P=NP.
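
    There's a classic concrete example: for metric TSP (distances symmetric and satisfying the triangle inequality), building a minimum spanning tree and walking it in preorder gives a tour of at most 2 times the optimal length. A rough sketch (Python, untested):

    import heapq

    def mst_preorder_tour(dist):
        # dist[i][j]: symmetric distances satisfying the triangle inequality.
        n = len(dist)
        # Prim's algorithm, rooted at vertex 0, recording the tree structure.
        children = {v: [] for v in range(n)}
        in_tree = {0}
        heap = [(dist[0][v], 0, v) for v in range(1, n)]
        heapq.heapify(heap)
        while len(in_tree) < n:
            d, u, v = heapq.heappop(heap)
            if v in in_tree:
                continue
            in_tree.add(v)
            children[u].append(v)
            for w in range(n):
                if w not in in_tree:
                    heapq.heappush(heap, (dist[v][w], v, w))
        # Preorder walk; shortcutting repeated vertices is where the triangle
        # inequality gives the tour <= 2 * MST <= 2 * optimal bound.
        tour, stack = [], [0]
        while stack:
            v = stack.pop()
            tour.append(v)
            stack.extend(reversed(children[v]))
        return tour + [0]

    pts = [(0, 0), (0, 3), (4, 0), (4, 3)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
            for ax, ay in pts]
    print(mst_preorder_tour(dist))   # e.g. [0, 1, 2, 3, 0]
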
  • by Stephen ( 20676 ) on Tuesday October 10, 2000 @07:02AM (#718719) Homepage
    By the way, there is now a $1,000,000 [claymath.org] prize available for proving that P is or is not equal to NP. Of course, this could produce more correct, well-intentioned but wrong, and nutty solutions.
  • You're quite right - in fact, you've understated it - if you just consider a single human being, they're definitely time-limited, unlike a Turing machine, where for discussions of computability (as distinct from complexity, of course) time limits are irrelevant. I should have made that proviso more explicit.

    Conversely, if you've read Roger Penrose, you'll know that he argues that human intelligence is non-computable.

    While this is all a very interesting debate, the original point I was making still applies, I think - that how human thought relates to computability isn't really relevant to a discussion on P=/!=NP.

    Thank you for your correction, though - it's nice to see that people with clue still read /. occasionally.

  • Quick definition for others: NP-hard means every problem in NP can be transformed in polynomial time into an instance of this problem, but it doesn't necessarily mean that it's itself in NP. NP-complete = NP-hard + is in NP.

    ...solutions found in Quantum algorithms for simulating NP-hard problems in polynomial time.

    Are you sure? I don't think that this is the case - while some algorithms for currently-intractable problems have been proposed for a theoretical quantum computer, none of these algorithms is for an NP-hard problem. References appreciated if I'm horribly wrong (which I may well be).

  • Part of the reason for the extreme skepticism is that thousands of brilliant people have tried to come up with a solution to this problem.
    Although I agree with you completely, I find your lack of faith...disturbing. If I strip out all references to NP, your post could just as easily apply to the proof of Fermat's theorem. Your statements imply that the guy who proposes the proof today is less smart than the people who went before and has access to exactly the same knowledge. BZZZT! He may be smarter, and he has more mathematical tools to work with. Or are you implying that mathematics does not advance?

    Empirical evidence (namely, failures to prove the conjecture in the past) is completely irrelevant.
  • by Chalst ( 57653 ) on Tuesday October 10, 2000 @07:07AM (#718727) Homepage Journal
    Articles are normally circulated as preprints before being accepted for publication by a journal: unsurprising given how long the journal reviewing procedure generally takes. There are all sorts of caveats given at the website, so I think this is perfectly respectable. The story wasn't 'P=NP! It's official!', which really would have been irresponsible journalism.
  • by rlk ( 1089 ) on Tuesday October 10, 2000 @09:49AM (#718729)
    Hashing isn't really constant time. It's still logarithmic, just with a very small constant (if done well).

    To see why, look at the length of the hash key. In order for an element to hash to a unique value the space of hash keys must be at least as large as the number of elements in the table. Therefore, to achieve a perfect hash the number of bits in the hash key must be at least log(elements).

    Now, how about the computation of the hash key? Suppose we do something really simple, like xor the bytes (or log n chunks of bits) in the value. This isn't a constant time operation; it's linear in the number of bits in the key, or logarithmic in the number of elements, assuming a constant amount of hardware (obviously, if you assume an unbounded amount of hardware, the time could be constant, but that's not really very interesting).

    Hash lookup is no better. You still have log n bits to recognize.

    Maybe there are ways of performing lookup in less than log n time, but hashing is not one of those. In practice, hashing takes a small constant amount of time because we don't or can't optimize very well for very small hash tables. Computing a 16-bit hash isn't usually cheaper than computing a 32-bit hash, so we think of it as constant, but that's really not the case.

    Remember what O(log n) really means. It means that as the problem size grows the amount of work grows asymptotically and proportionately (or maybe it's at worst proportionately, I don't remember all the notation details and I'm too lazy to fetch my copy of Cormen/Leiserson/Rivest from my bookshelf) to log n. It says nothing, however, about what the proportionality constant is. You could have one algorithm that's O(e^n) (exponential complexity) and another algorithm that's O(1) (constant complexity), and in practice the former could be more efficient -- the constant factor of the constant time algorithm may overwhelm the exponential (or worse) blowup for the problem sizes that will actually be encountered.

    This brings up another point -- most practical algorithms and implementations have more than one limiting factor. The O notation only expresses the dominating limit (or limits, if a particular problem has multiple inputs) as the problem size gets arbitrarily large. So it is with hashing.
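
    The key-length point in a couple of lines (Python): a perfect hash over n elements needs at least ceil(log2 n) bits of key, so the key itself, and the work to compute and compare it, grows with log n:

    import math

    for n in (256, 65536, 10**9):
        print(n, "elements ->", math.ceil(math.log2(n)), "key bits minimum")
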
  • Why? Because every problem in P has a polynomial time algorithm for determining if the 'guess' is correct.

    --
    Ben Kosse

  • by Jerf ( 17166 ) on Tuesday October 10, 2000 @05:22AM (#718735) Journal
    There are many problems we can't think of P solution for, but until we know everything there is to know we don't know for sure that there isn't another way to solve a problem.

    Not necessarily. There are problems there is no solution for, the canonical example being the Halting Problem (look it up in Google). We know there's no solution to that. In fact, if a solution were to pop up, it would invalidate a couple hundred years worth of math, which would be a great surprise.

    Sometimes you can know there's no other way. Should someday a proof emerge that P!=NP, then for all NP-complete problems, it will have been proved that there are no easier ways, no matter how long you search for one.

    Just because a domain is infinite (such as the domain "solutions to problems") does not mean that it MUST contain an element that has any particular set of characteristics. Just because there are an infinite number of integers does not mean that one of them MUST have a fractional component, if we just look hard enough!

  • Minor nit (only politeness (oops) prevents me from decrying the preceding (parent) post as utter nonsense):

    Hashing isn't really constant time. It's still logarithmic

    Not at all. Under favourable circumstances, hash insertion and searching can be logarithmic. But, in terms of complexity, such tasks didn't get any easier. In the worst case, hashing and insertion are exactly the same as a linear list. In those worst cases, they transform trivially to a linear search.

    Matthew
    - Who dealt with this shit for too long to let this one slip past.

  • heh. In unary representation, I believe composite is P. The trick is that the input is exponentially long...
  • by billybob2001 ( 234675 ) on Tuesday October 10, 2000 @03:30AM (#718745)
    P=NP when either:

    N=1

    P=0

  • Hmmm, reading your first sentence, I shouldn't post this, but...

    1. Proposition: Every object of the laws of Quantum Mechanics is non-deterministic
    2. Proposition: Every physical entity is subject to the laws of Quantum Mechanics (modulo relativistic effects)
    3. Proposition: Human brains are physical entities.

    ...

    The only question seems to be to what extent human brains are non-deterministic...

  • His maths is dreadful but his point is right: the transformations of problems in NPTIME can increase the complexity of the problem, e.g. an oracle giving us an O(n^6) solution to a particular NP-complete graph-theoretic problem might yield an O(n^12) factorisation algorithm (since numbers of size n might map onto graphs of size n^2).
  • Well, I guess you all see why the Prof gave me a C in that class.
  • by wjr ( 157747 ) on Tuesday October 10, 2000 @07:26AM (#718761)
    Prof. Cook is a respected authority on complexity theory, so if he is doubtful about this paper, I'd take an extra large grain of salt with it...
    Sufficiently respected that the first major theorem on NP-completeness (the proof that 3-SAT is NP-complete) is known as "Cook's theorem". An aside: some friends who took their undergraduate computational complexity course from Prof. Cook said that he always referred to that theorem as "Theorem 3.71 in the book".
  • Unless there's been a HUGE breakthrough which I didn't hear about, it is not known whether a quantum computer can solve NP-complete problems in polynomial time. None of the "hard" problems (like factoring) that are known to be amenable to quantum solutions are actually NP-complete.
  • by roca ( 43122 ) on Tuesday October 10, 2000 @07:35AM (#718768) Homepage
    Sorry, factoring is not known to be NP-hard or NP-complete.
  • can someone explain how a Turing machine can possibly be non-deterministic?

    yes

  • by ReverendGraves ( 233320 ) on Tuesday October 10, 2000 @03:37AM (#718779) Homepage
    To sum it up for those /. readers without a history of complexity theory, P and NP represent certain levels of algorithmic complexity: P representing that the algorithm/problem can be solved in polynomial time. For further discussion, take a course on the Theory of Computation.
  • Call me stupid, but can someone explain how a Turing machine can possibly be non-deterministic?

    The nondeterministic Turing machine is a separate creature and, to the best of my knowledge, I don't know that Turing ever thought about it. (Though I wouldn't swear that he didn't---he was an amazing man). It's an abstract generalization of a Turing machine. The guy who said that you should imagine a computer in which you can use an arbitrary number of fork()'s which produce completely parallel processes, any one of which might solve the problem, had the right idea. Perhaps even more roughly, think of a parallel computer that always has as many nodes as you need to create any new process you might want to run on a new node without conflicting with other nodes in any way.

    It's a rather abstract model that isn't equivalent to anything we have built. I suppose if we ever build good enough parallel computers, it might be taken as a model for massively parallel computation---but I've never seen anyone pushing this line. The number of nodes would need to grow exponentially with the problem size, and this degree of parallelism for problems we might be interested in is probably fairy dust unless DNA or quantum based computers work out. (And no one has proven that either of these amount to general non-deterministic Turing machines anyway, even if they can be made to work at all). So for the moment it's an abstract model of computation that lets us consider whether we can reduce a large number of exponential time problems that are polynomial time on the abstract model to polynomial time on a real computer. In other words, don't hold your breath waiting for a non-deterministic P9 from Intel.

    Warning: I'm not an expert on this field, but I am a mathematician and I did study this stuff in a basic way long, long ago. Apologies if I've mangled some facts.
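
    If it helps, the fork() picture can always be simulated deterministically at exponential cost; a toy sketch (Python) for subset-sum, where every "fork" tries both taking and skipping an item:

    def subset_sum(items, target, chosen=()):
        # Each call "forks": one branch takes items[0], the other skips it.
        # A nondeterministic machine explores both branches simultaneously;
        # a deterministic simulation pays for every branch, hence the blow-up.
        # Assumes positive item weights.
        if target == 0:
            return chosen
        if not items or target < 0:
            return None
        head, rest = items[0], items[1:]
        return (subset_sum(rest, target - head, chosen + (head,))
                or subset_sum(rest, target, chosen))

    print(subset_sum((3, 34, 4, 12, 5, 2), 9))   # (3, 4, 2)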

  • I've converted the document to PDF and mirrored it here [uni-bonn.de] because the original site is both being slashdotted and not all of you can probably read PostScript. The mirror is on a 40 MBit connection and should be capable of handling the load.
  • by ~MegamanX~ ( 119882 ) on Tuesday October 10, 2000 @07:55AM (#718792)
    Quoting Introduction to the theory of computation by Michael Sipser:

    Section 7.3:
    ------------

    THE P VERSUS NP QUESTION
    As we have been saying, NP is the class of languages that are solvable in polynomial time on a nondeterministic Turing machine, or, equivalently, it is the class of languages whereby membership can be tested in polynomial time.

    [...]
    P = the set of languages where membership can be decided quickly
    NP = the set of languages where membership can be verified quickly
    [...]
    ----- end of quote -----

    Actually, his definition of NP is:
    ------------
    DEFINITION 7.16: NP is the class of languages that have polynomial time verifiers.
    ----- end of quote -----

    Finding whether a graph has a Hamiltonian path is NP-complete, but verifying that a given path [x] is a Hamiltonian path can be done in polynomial time (checking that the path goes through every node exactly once...).
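
    That verifier really is trivial to write; something like this sketch (Python, untested):

    def is_hamiltonian_path(adj, path):
        # adj: dict mapping vertex -> set of neighbours. Verify each vertex
        # appears exactly once and consecutive vertices are adjacent.
        return (len(path) == len(adj) and set(path) == set(adj)
                and all(v in adj[u] for u, v in zip(path, path[1:])))

    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the path graph 1-2-3-4
    print(is_hamiltonian_path(adj, [1, 2, 3, 4]))  # True
    print(is_hamiltonian_path(adj, [1, 3, 2, 4]))  # False: 1 and 3 not adjacent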

    phobos% cat .sig
  • The implications are much huger than that. In fact, if P=NP, then mathematics as we know it is over, to be replaced by a little art and a lot of engineering. Here's why:

    Suppose I have a theorem X. I want to know if it's true, and if so, what the proof is. Now, if I have a proof, then I can certainly check it in time polynomial in the size of the proof. Therefore, because P=NP, I can also check in polynomial time whether there is any proof of X shorter than some specified size bound.

    Of course, it may be that the shortest proof of X is larger than the specified size bound. But if we set the bound large enough, then we can be pretty confident that no human would ever find the proof. Therefore, by running our algorithm on X and !X, we can get three possible answers:
    "Yes, X is true and here's the proof of X"
    "No, X is false and here's the proof of !X"
    "X may or may not be true, but you'll never be able to prove it either way"

    Useful, huh? If the polynomial running time in terms of the size bound is high degree, it might take a while for computers to overtake our mathematicians, but it would happen.

    This idea is actually not new. Gödel basically posed it to von Neumann in the 50's --- long before the concepts of P and NP became current.
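
    As a sketch of the scheme (Python; verify_proof is a hypothetical checker for whatever formal system you fix, and the alphabet is made up): the inner check is cheap, and it's the enumeration that is exponential -- exactly the loop P=NP would let you shortcut:

    from itertools import product

    ALPHABET = "abc()->,"   # hypothetical symbols of some fixed formal system

    def find_proof(statement, bound, verify_proof):
        # Brute force: try every candidate proof string up to the size bound.
        # verify_proof(candidate, statement) is assumed polynomial-time; the
        # enumeration is what P=NP would replace with a polynomial search.
        for length in range(1, bound + 1):
            for symbols in product(ALPHABET, repeat=length):
                candidate = "".join(symbols)
                if verify_proof(candidate, statement):
                    return candidate
        return None
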
  • In CS theory the action is at the frequent conferences, FOCS and STOC in particular. Most results are not extensively publicised before appearing at such conferences.

    CS is not like the natural sciences where journals are the key. In most branches of CS no-one really cares about journals, the action is all at conferences.
  • Perhaps I worded that badly.

    What I meant was that the transformations often produce PROBLEMS which are polynomially bigger than the source problem. In this case, you should certainly be multiplying exponents. If the time spent transforming does not change the size of the problem (as you guys apparently thought I meant), then indeed you do use O notation and just take the biggest exponent. Maybe I am wrong that the transformations often have this size blow-up, but that's what I recall at least from doing some of these transformations for homework.
  • Every hashing function [h()] is a many-to-one relationship (the domain is larger than the range). Given a set of objects S1, S2, S3... such that h(S1) = h(S2) = h(S3) = ..., hashing becomes a list search. The fastest list search is log(n). Therefore, hashing is still O(log n) but is o(C). Don't get O and o confused.
  • Memories of a great Bloom County strip...

    Oliver announced to Opus that he had devised a grand unifying theory of the universe, which accounts for absolutely everything except flightless water fowl.

    You can imagine the hilarity that ensued.

    I'll dig up the strip and post the punchline if anyone is interested :-)

  • IMO there are no NP problems, only NP solutions. There are many problems we can't think of P solution for, but until we know everything there is to know we don't know for sure that there isn't another way to solve a problem.

  • I felt like I understood page 7. This was my read:

    Take a graph G. Grab a maximal independent set and call it X0. Delete these vertices from G. Grab another maximal independent set from the (now shrunken) graph G and call it X1. Delete these vertices from G. Continue in this way until G is reduced to the empty graph. This gives a partition of the vertex set of G into parts X0, X1, ..., Xm. Then he uses this partition to direct the edges of G, giving him a digraph. In particular, edges always run from lower-index parts to higher-index parts. (So, for example, if (u,v) is an edge of G and u is in part X2 and v is in part X4, then that edge is directed u ---> v.)
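
    In code, my reading of that decomposition is roughly this (Python, untested):

    def decompose(adj):
        # adj: dict mapping vertex -> set of neighbours. Repeatedly strip a
        # maximal independent set from what remains: parts X0, X1, ..., Xm.
        remaining = set(adj)
        parts = []
        while remaining:
            part = set()
            for v in sorted(remaining):
                if not (adj[v] & part):   # greedy: no neighbour already chosen
                    part.add(v)
            parts.append(part)
            remaining -= part
        return parts

    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the path 1-2-3-4
    print(decompose(adj))                          # [{1, 3}, {2, 4}]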

    That said, I died on page 10, on the "densely stretched" stuff. To the extent that I can decrypt his definitions, his observations look false. For me, the first concrete test of this paper came earlier, though, in his "proof" of Lemma 1. The claim is true, I believe, but the proof seems to be a haphazard set of assertions.

    Overall, my feeling is that this paper is almost certainly bogus, and that no one has refuted it only because no one can figure out what the hell he's trying to say . I think Plotnikov needs to find a collaborator that can write coherent English (and mathematics) or else to write the whole thing in his native Russian and submit it to a refereed Russian journal. Posting an unintelligible paper on an obscure web site is just not going to generate proper peer review.

  • by Belgarion ( 11586 ) on Tuesday October 10, 2000 @03:42AM (#718806) Homepage
    "NP" means that the problem can be solved by a non-deterministic machine (I'd almost say: For example a human) in polynomial time. A problem is already NP if there's a solution for it.

    "P" is part of NP, as even a deterministic machine (like an everyday computer) can solve this in polynomial time.

    "NP Complete" means that this problem is just as hard to solve as any other NP problem. If you can solve an NP complete problem in polynomial time, you prove that the whole NP Complete set can be solved in polynomial time, and thus you prove that NP=P.

    Of course, there are two possible outcomes:
    1. The algorithm has a flaw and won't work.
    2. The problem presented was not NP Complete, but only NP or P.

  • His name is even Plotnikov! Bahahaha.... god this is funny. "I am Plotnikov! I have many plots, Mr. Bond. Now, die a slow death as my P=NP solution eats away your brain!"

    --
  • Consistent in the sense that the position describes a possible situation. For example an impossible situation is a single covered tile where the rest of the board is empty space and where the single tile is surrounded by 2's. This is impossible because at most there is one bomb under the tile so the 2's should be 1's. This is a trivial example but you probably get the idea now...
    --
  • by BMazurek ( 137285 ) on Tuesday October 10, 2000 @03:47AM (#718824)
    Given an instance of a graph G=(V,E), partition G into disjoint subsets V1, V2, ... Vk such that for 1 <= i <= k, the subgraph induced by Vi is a complete graph (a clique).

    Now, you want to minimize k, the number of cliques.

    But, considering the number of NP-Hard/NP-Complete problems out there, it's not surprising that you haven't heard of it.
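
    For tiny graphs you can solve the definition above by sheer brute force; a sketch (Python, untested -- the partition enumeration is wildly exponential, which is rather the point):

    from itertools import combinations

    def is_clique(adj, block):
        return all(v in adj[u] for u, v in combinations(block, 2))

    def partitions(vs):
        # Every way of splitting the list vs into disjoint non-empty blocks.
        if not vs:
            yield []
            return
        head, rest = vs[0], vs[1:]
        for p in partitions(rest):
            for i in range(len(p)):
                yield p[:i] + [[head] + p[i]] + p[i + 1:]
            yield [[head]] + p

    def min_clique_partition(adj):
        best = None
        for p in partitions(sorted(adj)):
            if all(is_clique(adj, block) for block in p):
                if best is None or len(p) < len(best):
                    best = p
        return best

    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}   # triangle + isolated vertex
    print(min_clique_partition(adj))                    # two cliques: [1, 2, 3] and [4]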

  • The initial user community that the web was designed for were physicists who needed a faster and more convenient method for distributing preprints than the academic journal process. Of course, one of the reasons you distribute preprints is so that your peers (or betters :-) can see and review your work, and find errors before you get into the official journal review process.
  • The halting problem is simply deciding whether a given Turing machine will terminate when given a particular input. Turing proved that there was no Turing machine capable of deciding this problem, and Turing also showed that the functions computable by Turing machines were just the same as the total recursive functions.

    Church's thesis is the claim that the effectively computable functions are just the same as those computable by Turing machines, i.e. the total recursive functions. It may, as a matter of physical fact, happen to be the case that there are effective physical methods for computing functions strictly stronger than this.


  • You are wrong on the number of communication links. It most certainly does grow exponentially.

    No I'm not, and no it doesn't.

    Each person may communicate with N others.

    Yes. (Actually, N-1 unless you count people talking to themselves, but that doesn't make much difference.)

    That's N^N.

    No, that's N times N, also known as N^2. (Again, actually N(N-1) unless people talk to themselves, and N(N-1)/2 if the links are undirected.) You multiply the number of links per person by the number of people. N links per person times N people = N^2 total links. The number of possible edges in a graph with N vertices (counting direction and self-loops) is N^2. Not that it has much to do with counting edges, but the number of possible graphs is 2^(n^2) and the log of this is N^2.

    As far as Brooks' Law goes, I still agree with the spirit of it: there may also be some sense in which the (administrative?) cost associated with each link increases along with the total number of links. Also, the set of possible committees that the team can form is the power set of the team, or 2^N, but that's a different matter. Besides, a quadratic increase is already pretty expensive. However, none of that applies to counting the edges in the graph, and the misuse of the word "exponential" is still a pet peeve of mine.

    As for the analogy with RSA: first of all, I hope you noticed that the statement about "factoring large prime numbers" was wrong. Factoring large prime numbers is easy. If you give me a prime number P, then no matter how large it is, I can assure you that its factors are 1 and P. That's what it means to be prime. The statement should say "factoring products of large prime numbers", or simply "factoring large numbers". It's a silly mistake, I'll admit it doesn't really have anything to do with the graph question, and I don't mean to imply that you don't know all this, but it's surprising how often people make that mistake. It seems to happen practically every time a cryptography story is posted here. The person is promptly pounced on and it's a lot of fun, but it's still a distressingly significant error. I remember someone's sig had the quote in which Bill Gates said it. I mentioned it here because it is an error that seems to involve a similar slip to that of confusing quadratic and exponential.

    As for the stuff about drawing graphs, I was just trying to drive in how different the two equations y=x^2 and y=2^x are. I would have included a drawing, but Taco's (ironically lame) "lameness filter" rejected it for being ASCII art.

    Finally, by "moderating logic", I assume you mean the fact that my post has (Score:2), and you don't think it deserved to be marked up? That's because it started that way; nobody has marked it up, though I do think it deserved to be. Being so late and so far down, you're probably about the only person who saw it.

    I know I said "finally" in the previous paragraph, but I can't resist this:

    I think that you are correct about the graph, but it's obviously just chance.

    Sure, given that there are two of us, and only one can count, I guess in a way it's just chance that I'm that one.

    David Gould
  • Where can I get me one of these "quantum computer" things? Do they have the OS preinstalled?
  • Interesting quote from the link...


    We design an algorithm for an exact solution of the Minimum Clique Partition Problem. For an arbitrary undirected graph G, we use a technique for finite partially ordered sets, in particular, a partition of such sets into the minimum number of paths. The running time of the algorithm is equal to O(n^6), where n is the number of graph vertices.


    Am I to understand that through this algorithm one can solve any NP problem in O(n^6)???? Doesn't that mean that schemes like RSA would be completely fucked? I wish I knew more on the subject... well, my dad does; he wrote me this about it...


    Yes, very interesting, thank you. I didn't look at the actual paper because I'm sure it's beyond me. There are over 2000 problems known to be equivalent -- an algorithm for one will lead to algorithms for all. O(n^6) isn't very attractive, but if it really works it will be a huge breakthrough. A couple of months ago there was a conference of top mathematicians at UCLA to decide what are the most important problems for the next century. (There was a similar conference in 1900.) I think P=NP? was near the top of the list.

    P is simply the problems that can be solved in polynomial time, and a polynomial is an expression that has fixed powers. t = n^10 + 5 is a polynomial, but t = 2^n + 5 isn't. NP stands for nondeterministic polynomial, and it refers to a problem for which, if the computer guesses an answer (that's the nondeterministic bit), it can check the answer in polynomial time. The NP-complete problems are those which are all equivalent by Cook's Theorem (1970) to the Satisfiability Problem, which is a boolean problem. These include the travelling salesman problem.



    IMHO this is way more important (and maybe sexy) than the release of something like 2.4.pre9, or even the 2.4 release! This could really change everything, especially when it comes to encryption.
    We're going to have to change PGP to PBP (pretty bad privacy)

    -Jon

  • There are some problems (such as the Halting Problem) that are formally undecidable - not solvable in polynomial time even on a non-deterministic computer.

    Undecidable problems are not only not solvable in polynomial time, they're not solvable - ever! If you're interested in this kind of thing, you can try reading Languages and Machines, by Thomas A. Sudkamp. It might be a bit of a struggle if you haven't done any university-level mathematics, though.

  • Why wouldn't the existence of a P time algorithm simply show that this particular problem was misgrouped? Your statement is like saying if A implies B, then B implies C. The P = NP problem is discussed on Stas Busygin's NP-Completeness Page [busygin.dp.ua]. He has good links to his research and published papers. Good resource if you want to know about any of this.
  • by nihilogos ( 87025 ) on Tuesday October 10, 2000 @03:51AM (#718847)
    I can probably remind people of a little stuff.

    It's probably easiest to think of this stuff in terms of Turing Machines; any problem can be expressed according to a TM which solves it. Input to a turing machine can be sized according to the number of squares on the tape it takes to represent the input. For an input of size n, the time complexity function of a turing machine (or algorithm which it computes) is defined as the maximum number of steps it takes the TM to halt (if it does), ranging over all inputs of size n. Considering all possible input sizes we end up with a function C(n) which maps positive integers to positive integers.

    A problem is said to be within the class of complexity P if there exists a (deterministic) Turing Machine having complexity function C(n) and a polynomial function p(n) such that for all n C(n) < p(n). Otherwise the problem is said to be in complexity class NP.

    Factoring integers is an NP problem; note that Shor's polynomial time algorithm for use on a Quantum Computer is non-deterministic.

    An NP-complete problem is one that firstly is in NP, and secondly is one in which for every other problem in NP there is some TM or algorithm which takes a polynomial number of steps to restate it in terms of the NP-complete candidate. (Moreover the solution to this problem provides a solution to the second one.)

    So to show P = NP the author needs first of all to show that the problem is NP-complete (which may or may not have been shown elsewhere; a lot of these graph partition problems have subtle but important differences), and secondly to provide a polynomial time algorithm to solve it.

    I am of the opinion that NP = P (or not) is one of the hardest problems in mathematics and really needs some brilliant new theories to be solved, so I'm sceptical but I'll read the paper.

  • Parallel computers are neat, but I think you're overstating the benefits just a tad :)

    The best you could do, is transform your O(n^1260) algorithm running on one processor into an O(n) algorithm running on (n^1260) processors. Even if you built self-replicating computers, n^1260 of them (for n>1) is likely to chew up a fair bit of space and energy . . . :)

  • I'm Stas Busygin, the holder of the site where the paper was published. First of all, thanx to all for the attention. Please take a look at my publishing policy [geocities.com] to avoid any misunderstanding. Besides, please use the mirror in the U.S. [geocities.com] as my main server in Ukraine is overheated!
  • Actually, bringing humans into this discussion is not particularly useful. We do not know whether humans are Turing-equivalent or not (humans may have abilities beyond a Turing machine), let alone whether they have the capabilities of a non-deterministic Turing machine. They do have at least the abilities of a deterministic Turing machine: get a pen and paper and you can simulate the operation of one.

    Keep in mind also that "nondeterminism" in this context means much more than just "can't predict the system's behaviour". It has a much more exacting definition than that - to a first approximation imagine a program that can, whenever it needs to make a decision, executes a fork() call, and continues executing simultaneously down both paths, either one of which can find the answer.

  • Am I to understand that through this algorithm one can solve any NP problem in O(n^6)????

    Not quite (and I apologise in advance for any details I skip over, or mistakes I've made, it's been a year or so since I got out of academia). To solve say, SAT (the satisfiability problem) using this algorithm, first you have to put a converter over the top to convert SAT problems into instances of this problem. Say that given a SAT problem of size n, the converter takes O(n^2) time to do the conversion to the clique problem, and generates a clique problem of size O(n^2). Now, in this case the time the conversion takes isn't relevant, but the size of the generated clique problem is. The hybrid method will now take O((n^2)^6) = O(n^12) time to solve the problem.

    However, from a theoretical perspective, we don't really care whether it's O(n^2), O(n^8), or O(n^93), just as long as it's O(n^x) rather than O(y^n). In practice, I'd expect that if we ever found that P=NP through a O(n^50) algorithm, once the world's computer scientists and mathematicians got over their week-long party they'd quickly knock the polynomial factor down to something more reasonable.

    I'm still betting that P!=NP though - even if it's just the "I can't do it, so it must be impossible" factor coming through :)

  • I don't even look at these anymore. Well, I would if someone wrote a program that started actually solving these problems. That would be truly excellent.

    The reason I don't look at them, as explained by my advisor, is that they are an incredible time suck. Since this is a holy grail, lots of reasonably talented people give it a spin. And it never pans out. And it takes a huge amount of time to figure out what the subtle error in their logic/proof is. Put another way, I know lots of people that will happily take a $10 -> $1k bet that your discovery won't be blessed by a tier 1 journal in 5 years time or less. I'm one of them.

    Part of the reason for the extreme skepticism is that thousands of brilliant people have tried to come up with a solution to this problem. This is because all NP problems are polynomially transformable to each other, and there are lots of very useful real world problems that are NP. Ergo, some (or all) of the best practicing algorithmers have tried and failed to get an efficient exact algorithm for this problem in their own very well understood (to them) subdomain.

    Not holding my breath (but I'd love to eat crow on this, as would every other OR practitioner). See Garey & Johnson for all the gory details.
  • by e_lehman ( 143896 ) on Tuesday October 10, 2000 @03:33PM (#718873)

    This is not a decision procedure, so there is no "success" or "failure", only termination. The purpose of this algorithm is to construct a sizable bipartite matching, which is subsequently augmented. My point is that such sloppiness in describing a 40-year old procedure does not bode well for the correctness of the paper as a whole. Since posting, I've decrypted the paper through page 10. The entire thing is sloppy, but there it becomes utterly opaque to me.

    (I'm a PhD student at MIT specializing in design of combinatorial algorithms and, by coincidence, presently studying fast algorithms for NP-hard problems, e.g. k-SAT, MAX-2-SAT. I referee papers in theoretical CS pretty regularly. This is all to say... yes, I'm familiar with pseudo-code. :-) )

  • This could have huge implications for cryptography. If P=NP, then that means that any problem that has been proven to be in NP can be solved in polynomial time. One good example is factoring, which is generally believed to be in the class of problems that aren't NP-hard, but still aren't in P.

    So if P=NP, then RSA breaks.

    Probably all public key cryptography breaks.

    Secret key cryptography is probably still fine, though. Of course, those secret keys are usually exchanged using public key algorithms.

    Scary.
  • spectacular...now, can you do that without dividing by zero?
  • As a disclaimer, let me first state that I am a numerical analyst, not a theorist. But I am a CS grad student. :) For a long time now, there have been NP-complete problems which are almost solvable in polynomial time. For example, I remember hearing about a heuristic for the bin packing problem that was right about 85% of the time. (Had they gotten up to about 92%, they could have proven that P=NP.) Likewise, modern algorithms for travelling salesman (like those developed at Rice University) find a solution incredibly quickly..... However, the algorithm needs to spend a long time verifying its solution, so it doesn't work in polynomial time.

    In my opinion, P!=NP. There are too many problems (e.g. in combinatorial optimization) where there is no useful heuristic that guarantees the right answer.

  • Generally, the usefulness of Quantum Computers would come from the inability of a classical computer to solve NP-hard problems, since there have been solutions found in Quantum algorithms for simulating NP-hard problems in polynomial time. Of course, it can't be proved that NP-hard can't be solved in polynomial time on a classical computer.

    As someone else has mentioned, this guy looks like an evil mastermind from a James Bond movie. I believe that this guy's evil scheme is to create an RSA-factoring algorithm for classical computers that runs in polynomial time and then hold intercepted information and broken passwords for ransom. Then, he will build a fleet of spaceships to colonize the moon. Damn, 007, where are you??

  • You show that a problem is NP-complete by showing how a PTIME solution of this problem would yield a PTIME solution of every problem in NP (in practice, by reducing a known NP-complete problem to it). It can only be misclassified if there is a mistake in this proof. A concrete example of such a transformation is sketched below.
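
    A concrete taste of such a transformation, in Python: a vertex set is a clique in a graph G exactly when it is an independent set in G's complement, so a poly-time solver for either problem yields one for the other with only polynomial overhead. The helper names are illustrative.

    from itertools import combinations

    def complement(n, edges):
        # Edge set of the complement of an n-vertex graph.
        edges = {frozenset(e) for e in edges}
        return {frozenset(p) for p in combinations(range(n), 2)} - edges

    def is_independent_set(vertices, edges):
        edges = {frozenset(e) for e in edges}
        return all(frozenset(p) not in edges
                   for p in combinations(vertices, 2))

    def is_clique(vertices, n, edges):
        # Answered via the reduction: independence in the complement graph.
        return is_independent_set(vertices, complement(n, edges))

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # triangle 0-1-2 plus edge 2-3
    assert is_clique([0, 1, 2], 4, edges)
    assert not is_clique([0, 1, 3], 4, edges)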
  • NP-hard problems don't actually have to be in NP. I like to think of NP problems as those such that, if you were given a solution, you could verify it in polynomial time. But there are some problems for which this is not the case, and yet it is still true that you could reduce any problem in NP to them; such problems are NP-hard without being in NP. (The article's clique partition problem does sit in NP; a verifier sketch follows.)
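
    A rough sketch, in Python, of what such a polynomial-time verifier looks like for the clique partition problem (the names are my own): given a graph and a claimed partition of its vertices into cliques, checking the certificate takes only O(n^2) edge lookups, even though finding a minimum partition is NP-hard.

    from itertools import combinations

    def verify_clique_partition(n, edges, parts):
        edge_set = {frozenset(e) for e in edges}
        covered = sorted(v for part in parts for v in part)
        if covered != list(range(n)):
            return False  # must cover every vertex exactly once
        return all(frozenset(p) in edge_set
                   for part in parts
                   for p in combinations(part, 2))

    # A 4-cycle 0-1-2-3 splits into two cliques (two of its edges):
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    assert verify_clique_partition(4, edges, [[0, 1], [2, 3]])
    assert not verify_clique_partition(4, edges, [[0, 1, 2], [3]])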
  • by Tom7 ( 102298 ) on Tuesday October 10, 2000 @04:16AM (#718888) Homepage Journal
    I hear a lot of people say that proving P = NP is an immediate devastating blow to cryptography.

    This algorithm purports to solve a particular NP complete task in O(n^6). I don't believe it, but let's pretend it's possible.

    Calculating the discrete logarithm modulo M (Diffie-Hellman rests on this being difficult; RSA rests on the closely related problem of factoring) for binary numbers is certainly in NP (it's poly time to verify the answer once you find it). So we can transform any instance of the discrete log problem into a min clique problem, then solve it in O(n^6), then transform it back. These transformations often involve passing through other NP-complete problems (graph coloring, three-coloring, three-SAT, circuit-SAT; algorithm-SAT is a common pathway). These transformations usually incur a nontrivial amount of polynomial blowup (i.e., they are in O(n^2) or O(n^6) or something).

    Therefore, even if this works, solving the discrete log problem or factoring would probably still be in something like O(((((n^2)^7)^3)^5)^6) time! It's polynomial, sure, but O(n^1260) is NOT fast. (For a 4096-bit key, that poly-time solution is MUCH WORSE than a brute-force guessing attempt, even ignoring multiplicative factors lost in O-notation; see the arithmetic at the end of this post.) I just pulled these numbers out of my ass, but it's likely that it would turn out that way.

    Of course, the insights gained from a P-time solution to any NP-complete problem might be enough to eventually topple certain kinds of cryptography. But don't go deleting PGP or GPG because of this claim (which probably isn't valid anyway); the problem of P vs. NP is more of an intellectual curiosity than a practical concern.
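
    The back-of-the-envelope arithmetic, using the made-up exponents above:

    from math import log2

    # Compare the hypothetical O(n^1260) "polynomial" attack against plain
    # 2^n brute force for an n-bit key; 1260 is the product of the made-up
    # reduction overheads above (2 * 7 * 3 * 5 * 6).
    n = 4096                                        # key size in bits
    print(f"n^1260      ~ 2^{1260 * log2(n):.0f}")  # ~ 2^15120
    print(f"brute force ~ 2^{n}")                   # 2^4096
    # The "polynomial" attack needs ~2^15120 steps here, vastly worse than
    # brute force; it only wins once n exceeds roughly 18,000 bits.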
  • Humans cannot solve NP problems in polynomial time (unless P=NP). NP essentially means a correct solution can be verified in polynomial time, while P means a correct solution can be found in polynomial time.
  • Ok, so I majored in CS, not math (or "maths", as some of our friends across the pond like to call it), but here is my $0.02.

    Showing that a problem is NP-complete is pretty easy. IIRC, you just have to exhibit a polynomial-time reduction from another NP-complete problem to yours (and show that yours is in NP). This is the sort of problem they threw at us in the undergrad algorithms class I took. I don't know the details of this particular problem, but I'm guessing if more than one reputable mathematician says it's NP-complete, then it probably is.

    I personally (and with no real basis except for a general hunch) think that P!=NP. At least I hope that it isn't. If I had to hazard a guess, I'd say that it is much more likely that there is a problem with their algorithm than their proof that this problem is NP complete.

    spreer
  • by Walker ( 96239 ) on Tuesday October 10, 2000 @04:08AM (#718897)
    The P=NP problem has been the death of many a young researcher because it is such a hard problem and there are many subtleties involved in such a proof. Every year genuinely smart people propose a proof one way or the other, and it requires a significant amount of peer review and analysis to spot the mistake. After this, of course, they get shunned for wasting everyone's time by submitting a bad proof, even though it took a large number of man-hours to spot the flaw. There are many interesting "proofs" out there right now for which I am interested in learning the results. A very respected proof theorist presented one this summer at the major logic conference in Paris: he claims that the P=NP question is independent of the Peano axioms (something I have long believed, but never tried to prove). This would mean that any proof either way would require very high-level techniques from set theory, and that standard CS combinatorial arguments are not enough. So I suggest that we just sit back and wait for each of these to be peer reviewed. Either way, the answer should be very interesting.
  • I have to admit, I feel a lot better responding to a troll with:

    you are dividing by zero between lines 4 and 5;

    than the usual:

    -1: Troll.

    Congratulations.

  • by johndiii ( 229824 ) on Tuesday October 10, 2000 @04:10AM (#718901) Journal
    One inaccuracy...

    NP does not mean non-P. An NP problem is one that can be solved in polynomial time on a "non-deterministic" (think infinitely parallel) computer. There are also problems (such as the Halting Problem) that are formally undecidable - not solvable at all, in any amount of time, even on a non-deterministic computer. So the statement "A problem is said to be within the class of complexity P if ... Otherwise the problem is said to be in complexity class NP." is not strictly accurate.

    I tend to agree with the person who is inclined to wait for publication in a peer-reviewed journal.

  • Well, the thing about NP-complete problems is this: some years ago Cook proved that ANY problem in NP can be converted, by a polynomial-time procedure, into one specific NP problem (boolean satisfiability).
    Later it was shown for quite a few NP problems that this specific problem can be reduced to them.

    Thus a problem which is NP-complete is one to which we know some NP-complete problem reduces in polynomial time (and therefore ANY problem in NP is reducible to it in poly-time).

    So for a problem to be classified NP-complete you have to reduce a known NP-complete problem to it in poly-time. So:

    1) misgrouping one of those wouldn't be easy, and
    2) even if the problem was in P to begin with and someone has reduced an NP-complete problem to it, that still shows that P=NP.

    Which, among other things, would show that strong crypto is no better than weak crypto....
  • Graph isomorphism is no worse than NP-complete, and it's not even known to be that hard; you likely mean subgraph isomorphism. Further, electrical networks have particular characteristics, and they are not designed so as to give rise to hard isomorphism problems.

    This is a general characteristic. It's true that useful heuristics exist for many hard problems. For example, "random" boolean satisfiability problems are almost always easy to solve using simple search methods (a sketch of one appears below). But the hardest instances of satisfiability are totally intractable using these methods. And for applications like factoring, reduction to a problem such as satisfiability yields very hard instances.
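
    A sketch of one such simple search method, along the lines of WalkSAT, in Python; the constants and encoding are illustrative, not any particular published solver. Clauses are lists of nonzero ints: literal 3 means x3, -3 means NOT x3.

    import random

    def walksat(clauses, n_vars, max_flips=100000, noise=0.5, rng=random):
        assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign               # all clauses satisfied
            clause = rng.choice(unsat)      # pick some broken clause...
            if rng.random() < noise:
                lit = rng.choice(clause)    # ...flip a random variable in it,
            else:                           # ...or the flip that satisfies
                lit = max(clause, key=lambda l: sum(  # the most clauses
                    any(sat(m) if abs(m) != abs(l) else not sat(m) for m in c)
                    for c in clauses))
            assign[abs(lit)] = not assign[abs(lit)]
        return None                         # gave up; instance may be hard

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))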
  • by grettaeb ( 80927 ) on Tuesday October 10, 2000 @05:40AM (#718916)
    Of course, there are two possible outcomes:
    1. The algorithm has a flaw and won't work.
    2. The problem presented was not NP-complete, but merely in NP (or even in P).

    Actually, there's a third possibility, by far the most likely one. The algorithm may solve the problem exactly but still require super-polynomial time to do it. That is, the algorithm is right, but the analysis of the algorithm is wrong. I think most stories about P=NP finally being proven end up this way.

  • He makes some interesting points. I would argue with his conclusion that
    0.999... + x = 1 + x for all nonterminating x

    with the assertion that if there exists an x such that x + y = x + z, then y = z. This is just the cancellation law for addition of real numbers we're talking about.

    Alternatively, you could just declare that the value of a nonterminating decimal is the limit of the corresponding series, and the whole problem goes away (the one-line computation at the end of this post spells it out).

    I can see that this is the kind of problem that leads to flamewars, so I'm eschewing my +1 for the protection of my precious karma.
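
    The one-line computation, treating the repeating decimal as a geometric series:

    $$0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; \frac{9}{10}\cdot\frac{1}{1-\frac{1}{10}} \;=\; 1.$$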

