Math

Optical Solution For an NP-Complete Problem?

6 writes to let us know that two optical researchers have proposed, as a thought experiment, a novel idea for solving the traveling salesman problem. From the abstract: "We introduce an optical method based on white light interferometry in order to solve the well-known NP-complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non-polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal-to-noise ratio. The proposed method is meant purely as a gedankenexperiment."
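For context on the classical baseline the optical approach is set against, here is a minimal brute-force traveling salesman sketch in Python. The distance matrix is invented purely for illustration, and this is not the interferometric method from the paper; it just shows the exhaustive search whose cost explodes with the number of cities.

    from itertools import permutations

    def brute_force_tsp(dist):
        # dist[i][j] = distance from city i to city j (made-up example data).
        # Enumerates every tour, so the running time grows roughly as (N-1)!.
        n = len(dist)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):      # fix city 0 as the start
            tour = (0,) + perm + (0,)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_len, best_tour

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(brute_force_tsp(dist))    # -> (18, (0, 1, 3, 2, 0))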
  • by jfengel ( 409917 ) on Thursday August 09, 2007 @01:26PM (#20172049) Homepage Journal
    This solves a nondeterministic-polynomial algorithm by using a very large number of parallel computations to simulate nondeterminism.

    This was proposed some years ago for DNA computers as well, until somebody figured out that it would take a mass of DNA the size of the earth to figure out a non-trivial problem. So this is NOT the first time somebody has proposed a method for reducing NP problems to polynomial time, though this mechanism is novel as far as I know.

    Photons are a lot smaller than DNA. N^N photons seems much more feasible. But even so... once N=100, 100^100 photons is way more than we can handle.
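    To put a rough number on "way more than we can handle", a quick sketch; the 10^80 figure is only a commonly quoted ballpark for the number of atoms in the observable universe, used here as a yardstick:

        import math

        def digits(n):
            # Number of decimal digits in N**N, i.e. floor(N*log10(N)) + 1.
            return math.floor(n * math.log10(n)) + 1

        for n in (10, 20, 50, 100):
            print(n, digits(n))
        # -> 10: 11 digits, 20: 27, 50: 85, 100: 201 -- past ~10^80 already by N = 50.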
  • by PhysicsPhil ( 880677 ) on Thursday August 09, 2007 @01:42PM (#20172269)

    This solves a nondeterministic-polynomial algorithm by using a very large number of parallel computations to simulate nondeterminism.

    This was proposed some years ago for DNA computers as well, until somebody figured out that it would take a mass of DNA the size of the earth to figure out a non-trivial problem. So this is NOT the first time somebody has proposed a method for reducing NP problems to polynomial time, though this mechanism is novel as far as I know.

    I'm not sure this comparison is correct. The use of DNA just increased the computational power available to the problem, but didn't change the fundamental method of calculation (i.e., one step at a time). The DNA computer didn't make the NP problem go away; it just threw more power at it.

    This uses the interference of the light within an optical network to perform the calculation. The "operation", such as it is, relates to physically constructing the network rather than the number of photons required. In a very tenuous way, this is similar to a quantum computer, where multiple calculations are performed simultaneously. Of course it's not a quantum computer, but it does appear to be a polynomial algorithm.

  • by Bender0x7D1 ( 536254 ) on Thursday August 09, 2007 @02:03PM (#20172557)

    One important part of any solution is the amount of time/cycles it takes to encode the problem for use in your algorithm.

    For example, I can't claim that my algorithm can factor a number in O(1) if I require that the input for the algorithm is a vector of the prime factors for the desired number. Yes, my part of the algorithm is O(1), but to take a number and convert it to the format for my algorithm is not O(1), meaning the overall performance can't be considered O(1).

    In summary, the time/cycles to set up the problem counts.
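    A toy version of that factoring example (the function names are hypothetical, and trial division is used only for illustration) makes the hidden cost explicit:

        def factor_in_o1(prime_factors):
            # The "O(1) factoring algorithm": trivial, because its input is already the answer.
            return prime_factors

        def encode(n):
            # The hidden cost: converting n into the required input format
            # means actually factoring it (plain trial division here).
            factors, d = [], 2
            while d * d <= n:
                while n % d == 0:
                    factors.append(d)
                    n //= d
                d += 1
            if n > 1:
                factors.append(n)
            return factors

        print(factor_in_o1(encode(91)))    # [7, 13] -- but encode() did the real work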

  • by reverseengineer ( 580922 ) on Thursday August 09, 2007 @02:07PM (#20172621)
    On the subject of Wikipedia, the authors of this paper actually use Wikipedia on page 9 of their paper to provide a definition of a quantum computer. (They don't bother with a full citation like they use for their other sources though.)

    While this is just a thought experiment, I think their use of interferometry to solve problems is pretty interesting, since it really is, in some respects, quantum computing. As they note, it isn't a quantum computer in the usual sense, with entanglement and qubits, but their method does depend on light following Fermat's Principle of Least Time, which in turn can be considered a consequence of quantum electrodynamics. It makes me wonder what other sorts of computational problems can be solved using invariant properties of the physical world.

  • Increasing Orders (Score:2, Interesting)

    by Doc Ruby ( 173196 ) on Thursday August 09, 2007 @02:14PM (#20172695) Homepage Journal
    "N+X" is called "addition": additive increase. "N+N" is called multiplication (2N): geometric increase, as is "N*X". "N*N" is called exponential (NX). What is "NN" called? And is there a higher order of increase?

    And what are all those kinds of operations called?
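    For a concrete feel of how quickly those orders separate, a two-line sketch:

        for n in (2, 3, 5, 10):
            print(n, n + n, n * n, n ** n)
        # last row: 10 20 100 10000000000 -- N^N pulls away almost immediately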
  • by SeanMon ( 929653 ) on Thursday August 09, 2007 @02:17PM (#20172747) Homepage Journal
    A CO2 laser at 500 kW generating a beam of 10 micrometer wavelength produces about 2.52 x 10^25 photons per second. It will only take 1.25 x 10^167 years to generate 100^100 photons.
    Just a bit more than we can handle.
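    The same back-of-the-envelope numbers spelled out; the physical constants are rounded and the laser parameters are the ones assumed above:

        h, c = 6.626e-34, 3.0e8             # Planck's constant (J*s), speed of light (m/s)
        power, wavelength = 5.0e5, 10e-6    # 500 kW CO2 laser, 10 micrometer wavelength

        photon_energy = h * c / wavelength              # ~2.0e-20 J per photon
        photons_per_second = power / photon_energy      # ~2.5e25 photons/s
        years = (100 ** 100 / photons_per_second) / 3.156e7
        print(f"{photons_per_second:.2e} photons/s, {years:.2e} years")
        # -> about 2.52e25 photons/s and about 1.3e167 years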
  • by MikeTheMan ( 944825 ) on Thursday August 09, 2007 @04:38PM (#20174575)
  • by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Thursday August 09, 2007 @04:56PM (#20174793)

    IMO, the only way to reduce NP-Complete problems is using something like quantum entanglement or another similar characteristic that is not bounded by classical physics.


    Photons are not bounded by classical physics, and their behaviour can only be explained by quantum physics. Whether or not this behaviour can be exploited to perform computation more efficiently than a Turing machine is unknown (and will likely remain that way until we untangle the problem of how quantum physics really works). We still do not have a complete understanding of how they behave, and much of what we think we know is unproven. This idea's pretty far out on the edge of plausibility, but even if this particular method turns out to be flawed, we don't know enough to be sure that no other similar methods could work. (We also don't know whether quantum computing can do it, which is one of the big open questions in that field)

    The entire point of NP-complete problems is that they cannot be solved and verified in reasonable time using anything that has a physical limitation: a clock speed, a limited number-of-sides, a finite number of nodes in a graph, finite degrees of spin, etc.


    That statement is true only under the assumption that the model of computation is a form of Turing machine. The essential property of Turing machines is that they can compute any effectively computable function. There is no particular reason to believe that they also describe the limits of what the physical universe can compute in a given amount of time, and in fact they don't for some simple problems, although we haven't found any machine that can tackle NP problems yet. Even under "classical" physics, we're uncertain whether such a machine could exist; nothing in classical physics says that it can't, so far as we know.

    This question is right up there alongside "P=NP?", and has similar properties: we suspect and usually assume that such machines don't exist, but don't have anything resembling a good reason to believe this, other than "we haven't found one yet". It's not as interesting as "P=NP?" because in most cases, there is no way to prove something to be physically impossible (since you can never be sure that you have found the most fundamental laws of physics), so the question may remain unresolved forever.
