
A Quantum Linear Equation Solver

Posted by kdawson
from the traveling-salesman-gets-a-break dept.
joe writes "Aram Harrow and colleagues have just published on the arXiv a quantum algorithm for solving systems of linear equations (paper, PDF). Until now, the only quantum algorithms of practical consequence have been Shor's algorithm for prime factoring and Feynman-inspired quantum simulation algorithms. All other algorithms either solve problems with no known practical applications or offer only a polynomial speedup over classical algorithms. Harrow et al.'s algorithm provides an exponential speedup over the best-known classical algorithms. Since solving linear equations is such a common task in computational science and engineering, this algorithm could make many important problems that currently consume thousands of hours of supercomputer CPU time amenable to a significant quantum speedup. Now we just need a large-scale quantum computer. Hurry up, guys!"
  • n to log(n) (Score:3, Informative)

    by rcallan (1256716) on Friday December 05, 2008 @01:29PM (#26004571)
    The summary cleverly omits that solving a linear system is neither NP-complete nor NP-hard; the speedup is from O(n) to O(log n). I think you'd need a ginormous matrix for this to be useful, depending on the constants and such (of course, it'd be a crime for me to read the paper instead of just the abstract to actually find out the details). They already have the applications for quantum computing; it will be much more interesting when they actually figure out a way to build the damn things.

    I'm sure it's still a significant result and there's a good chance they did something new that can be used in other applications.
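For a sense of the classical baseline being discussed, here is a minimal dense Gaussian-elimination solver in pure Python (an illustrative sketch, not from the paper); the triple loop is what makes dense solves O(n^3), while sparse methods get closer to the O(n) the abstract cites:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting, O(n^3)."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: swap in the row with the largest pivot
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 3, x + 3y = 5 has solution x = 0.8, y = 1.4
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```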
  • by blind biker (1066130) on Friday December 05, 2008 @01:43PM (#26004745) Journal

    Are arXiv articles peer-reviewed?

    If not (as I suspect), that puts a serious question mark on them. On the other hand, there are excellent non-peer-reviewed scientific articles - and almost all the scientific articles produced in Europe before the Second World War were not peer-reviewed (back then that was an American practice, and a very good one, I would add). However, nowadays peer review is a valuable and available resource that should be utilized.

    THAT SAID again... some of the best, most innovative scientific articles are not being, nowadays, accepted for publication because the reviewers are one degree too dumb for the article. Eventually they do get published, but with an unnecessary 5-6 year delay.

    Also, I have read in my life thousands of published peer-reviewed articles, and many of them contain incredible imbecilities where I have to question whether all the reviewers were high on hallucinogens. Very, very high on hallucinogens.

  • by Andr T. (1006215) <andretaff@g[ ]l.com ['mai' in gap]> on Friday December 05, 2008 @01:45PM (#26004771)
    I think he liked to say things like those, because I just found a page [jstor.org] quoting him just like I said.
  • by jonniesmokes (323978) on Friday December 05, 2008 @01:48PM (#26004825)

    Finally, a cool article on /. This is extremely cool! There are a lot of problems in the real world that involve extremely large sparse matrices that need to be inverted. Fluid dynamics and solutions to Maxwell's equations come to mind, but I am sure there are other applications in relativity and plasma physics. Estimating a solution to a linear dynamic system of, say, 2^128 degrees of freedom in only 128 cycles would change a lot of things.

    And... Yes, we [uibk.ac.at] are working very hard on building the computers.

  • by aram.harrow (1424757) on Friday December 05, 2008 @01:59PM (#26004981)
    Hassidim and Lloyd are my coauthors; the ordering of author names is alphabetical and doesn't reflect our levels of contribution (which were more or less equal).

    So please cite this as "Harrow, Hassidim and Lloyd" and not "Harrow and coauthors."

    That said, we're pleased about the attention. :)

    In response to one question: the matrix inversion problem for the parameters we consider is (almost certainly) not NP-complete, but instead is BQP-complete, meaning that it is as difficult as any other problem that quantum computers can solve. We plan to update our arXiv paper shortly with an explanation of this point.

  • by blueg3 (192743) on Friday December 05, 2008 @02:08PM (#26005115)

    To rephrase the other response: log vs. polynomial is the same as polynomial vs. exponential. Moving from polynomial time to logarithmic time is an exponential speedup, just as moving from exponential time to polynomial time is.

    O(n^3) vs. O(log n)
    ->
    O(exp(n^3)) vs. O(n)
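One way to see the equivalence is to substitute n = 2^m: a logarithmic cost grows linearly in m, while a cubic cost grows exponentially in m. A quick sketch (the cost functions are hypothetical stand-ins, not measurements):

```python
import math

def classical_cost(n):
    """Stand-in for an O(n^3) classical dense solve."""
    return n ** 3

def quantum_cost(n):
    """Stand-in for an O(log n) algorithm."""
    return math.log2(n)

# With n = 2^m, the log-time cost is just m, while the cubic cost is 2^(3m).
for m in (10, 20, 30):
    n = 2 ** m
    print(m, quantum_cost(n), classical_cost(n))
```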

  • Bzzzt, wrong! (Score:1, Informative)

    by fph il quozientatore (971015) on Friday December 05, 2008 @02:13PM (#26005177) Homepage
    I can spot the first error in the abstract:

    In this case, when A is sparse and well-conditioned, with largest dimension n, the best classical algorithms can find x and estimate x^\dag M x in O(n) time

    First of all, "with largest dimension n" means nothing because if we are solving a linear system, we need A to be square. Then, the complexity of the best classical algorithms does not depend only on the size "n" of the matrix but also on the number "nnz" of non-zero entries.
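The nnz dependence is easy to see in an iterative solver: each step of conjugate gradient costs one sparse matrix-vector product, i.e. O(nnz), not O(n^2). A minimal pure-Python sketch for a symmetric positive-definite matrix stored as a (row, col) -> value dict (an illustration of the general technique, not the paper's method):

```python
def cg(entries, b, iters=50, tol=1e-10):
    """Conjugate gradient for SPD A given as {(i, j): value}.
    Each iteration costs O(nnz) via the sparse matvec below."""
    n = len(b)

    def matvec(v):
        out = [0.0] * n
        for (i, j), a in entries.items():   # touches only nonzeros
            out[i] += a * v[j]
        return out

    x = [0.0] * n
    r = b[:]                                # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# A = [[2, -1], [-1, 2]], b = [1, 0] has exact solution [2/3, 1/3]
A = {(0, 0): 2.0, (0, 1): -1.0, (1, 0): -1.0, (1, 1): 2.0}
print(cg(A, [1.0, 0.0]))
```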

  • Re:Seems bogus (Score:3, Informative)

    by adrianwn (1262452) on Friday December 05, 2008 @02:51PM (#26005675)

    They talk about dealing with sparse matrices but they would have to be O(log n) sparse to be useful for a single application of the algorithm. For example, a matrix with n=1000000 would have to have around 15 non-zero values and a trillion zero values.

    Wrong. An upper bound in big O notation [wikipedia.org] does not give any indication as to the constant factors of the "real" bound (nor to other summands which are in o(log n)). In fact, your statement doesn't even make sense, because the bound is asymptotic and hence can't be applied at a fixed input size.
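The hidden constants matter in practice: a hypothetical O(log n) routine with a large constant can lose to an O(n) routine until n is enormous. A toy illustration (the cost functions are made up for the sake of the example):

```python
import math

def cost_logn(n):
    """Hypothetical O(log n) algorithm with a large constant factor."""
    return 100_000 * math.log2(n)

def cost_linear(n):
    """Hypothetical O(n) algorithm with a small constant factor."""
    return 2 * n

# At n = 1000 the asymptotically faster algorithm is far more expensive;
# only at large n does the O(log n) cost win out.
print(cost_logn(1000) > cost_linear(1000))
print(cost_logn(10 ** 7) < cost_linear(10 ** 7))
```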

  • by FrangoAssado (561740) on Friday December 05, 2008 @03:00PM (#26005767)

    If you think quantum computing is about putting a physical system in a certain state and manipulating it, then you should also think classical computing is about setting a tape in a certain way and manipulating it.

    When Turing first described a computer (a Turing machine), it was essentially a mechanism where you put a tape in a certain state, manipulate it according to certain rules, and in the end you get the tape in another state containing the "result" of your computation. What was so interesting about it is that it was theoretically possible to build a machine that would do the manipulations automatically.

    Quantum computation is analogous: take a certain vector in an n-dimensional Hilbert space, multiply it by certain matrices in a specific order, and in the end you get a vector that corresponds to the "calculation" you did. This is purely math; it has nothing to do with physics. What's exciting about it is that we expect to be able to build (real) machines that perform these operations far faster than any computer today can, because the specific operations out of which we build our algorithms are exactly the things nature is capable of doing fast.

    Of course, we only discovered this when we discovered quantum mechanics -- that's the relation to physics.
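The "vectors and matrices" view can be made concrete in a few lines: a one-qubit state is a 2-vector of amplitudes, a gate is a 2x2 unitary matrix, and applying a gate is just matrix-vector multiplication (a standard textbook illustration, not code from TFA):

```python
import math

# The Hadamard gate, a 2x2 unitary matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a gate to a state: plain matrix-vector multiplication."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

ket0 = [1.0, 0.0]          # the basis state |0>
plus = apply(H, ket0)      # equal superposition (|0> + |1>) / sqrt(2)
print(plus)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print([a * a for a in plus])   # approximately [0.5, 0.5]
```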

  • Re:Implications (Score:1, Informative)

    by Anonymous Coward on Friday December 05, 2008 @04:50PM (#26007145)

    No. Or, not to the best of our knowledge.

    Quantum computers solve problems in BQP. We don't even know if that is any larger than BPP, the probabilistic algorithm class.

    We also do not know if there are any problems in BQP that are not in BPP; that is, we do not know if a quantum computer can solve any problems a classical computer (with a random source) can't.

    We *do* know that there are problems for which the solution with a quantum algorithm is exponentially better than any known solution with a probabilistic classical algorithm (Shor's factoring, and TFA). Since we also care whether a problem is solved in a day or in the age of the universe, rather than just whether it is solved at all, this matters a great deal.

  • Re:Implications (Score:3, Informative)

    by Warped-Reality (125140) on Friday December 05, 2008 @05:49PM (#26007793) Journal

    You're both wrong. They're equivalent.

    From Wikipedia:

    "A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem."

  • by aram.harrow (1424757) on Friday December 05, 2008 @05:51PM (#26007821)
    This is a great question. Here's how I like to think about it: If A is a stochastic matrix and b is a probability distribution that we can sample from, then given a few assumptions, we can sample from Ab efficiently. This is more or less the idea behind so-called Monte Carlo simulations, which are a tremendously useful tool in classical computing. However, we don't know how to get sampling access to A^{-1} b. Our algorithm gives us something like sampling access to A^{-1} b. Not exactly, because we're talking about quantum amplitudes, rather than probabilities. But more importantly, taking the inverse can make a sparse matrix dense, and (as we often see for problems admitting quantum speedups) sampling-based approaches to computing it fail because the samples have alternating signs.
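The classical trick the comment describes can be sketched in a few lines: if A is column-stochastic and we can sample an index j from b, then drawing j ~ b followed by i ~ A[:, j] yields a sample from Ab (a toy Monte Carlo illustration of the sampling access being contrasted, not the paper's algorithm):

```python
import random

# A column-stochastic matrix: each column is a probability distribution.
A = [[0.9, 0.2],
     [0.1, 0.8]]
b = [0.5, 0.5]   # the distribution we can sample from

def sample(dist):
    """Draw an index according to a probability vector."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

def sample_Ab():
    j = sample(b)                                    # j ~ b
    col = [A[i][j] for i in range(len(A))]           # column j of A
    return sample(col)                               # i ~ A[:, j]

# Empirically, sample_Ab() draws from Ab = [0.55, 0.45].
counts = [0, 0]
for _ in range(100_000):
    counts[sample_Ab()] += 1
print([c / 100_000 for c in counts])
```

Nothing analogous is known classically for A^{-1} b, which is exactly the gap the quantum algorithm targets.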
  • Re:Hm (Score:3, Informative)

    by Hercynium (237328) <Hercynium&gmail,com> on Friday December 05, 2008 @06:41PM (#26008381) Homepage Journal
    This was solved in Perl a long time ago.

    No, really!!

    Quantum::Superpositions [cpan.org]
