
A Quantum Linear Equation Solver

Posted by kdawson
from the traveling-salesman-gets-a-break dept.
joe writes "Aram Harrow and colleagues have just published on the arXiv a quantum algorithm for solving systems of linear equations (paper, PDF). Until now, the only quantum algorithms of practical consequence have been Shor's algorithm for prime factoring and Feynman-inspired quantum simulation algorithms. All other algorithms either solve problems with no known practical applications or produce only a polynomial speedup over classical algorithms. Harrow et al.'s algorithm provides an exponential speedup over the best-known classical algorithms. Since solving linear equations is such a common task in computational science and engineering, this algorithm brings many important problems that currently consume thousands of hours of supercomputer CPU time within reach of a significant quantum speedup. Now we just need a large-scale quantum computer. Hurry up, guys!"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Hm (Score:3, Funny)

    by Andr T. (1006215) <andretaff@gmailP ... minus physicist> on Friday December 05, 2008 @01:10PM (#26004367)
    Is this algorithm in Haskell or somethin'?

    I'll wait until I can program in VB. Will it take long?

    • Re:Hm (Score:5, Funny)

      by FrozenFOXX (1048276) on Friday December 05, 2008 @01:17PM (#26004449)

      Is this algorithm in Haskell or somethin'?

      I'll wait until I can program in VB. Will it take long?

      It may or may not.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Also, it may AND may not.

    • Re: (Score:3, Funny)

      by Anonymous Coward

Following the protocol of quantum physicists to be un-understandable by anyone but the top people in their field and 13-year-olds with too much time who watch NOVA and read Popular Science, they wrote it in Perl.

      • Following the protocol of quantum physicists to be un-understandable...

        "Derstandable"!?
        That word is incomprehensible to me. I am unable to understand it, is what I'm trying to get at...y'know: baffling, beyond comprehension, clear as mud, cryptic, Delphic, enigmatic, fathomless, Greek, impenetrable, incognizable, inconceivable, inscrutable, mysterious, mystifying, obscure, opaque, perplexing, puzzling, sibylline, unclear, unfathomable, ungraspable, unimaginable, unintelligible, unknowable.

      • by mikael (484)

And used a Brownian Motion generator consisting of a fresh cup of really hot tea.

    • Re:Hm (Score:5, Funny)

      by AdmiralXyz (1378985) on Friday December 05, 2008 @01:39PM (#26004697)
Quantum computing is nondeterministic and probability-based: when you put in a certain input, you have a finite probability of getting the right answer out; it could just as easily be anything else. So in other words, coming from VB, you'll be at a major advantage.
      • Sure it might "help" VB but just think of the possibilities with PHP...

        $numberClueless = "0";
        if(empty($numberClueless)) print("Yay for PHP");

    • Re: (Score:3, Informative)

      by Hercynium (237328)
      This was solved in Perl a long time ago.

      No, really!!

      Quantum::Superpositions [cpan.org]
  • n to log(n) (Score:3, Informative)

    by rcallan (1256716) on Friday December 05, 2008 @01:29PM (#26004571)
The summary cleverly omits that solving a linear system is neither NP-complete nor NP-hard; the speedup is from O(n) to O(log n). I think you'd need a ginormous matrix for this to be useful, depending on the constants and such (of course, it'd be a crime for me to read the paper instead of the abstract to actually find out the details). They already have the applications for quantum computing; it will be much more interesting when they actually figure out a way to build the damn things.

    I'm sure it's still a significant result and there's a good chance they did something new that can be used in other applications.
    • Re: (Score:3, Insightful)

      by popmaker (570147)

Often, one does not need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x†Mx for some matrix M. In this case, when A is sparse and well-conditioned, with largest dimension n, the best classical algorithms can find x and estimate x†Mx in O(n) time. Here, we exhibit a quantum algorithm for solving linear sets of equations that runs in O(log n) time, an exponential improvement over the best classical algorithm.

This is a long quote from the top of the actual paper cited (I'd trim it down, but I'd need to brush up on my linear algebra to be sure not to ruin it).

      According to them, the best algorithms are O(n) and their algorithm improves that to O(log n). It has nothing to do with P vs NP but it is an exponential improvement anyway (going from n to log n), as promised in the summary.
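For scale, the O(n) classical baseline the paper compares against is just an ordinary sparse solve. A minimal sketch of that baseline (the matrix, size, and library calls are illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 10_000
# Sparse, well-conditioned system: a diagonally dominant tridiagonal matrix
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x = spsolve(A, b)  # classical solve; the work grows with n, not log n
residual = np.linalg.norm(A @ x - b)
```

The quantum algorithm's O(log n) claim is about preparing a state proportional to the solution and estimating a scalar feature of it, not about printing out all n entries of x as this classical code does.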

    • by kvezach (1199717)
      If you're an evil socialist (or just a planner faced with limited resources), you can also use the exponential speedup to invert detailed Leontief matrices quickly. That's pretty useful. As for expansion, well, let's just hope there's a similar speedup to be found in linear programming :)
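For the curious, a Leontief inversion is itself just a linear solve, x = (I - A)^-1 d. A toy sketch with made-up input-output coefficients:

```python
import numpy as np

# Hypothetical 2-sector input-output coefficients: A[i, j] is the amount of
# good i consumed to produce one unit of good j (numbers invented)
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
d = np.array([100.0, 50.0])  # final demand for each sector

# Gross output x satisfies (I - A) x = d; solve rather than invert explicitly
x = np.linalg.solve(np.eye(2) - A, d)
```

With thousands of sectors this is exactly the kind of large linear system the quantum speedup would target.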
      • by Darby (84953)

        As for expansion, well, let's just hope there's a similar speedup to be found in linear programming :)

        Simplex method forever, Baby!
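For contrast with the hoped-for quantum speedup, here is a minimal classical LP solve (using scipy's HiGHS backend rather than a textbook simplex implementation; the numbers are made up):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4 and x <= 3, with x, y >= 0.
# linprog minimizes, so negate the objective coefficients.
res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
              bounds=[(0, None), (0, None)], method="highs")
optimum = -res.fun  # maximum of x + 2y
```

No comparable exponential quantum speedup for general linear programming was known at the time of this discussion.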

    • A Slashdot summary that is still technically correct, despite the usual omission of key facts?

      I think your discovery is perhaps even more notable than the algorithm!

    • by Timmmm (636430)

      > I think you'd need a ginormous matrix for this to be useful

      There are ginormous matrices. For example finite element analysis often uses matrices with millions of rows.
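Back-of-the-envelope arithmetic on why such systems are stored (and solved) sparsely; the seven-nonzeros-per-row figure is an illustrative 3-D stencil width, not from any particular code:

```python
n = 1_000_000            # rows in a large finite-element system
dense_bytes = n * n * 8  # dense double-precision storage

nnz_per_row = 7          # e.g. a 3-D seven-point stencil (illustrative)
# CSR storage: 8-byte values, 4-byte column indices, 4-byte row pointers
sparse_bytes = n * nnz_per_row * (8 + 4) + (n + 1) * 4

print(dense_bytes // 10**12, "TB dense vs", sparse_bytes // 10**6, "MB sparse")
```

A dense million-by-million matrix would need about 8 TB, while the sparse version fits in under 100 MB, which is why sparsity assumptions (as in the quantum algorithm) matter so much.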

    • by bloobloo (957543)

      Analysis of pollution monitoring stations, using historical wind flow analysis to determine where in the world pollution has been released from.

      ABSOLUTELY ginormous matrices are used for this.

  • All other algorithms either solve problems with no known practical applications

    Yes, because all quantum algorithms are hugely practical these days...

    • by blueg3 (192743)

You have your semantics wrong. Shor's algorithm has enormous practical application. However, as there are no computers to run it, it's impractical to actually put it to use.

  • by blind biker (1066130) on Friday December 05, 2008 @01:43PM (#26004745) Journal

    Are arXiv articles peer-reviewed?

If not (as I suspect), that puts a serious question mark on them. On the other hand, there are excellent non-peer-reviewed scientific articles - and almost all the scientific articles produced in Europe before the Second World War were not peer-reviewed (back then that was an American practice, and a very good one, I would add). However, nowadays peer review is a valuable and available resource that should be utilized.

THAT SAID... some of the best, most innovative scientific articles are nowadays not accepted for publication because the reviewers are one degree too dumb for the article. Eventually they do get published, but with an unnecessary 5-6 year delay.

Also, I have read in my life thousands of published peer-reviewed articles, and many of them contain incredible imbecilities where I have to question whether all the reviewers were high on hallucinogens. Very, very high on hallucinogens.

I thought it had been nearly proven that peer review for highly specialized, complex subjects was pretty much worthless, since most reviewers will not understand the subject matter and also won't be willing to admit that they don't understand it. Didn't some students at MIT create a gibberish-generating program that was able to produce papers that pass peer review?

      The basic problem is, for truly ground breaking research, there often isn't a ready supply of peers to do the review. There are plenty of subject

      • by Chocky2 (99588)

Didn't some students at MIT create a gibberish-generating program that was able to produce papers that pass peer review?

        I doubt it, unless they were sociologists :)

        Peer review may not be able to reliably tell for certain if something is correct, but it's often a good mechanism for spotting things that are wrong.

        You don't need ten years post-doc work in QCD to spot an error in addition, though if you're trying to extend the number of colours in a gauge theory you probably do :)

Didn't some students at MIT create a gibberish-generating program that was able to produce papers that pass peer review?

        I don't know. But sure as heck would like to - any more info on this so I can look it up?

        • Re: (Score:3, Interesting)

          by Helios1182 (629010)

Yes and no. It was a program called SCIgen. The purpose was to weed out conferences that are crap - the ones that exist just to take money rather than really promote scholarly work.

          ---

          About:

          SCIgen is a program that generates random Computer Science research papers, including graphs, figures, and citations. It uses a hand-written context-free grammar to form all elements of the papers. Our aim here is to maximize amusement, rather than coherence.

          One useful purpose for such a program is to auto-genera

          • I read about the WMSCI 2005 fiasco. That's not really a merit of the paper generated by SCIgen, I think, but the fact that WMSCI 2005 did not actually review the submitted papers. Correct me if I'm wrong. In any case, WMSCI 2005 comes out as total losertown.

I briefly read through the WMSCI 2009 call for papers, and they now seem to have a double-blind review, and something strange: "a non-blind, open reviewing by means of 1-3 reviewers suggested by the submitting authors." I can see potential for

    • by fph il quozientatore (971015) on Friday December 05, 2008 @02:10PM (#26005135) Homepage

      Are arXiv articles peer-reviewed?

      No, they aren't [arxiv.org].

    • by Chocky2 (99588)

      arXiv holds preprints, so no -- they've not been peer-reviewed.

      However it has moderation and endorsement systems which (in theory) spot any Archimedes Plutonium wannabes.

    • by XchristX (839963)

Peer review isn't always what it's cracked up to be. Read about the Bogdanov controversy that erupted at my uni some years back; it exposed some serious flaws in the peer-review process.

      http://en.wikipedia.org/wiki/Bogdanov_Affair [wikipedia.org]


Also, I have read in my life thousands of published peer-reviewed articles, and many of them contain incredible imbecilities where I have to question whether all the reviewers were high on hallucinogens. Very, very high on hallucinogens.

      Could someone mod the parent up +1 whatever [insightful, informative, funny - really, a "tragicomic" mod point would be most appropriate here].

      Back in the day, Goro Shimura used to say that something like 50% of all published mathematics is simply rank nonsense, but the
      • I don't read maths articles. My field of research is in nanosciences, materials science, physical chemistry etc. etc. (chemistry is really quite important for my research on multiple levels). In the kinds of articles I read, the nonsense is easy to spot - for me. I'm quite pedantic (or anal, if you prefer) when it comes to scientific rigour.

I take it you're a maths researcher? I just had a nice mini-thread/discussion with a maths grad student; he tells me it takes quite a long time to read a maths article.


        • I think reviewers should be held accountable in some way.

          Quis custodiet ipsos custodes? [wikipedia.org]

          The truth of the matter is that the problem is inherently intractable.

          If you have good people writing papers, then you will get good papers.

          If you have lousy people writing papers, then you will get lousy papers.

          What people need to realize [and what almost all people in the grant-writing racket lack the strength of character to admit] is that most published "research" is simply false.

          And if to "false" you wer
          • By grant-writing racket, do you mean grant-application-writing racket, or grant-granting (uh, what's the verb, help me out..) racket? Two separate sets of people. That said, I agree with the general idea of your post. Though I strongly disagree that almost all published research is either false or "trivially tautological or essentially meaningless". There's plenty of good stuff out there. At least in natural sciences.

            C'mon now - "almost all"? You didn't really mean it, did you?

  • by jonniesmokes (323978) on Friday December 05, 2008 @01:48PM (#26004825)

Finally, a cool article on /. This is extremely cool! There are a lot of problems in the real world that involve extremely large sparse matrices that need to be inverted. Fluid dynamics and solutions to Maxwell's equations come to mind, but I am sure there are other applications in relativity and plasma physics. Estimating a solution to a linear dynamic system of, say, 2^128 degrees of freedom in only 128 cycles would change a lot of things.

    And... Yes, we [uibk.ac.at] are working very hard on building the computers.

  • by aram.harrow (1424757) on Friday December 05, 2008 @01:59PM (#26004981)
Hassidim and Lloyd are my coauthors; the ordering of author names was alphabetical and doesn't reflect our levels of contribution (which were more or less equal).

    So please cite this as "Harrow, Hassidim and Lloyd" and not "Harrow and coauthors."

That said, we're pleased about the attention. :)

    In response to one question: the matrix inversion problem for the parameters we consider is (almost certainly) not NP-complete, but instead is BQP-complete, meaning that it is as difficult as any other problem that quantum computers can solve. We plan to update our arXiv paper shortly with an explanation of this point.

    • Re: (Score:2, Insightful)

      by saburai (515221)

      I work in CFD, so this is all thrilling to me. I suspected it was only a matter of time before methods were discovered for applying quantum computation to large systems of linear equations, and I certainly hope your work stands up to peer review. Cheers!

    • by wolfemi1 (765089)
      Aram,

      It's Mike Wolfe, congratulations on making Slashdot, if this is your first time. What a happy surprise seeing your name there!

      -Mike

    • by JohnFluxx (413620)

      It's awesome that you posted here :)

      A quick question if I may..

Could raytracing be done efficiently on quantum computers?

    • by khallow (566160)
      Does this solution help with the more general case of finding the solution to a dense, well conditioned system of linear equations? I briefly glanced over the Andrew Childs paper you cited [arxiv.org], but enlightenment did not sink in.
    • by MoxFulder (159829)

      Great work, Aram! I can't believe you managed to get your name on slashdot. I seethe with jealousy.

      Heck, I'd settle for having a decent paper to put on arXiv!

      -Dan Lenski

  • I don't have a clue what a quantum linear equation is or why I'd need to solve it, let alone know how to.
  • What kind of side effects will a progress bar have, since you can't know what it's doing if you know where it is?

  • A cool algorithm that should be possible on a quantum computer is "perfect" data compression. IOW, "what is the smallest Turing machine + input string that outputs the following string in less than 1 billion steps?"

    Such an algorithm would need a quantum computer to run, but the decompression could happen on a classical computer.

    Anyone aware if such an algorithm exists? The summary would seem to indicate not.
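The search described above is classically brute-force. As a toy stand-in for "smallest Turing machine + input," here is an exhaustive search over programs for a trivial run-length "machine" (the machine, alphabet, and step limit are all invented for illustration, and nothing here is quantum; it just shows why the search is exponential classically):

```python
from itertools import product

def run_decompressor(program, step_limit=1000):
    """Toy 'machine': interpret the program as (char, digit) run-length pairs.
    Returns the decoded string, or None if the program is malformed or too slow."""
    out = []
    steps = 0
    it = iter(program)
    for ch, d in zip(it, it):  # consume the program two characters at a time
        if not d.isdigit():
            return None
        for _ in range(int(d)):
            out.append(ch)
            steps += 1
            if steps > step_limit:
                return None
    return "".join(out)

def smallest_program(target, alphabet="ab0123456789", max_len=6):
    """Exhaustive (exponential-time) search for the shortest program whose
    output equals `target` within the step limit."""
    for n in range(1, max_len + 1):
        for cand in product(alphabet, repeat=n):
            prog = "".join(cand)
            if run_decompressor(prog) == target:
                return prog
    return None
```

For example, smallest_program("aaab") finds "a3b1". The real question in the parent comment is whether a quantum computer could search this program space with better than brute-force scaling; no such general algorithm was known.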

  • by internic (453511) on Friday December 05, 2008 @04:29PM (#26006875)

This looks like a really interesting result, but having looked a little at the paper, it's not 100% clear to me what the quantum algorithm is being compared to. First, a caveat: I study quantum physics but not CS, so my knowledge of algorithms and complexity theory is quite limited. Anyway, in solving problems on a good old classical computer, you can seek to solve a linear system exactly (up to the bounds of finite arithmetic) or you can seek to solve it approximately to within some error.

    So the thing I'm wondering is what classical algorithm are they comparing their result to, and is this comparing apples to apples? They say

    In this case, when A is sparse and well-conditioned, with largest dimension n, the best classical algorithms can find x and estimate x* M x in O(n) time. Here, we exhibit a quantum algorithm for solving linear sets of equations that runs in O(log n) time, an exponential improvement over the best classical algorithm. ... In particular, we can approximately construct |x> = A^-1 |b> and make a measurement M whose expectation value x* M x corresponds to the feature of x that we wish to evaluate.

    So it looks like the authors' algorithm gives an approximate solution to the linear system and they're comparing it to a classical algorithm that gives an exact solution to the problem. This seems like comparing apples to oranges. Perhaps someone who knows more about the features of the various classical algorithms can comment on whether it looks like the correct comparison to make and why.

    I bring this up because I recall that given a matrix representing the density matrix of a bipartite quantum system, determining whether it represents an entangled or separable quantum state is in general NP-Hard, but IIRC there exist semi-definite programming techniques to get the answer probabilistically in polynomial time. The point is that in that case there's a big gain for accepting an answer that will be wrong every once in a while. I was just curious if settling for an approximate answer in solving linear systems changes the complexity of that problem drastically as well.
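To make the quantity in the quoted passage concrete: classically, estimating the "feature" x†Mx goes through a full solve for x. A small dense sketch (sizes and matrices invented; real entries, so x† reduces to xᵀ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Build a symmetric, well-conditioned A -- the regime the paper assumes
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # shifting by n*I keeps the eigenvalues away from 0
b = rng.standard_normal(n)
M = np.diag(rng.standard_normal(n))

x = np.linalg.solve(A, b)  # classical route: recover the whole vector x ...
feature = x @ M @ x        # ... then read off the single scalar x^T M x
```

The quantum algorithm instead prepares a state proportional to x and measures the scalar directly, which is why comparing it to a classical algorithm that outputs all of x is the apples-to-oranges worry raised above.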

    • by BZ (40346)

      Of course any quantum computing algorithm (including Shor's) has the same issue.

      In practice, once the probability of the answer being incorrect is smaller than some threshold (e.g. the probability of the classical computer giving a wrong answer due to cosmic rays) it really doesn't matter that you're using a probabilistic algorithm.

      • by internic (453511)

        It's true that something like Shor's algorithm is probabilistic, and so, yes, you're always sort of comparing apples to oranges. So in that situation you can define a probability distribution on the amount of time it will take to get the correct answer. To compare apples to apples you simply compare some number like the mean time for solution, or the 95th percentile longest time for solution (there's also the question of best- and worst-case scenarios) for the two algorithms. If the classical algorithm

    • by ASkGNet (695262)

      Let's say the probability is 1/2. The solution to the system can be verified in polynomial time. If the solution is incorrect, run the algorithm again.

After you run it N times, the probability of error is 1/(2^N), and you only wasted polynomial time on verification. Clearly, as long as the per-run error probability is bounded away from 1, the algorithm finishes in a sane (expected polynomial) amount of time.
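The verify-and-repeat loop described above, as a hedged mock (the "quantum" solver here is just a coin-flip wrapper around numpy, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def unreliable_solve(A, b, p_correct=0.5):
    """Mock probabilistic solver: returns the true solution with probability
    p_correct, and an arbitrary wrong vector otherwise."""
    x = np.linalg.solve(A, b)
    return x if rng.random() < p_correct else x + 1.0

def solve_with_verification(A, b, tol=1e-9, max_tries=200):
    """Repeat until the cheap classical check A x = b passes.
    After N failed tries the remaining error probability is (1 - p_correct)^N."""
    for _ in range(max_tries):
        x = unreliable_solve(A, b)
        if np.linalg.norm(A @ x - b) <= tol:  # verification is a cheap matvec
            return x
    raise RuntimeError("exceeded retry budget")
```

The key point is that verification (one matrix-vector product) is far cheaper than solving, so amplifying the success probability costs almost nothing.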

      • by internic (453511)

        I'm not sure the number of possible outcomes has anything to do with it. These quantum algorithms can in principle give the same wrong answer over and over again (though this is extremely improbable), so it's not like you exhaust the set of wrong answers or something. In any case, it is certainly true that you can put a probability distribution on the time it will take the algorithm to get the answer, and you ask how long it will take to get a solution on average or that it will take less than T operation

    • by aram.harrow (1424757) on Friday December 05, 2008 @05:51PM (#26007821)
      This is a great question. Here's how I like to think about it: If A is a stochastic matrix and b is a probability distribution that we can sample from, then given a few assumptions, we can sample from Ab efficiently. This is more or less the idea behind so-called Monte Carlo simulations, which are a tremendously useful tool in classical computing. However, we don't know how to get sampling access to A^{-1} b. Our algorithm gives us something like sampling access to A^{-1} b. Not exactly, because we're talking about quantum amplitudes, rather than probabilities. But more importantly, taking the inverse can make a sparse matrix dense, and (as we often see for problems admitting quantum speedups) sampling-based approaches to computing it fail because the samples have alternating signs.
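The Monte Carlo point in the reply above, in code form: with a stochastic A and sample access to b, samples from Ab are cheap (the matrix and distribution are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Column-stochastic A: A[:, j] is the distribution over next states from state j
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
b = np.array([0.5, 0.5])  # a distribution we can sample from

# "Sampling access" to A b: draw j ~ b, then draw i ~ A[:, j]
draws = [rng.choice(2, p=A[:, rng.choice(2, p=b)]) for _ in range(100_000)]
empirical = np.bincount(draws, minlength=2) / len(draws)

exact = A @ b  # the distribution the samples approximate
```

Nothing analogous gives sampling access to A^-1 b classically, which is the gap the quantum algorithm fills (with amplitudes rather than probabilities).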
  • Where is the on-line version of the 'Quantum Linear Equation Solver' built with hookers and blackjack?
