## A Quantum Linear Equation Solver

joe writes

*"Aram Harrow and colleagues have just published on the arXiv a quantum algorithm for solving systems of linear equations (paper, PDF). Until now, the only quantum algorithms of practical consequence have been Shor's algorithm for prime factoring and Feynman-inspired quantum simulation algorithms. All other algorithms either solve problems with no known practical applications or offer only a polynomial speedup over classical algorithms. Harrow et al.'s algorithm provides an exponential speedup over the best-known classical algorithms. Since solving linear equations is such a common task in computational science and engineering, this algorithm makes many important problems that currently consume thousands of hours of supercomputer CPU time amenable to significant quantum speedup. Now we just need a large-scale quantum computer. Hurry up, guys!"*
## n to log(n) (Score:3, Informative)

I'm sure it's still a significant result and there's a good chance they did something new that can be used in other applications.

## arXiv articles - question (Score:3, Informative)

Are arXiv articles peer-reviewed?

If not (as I suspect), that puts a serious question mark over them. On the other hand, there are excellent non-peer-reviewed scientific articles; almost all the scientific articles produced in Europe before the Second World War were not peer-reviewed (back then that was an American practice, and a very good one, I would add). Nowadays, however, peer review is a valuable and widely available resource that should be used.

THAT SAID, some of the best and most innovative scientific articles are not being accepted for publication nowadays because the reviewers are one degree too dumb for the article. Eventually they do get published, but with an unnecessary five-to-six-year delay.

Also, I have read thousands of published peer-reviewed articles in my life, and many of them contain incredible imbecilities where I have to question whether all the reviewers were high on hallucinogens. Very, very high on hallucinogens.

## Looks like a big deal to me. (Score:5, Informative)

Finally, a cool article on /. This is extremely cool! There are a lot of real-world problems that involve inverting extremely large sparse matrices. Fluid dynamics and solutions to Maxwell's equations come to mind, but I am sure there are other applications in relativity and plasma physics. Estimating a solution to a linear dynamic system with, say, 2^128 degrees of freedom in only 128 cycles would change a lot of things.

And... Yes, we [uibk.ac.at] are working very hard on building the computers.

## Avinatan Hassidim and Seth Lloyd (Score:5, Informative)

The paper has three authors, so please cite it as "Harrow, Hassidim and Lloyd" and not "Harrow and coauthors."

That said, we're pleased about the attention. :)

In response to one question: the matrix inversion problem for the parameters we consider is (almost certainly) not NP-complete, but instead is BQP-complete, meaning that it is as difficult as any other problem that quantum computers can solve. We plan to update our arXiv paper shortly with an explanation of this point.

## Re:Polynomial Speedup (Score:5, Informative)

To rephrase the other response: log vs. polynomial is the same kind of gap as polynomial vs. exponential. Moving from polynomial time to logarithmic time is an exponential speedup, just as moving from exponential time to polynomial time is.

O(n^3) vs. O(log n)  →  O(exp(n^3)) vs. O(n)
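To make the scale of that gap concrete, here is a small illustrative sketch (pure Python, sample sizes chosen arbitrarily) tabulating the polynomial cost n^3 against the logarithmic cost log2(n):

```python
import math

# Compare a polynomial-time cost n^3 with a logarithmic-time cost
# log2(n). The ratio between them explodes as n grows, which is the
# same qualitative gap as exponential vs. polynomial.
for n in [2**10, 2**20, 2**30]:
    poly = n**3
    logcost = math.log2(n)
    print(f"n = 2^{int(math.log2(n))}: n^3 = {poly:.3e}, log2(n) = {logcost:.0f}")
```

At n = 2^30 the logarithmic cost is 30 "steps" while the cubic cost is about 10^27, which is why a log-time quantum algorithm for a problem with only poly-time classical algorithms counts as an exponential speedup.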

## Bzzzt, wrong! (Score:1, Informative)

*"In this case, when A is sparse and well-conditioned, with largest dimension n, the best classical algorithms can find x and estimate x^\dag M x in O(n) time"*

First of all, "with largest dimension n" means little: if we are solving a linear system, A must be square. Moreover, the complexity of the best classical algorithms depends not only on the size n of the matrix but also on the number nnz of non-zero entries.
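The nnz dependence is easy to see in a classical iterative solver: the dominant cost per iteration is one sparse matrix-vector product, which touches each non-zero once. Here is a minimal conjugate-gradient sketch in pure Python (the dict-of-rows storage and function names are illustrative, not from the paper), assuming A is symmetric positive-definite:

```python
# Minimal conjugate-gradient solver for a sparse, symmetric
# positive-definite system A x = b. Each iteration costs O(nnz):
# one sparse matrix-vector product plus a few O(n) vector updates.
# Illustrative sketch only, not the paper's algorithm.

def spmv(rows, x):
    """Sparse matrix-vector product; rows[i] maps column index -> value."""
    return [sum(v * x[j] for j, v in row.items()) for row in rows]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(rows, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        ap = spmv(rows, p)
        alpha = rs / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tridiagonal SPD test matrix: 2 on the diagonal, -1 off-diagonal.
n = 5
rows = [{i: 2.0} for i in range(n)]
for i in range(n - 1):
    rows[i][i + 1] = -1.0
    rows[i + 1][i] = -1.0
b = [1.0] * n
x = conjugate_gradient(rows, b)
print([round(v, 6) for v in x])  # -> [2.5, 4.0, 4.5, 4.0, 2.5]
```

Since spmv iterates only over stored entries, total work scales with the number of iterations times nnz, not with n^2.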

## Re:Seems bogus (Score:3, Informative)

*"They talk about dealing with sparse matrices but they would have to be O(log n) sparse to be useful for a single application of the algorithm. For example, a matrix with n=1000000 would have to have around 15 non-zero values and a trillion zero values."*

Wrong. An upper bound in big O notation [wikipedia.org] gives no indication of the constant factors of the "real" bound (nor of lower-order terms in o(log n)). In fact, your statement doesn't even make sense: it is an asymptotic bound, so it cannot be applied to a fixed input size.
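A quick numerical sketch of the point (values chosen arbitrarily for illustration): two functions in the same O(log n) class can differ by orders of magnitude at a concrete n, so "O(log n) sparse" does not pin the non-zero count at n = 1,000,000 to ~15.

```python
import math

# Both functions below are O(log n), yet at n = 10^6 one is ~20 and
# the other is over 2000. The asymptotic class hides constant
# factors and lower-order terms.
n = 1_000_000
candidates = {
    "log2(n)": math.log2(n),
    "100*log2(n) + 50": 100 * math.log2(n) + 50,
}
for name, value in candidates.items():
    print(f"{name} at n = 10^6: {value:.1f}")
```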

## Re:not able to be used == not useful (Score:4, Informative)

If you think quantum computing is about putting a physical system in a certain state and manipulating it, then you should also think classical computing is about setting a tape in a certain way and manipulating it.

When Turing first described a computer (a Turing machine), it was essentially a mechanism where you put a tape in a certain state, manipulate it according to certain rules, and in the end the tape is in another state containing the "result" of your computation. What was so interesting about it is that it was theoretically possible to build a machine that would carry out the manipulations automatically.

Quantum computation is analogous: take a certain vector in an n-dimensional Hilbert space, multiply it by certain matrices in a specific order, and in the end you get a vector that corresponds to the "calculation" you did. This is purely mathematics; in itself it has nothing to do with physics. What's exciting is that we expect to be able to build (real) machines that perform these operations far faster than any computer today can, because the specific operations out of which we build our algorithms are exactly the things nature is capable of doing fast.

Of course, we only discovered this when we discovered quantum mechanics -- that's the relation to physics.
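The "vector times matrices" picture can be shown in a few lines of plain linear algebra. This is an illustrative single-qubit sketch in pure Python (state vectors and the Hadamard gate written out by hand), not a description of any particular quantum computer:

```python
import math

# Quantum computation as linear algebra: a state is a unit vector,
# a gate is a unitary matrix, and running a circuit is just
# matrix-vector multiplication. One qubit, Hadamard gate.

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a 2x2 gate matrix into a length-2 state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]            # the state |0>
state = apply(H, state)       # equal superposition of |0> and |1>
probs = [a * a for a in state]
print(probs)                  # measurement probabilities, ~[0.5, 0.5]

state = apply(H, state)       # H is its own inverse: back to ~|0>
print([round(a, 6) for a in state])
```

Nothing here is physical; the physics only enters when you ask what hardware can apply such unitaries quickly.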

## Re:Implications (Score:1, Informative)

No. Or, not to the best of our knowledge.

Quantum computers solve problems in BQP. We don't even know if that is any larger than BPP, the probabilistic algorithm class.

We also do not know if there are any problems in BQP that are not in BPP; that is, we do not know if a quantum computer can solve any problems a classical computer (with a random source) can't.

We *do* know that there are problems for which the quantum algorithm is exponentially better than any known probabilistic classical algorithm (Shor's factoring, and TFA). Since we also care whether a problem is solved in a day or in the age of the universe, rather than just whether it is solved at all, this matters a great deal.

## Re:Implications (Score:3, Informative)

You're both wrong. They're equivalent.

From wikipedia:

"A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem."

## Re:Hm (Score:3, Informative)

No, really!!

Quantum::Superpositions [cpan.org]