
## A Quantum Linear Equation Solver

joe writes "Aram Harrow and colleagues have just published on the arXiv a quantum algorithm for solving systems of linear equations (paper, PDF). Until now, the only quantum algorithms of practical consequence have been Shor's algorithm for integer factoring and Feynman-inspired quantum simulation algorithms. All other algorithms either solve problems with no known practical applications or offer only a polynomial speedup over classical algorithms. Harrow et al.'s algorithm provides an exponential speedup over the best-known classical algorithms. Since solving linear equations is such a common task in computational science and engineering, this algorithm could bring many important problems that currently consume thousands of hours of supercomputer CPU time within reach of a significant quantum speedup. Now we just need a large-scale quantum computer. Hurry up, guys!"


• #### Re:not able to be used == not useful (Score:5, Interesting)

on Friday December 05, 2008 @01:37PM (#26004649)

It's not a 'new toy' for computer science. Computer science has pretty much nothing to do with it. Mostly it's a toy, or rather an academic pursuit, for theoretical physicists.

No, it's of real interest to theoretical computer science. Quantum computing defines a new class of algorithmic complexity: there are, for instance, sub-exponential quantum algorithms for problems whose best-known classical algorithms take exponential time. There is a whole subindustry of algorithmic complexity theory devoted to exploring the differences between algorithms that can be executed on quantum computers and those that can be executed on classical ones. Scott Aaronson's lecture notes [scottaaronson.com] are a good introduction to this subject.

• #### Re:Bzzzt, wrong! (Score:4, Interesting)

on Friday December 05, 2008 @02:33PM (#26005423)

> if we are solving a linear system, we need A to be square

No. You can "solve" an under-determined system (i.e., reduce it). You can also "solve" an over-determined system via a least-squares fit. Both are common problems in scientific computing.
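Both cases can be handled with standard numerical tools. As an illustrative sketch (the data here are made up), NumPy's `lstsq` computes the least-squares fit for an over-determined system, fitting a line through three points:

```python
import numpy as np

# Over-determined system: three equations, two unknowns, so A is 3x2.
# An exact solution generally does not exist; lstsq instead minimizes
# the squared residual ||Ax - b||^2.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])   # columns: intercept term, slope term
b = np.array([6.0, 5.0, 7.0])

x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
# x holds the least-squares solution [intercept, slope];
# residuals holds the sum of squared errors of the fit.
```

For these points the fit is intercept 5, slope 0.5, with a squared residual of 1.5. An under-determined system can be handled the same way; `lstsq` then returns the minimum-norm solution.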

• #### Comparing Apples to Apples (Score:4, Interesting)

on Friday December 05, 2008 @04:29PM (#26006875)

This looks like a really interesting result, but having looked a little at the paper it's not 100% clear to me what the quantum algorithm is being compared to. First a caveat: I study quantum physics but not CS, so my knowledge of algorithms and complexity theory is quite limited. Anyway, in solving problems on a good old classical computer, you can seek to solve a linear system exactly (up to the bounds of finite arithmetic) or you can seek to solve it approximately to within some error.

So the thing I'm wondering is what classical algorithm are they comparing their result to, and is this comparing apples to apples? They say

> In this case, when A is sparse and well-conditioned, with largest dimension n, the best classical algorithms can find x and estimate x* M x in O(n) time. Here, we exhibit a quantum algorithm for solving linear sets of equations that runs in O(log n) time, an exponential improvement over the best classical algorithm. ... In particular, we can approximately construct |x> = A^-1 |b> and make a measurement M whose expectation value x* M x corresponds to the feature of x that we wish to evaluate.
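Restating the quoted claim in standard notation (a readability aid, not a quote from the paper): the algorithm prepares a quantum state proportional to the solution vector and estimates the expectation value of an observable M in that state,

```latex
|x\rangle \propto A^{-1}|b\rangle,
\qquad \text{estimate } \langle x | M | x \rangle
\quad \text{in } O(\log n) \text{ time,}
\quad \text{vs. } O(n) \text{ classically.}
```

Note that the output is not the vector x itself (writing out all n entries would already take O(n) time) but a single scalar feature of it.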

So it looks like the authors' algorithm gives an approximate solution to the linear system and they're comparing it to a classical algorithm that gives an exact solution to the problem. This seems like comparing apples to oranges. Perhaps someone who knows more about the features of the various classical algorithms can comment on whether it looks like the correct comparison to make and why.
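The exact-vs-approximate distinction on a classical machine can be made concrete. As a minimal sketch (the matrix and tolerance are made up for illustration), a direct solver gives the answer up to floating-point error, while an iterative method like conjugate gradient only runs until the residual drops below a chosen tolerance:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Iteratively solve Ax = b for symmetric positive-definite A,
    stopping once the residual norm falls below tol (approximate solve)."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# A small well-conditioned SPD system (values chosen for illustration).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_direct = np.linalg.solve(A, b)    # "exact" up to floating point
x_iter = conjugate_gradient(A, b)   # approximate, to tolerance tol
```

For well-conditioned systems the two answers agree to within the tolerance, which is why complexity comparisons usually state the error bound explicitly; the fairness of the comparison hinges on both algorithms being held to the same accuracy target.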

I bring this up because I recall that given a matrix representing the density matrix of a bipartite quantum system, determining whether it represents an entangled or separable quantum state is in general NP-Hard, but IIRC there exist semi-definite programming techniques to get the answer probabilistically in polynomial time. The point is that in that case there's a big gain for accepting an answer that will be wrong every once in a while. I was just curious if settling for an approximate answer in solving linear systems changes the complexity of that problem drastically as well.

• #### Re:arXiv articles - question (Score:3, Interesting)

on Friday December 05, 2008 @05:22PM (#26007465)

Yes and no. It was a program called SCIgen. The purpose was to weed out conferences that are crap: the ones that exist to take money rather than to promote genuine scholarly work.

---

SCIgen is a program that generates random Computer Science research papers, including graphs, figures, and citations. It uses a hand-written context-free grammar to form all elements of the papers. Our aim here is to maximize amusement, rather than coherence.

One useful purpose for such a program is to auto-generate submissions to conferences that you suspect might have very low submission standards. A prime example, which you may recognize from spam in your inbox, is SCI/IIIS and its dozens of co-located conferences (check out the very broad conference description on the WMSCI 2005 website). There's also a list of known bogus conferences. Using SCIgen to generate submissions for conferences like this gives us pleasure to no end. In fact, one of our papers was accepted to SCI 2005! See Examples for more details.

http://pdos.csail.mit.edu/scigen/ [mit.edu]

• #### It IS bogus (probably) (Score:4, Interesting)

on Friday December 05, 2008 @06:24PM (#26008169)
The weak point to me is where they "map A to a quantum-mechanical operator". They completely ignore the timing of this step, which amounts to assuming that multiplication by A takes time O(1). With that assumption I'm sure you can do linear algebra blazingly fast.

• #### Re:It IS bogus (probably) (Score:2, Interesting)

on Saturday December 06, 2008 @01:02AM (#26010989)

This is explained in the technical appendix. We assume that A has at most s non-zero entries per row, and that A is stored in a data structure that efficiently answers "return all non-zero entries in a row" queries. For example, an array of lists would work. Our run-time then has an O(s^2) dependence.
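The "array of lists" storage described above can be sketched in a few lines (the matrix here is a made-up example, not one from the paper): one list per row holding (column, value) pairs, so the row query is answered in O(s) time for a row with s non-zeros.

```python
def to_row_lists(dense):
    """Convert a dense matrix (list of rows) into sparse row storage:
    one list of (column, value) pairs per row."""
    return [[(j, v) for j, v in enumerate(row) if v != 0.0]
            for row in dense]

def nonzeros_in_row(rows, i):
    """Answer 'return all non-zero entries in row i' in O(s) time."""
    return rows[i]

dense = [
    [2.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 0.0],
    [1.0, 0.0, 0.0, 4.0],
]
rows = to_row_lists(dense)
```

This is essentially the list-of-lists sparse format; production codes usually use the equivalent compressed-sparse-row layout, but the query cost is the same O(s).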

Basically we need to assume random-access memory. There has been some preliminary work on models for quantum RAM, some of which we cite in our paper. But as a sign that the model isn't too powerful, we know that classical computers with RAM are not exponentially faster than weaker models, like single-tape Turing machines or 1-D cellular automata.

Alternatively, there might be a short algorithm that generates the entries of A. In this case, we could simply use it as a subroutine without having to make any assumptions about the quantum computer's architecture.


