Science

Tiny Holes Advance Quantum Computing 255

Nick writes "Worldwide, scientists are racing to develop computers that exploit the quantum mechanical properties of atoms - quantum computers. One strategy for making them involves packaging individual atoms on a chip so that laser beams can read quantum data. Scientists at Ohio State University have taken a step toward the development of quantum computers by making tiny holes that contain nothing at all. The holes - dark spots in an egg carton-shaped surface of laser light - could one day cradle atoms for quantum computing."
  • Re:Great principle (Score:5, Informative)

    by Urkki ( 668283 ) on Tuesday April 26, 2005 @09:13AM (#12347177)
    • they're not an improvement over silicon for everything.

    Indeed, talking about quantum computers as an improvement on silicon computers is like talking about jumbo jets as an improvement over cars. That is, not an improvement at all, unless you have something very specific to do (factor a large integer or cross an ocean). And you need the simpler alternative to use the more advanced one (a car to get to the airport, a regular computer to feed data into and extract data from the quantum computer).
  • Re:Great principle (Score:5, Informative)

    by AKAImBatman ( 238306 ) * <[moc.liamg] [ta] [namtabmiaka]> on Tuesday April 26, 2005 @09:28AM (#12347300) Homepage Journal
    Quantum Leap [imdb.com] was an excellent TV show that ran through the late '80s and early '90s. The premise was that Dr. Sam Beckett (played by Scott Bakula, who now plays Captain Archer on Enterprise) invented a time machine that would allow him to reach points throughout his lifetime. The problem is that he never quite got the kinks worked out of his retrieval program, and now finds himself randomly leaping from life to life. The tagline of the show was, "striving to put right what once went wrong and hoping each time that his next leap will be the leap home." (Usually then followed by us seeing him leap into someone's life. Something utterly confusing then happens to him, and he utters the words, "Oh boy.")

    And now you know... the rest of the story.
  • by Milalwi ( 134223 ) on Tuesday April 26, 2005 @09:43AM (#12347412)
    The original news release [osu.edu], which has an animation to support the story, is available at the Ohio State University Research News [osu.edu] site.

    Milalwi
  • by moslevin ( 874414 ) on Tuesday April 26, 2005 @09:52AM (#12347477)
    It sounds like the terminology is confusing a few people here. As I understand it, there are two kinds of quantum computer being researched: the kind everybody seems to be familiar with (i.e. the ones that can solve cryptographic problems very quickly), and the other kind, which involves using the physical properties of individual quanta to create quantum wires, transistors, and gates that form clocked, general-purpose architectures. This article is talking about the second type. Currently, the biggest challenges (from what I've read) associated with implementing the second type have to do with the manufacturing tolerances required to create quantum wells capable of keeping quantum data encapsulated and deterministic. The other challenges include finding ways to clock these quantum circuits and ways of inputting/outputting the data. So, from my interpretation, this article is really just talking about some potential solutions for aspects of the second type of QC I mentioned.
  • Re:Great principle (Score:2, Informative)

    by SysSupport ( 872059 ) on Tuesday April 26, 2005 @09:55AM (#12347509)
    Paul Harvey, g'Day.
  • by karvind ( 833059 ) <karvind@COLAgmail.com minus caffeine> on Tuesday April 26, 2005 @10:01AM (#12347553) Journal
    Does anybody really know exactly how atoms and sub-atomic particles are going to behave in less-than perfect environments? What about gamma-ray bursts from stars and nuclear emissions from our Sun? Will these possibly have an adverse effect on a chip that is running on the atomic level?

    One of the keys to making things at the nanoscale is fault and defect tolerance. With billions of elements in the system, you are bound to get manufacturing defects as well as many run-time defects. Even modern DRAMs have redundant columns of memory cells to improve yield by swapping defective columns for spares. FPGAs (Field-Programmable Gate Arrays) offer in-circuit reconfigurability. HP showed Teramac [hp.com] a few years ago, which had millions of defects yet worked just "fine" by detecting the defects and reconfiguring around them.

    In short, there will be sources of errors and faults in these systems, but there are various ways to get around them. Also, in quantum computing you can encode your data in such a way that it is immune to noise (at least to a certain extent); this is called quantum error correction [qubit.org]. (A minimal sketch of the idea follows at the end of this comment.)

    But also remember that science is not just about the destination but also the journey. Even if practical quantum computers are never built, we are likely to learn many interesting things that may be used elsewhere.
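    A minimal numpy sketch of the quantum error correction idea, using the three-qubit bit-flip code (the simplest quantum error-correcting code); the amplitudes and helper names here are illustrative, not from the qubit.org page:

    ```python
    import numpy as np

    # Encode one logical qubit a|0> + b|1> as a|000> + b|111>. A single
    # bit-flip error can then be located and undone without measuring --
    # and thereby destroying -- the amplitudes a and b themselves.
    a, b = 0.6, 0.8                     # illustrative amplitudes
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b

    def apply_x(state, k):
        """Flip qubit k (bit position k) of every basis state."""
        out = np.zeros_like(state)
        for i in range(8):
            out[i ^ (1 << k)] = state[i]
        return out

    state = apply_x(state, np.random.randint(3))   # random single bit flip

    # Syndrome: the parities Z0*Z1 and Z1*Z2. Both nonzero basis states
    # give the same parities, so the syndrome pinpoints the error while
    # revealing nothing about a and b.
    i = next(j for j in range(8) if abs(state[j]) > 1e-12)
    s01 = ((i >> 0) & 1) ^ ((i >> 1) & 1)
    s12 = ((i >> 1) & 1) ^ ((i >> 2) & 1)

    # Map the syndrome to the flipped qubit and apply the correction.
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
    if flipped is not None:
        state = apply_x(state, flipped)

    print(np.allclose([state[0b000], state[0b111]], [a, b]))  # True
    ```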

  • by advocate_one ( 662832 ) on Tuesday April 26, 2005 @10:09AM (#12347625)
    Diamonds have the highest conductivity rate of any known metal, which makes them perfect candidates for traditional computing. You may think "oh, but they're so expensive," but this isn't necessarily true. Natural diamonds are expensive, but this isn't due to its scarcity.

    Diamond is not a metal... and diamond has the highest thermal conductivity... the last thing you want for semiconductor devices is a substrate with the highest electrical conductivity... you want a very good insulator that also gets heat away very quickly... this is where diamond layers come in... not solid machined diamonds, but diamond deposited or grown as a thin layer...

  • by Catullus ( 30857 ) on Tuesday April 26, 2005 @10:29AM (#12347772) Journal
    In fact, it would be very surprising if it turns out to be NP-complete, as it is in NP intersect co-NP. Also, no efficient quantum algorithms are known for NP-complete problems, and it is generally suspected that quantum computers won't be able to solve them efficiently. For example, see this semi-technical paper [arxiv.org].

    You had better get that right in your undergrad thesis ;)
  • Re:Great principle (Score:3, Informative)

    by murphj ( 321112 ) on Tuesday April 26, 2005 @10:38AM (#12347902) Homepage
    So, what kind of scale are we talking about here? To simulate, say, a million-transistor CPU and a megabyte of RAM, how many qubits would you need? About as many as you need transistors, or radically less?

    From TFA: "In principle, quantum computers would need only 10,000 qubits to outperform today's state-of-the-art computers with billions and billions of regular bits," Lafyatis said.
  • by Deanalator ( 806515 ) <pierce403@gmail.com> on Tuesday April 26, 2005 @10:48AM (#12348014) Homepage
    I like to think of quantum computers as binary on the outside, analog on the inside. You can only read and write in binary, but the operators in the middle can be real-valued (complex-valued, even). (A tiny sketch of this idea follows at the end of this comment.)

    Nielsen and Chuang's book is neat (I have it sitting on my floor three feet from me ATM). It's mainly written for physicists to learn quantum circuits and algorithms. It takes a year to read, but by the time you are done, you should be able to read and understand most of the papers in the field.

    A much lighter book on the subject is "Explorations in Quantum Computing" by Williams and Clearwater. It gives a basic overview without much assumed knowledge.

    Also, "Problems & Solutions in Quantum Computing & Quantum Information" by Willi-Hans Steeb and Yorick Hardy has a lot of fun problems in it. It's the kind of book that's good to read on a bus or an airplane.
  • Re:Great principle (Score:2, Informative)

    by frakir ( 760204 ) on Tuesday April 26, 2005 @10:54AM (#12348073)
    But, Shor takes roughly 2^N qubits to factor N

    Make that log2(N) qubits. 2^N would be a bit excessive (to factor 15 they'd need 32,768 qubits; they used 7 of them).
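    For concreteness, the arithmetic in a few lines of Python; the 2n+3 qubit count is one published circuit construction (Beauregard's), used here as an assumption about "roughly log2(N)" rather than a claim about the actual experiment:

    ```python
    import math

    def shor_qubit_estimate(N):
        n = math.ceil(math.log2(N))  # bits needed to represent N
        return 2 * n + 3             # one common construction: ~2n+3 qubits

    print(shor_qubit_estimate(15))   # 11 for a generic circuit; the real
                                     # experiment used 7 via a compiled version
    print(2 ** 15)                   # 32768 -- the excessive "2^N" reading
    ```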
  • Re:I wonder... (Score:2, Informative)

    by SolFire ( 710842 ) on Tuesday April 26, 2005 @10:55AM (#12348092)
    Actually, that won't happen, because if it happened it would imply information travelling faster than light, and that does not happen. Even if you have a pair of perfectly entangled qubits (called an e-bit) and you separate them by a great distance and perform a quantum operation on the first qubit, the measurement outcome of the first qubit will not affect the measurement outcome of the second qubit.

    The idea of quantum teleportation has been misunderstood. Quantum teleportation is not like the Star Trek transporters; it is a method of sending a qubit to another person. In order to do this you need to share an e-bit. Example: Alice has a qubit Y she wants to send to Bob. Alice and Bob also share an e-bit E, which is a perfectly entangled pair of qubits, (|00> + |11>)/sqrt(2). Alice performs a controlled-not operation, controlled by Y, on her half of the e-bit E; then she applies a Hadamard transformation (a quantum Fourier transform on one qubit) to Y; then she measures both Y and her half of E. At this point she has two classical bits: 00, 01, 10, or 11. She sends these to Bob. Bob then performs a bit-flip (X) on his half of the e-bit, conditioned on the bit from Alice's e-bit measurement, and a phase-flip (Z), conditioned on the bit from her measurement of Y, at which point the qubit Bob holds is Y. This process perfectly sends a qubit from Alice to Bob, but the key part to remember is that the two classical bits Alice measured had to be sent to Bob. Without those bits Bob would not be able to get Y, and sending the classical bits takes the usual amount of time. (A numpy simulation of this protocol follows at the end of this comment.)

    What's even cooler is that if Alice's qubit Y had been entangled with another qubit X, the entanglement is preserved after the QT process so that the qubit Bob has is now entangled with X.

    Disclaimer: I am not a quantum physicist. I am a recent computer science grad who just took a course on Quantum Computing. Just one.
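    Here is a small numpy simulation of the protocol as described above; the amplitudes of Y are arbitrary illustrative values, qubit 0 is Y, and qubits 1 and 2 are Alice's and Bob's halves of the e-bit:

    ```python
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # Qubit 0 = Y, qubits 1 and 2 = the e-bit (|00> + |11>)/sqrt(2).
    a, b = 0.6, 0.8
    Y = np.array([a, b], dtype=complex)
    ebit = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(Y, ebit)         # 8 amplitudes, basis order |q0 q1 q2>

    # Controlled-not: control qubit 0 (Y), target qubit 1 (Alice's half).
    CNOT01 = np.zeros((8, 8), dtype=complex)
    for i in range(8):
        q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
        CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, i] = 1
    state = CNOT01 @ state

    # Hadamard on Y.
    state = np.kron(np.kron(H, I), I) @ state

    # Alice measures qubits 0 and 1, getting two classical bits m0, m1.
    outcome = np.random.choice(8, p=np.abs(state) ** 2)
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

    # Bob's qubit collapses to the measured branch (renormalized).
    bob = np.array([state[(m0 << 2) | (m1 << 1)],
                    state[(m0 << 2) | (m1 << 1) | 1]])
    bob /= np.linalg.norm(bob)

    # Bob's corrections, conditioned on the two classical bits.
    if m1: bob = X @ bob             # bit flip from the e-bit measurement
    if m0: bob = Z @ bob             # phase flip from the Y measurement
    print(np.allclose(bob, Y))       # True: Bob now holds Y
    ```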
  • Re:Great principle (Score:5, Informative)

    by QuantumFTL ( 197300 ) * on Tuesday April 26, 2005 @11:26AM (#12348420)
    Not exactly. Quantum computers can simulate classical computers with no problems. That's one of the tenets of quantum computation.

    If by "no problems" you mean "severe and most likely insurmountable quantum coherence issues". Any quantum computer big enough to simulate a modern sized classical computer will contain so many qubits as to have problems with interference from the outside world. IIRC the problem of quantum coherence is roughly exponential in the number of qubits in a system (one of the reason we don't have 1000 qubit computers sitting around). Just having enough qubits to remember my RAM would get pretty ridiculous.

    The truth is that quantum computers, in the foreseeable future, will likely be an orthogonal type of computing system to classical computers: a coprocessor used for certain problems with small memory requirements but large search spaces. Many of our most important computations lie in this regime, but I doubt quantum computers will outperform classical computers on most ordinary stuff (i.e. word processing, running a webserver, handling large databases), given how serial and memory-intensive those workloads are. (Insert a quote like "640K ought to be enough for anybody" here.)

    Also, the fact that quantum computers can factor large integers efficiently necessarily implies that they can do other NP-complete problems efficiently, such as the traveling salesman problem.

    It implies no such thing [wikipedia.org]. The traveling salesman problem is NP-complete [wikipedia.org], and while we have no solid proof that a quantum computer cannot solve an NP-complete problem in polynomial time, Shor's algorithm is in no way such a proof, as integer factorization is merely in NP [wikipedia.org], not NP-complete as you claimed.

    Yes, IAWAUGTOQC (I am writing an undergrad thesis on quantum computation).

    Yes, I do have a degree in physics. You may wish to check said thesis in light of errors explained above.
  • Re:Great principle (Score:3, Informative)

    by bobhagopian ( 681765 ) on Tuesday April 26, 2005 @01:46PM (#12349879)
    The real difference is that quantum bits ("qubits") can exist in superpositions. Take some qubit, like the spin of a nucleus (this is actually a pretty popular choice). Classically, it can point up or point down in a magnetic field. We call these orientations 0 and 1; accordingly, the spin stores the exact same information as a regular bit. Here's where the quantumness comes in: the 0 and 1 states can exist on top of each other, so that a spin is in a combination of the two states. To imagine the potential utility of this superposition, consider this (somewhat artificial) example. To store the integers 0 through 7 on a classical computer, you'd need 8 sets of three bits (i.e., 000, 001, 010, 011, 100, 101, 110, 111). You can store the same set of integers on a SINGLE set of three qubits (i.e., 0/1 0/1 0/1). Why is that useful? Say you want to figure out which of those numbers plus 5 equals 10. Classically, you'd go through the possibilities one by one until you found a match. Quantum mechanically, you can do the operation, and the only state out of the superposition that survives is the 101 state. Pretty cool, huh? (A toy numerical sketch follows this comment.)
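    The "only the 101 state survives" step above is a simplification; in a real algorithm an oracle marks the answer and amplitude amplification (as in Grover's algorithm) makes it the likely measurement result. A toy numpy version under that reading:

    ```python
    import numpy as np

    # Three qubits span 8 basis states |000>..|111>: the integers 0..7.
    # A uniform superposition "stores" all eight at once.
    state = np.ones(8, dtype=complex) / np.sqrt(8)

    # Oracle: flip the phase of the x satisfying x + 5 == 10.
    oracle = np.diag([-1.0 if x + 5 == 10 else 1.0 for x in range(8)])
    state = oracle @ state

    # One Grover diffusion step amplifies the marked state.
    diffusion = 2 * np.full((8, 8), 1 / 8) - np.eye(8)
    state = diffusion @ state

    probs = np.abs(state) ** 2
    print(np.argmax(probs), round(probs.max(), 3))  # 5, ~0.781
    ```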
  • by MenTaLguY ( 5483 ) on Tuesday April 26, 2005 @02:24PM (#12350323) Homepage

    why is the cat both dead and alive? why can a bit be both one and zero? i don't understand this and would like to hear an explanation that makes sense.

    While I'm not sure there is an explanation that makes intuitive sense, it does appear to be the way the universe works at small scales.

    Schroedinger's thought experiment was mainly intended to illustrate the weirdness of the issue by tying the state of a macroscopic object (a cat) to a quantum state (the decay or non-decay of a particle). It's not a realistic experiment, because you couldn't isolate the macroscopic contents of the box from the outside world sufficiently (and besides, it's cruel).

    But real experiments do demonstrate that quantum stuff consistently behaves in really bizarre and counterintuitive it-is-but-it-isn't ways.

    One famous example is the oft-repeated "double slit" experiment (hopefully I won't mangle the summary too much).

    You remember light-as-waves? If you take a coherent light source (i.e. a laser) and shine it onto a screen through a mask with two small parallel slits in it, you will see a pattern on the screen resulting from the two interfering wavefronts.

    That's simple enough. But light is also particles (photons). You can put a filter between the laser and the mask that only allows one photon at a time to dribble through. Now you have individual photons going through the mask, and you see individual spots as they hit the screen. Intuitive enough.

    But it starts to get weird. If you measure the brightness of those spots, they still follow the brightness of the interference pattern. That would suggest that each photon is going through both slits at once and somehow interfering with itself. Hmm, that's not very intuitive.

    But, okay. We can test that by using detectors at the slits to note the photons as they go by. Hmm. No, each photon is only going through one slit or the other, not both at once. So why are we getting the interference pattern? Wait, where did the interference pattern go?

    Huh. We stop observing which slit the photon is going through, and the interference pattern comes back (i.e. it effectively went through both slits). We start observing again, and it starts "picking" one slit or the other again...

    Basically it looks as if, to employ a gross anthropomorphism, on quantum scales the universe is "lazy", and only commits to a specific choice if it has to (because somebody is watching). No, that's not intuitive, and no, we have no clue exactly how this happens (although we're getting better at describing it and exploiting it for practical purposes like primitive quantum computers), but that's what happens. (A small numerical sketch of the two-slit pattern follows at the end of this comment.)

    why does it need an "observer"? what exactly is an "observer"?

    Physicists are wrestling with that one. We don't really know. A person observing the quantity being tested (directly or via instrumentation) seems to be sufficient, but not necessary.

    That's one of the downsides of the "Copenhagen Interpretation", which is the most common interpretation of these phenomena -- that an observer observing "forces" the universe to make a "choice" (the grossly anthropomorphic word choice is mine though -- the actual way of putting it is that the act of observation "collapses the wave function").

    There are other interpretations, too, that don't require a privileged position of "observer", but they have other very awkward quirks.

    this all seems counterintuitive to normal logic so why should i believe it is true?

    Certainly you shouldn't accept it just because someone says so, or because a few experiments suggest it might be true. In this case, though, the experiments have been repeated too many times by too many different people for the weird results to be experimental error, and experiments designed to disprove these behaviors have failed.
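    As a footnote to the two-slit discussion, here is a small numerical sketch of the far-field intensity pattern and of photons sampling it one at a time; the wavelength, slit geometry, and screen distance are illustrative assumptions:

    ```python
    import numpy as np

    # Fraunhofer two-slit pattern: cos^2 interference fringes from the
    # slit separation d, inside a single-slit diffraction envelope from
    # the slit width w.
    wavelength = 633e-9           # HeNe laser (metres)
    d, w, L = 50e-6, 10e-6, 1.0   # slit separation, slit width, screen distance

    x = np.linspace(-0.05, 0.05, 2001)   # positions on the screen
    theta = x / L                        # small-angle approximation
    fringes = np.cos(np.pi * d * theta / wavelength) ** 2
    envelope = np.sinc(w * theta / wavelength) ** 2  # np.sinc(t)=sin(pi*t)/(pi*t)
    intensity = fringes * envelope

    # Each photon lands at a random spot drawn from this distribution;
    # the fringe pattern emerges only as detections accumulate.
    hits = np.random.choice(x, size=10000, p=intensity / intensity.sum())
    print(hits.min(), hits.max())
    ```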

  • Re:Great principle (Score:5, Informative)

    by tbo ( 35008 ) on Tuesday April 26, 2005 @05:12PM (#12351939) Journal
    Yes, I do have a degree in physics. You may wish to check said thesis in light of errors explained above.

    And I'm doing a Ph.D. in physics on quantum computing. Sorry to be a prick about it, but you were a bit rough on the undergrad who posted above, and what goes around comes around. As long as that guy isn't doing his research on Slashdot, he'll probably be OK.

    If by "no problems" you mean "severe and most likely insurmountable quantum coherence issues". Any quantum computer big enough to simulate a modern sized classical computer will contain so many qubits as to have problems with interference from the outside world. IIRC the problem of quantum coherence is roughly exponential in the number of qubits in a system

    No, the problem is not exponential in nature. It has been shown that if the error rates for storage, gates, etc. can be brought below certain thresholds (typically 10^-3 to 10^-6), then arbitrarily long computations can be performed. There are many papers on the subject, but here is one [arxiv.org].

    The only way in which decoherence could pose an insurmountable problem is if there is fundamentally new physics that plays a role in the regime between "quantum" and "classical". Nobel Laureate Tony Leggett has talked (in a recent issue of Science, and at the 2005 Gordon Research Conference) about how we might find such new laws of physics if they exist, or otherwise rule out their existence.

    It implies no such thing.

    You are correct. In the early days of the field, I think there was a little bit of confusion about whether quantum computers could do NP-complete problems, but it has long since been sorted out.

    I recently attended a talk by Ike Chuang about general issues in the field. Chuang feels that quantum simulation and quantum communication will be the important applications, although he emphasized communication. I think quantum simulation is way, WAY underappreciated. Not only is it going to revolutionize protein folding, drug design, and other biomed applications, I have a hunch it may prove to be a prerequisite for advanced nanotech.

    The article is not particularly good. The supposed problems that optical lattices will have in addressing qubits in the interior of a 3-D lattice are "solved" by using what is essentially a 2-D lattice on a chip. The same can easily be done with optical lattices.

    Of course, addressing atoms inside a lattice of moderate size can be done by using a high-numerical-aperture lens to focus an addressing beam onto a single atom. The addressing beam produces an AC Stark shift of the appropriate hyperfine sublevels of the atom (in the case of Cesium-133 qubits, it shifts each of the mF sublevels of the F=3 and F=4 states), with the exact shift being different for different sublevels. This allows transitions in that particular atom to be driven by a microwave pulse which is detuned from all the other atoms in the lattice. Just how well can we address one atom while not disturbing atoms in adjacent planes? I'll know in a week or two; I'm currently simulating one- and two-qubit gates in this exact scheme. The actual experiment is also under construction, at Penn State. (A toy numerical illustration of this addressing selectivity follows at the end of this comment.)

    Anyone interested in a distributed computing project to develop quantum computers? I could use help from developers, and later, also regular user input.
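    A toy two-level Rabi calculation illustrating the selectivity argument; the Rabi frequency and the light-shift detuning below are made-up illustrative numbers, not the actual parameters of the experiment described above:

    ```python
    import numpy as np

    # Rabi formula for a two-level atom driven at Rabi frequency omega
    # with detuning delta:
    #   P(t) = omega^2/(omega^2 + delta^2) * sin^2(sqrt(omega^2 + delta^2)*t/2)
    def excitation_prob(omega, delta, t):
        W = np.hypot(omega, delta)          # generalized Rabi frequency
        return (omega / W) ** 2 * np.sin(W * t / 2) ** 2

    omega = 2 * np.pi * 10e3                # 10 kHz microwave Rabi frequency
    t_pi = np.pi / omega                    # pi-pulse time for the resonant atom

    # Addressed atom: the focused beam's AC Stark shift brings it onto
    # resonance with the microwave pulse (delta = 0): a full spin flip.
    print(excitation_prob(omega, 0.0, t_pi))                 # 1.0

    # Neighboring atom: assumed to be light-shifted 200 kHz away in
    # detuning; it barely responds to the same pulse.
    print(excitation_prob(omega, 2 * np.pi * 200e3, t_pi))   # < 2.5e-3
    ```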
  • Re:Great principle (Score:2, Informative)

    by Gate-c ( 879182 ) on Tuesday April 26, 2005 @05:23PM (#12352040)
    I am the one who actually made this animation for OSU Research Communications last week! The main story was posted at OSU with the full movies, not just screenshots: http://researchnews.osu.edu/archive/eggcarton.htm [osu.edu]
