Science

Tiny Holes Advance Quantum Computing 255

Posted by CmdrTaco
from the nothing-is-something dept.
Nick writes "Worldwide, scientists are racing to develop computers that exploit the quantum mechanical properties of atoms - quantum computers. One strategy for making them involves packaging individual atoms on a chip so that laser beams can read quantum data. Scientists at Ohio State University have taken a step toward the development of quantum computers by making tiny holes that contain nothing at all. The holes - dark spots in an egg carton-shaped surface of laser light - could one day cradle atoms for quantum computing."
  • by treff89 (874098) on Tuesday April 26, 2005 @09:03AM (#12347097)
    Quantum computing is quite simply where we turn after existing silicon is exhausted. Once the basics about the random nature of quantum particles, which is extremely interesting, the meaning of computer and mechanics thereof can be redefined.
    • after existing silicon is exhausted
      Good one.
    • Re:Great principle (Score:3, Insightful)

      by koreaman (835838)
      Not necessarily. We still have a long way to go before we have useful quantum computers, and they're not an improvement over silicon for everything. We may well have diamond computers or something else fundamentally similar to silicon computers before we make the leap to quantum.
      • Heh. Leap to Quantum. Don't EVER reference that show again.
      • Re:Great principle (Score:5, Informative)

        by Urkki (668283) on Tuesday April 26, 2005 @09:13AM (#12347177)
        • they're not an improvement over silicon for everything.

        Indeed, talking about quantum computers as an improvement on silicon computers is like talking about jumbo jets as an improvement over cars. I.e. not an improvement at all, unless you have something very specific to do (factor a large integer or cross an ocean). And you need the simpler alternative to use the more advanced one (car to get to the airport, regular computer to feed and extract data for quantum computing).
        • Re:Great principle (Score:4, Interesting)

          by liquidpele (663430) on Tuesday April 26, 2005 @09:18AM (#12347226) Journal
          Well, this seems to be more of a "quantum storage" article anyway... since you would read data off the stored atoms. Something like that would benefit just about everything. Who doesn't need more storage for their movies?
        • Re:Great principle (Score:5, Interesting)

          by stevok (818024) on Tuesday April 26, 2005 @09:37AM (#12347358)
          Not exactly. Quantum computers can simulate classical computers with no problems. That's one of the tenets of quantum computation. I would love to see a 747 parallel park in Manhattan. Also, the fact that quantum computers can factor large integers efficiently necessarily implies that they can do other NP-complete problems efficiently, such as the traveling salesman problem. If we can ever get more than seven qubits to behave, we'll be amazed by the things quantum computers can do. But, alas, scientists have only implemented Shor's Algorithm for factoring integers on one number. 15. And hot damn, they got the factors right, 3 and 5. Yes, IAWAUGTOQC (I am writing an undergrad thesis on quantum computation).
          • Re:Great principle (Score:4, Interesting)

            by Urkki (668283) on Tuesday April 26, 2005 @10:06AM (#12347595)
            • Quantum computers can simulate classical computers with no problems

            So, what kind of scale are we talking about here? To simulate, say, a million-transistor CPU and a megabyte of RAM, how many qubits would you need? About as many as you need transistors, or radically less?

            If the answer is millions, then I think my comparison to a jumbo jet is valid, as we're probably about as far from a quantum computer simulating even a 4004 with hundreds of bytes of RAM as we are from ubiquitous flying cars replacing jumbos ;-)
            • Re:Great principle (Score:4, Interesting)

              by stevok (818024) on Tuesday April 26, 2005 @10:14AM (#12347653)
              Like the article said, the issue isn't processor speed, it's algorithm time as a function of input size, i.e. logN. Factoring integers takes an exponential amount of time on classical computers. The best known classical algorithm (called GNFS) is O(exp((logN)^{1/3}(loglogN)^{2/3})), whereas Shor's algorithm can factor N in O((logN)^3) time. But, Shor takes roughly 2^N qubits to factor N. So, if we're talking about factoring a 200 digit RSA number, that's a whole crapload of qubits to control. Many orders of magnitude more than we can control now. In short, you're absolutely right about quantum computers being completely impractical until there are some huge breakthroughs in engineering and physics. This is why I love being a math major. We don't have to worry about silly things like actually building a quantum computer. We just sit around and daydream about how a quantum computer would work, then when we've got it all figured out, we blame the physicists and engineers for not building one.
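The scaling gap described in the comment above can be eyeballed with a few lines of Python. The GNFS constant below is the standard (64/9)^(1/3) from its L-notation complexity; both formulas are treated as rough scaling proxies, not literal operation counts:

```python
import math

def gnfs_ops(bits):
    # Heuristic GNFS cost: exp(c * (ln N)^(1/3) * (ln ln N)^(2/3))
    # with the standard constant c = (64/9)^(1/3).  Treat the
    # result as a scaling proxy, not a literal operation count.
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_ops(bits):
    # Shor's algorithm runs in roughly (log N)^3 gate operations.
    return bits ** 3

# The gap explodes with key size: sub-exponential vs. polynomial.
for bits in (512, 1024, 2048):
    print(f"{bits:5d} bits  GNFS ~{gnfs_ops(bits):.1e}  Shor ~{shor_ops(bits):.1e}")
```

At 2048 bits the sub-exponential GNFS estimate dwarfs the polynomial Shor estimate by many orders of magnitude, which is the whole point of the algorithm.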
              • Re:Great principle (Score:2, Informative)

                by frakir (760204)
                But, Shor takes roughly 2^N qubits to factor N

                Make that log2(N) qbits. 2^N would be a bit excessive (to factor 15 they'd need 32000 qbits. They used 7 of them)
                • Make that log2(N) qbits. 2^N would be a bit excessive (to factor 15 they'd need 32000 qbits. They used 7 of them)

                  I thought it was around 2N qbits to factor an N-bit number.
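For scale, textbook circuit constructions for Shor's algorithm are usually quoted at around 2n qubits for an n-bit modulus, which matches the parent's figure. A sketch using an assumed 2n+3 estimate (the figure from one published construction, not a tight bound):

```python
def shor_qubit_estimate(modulus_bits, overhead=3):
    # One common circuit construction uses about 2n + 3 logical
    # qubits for an n-bit modulus; the "+3" overhead is an
    # illustrative published figure, not a hard bound.
    return 2 * modulus_bits + overhead

# 15 is a 4-bit number: roughly a dozen qubits in the generic
# circuit (the actual experiment used 7 via problem-specific tricks).
print(shor_qubit_estimate(4))      # 11
# A 2048-bit RSA modulus: thousands of logical qubits, not 2^2048.
print(shor_qubit_estimate(2048))   # 4099
```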
            • Re:Great principle (Score:3, Informative)

              by murphj (321112)
              So, what kind of scale are we talking about here? To simulate, say, a million-transistor CPU and a megabyte of RAM, how many qubits would you need? About as many as you need transistors, or radically less?

              From TFA: "In principle, quantum computers would need only 10,000 qubits to outperform today's state-of-the-art computers with billions and billions of regular bits," Lafyatis said.
                • From TFA: "In principle, quantum computers would need only 10,000 qubits to outperform today's state-of-the-art computers with billions and billions of regular bits," Lafyatis said.

                Yes, but as far as I understand, that *only* applies to problems you can solve with 10k qubits, and that's quite a limited set.

                It is my understanding that, for example, doing a filtering operation on a 2 hour film consisting of 5 megapixel images, 10k qubits would not help you much. AFAIK 1 qubit is still just one "serial" b

              • From TFA: "In principle, quantum computers would need only 10,000 qubits to outperform today's state-of-the-art computers with billions and billions of regular bits," Lafyatis said.

                Sure, for specific classes of problems that quantum computers are really really good at. But the whole discussion in this thread was about quantum computers simulating classical computers doing their everyday, mundane, classical computer things.

                The quote shines no light on the actual question: How powerful of a quantum compute
          • by Catullus (30857) on Tuesday April 26, 2005 @10:29AM (#12347772) Journal
            In fact, it would be very surprising if it turns out to be NP-complete, as it is in NP intersect co-NP. Also, no efficient quantum algorithms are known for NP-complete problems, and it is generally suspected that quantum computers won't be able to solve them efficiently. For example, see this semi-technical paper [arxiv.org].

            You had better get that right in your undergrad thesis ;)
            • While I have not read the paper you mention completely enough yet to understand its argument, let me point out the obvious fact that nature, by definition, "simulates itself," i.e. is its own computer. Now, the processes involved are extremely complex; just to simulate the processes going on inside of a single atom can take years of computation time on an ordinary computer, yet it happens essentially instantly in actuality. Is the problem of simulating nature on a submicroscopic level NP-complete? I'm n
              • While I have not read the paper you mention completely enough yet to understand its argument, let me point out the obvious fact that nature, by definition, "simulates itself," i.e. is its own computer.

                You're misusing that first word.

                A "simulation" is a testable model of something, usually created for a specific kind of testing, that specifically is NOT the thing itself. By way of example, consider "simulating" adding numbers on a computer chip. Most of the time you wouldn't bother doing it, because it'
          • I would love to see a 747 parallel park in Manhattan.

            Didn't they try that back in 2001?
          • Re:Great principle (Score:5, Informative)

            by QuantumFTL (197300) * <justin.wick@nOSPaM.gmail.com> on Tuesday April 26, 2005 @11:26AM (#12348420)
            Not exactly. Quantum computers can simulate classical computers with no problems. That's one of the tenets of quantum computation.

            If by "no problems" you mean "severe and most likely insurmountable quantum coherence issues". Any quantum computer big enough to simulate a modern sized classical computer will contain so many qubits as to have problems with interference from the outside world. IIRC the problem of quantum coherence is roughly exponential in the number of qubits in a system (one of the reasons we don't have 1000 qubit computers sitting around). Just having enough qubits to remember my RAM would get pretty ridiculous.

            The truth is that quantum computers, in the foreseeable future, will likely be an orthogonal type of computing system to classical computers - a coprocessor used for certain problems with small memory requirements but large search spaces. Many of our most important computations lie in this regime, but I doubt quantum computers will outperform classical computers on most ordinary stuff (i.e. word processing, running a webserver, handling large databases) due to its serial, memory-intensive nature. (Insert quote like "640K ought to be enough for anybody" here)

            Also, the fact that quantum computers can factor large integers efficiently necessarily implies that they can do other NP-complete problems efficiently, such as the traveling salesman problem.

            It implies no such thing [wikipedia.org]. The traveling salesman problem is NP-complete [wikipedia.org], and while we have no solid proof that a quantum computer cannot solve an NP-complete problem in polynomial time, Shor's algorithm is also in no way any kind of proof, as integer factorization is merely in NP [wikipedia.org], not NP-complete as you claimed.

            Yes, IAWAUGTOQC (I am writing an undergrad thesis on quantum computation).

            Yes, I do have a degree in physics. You may wish to check said thesis in light of errors explained above.
            • The problem with the current quantum computer research is there are always butterflies in China flapping their wings ... interfering with the research done in the US.

            • Re:Great principle (Score:5, Informative)

              by tbo (35008) on Tuesday April 26, 2005 @05:12PM (#12351939) Journal
              Yes, I do have a degree in physics. You may wish to check said thesis in light of errors explained above.

              And I'm doing a Ph.D. in physics on quantum computing. Sorry to be a prick about it, but you were a bit rough on the undergrad who posted above, and what goes around comes around. As long as that guy isn't doing his research on Slashdot, he'll probably be OK.

              If by "no problems" you mean "severe and most likely insurmountable quantum coherence issues". Any quantum computer big enough to simulate a modern sized classical computer will contain so many qubits as to have problems with interference from the outside world. IIRC the problem of quantum coherence is roughly exponential in the number of qubits in a system

              No, the problem is not exponential in nature. It has been shown that if the error rates for storage, gates, etc. can be brought below certain thresholds (typically 10^-3 to 10^-6), then arbitrarily long computations can be performed. There are many papers on the subject, but here is one [arxiv.org].

              The only way in which decoherence could pose an insurmountable problem is if there is fundamentally new physics that plays a role in the regime between "quantum" and "classical". Nobel Laureate Tony Leggett has talked (in a recent issue of Science, and at the 2005 Gordon Research Conference) about how we might find such new laws of physics if they exist, or otherwise rule out their existence.

              It implies no such thing.

              You are correct. In the early days of the field, I think there was a little bit of confusion about whether quantum computers could do NP-complete, but it has long since been sorted out.

              I recently attended a talk by Ike Chuang about general issues in the field. Chuang feels that quantum simulation and quantum communication will be the important applications, although he emphasized communication. I think quantum simulation is way, WAY underappreciated. Not only is it going to revolutionize protein folding, drug design, and other biomed applications, I have a hunch it may prove to be a prerequisite for advanced nanotech.

              The article is not particularly good. The supposed problems that optical lattices will have in addressing qubits in the interior of a 3-D lattice are "solved" by using what is essentially a 2-D lattice on a chip. The same can easily be done with optical lattices.

              Of course, addressing atoms inside a lattice of moderate size can be done using a high numerical aperture lens to focus an addressing beam onto a single atom. The addressing beam produces an AC Stark Shift of the appropriate hyperfine sublevels of the atom (in the case of Cesium-133 qubits, it shifts each of the mF sublevels of the F=3 and F=4 states), with the exact shift being different for different sublevels. This allows transitions in that particular atom to be driven by a microwave pulse which is detuned from all the other atoms in the lattice. Just how well can we address one atom while not disturbing atoms in adjacent planes? I'll know in a week or two. I'm currently simulating one and two qubit gates in this exact scheme. The actual experiment is also under construction, at Penn State.

              Anyone interested in a distributed computing project to develop quantum computers? I could use help from developers, and later, also regular user input.
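The threshold behavior mentioned in this comment can be sketched numerically. The scaling below is a common heuristic for how logical error rates fall once physical error rates dip below threshold; the 10^-3 threshold value and the distance-5 code are illustrative assumptions, and constant factors are deliberately omitted:

```python
def logical_error_rate(p_phys, p_threshold=1e-3, distance=5):
    # Below-threshold heuristic from fault-tolerance results:
    # logical error per round falls roughly as
    # (p/p_th)^((d+1)/2) for an error-correcting code of
    # distance d.  Constants omitted; numbers illustrative only.
    return (p_phys / p_threshold) ** ((distance + 1) // 2)

# A physical rate one order of magnitude below threshold yields a
# logical rate of roughly 1e-3 per round at distance 5; raising
# the distance suppresses it further.
print(logical_error_rate(1e-4))
print(logical_error_rate(1e-4, distance=7))
```

This is why "arbitrarily long computations" become possible: once below threshold, you buy exponential suppression of logical errors for only a polynomial increase in code distance.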
      • and they're not an improvement over silicon for everything

        But they might be if they figure out a way to make quantum breast implants... :)
    • Re:Great principle (Score:2, Insightful)

      by treff89 (874098)
      That's supposed to be "once we understand the basics." From what I remember of a lecture, the real issue is actually being able to control the particle itself, but once controllable, the powers are immense... for example, it would be possible to tell if an email has been read by "simply" observing the state of the quantum particles. Extremely advanced stuff, but hugely powerful for the distant future.
      • I don't get that. Wouldn't observing the qubits change them?
        • I need to brush up on my QC theory, but if I remember correctly, quantum computers exploit "spooky action at a distance" (aka Quantum Entanglement). This allows one set of particles to perform the real calculations, while another set of particles is observed.
    • the random nature of quantum particles
      *enters 1 + 1 into the built-in calculator*
      *gets 2,124,972, 421 as an answer*
      *enters 1 + 1 again*
      *gets 0.0012 as an answer*
      • *enters 1 + 1 into the built-in calculator*
        *gets 2,124,972, 421 as an answer*
        *enters 1 + 1 again*
        *gets 0.0012 as an answer*


        So the Pentium was a quantum computer?
      • Actually it's more like:

        *enters 1 + 1 into the built-in calculator*
        *gets 2 as an answer*
        *enters 1 + 1 into the built-in calculator*
        *gets 2 as an answer*
        *enters 1 + 1 into the built-in calculator*
        *gets 2,124,972, 421 as an answer*
        *enters 1 + 1 into the built-in calculator*
        *gets 2 as an answer*

        hmm. the answer's probably 2!
    • I don't think first poster meant to be funny. I also think he has trouble with sentence structure. Maybe in a rush to be first poster.....

  • I realize all new technology comes in baby steps, but it's somehow disappointing to hear that they "have taken a step toward the development of quantum computers" by making one little piece.

    With all the talk of quantum computers on /., one would have thought they were so much closer. :(
  • by Rinzai (694786) on Tuesday April 26, 2005 @09:09AM (#12347147) Journal
    "...making tiny holes that contain nothing at all."

    Well, yes, that rather is the definition of "hole," isn't it? Having nothing in them is what distinguishes them from the rest of the surroundings.

    • Re:Definitions? (Score:3, Insightful)

      by AviLazar (741826)
      I have a hole, I place a golf ball in it - I still have a hole. It just happens to have a golf ball. The difference is one is an empty hole, the other is not.
      • By your reasoning, every cubic millimeter of every solid object is a hole that just happens to be filled.
        • No, that is not my reasoning at all. I said I have a hole, I place an object in the hole, and I still have a hole; it just happens to be a hole with an object in it. As for whether a filled-up hole is still a hole, that depends on your definition of a hole.
    • in there!

      Then they would be tiny holes that contain gophers, you see?

      Fore!
    • Re:Definitions? (Score:3, Insightful)

      Thirty spokes share the wheel's hub;
      It is the center hole that makes it useful.
      Shape clay into a vessel;
      It is the space within that makes it useful.
      Cut doors and windows for a room;
      It is the holes which make it useful.
      Therefore profit comes from what is there;
      Usefulness from what is not there.

      --Lao Tsu, The Tao Te Ching, Chapter 11
  • by CleverNickedName (644160) on Tuesday April 26, 2005 @09:13AM (#12347180) Journal
    Scientists ... making tiny holes that contain nothing at all.

    So these boffins have developed "nothing", but one day, in the far future, this nothing could be filled with something important.
    Wow. What an age we live in.
  • by TheAxeMaster (762000) on Tuesday April 26, 2005 @09:13AM (#12347181)

    They're speed holes, they make the computer go faster....
  • The thing I'm really looking forward to on Slashdot 2015 are all the posts:

    "Why would anyone need that much power? I remember 9 years ago when we only had 10 qubits [wikipedia.org] to work with! Quantum programmers sure are spoiled and lazy today."

    • Ingrates. Back in my day I had 128 kilobits to work with, and I was privileged. Most of my friends only had 64. We had to write our own games, and we had to save them on Floppy disks! We didn't have hard drives. Hell, we didn't even have mice. We worked with our bare hands on keyboards, and by gum we were grateful.
  • And how many would it take to fill the Albert Hall?
    • by meringuoid (568297) on Tuesday April 26, 2005 @09:23AM (#12347257)
      And how many would it take to fill the Albert Hall?

      Four thousand.

      I was never quite clear on how the holes from Blackburn, Lancs. could possibly fill the Albert Hall. I mean, they're holes - defined as being something not there. How can they fill anything?

      Then I discovered marijuana, and understood :-)

  • by Analogy Man (601298) on Tuesday April 26, 2005 @09:18AM (#12347225)
    Everyone knows current computers and consumer electronics work using magic blue smoke. If the smoke escapes your device no longer works. Overclockers are very clumsy about letting out the blue smoke and sell their processors (depleted of magic) on e-bay under dubious accounts.

    Quantum computers will use red smoke (the rubidium cloud). Will we call the hobbyists that push the limits of these machines Quark shakers?

  • . . . won't quantum computers mean an end to binary?

    In the old days, a cat in a box was either alive or dead - one or zero, you might say. Nice and easy.

    But when it gets quantum? How the hell is a simple machine going to cope when it asks "Is it one or zero?" and gets told "Both"?

    "We've had to replace 'if' and 'and' with 'maybe' and 'probably'. And 'not' has become obsolete."

    • by x4A6D74 (614651) on Tuesday April 26, 2005 @09:43AM (#12347411)
      The computer does not ask "is it one or zero" and get told "both."

      Going back to the same metaphor you began to use, the principle that the Schroedinger's Cat Experiment is supposed to illustrate is not the concept of superposition (that the cat is both alive and dead whilst in its quantum state in the box) but the concept of decoherence of the quantum state under observation.

      It's currently a postulate of quantum mechanics (i.e. everyone observes this phenomenon but nobody can explain it) that observation of a quantum state in a superposition (say, a "qubit" -- perhaps an electron spinning up for 0 and down for 1) will have one of the two values, with certain probability. Once read, the state loses that superposition and remains in the observed state (Recall: in the SCE, the cat stays alive or dead once you open the box).

      If you don't want to measure your qubits, and thus maintain their superpositions, entanglements, etc., that's fine ... of course, you can't get any information out of them. If you've properly designed your quantum machine, you may have a guess as to what the possible states are; you may even know the probability of each one.

      As soon as you ask to see a qubit, however, it becomes a classical bit and stays one. That's the downside to all this quantum stuff.

      Quantum computers also do not mean an end to binary -- currently, since humans have, and are trained to use, primarily classical faculties, quantum research is aimed at extending classical computation. So we typically discuss a "qubit" which may be 0, 1, or some combination thereof (specifically residing in the vector space C x C). But, if we ever want to interface a quantum computer with a classical instrument (for example, some sort of I/O device, or a classical computer, or a human) then we will unavoidably devolve back to binary.

      For more information, I recommend Nielsen & Chuang's book on Quantum Computation and Quantum Information (I think; I don't have it in front of me right now).

      Disclaimer: I am not a quantum mechanic. I am, however, a junior finishing up my degrees in mathematics and computer science so that I can go on in a year to work on a PhD in quantum computation. --0x4a6d74
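The measurement behavior this comment describes (superposition in, one classical bit out, with probabilities given by the squared amplitudes) can be mimicked for a single qubit with a toy Monte Carlo. This only samples the Born rule classically; it is not a real quantum simulation:

```python
import random

def measure(alpha, beta, trials=100_000, seed=1):
    # Born rule: a qubit a|0> + b|1> reads 0 with probability
    # |a|^2 and 1 with probability |b|^2; after measurement the
    # state stays in the observed classical value.  This just
    # samples that rule; it assumes a normalized state.
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1) < 1e-9  # normalization check
    rng = random.Random(seed)
    ones = sum(rng.random() >= p0 for _ in range(trials))
    return ones / trials

# Equal superposition (a = b = 1/sqrt(2)): roughly half the
# shots come out 1 -- but each individual shot is a plain 0 or 1.
print(measure(2 ** -0.5, 2 ** -0.5))
```

The point of the sketch is the last comment: you never read out "both", only a classical bit whose statistics reveal the superposition.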

      • by Deanalator (806515) <pierce403@gmail.com> on Tuesday April 26, 2005 @10:48AM (#12348014) Homepage
        I like to think of quantum computers as binary on the outside, analog on the inside. You can only read and write in binary, but the operators in the middle can be real valued (complex valued even).

        Nielsen and Chuang's book is neat (I have it sitting on my floor 3 feet from me ATM). It's mainly written for the physicist to learn quantum circuits and algorithms. It takes a year to read, but by the time you are done, you should be able to read and understand most of the papers in the field.

        A much lighter book on the subject is "Explorations in Quantum Computing" by Williams and Clearwater. It gives a basic overview without much assumed knowledge.

        Also "Problems & Solutions in Quantum Computing & Quantum Information" by Willi-Hans Steeb and Yorick Hardy has a lot of fun problems in it. It's the kind of book that's good to read on a bus, or an airplane.
        • I think that is the best summary I've read so far! Basically they are analog computers. And this is why I'm also so skeptical of them - I doubt you can do analog physics with the required degree of accuracy, even with error correction. Every paper I've seen on quantum error correction assumes a special form for Hamiltonian of an external interaction and yet an analog system is never like that. There are always other terms interacting, and in the case of QM those terms grow exponentially and swamp the data y
      • Why is the cat both dead and alive? Why can a bit be both one and zero? I don't understand this and would like to hear an explanation that makes sense.

        Why does it need an "observer"? What exactly is an "observer"? How do we know this is the case? It seems to me that the cat is either dead or alive; how can it be both? You open the box and find the cat dead or alive, so how do you know it was in some other state before "observing" it? This all seems counterintuitive to normal logic, so why should I believe it is true?
        • by MenTaLguY (5483) on Tuesday April 26, 2005 @02:24PM (#12350323) Homepage

          Why is the cat both dead and alive? Why can a bit be both one and zero? I don't understand this and would like to hear an explanation that makes sense.

          While I'm not sure there is an explanation that makes intuitive sense, it does appear to be the way the universe works at small scales.

          Schroedinger's thought experiment was intended to illustrate the weirdness of the issue by tying the state of a macroscopic object (a cat) to a quantum state (the decay/not decay of the particle), mainly. It's not a realistic experiment because you couldn't isolate the macroscopic contents of the box from the outside world sufficiently (and besides, it's cruel).

          But, real experiments do demonstrate that quantum stuff consistently behaves in really bizarre and counterintuitive it-is-but-it-isn't ways.

          One famous example is the oft-repeated "double slit" experiment (hopefully I won't mangle the summary too much).

          You remember light-as-waves? If you take a coherent light source (e.g. a laser) and shine it onto a screen through a mask with two small parallel slits in it, you will see a pattern on the screen resulting from the two interfering wavefronts.

          That's simple enough. But light is also particles (photons). You can put a filter between the laser and the mask that only allows one photon at a time to dribble through. Now you have individual photons going through the mask, and you see individual spots as they hit the screen. Intuitive enough.

          But it starts to get weird. If you measure the brightness of those spots, they still follow the brightness of the interference pattern. That would suggest that the photon is going through both slits at once and somehow interfering with itself. Hmm, that's not very intuitive.

          But, okay. We can test that by using detectors at the slits to note the photons as they go by. Hmm. No, each photon is only going through one slit or the other, not both at once. So why are we getting the interference pattern? Wait, where did the interference pattern go?

          Huh. We stop observing which slit the photon is going through, and the interference pattern comes back (i.e. it effectively went through both slits). We start observing again, and it starts "picking" one slit or the other again...

          Basically it looks as if, to employ a gross anthropomorphism, on quantum scales the universe is "lazy", and only commits to a specific choice if it has to (because somebody is watching). No, that's not intuitive, and no, we have no clue how this happens exactly (although we're getting better at describing it and exploiting it for practical purposes like primitive quantum computers), but that's what happens.

          Why does it need an "observer"? What exactly is an "observer"?

          Physicists are wrestling with that one. We don't really know. A person directly observing the quality being tested (directly or via instrumentation) seems to be sufficient, but not necessary.

          That's one of the downsides of the "Copenhagen Interpretation", which is the most common interpretation of these phenomena -- that an observer observing "forces" the universe to make a "choice" (the grossly anthropomorphic word choice is mine though -- the actual way of putting it is that the act of observation "collapses the wave function").

          There are other interpretations, too, that don't require a privileged position of "observer", but they have other very awkward quirks.

          This all seems counterintuitive to normal logic, so why should I believe it is true?

          Certainly you shouldn't accept it just because someone says so, or because a few experiments suggest it might be true. In this case, though, the experiments have been repeated too many times by too many different people for the weird results to be experimental error, and experiments designed to disprove these behaviors have failed.
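The fringes in the description above follow the standard two-slit interference formula. A minimal sketch, assuming ideal point slits and ignoring the single-slit diffraction envelope (the wavelength and geometry are arbitrary example values):

```python
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    # Far-field interference of two ideal point slits: intensity
    # proportional to 4*cos^2(pi * d * x / (lambda * L)) at
    # position x on the screen.  Single-slit envelope ignored.
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return 4 * math.cos(phase) ** 2

# Bright fringe at the center, dark fringe at x = lambda*L/(2d).
# With "which-path" detectors the cos^2 term disappears and the
# pattern is just the two single-slit blobs added classically.
lam, d, L = 633e-9, 0.1e-3, 1.0          # red laser, 0.1 mm slits, 1 m screen
print(two_slit_intensity(0.0, lam, d, L))                          # 4.0 (bright)
print(round(two_slit_intensity(lam * L / (2 * d), lam, d, L), 6))  # 0.0 (dark)
```

Dribbling photons through one at a time builds up exactly this intensity profile, which is the "interfering with itself" behavior described above.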

      • the cat stays alive or dead once you open the box

        Unless it's been reported alive or dead on Fark. ("The cat, once thought dead, then alive again, then dead, is actually, alive... for now. Quantum physics make my head asplode.")

    • by ciroknight (601098) on Tuesday April 26, 2005 @09:49AM (#12347460)
      A better explanation would be, "Is it a one or a zero?" "Depends on your perspective."

      Quantum computing, as I understand it (IANAQCS/P), works off the principle of superposition; the ability for a bit to represent multiple bits, simply by the spin of the electron, or some other random thing that I wouldn't know how to explain.

      If you defined a zero as a square, and a one as a circle, then a quantum bit would be a cylinder; from one perspective you see the square, yet turn it on its side and you see its other property. But since you have other possibilities (cubes and spheres in this system), the "third dimension" per se has to be explicitly asked for by the requesting computer.

      So it's able to perform a massive amount of calculations based on a little bit of data, and store it as one neat little package at the end (either the cube, the sphere, or the cylinder). When someone comes along to ask, "was the answer a zero or a one" then, the only way to answer is "depends on the perspective".
  • by Milalwi (134223) on Tuesday April 26, 2005 @09:43AM (#12347412)
    The original news release [osu.edu], which has an animation to support the story is available at the Ohio State University Research News [osu.edu] site.

    Milalwi
    • As a Buckeye (about to graduate in ECE), this is great to see. Most of our best research is done in the medical field, where we have world-renowned centers. It's good to see our physics and computer guys stealing some thunder. We're not the MIT of computing and physics, but there's certainly quite a few brains here.
      • I also like the research done on the football field.
        Muck Fichigan - GO BUCKEYES!
        • Hahaha, nice! I just hope it goes better than during the spring game

          I just ordered my Texas ticket through a friend's student ticket. I'll be living in Austin starting in July, but you can rest assured that i'll be talkin plenty of friendly smack for that game!

            What game? It's not going to be a game, it's going to be an asskicking. Bob Stoops at OU has proven time and time again that the way to beat Texas is to contain Vince Young, and make him try to beat you with his arm and not his legs.

            Tressel knows this, and with the secondary Ohio State has (not to mention the offense) Texas is toast. Plus, Mack Brown is as big of a moron as Lloyd Carr is (hooray Lloyd - 1-3 vs. Tressel).

            I live in Texas - these fans down here sure are proud of the vastly overrated Big 12.

  • by karvind (833059)
    From the article: "We're pretty sure we can trap atoms -- the first step towards making a quantum memory chip," Lafyatis said. A working computer based on the design is many years away, though, he cautioned. In fact, Christandl suspects that they are at least two years away from being able to isolate one atom per trap -- the physical arrangement required for a true quantum memory device.

    1. What is the working principle behind this (mechanism of trapping) ?

    2. Are these experiments performed at room temperature?

    • 1. What is the working principle behind this (mechanism of trapping) ?

      They usually trap ions, which are charged, and so can be trapped with an electromagnetic field. A slight subtlety is that you need an oscillating electromagnetic field, which is exactly what the laser provides.

      2. Are these experiments performed at room temperature ?

      Hell no, unless you have a holiday house on Triton. They are cooled to almost absolute zero, because otherwise the ions are just moving too quickly to be trapped.

      3. H
  • Related article (Score:2, Interesting)

    by c0ldfusi0n (736058)
    Physicists could soon be creating black holes in the laboratory [sciam.com]

    When shall we get pet black holes?
    Imagine cleaning the house with one of these around!
  • > Scientists at Ohio State University have taken a step toward the development of quantum computers by making tiny holes that contain nothing at all

    In related news, Ohio State University has received research funding from the NSA to perform Ear Exams on all members of Congress twice a year...
  • The Law. (Score:3, Interesting)

    by k96822 (838564) * on Tuesday April 26, 2005 @09:58AM (#12347531) Journal
    ...and this is why Moore's Law will continue, even though Moore himself says that it won't. Never underestimate the cleverness of the Human.
    • Re:The Law. (Score:2, Interesting)

      Never underestimate the cleverness of the Human.

      *cough*fusionpower*cough* The eternally "just around the corner" technology.

      Hey, I tease mankind. :)

  • My money is on nanomechanical quantum computing [physorg.com]. Forget all this ultracold gas vapor stuff, it is like vacuum tubes...

  • by Anonymous Coward
    Here are posted movies of the experiment

    http://researchnews.osu.edu/archive/eggcarton.htm [osu.edu]
