Fermilab To Test Holographic Universe Theory
eldavojohn writes "Scientists at Fermilab have decided that it's high time they built a 'holometer' to test the smoothness of space-time. Theoretical physicists like Stephen Hawking have proposed that space-time is not smooth, but so far it's been a lot of math and no actual data. The Fermilab team plans to build two relatively small devices that act as 'holographic interferometers,' measuring the shaking or vibration in split beams of light traveling through a vacuum. If the team finds that shaking in their measurements, the theory of a holographic universe will have some evidence of non-smoothness in space-time, and perhaps a foothold in bringing experiment to a heavily debated corner of theoretical physics."
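For a rough sense of scale: a common back-of-envelope for Hogan's holographic noise puts the transverse jitter at the geometric mean of the Planck length and the apparatus size. A minimal sketch (the 40 m arm length is an illustrative assumption, not the holometer's actual spec):

```python
import math

# Transverse jitter ~ sqrt(Planck length * apparatus size): a common
# back-of-envelope for Hogan's holographic noise proposal. The 40 m
# arm length is an illustrative assumption, not Fermilab's spec.
PLANCK_LENGTH = 1.616e-35  # meters
arm_length = 40.0          # meters (assumed for illustration)

jitter = math.sqrt(PLANCK_LENGTH * arm_length)
print(f"expected jitter scale: {jitter:.2e} m")  # ~2.5e-17 m (attometers)
```

Attometer-scale displacements are tiny, but within reach of interferometry, which is why a tabletop-ish experiment can even attempt this.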
A Douglas Adams quote comes to mind (Score:5, Informative)
"There is a theory which states that if ever for any reason anyone discovers what exactly the Universe is for and why it is here it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another that states that this has already happened." -- Douglas Adams
all fun and games (Score:2, Informative)
until they actually do shut you down. http://www.simulation-argument.com/ [simulation-argument.com]
Re:Physicists (Score:5, Informative)
When they say 'holographic universe', what they are saying is that while we think we live in three dimensions, we're really only living in two. The universe stores information that the rules of physics turn into the illusion of a third dimension.
You *could* extrapolate that to mean that our universe is, when you get down to its bare essence, only data. And you *could* extrapolate that to mean we are data in a simulation somewhere. But that's two leaps of logic past what the science is actually saying.
Re:Reality of data gathered on Earth (Score:1, Informative)
Because photons travel differently in environments other than Earth's?
Possibly [physorg.com]
Re:Why is it always Hawking? (Score:3, Informative)
The key idea of the holographic principle is that the physics of a volume can be expressed on the surface area of a sphere containing that volume. Hawking was the first one to find that result; specifically, he found that the entropy of a black hole is proportional to the surface area of its event horizon, rather than to the volume it encloses.
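For reference, the entropy-area relation being described is the Bekenstein-Hawking formula (written here in standard notation, not taken from the comment itself):

```latex
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2},
\qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}}
```

The entropy grows with the horizon area A rather than the enclosed volume, which is exactly the surprising feature the holographic principle generalizes.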
I don't know... obviously Hawking's work doesn't exist in a vacuum. Obviously there are lots and lots of important discoveries and insights that lots and lots of other scientists have come up with. But it doesn't seem unreasonable to me to say that Hawking was the one to come up with the core ideas of the holographic principle; 't Hooft expanded that into a way to describe the whole universe.
Re:Why is it always Hawking? (Score:3, Informative)
"Ut". It's an abbrevation of "het", meaning "the".
Here's a video of him speaking (in Dutch), introducing himself, sounding like HE-rard ut-HOFT:
http://www.youtube.com/watch?v=2Qsf6Q4xSrU&feature=related [youtube.com]
Re:Physicists (Score:5, Informative)
I think the holographic universe term means that all of the information inside a volume can be encoded on the surface of the volume. That's where the two-dimensional versus three-dimensional part of the discussion comes from.
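To put a number on that encoding: the holographic bound caps the information in a region at one quarter of its boundary area in Planck units. A toy calculation, assuming a 1 m sphere picked purely as an example:

```python
import math

# Holographic bound: max information in a region is A / (4 * l_P^2)
# nats, where A is the boundary area and l_P the Planck length.
# The 1-meter sphere below is an arbitrary illustrative choice.
PLANCK_LENGTH = 1.616e-35  # meters
radius = 1.0               # meters (assumed example)

area = 4 * math.pi * radius ** 2            # boundary area, m^2
max_nats = area / (4 * PLANCK_LENGTH ** 2)  # entropy bound in nats
max_bits = max_nats / math.log(2)           # convert nats -> bits

print(f"holographic bound for a {radius} m sphere: {max_bits:.2e} bits")
# ~1.7e70 bits: the 3D interior holds no more than its 2D surface encodes
```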
Re:Physicists (Score:5, Informative)
So given Moore's law you will eventually end up with a single physical universe and hugely many simulated universes.
Moore's law is an observation about how fast technology is developing, not an incontrovertible law of physics. It will not hold forever, because eventually we will run up against physical limits preventing us from cramming more computing power into a given region.
In particular, it is impossible for a given amount of matter to perfectly simulate more matter than itself. If it were possible -- if you could, e.g., use a ten kilogram computer to simulate twenty kilograms of matter -- then your ten kilogram computer could simulate two of itself, doubling its storage. Further, each of those computers could then simulate two more, and so forth, leading to an obvious contradiction (infinite storage requires infinite entropy, which has been proved impossible). Note that this argument holds even if the simulation is slower than real time; no matter how long the simulation takes, you can't store more information than the memory you started with can hold.
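A toy illustration of that blowup, reusing the ten-kilogram figure from above (the numbers are the comment's hypothetical, not physical estimates):

```python
# Toy model of the contradiction: if 10 kg of hardware could perfectly
# simulate 20 kg of matter, each nesting level doubles the matter being
# represented while the real hardware stays fixed at 10 kg.
real_mass = 10.0       # kg of actual hardware (hypothetical figure)
simulated_mass = 20.0  # kg it supposedly simulates

for level in range(1, 6):
    print(f"nesting level {level}: {simulated_mass:.0f} kg represented "
          f"by {real_mass:.0f} kg of hardware")
    simulated_mass *= 2  # each simulated computer simulates two more

# Representation demands grow without bound: infinite storage from
# finite matter, which is the contradiction.
```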
Now, of course this all hinges on the word "perfectly". There's no reason a computer can't simulate large amounts of matter with less-than-perfect fidelity, which is something we do all the time. But given that we can build working computers, nuclear reactors, particle accelerators, and all that -- let alone the vastly more complicated processes going on in each and every cell in your body -- we are clearly not living in some cut-rate simulation that hand-waves the laws of physics. We don't know how to model all of this stuff in a computer, but given that it takes supercomputers to simulate hydrogen atoms accurately, and we can't even solve the equations exactly by the time we get to helium, it seems safe to assume that no matter how sophisticated our technology becomes, simulation will always require a couple of orders of magnitude more matter than what is being simulated. (If you doubt this, consider the practical example of a computer trying to simulate itself. Can you really picture a computer with 4GB of memory accurately simulating the behavior of 4GB of RAM at the subatomic level? It can't even emulate a different computer with 4GB of memory, let alone simulate one at the subatomic level.) So we're talking about a computer which is, at an absolute minimum, a couple of orders of magnitude bigger than the entire universe.
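Rough numbers behind that 4GB example (the chip mass and one-bit-per-atom bookkeeping are loose order-of-magnitude assumptions, and wildly generous ones):

```python
# Order-of-magnitude check: can 4 GB of memory track its own atoms?
# Assumptions for illustration: a 4GB module holds a few grams of
# silicon, and tracking each atom takes at least one bit (a real
# subatomic-level state would need vastly more).
AVOGADRO = 6.022e23
SILICON_MOLAR_MASS_G = 28.1  # g/mol

chip_mass_g = 5.0            # grams of silicon (assumed)
atoms = chip_mass_g / SILICON_MOLAR_MASS_G * AVOGADRO
available_bits = 4 * 2**30 * 8  # 4 GB of memory, in bits

print(f"atoms to track: {atoms:.1e}")           # ~1e23
print(f"bits available: {available_bits:.1e}")  # ~3e10
print(f"shortfall:      {atoms / available_bits:.1e}x")
```

Even with one bit per atom, the machine comes up short by roughly twelve orders of magnitude.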
(For completeness, I will point out two possible "outs" for this problem: First, it's possible that there's some trickiness going on, and "the entire universe" isn't actually modeled. Maybe only a small portion of the universe is modeled accurately, and everything else is an easy low-grade simulation used to trick us. That's certainly possible, but it's also unfalsifiable, so I'm not sure it's worth seriously debating. Second, this assumes that the simulator and the simulation are operating under the same laws of physics. If the "real world" which is simulating our world has different laws of physics, which allow for vastly more powerful computers than anything we could possibly hope to build using our cheap low-grade physics, this scenario wouldn't be as ridiculous. And, really, quantum mechanics is so weird that "it was outsourced to the lowest bidder" may actually be a decent explanation for it.)
Regardless, though, I don't understand how the "it is much more likely that we exist in a simulated universe" idea is getting serious traction. No, it's not impossible, but "likely" is a hell of a stretch.
Re:Why is it always Hawking? (Score:5, Informative)
No, that would be Jacob Bekenstein [scholarpedia.org]. Hawking and Bekenstein collaborated to precisely fix the ratio of entropy to surface area, but the original idea wasn't Hawking's.