Science

Entanglement Could Be a Deterministic Phenomenon

KentuckyFC writes "Nobel prize-winning physicist Gerard 't Hooft has joined the likes of computer scientists Stephen Wolfram and Ed Fredkin in claiming that the universe can be accurately modeled by cellular automata. The novel aspect of 't Hooft's model is that it allows quantum mechanics, and in particular the spooky action at a distance known as entanglement, to be deterministic. The idea that quantum mechanics is fundamentally deterministic is known as hidden variable theory, but it has been widely discounted by physicists because numerous experiments have shown its predictions to be wrong. 't Hooft, however, says his cellular automaton model is a new class of hidden variable theory that falls outside the remit of previous tests. He readily admits that the new model has serious shortcomings: it lacks some of the basic symmetries that our universe enjoys, such as rotational symmetry. Still, 't Hooft adds that he is working on modifications that will make the model more realistic (abstract)."
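For readers who haven't played with cellular automata, a minimal sketch (in Python, using the elementary Rule 110; purely an illustration of the general idea, not 't Hooft's actual model) shows how a completely deterministic local rule can generate behaviour that is hard to predict without simply running it:

    # Minimal 1D cellular automaton (Rule 110). Purely illustrative; not 't Hooft's model.
    RULE = 110  # the update rule, encoded as an 8-bit lookup table

    def step(cells):
        """Update every cell from its left/self/right neighbours (periodic boundary)."""
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    def run(width=64, generations=32):
        cells = [0] * width
        cells[width // 2] = 1  # start from a single live cell
        for _ in range(generations):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells)

    if __name__ == "__main__":
        run()

Everything in the evolution is fixed by the rule and the first row, which is the flavour of determinism the summary describes; the open question is whether such a substrate can reproduce quantum mechanics.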
  • by etymxris ( 121288 ) on Friday August 28, 2009 @11:18AM (#29231353)

    Bell's inequalities fall apart if current particles can "know" about future measuring devices. However, for particle physics, neither direction of time is privileged. Particles are just as likely to be influenced by future interactions as they are by past interactions. Because of this, there is no "action at a distance". Influences travel along the backwards light cone and remain perfectly relativistic.

    This simple, straightforward solution has been largely ignored.

    Note that most interpretations of quantum mechanics are explicitly time asymmetric because of the "collapse" caused by observation. Cramer's transactional interpretation is an exception: it is time symmetric and has no collapse, but it doesn't get much attention.

  • by Anonymous Coward on Friday August 28, 2009 @12:04PM (#29232003)

    Exactly. It seems that most quantum mechanicists assume that the fundamental equations must be hyperbolic in nature. However, general relativity admits solutions with closed time-like curves, which means a theory combining quantum mechanics and general relativity must admit them as well (since in the classical limit it must reduce to general relativity). Closed time-like curves mean that forward-evolving your hyperbolic equations of motion is impossible: in effect, the loops in time impose future boundary conditions that you must satisfy.

    Coarse-graining over the spacetime foam at the Planck length must include the effect of these small-scale timelike loops. The result probably changes the fundamental equations to be elliptic in nature. If this is done, then most of the mystery of quantum mechanics disappears.

    As you have said, violating Bell's inequality requires giving up one of locality or causality; a quick numerical CHSH sketch follows this comment. (Most quantum mechanicists keep causality, which then implies that "funny action at a distance" exists.) However, if you drop causality, then the interpretation of wave-function superposition is just your lack of knowledge of future boundary conditions. It becomes a calculational tool for solving your elliptic equations. (Note that the same sum-over-histories technique used in quantum mechanics appears in purely classical situations, like a billiard-ball table with a wormhole that can alter the ball trajectories.)

    This interpretation, locality over causality, is so much nicer since it removes the special "measurements" that collapse wave-functions. However, it implies that the universe is a static solution for all time, and has been completely determined. There is no such thing as free will etc. This obviously annoys some people. However, I contend that even if we don't have a free will, we might as well act as if we do, since the future boundary conditions that constrain everything are unknown.
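    As referenced above, here is a minimal numerical CHSH sketch (standard textbook singlet correlations, not the elliptic-equation programme described in this comment): for a spin singlet measured along angles a and b, the quantum correlation is E(a, b) = -cos(a - b), and the usual CHSH combination exceeds the bound of 2 that local, causality-respecting hidden variables must obey.

        # CHSH sketch: quantum singlet correlations vs. the local-hidden-variable bound of 2.
        # Standard textbook quantities only; not the parent's elliptic-equation programme.
        from math import cos, pi, sqrt

        def E(a, b):
            """Correlation of spin measurements along angles a and b for a singlet pair."""
            return -cos(a - b)

        # Angle choices that maximise the CHSH value.
        a1, a2 = 0.0, pi / 2
        b1, b2 = pi / 4, 3 * pi / 4

        S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
        print(f"CHSH value S = {S:.4f}; local bound = 2; Tsirelson bound = {2 * sqrt(2):.4f}")

    Running it gives S = 2*sqrt(2), roughly 2.83, which is why one of the derivation's assumptions (locality, or the causality/measurement-independence assumption discussed above) has to go.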

  • Re:I knew it. (Score:1, Interesting)

    by Anonymous Coward on Friday August 28, 2009 @01:05PM (#29232825)

    That's where the philosophy chokes. It assumes making a decision, i.e. weighing pros and cons and your emotions and information, is somehow magically free of both determinism and random control. They may have influence, but ultimately there's some mysterious spiritual thing beyond determinism and randomness that's doing the deciding in a manner that doesn't involve either.

    The mind may or may not have a spiritual (in the supernatural sense of the word) aspect, but that is beside the point. The mind could very well be an emergent phenomenon that combines both deterministic and random features, all arising from purely natural processes. In that case the mind is neither purely deterministic nor purely random; rather, it is both simultaneously!

    Which, I submit, makes no sense. Weighing options is the essence of determinism, for that matter.

    Weighing options is also the essence of Free Will. There is also more than one [wikipedia.org] philosophical formulation of Determinism [wikipedia.org]. While the various Incompatibilist stances seem most prevalent on Slashdot (and among IT professionals in general), they have no more empirical evidence behind them than any of the Compatibilist stances. The former essentially holds that, of all the theoretical options, only one (the one selected by the agent) was possible given all the circumstances surrounding a given decision, basically reducing every decision to a huge conditional function (which might partially explain why IT professionals tend to prefer it :P). The latter holds that while the circumstances surrounding the decision may heavily influence what the agent chooses, the agent is still free to choose one of multiple options, and which one they choose can't always be predicted even with complete and perfect information about both the circumstances and the agent involved in the choice.

    I haven't studied the parts of Physics involved enough to meaningfully comment on the second half of your post.

  • by Anonymous Coward on Friday August 28, 2009 @01:10PM (#29232915)

    The current model of the universe is not necessarily composed of will-less mechanisms.

    In fact, the non-determinism of QM (if it is so) could be exactly the mechanism by which free will is introduced into the universe. QM does not have to be random, as insinuated by the GP; it could instead be the channel through which agents outside the universe as we see it (perhaps our 'souls') inject free will into it, by slight manipulation of the odds, so to speak.

    I don't believe this myself, but I also don't see why it isn't theoretically possible.

  • by blackraven14250 ( 902843 ) * on Friday August 28, 2009 @01:49PM (#29233457)
    That's really insightful, and I'd give you a +1 for it. You're completely right that the introduction of our will could very well be us, without knowing it due to barriers beyond science, changing a quantum particle from a superposition into one of its potential positions. In fact, there's no proof that the essence of a person's mind is actually created on this plane of existence, which lends a lot of potential to this argument.
  • by etymxris ( 121288 ) on Friday August 28, 2009 @02:54PM (#29234443)

    On macroscopic scales not much changes since backward causes are limited...

    Says who? What is the definitive study of backwards causation? I'd like to see some sources which claim that violating causality would not cause experimental problems. What about simple particle physics experiments where we are working on microscopic scales?

    Without an entropy gradient from past to future we would be in heat death. The only bodies of knowledge that would have any relevance in heat death are particle physics and perhaps some chemistry. Anything that depends on the entropy gradient for its existence, such as all biological creatures, will be strongly asymmetric in time; thus animals die after being born and not vice versa. What I'm saying is that backwards influences will exist, but they will be so overpowered by the asymmetry of the entropy gradient as to be ignorable. For disciplines studying anything influenced by entropy, reverse causation can safely be neglected.

    As for micro physics...

    You're not understanding my point. I didn't say the calculations or experiments would be difficult. I said that in any experiment where future events would have to be taken into account, you couldn't make definitive statements about your results. If I do an experiment to show A causes B and future events can also cause B, there is no way for me to state definitively that a seemingly positive result is caused by A and not some future event I can't control for. This is what makes causality so essential for science.

    We already control the future in the particle experiments. The future is the interaction with the measuring device. The measuring device is partly controlled (however we choose to set it up) and partly determined (otherwise our experiment would have no results). As for the general case, you shield yourself from future influences the same way you shield from past influences: set up a lead wall or something.

    How do you know what causes what? There isn't any fundamental problem. You just have two independent variables where you used to have one: S1, S2, ... Sn as the source setups and M1, M2, ... Mn as the measurement setups. The dependent variable will then be the reading of your device in the future: R1, R2, ... Rn. This is already what is being done, and it is what allowed Bell to derive his problematic inequalities.

    Perhaps you can give me a more concrete example to work with. I'm having trouble understanding your actual objection.
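    A rough bookkeeping sketch of the two-independent-variable design described above (the setup labels and readings here are purely hypothetical, just to show the structure):

        # Readings keyed by the pair (source setup, measurement setup); values are hypothetical.
        from collections import defaultdict
        from statistics import mean

        readings = defaultdict(list)
        trials = [
            ("S1", "M1", 0.71), ("S1", "M2", -0.70),
            ("S2", "M1", 0.69), ("S2", "M2", 0.72),
        ]
        for source, measurement, r in trials:
            readings[(source, measurement)].append(r)

        # Average reading for each combination of the two independent variables.
        for (source, measurement), rs in sorted(readings.items()):
            print(f"{source} with {measurement}: mean reading {mean(rs):+.2f}")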

  • by mindbrane ( 1548037 ) on Friday August 28, 2009 @03:30PM (#29234949) Journal

    Threshold is a good working concept when addressing how to model a complex thing. In science, thresholds can be seen concretely in the first microscope and the first telescope. From there, spectroscopy presents another method with its own thresholds. Studying sound to model the Sun's interior is a recent example of getting around limitations to extend the thresholds that enable and constrain our ability to model the Universe. The fact that we have hypotheses like String Theory suggests there are thresholds we've not yet crossed that would let us answer certain questions. The question arises as to why most people seem to so desperately need a concept like truth rather than living in an interesting and engaging state of doubt.

    Models of the world or the Universe should express elegance, or simplicity; as Einstein put it, a theory should be as simple as possible but no simpler. But for a theory to be elegant it should, IMHO, also be rigorous, where rigorous is taken to mean that all, or 'enough', particulars have been inspected to warrant the elegant theory. This idea seems to me to go back to threshold.

    Ideas about free will are speculative. I don't know that free will is viable except as a fiction, because I'm not sure it's right to say an individual exists in any meaningful way. Language is heavily vested in purposiveness and unsuited to some subject matter. Whenever I think about free will I recall my idea for a slasher flick starring Ludwig Wittgenstein wielding Occam's Razor (it's still in development, but I like it).

  • by lgw ( 121541 ) on Friday August 28, 2009 @03:32PM (#29234981) Journal

    My own definition of free will, from the philosophical side, is the same as "conscious choice". Free will reduces to the question of sentience or self-awareness (or is actually a precondition for it), which is not itself well defined but is still interesting. Basically, if you think you have free will, you can't be wrong, any more than you can be wrong about thinking you're in pain. It's empirical, but only as a conscious state, just like pain.

  • Re:I knew it. (Score:3, Interesting)

    by Torodung ( 31985 ) on Friday August 28, 2009 @03:49PM (#29235221) Journal

    Poor Occam. Mistranslated, misinterpreted, and probably misattributed.

    I agree with you, and perhaps you misunderstood me.

    I parse the original with a context of: "Do not multiply entities needlessly, because you will rapidly exceed your own faculties. It's a limited resource. Make it count."

    This translates to "simple solutions are more productive because they are more readily understood and implemented." It's an engineering application, rather than theoretical.

    Corollary to Occam's Razor: Increased complexity has a logarithmically diminishing return. :^)

    --
    Toro

  • Re:I knew it. (Score:3, Interesting)

    by ceoyoyo ( 59147 ) on Friday August 28, 2009 @04:35PM (#29235785)

    PS: Cellular automata are in many ways very similar to string theory. The idea is that by starting with something very simple, you can get very complicated behaviour. The problem is, there aren't any proper mathematical tools for predicting that behaviour, except in very simple cases. The best you can do is try it out and see.

    Take Conway's game of life, for example. Given a non-trivial starting arrangement, without actually running through all the iterations, can you predict the state the system will stabilize at? Can you even predict (for non-special cases) if it will ever stabilize?
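    A minimal Game of Life sketch makes the point concrete (the glider used as the starting pattern is an arbitrary choice): the only general way to find out where a pattern ends up is to iterate it.

        # Minimal Conway's Game of Life on an unbounded grid of live-cell coordinates.
        from collections import Counter

        def step(live):
            """One generation: 'live' is a set of (x, y) cells; returns the next set."""
            neighbours = Counter(
                (x + dx, y + dy)
                for (x, y) in live
                for dx in (-1, 0, 1)
                for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

        cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
        for generation in range(8):
            print(f"gen {generation}: {sorted(cells)}")
            cells = step(cells)

    There is no general shortcut: the Game of Life is Turing complete, so predicting whether an arbitrary pattern ever stabilizes is undecidable in the worst case.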

  • Re:I knew it. (Score:3, Interesting)

    by Toonol ( 1057698 ) on Friday August 28, 2009 @05:31PM (#29236445)
    That's obviously true. However, I think in the case of the brain, it's even more explicit. There are mechanisms in place that act to massively amplify signals, specifically geared to utilize quantum effects. It's going to be one of the difficulties in building an actual replica of the human brain in software; emulation at the level of the neuron is insufficient. There are quantum effects that need to be simulated within the inner structure of a single neuron.

Say "twenty-three-skiddoo" to logout.

Working...