Cause and Effect: How a Revolutionary New Statistical Test Can Tease Them Apart
KentuckyFC writes: Statisticians have long thought it impossible to tell cause and effect apart using observational data. The problem is to take two sets of measurements that are correlated, say X and Y, and to find out if X caused Y or Y caused X. That's straightforward with a controlled experiment in which one variable can be held constant to see how this influences the other. Take, for example, a correlation between wind speed and the rotation speed of a wind turbine. Observational data gives no clue about cause and effect, but an experiment that holds the wind speed constant while measuring the speed of the turbine, and vice versa, would soon give an answer. But in the last couple of years, statisticians have developed a technique that can tease apart cause and effect from the observational data alone. It is based on the idea that any set of measurements always contains noise. However, the noise in the cause variable can influence the effect but not the other way round, so the noise in the effect dataset is always more complex than the noise in the cause dataset. The new statistical test, known as the additive noise model, is designed to find this asymmetry. Now statisticians have tested the model on 88 sets of cause-and-effect data, ranging from altitude and temperature measurements at German weather stations to the correlation between rent and apartment size in student accommodation. The results suggest that the additive noise model can tease apart cause and effect correctly in up to 80 per cent of cases (provided there are no confounding factors or selection effects). That's a useful new trick in a statistician's armoury, particularly in areas of science where controlled experiments are expensive, unethical or practically impossible.
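For the curious, here is a minimal sketch of the additive-noise idea (a toy illustration only, not the authors' implementation: the cubic polynomial fit and the crude dependence score below are stand-ins for the nonparametric regression and kernel independence tests used in the actual papers):

import numpy as np

def dependence(a, b):
    # Crude dependence score: correlation of b with a, plus correlation
    # of |b| with a to catch input-dependent noise magnitude. Real
    # implementations use kernel independence tests such as HSIC.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return abs(np.corrcoef(a, b)[0, 1]) + abs(np.corrcoef(a, np.abs(b))[0, 1])

def residual_noise(x, y, degree=3):
    # Fit y = f(x) + noise with a cubic polynomial; return the leftover noise.
    return y - np.polyval(np.polyfit(x, y, degree), x)

def anm_direction(x, y):
    # Prefer the direction whose regression leaves residuals that look
    # independent of the input -- the asymmetry described in the summary.
    forward = dependence(x, residual_noise(x, y))
    backward = dependence(y, residual_noise(y, x))
    return "X -> Y" if forward < backward else "Y -> X"

# Sanity check on simulated data where X really does cause Y:
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x**2 + 0.5 * rng.normal(size=2000)
print(anm_direction(x, y))  # "X -> Y" (usually -- the test is not infallible)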
No problem. (Score:5, Insightful)
>provided there are no confounding factors or selection effects
So that'll provide plenty of material for medical researchers, nutrition researchers, education researchers and economists to keep doing what they're doing.
Re: (Score:1)
Re:No problem. (Score:5, Funny)
one weird trick to separate cause and effect!
Re: (Score:2, Funny)
Statisticians HATE him!
Re: (Score:2)
With good reason.
Re: (Score:2)
Re:No problem. (Score:5, Insightful)
I can't think of any cases where I know there are no confounding factors but don't know which is the cause and which is the effect.
Also, when it comes to medical stuff, or any human observational study, I can't think of any that don't have selection effects as well. It's a neat trick, but I honestly can't think of a single case where it applies in a useful way. Does anyone have an example?
The article starts with this example of a confounding factor (which makes this test not applicable):
That turned out to be an erroneous conclusion. Later studies showed that women who took hormone replacement therapy were likely to be from higher socio-economic groups with higher incomes, better diets and generally healthier outcomes. It was this that caused the correlation the earlier studies had found. By contrast, proper randomised controlled trials showed that hormone replacement therapy actually increased the risk of heart disease.
This test may sometimes be able to provide evidence against causation in such cases (which is useful) but it can't determine causation (because there may be confounding factors). That may be newsworthy, but it deserves a more accurate headline: new statistical test can form confidence bounds for how unlikely it would be for a parameter to be of this magnitude if there were causation; when combined with existing tests it may discredit more potential claims of causation than previously practical.
Re: (Score:2)
> That may be newsworthy, but it deserves a more accurate headline: new statistical test can form confidence bounds for how unlikely it would be for a parameter to be of this magnitude if there were causation; when combined with existing tests it may discredit more potential claims of causation than previously practical.
Bingo. You have won the internets.
Re: (Score:2)
Re: (Score:2)
The other thing they fail to understand is that causality is patently obvious in the vast majority of cases where there are no confounding factors.
Probably the social sciences are most in need of tests like this, as they are always trying to pin some outcome on some input in a bubbling cauldron of alternatives. But of course, the cauldron is full of confounding factors.
Re: (Score:2)
The other thing they fail to understand is that causality is patently obvious in the vast majority of cases where there are no confounding factors.
Probably the social sciences are most in need of tests like this, as they are always trying to pin some outcome on some input in a bubbling cauldron of alternatives. But of course, the cauldron is full of confounding factors.
Still going to need to elucidate reasonably valid mechanism to convince anybody of anything.
Re: (Score:2)
It may be obvious, but that doesn't mean it isn't contested. An example (which Pearl uses in his book Causality) is lung cancer and smoking. It was obvious to most people that smoking caused lung cancer, but another possibility was that there was a genetic predisposition to lung cancer, and that genetic factor also caused people to want to smoke. The tobacco industry in fact argued this, and (IIUC) it took some time before the direction of causation could be established in the legal sense.
An example I he
Re: (Score:2)
One thing we're very interested in is the consequences of certain actions. Suppose we were to decriminalize many drugs, so nobody would ever be sent to prison for them: what effects would that have? I personally am very interested in the influence of diet on heart disease, and while a lot of people are willing to tell me what they know they contradict each other.
Re: (Score:2)
[I]t deserves a more accurate headline: new statistical test can form confidence bounds for how unlikely it would be for a parameter to be of this magnitude if there were causation; when combined with existing tests it may discredit more potential claims of causation than previously practical.
Did you really need to give it such an obviously click-baity title?
Re: (Score:2)
Re: (Score:3)
I suspect the test could be generalized to work for N variables, since the noise should increase as we move along a causal chain. The only issue is the exponential drop-off in confidence. If the accuracy could be improved, it could be quite useful for deriving or verifying Bayesian networks.
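A quick sanity check of that chain intuition (my own toy simulation, not from the paper): push data through a chain X -> Y -> Z, with each step adding fresh noise, and the residual variance relative to the root grows as you move down the chain.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10000)
y = np.tanh(x) + 0.3 * rng.normal(size=10000)  # X -> Y adds fresh noise
z = np.tanh(y) + 0.3 * rng.normal(size=10000)  # Y -> Z adds more noise
for name, v in (("y", y), ("z", z)):
    # Residual variance after a linear fit on x grows down the chain.
    fit = np.polyval(np.polyfit(x, v, 1), x)
    print(name, round(np.var(v - fit), 3))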
Re: (Score:2)
Re: (Score:3)
At least PBS will be able to keep up their snake oil infomercials and I won't feel guilty for not supporting them.
Re: (Score:2)
Re: No problem. (Score:1)
Re: No problem. (Score:5, Insightful)
If you stop the wind all of a sudden, the turbine will continue to turn, causing wind, until the energy in the turbine is spent.
Re: (Score:2)
Re: (Score:3)
The standard t-test for detecting an effect is already probabilistic. In science and medicine a 95% confidence value is commonly used, which means a 1-in-20 chance of detecting something that isn't there.
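(For the skeptical, a quick simulation of that one-in-twenty rate -- my own illustration, assuming independent normal samples and scipy:)

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
trials, hits = 10000, 0
for _ in range(trials):
    a = rng.normal(size=30)
    b = rng.normal(size=30)  # same distribution as a: no real effect
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1
print(hits / trials)  # approximately 0.05: 1 in 20 false detections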
Re: (Score:2)
The standard t-test for detecting an effect is already probabilistic. In science and medicine a 95% confidence value is commonly used, which means a 1-in-20 chance of detecting something that isn't there.
Unless things have been radically relaxed in the last decade, the standard in hard sciences and medicine remains a 99% confidence interval. It's the social sciences that allow for a 95% confidence interval. Having worked in all the different schools out there, I think I have some confidence in my assertion.
Re: (Score:2)
Some confidence? Would you give yourself a 99% confidence interval, or only a 95%?
Re: (Score:2)
Would you give yourself a 99% confidence interval, or only a 95%?
The question of the different criteria used in different fields of research is itself a social science question. Using OP's own criteria, they would require only 95% confidence. Obvious ... no?
Re: (Score:1)
Some confidence? Would you give yourself a 99% confidence interval, or only a 95%?
95% confidence. It depends on what you're testing, the purpose of the test, and the design of the experiment. For example, in some cases you might go with 90% simply because you're doing a pilot study--think of it as beta testing, or perhaps the alpha testing round. These are typically small and, well, simple, and you may go with a higher alpha simply because you're doing rough measurements to see if it works at all before investing the resources into doing a larger study with a lower alpha.
On the other han
Re: (Score:2)
You've got your statistics all wrong: you misrepresent significance testing, and overlook that t-tests are only suitable for a small range of problems. Plus it doesn't bear on the discussion of causality. You should have been downmodded into oblivion.
Re: (Score:1)
Re: (Score:3)
So once we start using this on everything, 1 out of every 5 times, it will lead us to bogus conclusions with false statistical confidence....
Apparently the Trident Gum people have been using this for decades.
Re:Great... (Score:4, Funny)
So once we start using this on everything, 1 out of every 5 times, it will lead us to bogus conclusions with false statistical confidence....
So, a vast improvement then? ;-)
Always (Score:4, Interesting)
So the noise in the effect dataset is always more complex than the noise in the cause dataset ... the additive noise model can tease apart cause and effect correctly in up to 80 per cent of the cases
In other words, not always.
Re:Always (Score:5, Interesting)
I recall some equipment monitoring techniques used in my industry. There were reams of data. If a piece of equipment failed, you could go back and look at the data and see that there were indications. But picking those indications out as useful input was always the problem. Only the blatant, in-your-face indications were caught. I see a similar problem here, that you might be able to show cause and effect with this data in hindsight, but it won't be so clear when you don't know the answer already.
Re:Always (Score:4, Interesting)
Now you might say, "obviously a low-pass filter is in the way, and that's causing the difference" but that gets back to your point, where it's easy to figure out when you already know the system, but if you don't, then it's not so easy.
Re: (Score:2)
Data dependent changes. They're a problem in statistics and they're evil in crypto.
Re: (Score:2)
Re:Always (Score:4, Informative)
An algorithm changes its behavior based on the value.
The example I gave is a sneaky algorithm in the FIPS spec that deletes consecutive values when they match.
I.e., in Python:

def fips_filter(values):
    last_value = None
    for this_value in values:
        if this_value == last_value:
            continue  # matches the previous value: don't output it
        yield this_value
        last_value = this_value
This is on the output of an RNG and so it reduces the entropy in the random numbers because there are no matching consecutive numbers, whereas in a full entropy stream, all pairs would be equally likely.
In the context of noise in statistical analysis, it can confound the additive noise models.
Algorithms that do things to data, but don't look at the values of the data when deciding what to do, are not data dependent, and that limits the scope for various bad things to happen.
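A quick demonstration of that entropy loss (my own sketch, reusing the fips_filter definition above): run a uniform byte stream through the filter, and the pair (v, v) simply never appears in the output, whereas in the raw stream roughly 1/256 of consecutive pairs would match.

import numpy as np

rng = np.random.default_rng(3)
raw = rng.integers(0, 256, size=100_000)
filtered = list(fips_filter(raw))
matches = sum(a == b for a, b in zip(filtered, filtered[1:]))
print(matches)  # always 0; the raw stream would have roughly 100000/256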
Re: (Score:2)
Nice description. Obviously this one is susceptible to a dead sensor, or a stuck value. I run into these issues all the time, which I circumvent by keeping track of the local error (when the error drops to exactly zero, I know it's a dead sensor).
Re: (Score:1)
I don't understand the initial problem. If A temporally precedes B, then B cannot have caused A but it may have been caused by A. (Unless there is backwards causation, which presumably violates many laws of physics.)
So why can't they properly take into account time?
Re: (Score:2)
Seriously? You clicked on an article on statistics, yet haven't the faintest clue what 'correlation' means?
Try going to Yahoo Answers instead of Slashdot. You'll do everybody a favor.
Re: (Score:3)
Re: (Score:2)
So why can't they properly take into account time?
Because the original set up may be buried in time. You find a wind-turbine turning, the wind is blowing. Merely by measuring the speed of each how can you tell which came first? (Yes, I know ... you compare the noise profile of the respective data sets.)
But now, back to dwelling exclusively on the potential problems without acknowledging even the limited usefulness this methodology might have ...
Re: (Score:1)
80 percent of the time it works every time...
Re: (Score:1)
Now, now. (https://www.youtube.com/watch?v=Hw5OoVXeJbU)
That 20% is the killer though (Score:3)
Reading through the article, it wasn't clear to me how it is determined whether it worked correctly or not.
But still, an interesting statistical breakthrough, and one that allows researchers to ask interesting questions about their data.
Re: (Score:2)
I agree, as long as it is recognized as a tool to assist in defining the confidence, as opposed to a guarantee of confidence. I also wonder what adjustments can be made to 'tune' the result to the desired conclusions.
What's that saying? (Score:1)
Lies, damned lies and statistics. [wikipedia.org]
So, correlation CAN mean causation? (Score:5, Insightful)
Well, of course it can. How do you think causation is determined? First by noticing a correlation. There can't be causation without correlation.
Gawd I hate the brain-dead fools who thoughtlessly parrot, "Correlation is not causation!"
Re: (Score:1)
There can't be causation without correlation.
That is an interesting statement. I would love to see some proof of that.
Wouldn't a one-shot event with a delayed consequence have causation without correlation?
I speculate that there can't be correlation between non-repeating, non-simultaneous events.
Re: (Score:1)
Every consequence is delayed. Except for maybe some effects of quantum entanglement. (grin)
What delay do you suppose is the cutoff for something to not be correlated?
Re: (Score:1)
Every consequence is delayed. Except for maybe some effects of quantum entanglement. (grin)
What delay do you suppose is the cutoff for something to not be correlated?
Actually, no.
Repetitive events may not have a beginning or end within the data-gathering period. It is quite common to not be able to tell which is "first". In that test condition, the result can "come before" the cause, at least mathematically. Or on the display screens. It's called "phase reset", and probably other things in other tech dialects.
And most of the questions discussed on slashdot are repetitive events...
The article seems practical, though. The cause and effect both have noise. The noise from
Re: (Score:2)
There can't be causation without correlation.
That is an interesting statement. I would love to see some proof of that.
Wouldn't a one-shot event with a delayed consequence have causation without correlation?
I speculate that there can't be correlation between non-repeating, non-simultaneous events.
You are correct, it is possible to have a causal relationship that does not result in a correlation.
This occurs if the consequence of the cause has a mediating factor occurring before the consequence, and the mediating factor varies in some way that is not dependent upon the causal action.
Here's a simplified example:
There are causes that make stock market prices vary, but the direction of the price depends upon how the information regarding the event is presented.
Falling oil prices cause oil company share p
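A toy simulation of the mediating-factor point (my own illustration, separate from the truncated example above): X fully causes Y, but an independent mediating sign flip erases the linear correlation.

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
m = rng.choice([-1, 1], size=100_000)  # mediator, independent of x
y = m * x                              # x causes y, but via the mediator
print(round(np.corrcoef(x, y)[0, 1], 3))  # approximately 0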
Re: (Score:2)
"You are correct, it is possible to have a causal relationship that does not result in a correlation."
Too long to explain. An easier way: there can be causation without correlation. This is called deterministic chaos (the butterfly inducing a hurricane on the other side of the world, remember?).
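That one can actually be checked numerically (my own sketch): in the fully chaotic logistic map, each value completely determines the next, yet the linear correlation between successive values comes out near zero.

import numpy as np

x = np.empty(100_000)
x[0] = 0.1234
for n in range(len(x) - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])  # deterministic chaotic update
print(round(np.corrcoef(x[:-1], x[1:])[0, 1], 3))  # approximately 0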
Re: (Score:1)
Re: (Score:2, Interesting)
Gawd I hate the brain-dead fools who thoughtlessly parrot, "Correlation is not causation!"
The proper term is: "Correlation does not imply causation". Perhaps you are being pedantic, but I'd rather hang around people who think "Correlation is not causation" (since it is more correct) than people who think "Correlation is causation".
Re:So, correlation CAN mean causation? (Score:5, Interesting)
I prefer "Correlation does not prove causation".
Edward Tufte suggested "Correlation is not causation but it sure is a hint."
Re: (Score:1)
Re: (Score:1)
What if there is a third party? (Score:2)
Re: (Score:2)
That was one of my thoughts, as well. I think I understand the concept, and it seems like an interesting and possibly useful approach. However, it doesn't seem like it will necessarily give us causal links in a very certain way, since many real-life situations have many factors with complex relationships. Like: Z causes X and Y, but perhaps it always causes X and only makes Y more likely. Or: A, B, and C all independently increase the chances of E, but only when an unknown factor D is present.
So I'd gu
Re: (Score:3)
So if Z causes both X and Y, I assume that this amazing test gives garbage?
Perhaps in some cases it would be possible to detect that both X and Y were being affected by the same noise, implying the existence of some unknown Z?
Re: (Score:1)
Ideally, this test should say that neither caused the other.
Re: (Score:2)
Not to mention A causes X and B causes Y. You know, a coincidence.
One would assume that multiple trials would cause A and B to occur at different times, thus eliminating any perceived correlation between X and Y.
Re:David Hume (Score:5, Insightful)
Yes, but now we can find out whether we read Slashdot because we are nerds, or we are nerds because we read Slashdot.
Re: (Score:1)
"up to 80%" - is that a joke? (Score:1)
Is that a joke for the quantitatively pedantic?
Hey, we have this new technique. It's somewhere between 0% and 80% reliable.
Other causality tests exist (Score:5, Informative)
Many other attempts at detecting causality exist. There's one based on dynamical systems theory (Takens' theorem): in a multidimensional, causally linked dynamical system, all the information in the high-dimensional system can be recovered from multiple values of a single variable over time.
The method works by reconstructing values of X from lagged vectors of Y(t) nearest-neighbor lagged vectors of Y in a training set. As the training set gets larger, the predictions get better. If they keep getting better, X probably causes Y. The idea that the noise in X(t) shows up in Y(t) but not the other way around is implicitly captured in that approach, although not in a statistically rigorous way.
Sugihara et al. Science 2012 [sciencemag.org] (sorry about paywall).
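For anyone who wants to play with the idea, here is a heavily simplified sketch of cross mapping (mine, not Sugihara's code; the fixed embedding dimension and the coupled logistic maps below are my own toy choices):

import numpy as np

def cross_map_skill(x, y, library, dim=3, k=4):
    # Lagged embedding of y: row t is (y[t], y[t-1], ..., y[t-dim+1]).
    emb = np.column_stack([y[dim - 1 - i : len(y) - i] for i in range(dim)])
    xs = x[dim - 1:]                      # x aligned with the embedding rows
    train = emb[:library]
    preds, truth = [], []
    for t in range(library, len(emb)):
        d = np.linalg.norm(train - emb[t], axis=1)
        nn = np.argsort(d)[:k]            # k nearest neighbours in the library
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        w /= w.sum()
        preds.append(w @ xs[nn])          # cross-map estimate of x at time t
        truth.append(xs[t])
    return np.corrcoef(preds, truth)[0, 1]

# Coupled logistic maps in which X drives Y (a standard test system):
n = 1200
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
for lib in (50, 200, 800):
    print(lib, round(cross_map_skill(x, y, lib), 3))
# Skill rising with library size is the signature that X influences Y.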
Re: (Score:1)
Proofreading...should say:
The method works by reconstructing values of X(t) from lagged vectors of Y(t) USING nearest-neighbor lagged vectors of Y(t) in a training set.
I predict (Score:3)
2) A whole bunch of people totally ignoring this study because they don't like what it means.
Re: (Score:2)
I think you're dead on with #2, but I doubt anyone will ever say that violent behavior caused video games....
Re: (Score:2)
I'm hazy on the details but ISTR sports deriving from war dances, which derived from a desire not to have all your warriors killed just to figure out who got to use the better stream for the following month.
Re: (Score:2)
That is, you don't become violent because you play a lot of violent video games. Instead you play a lot of violent video games because you have an aggressive personality.
This assumes some sort of causal relationship. (Score:2)
It implicitly presumes that there is some relatively direct causal relationship between the two events.
Fundamental flaw.
Re: (Score:1)
Bad turbine example (Score:2)
The turbine example is poor. Adequate data will show causality in time between a wind gust and a delayed turbine rotation rate. Momentum easily causes a lag between one data set and the other, and the concept of time running in one direction can easily be used to suggest causality.
I am more curious about a test that would show if 2 data sets are clearly caused by a third non-measured factor.
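A sketch of that lag idea (my own toy model, with made-up dynamics): give the turbine inertia and a one-step delayed response to the wind, then look at where the cross-correlation peaks.

import numpy as np

rng = np.random.default_rng(5)
n = 5000
wind = np.empty(n)
wind[0] = 0.0
for t in range(1, n):
    wind[t] = 0.8 * wind[t - 1] + rng.normal()   # gusty but persistent wind

turbine = np.zeros(n)
for t in range(1, n):
    # Inertia plus a one-step delayed response to the wind.
    turbine[t] = 0.5 * turbine[t - 1] + 0.5 * wind[t - 1]

def lagged_corr(lead, follow, lag):
    # Correlation of lead[t] with follow[t + lag].
    if lag >= 0:
        return np.corrcoef(lead[: len(lead) - lag], follow[lag:])[0, 1]
    return np.corrcoef(lead[-lag:], follow[:lag])[0, 1]

for lag in range(-3, 4):
    print(lag, round(lagged_corr(wind, turbine, lag), 3))
# The peak sits at a positive lag: the turbine trails the wind, not vice versa.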
Re:Bad turbine example (Score:4, Funny)
This tells us nothing about the arrow of time (Score:4, Insightful)
Re: (Score:1)
Which direction in time does cause/effect flow? The world may never know.
Time flows in all directions, but you only see one direction. That direction depends on which direction -you- are from the "big bang".
covariance matrix? (Score:2)
How helpful? (Score:2)
Can they say:
Does A cause B? Probably not.
Does B cause A? Probably not.
So there's probably a C causing A and B.
There's a lot of probablys in that.
At last... (Score:2)
Didn't Judea Pearl solve this decades ago? (Score:2)
This excellent blog article [michaelnielsen.org] describes a technique developed by Judea Pearl decades ago to do exactly this. Would be interested to understand how this is different/better.
Re: (Score:1)
They cite him in the article, so they're aware of him. I haven't read the article enough to know how this relates, but I think they're proposing another way to infer causation from observational data, beyond the structural modeling (and experimental methods) that Pearl describes.
Temp vs CO2? (Score:1)
Hmmm. There is a lot more noise in the global temperature data than there is in the atmospheric concentration of CO2.
Re: (Score:2)
80% of the time it confirms the scientists' expectations.
Re: (Score:2)
Now that they've found a way to filter out ("ignore") data that doesn't fit, maybe now they'll actually be able to conclusively prove that climate change exists!
AGW skeptic here, but I'd be very cautious about applying this technique to climatic data to try and prove anything.
This technique works best where there are a limited number (read: two) of variables, and a clear cause and effect (i.e., one variable is dependent). At least that's my understanding.
Climate data is mindbogglingly complex, with a huge number of known variables with known and unknown dependencies. Even something as seemingly straightforward as the carbon cycle has a large number of feedbacks, whic
Re: (Score:1)
Now that they've found a way to filter out ("ignore") data that doesn't fit, maybe now they'll actually be able to conclusively prove that climate change exists!
Oh, wait. They are already ignoring the data that doesn't fit, so I guess this won't help. Well, maybe sometime in the next 50 years they'll actually come up with a model that is accurate for more than 2-3 years in the future.
There's undoubtedly more noise in climate data than in CO2 data, so you've just reminded us about the "climate makes CO2 rise, not the other way around" argument, and it is now even more clear that it is false. Good job!
Re: (Score:2)
Don't fucking delete your fucking data you fucking dipshit.
Unless, of course, you know that some fucking data is bad, and other fucking data is good. In that case it makes sense to fucking delete the fucking bad data.
Re: (Score:2)
You can't know your data is bad when doing experimentation. That's the point of experimentation - you control variables and observe others to test a hypothesis.
The point at which you can KNOW data is bad is the point at which you know all of the variables and all the details of the phenomenon you're observing. It's like "experimenting" with 1+1 on a calculator. When it gives you a 12 you know you've got bad data (you keyed in 11+1 or 1+11 or something), but that's only because you know the entire system and what
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
I would like to officially confirm that, indeed, OP is angry.
sexconker, can you please point out to me the place on the doll where the bad ebil statistician touched you?
We will get you some therapy sorted out. Please don't rape, torture and mutilate the dead body of an innocent person in the meantime.
So angry!
Re: (Score:2)
I love statistics. I hate "statisticians".
Re: (Score:2)
Understandable reaction to the quantity of smugness in the story?
Re: (Score:3)
Almost any level of accuracy above pure randomness can be fruitfully added to the Bayesian inference process. You can pretty harmlessly add pure noise as well; it's just not going to be fruitful.
Re: (Score:2)
"Finally, we can discover whether increased crime causes ice cream sales to rise...or if it's the other way around."
Nonsense... increased ice cream sales come from global warming which, in turn, reduces pirates, as everybody knows, therefore reducing crime, not the other way around.
Re: (Score:1)
I don't know about that, but I can tell you that global warming causes the ground to get harder. Proof: when I was younger, I could camp out on the ground with no air mattress; can't do that now.