Cause and Effect: How a Revolutionary New Statistical Test Can Tease Them Apart

KentuckyFC writes: Statisticians have long thought it impossible to tell cause and effect apart using observational data. The problem is to take two sets of measurements that are correlated, say X and Y, and to find out if X caused Y or Y caused X. That's straightforward with a controlled experiment in which one variable can be held constant to see how this influences the other. Take, for example, a correlation between wind speed and the rotation speed of a wind turbine. Observational data gives no clue about cause and effect, but an experiment that holds the wind speed constant while measuring the speed of the turbine, and vice versa, would soon give an answer. But in the last couple of years, statisticians have developed a technique that can tease apart cause and effect from the observational data alone. It is based on the idea that any set of measurements always contains noise. However, the noise in the cause variable can influence the effect but not the other way round, so the noise in the effect dataset is always more complex than the noise in the cause dataset. The new statistical test, known as the additive noise model, is designed to find this asymmetry. Now statisticians have tested the model on 88 sets of cause-and-effect data, ranging from altitude and temperature measurements at German weather stations to the correlation between rent and apartment size in student accommodation. The results suggest that the additive noise model can tease apart cause and effect correctly in up to 80 per cent of the cases (provided there are no confounding factors or selection effects). That's a useful new trick in a statistician's armoury, particularly in areas of science where controlled experiments are expensive, unethical or practically impossible.
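To make the idea concrete, here is a minimal sketch of the additive-noise approach. It is not the researchers' implementation: the polynomial regression and the correlation-based score are crude stand-ins for proper regression and independence tests (such as HSIC), and the toy data is illustrative only. Fit a regression in each direction and prefer the direction whose residuals look less dependent on the input:

    import numpy as np

    def dependence_score(inp, res):
        # Crude proxy for an independence test: does the residual's
        # magnitude vary with the input? (Near zero if independent.)
        inp = (inp - inp.mean()) / inp.std()
        r2 = res ** 2
        r2 = (r2 - r2.mean()) / r2.std()
        return abs(np.corrcoef(inp, r2)[0, 1]) + abs(np.corrcoef(inp ** 2, r2)[0, 1])

    def anm_direction(x, y, degree=3):
        # Fit y = f(x) + residual and x = g(y) + residual, then score
        # how strongly each residual depends on its own input.
        res_xy = y - np.polyval(np.polyfit(x, y, degree), x)
        res_yx = x - np.polyval(np.polyfit(y, x, degree), y)
        return "X -> Y" if dependence_score(x, res_xy) < dependence_score(y, res_yx) else "Y -> X"

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 1000)
    y = x ** 3 + rng.normal(0, 1, 1000)   # noise enters additively on the effect side
    print(anm_direction(x, y))            # expected: X -> Y

The asymmetry the summary describes shows up in the reverse fit: the residuals of x regressed on y scatter much more near y = 0, so their magnitude visibly depends on y, while the forward residuals are just the independent noise.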
  • No problem. (Score:5, Insightful)

    by TechyImmigrant ( 175943 ) on Thursday December 18, 2014 @01:15PM (#48627355) Homepage Journal

    >provided there are no confounding factors or selection effects

    So that'll provide plenty of material for medical researchers, nutrition researchers, education researchers and economists to keep doing what they're doing.
     

    • by digsbo ( 1292334 )
      You left out people with an opinion on climate change.
    • Re:No problem. (Score:5, Insightful)

      by Anonymous Coward on Thursday December 18, 2014 @02:30PM (#48628077)

      I can't think of any cases where I know there are no confounding factors but don't know which is the cause and which is the effect.

      Also, when it comes to medical stuff, or any human observational study, I can't think of any that don't have selection effects as well. It's a neat trick, but I honestly can't think of a single case where it applies in a useful way. Does anyone have an example?

      The article starts with this example of a confounding factor (which makes this test not applicable):

      That turned out to be an erroneous conclusion. Later studies showed that women who took hormone replacement therapy were likely to be from higher socio-economic groups with higher incomes, better diets and generally healthier outcomes. It was this that caused the correlation the earlier studies had found. By contrast, proper randomised controlled trials showed that hormone replacement therapy actually increased the risk of heart disease.

      This test may sometimes be able to provide evidence against causation in such cases (which is useful), but it can't determine causation (because there may be confounding factors). That may be newsworthy, but it deserves a more accurate headline: a new statistical test can form confidence bounds on how unlikely a parameter of this magnitude would be if there were causation; when combined with existing tests, it may discredit more potential claims of causation than was previously practical.

      • > That may be newsworthy, but it deserves a more accurate headline: a new statistical test can form confidence bounds on how unlikely a parameter of this magnitude would be if there were causation; when combined with existing tests, it may discredit more potential claims of causation than was previously practical.

        Bingo. You have won the internets.

      • The whole question of "which direction is the causality" is misleading in the first place; pure, uni-directional causality in situations of interest to people is almost non-existent. What we should usually look for is stable configurations ("stable" not implying "good," as in poverty), and self-reinforcing cycles (whether virtuous or vicious). Even if manipulating A causes B to change, it may also be that manipulating B would cause A to change.
        • by icebike ( 68054 )

          The other thing they fail to understand is that causality is patently obvious in the vast majority of cases where there are no confounding factors.

          Probably the social sciences are most in need of tests like this, as they are always trying to pin some outcome on some input in a bubbling cauldron of alternatives. But of course, the cauldron is full of confounding factors.

          • The other thing they fail to understand is that causality is patently obvious in the vast majority of cases where there are no confounding factors.

            Probably the social sciences are most in need of tests like this, as they are always trying to pin some outcome on some input in a bubbling cauldron of alternatives. But of course, the cauldron is full of confounding factors.

            Still going to need to elucidate a reasonably valid mechanism to convince anybody of anything.

          • It may be obvious, but that doesn't mean it isn't contested. An example (which Pearl uses in his book Causality) is lung cancer and smoking. It was obvious to most people that smoking caused lung cancer, but another possibility was that there was a genetic predisposition to lung cancer, and that genetic factor also caused people to want to smoke. The tobacco industry in fact argued this, and (IIUC) it took some time before the direction of causation could be established in the legal sense.

            An example I he

        • One thing we're very interested in is the consequences of certain actions. Suppose we were to decriminalize many drugs, so nobody would ever be sent to prison for them: what effects would that have? I personally am very interested in the influence of diet on heart disease, and while a lot of people are willing to tell me what they know they contradict each other.

      • [I]t deserves a more accurate headline: a new statistical test can form confidence bounds on how unlikely a parameter of this magnitude would be if there were causation; when combined with existing tests, it may discredit more potential claims of causation than was previously practical.

        Did you really need to give it such an obviously click-baity title?

      • by rdnetto ( 955205 )

        I suspect the test could be generalized to work for N variables, since the noise should increase as we move along a causal chain. The only issue is the exponential drop-off in confidence. If the accuracy could be improved, it could be quite useful for deriving or verifying Bayesian networks.

      • Indeed. There's the famous, perhaps apocryphal, finding that storks bring babies, which correlated the postwar stork population in Europe with the birth rate; the confounding factor was the spike in the marriage rate after the war, resulting in more babies and more houses (in the chimneys of which storks nest). You can't take that apart by comparing the noise in the stork count and in the baby count.
    • At least PBS will be able to keep up their snake oil infomercials and I won't feel guilty for not supporting them.

      • God. I seriously miss the PBS before this. I guess I'm in the minority, but I'd love it if more of my tax dollars went to them once again so they didn't have to pull that shit anymore.
    • When machine learning based on this model is applied to the improvement of this statistical model, we might have a problem.
  • Always (Score:4, Interesting)

    by phantomfive ( 622387 ) on Thursday December 18, 2014 @01:16PM (#48627367) Journal

    So the noise in the effect dataset is always more complex than the noise in the cause dataset ... the additive noise model can tease apart cause and effect correctly in up to 80 per cent of the cases

    In other words, not always.

    • Re:Always (Score:5, Interesting)

      by Mr D from 63 ( 3395377 ) on Thursday December 18, 2014 @01:25PM (#48627485)
      This is the tricky part, and it seems to work if you know exactly the cause and effect in advance, so you know which data to look at. It is quite clever though, and would seem to have application as an indicator if nothing else.

      I recall some equipment monitoring techniques used in my industry. There were reams of data. If a piece of equipment failed, you could go back and look at the data and see that there were indications. But picking those indications out of the noise as useful input was always the problem. Only the blatant, in-your-face indications were caught. I see a similar problem here: you might be able to show cause and effect with this data in hindsight, but it won't be so clear when you don't know the answer already.
      • Re:Always (Score:4, Interesting)

        by phantomfive ( 622387 ) on Thursday December 18, 2014 @01:35PM (#48627567) Journal
        Indeed, it's easy to think of situations where the opposite is true, where the noise is simpler in the 'effect' than in the 'cause,' because there is some attenuation factor in between that reduces the noise. That's more or less what a damper or shock absorber is designed to do. And a low pass filter in audio does the same thing.

        Now you might say, "obviously a low-pass filter is in the way, and that's causing the difference" but that gets back to your point, where it's easy to figure out when you already know the system, but if you don't, then it's not so easy.
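        To make that concrete, here's a minimal sketch (my own toy example, not from the article) of a moving-average low-pass filter sitting between cause and effect, leaving the "effect" with simpler noise than the "cause":

            import numpy as np

            rng = np.random.default_rng(1)
            cause = np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 0.5, 2000)

            # The effect is the cause seen through a 50-sample moving average,
            # i.e. a crude low-pass filter / shock absorber.
            effect = np.convolve(cause, np.ones(50) / 50, mode="same")

            # High-frequency noise survives in the cause but is smoothed out of
            # the effect, inverting the "effect noise is more complex" heuristic.
            print("cause  roughness:", np.diff(cause).std())    # roughly 0.7
            print("effect roughness:", np.diff(effect).std())   # far smaller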
        • Data dependent changes. They're a problem in statistics and they're evil in crypto.

          • I'm not sure what a data dependent change is.
            • Re:Always (Score:4, Informative)

              by TechyImmigrant ( 175943 ) on Thursday December 18, 2014 @02:51PM (#48628199) Homepage Journal

              An algorithm changes its behavior based on the value.

              The example I gave is a sneaky algorithm in the FIPS spec that deletes consecutive values when they match.
              I.e.:

              if this_value == last_value:
                  pass                     # suppress the repeated value
              else:
                  output(this_value)       # output() here just means "emit the value"

              This is on the output of an RNG and so it reduces the entropy in the random numbers because there are no matching consecutive numbers, whereas in a full entropy stream, all pairs would be equally likely.

              In the context of noise in statistical analysis, it can confound the additive noise models.

              Algorithms that do things to data, but don't look at the values of the data when deciding what to do, are not data dependent, and that limits the scope for various bad things to happen.
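              For illustration, a quick simulation of that deletion rule (my own sketch, not code from any FIPS spec) shows the skew in consecutive-pair statistics:

                  import numpy as np

                  rng = np.random.default_rng(2)
                  raw = rng.integers(0, 4, 100_000)   # 2-bit values so repeats are common

                  # Deletion rule: drop any value equal to its predecessor.
                  keep = np.insert(raw[1:] != raw[:-1], 0, True)
                  filtered = raw[keep]

                  # A full-entropy stream repeats ~25% of the time with 2-bit values;
                  # the filtered stream never repeats, which is the entropy loss above.
                  print("repeat rate, raw     :", np.mean(raw[1:] == raw[:-1]))
                  print("repeat rate, filtered:", np.mean(filtered[1:] == filtered[:-1]))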

              • Nice description. Obviously this one is susceptible to a dead sensor, or stuck value. I run into these issues all the time, which I circumvent by keeping track of the local error (when the error decreases to zero too, I know it's a dead sensor).

    • I don't understand the initial problem. If A temporally precedes B, then B cannot have caused A but it may have been caused by A. (Unless there is backwards causation, which presumably violates many laws of physics.)

      So why can't they properly take into account time?

      • Seriously? You clicked on an article on statistics, yet haven't the faintest clue what 'correlation' means?
        Try going to Yahoo Answers instead of Slashdot. You'll do everybody a favor.

      • by itzly ( 3699663 )
        Probably because A and B have a large overlap in time, combined with poor record keeping at the beginning.
      • So why can't they properly take into account time?

        Because the original set-up may be buried in time. You find a wind turbine turning while the wind is blowing. Merely by measuring the speed of each, how can you tell which came first? (Yes, I know ... you compare the noise profiles of the respective data sets.)

        But now, back to dwelling exclusively on the potential problems without acknowledging even the limited usefulness this methodology might have ...

    • by Anonymous Coward

      80 percent of the time it works every time...

  • by neilo_1701D ( 2765337 ) on Thursday December 18, 2014 @01:17PM (#48627379)

    Reading through the article, it wasn't clear to me how it is determined whether it worked correctly or not.

    But still, an interesting statistical breakthrough, and one that allows researchers to ask interesting questions about their data.

    • I agree, as long as it is recognized as a tool to assist in defining the confidence, as opposed to a guarantee of confidence. I also wonder what adjustments can be made to 'tune' the result to the desired conclusions.

  • by Anonymous Coward on Thursday December 18, 2014 @01:18PM (#48627391)

    Well, of course it can. How do you think causation is determined? First by noticing a correlation. There can't be causation without correlation.

    Gawd I hate the brain-dead fools who thoughtlessly parrot, "Correlation is not causation!"

    • by Anonymous Coward

      There can't be causation without correlation.

      That is an interesting statement. I would love to see some proof of that.

      Wouldn't a one-shot event with a delayed consequence have causation without correlation?
      I speculate that there can't be correlation between non-repeating, non-simultaneous events.

      • by Anonymous Coward

        Every consequence is delayed. Except for maybe some effects of quantum entanglement. (grin)
        What delay do you suppose is the cutoff for something to not be correlated?

        • Every consequence is delayed. Except for maybe some effects of quantum entanglement. (grin)
          What delay do you suppose is the cutoff for something to not be correlated?

          Actually, no.

          Repetitive events may not have a beginning or end within the data gathering period. It is quite common to not be able to tell which is "first". In that test condition, the result can "come before" the cause, at least mathematically. Or on the display screens. It's called "phase reset", and probably other things in other tech dialects.

          And most of the questions discussed on slashdot are repetitive events...

          The article seems practical, though. The cause and effect both have noise. The noise from

      • by clovis ( 4684 )

        There can't be causation without correlation.

        That is an interesting statement. I would love to see some proof of that.

        Wouldn't a one-shot event with a delayed consequence have causation without correlation?
        I speculate that there can't be correlation between non-repeating, non-simultaneous events.

        You are correct, it is possible to have a causal relationship that does not result in a correlation.
        This occurs if the consequence of the cause has a mediating factor occurring before the consequence, and the mediating factor varies in some way that is not dependent upon the causal action.
        Here's a simplified example:
        There are causes that make stock market prices vary, but the direction of the price depends upon how the information regarding the event is presented.
        Falling oil prices cause oil company share p

        • "You are correct, it is possible to have a causal relationship that does not result in a correlation."

          Too long to explain. An easier way: there can be causation without correlation. This is called deterministic chaos (the butterfly inducing a hurricane on the other side of the world, remember?).

        • Or stuff like "this drug increases survival in men, but decreases survival in women." If you don't suspect enough to look for that and just look at overall survival, you'd be mystified.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Gawd I hate the brain-dead fools who thoughtlessly parrot, "Correlation is not causation!"

      The proper term is: "Correlation does not imply causation". Perhaps you are being pedantic, but I'd rather hang around people who think "Correlation is not causation" (since it is more correct), than people who think "Correlation is causation".

      • by Wraithlyn ( 133796 ) on Thursday December 18, 2014 @03:42PM (#48628629)

        I prefer "Correlation does not prove causation".

        Edward Tufte suggested "Correlation is not causation but it sure is a hint."

  • So if Z causes both X and Y, I assume that this amazing test gives garbage?
    • That was one of my thoughts, as well. I think I understand the concept, and it seems like an interesting and possibly useful approach. However, it doesn't seem like it will necessarily give us causal links in a very certain way, since many real-life situations have many factors with complex relationships. Like: Z causes X and Y, but perhaps it always causes X and only makes Y more likely. Or: A, B, and C all independently increase the chances of E, but only when an unknown factor D is present.

      So I'd gu

    • So if Z causes both X and Y, I assume that this amazing test gives garbage?

      Perhaps in some cases it would be possible to detect that both X and Y were being affected by the same noise, implying the existence of some unknown Z?

    • by Anonymous Coward

      Ideally, this test should say that neither caused the other.

  • by Anonymous Coward

    Is that a joke for the quantitatively pedantic?

    Hey, we have this new technique. It's somewhere between 0% and 80% reliable.

  • by Anonymous Coward on Thursday December 18, 2014 @01:57PM (#48627783)

    Many other attempts at detecting causality exist. There's one based on dynamical systems theory (Takens' theorem): in a multidimensional, causally linked dynamical system, all the information in the high-dimensional system can be recovered from multiple values of a single dimension over time.

    The method works by reconstructing values of X(t) from lagged vectors of Y(t), using nearest-neighbor lagged vectors of Y(t) in a training set. As the training set gets larger, the predictions get better. If they keep getting better, X probably causes Y. The idea that the noise in X(t) shows up in Y(t) but not the other way around is implicitly captured in that approach, although not in a statistically rigorous way.

    Sugihara et al. Science 2012 [sciencemag.org] (sorry about paywall).
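    A minimal sketch of that cross-mapping idea (mine, loosely after Sugihara et al., not their exact algorithm; the coupled logistic maps and all parameters are illustrative only): reconstruct X from the delay embedding of Y and check whether the reconstruction improves as the library grows.

        import numpy as np

        def embed(series, E=3, tau=1):
            # Delay-coordinate embedding of a scalar time series.
            n = len(series) - (E - 1) * tau
            return np.column_stack([series[i * tau : i * tau + n] for i in range(E)])

        def cross_map_skill(x, y, lib_len, E=3, tau=1):
            My = embed(y, E, tau)
            x = x[(E - 1) * tau:]              # align x with the embedded y
            lib = My[:lib_len]
            preds, truth = [], []
            for t in range(lib_len, len(My)):  # predict held-out points only
                d = np.linalg.norm(lib - My[t], axis=1)
                nn = np.argsort(d)[:E + 1]     # E+1 nearest neighbors in the library
                w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))
                preds.append(np.sum(w * x[nn]) / w.sum())
                truth.append(x[t])
            return np.corrcoef(preds, truth)[0, 1]

        # Toy system: x drives y, so y's history encodes x but not vice versa.
        x = np.empty(500); y = np.empty(500)
        x[0], y[0] = 0.4, 0.2
        for t in range(499):
            x[t + 1] = x[t] * (3.8 - 3.8 * x[t])               # autonomous driver
            y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.2 * x[t])  # driven by x

        for L in (50, 150, 400):
            print(L, round(cross_map_skill(x, y, L), 3))       # skill should rise with L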

  • by gurps_npc ( 621217 ) on Thursday December 18, 2014 @02:03PM (#48627837) Homepage
    1) A sudden disappearance of studies claiming that video games cause violent behavior rather than the other way around.

    2) A whole bunch of people totally ignoring this study because they don't like what it means.

    • I think you're dead on with #2, but I doubt anyone will ever say that violent behavior caused video games....

      • by skids ( 119237 )

        I'm hazy on the details but ISTR sports deriving from war dances, which derived from a desire not to have all your warriors killed just to figure out who got to use the better stream for the following month.

      • I meant that violent people enjoy violent video games.

        That is, you don't become violent because you play a lot of violent video games. Instead you play a lot of violent video games because you have an aggressive personality.

  • It implicitly presumes that there is some relatively direct causal relationship between the two events.

    Fundamental flaw.

    • ... a flaw which is explicitly pointed out early on in the paper and stated pretty plainly, but that would take actually reading it to know. I'm actually kind of interested in this. We've got a few data sets we do that have highly-correlated variables, and I'll be curious to apply one or two of the methods here (once I figure them out - I'm not a statistician, my boss is) to see if there's any difference in the noise between them. Won't tell me anything definitively, but could suggest which if any might
  • The turbine example is poor. Adequate data will show causality in time between a wind gust and a delayed turbine rotation rate. Momentum easily causes a lag between one data set and the other, and the concept of time running in one direction can easily be used to suggest causality.

    I am more curious about a test that would show if 2 data sets are clearly caused by a third non-measured factor.

  • by Culture20 ( 968837 ) on Thursday December 18, 2014 @02:40PM (#48628141)
    Which direction in time does cause/effect flow? The world may never know.
    • Which direction in time does cause/effect flow? The world may never know.

      Time flows in all directions, but you only see one direction. That direction depends on which direction -you- are from the "big bang".

  • I looked at the article - I don't understand how this is different from a covariance matrix.
  • It's not clear from the article how helpful this would be in the HRT example they give. The test can generally tell you which direction the causality runs in, but if there is no causation, will this be a clear enough test?

    Can they say:

    Does A cause B? Probably not.
    Does B cause A? Probably not.
    So there's probably a C causing A and B.

    There's a lot of probablys in that.

  • ... light at the end of the tunnel re: Chicken v. Egg... Pretty interesting though!
  • This excellent blog article [michaelnielsen.org] describes a technique developed by Judea Pearl decades ago to do exactly this. Would be interested to understand how this is different/better.

    • They cite him in the article, so they're aware of him. I haven't read the article enough to know how this relates, but I think they're proposing another way to infer causation from correlation, beyond the structural modeling (and experimental methods) that Pearl describes.

  • Hmmm. There is a lot more noise in the global temperature data than there is in the atmospheric concentration of CO2.

"Hello again, Peabody here..." -- Mister Peabody

Working...