Busting the MythBusters' Yawn Experiment
markmcb writes "Most everyone knows and loves the MythBusters, two guys who attempt to set the story straight on things people just take for granted. Well, maybe everyone except Brandon Hansen, who has offered them a taste of their own medicine as he busts the MythBusters' improper use of statistics in their experiment to determine whether yawning is contagious. While the article maintains that the contagion of yawns is still a possibility, Hansen is clearly giving the MythBusters no credit for proving such a claim, 'not with a correlation coefficient of .045835.'"
Re:well... truthfully... (Score:3, Informative)
But to test this, they used SUVs (if you are concerned about fuel efficiency, are you driving one?) going at about 40 mph; air drag, I think, increases with the square of the speed in that range, so highway speeds could significantly change the results. And, most stupidly, they ran the A/C cold enough that Jamie was commenting that he was glad he was wearing a fairly heavy jacket and (IIRC) a scarf!
Yeah, real useful result that test was.
Not quite, OmniNerd (Score:5, Informative)
Doesn't anyone know statistics any more? (Score:5, Informative)
There is a very well-known test, the chi-square test, that deals with exactly this case. (Given the small sample sizes, the Fisher exact test may give better results.) Someone should point Hansen to the Wikipedia page on the topic.
For example, if there are 16 non-primed people, with 4 yawning and 12 not (for 25%), and there are 34 primed people, with 10 yawning and 24 not (for 29%), the chi-square test gives a p value of 0.74.
The values Hansen supposes are significant (4,12 vs. 12,24) are not: p = 0.29.
You have to go all the way to 4,12 and 17,19 (i.e. 47% on a sample of 36) to get significance.
MythBusters was wrong to conclude that their results were significant, but Hansen was equally wrong to conclude that he had shown that MythBusters was wrong.
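The parent's p = 0.74 figure is easy to reproduce with a pure-Python sketch (not the tool either party actually used). For a 2x2 table the chi-square statistic has one degree of freedom, where the upper-tail p-value reduces to 1 - erf(sqrt(chi2/2)), so no stats library is needed:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]].

    Uses the shortcut chi2 = n*(ad - bc)^2 / (r1*r2*c1*c2), which is
    algebraically equal to summing (O - E)^2 / E over the four cells.
    """
    n = a + b + c + d
    r1, r2 = a + b, c + d   # row totals
    c1, c2 = a + c, b + d   # column totals
    return n * (a * d - b * c) ** 2 / (r1 * r2 * c1 * c2)

def p_value_df1(chi2):
    """Upper-tail p-value for chi-square with 1 degree of freedom."""
    return 1.0 - math.erf(math.sqrt(chi2 / 2.0))

# Episode data as quoted above: 4/16 unseeded yawned vs 10/34 seeded.
chi2 = chi2_2x2(4, 12, 10, 24)
p = p_value_df1(chi2)
print(f"chi2 = {chi2:.4f}, p = {p:.3f}")  # chi2 = 0.1050, p = 0.746
```

That p of roughly 0.75 is nowhere near the usual 0.05 threshold, which is the whole point: with these counts, neither "confirmed" nor "busted" is supportable.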
Busting the MythBusters busters (Score:3, Informative)
The MythBusters' numbers may mean that someone is 20% more likely to yawn if seeded. What's important now is to evaluate the margin of error for that statement given the sample size.
What the article is definitely wrong about is the claim that the sample size does not change anything. The sample size essentially controls the probability of error: the larger the sample, the more likely that the statement "someone is 20% more likely to yawn if seeded" reflects a real effect. However, at their sample size, it is quite possible that the margin of error is comparable to that 20% difference, which would invalidate the conclusion.
The detailed calculations for a sufficient sample size are left as an exercise for the reader.
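A rough version of that exercise, sketched with the normal approximation for the difference of two proportions (using the 4/16 unseeded vs 10/34 seeded counts quoted elsewhere in the thread; the 1.96 multiplier is the standard 95% value):

```python
import math

# Counts as reported for the episode: 4/16 unseeded, 10/34 seeded yawned.
n1, yawn1 = 16, 4
n2, yawn2 = 34, 10
p1, p2 = yawn1 / n1, yawn2 / n2

# Standard error of the difference in proportions (normal approximation).
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
margin = 1.96 * se  # half-width of a 95% confidence interval

print(f"observed difference: {p2 - p1:+.3f}")   # about +0.044
print(f"95% margin of error: +/-{margin:.3f}")  # about +/-0.26
```

The roughly +/-26-point margin swamps the observed +4.4-point difference, which is exactly the parent's point about the sample size.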
Re:Doesn't anyone know statistics any more? (Score:5, Informative)
But all this aside, I'm not sure I like the experiment. Why bore people? Why have so many in the room? The 4,12 number is way too high. I'd say they were better off looking at narrow time slices and natural yawns (i.e., do yawns happen at random, or do they set off avalanches?). Then there is only one group, and you're just testing the uncorrelatedness assumption of a Poisson process.
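A minimal sketch of that one-group test, assuming the raw data would be a list of yawn timestamps (the simulated data below is illustrative, not from the episode): the index of dispersion, the variance-to-mean ratio of counts per time bin, sits near 1 for an uncorrelated Poisson process and rises above 1 if yawns arrive in avalanches.

```python
import random

def dispersion_index(timestamps, n_bins, t_max):
    """Variance-to-mean ratio of counts per time bin (~1 for a Poisson process)."""
    counts = [0] * n_bins
    for t in timestamps:
        counts[min(int(t / t_max * n_bins), n_bins - 1)] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / (n_bins - 1)
    return var / mean

random.seed(42)
t_max = 1000.0
# Simulated uncorrelated yawns: uniform arrival times over [0, t_max).
yawns = [random.uniform(0, t_max) for _ in range(500)]
d = dispersion_index(yawns, n_bins=50, t_max=t_max)
print(f"dispersion index: {d:.2f}")  # close to 1 for uncorrelated yawns
```

Contagious ("avalanching") yawns would cluster into a few bins, inflating the variance and pushing the index well above 1.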
Re:Science (Score:5, Informative)
Would that be Rough Science [open2.net]? In particular, it sounds like the second series [wikipedia.org]. I've seen a couple of the series over the past few years, and I believe it did a pretty good job of being a science show. The interest comes from watching people who actually know what they're doing design and build ingenious solutions (admittedly with very convenient tools and materials available) to problems that aren't inherently interesting (like making toothpaste or measuring the speed of a glacier), rather than relying on 'interesting' problems that are large, dangerous, or explosive while neglecting the solution process.
Re:well... truthfully... (Score:2, Informative)
Re:Mythbusters is not scientific (Score:5, Informative)
Whoa there... (Score:3, Informative)
So, TFA's conclusion was correct, and so, whether intentionally or not, was the method. It's just not nearly as common as the chi-squared test for a 2x2 table.
Re:well... truthfully... (Score:3, Informative)
Or, more likely, test a subset and claim it applies to the entire set.
E.g., the Cell phones on a plane [wikipedia.org] episode, where they claim a cellphone will not interfere with the avionics of a jet. Unfortunately, this is only true for the cellphones they tested, in the jet they tested; it doesn't apply to the general case (sure, they did find 800MHz phones interfered, but modern phones don't...). The IEEE did a more elaborate series of experiments [ieee.org] and found some very surprising results, ranging from loss of GPS satellite lock to instrumentation drift. Heck, I've even heard interference from cellphones while flying (the characteristic buzzing was easily heard over the radio).
Re:Submitter gets an F on this one (Score:3, Informative)
The mean of 1.0 and 4.0, on the other hand, is 2.5, assuming you know your '1' and '4' in the above to that level of accuracy. Otherwise your '4' in particular could be anywhere from 3.5 to 4.5, because it might only be accurate to half a digit.
Re:well... truthfully... (Score:3, Informative)
That's one thing I've seen a lot online. People watch maybe 80% of the show and then go online and say "they made a gross procedural error!" when in fact they're testing something subtly different or not doing what those people think. From what I've seen they usually set up something that tests their myth, although occasionally they do screw up. They also tend to word their problems such that they can be solved without overly elaborate test procedures.
An example of a badly botched experiment was the light bulb episode, in particular the part they tacked on at the end about how much wear and tear turning light bulbs on and off causes compared to leaving them on. Since they both failed to create a control group and failed to set up any way to document when each bulb burned out, the test was a complete loss. At least they pretty much admitted it afterward (when they came back 2 months later and only the LED-based bulb was still working). The other half of the test, where they compared the energy use of the various kinds of bulbs, was pretty good though, if a little basic.
Re:Submitter gets an F on this one (Score:5, Informative)
The number of significant figures in an answer depends on how the function propagates errors. It's INCORRECT in general to think that if the inputs are given with two significant digits (say), then the output is only good for two significant digits.
The CORRECT way is to perform error analysis [wikipedia.org] on the function being computed. If the function is linear, then the error magnitude is essentially multiplied by a constant. If that constant is close to 1 (and only then) will the output accuracy be close to the input accuracy.
In general, a function being computed is nonlinear, and the resulting number of significant digits can be either more or less than for the input. Examples are chaotic systems (high accuracy in input -> low accuracy in output) or stable attractive systems (low accuracy in input -> high accuracy in output).
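The linear propagation rule behind this, delta_f ~ |f'(x)| * delta_x, is easy to demonstrate numerically (a sketch with illustrative functions, not anything from TFA):

```python
import math

def propagated_error(f, x, dx, h=1e-6):
    """First-order propagated error |f'(x)| * dx, with f' estimated
    by a central finite difference."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return abs(deriv) * dx

x, dx = 100.0, 1.0  # input known to +/- 1, i.e. 1% relative error

# An amplifying function: relative error doubles for f(x) = x**2.
err_sq = propagated_error(lambda v: v * v, x, dx)
print(f"x**2:    relative error {err_sq / x**2:.3%}")          # ~2%

# A damping function: relative error halves for f(x) = sqrt(x).
err_rt = propagated_error(math.sqrt, x, dx)
print(f"sqrt(x): relative error {err_rt / math.sqrt(x):.3%}")  # ~0.5%
```

Same 1% input uncertainty, yet the output is good to either roughly 2% or 0.5% depending on the function, which is exactly why "two significant digits in, two out" is not a rule.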
These "statistics" are COMPLETELY flawed. (Score:3, Informative)
The fact that they use a standard deviation to test a hypothesis, you know, instead of hypothesis testing [wikipedia.org], makes me certain that he doesn't know jack about statistics.
You do _NOT_ use descriptive statistics alone to test hypotheses about a population!!!
I can't believe how wrong this analysis is... What you're supposed to test is whether, when seeded with a yawn, you're more susceptible than when not seeded, and that is a whole other set of calculations...
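One assumption-light way to run that "whole other set of calculations" is a permutation test (a sketch using the 4/16 unseeded vs 10/34 seeded counts quoted elsewhere in the thread): shuffle the seeded/unseeded labels repeatedly and count how often chance alone produces a difference at least as large as the observed one.

```python
import random

random.seed(1)

# 1 = yawned, 0 = did not.
unseeded = [1] * 4 + [0] * 12   # 4 of 16 yawned
seeded = [1] * 10 + [0] * 24    # 10 of 34 yawned
observed = sum(seeded) / len(seeded) - sum(unseeded) / len(unseeded)

pooled = unseeded + seeded
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)  # reassign group labels at random
    fake_unseeded, fake_seeded = pooled[:16], pooled[16:]
    diff = sum(fake_seeded) / 34 - sum(fake_unseeded) / 16
    if diff >= observed:
        extreme += 1

p_one_sided = extreme / trials
print(f"observed diff = {observed:+.3f}, one-sided p ~ {p_one_sided:.2f}")
```

With these counts the p-value comes out far above 0.05, so even a test that makes no distributional assumptions agrees with the chi-square commenters above: no significant effect was demonstrated.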
Re:well... truthfully... (Score:3, Informative)
Wiki has the details: http://en.wikipedia.org/wiki/MythBusters_(season_