Math | It's funny. Laugh. | Media | Television

Busting the MythBusters' Yawn Experiment

markmcb writes "Most everyone knows and loves the MythBusters, two guys who attempt to set the story straight on things people just take for granted. Well, maybe everyone except Brandon Hansen, who has offered them a taste of their own medicine as he busts the MythBusters' improper use of statistics in their experiment to determine whether yawning is contagious. While the article maintains that the contagion of yawns is still a possibility, Hansen is clearly giving the MythBusters no credit for proving such a claim, 'not with a correlation coefficient of .045835.'"
This discussion has been archived. No new comments can be posted.

  • Well... (Score:5, Insightful)

    by Dyeane ( 1011019 ) on Monday April 23, 2007 @09:24PM (#18848461)
    If they find out, they may very well make an announcement on the show. Wouldn't be the first time.
    • by Cleon ( 471197 )
      Yeah. They've always been really good about revisiting myths and correcting themselves if they screwed up.
    • Re:Well... (Score:5, Insightful)

      by nocomment ( 239368 ) on Monday April 23, 2007 @09:39PM (#18848597) Homepage Journal
      I still wouldn't care if they did. I just like to watch them blow shit up. I'm not a fan of the show because of thorough statistical analysis.

      • by EmbeddedJanitor ( 597831 ) on Monday April 23, 2007 @11:14PM (#18849403)
Anyone who watches Mythbusters for scientific reasons should maybe start watching Star Trek instead. It's all entertainment; it has nothing to do with scientific accuracy.

        • Re: (Score:3, Interesting)

          i would put the scientific accuracy between 2% and 98%, depending on the myth.. i mean, why buy a boat and try to ram it into a channel marker if you can simply solve the problem on paper? again.. entertainment. remember freshman physics where the prof made you draw a crappy little diagram for every problem so you could "visualize" it? yea, think how fun it's got to be to just straight up crash the boat; no diagram needed. :D which brings me to my 2nd point. i do, in fact, watch mythbusters for scientific
        • Re: (Score:3, Insightful)

          Yeah, but they document their methods pretty well, what with a camera crew following them around. Except in cases where they could reveal something like the formula to build volatile chemicals. Which is understandable.

Isn't the real crux of science documentation and repeatability? I mean, if someone comes by, examines their methods, and finds out that they did it wrong and can show it, isn't that proof that they're acting in the spirit of science?
      • Re:Well... (Score:5, Insightful)

        by nametaken ( 610866 ) on Tuesday April 24, 2007 @12:13AM (#18849977)
        I always laugh a little when people feel clever pointing out little problems with MB episodes. Anyone who thinks they're meant to be rigorous experiments is missing the whole point of the show. Mythbusters is like a YouTube series with a fun cast and a budget... and I love it that way. As Kari and Grant said on tour recently, they're often figuring this stuff out as they go... learning cool stuff as they shoot.

        Besides, I think most of us already know that the best ways to test most myths would be so boring it would never make TV in the first place.

      • Re:Well... (Score:5, Funny)

        by AGMW ( 594303 ) on Tuesday April 24, 2007 @09:06AM (#18853643) Homepage
        What I can't work out is whether I'm yawning because I'm reading about yawning, or yawning because I'm reading about statistical analysis.

    • by paiute ( 550198 ) on Monday April 23, 2007 @09:47PM (#18848661)
      You do not report five significant figures derived from data with only two.
      • Re: (Score:3, Insightful)

        by drewski3420 ( 969623 )
        Er, aren't significant figures supposed to tell you to what degree a measurement is accurate?

        I mean, since there can't be any fractions of a person, if we know there are 50 people, we know that there are 50.0, 50.00, 50.000000000000 people, right?

It doesn't seem like sig-figs are applicable here.
        • Re: (Score:2, Redundant)

          by mackyrae ( 999347 )
Some things have infinite numbers of sig-figs, such as counting whole things or using conversion numbers (like 2.54 when going between inches and centimeters). Regardless, sig-figs are based on the *least* accurate measurement involved, so if you do 50 people * 1.54976 * 2.4, and the decimals on those last two numbers are as precise as your measuring device allows, you can only have 2 significant digits because 2.4 only has two significant digits.
      • by martin-boundary ( 547041 ) on Tuesday April 24, 2007 @02:20AM (#18850975)
        Who marked this informative?

        The number of significant figures in an answer depends on how the function propagates errors. It's INCORRECT in general to think that if the inputs are given with two significant digits (say), then the output is only good for two significant digits.

        The CORRECT way is to perform error analysis [wikipedia.org] on the function being computed. If the function is linear, then the error magnitude is essentially multiplied by a constant. If that constant is close to 1 (and only then) will the output accuracy be close to the input accuracy.

        In general, a function being computed is nonlinear, and the resulting number of significant digits can be either more or less than for the input. Examples are chaotic systems (high accuracy in input -> low accuracy in output) or stable attractive systems (low accuracy in input -> high accuracy in output).
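A rough numerical illustration of the parent's point (an editor's sketch in Python, not from the original post): to first order the output error is |f'(x)|·Δx, so depending on the function the result can carry fewer or more reliable digits than the input.

```python
import math

def propagated_error(f, x, dx, h=1e-6):
    """First-order error estimate |f'(x)| * dx, with the derivative taken numerically."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return abs(deriv) * dx

x, dx = 2.0, 0.01  # input known to +/- 0.01

# An amplifying function: the output error is much larger than the input error.
print(propagated_error(math.exp, x, dx))   # ~0.074

# A contracting function: the output error is smaller than the input error.
print(propagated_error(math.sqrt, x, dx))  # ~0.0035
```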

    • Re: (Score:3, Insightful)

      by Reaperducer ( 871695 )
While MythBusters is entertaining, it's not exactly science. It's closer to tabloid junk science. There are rarely control groups for their "experiments," and there are many other transgressions.

Sure, it's popular on /. because things go up in flames, but I think the show is giving kids a bad idea of what science is. Won't they be disappointed when they get to college and have to follow strict scientific procedures instead of watching things go boom.
      • Re:Well... (Score:4, Insightful)

        by Overly Critical Guy ( 663429 ) on Monday April 23, 2007 @10:29PM (#18849007)
        Hey, they're teaching kids to go out and prove things for themselves rather than believe them off the bat, and that's never a bad thing.
        • Re:Well... (Score:4, Funny)

          by Max_Abernethy ( 750192 ) on Monday April 23, 2007 @11:06PM (#18849343) Homepage
          pfft, more like underly critical guy.
          • Re:Well... (Score:5, Insightful)

            by cyphercell ( 843398 ) on Tuesday April 24, 2007 @12:28AM (#18850133) Homepage Journal

Mythbusters is no different than Bill Nye, Mr. Wizard or whoever the hell came first. They use the same basic methods for all of their problems. There are some differences though:

            1. Budget (much higher, but it doesn't always meet the problem at hand)
            2. Problems (completely open ended)
            3. Math (same level as most jr. high - high school science, however, sometimes severely short for the issue at hand - see 2)
            4. End result is not known. (again see 2)
            5. Time (they have time constraints - see 1-4)

When I was ten I knew I'd much rather watch two guys drive two semis into a small economy car than watch Mr. Wizard mix baking soda and vinegar again and again. Mythbusters rocks, because it is exactly what the 10+ set is capable of, and it also shows them the constraints of their knowledge, because the Mythbusters actually do discredit themselves on the show. You'll hear them say things like "I think you're way off base with your method" or "I'm really happy with the results," and if you hear that from the old guy in the beret it's usually because it was an effective (or ineffective, as the case may be) low-level experiment. It's a simple formula:

            1. Find a problem.
            2. Conduct an experiment.
            3. Measure the results (for better or worse)!
            4. Blow something up!!
            5. Profit!!!

Now I'm not saying that all of their experiments are 100% right for all levels of science, I'm just suggesting that they are about as good as you get with pre-algebra to algebra level math. And that isn't that bad; after all, that's where we get things like the lever, steam engine, plumbing, and a lot of other cool crap (like higher math). I remember building a trebuchet for a lower level physics class (10*?); they mostly sucked, but we did the algebra (Newtonian mechanics), some of us got A's, most of us didn't, and when we were done we had learned a little (by trial and error) about trajectories and conflicting forces, not to mention recording our results. It wasn't in vain; it was a nice precursor for things to come. Between Mythbusters and American Idol I'd easily rather have my kids watch Mythbusters even if they're wrong 80% of the time.

            I'm not apologizing for the Mythbusters in the least.

            • Re: (Score:3, Insightful)

              by Skater ( 41976 )
              I just wish they wouldn't use "CAUTION! SCIENCE CONTENT!" like it's torture to have to talk about science. That and their occasional butchering of statistics are the only things I don't like about the show.
        • Re:Well... (Score:5, Interesting)

          by syousef ( 465911 ) on Tuesday April 24, 2007 @12:30AM (#18850159) Journal
          Hey, they're teaching kids to go out and prove things for themselves rather than believe them off the bat, and that's never a bad thing.

          Yes it fucking well is a bad thing when they don't teach you how to do it. They're teaching skepticism but then they're teaching hillbilly scientific practice instead of logic and the scientific method. The result is you get a bunch of kids who are rude, and think they know everything just because they can provide a counter-argument backed up with nothing but the shoddiest proof. That is very much a bad thing.

The Mythbusters basically piss on the scientific method in every show, drawing wild conclusions from a single ill-thought-out experiment, often with no controls (or weak ones), and often testing a single instance or brand and then generalising for all of that type of product.

          Another poster put it correctly. People watch because they blow shit up, which is fine as far as entertainment goes. However no other show presents bad pseudo-science as science and fucks up the minds of kids who then think they understand science, when at best they understand skepticism.

          Every time I've said this here I've been modded down but fuck it, it needs to be said.
          • Re:Well... (Score:5, Insightful)

            by shadwstalkr ( 111149 ) on Tuesday April 24, 2007 @08:36AM (#18853285) Homepage
            Maybe we should be teaching kids how to do science in school instead of letting the Discovery channel do it. Mythbusters can inspire kids to be passionate about science, and I think that's about all we can expect from a TV show.
            • Re:Well... (Score:5, Interesting)

              by That's Unpossible! ( 722232 ) on Tuesday April 24, 2007 @12:58PM (#18857299)
              Why can't a privately funded entity teach science? What makes a government school the best choice to teach science? I agree that this particular show is not a good choice, but let's not just wipe TV or the internet out and put government schools up on a pedestal.

              At the very least, scientific TV shows encourage people to learn more about science and the scientific method.

Carl Sagan taught me more about science with his Cosmos series (and it has stuck with me) than any government school ever did. When I heard about this search engine named "Google" back on Slashdot so many years ago, I can still remember thinking back to the Cosmos episode where Sagan was talking about large numbers, like googol and googolplex. To see him try to roll out a piece of paper with not a googolplex of digits on it, but merely the standard notation of a googolplex (1 followed by a googol zeroes), sticks with you. And on the smaller scale, to watch him place a drop of oil on a lake, and come back an hour later to explain that the entire surface of the lake now had a microscopic layer of oil across it. Or to demonstrate Einstein's theories of gravity with a stretchy sheet of material and some heavy balls of different sizes. Or demonstrating the 4th dimension by showing a "shadow" of a 4th-dimensional item as a 3-dimensional item, much as we can see the shadow of a 3-dimensional item drawn on paper. I haven't seen Cosmos in a decade, and can still remember things he talked about.

              This is something government schools rarely ever do, unless you happen to be assigned to the one-in-a-million inspirational teacher.

Another example: Planet Earth, now running on Discovery HD Theatre. An absolutely stunning piece of scientifically interesting video.
        • Re: (Score:3, Interesting)

          by gsslay ( 807818 )
          Hey, they're teaching kids to go out and prove things for themselves rather than believe them off the bat, and that's never a bad thing.

          It's only a good thing if they're first taught how to think critically and how to prove things. Too much of people "going out and proving things for themselves" involves three steps;

          1/ Whoah! This is very complicated! It would take years of study to fully understand it.
          2/ Screw that, I'm just going to apply my in depth knowledge of what looks right, and what seems to me

          • Re: (Score:3, Insightful)

            by dosquatch ( 924618 )

            It's only a good thing if they're first taught how to think critically and how to prove things. Too much of people "going out and proving things for themselves" involves three steps;
            <trim>

            It all depends on what your question is. "Do things always fall down?" is quite a different question than "Why do things always fall down?" Describing the nature of gravity is an entirely different field than describing its effects. There is a lot more math and laboratory rigor in designing and proving cold fusion than there is in launching a car off of a large mound of dirt to see if it would still be drivable*. There is also a lot less of the process that would make for compelling television. The drive,

      • While the MythBusters is entertaining, it's not exactly science.

        Its "Science entertainment", much as the super 12, rugby 7's or one day cricket are "Sports entertainment".

        Sorry I can't give any USA-centric examples of sports entertainment... maybe baseball?
        • Sorry I can't give any USA-centric examples of sports entertainment... maybe baseball?
          Professional wrestling.
  • by evwah ( 954864 ) on Monday April 23, 2007 @09:27PM (#18848489)
it always seems to me that their conclusions are specious. I can't think of any specific episodes right now, but they oversimplify the data, build elaborate setups that are prone to error, and use inadequate controls.

    not to mention that they always try to prove stupid crap like "a rolling stone gathers no moss". I'm waiting for them to try "the grass is always greener on the other side", or "it takes one to know one".
    • Re: (Score:3, Informative)

      by EvanED ( 569694 )
      The example I like to use, though apparently they revisited this one (I "can't" afford cable unfortunately), is they were trying to figure out whether the aerodynamic drag of running your car with your windows down was greater than the engine drag of running the A/C.

      But to test this, they used SUVs (if you are concerned about fuel efficiency, are you driving one?) going at about 40 mph (air drag I think increases by the square of the speed at those speeds, so highway speeds could significantly change the re
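As a back-of-the-envelope check of the parent's point (an editor's sketch, assuming the usual rule of thumb that aerodynamic drag force scales roughly with the square of speed):

```python
# Drag force ~ v^2, so the ~40 mph test speed understates highway-speed drag.
test_speed, highway_speed = 40.0, 65.0    # mph
print((highway_speed / test_speed) ** 2)  # ~2.6x the aerodynamic drag at 65 mph vs 40 mph
```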
      • Re: (Score:2, Insightful)

        by maxume ( 22995 )
        If AC on full is better than windows down at low speed, AC on half is quite likely to be better than windows down at higher speed. It's a reasonable first test for a television show. It would have been nice if they tried it in a couple of different models though.
They later came back and said that about 50 mph is the crossover point: above that, A/C is more efficient than windows down. I'm surprised it made much difference, but then, for my car, A/C on vs. off is practically negligible.
      • by alshithead ( 981606 ) on Monday April 23, 2007 @10:06PM (#18848809)
They also tested tailgate up or down on a pickup truck for mpg. Up won and they fully explained why. I also really enjoyed the show that included bullets being shot into a pool, including a big .50 cal, with the idea that being submerged could save your life if you're being shot at. I don't think you can completely pan them for a couple of specious results when overall their show is REALLY cool.
        • by BobPaul ( 710574 ) * on Monday April 23, 2007 @11:43PM (#18849665) Journal

          They also tested tailgate up or down on a pickup truck for mpg. Up won and they fully explained why.
          They then revisited that one using a flow meter in the gas line instead of extrapolating the data based off the air intake sensors. They found tailgate removed to be most efficient, followed by mesh, tailgate up/hardcover, and tailgate down. Source [kwc.org]
Bob knows how you got modded insightful. Orders of magnitude (pushing things until they're measurable) is perfectly reasonable and useful.
      • Re: (Score:2, Informative)

        YMMV (hah!), but in my last car, the AC was either "on" or "not on," and temperature was controlled by the cooled air being blown over the engine to a varying degree (standard heater). You may have a little dial on the interface that you think is adjusting "how much cold," but the reality and energy consumption may be functionally quite different. In other words, having the AC on full or not does not always have an appreciable effect on gas consumption. Just something to think about.
      • Re: (Score:3, Informative)

        They DID revisit this one, and they did say that it changes based on speed. For the car they used, at under 50mph the window open was better, and above 50mph AC was better. So as a broad statement, AC is better when you're going fast and having the window open is better when you're going slow.

Wiki has the details: http://en.wikipedia.org/wiki/MythBusters_(season_3)#AC_vs._Windows_Down [wikipedia.org]
      • Re: (Score:3, Insightful)

        by brunes69 ( 86786 )
        It doesn't matter what the MPG of the cars were as long as it's the exact same car and year for comparison.

What was stupid is how small their sample was - they were planning on driving off a whole tank but then said that would take too long, so they sucked it down to a gallon in each car or something. Which I don't think is a fair test; how do you know if the AC performs better as it runs longer or something?

Also it's dumb that they had the AC running full blast, as that's not a realistic scenario - once the c
I am reminded of the episode where they busted the myth from old cartoons that blocking the barrel of a gun with a finger causes the gun to backfire, injuring the shooter and splitting the barrel. However, the barrel of the gun they used was not as flimsy as the guns of yesteryear, especially the relics depicted in the old cartoons.
Agreed. Every time I've watched their show, they make some gross procedural or statistical error. Not that the show isn't enjoyable, but it does piss me off when they decisively state their conclusion without using methods with any sort of validity.

      I've wanted to write an article like that for a while now.
I agree. Their show on the Hindenburg particularly disappointed me. They built the experimental rigs, showed that the mixture used to waterproof the balloon would in fact form thermite, and showed that a scale model covered in the mixture and filled with hydrogen burned much faster than a scale model filled with hydrogen alone. Conclusion? The waterproofing mixture wasn't at fault, and it was the hydrogen that caused the disaster. WTF?! How do you reach that conclusion in direct opposition to what your
      • Re: (Score:3, Informative)

        by jandrese ( 485 )
        Did you watch the same one I did? The conclusion I got from it was "both the Hydrogen and the Paint had something to do with it." They were debunking the myth that the Hydrogen had little or nothing to do with the fire and it was the paint that did the Hindenburg in.

        That's one thing I've seen a lot online. People watch maybe 80% of the show and then go online and say "they made a gross procedural error!" when in fact they're testing something subtly different or not doing what those people think. From
    • Re: (Score:3, Informative)

      by tlhIngan ( 30335 )

it always seems to me that their conclusions are specious. I can't think of any specific episodes right now, but they oversimplify the data, build elaborate setups that are prone to error, and use inadequate controls.

      Or, more likely, test a subset and claim it applies to the entire set.

      E.g., the Cell phones on a plane [wikipedia.org] episode, where they claim a cellphone will not interfere with the avionics of a jet. Unfortunately, this is true for the cellphones they tested, plus the jet they tested. It unfortunately does

  • Precision? (Score:5, Insightful)

    by Bill Walker ( 835082 ) on Monday April 23, 2007 @09:29PM (#18848509)
I dunno, the fact that he's willing to state the correlation coefficient so precisely makes me leery of his own statistical expertise.
I dunno, the fact that he's willing to state the correlation coefficient so precisely makes me leery of his own statistical expertise.

The fact that they use a standard deviation to test a hypothesis, you know, instead of Hypothesis Testing [wikipedia.org], makes me certain that he doesn't know jack about statistics.

      you do _NOT_ use descriptive statistics to study samples!!!

I can't believe how wrong this analysis is... What you're supposed to test is that when seeded with a yawn, you're more susceptible than wh
  • by Timesprout ( 579035 ) on Monday April 23, 2007 @09:30PM (#18848519)
Fascinatingly detailed observations like this must go down a treat at the parties you attend, Brandon Hansen.
  • by slughead ( 592713 ) on Monday April 23, 2007 @09:31PM (#18848525) Homepage Journal
    In almost every episode they do something that invalidates their own findings.

Sometimes they don't do things more than once (even when required), other times they don't adequately recreate the conditions of the "myth."

    The show is entertaining as hell, and sometimes they do conclusively prove things.
    • Re: (Score:3, Insightful)

      In prime time television, Blowing Shit Up > Conducting Scientifically Sound experiments. That said, I love the show. My favorite is when they put an air-powered ejection seat inconspicuously into a normal car. And it worked!
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Announcer: In today's episode, Jamie and Adam test whether Slashdot poster slughead is capable of posting intelligent, insightful comments.

      {Commercial break. }

      { Adam and Jamie read slughead's comment. }
      { Jamie chuckles. Adam laughs really hard, pisses his pants, and continues to laugh really hard. }

      { Commercial break. }

Announcer: BUSTED! slughead is apparently capable of stating nothing but the blatantly obvious!

    • by wesmills ( 18791 ) on Monday April 23, 2007 @11:04PM (#18849321) Homepage
      They have stated both on the show and in other interviews that a lot more testing goes on than just what we see on the show. For the "showcase" experiment on each show (the one that opens and closes the program), the producers have taken to placing video of most or all of the tests on their Discovery website: http://www.discovery.com/mythbusters [discovery.com]
  • by Anonymous Coward
I'm often surprised at how many people take the MythBusters seriously. Their show is entertaining, but it's important to realize that neither Jamie nor Adam really has a scientific or engineering background. To think that they could "bust" a "myth" with any degree of certainty is laughable. But every so often I hear somebody use MythBusters as a reference, even intelligent people with at least some scientific background, like medical doctors and geologists.

    I'm all for watching their show for its entertainm
My favorite episode is when they proved that diving into water is effective in evading gunfire. They placed a gelatin mold 18 inches under water in a swimming pool and fired a .50 cal at it. The gelatin wasn't pierced. They repeated this test time and time again at several calibers (IIRC they even went to 12 inches below the surface).

      Sometimes they don't do so good, but other times they do extremely well.

    • Re: (Score:2, Insightful)

      by maxume ( 22995 )
      They occasionally do stuff worth pointing at; they spend big piles of money on stupid shit, and often demonstrate that 'simple' approaches are worthless for doing this or that.
    • by allenw ( 33234 )
      It should probably be noted that Grant [discovery.com] has a BS in EE from USC.
      • Re: (Score:3, Funny)

        by cyphercell ( 843398 )

        In addition to operating R2-D2 (one of only a handful of official operators), Grant ...

        R2-D2, Official Operator? This guy has more nerd creds than everyone in this thread put together!

Unfortunately, he BSs about a vast number of other kinds of E, in a manner that is less entertaining and more assholish, often insulting other cast members or the audience.
  • Science (Score:5, Insightful)

    by Turn-X Alphonse ( 789240 ) on Monday April 23, 2007 @09:33PM (#18848543) Journal
Science and entertainment do not play well together, mostly because science requires real thought and watching TV basically does not. If you attempt to put real science on TV today you will watch the other 6.9 million TV stations each gain 1 more viewer while you get a dust bowl rolling through. Maybe it's time we realized that what the mass public wants is crappy reality shows, cooking, and some bullshit made to look like information but that is in fact 75% CGI or "docudrama".

The above is why I wouldn't trust Mythbusters as far as I could throw them. The entire show screams entertainment rather than science. Unfortunately I can't find the name of a program that aired in the UK about 6 months ago. It took a team of 4 people to a deserted island, and each week they had a task to complete; they were only allowed to use what was on the island and what was given to them each week (as well as a tool set because, well, no tools = screwed). They had to do things like make fireworks, record a song and various other "minor" things, which required them to render down various materials to obtain the chemicals they needed to complete each task. What they did and what it resulted in was very clearly labeled, with the real science behind it explained.

Sadly, as I recall it basically got replaced with some crappy school-based soap opera where the kids say "innit" and the teachers fuck anything with two legs (including the kids, as the current trailer at least implies). So after this long rant, I guess we just give up on science and go back to the Discovery channel; maybe we can catch the 3 minutes of it that isn't Nazis or some form of sport!
    • Re:Science (Score:5, Informative)

      by Excors ( 807434 ) on Monday April 23, 2007 @10:04PM (#18848781)

Unfortunately I can't find the name of a program that aired in the UK about 6 months ago. It took a team of 4 people to a deserted island, and each week they had a task to complete; they were only allowed to use what was on the island and what was given to them each week (as well as a tool set because, well, no tools = screwed). They had to do things like make fireworks, record a song and various other "minor" things, which required them to render down various materials to obtain the chemicals they needed to complete each task. What they did and what it resulted in was very clearly labeled, with the real science behind it explained.

      Would that be Rough Science [open2.net]? In particular, it sounds like the second series [wikipedia.org]. I've seen a couple of the series over the past few years, and I believe it did a pretty good job of being a science show – the interest comes from watching people who actually know what they're doing, designing and building ingenious solutions (admittedly with very convenient tools and materials available) to problems that aren't inherently interesting (like making toothpaste or measuring the speed of a glacier), rather than relying on 'interesting' problems that are large/dangerous/explosive and lacking focus on the solution process.

You mean the History Channel? Since AFAIK MythBusters is on Discovery. Granted, aside from the WW2 stuff, the History Channel is also IMHO going to crap with how much they show about UFOs. But I suppose at least they do it in a "semi-scientific" manner and actually examine why UFOs are or aren't real, along with the history and social phenomena of the matter...
    • If you attempt to put real science on TV today you will watch the other 6.9 million TV stations each gain 1 more viewer while you get a dust bowl rolling through.
      To this I can only say: "Cosmos"
    • by cananian ( 73735 )
      "Escape from Experiment Island" was another Discovery Channel show with a concept similar to the one you describe. The science was rather more basic, however, and it aired in 2003, not "6 months ago". It lasted for a single season before being canned.
    • Re:Science (Score:5, Interesting)

      by krayzkrok ( 889340 ) on Tuesday April 24, 2007 @12:04AM (#18849885) Homepage
      What "science" doesn't need, though, is the attitude that "real science" is above casual entertainment because "real science" is so staggeringly boring that hardly anyone would want to watch it. Science isn't some ivory tower, exclusive club that only the most arrogant can subscribe to. All science is, and this is what programs like Mythbusters try to get across, is applying logic and investigation to theories, instead of believing heresay and anecdote without question. You don't have to be a nuclear physicist to do science. Kids do science in science class every day in schools across the world. Teaching those kids normally involves simple examples of science to get them interested in asking more in-depth questions over time. This is what program like Mythbusters are all about. That some adults like to watch them because they "blow shit up" helps to broaden its appeal so that it doesn't get cancelled. It's not supposed to be rigorous, it's supposed to get you thinking. Here we are on Slashdot talking about it, so it achieved something.

      Of course that's not to say there isn't room for more demanding science shows on television, and you cite a good example, because whether TV forces you to think or not is purely down to the quality of the programming. There is a serious issue in terms of the bias TV has towards undemanding entertainment, but where should the blame lie? Ultimately the people behind these stations are trying to make money, and they do that by giving people what they want (or what they think they want). We've created a monster.

  • Lies, Damn Lies and !!!! .... oh, you know...
  • Not quite, OmniNerd (Score:5, Informative)

    by Miang ( 1040408 ) on Monday April 23, 2007 @09:38PM (#18848587) Journal
TFA's conclusion is correct but its methods are wrong. For this kind of data, correlations aren't the appropriate test; they should have used a chi-square test. Using TFA's assumptions -- total sample size of 50, 4 yawners out of 16 not seeded, 10 yawners out of 34 seeded -- the chi-square value is .10, which falls well short of the critical value of 3.84 for significance. Not that it matters anyway, but it's pretty funny to read an article debunking statistics that employs inappropriate statistics itself...
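For anyone who wants to check that number, here is roughly how it falls out (an editor's sketch using scipy, assuming the 2x2 table the parent describes):

```python
from scipy.stats import chi2_contingency

#           yawned  didn't
table = [[10, 24],   # seeded   (34 people)
         [ 4, 12]]   # unseeded (16 people)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(chi2)  # ~0.105, well short of the 3.84 critical value for p < 0.05 at 1 dof
print(p)     # ~0.75
```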
    • by twifosp ( 532320 )
I agree; however, they didn't have enough of a sample for a chi-square. Or a fitted ANOVA, for that matter. But yes, any kind of coefficient using an R or R squared is definitely not the appropriate test in this case.

It always bugs me that they don't even try to collect data that fits a normal distribution or do any proper means testing. But oh well, it's TV, them's the breaks.

    • references (Score:3, Insightful)

      by Anonymous Coward
      I like his references, too..

      reference 5 is an episode that won't air for 2 days (maybe he's from the future!)

      references 7 and 8 are forum posts (ref. 8 has just 2 replies)

      two references are news stories..

      these do not suggest a thorough exploration of the matter, but he cites them as if they are authoritative sources
    • Whoa there... (Score:3, Informative)

      by Winawer ( 935589 )
Well, a chi-squared test would have worked too, but so would the phi correlation (a correlation between two dichotomous / binary variables), which can be computed exactly the same way as ... the Pearson correlation, which TFA used. In fact, if you take the chi-square value you worked out to a few more decimal places, 0.10504 (from R), divide by 50 (= N, the sample size), and then take the square root, you get 0.046, which is the phi (and hence Pearson) correlation coefficient for TFA's data. I can't tell if OmniN
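A quick numerical check of that equivalence (an editor's sketch with numpy; the 0/1 coding is assumed from the same 2x2 table as above):

```python
import numpy as np

# 1 = seeded / yawned, 0 = not.  34 seeded (10 yawned), 16 unseeded (4 yawned).
seeded = np.array([1] * 34 + [0] * 16)
yawned = np.array([1] * 10 + [0] * 24 + [1] * 4 + [0] * 12)

pearson = np.corrcoef(seeded, yawned)[0, 1]
phi = np.sqrt(0.10504 / 50)   # sqrt(chi-square / N)

print(pearson)  # ~0.0458, the figure quoted in TFA
print(phi)      # ~0.0458, the same thing
```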
    • Re: (Score:3, Interesting)

A great example I used to use when I taught 2nd year stats at uni was the odd correlation between the number of Protestant ministers per 1000 people and the number of teenage pregnancies per 1000 people in rural New South Wales in the 1980s - the correlation is about 0.96.

      That doesn't mean that the Ministers are running around getting under-age girls pregnant, but rather that during the drought people turned to "simple pleasures" and sought spiritual easing of their hardships. Often a high correlation im

  • by Ichoran ( 106539 ) on Monday April 23, 2007 @09:49PM (#18848681)
Not only was MythBusters embarrassingly statistics-free, but the "busting" was done using a wholly inappropriate statistical technique. Hansen used a correlation-based test, which assumes that the data follow a Normal distribution (which a bunch of 1s and 0s do not).

    There is a very well-known test, the chi-square test, that deals with exactly this case. (Given the small sample sizes, the Fisher exact test may give better results.) Someone should point Hansen to the Wikipedia page on the topic.

    For example, if there are 16 non-primed people, with 4 yawning and 12 not (for 25%), and there are 34 primed people, with 10 yawning and 24 not (for 29%), the chi square test gives a p value of 0.74.

The values Hansen supposes are significant (4,12 and 12,24) are not: p = 0.29.

    You have to go all the way to 4,12 and 17,19 (i.e. 47% on a sample of 36) to get significance.

    MythBusters was wrong to conclude that their results were significant, but Hansen was equally wrong to conclude that he had shown that Mythbusters was wrong.
    • by gumbi west ( 610122 ) on Monday April 23, 2007 @10:02PM (#18848773) Journal
You were actually right that it's Fisher's exact test you want; it's similar to doing a complete permutation test, which is exact. Because this is a 2x2 table, there's no reason not to use the exact test. The actual result has a p-value of 1.0 in a two-tailed test (whoops!), and even 4,12 and 17,19 has a p-value of 0.22 in the two-tailed test. Indeed, it would have to go all the way to 4,12 and 21,15 to be significant at the 5 percent level for the two-tailed test. The two-tailed test is the right one, because you had better believe they would have made a big stink if it had come out the other way!

But all this aside, I'm not sure I like the experiment. Why bore people? Why have so many in the room? The 4-out-of-16 baseline is way too high; I'd say they were better off looking at narrow time slices and natural yawns (i.e. do yawns happen at random or do they set off avalanches). Then there is only one group and you're just testing the Poisson-process assumption of uncorrelatedness.
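For reference, the exact test described above is a one-liner in scipy (an editor's sketch, using the same assumed 2x2 table; it reproduces the two-tailed p-value of 1.0 the parent quotes):

```python
from scipy.stats import fisher_exact

table = [[10, 24],   # seeded:   yawned / didn't
         [ 4, 12]]   # unseeded: yawned / didn't

odds_ratio, p_two_tailed = fisher_exact(table, alternative='two-sided')
print(p_two_tailed)  # 1.0, the observed split is about as unsurprising as it gets
```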

      • by ozbird ( 127571 )
        Perhaps, but there certainly seems to be a strong correlation between discussions on the finer points of statistical analysis and yawning... Perhaps you've discovered the Yawndot effect?
      • by Otter ( 3800 )
I'd say they were better off looking at narrow time slices and natural yawns (i.e. do yawns happen at random or do they set off avalanches). Then there is only one group and you're just testing the Poisson-process assumption of uncorrelatedness.

        The advantage of their way is that you know the causality of an increased yawning rate. In your design, it's harder to rule out temperature fluctuations or conversations about, say, the appropriateness of a correlation test for binary data.

  • by ingo23 ( 848315 ) on Monday April 23, 2007 @09:58PM (#18848733)
Actually, the article shows only a basic understanding of statistics. Correlation is indeed a measure of the relationship between a cause and an effect, but it's only part of the picture. Yes, a correlation of 0.04 is far from an obvious dependency, but that's not the point.

The MythBusters' numbers may mean that someone is 20% more likely to yawn if seeded. Now, what's important is to evaluate the margin of error for this statement given the sample size.

Where the article is definitely wrong is in claiming that the sample size does not change anything. The sample size basically reduces the probability of error: the higher the sample size, the more likely it is that the statement "someone is 20% more likely to yawn if seeded" is true. However, at their sample size, it is not unlikely that the error margin is comparable to that 20% difference, which would invalidate the experiment.

The detailed calculation of a sufficient sample size is left as an exercise for the reader.
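Taking up that exercise, here is a rough sketch of the usual normal-approximation formula for the sample size needed per group to detect a difference between two proportions (an editor's illustration; the 25% vs. 45% yawning rates are an assumed effect size, not figures from the show):

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sided test of two proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Even a fairly large jump in yawning rate needs far more subjects than the ~50 they used:
print(n_per_group(0.25, 0.45))  # ~89 people in EACH group
```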

  • you spent way too much time on this. move on, man.
  • by acvh ( 120205 ) <geek.mscigars@com> on Monday April 23, 2007 @10:25PM (#18848967) Homepage
    Dr. Seuss has already proved the contagion of yawns.
  • The Myth Busters is, of course, fun. It is fun to see things blow up. It is fun to watch two pseudo-nerds fight.

Here's the thing: not everything they do is crap. Sometimes they get it more or less right, or at least right enough that one should pause. Most of the time they do a pretty mediocre job of it.

Don't confuse science with entertainment. It seems that the MythBusters work from a layman's perspective, and as such, fancy methodologies would confuse the audience.
  • by tfoss ( 203340 ) on Monday April 23, 2007 @10:52PM (#18849201)
    At least, it can be. A quick search at Pubmed [nih.gov] brings up eight [sciencedirect.com] studies [sciencedirect.com] that examine the phenomenon of 'contagious yawning,' including in macaques [royalsoc.ac.uk] and chimps. [nih.gov] So even if the mythbusters experimental setup was pretty crappy, and their sample was too small to have enough power to find an effect, at least their conclusion agreed with the literature.

    -Ted
    • Maybe it depends on the person.

I'm very susceptible to "contagious yawning". When I watched this episode of Mythbusters, it set off a yawning fit that lasted for hours. Now I refuse to watch it when it is re-broadcast.

      Just reading a discussion about the episode has set me off. It's not as bad this time, but I had no urge to yawn 5 minutes ago, and now I'm yawning about twice a minute.

      I expect that some people aren't affected by "contagious yawning". Maybe, even most people aren't affected. But,

  • Oy (Score:5, Insightful)

    by Billosaur ( 927319 ) * <wgrotherNO@SPAMoptonline.net> on Monday April 23, 2007 @11:16PM (#18849413) Journal

    Look, to spare everyone the continued arguing over which statistical test to use at what probability level and the lack of proper control groups, let me say that MythBusters has never claimed to be a science show like Mr. Wizard. The guys are special effects designers for crying out loud! They are good at what they do, and while their scientific methodology and statistics may be a bit wonky at times, there are some experiments I've seen in peer-reviewed journals that aren't much better. Science education in the United States gets worse all the time, and if these guys can inject some life and curiosity into the current generation to get them interested in science, I applaud the effort.

Seriously, does anyone watch Mythbusters looking to prove or disprove a myth? They take a VERY rudimentary look at most myths. One example is myths that involve old inventors such as Leonardo da Vinci. They take less than a week to prove or disprove the theory. Leonardo likely worked on it for years, so what does one week of two Hollywood effects men mean?

    The show is about entertainment and fun, not about the scientific method.
  • by dj42 ( 765300 )
    I'm not kidding, I actually yawned when I read tfa. Maybe I'm just tired, but poking holes in a loosely scientific tv show for those reasons is stupid. I mean, if you're taking results of Mythbuster experiments that seriously, I would urge you to find a secondary information source that might be better suited to that kind of analysis.
  • by SilentChris ( 452960 ) on Tuesday April 24, 2007 @12:38AM (#18850229) Homepage
    I've been reading through the comments and I'm fairly alarmed by how many people think Mythbusters isn't worthwhile based solely on scientific merit.

Look, the show never said it was teaching people about science. Adam and Jamie themselves have said many times they're more entertainment than science. They're special effects people by trade, not scientists. They build things and blow shit up. It's what they enjoy doing. You can even see it on Jamie's face when they're doing myths that don't involve blowing things up (e.g. Adam building a wind tunnel for the penny drop myth).

    When the show first started, there wasn't even mention of science. They looked at urban legends such as rocket car and getting airborne in a lawnchair. The show was about the stories themselves, not the methods. Only in about season 2 or so did they start including things like "controls" and "variables" (probably by Discovery's request), but they never lost sight of the fact that they're a TV show, and television (by and large) is meant to entertain.

But that leads to an interesting question: even if they DID follow proper scientific method, how do you even apply it to some of the myths they examine? For example, they did a myth where a hillbilly chased a raccoon into a sewer pipe, decided to throw gas down it, attempted to fill the thing with fire to kill the raccoon, and was purportedly "shot out". How on earth do you test that scientifically? Nowhere does the myth say how big the pipe was, how much gasoline was used, etc. Nowhere does it mention if he was stuck (which is important, as they found the man could only be shot out if he was wrapped in a sabot). All they have is a fun story to go off of.

    If nothing else, Mythbusters gets people interested in the process of examining life, not teaching how to use proper scientific method. If their only accomplishment is making people critically question things that are usually taken at face value, they'll have succeeded in my mind.
  • Statistics (Score:4, Funny)

    by argStyopa ( 232550 ) on Tuesday April 24, 2007 @08:54AM (#18853471) Journal
The only scientifically relevant statistic for Mythbusters is that a high percentage of the audience (most of the men, and probably a significant fraction of the women) want to see them do more experiments with Kari in a body suit (a la the butt-moulding exercise).

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...