Why Standard Deviation Should Be Retired From Scientific Use

An anonymous reader writes "Statistician and author Nassim Taleb has a suggestion for scientific researchers: stop trying to use standard deviations in your work. He says it's misunderstood more often than not, and also not the best tool for its purpose. Taleb thinks researchers should use mean deviation instead. 'It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error." The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market "volatility", it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation. But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.'"
  • by Anonymous Coward on Wednesday January 15, 2014 @04:53PM (#45969865)

    ...because people use it incorrectly in economics? Get bent. The standard deviation is a useful tool for statistical analysis of large populations.

    • by Fouquet ( 753286 ) on Wednesday January 15, 2014 @05:50PM (#45970447)
      +1 this. The problem here is the author's impression that "social scientists" and economists are scientists. The groups that he excludes in the first paragraph (physicists) are scientists. Anyone attempting to implement a statistical model designed for a large (and Gaussian) data set on a small number of data points (as the article's example does) should expect to get an answer that is at best marginal. Any scientist who has received even the most basic statistics and/or data analysis training knows this. Understand the problem first, then take enough data points, then carry out your statistical analysis and formulate conclusions.
    • by rwa2 ( 4391 ) * on Wednesday January 15, 2014 @08:08PM (#45971681) Homepage Journal

      ...and besides... JUST THINK of all the rigorous Lean Management courses that will have to re-certify all of their "Six-Sigma Black Belts" to some kind of "Half-Dozen of the Other" degrees!

      PANDEMONIUM!!!

    • So you want to retire a statistical term......because people use it incorrectly in economics? Get bent. The standard deviation is a useful tool for statistical analysis of large populations.

      Agreed that this is a ridiculous proposal. He probably just wants more publicity.

      This was the guy who wrote the book "Antifragile", which I had hoped would educate and broaden my way of thinking, in the same way that the Malcolm Gladwell books ("Tipping Point", "Blink", "Outliers") did. He ended up droning on and on w

  • Basic Statistics (Score:5, Insightful)

    by TechyImmigrant ( 175943 ) on Wednesday January 15, 2014 @04:55PM (#45969895) Homepage Journal

    The meaning of standard deviation is something you learn on a basic statistics course.

    We don't ask biochemists to change their terms because the electron transport chain is complicated.
    We don't ask cryptographers to change their terms because the difference between extra entropy and multiplicative prediction resistance is not obvious.

    We should not ask statisticians to change their terms because people are too stupid to understand them.

    • by Mr D from 63 ( 3395377 ) on Wednesday January 15, 2014 @05:04PM (#45970029)

      We should not ask statisticians to change their terms because people are too stupid to understand them.

      But doesn't that give an unfair advantage to statisticians? You have to give everyone a chance!

    • Actually, meaningful and readily understood labels are considered a good thing, and beneficial to those who work in the field they apply to.

      Except in programming; there, based on my experience, you should use whatever label happens to be lying around - never change it, even if it means the opposite of what it does.

    • We should not ask statisticians to change their terms because people are too stupid to understand them.

      I've always wondered about this attitude.

      For me, any change requires an analysis of risk/reward versus value. For example, if code contains confusing names, it might be worthwhile to refactor it.

      The tradeoff is in the time spent refactoring versus the perceived value - if it's a mature product that largely works with few planned updates and few people will have to deal with the confusion, then the effort outweighs the returned value. If the code is open source, being actively developed and with many eyes lo

      • by PRMan ( 959735 )
        Especially when in Visual Studio you can right-click and Rename throughout the project. I change names that don't make sense into ones that do all the time.
      • Everyone understands the US Measures? How many pottles are there in a firkin? Or how many nails in a chain?
        • Everyone understands the US Measures? How many pottles are there in a firkin? Or how many nails in a chain?

          Everyone else understands what I meant.

          What are you going on about?

        • Re:The big picture (Score:5, Interesting)

          by mythosaz ( 572040 ) on Wednesday January 15, 2014 @07:09PM (#45971175)

          I would have said "18 half gallon pottles to the quarter-barrel firkin."
          Wolfram Alpha says 15.75 pottles to the firkin, but that's because of US/UK gallon conversions, I reckon.

          352 nails in a chain - which was interesting to me, in that Google includes those units in its calculator.

          I now know more about pottles, firkins, nails and chains than I did when I woke up. I shudder to think about what got pushed out of my old head to make way for these new minutiae.

      • Re:The big picture (Score:5, Informative)

        by FriendlyStatistician ( 2652203 ) on Wednesday January 15, 2014 @07:01PM (#45971095)

        Hi, I'm a statistician.

        It's not so simple to just say "ok, we're going to use the Mean Absolute Deviation from now on." The use of standard deviation is not quite the historical accident that Taleb makes it out to be--there are good reasons for using it. Because it is a one-to-one function of the second central moment (variance), it inherits a bunch of nice properties that the mean absolute deviation does not. There is not a one-to-one correspondence between variance and mean absolute deviation.

        Taleb is correct that the mean absolute deviation is easier to explain to people, but this is not just a matter of changing units of measure (where there is a one-to-one correspondence) or changing function and variable names in code (where there is again a one-to-one correspondence). Standard deviation and mean absolute deviation have different theoretical properties. These differences have led most statisticians over the last hundred years to conclude that the standard deviation is a better measure of variability, even though it is harder to explain.
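
        (Illustration, mine rather than the parent's: two toy samples with the same standard deviation but different mean absolute deviations, showing the lack of a one-to-one correspondence. A minimal Python sketch.)

          import math

          def sd(xs):
              m = sum(xs) / len(xs)
              return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

          def mad(xs):
              m = sum(xs) / len(xs)
              return sum(abs(x - m) for x in xs) / len(xs)

          a = [-1.0, -1.0, 1.0, 1.0]
          b = [-math.sqrt(2), 0.0, 0.0, math.sqrt(2)]
          print(sd(a), mad(a))  # 1.0 1.0
          print(sd(b), mad(b))  # 1.0 0.707... -- same SD, different MAD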

        • Re:The big picture (Score:4, Insightful)

          by reve_etrange ( 2377702 ) on Wednesday January 15, 2014 @07:21PM (#45971283)

          I think NNT is saying that the MAD ought to be used when you are conveying a numerical representation of the "deviations" with the intent that readers use this number to imagine or intuit the size of the "deviations." His example is that of how much the temperature might change on a day-to-day basis. According to him, it's not just that the concept is easier to explain, but that it is the more accurate measure to use for this purpose.

          Based on his other work I'm sure he understands that the STD is generally superior for optimization purposes, fit comparison, etc.

    • by ClioCJS ( 264898 )
      If only scientists were statisticians, your comment might have actually been actionable.
    • Re:Basic Statistics (Score:4, Informative)

      by ShanghaiBill ( 739463 ) on Wednesday January 15, 2014 @05:20PM (#45970173)

      The meaning of standard deviation is something you learn on a basic statistics course.

      I took a statistics course in college. The statistics professor taught us to think of the standard deviation as the "average distance from the average". So if you know the average (mean) then any random data sample will be (on average) one SD away. That is simple, neat, and easy to remember.

      It is also wrong.
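
      (A quick check of why, with made-up numbers: the "average distance from the average" is the mean absolute deviation, and it is generally smaller than the SD.)

        data = [0, 0, 0, 0, 100]
        mean = sum(data) / len(data)                                  # 20.0
        mad = sum(abs(x - mean) for x in data) / len(data)            # 32.0
        sd = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5  # 40.0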

    • Re:Basic Statistics (Score:4, Interesting)

      by Mashdar ( 876825 ) on Wednesday January 15, 2014 @05:22PM (#45970181)

      Didn't you hear? Gaussians are so 1893. And so are all of the other distributions with convenient sigma terms...

      And TFS calls standard deviation "root mean square error", which is only true if you assume the mean is a constant estimator for the distribution :(

      In any case, no one picked Gaussians because they are so easy to integrate... While we're at it, TFA should suggest we round the number e to 3, because irrational numbers are hard, and who cares what natural law dictates.

    • by MobyDisk ( 75490 )

      We should not ask statisticians to change their terms because people are too stupid to understand them.

      The author didn't ask anyone to change any terms. They asked people to stop using the wrong statistic. Ex: don't use the mean if you need the median.

    • by fermion ( 181285 )
      I can't really get to the article right now, but one thing that is true is that standard deviations only make sense if a sample results in a normal distribution. Normal distributions have certain qualities; one is that the mean=median=mode. If this is not true then one can still have a skewed normal curve. Many distributions are skewed normal curves, which means that a normal distribution is not necessarily the best model. Yet they are still used. This can be a problem. Here is why standard deviation is
    • by gninnor ( 792931 )

      Honestly, all of the different things I have studied had jargon that could have been explained in simpler terms, often in shorter common words. So much of it is a wall against the "stupid" people and their understanding.

      Other times there are specific concepts with only one word for them. These need to be explained when they are introduced in journals, but that would be work, and very few people have been trained to speak to laymen.

      Even within the sciences some shorthand jargon means one thing in chemistr

      • by camperdave ( 969942 ) on Wednesday January 15, 2014 @07:24PM (#45971307) Journal
        The phrase "orbital process" means entirely different things to brain surgeons and rocket scientists.
      • Bruce Lee summed it up - "Before I started martial arts, a punch was a punch and a kick was a kick. When I started martial arts, a punch was no longer a punch and a kick was no longer a kick. When I understood martial arts, a punch was a punch and a kick was a kick."

        Most people are stuck at the second level - stuck in technicalities. Few people ever reach the third level, where a punch is a punch and a kick is a kick - not because of ignorance of technicalities, but because they have transcended the technic

  • Issues (Score:5, Informative)

    by Edward Kmett ( 123105 ) on Wednesday January 15, 2014 @04:56PM (#45969913) Homepage

    On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass. And you still need standard deviation as it relates directly to the second moment about the mean.

    Also, annoyingly, Median Absolute Deviation competes for the MAD name and is more robust against outliers.
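
    (For the curious, a minimal sketch of the one-pass approach - Welford's method, my illustration rather than anything from TFA - next to the two passes MAD needs:)

      def welford_sd(xs):
          # single pass: running mean plus running sum of squared deviations
          n, mean, m2 = 0, 0.0, 0.0
          for x in xs:
              n += 1
              delta = x - mean
              mean += delta / n
              m2 += delta * (x - mean)  # uses the updated mean
          return (m2 / n) ** 0.5        # population SD

      def mad(xs):
          mean = sum(xs) / len(xs)                         # pass 1: the mean
          return sum(abs(x - mean) for x in xs) / len(xs)  # pass 2: |x - mean|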

    • by Animats ( 122034 )

      On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass. And you still need standard deviation as it relates directly to the second moment about the mean.

      Right. Some common measures in statistics date from the paper-and-pencil era, back when computation was really expensive. The same issue applies to least mean squares curve fitting, which is cheap to compute but overweights values far from the curve. This is well known, and was recognized decades ago. This is not something Taleb "discovered", or even popularized.

      (If you want to annoy Taleb and his flunkies, ask hard questions about the actual performance of his funds in years other than 2008.)

    • On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass.

      Okay, just to be clear: you're saying that we should use STD because (in part) it's faster and easier to calculate?

      Isn't that like the drunk looking for his keys under the lamppost - instead of where he dropped them - because the light is better?

      • Mean absolute deviation is a useful statistic, though I tend to actually prefer median absolute deviation.

        You can actually prove that.

        median abs dev <= mean abs dev <= standard deviation

        You can also prove that the median is the point that minimizes the mean absolute deviation, so in that sense taking the mean abs dev about the mean is kind of a strange choice here.

        That said, we can say a few things about it.

        It is a pain in the ass to calculate. It also tends to favor solutions that let outliers run wild. Least squares provides
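
        (Numerical sanity check of the chain above - my own sketch, reading "median abs dev" as absolute deviation taken about the median:)

          import random, statistics

          xs = [random.gauss(0, 1) for _ in range(10_000)]
          med = statistics.median(xs)
          mean = statistics.fmean(xs)
          dev_about_median = statistics.fmean(abs(x - med) for x in xs)
          dev_about_mean = statistics.fmean(abs(x - mean) for x in xs)
          sd = statistics.pstdev(xs)
          assert dev_about_median <= dev_about_mean <= sd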

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday January 15, 2014 @04:56PM (#45969919)

    The problem is that people think they understand statistics when all they know is how to enter numbers into a program to generate "statistics".

    They mistake the tools-used-to-make-the-model for reality. Whether intentionally or not.

    • by Deadstick ( 535032 ) on Wednesday January 15, 2014 @05:01PM (#45969987)

      Three characterizations of statistics, in ascending order of accuracy:

      1. There are lies, damned lies, and statistics.

      2. Figures don't lie, but liars figure.

      3. Statistics is like dynamite. Use it properly, and you can move mountains. Use it improperly, and the mountain comes down on you.

    • by JoeMerchant ( 803320 ) on Wednesday January 15, 2014 @05:13PM (#45970107)

      The problem is that people's attention spans are rapidly approaching that of a water-flea.

      Up until the past 50 or so years, people who learned about Standard Deviation would do so in environments with far less stimulation and distraction. Their lives weren't so filled with extra-curricular activities and entertainments that they never sat for a moment, from waking until sleep, without some form of stimulus-based pastime. When they "understood" the concept, there was time for it to ruminate and gel into a meaningful set of connections with how it is calculated and commonly applied. Today, if you can guess the right answer from a set of 4 choices often enough, you are certified as an expert and given a high-level degree in the subject.

      Not bashing modern life, it's great, but it isn't making many "great thinkers" in the mold of the 19th century mathematicians. We do more, with less understanding of how, or why.

      • by khasim ( 1285 )

        Up until the past 50 or so years, people who learned about Standard Deviation would do so in environments with far less stimulation and distraction.

        They also did so in an environment where they had to do all the math by hand (or with a slide rule).

        The math is not difficult. But it is repetitive in the extreme. So unless you were a savant you learned to pay very close attention to the numbers and what they represented. For those of you who didn't take statistics, here's a link to show you how standard deviat

      • by Nemyst ( 1383049 )
        I really, really wish I could finish my Master's just by guessing the right answers on a multiple-choice exam. Sadly, it would appear that either you're wrong, or I've picked a subject where that gross oversimplification does not apply. Either way, I think you're being blinded by your nostalgia goggles.
          I took the thesis option for my Master's, but I was in the minority; most preferred to take extra classes and just get the paper.

          If you select your institution, courses and professors carefully, I bet you can get a degree with mostly multiple choice testing determining the grades.

        • Comment removed based on user account deletion
      • by TsuruchiBrian ( 2731979 ) on Wednesday January 15, 2014 @06:20PM (#45970747)

        Not bashing modern life, it's great, but it isn't making many "great thinkers" in the mold of the 19th century mathematicians. We do more, with less understanding of how, or why.

        The easier math problems are lower-hanging fruit. As time goes on, the problems that are left become increasingly hard. Even when they get solved, average people can't understand what the solutions mean, which makes them hard to care about, and hard for newspapers to make money covering.

        Also, when you read about the history of mathematics, it's easy to feel like these breakthroughs were happening all the time, when in fact they came very slowly; the pace of discovery is probably higher now than at any point in the past.

        It's easy to say music was better in the 70's than now when you condense the 70's down to 100 truly great songs, forgetting all the crap, and compare it to what's playing on the radio today.

        • True, I'm comparing today's "median" or perhaps "average" University student with the same "average" student from 50 to 80 years ago.

          We've got a lot more population, and probably more great thinkers alive today than in the entirety of the 1800-1950 timespan, more people with opportunity, means, etc.

          It's just the everyday UniGrad you meet that I'm lamenting.

          • by TsuruchiBrian ( 2731979 ) on Wednesday January 15, 2014 @06:45PM (#45970967)

            I think it's also true that a larger percentage of people are going to university, so the average "intelligence" of people in university in terms of natural ability is probably lower now than when it was just the very best students attending.

            Most of the mediocre students today would simply not have gone to university in the past. I think the same principle holds when it comes to things like blogs: public discourse can make it seem as if people are getting dumber, when really it is just that more and more people know how to read and write and can now even be published. In the past, there was a higher cost to publishing, and you were more likely to have something important to say before being willing to incur that cost.

  • by njnnja ( 2833511 ) on Wednesday January 15, 2014 @05:02PM (#45970005)

    Standard Deviation is the square root of the second moment about the mean [wikipedia.org], an important fundamental concept to probability distributions. Looking at moments of probability distributions gives us lots of tools that have been developed over the years and in many cases we can apply closed form solutions with reasonably lenient assumptions. Then we apply the square root in order to put it in the same units as the original list of observations and get some of the heuristic advantages that he attributes to the mean absolute deviation.

    But it is a balance, and any data set should be looked at from multiple angles, with multiple summary statistics. To say MAD is better than standard deviation is a reasonable point (with which I would disagree), but to say we should stop using standard deviation (the point made in TFA) is totally incorrect.
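
    (In symbols, for readers who want them - these are standard definitions, not anything specific to TFA. One nicety MAD lacks: variance adds cleanly across independent variables.)

      \sigma^2 = \mathbb{E}\big[(X-\mu)^2\big], \qquad \sigma = \sqrt{\mathbb{E}\big[(X-\mu)^2\big]}, \qquad \mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) \ \text{for independent } X, Y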

    • by JanneM ( 7445 )

      What he is saying is not that statisticians should stop using SD in statistical theory or anything. What he's saying is that non-statisticians should stop using SD as a measure of variability when describing their data to each other. And since everybody (except statisticians) thinks SD is the average deviation from the mean, people should perhaps use that instead, and reduce confusion for everyone.

    • by neonsignal ( 890658 ) on Wednesday January 15, 2014 @06:13PM (#45970677)

      I'm a little surprised at Nassim Taleb's position on this.

      He has rightly pointed out that not all distributions that we encounter are Gaussian, and that the outliers (the 'black swans') can be more common than we expect. But moving to a mean absolute deviation hides these effects even more than standard deviation; outliers are further discounted. This would mean that the null hypothesis in studies is more likely to be rejected (mean absolute deviation is typically smaller than standard deviation), and we will be finding 'correlations' everywhere.

      For non-Gaussian distributions, the solution is not to discard standard deviation, but to reframe the distribution. For example, for some scale-invariant distributions, one could take the standard deviation of the log of the values, which would then translate to a deviation 'index' or 'factor'.

      I agree with him that standard deviation is not trustworthy if you apply it blindly. If the standard deviation of a particular distribution is not stable, I want to know about it (not hide it), and come up with a better measure of deviation for that distribution. But I think the emphasis should be on identifying the distributions being studied, rather than trying to push mean absolute deviation as a catch-all measure.

      And for Gaussian distributions (which are not uncommon), standard deviation makes a lot of sense mathematically (for the reasons outlined in the parent post).
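
      (A small sketch of the log idea - my illustration, with a made-up lognormal sample: the SD of the logs exponentiates into a multiplicative "factor".)

        import math, random, statistics

        xs = [random.lognormvariate(0, 0.5) for _ in range(10_000)]
        log_sd = statistics.pstdev(math.log(x) for x in xs)
        factor = math.exp(log_sd)  # typical multiplicative deviation, ~1.65 here
        print(factor)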

    • So, the solution to "standard deviation is hard" is to rephrase it in terms of "square root of the second moment about the mean"? I'm on board! That's totally simpler and more intuitive!

      (note for the sarcasm-impaired: that was sarcasm)

  • by RichMan ( 8097 ) on Wednesday January 15, 2014 @05:07PM (#45970057)

    There is a great difference between a mean value and an RMS value. Scientific people can work with the appropriate version so I don't see a problem with using the correct one for the correct occasion. And certainly science should stay with the correct term as appropriate.

    What I believe the person is calling for here is the most appropriate usage when communicating with the non-scientific person. This is an education issue, in that the communication really should not use either term as a shorthand but should explain in full the effect of the distribution. Science uses mean and standard deviation (often also requiring a named distribution) because they are shorthands that describe the random behavior and have full meaning without any other explanation needed. So I say use neither term when communicating with the non-scientific, as they do not fulfill the communication role for which they are intended.

    What I believe should actually be done is proper education of all, so that they understand the differences between various random distributions and move totally away from an "it is cold today, so global climate change based on heating must be a lie".

    • He isn't talking about Non Science people. He is talking about Social Science people and Science Journalist people, both of whom have educations.
      • He isn't talking about Non Science people. He is talking about Social Science people and Science Journalist people, both of whom have educations.

        So he is talking about Non Science people. Have you never read the output of a Science Journalist when they write about something you are familiar with?

        "Hav[ing] an education" doesn't make one a scientist. Doing things the scientific way makes one a scientist.

    • The problem with people accepting anthropogenic global warming is not a matter of understanding. It's a matter of people believing what they want to believe. If people want to believe in God and think that evolution diminishes the importance of God, or think that evolutionists are saying that God doesn't exist, they look for any evidence that evolution doesn't happen, no matter how flimsy, to avoid having to feel uncomfortable emotions. If people believe that they have to give up a comfortable lifestyle to
      • It is even worse than that with religious folk. All sorts of people will go into "Cognitive Dissonance" mode when their strongly held beliefs are challenged and some will refuse to change their beliefs no matter what evidence is presented. With religious people you get this plus also they are convinced that it is their faith (how strongly they cling to their beliefs - no matter what) that determines their reward in the afterlife. There is no point in attempting logical or evidence based discussions with som
  • If there are "data scientists" who don't understand what the standard deviation is, then they certainly shouldn't be calling themselves "data scientists," and quite possibly not scientists at all. What subjects are their PhDs in, I wonder? This doesn't do anything to reduce my skepticism that such a thing as "data science" really needs to exist.

    • If there are "data scientists" who don't understand what the standard deviation is, then they certainly shouldn't be calling themselves "data scientists," and quite possibly not scientists at all. What subjects are their PhDs in, I wonder?

      The problem isn't with highly-educated people; it's people who are not highly educated, or who are highly educated but in a different field.

      If a particular intersection attracts a lot of accidents, we consider the accidents to be the fault of the drivers involved. But at the same time, we recognize that aspects of the intersection might be a contributing factor as well.

      Expert drivers would never have such accidents, but if we spend some effort reblocking the intersection we could get improved safety, and some

  • Yes, use the interquartile range instead https://en.wikipedia.org/wiki/Interquartile_range [wikipedia.org]

    It is, like the median, a very robust method, not readily influenced by outliers. https://en.wikipedia.org/wiki/Median [wikipedia.org]

    The median is wickedly robust, with a breakdown point at 50%, meaning that you can throw a huge amount of junk data at it and it still doesn't care.

    The arithmetic mean and the standard deviation are both junk, often worse than the too-often-assumed-normal data thrown at them.
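
    (Toy demonstration of that breakdown point, with deliberately junk numbers of my own invention:)

      import random, statistics

      xs = [random.gauss(100, 5) for _ in range(600)] + [1e6] * 400  # 40% junk
      print(statistics.median(xs))  # still near 100
      print(statistics.fmean(xs))   # ~400,000: dragged off by the junk
      print(statistics.pstdev(xs))  # enormous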

    • "It is like the median a very robust method, not readily influenced by outliers. The median is wickedly robust, with a breakdown point at 50%, meaning that you can throw a huge a mount of junk data at it and it still doesn't care. The arithmetic mean and the standatd deviation are both junk, often worse than the too-often-assumed-normal data thrown at it."

      That depends entirely on what you are trying to show. None of them are junk for all purposes; all of them are junk for the wrong purposes.

      For example, if you're talking about salaries of employees of a corporation, the mean might not mean much: the CEO makes 30 times as much as everyone else, and other managers 20 times more, lower managers 10 times more... so the mean is thrown way off. The median is much more meaningful.

      On the other hand, even the mode can be useful sometimes. Suppose the corporatio

  • That's a good enough replacement term.

  • by PacoSuarez ( 530275 ) on Wednesday January 15, 2014 @05:33PM (#45970275)

    Perhaps non-mathematicians don't have a problem with this, but it rubs me the wrong way.

    What makes the mean an interesting quantity is that it is the constant that best approximates the data, where the measure of goodness of the approximation is precisely the way I like it: as the sum of the squares of the differences.

    I understand that not everybody is an "L2" kind of guy, like I am. "L1" people prefer to measure the distance between things as the sum of the absolute values of the differences. But in that case, what makes the mean important? The constant that minimizes the sum of absolute values of the differences is the median, not the mean.

    So you either use mean and standard deviation, or you use median and mean absolute deviation. But this notion of measuring mean absolute deviation from the mean is strange.

    Anyway, his proposal is preposterous: I use the standard deviation daily and I don't care if others lack the sophistication to understand what it means.
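
    (The pairing described above, in symbols - textbook results, not TFA's: the mean is the least-squares (L2) minimizer, the median the absolute-error (L1) minimizer.)

      \bar{x} = \arg\min_c \sum_i (x_i - c)^2, \qquad \operatorname{median}(x) = \arg\min_c \sum_i |x_i - c|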

  • I hate averages (Score:5, Interesting)

    by tthomas48 ( 180798 ) on Wednesday January 15, 2014 @05:38PM (#45970337)

    I also think averages should go away. Most people think they are being given the median (the number in the middle) when they are told the average. It's great for real estate agents, and for people trying to advocate for tax reform, but the numbers are not what people think they are.

  • I find this article quite confusing. Is the actual suggestion that we should be going around using the mean deviation as a way of capturing the general variance of our data sets? Or to put it another way, does he want "deviation" measures not to give us a real sense of the larger deviations that might occur with some real probability. For example, with temperatures, standard deviation is more likely to suggest that we can have periods of significantly higher and lower temperatures than a simple "mean deviat
  • by dcollins ( 135727 ) on Wednesday January 15, 2014 @05:47PM (#45970423) Homepage

    Well... first of all, the summary has it wrong. It's not "mean deviation", it's "mean absolute deviation", or just "absolute deviation" in the literature I've seen. (Mean deviation is actually always zero, the most useless thing you could possibly consider.)

    Keep in mind that standard deviation is the provably best basis if your goal is to estimate a population *mean*, the most commonly used measure of center. Absolute deviation, on the other hand, is the best basis to use for an estimate of a population *median*, which is maybe fine for finances, which is what the linked paper seems mostly focused on. (Bayesian best estimators, if I recall correctly.)

    If the main critique is that economists and social scientists don't know what the F they're doing, then I won't disagree with that. But no need to metastasize the infection to math and statistics in general.

  • I studied geodesy in Germany as a diploma at a technical university. Standard deviation has its right to exist and to be used. If this man really means what he says, he should not call for abandoning standard deviation but should write BOOKS that teach people correctly what it is and how to calculate it on the data you have. Yes, I also meet people (talking of themselves as scientists and researchers) who have no fucking clue how to work with data and standard deviation, but on the other hand I al
  • When I was in school, they still taught the central limit theorem which explains why so many error distributions are "normal". Our world provides us with millions of examples in everyday life where the standard deviation of our experiences is the best statistic to estimate the probability of future events.

    What you do with a statistic is what counts. It's easy to look at the standard deviation and estimate the probability that the conclusion was reached by chances of the draw, though it takes some practice
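
    (A quick simulation of the central limit theorem mentioned above - my own sketch: means of uniform samples pile up in a bell shape around 0.5.)

      import random, statistics

      means = [statistics.fmean(random.random() for _ in range(50))
               for _ in range(10_000)]
      print(statistics.fmean(means))   # ~0.5
      print(statistics.pstdev(means))  # ~0.041, i.e. sqrt(1/12)/sqrt(50)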

  • normal densities (Score:4, Informative)

    by stenvar ( 2789879 ) on Wednesday January 15, 2014 @07:22PM (#45971293)

    For normal densities, standard deviations and MAD are just proportional, with a factor of about 1.25, so it doesn't matter which you use.

    For non-normal densities, neither of them really is universally "right" for characterizing the deviation, but it's mathematically a whole lot easier to understand how standard deviation behaves in those cases than MAD. So even there, standard deviations are usually the better choice.
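
    (Where the 1.25 comes from - a standard normal-distribution fact, not mine: the mean absolute deviation of a normal variable is a fixed fraction of sigma.)

      \mathbb{E}\,|X-\mu| = \sigma\sqrt{2/\pi} \quad\Longrightarrow\quad \frac{\sigma}{\mathbb{E}\,|X-\mu|} = \sqrt{\pi/2} \approx 1.2533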

  • by GauteL ( 29207 ) on Thursday January 16, 2014 @04:18AM (#45974117)

    And neither does the media-consuming public. Most would totally ignore your measure of precision regardless of whether you call it standard deviation or mean absolute deviation. For them your average is absolute and if any values aren't at all near it something is terribly wrong. They will also not rest until every school performs above average and nothing in your work will convince them otherwise. The public doesn't like uncertainty and will assume every outcome is for a special reason, and this even goes for the non-religious ones. The idea that some things aren't absolute and are actually uncertain and variable terrifies them.

    Nowhere is this more apparent than in sports. Everything there is always "written in the stars" or "destiny", and if you win it always proves beyond doubt you are better than your opposition (or you were 100% cheated by the refs). Hell, journalists may have had a full article written up 2 minutes before the end of a game and then completely change everything to be about one team's dogged determination because chance would have it they scored in the last minute. I love football (soccer), but discussing it can be frustrating.

    If you still believe you can convince them, use mean absolute deviation in your "executive summary" or press release and leave the standard deviation as is in your actual paper. The only ones that actually read the paper are scientists anyway. The typical journalist reading your actual paper is likely to misunderstand something in every paragraph anyway. Changing real science to pander to the masses is a fucking huge mistake.
