Why Standard Deviation Should Be Retired From Scientific Use 312
An anonymous reader writes "Statistician and author Nassim Taleb has a suggestion for scientific researchers: stop trying to use standard deviations in your work. He says it's misunderstood more often than not, and also not the best tool for its purpose. Taleb thinks researchers should use mean deviation instead. 'It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error." The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market "volatility", it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation. But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.'"
So you want to retire a statistical term... (Score:5, Insightful)
...because people use it incorrectly in economics? Get bent. The standard deviation is a useful tool for statistical analysis of large populations.
Re:So you want to retire a statistical term... (Score:5, Insightful)
Standard Deviation is fn of 2nd moment of the data (Score:4, Informative)
I can really go for renaming standard deviation, but it should not be abolished.
Standard deviation is a function of the second moment of the data, and if you remember your laws for combining moments of inertia (the parallel axis theorem), then you'll understand better what you're dealing with.
Second moments describe resistance to spin, and thus the resilience of your findings to changes and errors.
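A minimal sketch of that second-moment analogy (my own illustration in Python, not from the post; the data, distribution, and seed are arbitrary assumptions): the statistical counterpart of the parallel axis theorem is E[X^2] = Var(X) + (E[X])^2.

```python
# Illustrative only: the statistical "parallel axis theorem".
# The second moment about zero equals the second moment about the mean
# (the variance) plus the squared distance from zero to the mean.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

mean = x.mean()
var = x.var()                              # second moment about the mean
second_moment_about_zero = (x ** 2).mean()

print(second_moment_about_zero, var + mean ** 2)   # the two values agree
```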
Re:So you want to retire a statistical term... (Score:5, Informative)
Re: (Score:3, Interesting)
I always hated frequentist statistical methods and the Gaussian Distribution.
I see below an Astrophysicist echoes your claim.
I'm happily surprised to learn I am not the only one who thinks the whole 'Gaussian' thing should be banished.
I come from a Systems Science and Research Methodology background in this area. One of my favorite parts of grad school was a 4-hour one-on-one tutoring session I did every week for a semester with my large State Univ.'s Research Director, who is the person faculty/staff go to for questio
Re: (Score:3)
Gaussian distributions are a natural phenomenon, like pi, e, etc. The distribution was not made up for convenience (ever tried to integrate it?). You can't will away the base behavior of the universe (or our number line).
Any random variable that is itself the sum of many smaller independent effects will trend Gaussian, by the central limit theorem. Noise is also often the sum of a huge number of random events, and is often quite Gaussian.
Like the binomial, Poisson, Laplacian, etc., the Gaussian makes a lot of sense for certain situations.
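A quick sketch of the "sum of many small effects" point (my own Python example; the uniform noise source, sample counts, and seed are arbitrary assumptions): summing many independent uniform draws already looks very Gaussian.

```python
# Illustrative only: sums of many small independent (uniform) effects trend
# Gaussian, per the central limit theorem.
import numpy as np

rng = np.random.default_rng(1)
n_effects, n_samples = 50, 200_000
sums = rng.uniform(-1.0, 1.0, size=(n_samples, n_effects)).sum(axis=1)

s = sums.std()
within_1sd = np.mean(np.abs(sums - sums.mean()) < 1 * s)
within_2sd = np.mean(np.abs(sums - sums.mean()) < 2 * s)
print(within_1sd, within_2sd)   # close to the Gaussian 0.683 and 0.954
```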
Re: (Score:3)
Re: (Score:3)
Re:So you want to decertify a management degree... (Score:5, Funny)
...and besides... JUST THINK of all the rigorous Lean Management courses that will have to re-certify all of their "Six-Sigma Black Belts" to some kind of "Half-Dozen of the Other" degrees!
PANDEMONIUM!!!
agree: as bad as "Anti-Fragile" (Score:3)
Agreed that this is a ridiculous proposal. He probably just wants more publicity.
This was the guy who wrote the book "Anti-Fragile", which I had hoped would educate and broaden my way of thinking, in the same way that the Malcolm Gladwell books ("Tipping Point", "Blink", "Outliers") did. He ended up droning on and on w
Re:So you want to retire a statistical term... (Score:5, Informative)
What Taleb is proposing is the mean *absolute* deviation, rather than the square root of the mean *squared* deviation (the standard deviation).
The mean absolute deviation is a simpler measure of variability. However....
The algebraic manipulation of the standard deviation is simpler; the absolute deviation is more difficult to deal with.
Further, when drawing a number of samples from a large population, the mean deviations of those samples vary from sample to sample substantially more (relative to their value) than the standard deviations do; that is to say, the standard deviation of a sample provides an estimate that is more in line with the whole population.
In short, there are cases where the standard deviation may be better, AND much of statistics uses the standard deviation as its basis.
Fisher, R. 1920, Monthly Notices of the Royal Astronomical Society, 80, 758-770:
The quality of any statistic could be judged in terms of three characteristics: the statistic, and the population parameter that it represents, should be consistent; the statistic should be sufficient; and the statistic should be efficient -- i.e., give the smallest probable error as an estimate of the population. Both the standard deviation and the mean deviation meet the first two criteria (to the same extent); however, on the third criterion the standard deviation proves superior.
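A rough simulation of that efficiency claim (my own sketch; the sample size, number of replicates, normality, and seed are all assumptions): for repeated normal samples, the sample standard deviation fluctuates proportionally less than the sample mean absolute deviation.

```python
# Illustrative only: Fisher-style efficiency comparison on normal data.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(size=(10_000, 30))          # 10,000 samples of size 30

sds = samples.std(axis=1, ddof=1)
mads = np.abs(samples - samples.mean(axis=1, keepdims=True)).mean(axis=1)

# Relative variability (coefficient of variation) of each spread estimate.
print("SD  estimator CV:", sds.std() / sds.mean())
print("MAD estimator CV:", mads.std() / mads.mean())
# The SD's coefficient of variation comes out smaller, i.e. it is the more
# efficient estimator of spread for normal data.
```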
Basic Statistics (Score:5, Insightful)
The meaning of standard deviation is something you learn on a basic statistics course.
We don't ask biochemists to change their terms because the electron transport chain is complicated.
We don't ask cryptographers to change their terms because the difference between extra entropy and multiplicative prediction resistance is not obvious.
We should not ask statisticians to change their terms because people are too stupid to understand them.
Re:Basic Statistics (Score:5, Funny)
We should not ask statisticians to change their terms because people are too stupid to understand them.
But doesn't that give an unfair advantage to statisticians? You have to give everyone a chance!
Re: (Score:3, Insightful)
Someone should tell that to the lawyers!
Re: (Score:3)
Check your math privilege!
Re: (Score:2)
Actually, meaningful and readily understood labels are considered a good thing, and beneficial to those who work in the field they apply to.
Except in programming, where, based on my experience, you should use whatever label happens to be lying around and never change it, even if it means the opposite of what the code does.
Re:Basic Statistics (Score:4, Informative)
I should note that, contrary to the summary, Taleb is not properly a statistician--he's an economist
To be fair, economics has contributed a lot to the growth of statistics as a field of study. Due to various historical quirks, econometrics developed as almost a separate field from statistics for decades, and economists have often looked at statistical problems with a fresh eye, and had insights that people working in the mainstream of statistics and biostatistics might have missed. In my own work, biostatistics-flavored bioinformatics, I've often found myself referring to the econometric literature.
I have no idea if any of this applies to Taleb, though. Certainly TFA doesn't strike me as a particularly profound example of statistical reasoning ...
The big picture (Score:2)
We should not ask statisticians to change their terms because people are too stupid to understand them.
I've always wondered about this attitude.
For me, any change requires an analysis of risk/reward versus value. For example, if code contains confusing names, it might be worthwhile to refactor it.
The tradeoff is in the time spent refactoring versus the perceived value - if it's a mature product that largely works with few planned updates and few people will have to deal with the confusion, then the effort outweighs the returned value. If the code is open source, being actively developed and with many eyes lo
Re: (Score:2)
Re:The big picture (Score:5, Funny)
I often change CSensiblyNamedClassThatDescribesItsFunctionWell to bTrue throughout the code for precisely this reason and no-one ever appreciates it :(
Re:The big picture (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Everyone understands the US Measures? How many pottles are there in a firkin? Or how many nails in a chain?
Everyone else understands what I meant.
What are you going on about?
Re: (Score:3)
No, there are always 16 ounces in a pound regardless of what you are "weighing" (measuring the mass of). Perhaps you've conflated ounce with fluid ounce -- a distinct, though confusingly named, unit.
And yes, a mile is 5280 ft = 63360 inches. I don't know where you pulled "3mm shy of" from but if you're measuring in miles and worrying about being 3mm shy, you're doing it wrong.
BZZZT! Wrong. Gold, silver, and other precious metals are measured in Troy ounces, which are slightly heavier than regular ounces. Oddly, Troy pounds only have 12 Troy ounces in them. Thus, an ounce of gold is heavier than an ounce of lead, but a pound of gold is lighter than a pound of lead.
Similarly, long distances are measured in statute (or survey) miles, which are based on a longer standard than customary measures. The volume of a hogshead is different between ale and beer.
But thanks for correc
Re:The big picture (Score:5, Interesting)
I would have said "18 half gallon pottles to the quarter-barrel firkin."
Wolfram Alpha says 15.75 pottles to the firkin, but that's because of US/UK gallon conversions, I reckon.
352 nails in a chain - which was interesting to me, in that Google includes those units in its calculator.
I now know more about pottles, firkins, nails and chains than I did when I woke up. I shudder to think about what got pushed out of my old head to make way for these new minutiae.
Re:The big picture (Score:5, Informative)
Hi, I'm a statistician.
It's not so simple to just say "ok, we're going to use the Mean Absolute Deviation from now on." The use of standard deviation is not quite the historical accident that Taleb makes it out to be--there are good reasons for using it. Because it is a one-to-one function of the second central moment (variance), it inherits a bunch of nice properties that the mean absolute deviation does not. There is not a one-to-one correspondence between variance and mean absolute deviation.
Taleb is correct that the mean absolute deviation is easier to explain to people, but this is not just a matter of changing units of measure (where there is a one-to-one correspondence) or changing function and variable names in code (where there is again a one-to-one correspondence). Standard deviation and mean absolute deviation have different theoretical properties. These differences have led most statisticians over the last hundred years to conclude that the standard deviation is a better measure of variability, even though it is harder to explain.
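One way to see the "no one-to-one correspondence" claim (my own example, not the poster's; the choice of normal vs Laplace is just for illustration): two distributions with the same variance can have different mean absolute deviations.

```python
# Illustrative only: same variance, different mean absolute deviation,
# so MAD cannot be recovered from the variance alone.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

normal = rng.normal(loc=0.0, scale=1.0, size=n)               # variance 1
laplace = rng.laplace(loc=0.0, scale=1 / np.sqrt(2), size=n)  # variance 2*b^2 = 1

for name, x in (("normal ", normal), ("laplace", laplace)):
    mad = np.abs(x - x.mean()).mean()
    print(name, "std:", x.std(), "mean abs dev:", mad)
# Both stds are ~1.0, but the MADs differ (~0.80 normal vs ~0.71 Laplace).
```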
Re:The big picture (Score:4, Insightful)
I think NNT is saying that the MAD ought to be used when you are conveying a numerical representation of the "deviations" with the intent that readers use this number to imagine or intuit the size of the "deviations." His example is that of how much the temperature might change on a day-to-day basis. According to him, it's not just that the concept is easier to explain, but that it is the more accurate measure to use for this purpose.
Based on his other work I'm sure he understands that the STD is generally superior for optimization purposes, fit comparison, etc.
Re: (Score:3)
Not within one day, but across several days.
Say someone just asked you to measure the "average daily variations" for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?
The STD of the series is 15.7 and the MAD is 10.8. NNT argues that MAD is more useful in this context.
In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation...every time a newspaper has attempted to clarify the concept of market "volatility", it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.
He says he's seen this mistake made not just in popular articles, but also in financial publications and regulatory documents. The daily temperature is just a contrived example; he's mainly talking about financial analysis (which is his field).
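For reference, the two figures quoted above can be reproduced directly (a small sketch; the 15.7 uses the sample standard deviation, which appears to be the convention in the example):

```python
# Reproducing the numbers from the five daily changes quoted above.
import numpy as np

changes = np.array([-23, 7, -3, 20, -1])

std = changes.std(ddof=1)                       # sample standard deviation
mad = np.abs(changes - changes.mean()).mean()   # mean absolute deviation

print(round(float(std), 1), round(float(mad), 1))   # 15.7 and 10.8
```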
Re: (Score:2)
Re:Basic Statistics (Score:4, Informative)
The meaning of standard deviation is something you learn on a basic statistics course.
I took a statistics course in college. The statistics professor taught us to think of the standard deviation as the "average distance from the average". So if you know the average (mean) then any random data sample will be (on average) one SD away. That is simple, neat, and easy to remember.
It is also wrong.
Re: (Score:2)
Re:Basic Statistics (Score:5, Informative)
In fact, it is wrong in exactly the way that TFA suggests: you're describing the mean deviation...
Re:Basic Statistics (Score:4, Interesting)
Didn't you hear? Gaussians are so 1893. And so are all of the other distributions with convenient sigma terms...
And TFS calls standard deviation "root mean square error", which is only true if you assume the mean is a constant estimator for the distribution :(
In any case, no one picked Gaussians because they are so easy to integrate... While we're at it, TFA should suggest we round the number e to 3, because irrational numbers are hard, and who cares what natural law dictates.
Re:Basic Statistics (Score:5, Funny)
Then it would be the same as pi, and that would just be silly.
Re: (Score:2)
Not at all. If e^pi = pi^e then a common interview question would be a hell of a lot easier to answer.
Re: (Score:3)
Mmm, pi^e.
Re: (Score:2)
We should not ask statisticians to change their terms because people are too stupid to understand them.
The author didn't ask anyone to change any terms. They asked people to stop using the wrong statistic. Ex: don't use the mean if you need the median.
Re: (Score:2)
Re: (Score:2)
The replacement the article proposes (mean absolute deviation or MAD) is also only particularly meaningful if you're dealing with a symmetric distribution, so it really doesn't address the problem you identify.
Re: Basic Statistics (Score:2)
Re: (Score:3)
Re: (Score:3)
Honestly, all of the different things I have studied had jargon that could have been explained in simpler terms, often in shorter, more common words. So much of it is a wall to the "stupid" people and their understanding.
Other times there are specific concepts with only one word for them. Those need to be explained when they are introduced in journals, but that would be work, and very few people have been trained to speak to laymen.
Even within the sciences some shorthand jargon means one thing in chemistr
Re:Basic Statistics (Score:5, Funny)
Re: (Score:3)
Bruce Lee summed it up - "Before I started martial arts, a punch was a punch and a kick was a kick. When I started martial arts, a punch was no longer a punch and a kick was no longer a kick. When I understood martial arts, a punch was a punch and a kick was a kick."
Most people are stuck at the second level - stuck in technicalities. Few people ever reach the third level, where a punch is a punch and a kick is a kick, not because of ignorance of technicalities, but because they have transcended the technic
Re: (Score:2)
And yet, everyone refers to the act of cooking in a microwave as "nuking," and no one seems to have a problem with that.
Re: (Score:2)
Except me. I believe the correct term is "Radaranging".
Re: (Score:2)
Re: Basic Statistics (Score:2)
Re: (Score:2)
Re:Basic Statistics (Score:5, Funny)
Also, if you arrived at a hospital saying you were there for an NMR, you might have received something other than what you were expecting.
Issues (Score:5, Informative)
On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass. And you still need standard deviation as it relates directly to the second moment about the mean.
Also, annoyingly, Median Absolute Deviation competes for the MAD name and is more robust against outliers.
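A sketch of the one-pass vs two-pass point (my own code; "Welford's method" here is the standard one-pass recurrence for the variance, not anything from the post): the standard deviation can be updated as each value streams in, while the mean absolute deviation needs the mean before a second pass can even start.

```python
# Illustrative only: one-pass (Welford) standard deviation vs two-pass MAD.
import math

def one_pass_std(stream):
    """Welford's recurrence: update count, mean, and sum of squared
    deviations (m2) as each value arrives, in a single pass."""
    count, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
    return math.sqrt(m2 / count)               # population standard deviation

def two_pass_mad(data):
    """Mean absolute deviation needs the mean first, then a second pass."""
    data = list(data)                           # must be traversable twice
    mean = sum(data) / len(data)
    return sum(abs(x - mean) for x in data) / len(data)

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(one_pass_std(values))   # 2.0
print(two_pass_mad(values))   # 1.5
```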
Re: (Score:3)
On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass. And you still need standard deviation as it relates directly to the second moment about the mean.
Right. Some common measures in statistics date from the paper-and-pencil era, back when computation was really expensive. The same issue applies to least-squares curve fitting, which is cheap to compute but overweights values far from the curve. This is well known, and was recognized decades ago. This is not something Taleb "discovered", or even popularized.
(If you want to annoy Taleb and his flunkies, ask hard questions about the actual performance of his funds in years other than 2008.)
Re: (Score:2)
On the other hand, you also need to use 2-pass algorithms to compute Mean Absolute Deviation, whereas STD can be easily calculated in one pass.
Okay, just to be clear: you're saying that we should use STD because (in part) it's faster and easier to calculate?
Isn't that like the drunk looking for his keys under the lamppost - instead of where he dropped them - because the light is better?
Re: (Score:2)
Mean absolute deviation is a useful statistic, though I tend to actually prefer median absolute deviation.
You can actually prove that.
median abs dev <= mean abs dev <= standard deviation
You can also prove that taking deviations about the median provides the minimal mean absolute deviation of any choice of center, so in that sense the mean abs dev (about the mean) is kind of a strange choice here.
That said, we can say a few things about it.
It is a pain in the ass to calculate. It also tends to favor solutions that let outliers run wild. Least squares provides
Re: (Score:2)
Sadly, the problem is your proposed algorithm doesn't work.
Consider the sequence [1, 2, ..., 100]. The real mean absolute deviation is 25.
Your algorithm yields, er... something around 1?
Mean absolute deviation requires you to sum over a bunch of absolute values of differences to a number you don't know a priori.
Unlike stddev's use of x^2, abs doesn't have a continuous derivative and can't be split out into calculations in terms of the moments around 0, and you can't borrow Chan's algorithm.
Basically, to my knowled
Re: (Score:2)
Data points: 1 2 10
First pair: (1,2). S = 0+abs(1-2) = 0+1 = 1
Second pair: (2,10). S = 1+abs(2-10) = 1+8 = 9
MAD=9/2=4.5
----
Data points: 1 10 2
First pair: (1,10). S = 0+abs(1-10) = 0+9 = 9
Second pair: (10,2). S = 9+abs(10-2) = 9+8 = 17
MAD=17/2=8.5
----
Two different results from the same data points. Have I misunderstood something?
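That order-dependence is consistent with the parent's point that the consecutive-difference scheme is not the mean absolute deviation. A minimal sketch (my own code, not the poster's) comparing it with the true two-pass calculation on the same data points:

```python
# Illustrative only: the consecutive-difference scheme depends on ordering,
# while the true (two-pass) mean absolute deviation does not.
def consecutive_scheme(data):
    diffs = [abs(a - b) for a, b in zip(data, data[1:])]
    return sum(diffs) / len(diffs)

def true_mad(data):
    mean = sum(data) / len(data)
    return sum(abs(x - mean) for x in data) / len(data)

print(consecutive_scheme([1, 2, 10]), consecutive_scheme([1, 10, 2]))  # 4.5 vs 8.5
print(true_mad([1, 2, 10]), true_mad([1, 10, 2]))                      # ~3.78 both times
```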
That's not the problem. (Score:5, Insightful)
The problem is that people think they understand statistics when all they know is how to enter numbers into a program to generate "statistics".
They mistake the tools-used-to-make-the-model for reality. Whether intentionally or not.
Re:That's not the problem. (Score:5, Interesting)
Three characterizations of statistics, in ascending order of accuracy:
1. There are lies, damned lies, and statistics.
2. Figures don't lie, but liars figure.
3. Statistics is like dynamite. Use it properly, and you can move mountains. Use it improperly, and the mountain comes down on you.
Re: (Score:3)
In Soviet Russia, improper statistics puts you up on mountain.
Re:That's not the problem. (Score:5, Insightful)
The problem is that peoples' attention spans are rapidly approaching that of a water-flea.
Up until the past 50 or so years, people who learned about standard deviation did so in environments with far less stimulation and distraction. Their lives weren't so filled with extra-curricular activities and entertainments that they never sat for a moment, from waking until sleep, without some stimulus-based pastime. When they "understood" the concept, there was time for it to ruminate and gel into a meaningful set of connections with how it is calculated and commonly applied. Today, if you can guess the right answer from a set of 4 choices often enough, you are certified an expert and given a high-level degree in the subject.
Not bashing modern life, it's great, but it isn't making many "great thinkers" in the mold of the 19th century mathematicians. We do more, with less understanding of how, or why.
Re: (Score:2)
They also did so in an environment where they had to do all the math by hand (or with a slide rule).
The math is not difficult. But it is repetitive in the extreme. So unless you were a savant you learned to pay very close attention to the numbers and what they represented. For those of you who didn't take statistics, here's a link to show you how standard deviat
Re: (Score:2)
Re: (Score:2)
I took the thesis option for my Master's, but I was in the minority; most preferred to take extra classes and just get the paper.
If you select your institution, courses and professors carefully, I bet you can get a degree with mostly multiple choice testing determining the grades.
Re: (Score:2)
Re:That's not the problem. (Score:4, Informative)
Not bashing modern life, it's great, but it isn't making many "great thinkers" in the mold of the 19th century mathematicians. We do more, with less understanding of how, or why.
The easier math problems are lower-hanging fruit. As time goes on, the problems that are left become increasingly hard. Even when they get solved, average people can't understand what the result means, and that makes it hard to care about, and hard for newspapers to make money covering the story.
Also, when you read about the history of mathematics, it's easy to feel like these breakthroughs were happening all the time, compared with now, when in fact they came very slowly, and the pace of discovery is probably higher now than at any point in the past.
It's easy to say music was better in the 70's than now when you condense the 70's down to 100 truly great songs, forgetting all the crap, and compare it to what's playing on the radio today.
Re: (Score:2)
True, I'm comparing today's "median" or perhaps "average" University student with the same "average" student from 50 to 80 years ago.
We've got a lot more population, and probably more great thinkers alive today than in the entirety of the 1800-1950 timespan, more people with opportunity, means, etc.
It's just the everyday UniGrad you meet that I'm lamenting.
Re:That's not the problem. (Score:5, Insightful)
I think it's also true that a larger percentage of people are going to university, so the average "intelligence" of people in university in terms of natural ability is probably lower now than when it was just the very best students attending.
Most of the mediocre students today would simply not have gone to university in the past. I think the same principle holds when it comes to things like blogs: public discourse can sometimes make it seem as if people are getting dumber, when really it is just that more and more people know how to read and write and can now even be published, whereas in the past there was a higher cost to publishing, and you were more likely to have something important to say before being willing to incur that cost.
Standard Deviation is Important (Score:5, Informative)
Standard Deviation is the square root of the second moment about the mean [wikipedia.org], an important fundamental concept to probability distributions. Looking at moments of probability distributions gives us lots of tools that have been developed over the years and in many cases we can apply closed form solutions with reasonably lenient assumptions. Then we apply the square root in order to put it in the same units as the original list of observations and get some of the heuristic advantages that he attributes to the mean absolute deviation.
But it is a balance, and any data set should be looked at from multiple angles, with multiple summary statistics. To say MAD is better than standard deviation is a reasonable point (with which I would disagree), but to say we should stop using standard deviation (the point made in TFA) is totally incorrect.
Re: (Score:3)
What he is saying is not that statisticians should stop using SD in statistical theory or anything. What he's saying is that non-statisticians should stop using SD as a measure of variability when describing their data to each other. And since everybody (except statisticians) thinks SD is the average deviation from the mean, then people should perhaps use that instead, and reduce confusion for everyone.
Re:Standard Deviation is Important (Score:4, Insightful)
I'm a little surprised at Nassim Taleb's position on this.
He has rightly pointed out that not all distributions that we encounter are Gaussian, and that the outliers (the 'black swans') can be more common than we expect. But moving to a mean absolute deviation hides these effects even more than standard deviation; outliers are further discounted. This would mean that the null hypothesis in studies is more likely to be rejected (mean absolute deviation is typically smaller than standard deviation), and we will be finding 'correlations' everywhere.
For non-Gaussian distributions, the solution is not to discard standard deviation, but to reframe the distribution. For example, for some scale invariant distributions, one could take the standard deviation of the log of the values, which would then translate to a deviation 'index' or 'factor'.
I agree with him that standard deviation is not trustworthy if you apply it blindly. If the standard deviation of a particular distribution is not stable, I want to know about it (not hide it), and come up with a better measure of deviation for that distribution. But I think the emphasis should be on identifying the distributions being studied, rather than trying to push mean absolute deviation as a catch-all measure.
And for Gaussian distributions (which are not uncommon), standard deviation makes a lot of sense mathematically (for the reasons outlined in the parent post).
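A sketch of the "reframe the distribution" idea (my own example; the lognormal data, sizes, and seed are arbitrary assumptions): the standard deviation of the logs behaves like a stable multiplicative "factor", where the raw standard deviation is dominated by occasional huge values.

```python
# Illustrative only: for lognormal-ish data, reframe by taking logs before
# computing the standard deviation.
import numpy as np

rng = np.random.default_rng(4)
prices = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 1_000))  # 100 datasets

raw_sd = prices.std(axis=1)           # raw standard deviation per dataset
log_sd = np.log(prices).std(axis=1)   # standard deviation of the logs

print("raw SD ranges from", raw_sd.min(), "to", raw_sd.max())
print("log SD ranges from", log_sd.min(), "to", log_sd.max())
# The log-scale deviation stays close to 1.0 across datasets; the raw SD
# swings around much more because of the heavy right tail.
```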
Re: (Score:2)
So, the solution to "standard deviation is hard" is to rephrase it in terms of "square root of the second moment about the mean"? I'm on board! That's totally simpler and more intuitive!
(note for the sarcasm-impaired: that was sarcasm)
Education!! (Score:3)
There is a great difference between a mean value and an RMS value. Scientific people can work with the appropriate version, so I don't see a problem with using the correct one for the correct occasion. And certainly science should stick with the correct term as appropriate.
What I believe the person is calling for here is the most appropriate use when communicating with the non-scientific person. This is an education issue, in that the communication really should not use either term as a shorthand but should explain in full the effect of the distribution. Science uses mean and standard deviation (often also requiring a named distribution) because they are shorthands that describe the random behavior and have full meaning without any other explanation needed. So I say use neither term when communicating with the non-scientific, as they do not fulfill the communication role for which they are intended.
What I believe should actually be done is proper education of all, so that they understand the differences between various random distributions and move totally away from "it is cold today, so global climate change based on heating must be a lie".
Re: (Score:2)
Re: (Score:2)
He isn't talking about Non Science people. He is talking about Social Science people and Science Journalist people. Both of whom have educations.
So he is talking about Non Science people. Have you never read the output of a Science Journalist when they write about something you are familiar with?
"Hav[ing] an education" doesn't make one a scientist. Doing things the scientific way makes one a scientist.
Re: (Score:3)
Re: (Score:3)
"many with PhDs" (Score:2)
If there are "data scientists" who don't understand what the standard deviation is, then they certainly shouldn't be calling themselves "data scientists," and quite possibly not scientists at all. What subjects are their PhDs in, I wonder? This doesn't do anything to reduce my skepticism that such a thing as "data science" really needs to exist.
Roadway Intersections (Score:2)
If there are "data scientists" who don't understand what the standard deviation is, then they certainly shouldn't be calling themselves "data scientists," and quite possibly not scientists at all. What subjects are their PhDs in, I wonder?
The problem isn't with highly-educated people, it's people who are not highly educated, or who are highly educated but in a different field.
If a particular intersection attracts a lot of accidents, we consider the accidents to be the fault of the drivers involved. But at the same time, we recognize that aspects of the intersection might be a contributing factor as well.
Expert drivers would never have such accidents, but if we spend some effort reblocking the intersection we could get improved safety, and some
Re:"many with PhDs" (Score:4, Interesting)
What other existing specialization in computer science, physics, etc., do you feel is qualified to use Hadoop to process trillions of triple stores into a network and subsequently build highly multivariate link prediction models and evaluate their output statistically with respect to ground truth, to name but one trifling task?
As it happens, one of my colleagues runs a project which, among other things, does exactly that. His PhD is in computer science. I'm a bioinformaticist with a background primarily in biostatistics; I couldn't develop a tool like that, but I can certainly see the value in it. In general, I'm not arguing that the tasks currently getting lumped together under "data science" aren't valuable. I'm just saying that I'm not convinced they fit together into a coherent field that can meaningfully be studied in a single degree program, and attempts to make them so may well run into the problem of "jack of all trades, master of none."
Yes, use the interquartile range instead (Score:2)
Yes, use the interquartile range instead https://en.wikipedia.org/wiki/Interquartile_range [wikipedia.org]
It is, like the median, a very robust method, not readily influenced by outliers. https://en.wikipedia.org/wiki/Median [wikipedia.org]
The median is wickedly robust, with a breakdown point of 50%, meaning that you can throw a huge amount of junk data at it and it still doesn't care.
The arithmetic mean and the standard deviation are both junk, often worse than the too-often-assumed-normal data thrown at them.
Re: (Score:3)
"It is like the median a very robust method, not readily influenced by outliers. The median is wickedly robust, with a breakdown point at 50%, meaning that you can throw a huge a mount of junk data at it and it still doesn't care. The arithmetic mean and the standatd deviation are both junk, often worse than the too-often-assumed-normal data thrown at it."
That depends entirely on what you are trying to show. None of them are junk for all purposes; all of them are junk for the wrong purposes.
For example, if you're talking about salaries of employees of a corporation, the mean might not mean much: the CEO makes 30 times as much as everyone else, and other managers 20 times more, lower managers 10 times more... so the mean is thrown way off. The median is much more meaningful.
On the other hand, even the mode can be useful sometimes. Suppose the corporatio
How about "somewhere in the middle" (Score:2)
That's a good enough replacement term.
Standard deviation BAD, but mean GOOD? (Score:5, Interesting)
Perhaps non-mathematicians don't have a problem with this, but it rubs me the wrong way.
What makes the mean an interesting quantity is that it is the constant that best approximates the data, where the measure of goodness of the approximation is precisely the way I like it: As the sum of the squares of the differences.
I understand that not everybody is an "L2" kind of guy, like I am. "L1" people prefer to measure the distance between things as the sum of the absolute values of the differences. But in that case, what makes the mean important? The constant that minimizes the sum of absolute values of the differences is the median, not the mean.
So you either use mean and standard deviation, or you use median and mean absolute deviation. But this notion of measuring mean absolute deviation from the mean is strange.
Anyway, his proposal is preposterous: I use the standard deviation daily and I don't care if others lack the sophistication to understand what it means.
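A small numerical check of the L2-vs-L1 pairing (my own sketch; the skewed data and grid scan are just one way to see it): the constant minimizing the sum of squared differences is the mean, and the one minimizing the sum of absolute differences is the median.

```python
# Illustrative only: the mean minimizes the L2 criterion, the median the L1.
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(scale=1.0, size=101)     # skewed, so mean != median

candidates = np.linspace(data.min(), data.max(), 20_001)
l2 = ((data[None, :] - candidates[:, None]) ** 2).sum(axis=1)
l1 = np.abs(data[None, :] - candidates[:, None]).sum(axis=1)

print("L2 minimizer:", candidates[l2.argmin()], "vs mean:  ", data.mean())
print("L1 minimizer:", candidates[l1.argmin()], "vs median:", np.median(data))
# The L2 minimizer lands (to grid precision) on the mean, the L1 on the median.
```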
I hate averages (Score:5, Interesting)
I also think averages should go away. Most people think they are being given the median (the middle value) when someone tells them the average. It's great for real estate agents, and people trying to advocate for tax reform, but the numbers are not what people think they are.
Bell Curve (Score:2)
Mean Deviation is Always Zero (Score:4, Interesting)
Well... first of all, the summary has it wrong. It's not "mean deviation", it's "mean absolute deviation", or just "absolute deviation" in the literature I've seen. (The signed mean deviation is actually always zero, the most useless thing you could possibly consider.)
Keep in mind that standard deviation is the provably best basis if your goal is to estimate a population *mean*, the most commonly used measure of center. Absolute deviation, on the other hand, is the best basis to use for an estimate of a population *median*, which is maybe fine for finances, which is what the linked paper seems mostly focused on. (Bayesian best estimators, if I recall correctly.)
If the main critique is that economists and social scientists don't know what the F they're doing, then I won't disagree with that. But no need to metastasize the infection to math and statistics in general.
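A two-line check of the "always zero" remark (my own sketch, arbitrary data): the signed deviations from the mean cancel exactly, which is why only absolute or squared deviations say anything about spread.

```python
# Illustrative only: signed deviations from the mean sum to zero.
import numpy as np

x = np.array([3.0, 8.0, 1.0, 12.0, 6.0])
dev = x - x.mean()

print(dev.sum())                    # 0.0 (up to floating-point rounding)
print(np.abs(dev).mean())           # mean absolute deviation: 3.2
print(np.sqrt((dev ** 2).mean()))   # standard deviation: ~3.85
```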
Know it, use it. (Score:2)
Taleb doesn't live in a normal world (Score:2)
When I was in school, they still taught the central limit theorem which explains why so many error distributions are "normal". Our world provides us with millions of examples in everyday life where the standard deviation of our experiences is the best statistic to estimate the probability of future events.
What you do with a statistic is what counts. It's easy to look at the standard deviation and estimate the probability that the conclusion was reached by the luck of the draw, though it takes some practice
normal densities (Score:4, Informative)
For normal densities, standard deviations and MAD are just proportional, with a factor of about 1.25, so it doesn't matter which you use.
For non-normal densities, neither of them really is universally "right" for characterizing the deviation, but it's mathematically a whole lot easier to understand how standard deviation behaves in those cases than MAD. So even there, standard deviations are usually the better choice.
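For the record, that factor is sqrt(pi/2) ~ 1.2533; a quick simulation check (my own sketch, arbitrary sample size and seed):

```python
# Illustrative only: for a normal distribution, std / mean abs dev = sqrt(pi/2).
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=2_000_000)

ratio = x.std() / np.abs(x - x.mean()).mean()
print(ratio, np.sqrt(np.pi / 2))   # both about 1.2533
```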
Journalists don't care for uncertainty anyway (Score:3)
And neither does the media-consuming public. Most would totally ignore your measure of precision regardless of whether you call it standard deviation or mean absolute deviation. For them your average is absolute and if any values aren't at all near it something is terribly wrong. They will also not rest until every school performs above average and nothing in your work will convince them otherwise. The public doesn't like uncertainty and will assume every outcome is for a special reason, and this even goes for the non-religious ones. The idea that some things aren't absolute and are actually uncertain and variable terrifies them.
Nowhere is this more apparent than in sports. Everything there is always "written in the stars" or "destiny" and if you win it always proves beyond doubt you are better than your opposition (or you were 100% cheated by the refs). Hell, journalists may have had a full article written up 2 minutes before the end of a game and then completely change everything to be about one team's dogged determination because chance would have it they scored in the last minute. I love football (soccer), but discussing it can be frustrating.
If you still believe you can convince them, use mean absolute deviation in your "executive summary" or press release and leave the standard deviation as is in your actual paper. The only ones that actually read the paper are scientists anyway. The typical journalist reading your actual paper is likely to misunderstand something in every paragraph regardless. Changing real science to pander to the masses is a fucking huge mistake.
Re: (Score:2)
I don't know which is more foolish, thinking that saying nothing, but saying it first, is a worthwhile goal, or claiming to be first when you're not. No need for you to choose, however: you did both.
Re:response (Score:5, Funny)
First!
... to within 0.5 standard deviations.
Actually, the more posts this story attracts, the more accurate your statement is, and the fewer standard deviations you are away from true first. Response times not being distributed in a Gaussian curve perhaps complicates things.
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Re: (Score:3, Interesting)
Cancer research and particle physics use data scientists. Unfortunately so does amazon.com.
Okay, since cancer research is a very large field, I can't say for sure one way or the other ... but I do know that working in bioinformatics at a major academic research center, I've never known a single person in medical research of any kind who called themselves a "data scientist." We have lots of computer scientists and statisticians, most of whom, fortunately, get along well enough to make use of each other's strengths. Regarding particle physics I have no idea, but yeah, I'm willing to bet Amazon or
Re:Would those data scientists with PhDs (Score:4, Insightful)
Data Science (Score:4, Informative)
Data science is a field that combines machine learning and statistics to derive meaning from data. Data scientists should be reasonably well-versed in classical stats, but the data sets they deal with are often huge, ill-defined, and not amenable to analysis using classical methods. To deal with such challenges, data science recruits a healthy combination of certain areas of comp-sci (databases, machine learning, NLP, AI), statistical methods, and, quite often, improvisation.
Strange that there are so many people on here that are unfamiliar with data science.
Re: (Score:3)
Um, yes it does.
Imagine a random draw in which you had 99 realisations of the number "1" and 1 realisation of the number "101". The mean would equal 2, while the (population) standard deviation would equal sqrt(1/100*(99+99^2)) (which is a fairly large number).
If you removed the one observation of "101" from the sample, you would have zero standard deviation. So one additional outlier will cause the standard deviation to go from zero to about 10 without changing the mean by nearly as much (or the median at
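Those numbers check out; the "about 10" is sqrt(99) ~ 9.95. A small sketch reproducing the example (my own code):

```python
# Reproducing the outlier example above: 99 ones plus a single 101.
import numpy as np

x = np.array([1.0] * 99 + [101.0])

print(x.mean())       # 2.0
print(x.std())        # sqrt(99) ~ 9.95  (population standard deviation)
print(np.median(x))   # 1.0, unchanged by the outlier
```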
Re: (Score:3)
I like that little poke at journalists:
In other words, it's not just journalists who fall for the mistake; so do educated people.