Science and the Shortcomings of Statistics 429
Kilrah_il writes "The linked article provides a short summary of the problems scientists have with statistics. As an intern, I see it many times: Doctors do lots of research but don't have a clue when it comes to statistics — and in the social science area, it's even worse. From the article: 'Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.'"
Lies, Damned Lies, and Statistics. (Score:5, Informative)
In other news, math may not lie but people still can, all the honesty and good statistics in the world doesn't help end-user stupidity, and there are statistically two popes per square kilometer in the Vatican.
Re: (Score:3, Informative)
Re: (Score:2, Insightful)
in the end, it's only a problem if the person listening is an idiot...
Re:Lies, Damned Lies, and Statistics. (Score:5, Funny)
Re:Lies, Damned Lies, and Statistics. (Score:4, Insightful)
The problem is that a lot of people believe statistics produced by an expert such as a doctor. Sir Roy Meadow [meactionuk.org.uk] had people sent to prison, and lots of children taken away from their parents, by misinterpreting statistics.
Re: (Score:3, Interesting)
I saw a fascinating presentation by an eminent professor of physics on what Meadow did wrong. It boiled down to misapplying Bayes' theorem. Meadow had got an extremely high probability of the accused being guilty out of it; what the professor did was point out that (a) the probability put in for the chance of two babies dying couldn't be obtained by simply multiplying the chance of one dying by itself, as the events may not be independent, and (b) that would be rather moot because the accused's chance of having 2 dead ba
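The prosecutor's fallacy at the heart of the Meadow case can be sketched in a few lines. Apart from the famous 1-in-73-million figure, the numbers below are purely illustrative, not estimates from the actual case:

```python
# A toy Bayes'-theorem calculation. Even granting Meadow's (flawed)
# 1-in-73-million figure for two natural cot deaths, the posterior
# probability of guilt depends on the prior, which the "73 million"
# headline ignores entirely.

p_deaths_given_innocent = 1 / 73_000_000   # Meadow's squared figure
p_deaths_given_guilty = 1.0                # two deaths certain under the guilt hypothesis
prior_guilty = 1 / 1_000_000_000           # hypothetical prior: double infanticide is itself very rare
prior_innocent = 1 - prior_guilty

# Bayes' theorem: P(guilty | two deaths)
evidence = (p_deaths_given_innocent * prior_innocent
            + p_deaths_given_guilty * prior_guilty)
posterior_guilty = p_deaths_given_guilty * prior_guilty / evidence

print(posterior_guilty)  # well under 10% with these inputs, not "near certainty"
```

The point is only the mechanics: a tiny P(evidence | innocent) is not the same thing as a tiny P(innocent | evidence).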
Re:Lies, Damned Lies, and Statistics. (Score:4, Funny)
As with everything, xkcd delivers [xkcd.com]. My personal favorite :)
People often get caught assuming that Correlation == Causation.
Re:Lies, Damned Lies, and Statistics. (Score:5, Funny)
I'm actually at a scientific meeting and saw 7 presentations in which they "double dipped" on their statistics before we broke for lunch.
The use and abuse of statistics. (Score:4, Interesting)
I'm actually at a scientific meeting and saw 7 presentations in which they "double dipped" on their statistics before we broke for lunch.
Double-dipping is bad enough, but the medical field is rife with multiple-dipping. Each dataset is plumbed to test dozens of hypotheses without appropriately adjusting the acceptance criteria. Even with separate datasets, if you test 20 hypotheses and find each one just significant at the 95% confidence level, there is a very good chance that some are false positives. In the medical alleged-sciences, however, all 20 would be blindly proclaimed as truth.
And then there are the social nonsenses^W sciences... If practitioners of some discipline do not understand how to use quantitative methods, they should limit themselves to qualitative argument only. Unfortunately, in statistics as in other fields, those who are ignorant or incompetent are generally unaware of the extent of their ignorance and incompetence.
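The arithmetic behind the 20-hypotheses point above is a one-liner:

```python
# With all 20 null hypotheses true and independent tests at alpha = 0.05,
# the chance of at least one spurious "discovery" is large.
alpha = 0.05
n_tests = 20

p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(round(p_any_false_positive, 2))  # about 0.64

# A Bonferroni correction keeps the family-wise error rate near 5%
# by shrinking the per-test threshold:
bonferroni_alpha = alpha / n_tests
p_any_corrected = 1 - (1 - bonferroni_alpha) ** n_tests
print(round(p_any_corrected, 3))  # about 0.049
```

So a batch of twenty "95% significant" results carries roughly a two-in-three chance of containing at least one false positive, absent any correction.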
Re:The use and abuse of statistics. (Score:5, Insightful)
Has it ever been demonstrated that social scientists have a worse understanding of statistics than physical scientists? I ask because my observations are the opposite. The physical scientists run a t-test and declare the matter resolved (significant or not significant). Given the complexities of the social sciences, social scientists check the assumptions required to use a test (e.g., normality) and have a good understanding of the statistics involved. (The obligatory exception is statistical genetics: physical science with a solid statistical basis.)
Re:Lies, Damned Lies, and Statistics. (Score:5, Funny)
That since a dead clock is right twice a day, those two times cause the clock to work again?
No, the clock is right all of the time, it just shows local sidereal time and is often in the wrong place
Re: (Score:3, Informative)
The actual truth, as usual, is a bit more complex than the bit we all remember and quote.
Where a correlation occurs there are four distinct possible reasons:
Say there is a correlation: during the time when X is known to have increased, Y showed a correlated increase. Then:
1) It is possible that this is because X caused Y - the causation that way isn't implied, but yes, it's a possible implication.
2) It is possible that Y in fact caused X (i.e. the causation is in fact in the opposite direction of what the qu
Re:Lies, Damned Lies, and Statistics. (Score:5, Funny)
Indeed. For example: 6 out of 7 dwarves aren't Happy.
The problem is statisticians (Score:5, Insightful)
Usually (in science at least) it's not even a matter of lying. Part of the problem is that the multi-headed monster that statistics has become has a tendency to lead people to over-use numerical "answers" vomited up by stats packages, without really understanding what they are for, or how to interpret them.
Statistics are very useful for predicting certain things, but all too often they are submitted as "proof" of a given condition, which is dangerous. Sometimes we need to throw away statistics and start applying common sense.
Re:The problem is statisticians (Score:5, Interesting)
Actually, one of the most dangerous uses of statistics is exactly predicting with them inappropriately. Curve fitting is especially prone to this error- attempting to make any predictions outside of the central mass of the points used to *produce* the curve is completely bogus, and yet people do it all the time.
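A quick sketch of how badly curve fitting extrapolates outside the central mass of the points (hypothetical data, using numpy's polyfit for illustration):

```python
import numpy as np

# Hypothetical data: exponential growth sampled at x = 1..10.
x = np.arange(1.0, 11.0)
y = np.exp(x)

# A cubic least-squares fit tracks the data tolerably inside the range.
coeffs = np.polyfit(x, y, deg=3)

# "Predicting" at x = 20, far outside the fitted range, misses the true
# value by orders of magnitude: the cubic simply cannot follow the growth.
outside = np.polyval(coeffs, 20.0)
true_outside = np.exp(20.0)
print(outside / true_outside)  # a tiny fraction of the true value
```

The fit looks fine where the data lives; the moment you leave that region, the model tells you about the polynomial, not about the process.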
Re: (Score:3, Funny)
Here's a good example (credits to Nassim Taleb and his "The Black Swan" book) on the risks of extrapolation (of which curve fitting is one method): ...
- Based on previous experience, a turkey will confidently predict that it will wake up every morning, be fed during the day, and go to sleep in the evening. It can easily extrapolate this from the fact that it has happened every day of its life. At some point before Christmas this turkey is going to have a big surprise
Re: (Score:3, Funny)
*Ahem* Cue carbon dating.
To be fair, the problem with carbon dating is not merely curve fitting. A larger problem is that when God created the universe in Oct 4004 BC (or thereabouts), He created Adam with a belly-button.
Re:The problem is statisticians (Score:4, Funny)
I feel somewhat vindicated for being no good at econometrics when I see where the people who were good at it have landed us.....
Re:The problem is statisticians (Score:4, Insightful)
Many times the answer that "just can't be right" is; the problem comes when we "throw away the statistics" instead of figuring out why and how it gave the answer it did.
I've adopted in my life a truism I learned from my flight training: deal with things as they are, not how we would wish them to be.
In my work in network security, I often come across oddities, which I present to management. These can lead to some uncomfortable episodes, and management sometimes wishes to just sweep them under the rug instead of addressing the problems. Now that we have a newly upgraded IDS, we're seeing things that we never noticed before, and I suspect that we're going to be getting new guidelines on what is important.
I hope that's just cynicism leaking through the rum, but I've been there long enough to think it might be reality instead.
Re: (Score:2)
While we're at it, stay away from hospitals! Most people in civilised countries die there rather than anywhere else!
Re: (Score:3, Funny)
In other news, researchers proved that water causes cancer. 100% of the cancer patients that died in 2009 drank water regularly.
Re: (Score:3, Funny)
Not only that, but it is also the key ingredient in most of today's problems. It's the core element of acid rain, it's a main ingredient in beer and many other alcoholic beverages that cause families to break apart, you find it in fattening food and it is the main ingredient in all carbonated soft drinks.
Consuming that stuff might also lead to antisocial behaviour, as it has been confirmed that all murderers, gunmen and even terrorists have consumed it pretty much all their life. When are we going to ban t
Summery? (Score:5, Funny)
It's not just statistics that people have a problem with...
Re: (Score:2)
Re:Summery? (Score:4, Funny)
What's that law about spelling/grammar corrections inevitably having spelling or grammar mistakes in them?
Re: (Score:3, Informative)
That would be Muphry's law [wikipedia.org].
For details on Muphry's law, click on the above hyperlink. For more fun laws, click on the below hyperlink.
More fun here. [wikipedia.org]
Re:Summery? (Score:5, Informative)
Re:Summery? (Score:5, Funny)
Only if the sentence misspells Hilter.
Example: Standard Deviation (Score:5, Interesting)
Re: (Score:2, Insightful)
As a statistics teacher (HS / tech school level), this doesn't surprise me in the least. Statistics and statistics education have become a giant game of "plug the numbers in and damn the understanding". When a student has never calculated a standard deviation by hand, how can they be expected to know what the heck a root mean square deviation from the sample mean really is?
Going further, I would say that statistics is a tool for answering questions. Like any other tool, it works well for some jobs and not
Re:Example: Standard Deviation (Score:5, Interesting)
Doctors are notoriously bad with statistics. But the real kings of bad statistics are psychiatrists.
Notice how a LOT of studies in psychiatry are essentially statistics, statistics and a bit of statistics? It might be the reason why a lot of the courses you have to pass to become a shrink also consist of a lot of statistics, statistics... you get the idea.
NOBODY who decides on psychiatry as their course of study does so because they enjoy statistics that much, though. Actually, most psych students struggle badly with statistics. Psychiatry is one of the fields where the label doesn't match the contents. It looks like you're going to do a lot of messing with people's minds (aka "solving their psychology problems"), but actually, judging from the courses, you become a refined statistician who had a bit of counseling tutoring on the side.
That's not what people become shrinks for, though. They want to sit in their office, put people on their couch (or, more modern, in a comfy chair) and get 100 bucks an hour for listening to some idiot whine. And most do just that and will do fine.
It gets bizarre when they somehow end up in a spot where they have to rely on their statistics. Hey, you got a masters in that, and that entails a buttload of statistics, so you can do it... Nobody really cares that 9 out of 10 somehow managed to get their diploma either by learning only what they absolutely needed (and forgetting it right after the test, certain that they'd never need it again, because... ya know, listening to idiots and stuff, not sitting there plotting standard deviations...) or by cribbing altogether.
And then you get studies of the usefulness of psychotropic drugs and wonder whose black hole they pulled that out of...
Re: (Score:3, Interesting)
And then you get studies of the usefulness of psychotropic drugs and wonder whose black hole they pulled that out of...
Indeed. Normally I would never cite an article in a McNews magazine like Time or Newsweek, but I found this explanation of the state of antidepressant drug efficacy to be one of the best I've run across so far: hundreds of billions of dollars all depending on some really, really bad math. It's like the collateralized debt securities of the drug & psychiatric industries:
http://www.newsweek.com/id/232781 [newsweek.com]
Re: (Score:3, Insightful)
Except, if you had read this story, you would have found that the antidepressant = placebo story to be incorrect due to poor statistical reasoning:
"Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to d
Re:Example: Standard Deviation (Score:4, Informative)
You're mixing up psychiatrists, psychologists and psychotherapists.
A psychiatrist went to med school, got a doctors degree and specialized in problems with the brain. A psychologist went to university to learn the study of behavior of people. This involves a lot of statistics and many of them probably do consider it something they didn't go to college for, but it's a study that is supposed to follow the scientific method and prepare students for doing research, not therapy.
A psychotherapist is anyone who feels like calling themselves that. As a preparation they may have studied psychology at university, or they may have spent 20 years meditating in the Himalayas, or followed a short course at a religious group such as an institute of multiple personality disorder therapists or scientology.
Re:Example: Standard Deviation (Score:4, Interesting)
Back when I was in graduate school, my colleagues and I in the graduate sciences taught pre-med chemistry and physics, a really watered-down version of the chemistry and physics taught to engineers and science majors. To be honest, I thought it was kind of scary. All these years I was taught that medical students were supposed to be the best and the brightest, but we spoon-fed them "baby chemistry" and "baby physics".
Since that time I have had many discussions with professors about this, and we have come to the same conclusion: "the best and the brightest do not go into medical school". Thirty or forty years ago it may have been true that they did, but economics has taken a turn and it just isn't the case anymore.
And why would they? They can make more money on Wall Street, they don't have to hassle with the bureaucracy of health insurance, and they don't have to hassle with lawyers, so why would the best and brightest go into medicine?
And do you want to know what kind of income a hot little girl with a business degree can get? Pharmaceutical sales can pay six figures for one good figure. So the next time you see that good-looking girl pulling that bag through your doctor's office, realize she is probably making a lot of money. More money than the average general practitioner.
Re:Example: Standard Deviation (Score:5, Informative)
Re: (Score:2, Interesting)
I certainly don't remember how to do all those statistics calculations by hand, but I use SAS and Excel almost every day, and they don't seem to have forgotten. Give me a few more years and I might be at the point where I wouldn't be confident trying to explain what a standard deviation actually "is".
Re:Example: Standard Deviation (Score:5, Insightful)
Re:Example: Standard Deviation (Score:5, Insightful)
Re:Example: Standard Deviation (Score:5, Interesting)
There are some things you should never be able to forget... Do people forget basic definitions so easily?
Given a couple of years with little contact with people who speak your native language, you'll actually begin to forget the very language you have spoken all your life. So it doesn't surprise me at all that people would forget basic definitions if they don't actually think about those definitions very often.
I figure if you can forget your native language then pretty much all bets are off for the stuff you've known for a lot less time and used a much smaller percentage of your thinking life.
Re:Example: Standard Deviation (Score:4, Insightful)
There's a reason why you keep getting modded up and those disagreeing with you keep getting modded down.
You're exactly right. Modern diagnostic medicine is predicated on interpreting statistical studies to make diagnoses. It is practically incompetence for a practicing medical doctor to not know what standard deviation means.
Re: (Score:3, Insightful)
I believe that forgetting something usually means you never really understood it. I don't think that if you really understand something, you'll ever forget it. There are things I do rather rarely yet I don't forget them because I understand them and could re-derive them from first principles.
Usually if I forget something, it means I never quite understood it in the first place.
I think that real understanding implies almost indefinite retention, and lack of retention can be usually be explained by lack of un
Re: (Score:3, Informative)
Re: (Score:2, Informative)
sigma = population standard deviation = sqrt(sum((x - mu)^2) / N), where mu is the mean
s is approximately equal to (highestValue - lowestValue) / 4, the range rule of thumb
Unusual values lie outside +/- 2 standard deviations
Z = (x - mu) / sigma, where Z is expressed in standard deviations
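The same definitions, as a minimal sketch in code (the dataset is a standard textbook example):

```python
import math

def population_sd(xs):
    """Root mean square deviation from the mean: sqrt(sum((x - mu)^2) / N)."""
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

def z_score(x, mu, sigma):
    """Number of standard deviations x lies from the mean."""
    return (x - mu) / sigma

data = [2, 4, 4, 4, 5, 5, 7, 9]   # mean is 5
sigma = population_sd(data)
print(sigma)                       # 2.0
print(z_score(9, 5.0, sigma))      # 2.0 -> right at the "unusual" +/- 2 sigma boundary
```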
Re:Example: Standard Deviation (Score:4, Insightful)
Standard deviation is what you learn very early in school.
So early, in fact, that you forget the details by the time you have some serious study under your belt. Do you have any idea of the stuff you have to keep in your head to be an endocrinologist? So long as he remembers that it's a measure of variance (which he obviously does), it hardly matters whether he can explain to a mathematician how to derive it. And if the OP gets off tripping up specialists with such minutiae, it ain't the specialist who has issues.
And you are telling me that it's not his "job" to know?
YMMV, but I would prefer to visit an endocrinologist who was an expert on the subject of hormones etc rather than stats.
Re:Example: Standard Deviation (Score:4, Insightful)
If MDs are reading medical journals and interpreting their results, which they are all expected to do (especially those with a board-certified specialty like endocrinology), then there is no excuse for them to have forgotten what the standard deviation is a measure of. They should be using the variance estimates provided in a data table to interpret the results it contains every time they read an article. If not, then they aren't worth the exorbitant fees they are charging, because critical thinking is part of a physician's job description, and accepting whatever gets published in the New England Journal of Medicine at face value is not.
I can accept forgetting the equation, but there is NO EXCUSE for forgetting that SD is a measure of variation (along with SEM, SED, and CV) as opposed to a measure of central tendency (mean, mode, median). That is something they teach you in the first week of a statistics course, and it is used in every subsequent class because it is so fundamental to the interpretation of statistics. If I were cytoman, I'd be looking for a new endocrinologist.
It's a tough situation (Score:2)
Actually, it's a tough situation. No real-life experimental data can 100% fit the assumptions of commonly used statistical models. Real-life data is messy, so there is always some degree of simplification. In addition, resorting to whiz-bang fancy methods that "fit" the real data may not yield easily interpretable results, and ease of interpretation is what medical scientists want. There are other issues as well, such as computing time, derivability of the equations, etc.
In addition, many many medical scientists use stat
Two weeks of six sigma classes... (Score:3, Funny)
Re: (Score:3, Funny)
Personal experience (Score:5, Interesting)
As a doctor myself, I feel I should add my $0.02...
Throughout med school we had the odd scattered lecture on statistics, and later when reading papers I used to skim over most of the maths just to look for the P value at the end (one representation of how statistically significant a result is).
However, I then took a formal stats course and was amazed at how little I understood - Monte Carlo techniques, Markov models, and even something as trivial yet important as the difference between a parametric versus a non-parametric test.
And then it struck me - most of the research I had read had applied parametric statistical tests to their data - that is, the researchers made an assumption that the underlying distribution of results would fall on a normal curve. Yet this simple assumption may be all it takes to skew the data when they should have chosen a non-parametric test instead.
So yes, stats are vitally important, badly taught, and focus too much on the maths rather than the concepts. Remember that we're doctors, not mathematicians - the last set of sums I did were in high school. If I need to analyse data, I'll probably plug it into SPSS - although now with my eyes open.
-Nano.
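The parametric-versus-non-parametric point above can be sketched in a few lines (hypothetical skewed data; scipy's shapiro and mannwhitneyu used for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A right-skewed outcome variable (hypothetical lab values); skew like this
# is common in medical data such as costs, titers, or survival times.
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Check the normality assumption before reaching for a parametric t-test.
_, p_normal = stats.shapiro(sample)
print(p_normal < 0.05)  # True: the normality assumption is untenable here

# So compare two groups with a rank-based (non-parametric) test instead.
group_a = rng.lognormal(mean=0.0, sigma=1.0, size=100)
group_b = rng.lognormal(mean=1.0, sigma=1.0, size=100)
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(p_value < 0.05)   # True: a real shift, detected without assuming normality
```

A t-test on data like this isn't automatically wrong, but the point stands: the choice of test rests on an assumption that should be checked, not defaulted.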
Re: (Score:2)
And that, my friend, is why the NIH's constant push to produce more 'physician-researchers' continues to drive me nuts. They rarely insist that K awards and other early-career training mechanisms require physicians intending to do research in areas where stats are important to actually get any stats training.
Re:Personal experience (Score:5, Insightful)
...And then it struck me - most of the research I had read had applied parametric statistical tests to their data - that is, the researchers made an assumption that the underlying distribution of results would fall on a normal curve. Yet this simple assumption may be all it takes to skew the data when they should have chosen a non-parametric test instead.
So yes, stats are vitally important, badly taught, and focus too much on the maths rather than the concepts. Remember that we're doctors, not mathematicians - the last set of sums I did were in high school. If I need to analyse data, I'll probably plug it into SPSS - although now with my eyes open.
That's a good insight. I'm a statistics professor, and some of the problems I see are a) people generally get exposed to a single course in statistics; b) they're usually mathematically unprepared for it; c) so much gets squeezed into that one opportunity that heads are exploding; d) because of (a) - (c), everybody wants you to "just give 'em the formula"; e) since statistics is so widely used, there's a plethora of courses that are being taught by people who themselves are victims/products of (a) - (d), and are very happy to "just give 'em the formula"; and so f) most people plug and chug data through a stats package with no idea of the applicability, limitations, and interpretation of the results. The sheer volume of bad analyses is enough to make you weep, and contributes to the widely held perception about "lies, damned lies, and statistics". And that completely ignores the intentional falsehoods propagated by people who are trying to support various advocacy viewpoints, and will happily mislead the public with biased samples, Simpson's paradox [wikipedia.org], invalid assumptions, etc.
Re:Personal experience (Score:4, Interesting)
Even ten to fifteen years ago, students in Statistics courses had very little computer exposure, and that of course means any practical analyses would imply the use of approximations - hence the widespread use of chi squared tests and normal distributions for everything, whether appropriate or not.
If the statistician -> textbook -> student/scientist -> textbook -> scientist process is factored in, I have no doubt that it will take another generation or two before the old style of statistics is replaced sufficiently widely to be only a memory.
Re: (Score:3, Informative)
It's the approach that says you can just pump the numbers into SPSS or Statistica and then call on a battery of tests until you get a "significant" result that leads to the kind of errors the article (and a disturbing number of /. readers) falls into.
Unless you're dealing with large samples, all z and t tests assume normality in the population, with insignificant skew or kurtosis. Yet by definition, if we have enough data to be sure we have a normal population, we have enough data that the central limit theor
Re: (Score:3, Interesting)
I think the term "Statistics" has become so general that people don't understand how complicated it can be. People think of statistics as Bob saying to Alice, "Get me the stats on this week's sales." Alice just goes digging around and gets Bob the total sales in $, # of units sold, etc. People don't understand or know of the concepts that are involved in polling; they just think they called 2,000 random people and that's it. That's statistics to the public and many college graduates.
I had to take a fe
Re: (Score:2)
Pirates cause cool weather (Score:2)
Re: (Score:2)
"Luke Skywalker's a Jedi of course;
And he's prone to have much interco
Re: (Score:2)
It's funny that you say that, because isn't global warming a statistical creature?
Fair and Balanced: Fox quotes the Bible as saying (Score:3, Funny)
that there are only 3 kinds of scientists: those that are good at math and those that aren't.
Excellent (Score:2, Insightful)
bad title (Score:5, Interesting)
It is not a shortcoming of the Copenhagen interpretation of quantum mechanics or the Chicago school of economics if I don't understand or know how to correctly interpret their results. It is my shortcoming and fault for not knowing enough to connect the dots.
I do statistical research, some of it through interacting with researchers in the biosciences. When I go to talk to a researcher and ask whether they could use some statistical, mathematical, or computational assistance with their research, it has almost always been a fruitful starting point for long conversations and getting into the research. Sometimes it was simply a matter of looking at their F-test results or ANOVA scores and telling them what they meant (as with a regression model relating proportions of certain characteristics between taxa); more useful interactions for me often mean working on new algorithms or estimators, or fitting a model from their empirical data because there isn't a reliable standard model to work from (like intergenic distance between genes in an operon). That kind of challenge makes the less engaging work worth the hassle. Maybe I'm odd because I've worked hard to have a good background in both statistics and biology, but I shouldn't be.
Although here is an observation from my own experience that perhaps supports some of the intent of the article. I was speaking with a biology graduate student, and it came up that they had a biostatistics course in the department. Of course, as a statistician my mind goes towards survival functions, failure rates, life tables, censored data, bioassay, epidemiology, microarrays, clinical trials, topics along those lines. It turned out their course focused on z-tests, t-tests, F-tests, confidence intervals, point predictions, least-squares regression, multiple regression, ANOVA, and things along these lines, just with simulated problems in a lab setting. That is not necessarily a bad thing, but much of the core math was underplayed or missing, like model assumptions and alternate formulations, or things like dummy variables. The worst part was that even though they were doing well in the class, they had no confidence in actually using the statistics and didn't understand how to interpret the meaning of something like a confidence interval; they knew how to calculate one, but it wasn't clear what it actually meant to them.
The corollary to the notion in the summary, I'd rant and claim, is that scientists overall have weaker skills in mathematics, statistics, and computation than those who studied those disciplines principally, and that's hurting science. However, many in those three disciplines really know little beyond basic results in any of the sciences, which hurts the applicability of these mathematical fields to the sciences and likely hurts our ability to develop certain types of discipline-specific results that can be generalized from work on application problems.
In either case, whether you're a typical scientist or a typical math/stat/comp person, becoming proficient enough in the other areas requires going an awfully long way out of your way compared to any counterpart who simply does not care and goes straight through as many have before. In some areas of research on either side it is no problem to do as has been done and not push into those other areas, but increasingly the results with the highest impact are coming from truly interdisciplinary research. To encourage that for those who are interested in such fields (aside from making clearer which areas in any of the fields border on such interdisciplinary work), we need more incentive to study more than one field and/or better ways of enabling fruitful cooperation between the camps.
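The confidence-interval confusion mentioned above is worth making concrete: a 95% interval is a statement about the long-run behavior of the procedure, not about any single interval. A quick simulation (hypothetical normal data, with sigma treated as known for simplicity):

```python
import math
import random

random.seed(42)

# Repeat the whole experiment many times; count how often the interval
# constructed from each sample covers the true (known) mean.
TRUE_MU, SIGMA, N, TRIALS = 10.0, 2.0, 50, 2000
Z = 1.96  # two-sided 95% critical value

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    sample_mean = sum(sample) / N
    half_width = Z * SIGMA / math.sqrt(N)
    if sample_mean - half_width <= TRUE_MU <= sample_mean + half_width:
        covered += 1

coverage = covered / TRIALS
print(coverage)  # close to 0.95: the "95%" describes the procedure's long-run hit rate
```

Any one computed interval either contains the true mean or it doesn't; the 95% is the fraction of intervals, over repeated experiments, that do.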
only in medicine (Score:5, Interesting)
Re:only in medicine (Score:4, Funny)
Physics (yes, Physics, THE hardest of hard sciences) is full of terrible mathematics, absolutely terrible, shockingly bad stuff. The good ones know it; some will say it doesn't matter because their butchery comes up with "accurate" results. If they can't even get their analysis right, what can we expect of the softer sciences? That said, physics is not so much concerned with statistics as it is with probability; nonetheless, they have some serious problems. For example, they often simply decide highly non-convergent things should converge because the experiment says they should...
The greatest tragedy in modern science (in my eyes) is the loss of physics as a hard science, currently these guys are way off with the fairies and producing nothing of worth, string theorists are the worst. We'll see what the CERN guys manage to come up with, but right now the mathematicians have taken the ball and run with it. It has been said that physics has become too hard for the Physicists...
I am not trolling, I am quite serious about Physicists playing dodgy games with mathematics.
Re: (Score:3, Interesting)
I've had my name included on several 'hard science' papers that had horrible statistical assumptions. I fought, and lost, because my professor had a big grant to maintain, and nobody else understood the underlying assumptions (we used an absolute scaling function, guaranteeing that our distribution was not normal, then tried to assume that it was normal). The second half of my thesis refutes the math in the last three papers I was on. Not one single person who read it understood it, which is sad because
Bad outcomes due to statistics? (Score:2)
From TFA:
One has to wonder, though: how much of that is due to misuse of statistics and how much is because it's paid research expected to get certain results in favour of those paying for the research?
The problem is with statistics itself (Score:3, Informative)
Statistics is changing slowly (mostly because computers and R make non-classical statistics more practical), but the way it's taught still leads to problems.
Looking for a good book on statistics (Score:4, Interesting)
I'm interested in learning the essentials of statistics. What would be a good book to start me out?
I got The Manga Guide to Statistics [nostarch.com] and it did introduce me to the very basics. However, there are many places where it just gives you an equation, without deriving it or even explaining it. After reading this book, I now know how to calculate standard deviation, but I'm still a bit vague on how people actually use it. I would like to see some examples of how people use statistics in (for example) science experiments.
My ideal book would explain the basics, with examples, and show how the math works. Ideally it wouldn't be a thousand pages long, either, but that's a secondary consideration.
Recommendations, please?
P.S. Those of you who know about statistics: how good are the Wikipedia pages on statistics?
steveha
Re: (Score:3, Informative)
Devore's Probability and Statistics for Engineering and the Sciences is probably the best one-volume, undergrad-level intro to statistics out there. Get a copy (I think it's on the sixth or seventh edition now; you can pick up a fifth edition for cheap) and work your way through that, and you'll have a pretty good idea of where all those formulae come from and how they're used. Get a copy of R [r-project.org] and check out the "Devore*" packages in the package list [r-project.org] too. If you want to learn more after that, I recommend
MY common conversation (Score:5, Informative)
The largest demographic in American prisons is black Americans. A real statistic, but is it true?
Given a particular sample indicating that blacks are 60% of the prison population, this would appear to be true.
But what if I said, "The largest demographic in prison is non-white minorities"? Suddenly the percentage jumps from 60% (black) to 80% (minority). Which is more right? This is the problem with statistics: context.
Now I can say readily that the largest demographic in prison is actually right-handed people. The % now jumps to 90%.
But wait! There is more! The largest demographic in prison is actually people who, prior to arrest, were below the poverty line, which jumps to 99% of the population. Again, all of the above are accurate based on a sample, but which is MORE correct? Linear algebra comes into play here quickly....
When that kind of issue comes into play, it is the classic "correlation != causation" confusion. Is the majority of people in prison there because of being black? Being a minority? Being right-handed? Being poor? None of the above. The majority of them are there because they were convicted of a crime and sentenced. That is the cause of their imprisonment; the rest is correlation, which may directly influence the conviction or sentencing but has no direct causal effect on being in prison. (E.g., you cannot be thrown into prison for being poor, black, a minority, or right-handed.)
Same with medical research, politics, economics, etc.: the price of oil rises 10% and shipping orders subsequently drop 5%. Measuring the significance of regressors is important, yet oddly it usually goes unreported. Many factors get masked or shadowed by higher-level regressors (e.g., "minority" masks a variety of other social and economic factors, and it can distort statistical work by being too broad: Asians face a different set of economic and social factors than North American blacks, or even African immigrants).
Back to the original subject:
We can take 100 prisoners and 100 non-prisoners and figure out rather quickly whether being black is statistically significant in the prison population. Blacks account for 25%-45% of the non-prison population (depending on location), while 60% of prisoners are black. That is a 20+% deviation from the norm, and we can test its significance. Same with minorities. We also quickly find that right-handedness is insignificant because it doesn't deviate from the norm: comparing left-handed and right-handed populations rules out the handedness of a convict as significant.
We can find that economic status is considerably MORE significant than minority or black status. We can determine that the reason minorities and blacks are disproportionately prevalent in prison is that they have higher rates of poverty. We can extract the statistical weight of POVERTY with regard to imprisonment (since we find a higher percentage of poor whites in prison than in the normal population). Once we figure that out, we can remove it and continue the investigation, figuring out what weight MINORITY and BLACK carry once POVERTY is out of the model (residual analysis).
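The two-sample comparison described above can be sketched in a few lines of Python. The counts are invented for illustration (60/100 prisoners vs. an assumed 35/100 in the comparison sample, inside the 25%-45% range mentioned above); this is a plain two-proportion z-test, not an analysis of any real dataset:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: does group 1's rate differ from group 2's?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# 60 of 100 prisoners vs 35 of 100 non-prisoners: the 20+% deviation is significant.
print(two_proportion_z(60, 100, 35, 100))
# 90 of 100 vs 89 of 100 right-handed: no deviation from the norm, insignificant.
print(two_proportion_z(90, 100, 89, 100))
```

The first comparison yields z above 3 (p well under 0.001); the second yields z near zero, so handedness drops out, exactly as argued above.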
The problem in reporting is that without the whole, comprehensive analysis you can miss important things. For instance, to correct injustice in sentencing: without reporting the weight POVERTY carries in contrast to BLACK or MINORITY, you may lose sight of the fact that you might have better success normalizing sentencing by addressing POVERTY rather than MINORITY or BLACK (or not).
The same happens in medical research. Given a cocktail of drugs, without the whole analysis you may end up prescribing more of Medicine A versus B while losing sight of the fact that A and B are limited by the dosage of Medicine C.
Statistics are not bullshit, but merely observations with no intrinsic agenda or even implication of truth. Purely amoral, like a handgun: useful to both the good and the evil.
Statistics don't lie, nor do they tell the truth. They simply show the relationship of the data as it stands. The truth or truthiness of it is subjective and vulnerable to context.
Re: (Score:3, Funny)
Re:Its common knowledge (Score:4, Insightful)
And 77.335% of all statistics claim more accuracy than their expected deviation warrants.
Re:Maths anxiety (Score:5, Informative)
Re: (Score:3, Funny)
Re: (Score:2)
The entire article can be summed up by the tiresome cliche "correlation != causation".
That misses a lot of the problem. For example, observer bias through poor statistical design of the experiment or throwing out data can cause the appearance of correlation or causation in data that isn't so.
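A toy illustration of that second failure mode, with invented data: two genuinely unrelated variables appear correlated once the inconvenient points are thrown out.

```python
import random, statistics

random.seed(1)
# Two genuinely independent variables; any correlation is pure noise.
x = [random.gauss(0, 1) for _ in range(200)]
y = [random.gauss(0, 1) for _ in range(200)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

print(f"all data:      r = {corr(x, y):+.2f}")   # near zero

# "Clean" the data by discarding points that disagree with the hypothesis
# that x and y move together -- outcome-driven exclusion of "outliers".
kept = [(a, b) for a, b in zip(x, y) if a * b > -0.1]
xs2, ys2 = map(list, zip(*kept))
print(f"after pruning: r = {corr(xs2, ys2):+.2f}")  # clearly positive
```

No bad math anywhere: every number is computed correctly, yet the pruned correlation is an artifact of the exclusion rule, not of the data.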
Re: (Score:3, Informative)
Re: (Score:2)
The entire article can be summed up by the tiresome cliche "correlation != causation"...
The logical fallacy is called "post hoc, ergo propter hoc" - "after this, therefore because of this".
Sort of like - I get a headache every time someone turns on the television, therefore headaches are caused by the television.
Oh, hang on...
Re: (Score:2)
science is not in the business of proof
So what is it in the business of?
What it actually said (Score:5, Informative)
And lots of others. It then suggests Bayesian reasoning as an alternative to traditional statistical tests.
Most post-PhD scientists are aware of the common mistakes, but being aware that we make mistakes doesn't necessarily stop us from making them. If you choose a random set of conference proceedings, it is almost certain you will find at least one paper (and I suspect usually a dozen or more) with statistical mistakes in it.
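One Bayesian point along the lines the article suggests can be shown in a few lines: even with every test performed correctly, "p < 0.05" is not a 95% chance the effect is real. The base rates below are invented purely for illustration:

```python
# Assumed, illustrative numbers -- not from the article.
prior_true = 0.1   # fraction of tested hypotheses that are actually true
power = 0.8        # chance a real effect comes out significant
alpha = 0.05       # chance a null effect comes out significant anyway

true_hits = prior_true * power           # real effects flagged significant
false_hits = (1 - prior_true) * alpha    # null effects flagged significant

# Bayes' rule: P(effect is real | result was significant)
p_real_given_sig = true_hits / (true_hits + false_hits)
print(f"{p_real_given_sig:.0%} of 'significant' findings are real")  # 64%
```

Under these (hypothetical) base rates, more than a third of correctly computed "significant" results are false positives, which is one way countless published conclusions end up erroneous.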
Re:What it actually said (Score:5, Insightful)
The result, repeatedly proven mathematically and by experience, is that the magic number is always Signal-to-Noise-Ratio. You can't get good information from crappy, scant, data.
Humanities and social-"science" types, and unfortunately the med-school set, are by and large composed of people with varying degrees of pathological fear of mathematics, computation, and computer programming. I'd be willing to bet that a largish portion of even the post-PhD scientists who 'know' how to make the proper calculation for a statistical test don't really understand the physical meaning of the numbers they're copying and pasting in and out of Excel.
When your attention and skill set are focused on looking through a microscope, cutting up lab rats, or synthesizing chemicals, you probably never have the experience of being up to your eyeballs in noise estimates and P_FA's, which bludgeons in the fact that your data really sucks because it's too noisy. You never need to answer fundamental questions like 'what's the probability that the ruskies will fire off a missile and this radar won't see it'/[insert biologically relevant example here], which *requires* learning the right way to do statistics.
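The radar question above is a standard detection-theory calculation. A rough sketch for a known signal in Gaussian noise follows; the SNR convention and all numbers here are assumptions for illustration, not a real radar spec:

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_inv(p):
    """Inverse of Q via bisection (illustrative, not optimized)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if q(mid) > p:   # Q is decreasing, so the root is to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Budget the false-alarm rate first, then see what detection the SNR buys.
p_fa = 1e-6
threshold = q_inv(p_fa)     # detection threshold, in noise sigmas
snr = 10 ** (10.0 / 20)     # 10 dB amplitude SNR (assumed convention)
p_d = q(threshold - snr)
print(f"P_D = {p_d:.3f} at 10 dB SNR with P_FA = {p_fa}")
```

At this false-alarm budget a 10 dB signal is detected only about 6% of the time, which is exactly the "your data really sucks because it's too noisy" moment described above.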
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Good summary, but I call bullshit on the article. Most of the problems you mention and the others in the article are common popular misinterpretations of statistical results, but that doesn't mean they're common mistakes made by researchers in the studies themselves. Any rookie peer-reviewer would spot them immediately if they ever make it into a manuscript.
This doesn't mean that there aren't a lot of bad statistics-based studies out there, especially in medicine. But the problems are usually much more s
Re: (Score:3, Interesting)
Fair point. I only skimmed TFA, but I still stand by my assertion that it's a troll of the "scientists don't understand statistics" genre; it even starts by claiming statistics is a "mutant form of math". Had they omitted that drivel and not referenced discredited papers, then maybe I would have read the whole thing.
Re: (Score:2)
Re:Long winded troll (Score:4, Insightful)
Actually the subtler issue here has nothing to do with statistics, they are implying peer-review does not work.
"Peer review" is another of the things that has been over-sold to the public. A science research group spends six months and a hundred thousand dollars conducting a study using highly specialised equipment. They submit a paper to an academic conference or a small journal. It gets put out to review by three people who each spend about four hours reading and reviewing it, and who usually do not have access to the equipment or the original data used in the study. Do you really think we're likely to catch every mistake at review? We certainly can't check the stats (except for the most egregious errors) because we don't have the full data tables they analyzed.
Scientists actually accept that inevitably some incorrect results will be published. More often in the smaller conferences than in the most prestigious journals, but even the journals have to publish a retraction every now and then. We also accept that most studies are never repeated, and so the "objective repeatable experiment" is rarely really tested for being either objective or repeatable. However, science has long had the "many eyes" effect at work. There are hundreds of thousands of scientists reading papers and using them in our own experiments. If some theorised effect out there is wrong, usually we'll find out eventually.
Re:Long winded troll (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
this is why people now consider master's degrees to be the equivalent of a high school diploma.
if you want real fun, take the average master's-degree idiot and have them manually add fractions without converting them to decimals, such as adding a bunch of measurements off a tape measure together.
hilarity ensues....................
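For what it's worth, the tape-measure exercise is a few lines of exact rational arithmetic in Python (the readings are invented):

```python
from fractions import Fraction

readings = ["3 1/2", "5 3/8", "2 5/16"]   # inches, illustrative values

def parse_inches(s):
    """Parse a mixed number like '5 3/8' into an exact Fraction."""
    whole, _, frac = s.partition(" ")
    return Fraction(whole) + (Fraction(frac) if frac else Fraction(0))

total = sum(parse_inches(r) for r in readings)
whole, rem = divmod(total.numerator, total.denominator)
print(f"{whole} {Fraction(rem, total.denominator)} in.")  # 11 3/16 in.
```

Everything stays in sixteenths, so no rounding creeps in the way it does when you convert to decimals first.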
Re:No surprise here (Score:5, Funny)
I think your example would be more persuasive if it involved algebra, though.
Re: (Score:3, Informative)
It's perfectly reasonable that someone use a calculator for sales tax (if an exact answer is desired).
Also, sales tax is multiplication - not algebra.
Re: (Score:2)
And what are we supposed to make of your post where your supposed case for people not knowing algebra has nothing to do with algebra?
Re:No surprise here (Score:5, Insightful)
You are a jerk.
You are insulting your sister because she is bad at mental math? It is a skill, and one not required for extensive knowledge of the social sciences. Additionally, maybe sales tax is simple in your state, like 10%, but where I live it is 4.5%, which is not always easy to get exactly right in your head.
I had a roommate who was brilliant, funny, a singer, and an artist, and yet he couldn't calculate a tip to save his life; I certainly don't hold that against him.
Re: (Score:2)
I have a friend who is doing a PhD in maths, and he can't do basic arithmetic to save his life. It's a redundant skill for pretty much everyone.
Re: (Score:3, Interesting)
Re: (Score:2)
Given that sales tax varies by type of purchase in some states, and is an odd number like 6.5% in others, it can vary quite a lot. And oh, my dear lord, try dealing with value-added tax in Europe....
Re: (Score:3, Funny)
I don't have to be a statistician to know that the above post is 97% bullshit.
Re: (Score:2)
You think I'm full of it? Wait till you hear professors at seminars, making up whatever theories they like. I've witnessed professors from household-name schools acting like this.
Re: (Score:2)
On the other hand, plenty of very smart physicists, mathematicians, etc. have approached medicine spouting much the same rhetoric as you. They very quickly became embarrassed when they tried to apply their fanciful theories to medicine. If you have a better idea of how to tell correlation apart from causation in a medical context, let them know.
Re:Statistical assumptions are often ignored (Score:4, Informative)
and IAAB (biologist), and I can tell you that most scientists don't have access to statisticians or don't have the grant money to pay for them. I also don't have time to learn SAS and code my own tests, so I use tools like SPSS or Genstat (both of which do allow you to code your own tests as well). The fact that they are easy to use says nothing about whether I understand the tests, the assumptions, or their results. I would say my grasp of stats is above average for my peer group, below where I would like it to be, and obviously limited.
One thing that is interesting to me is that throughout my education and career I have been warned off multiple means comparisons, and LSD (least significant difference) in particular (I understand why, and have avoided the former where I can and the latter always). Yet the only actual statisticians I have dealt with in recent years have recommended I use LSD for means comparisons with tens of means. I would be hard pressed to publish those results.
In summary, whilst statisticians like to blame easy-to-use stats programs for bad stats, the reality is they are just a tool, and if statisticians can't agree on the acceptable use of the simplest procedures, I'm not sure what chance the rest of us have of getting it right.
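The multiple-comparisons worry with LSD is easy to quantify. With tens of means, unprotected pairwise tests at the usual 5% level almost guarantee a spurious "significant" difference; the bound below assumes the comparisons are independent, which is only roughly true, so treat it as a sketch:

```python
from math import comb

k = 10                          # number of group means (illustrative)
m = comb(k, 2)                  # 45 unprotected pairwise comparisons
alpha = 0.05
fwer = 1 - (1 - alpha) ** m     # chance of at least one false positive
print(f"{m} comparisons -> familywise error rate ~ {fwer:.2f}")  # ~0.90
```

So with ten means and no protection, a "significant" pairwise difference is expected about 90% of the time even when all groups are identical, which is why the warnings against bare LSD exist.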
Re: (Score:3, Insightful)