Psychology's Replication Battle
An anonymous reader sends this excerpt from Slate:
Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. ... Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable. ... Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.
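A back-of-the-envelope Bayes calculation makes the last claim concrete. The prior probabilities, significance level, and power in this Python sketch are illustrative assumptions, not numbers from the article:

    # Sketch: why surprising positive findings are less likely to be true.
    # PPV = P(effect is real | test came out positive), via Bayes' rule.
    # alpha = false-positive rate, power = P(positive | real effect),
    # prior = P(hypothesis is true before the study) -- all assumed values.

    def ppv(prior, alpha=0.05, power=0.8):
        true_pos = prior * power
        false_pos = (1 - prior) * alpha
        return true_pos / (true_pos + false_pos)

    # A mundane hypothesis (half of those tested are true):
    print(f"prior 0.50 -> PPV {ppv(0.50):.2f}")   # ~0.94
    # A surprising hypothesis (one in fifty tested is true):
    print(f"prior 0.02 -> PPV {ppv(0.02):.2f}")   # ~0.25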
Freud's problem too (Score:2)
Re:Freud's problem too (Score:5, Insightful)
When psychologists stop producing so many studies with obvious bias, subjective terminology, subjective conclusions, and stop arbitrarily coming to conclusions based on data flawed for those reasons, maybe it could be taken seriously. Obviously, replication is needed, too.
But so many people are fooled by it. Want a study that says video games cause people to be aggressive? There's a psychology study for you, but there's also one for your opponents. And all of them are bad science.
Re:Freud's problem too (Score:5, Insightful)
Yup, like the recent one about men not being able to 'be alone with their own thoughts' [washingtonpost.com].
That same data can also read 'Men, more willing to put up with pain' or 'Men, more curious and want to know what they may experience'
Re: (Score:1, Offtopic)
Yup, like the recent one about men not being able to 'be alone with their own thoughts' [washingtonpost.com].
Yeah, note it was already discussed here too [slashdot.org].
That same data can also read 'Men, more willing to put up with pain' or 'Men, more curious and want to know what they may experience'
Perhaps the one thing more common than flawed social science experiments is Slashdot commenters who think they can find flaws but haven't actually read the paper or thought about it.
This "same data" really CAN'T be "read" that way: the researchers specifically asked the subjects to experience the shock FIRST (so we can't assume they were just curious). And that stage of the study specifically excluded those who weren't seriously offended by the shock (they only
Re: (Score:2)
Easy to measure versus important (Score:4, Insightful)
Perhaps they need some therapy :-)
Software engineering has a similar problem. Things that are objective to measure, such as code volume (lines of code), are often only part of the picture. The psychology of developers (perception, etc.), especially during maintenance, plays a big role, but is difficult and expensive to objectively measure.
Thus, arguments break out about whether to focus on parsimony or on "grokkability". Some will also argue that if your developers can't read parsimony-friendly code, they should be fired and replaced with those who can. This gets into tricky staffing issues as sometimes a developer is valued for their people skills or domain (industry) knowledge even if they are not so adept at "clever" code.
Thus, the "my code style can beat up your style" fights involve both easy-to-measure "solid" metrics and very difficult-to-measure factors about staffing, side knowledge, people skills, corporate politics, economics, etc.
Re: (Score:3)
Completely different situation. In programming, discussions are about how to optimise the processes involved; the problem with psychology is that they aren't sure whether they're working on computers or on breakfast cereal boxes with a few rectangles drawn on them. The main value that psychologists bring to the table today is to fulfill the role of that good friend who isn't afraid to lay out a few home truths. Of course if you already have such a friend, the need to attend a psychologist is naturally obviated...
So, I'm j [arachnoid.com]
Re: (Score:2)
Completely different situation. In programming, discussions are about how to optimise the processes involved
But the discussion is based on nonsense. "Separation of Concerns" is a Good Thing, right? Who says? Gang of Four patterns are the proper approach, right? Why?
Re:Easy to measure versus important (Score:4, Funny)
should both validate the idea
Over the years we've heard that a good Waterfall process was the magic bullet, with Data Flow Diagrams documenting everything before a line of code is written... No wait, it's Object Oriented Analysis/Design that will save the day... but no, that didn't work either - but Service Oriented Architecture is the way to go. The latest fad is whatever book sold well recently; none of it is based on any metrics or real science.
Re: (Score:2)
The main value that psychologists bring to the table today is to fulfill the role of that good friend who isn't afraid to lay out a few home truths.
Actually, I don't think so. In my limited experience, there seems to be a large methodological bias towards non-directed therapy. Let the patient talk. Maybe ask leading questions, but no trace of "lay out a few home truths", at least about the patient himself.
Re: (Score:2)
As you said, "limited experience". That is one (or a few related) schools of psychology. Others are much more directive.
OTOH, nobody goes around "laying out a few home truths", because that is counter-productive. (Some psychologists don't seem to do better than random, but they all avoid known bad choices...like "laying out a few home truths".)
Re: (Score:2)
Also, it's unlikely that you'd be willing to be *completely* honest with your friend. Having an objective person totally separate from the scenario is valuable.
Re: (Score:3)
Because drastically different styles can end up with good code, I see that as a sign that we as programmers haven't figured out the elem
Coding at that level becomes art (Score:2)
Re: (Score:2)
There's a basic foundation that's roughly agreed upon, delineated by rules and best practices.
What rules and best practices are those? As far as I can tell, everyone has their own set.
Re: (Score:2)
There's a very basic level of hygienic measures that are taught to first graders and that nobody disagrees with. Things like: don't overuse global variables, don't build one-mile-long procedures, avoid spaghetti code by banning goto, declare the types of your parameters in C.
For other rules of style, yes, every house has their own rulebook.
Re: (Score:2)
because it's wishful thinking (Score:1)
Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.
Because it's always wishful thinking and the 'findings' are always BS. About time it's called out for the non-science nonsense that it is.
"less likely to be accurate" (Score:4, Funny)
That's a surprise.
WTF? (Score:5, Insightful)
it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable
Duh. That's because an experiment that is not replicable has *no* value.
Re: (Score:3, Interesting)
There's different levels of replication.
In physics, you can generally replicate an experiment very precisely if you've got a handle on the factors that went into that experiment - control the environment, etc. You can have an almost perfect replication. Yay, science!
In social psychology research you can't ever even approach that same level of control over the environment the experiment takes place in. The subject will be different - even if it's the same subject used in the first experiment, because people
Re: (Score:2)
Exactly. This alone negates any claim as to whatever result is "found". That basic replication you refer to later is like saying "I used copper wiring in all my experiments" and never giving the length or gauge.
Re: (Score:2)
Actually, hell let me throw a challenge at you:
Please explain the value in trying to understand gravity in a way that is general enough to also apply to numerous other fields that are deemed to "have value" but that excludes trying to understand human behavior.
If you can do so in a way that is meaningful and isn't intellectually dishonest I'll be surprised.
Re: (Score:2)
I'd agree with you if you made a few changes, thus:
Yet, can have much more positive impact on society and individuals.
to:
Yet, can in principle have much more positive impact on society and individuals.
and:
That's bad reporting, not necessarily bad experiments.
to:
That's bad reporting. It's nomal in reports on experiments.
and
Lastly, a negative outcome is also valuable.
to:
Lastly, a negative outcome is as valuable as the original result. It's just not seen as being as newsworthy.
Additionally, with respect to the firs
Re: (Score:2)
In social sciences repro
Re: (Score:2)
Re: (Score:3)
??? Did you notice that the guy first mentioning "thought experiment" claimed to be a physicist? Moral high ground? Please tell me what "moral high ground" was involved in Einstein's famous "elevator" thought experiment.
I will grant that there are those who misuse the term, but give him the credit for properly using it.
OTOH, "thought experiments" in the area of psychology are, in my experience, so poorly done that they neither demonstrate nor validly support any argument. Some of them do point in intere
Re: (Score:3)
Just to add to what you're saying, thought experiments can be perfectly valid in the physical sciences. Galileo had a great one determining that differently weighted things will fall at the same speed (all other things being equal).
If you assume that a light cannon ball will fall slower than a heavy one when you drop them, and then you tie them together, it stands to reason that they must fall at a speed in the middle of what they would each fall at. But tying them together makes them effectively one object, so
Re: (Score:3)
I missed your examples. Could you repeat them?
Re: (Score:2)
Speak straight, without sarcasm.
There are examples; if you wish, you can find out about them yourself.
Re:WTF? (Score:5, Informative)
Not an experiment.
Not an experiment.
You got one.
Actually the analysis can be replicated ad nauseam.
Re:WTF? (Score:4, Interesting)
None of those are experiments. Experiments test hypotheses. You have to specifically DO something to test your claim, and NOT do other things as a control, for it to be an experiment.
Re: (Score:3)
There are valid definitions of "experiment" for which those are experiments.
E.g., theories are often only checkable by conditions around a supernova. This means you have a theory and a prediction. You won't be able to prove everything about the theory by observing a single supernova, but you may be able to disprove it. And in science you can never prove a theory correct, you can only fail to disprove it.
FWIW, the Higgs boson has been a terrific disappointment because it didn't prove any theories wrong.
Re:WTF? (Score:4, Insightful)
But in that case the word "experiment" has been defined so narrowly that it's no longer the sole validator of scientific theory. For example, General Relativity predicted that light would be affected by the Sun's gravitational field, which was later observed during a solar eclipse, which is a naturally occurring event.
Re: (Score:2)
But in that case the word "experiment" has been defined so narrowly that it's no longer the sole validator of scientific theory. For example, General Relativity predicted that light would be affected by the Sun's gravitational field, which was later observed during a solar eclipse, which is a naturally occurring event.
Experimentation has never been "the sole validator of scientific theory". Simple observation has been and always will be the primary tool we use to learn shit. If you want to rely only on strictly formal validation, then you're going to have to design and conduct an actual experiment to test gravitational lensing.
Re: (Score:3)
History is useful.
Not Just Psychology (Score:4, Insightful)
The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings ...
This applies to all science, not just psychology.
Old saying (Score:2)
Once is an anomaly
Twice is a coincidence
Three times is a pattern
Re: (Score:3)
I sampled a random bit sequence just the other day. I can now assure you that a random bit stream is all ones! all friggin' ones I tell you!
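The joke has teeth. A quick Python sketch (illustrative only) of how often a tiny sample of fair bits comes out unanimous:

    import random

    def unanimous(sample_size, trials=100_000):
        # Fraction of trials in which every sampled fair bit agrees.
        hits = 0
        for _ in range(trials):
            bits = [random.getrandbits(1) for _ in range(sample_size)]
            if len(set(bits)) == 1:
                hits += 1
        return hits / trials

    for n in (1, 3, 10):
        print(f"n={n}: unanimous in {unanimous(n):.1%} of trials")
    # n=1: always. n=3: ~25%. n=10: ~0.2%.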
Re: (Score:2)
Once is happenstance. Twice is coincidence. Three times is enemy action. [goodreads.com]
FTFY
Re: (Score:2)
As Goldfinger was written in 1959, that quote is a paraphrase of a much older saying [barrypopik.com].
It’s unclear that the saying’s origin is from Chicago; Fleming was probably thinking of Chicago’s gangster years of the 1920s-1930s. “Once is an accident, twice is a coincidence, three times is a habit” has been cited in print since at least 1921. “Once is nothing, twice is coincidence, three times is a moral certainty” has been cited in print since 1923.
So, it's not a science, it's a religion (Score:3, Insightful)
Falling into the 'cult' category
Wrong premice (Score:2)
I think that too many "studies" set out to prove a hypothesis instead of test a hypothesis. The drive to prove something puts bias into the study and skews the outcome. No one wants to be proven wrong. This is especially important when the measurements are subjective as in many psychology studies.
Re: (Score:3)
Here's an analogy. You plant a dozen tulips in your garden, and observe how well they grow when you do X. Now you claim all plants will grow like that when you do X. The claim is way too broad. Even if you had a d
Re: (Score:2)
The other problem is sample size. Psychology sample sizes are *way* too small. In a world of 8 billion people today, anything you find out in a psychological experiment that involves at most a few hundred subjects, often less, cannot have anything universal to say. The samples are just too small.
Sample size is independent of population size, and sample sizes far less than "a few hundred" can be significant. In the social sciences, errors are far more likely to be caused by sample bias than size. Most psychology experiments conducted on people use university undergraduates as subjects, who are more likely to be politically liberal, altruistic, trusting of others, etc. Increasing the sample size isn't going to fix that.
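To make the first claim concrete, here is a minimal sketch (Python with numpy/scipy; the effect size and group sizes are assumptions) of a small sample detecting a real effect, with no reference to population size at all:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Assumed moderate effect (Cohen's d = 0.6), 50 subjects per group.
    control   = rng.normal(loc=0.0, scale=1.0, size=50)
    treatment = rng.normal(loc=0.6, scale=1.0, size=50)

    t, p = stats.ttest_ind(treatment, control)
    print(f"t = {t:.2f}, p = {p:.4f}")  # usually p < 0.05 at this power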
Re: (Score:2)
Re: (Score:3)
On the contrary, increasing the sample size to big data sizes of say 2 billion subjects would definitely fix that bias problem.
Not at all. For example, try extrapolating behavior from 2 billion young men to older women. You can have huge sample sizes and yet still have sample bias simply because you've excluded an important category (such as the people you actually wanted to study).
Re: (Score:2)
Even if you try hard for a "representative sample" you can still have a problem where you lack a "box" to "tick" for something which turns out to be important.
Re: (Score:2)
Do you know how much effort it would take to get 2 billion people on board and not have a single woman or older person? It could be done but you'd REALLY have to be doing that on purpose.
They could just be poring through health records for adults who serve or might serve in one of the world's militaries.
Even if they were doing that deliberately, what would your point be? Sample bias is bias whether it is accidental or not.
Re: (Score:2)
The point would be that we can play "what if?" games with unlikely and made-up hypotheticals all day long
The thing is, these are not hypotheticals, but real world problems. For example, there are a variety of pollsters who call a few thousand people in a region to answer some survey. They could call all of the several billion people who have phones and they would still encounter most of the same sample biases because they aren't calling the people who don't have phones.
Enlarging the sample size by many orders of magnitude doesn't help here because the biases are cooked into the means of sampling.
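A toy simulation (Python/numpy; the phone-ownership rate and the group difference are invented) makes the point: the estimate converges, but to the wrong number.

    import numpy as np

    rng = np.random.default_rng(1)
    n_pop = 1_000_000
    has_phone = rng.random(n_pop) < 0.7           # 70% own phones (invented)
    opinion = rng.normal(0.0, 1.0, n_pop) + 1.0 * has_phone  # owners differ

    truth = opinion.mean()
    reachable = opinion[has_phone]                # the only people you can call
    for n in (1_000, 100_000, 500_000):
        sample = rng.choice(reachable, size=n, replace=False)
        print(f"n={n:>7}: estimate {sample.mean():+.3f}   truth {truth:+.3f}")
    # The estimate converges to the phone-owners' mean (~+1.0),
    # not to the population mean (~+0.7).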
Re: (Score:2)
Let alone the cultural environment. Behavioral psychology often attempts to extrapolate its findings on the whole Earth population, without taking into account that the cultural background of its subjects is (virtually) identical for each subject. The cultural background _most definitely_ influences behavior. Do the same study on Western Europeans, Arabs and Japanese, and you'll likely get huge differences per group.
Re:Wrong premise (Score:2)
Fixed typo.
Agreed on study size, which is why social scientists look at meta-studies of hundreds of studies performed over as much as a decade, to eliminate the noise and other transient junk.
What they really need to do, though, is examine more hypotheses. You need 7-10 additional hypotheses, not including the null hypothesis, that are orthogonal to each other and to the hypothesis being tested. This would allow you to binary subdivide the problem space, not only showing what something isn't but also showin
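For reference, the usual pooling machinery behind such meta-studies is inverse-variance weighting. A minimal fixed-effect sketch (the study effects and standard errors below are invented):

    import math

    # (effect estimate, standard error) per study -- invented numbers
    studies = [(0.40, 0.30), (0.15, 0.25), (0.35, 0.20), (0.10, 0.35),
               (0.28, 0.15)]

    weights = [1 / se ** 2 for _, se in studies]          # inverse variance
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    print(f"pooled effect {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")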
Re: (Score:2)
my psychological disorder compels me to point out the misspelling of "premise"
Re: (Score:2)
But hardly confined to "psychology". Possibly not even confined to "soft" sciences, since attempts at falsification can easily turn out to be very politically incorrect.
"Social science can be just as valuable" (Score:2, Insightful)
No, and it shouldn't carry the same "science" label to start with. Make it "social studies" or whatever. Calling it science tries to put it on the same level as real science, when the processes are completely different on numerous levels. It's an insult to real science. For example, when a scientist builds a collider to find a particle, and he finds one, he puts up the results so they can be verified by peers, and if the collective brainpower finds an error and puts it down, the process is considered
Re: (Score:1)
I once read a study that claimed that porn makes people have a callous attitude towards women. To 'prove' this, they asked college students how long rapists should be sent to prison. Then, they showed those students some porn videos. Afterwards, they asked the same question, and some of them supported reduced sentences for rapists. The arbitrary, subjective conclusion they came to in the face of the subjective data they gathered using biased methods was that porn makes people callous towards women. If you w
Re: (Score:2)
Did the
Re: (Score:2)
Psychologists often use subjective terminology as if it's objective, set out to prove their hypotheses, try to measure the subjective in absolutely arbitrary ways ("Are you happy?"), and come to arbitrary conclusions based on the data they collect using their flawed methods of data gathering, therefore fuck psychology.
You're kidding yourself if you think this is limited to a single example. Psychology's status as a science is seriously laughable.
Initial issue was lying to get cooperation. (Score:2)
As I read it, the initial flap was over the people and journal involved in the replication lying to get the cooperation of the original researcher.
They promised to give her an opportunity to review and publish a comment on their results. They secured her cooperation, getting detailed descriptions of the methodology - far beyond what was in the publication - copies of the original film, and the like. Then, when they got differing results, they denied her the perceived-as-promised opportunity to examine thei
Who writes this crap (Score:5, Insightful)
"Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable."
No, those of us who oppose the funding of this crap recognise that if you cannot replicate your "study" then it is not an experiment. If what you are doing cannot be proved (one way or the other) by experiment then IT IS NOT SCIENCE. I don't really care what it gets called, and some of it may even be valuable, for some values of valuable; however, the amount of dross produced by social researchers who try to call themselves scientists is truly extraordinary and a plague on our world.
Re: (Score:1)
Re:Who writes this crap (Score:5, Insightful)
The above comment is precisely why these "social sciences" need to be delegitimised and rubber-roomed until they can figure out the meaning of the phrase "scientific method". Grant them no authority in deciding government policy, massively defund them in academia, get them out of the courtrooms, and generally pillory them for the witchdoctors they are.
If you have to ask why, you're part of the problem.
Re: (Score:2, Insightful)
Here's my challenge to individuals such as yourself who denigrate psychological science:
How would *you* study behavior?
It's very easy to dismiss behavioral sciences when you're not trying to study behavior. It's a very complex, difficult topic. E.g., how do you define depression? How do you define psychosis? How do you determine whether or not early childhood interventions actually have an effect on adult outcomes?
Maybe you would argue that behavior shouldn't be approached scientifically, but that's a cop-o
Re: (Score:2)
You can start here -> http://www.arachnoid.com/psych... [arachnoid.com]
Re: (Score:2)
How would *you* study behavior?
The absence of a good way to study behavior does not make psychology good science. If you don't have a good way to do so, then you don't have a good way to do so; the end.
I'm sick of ignorant arm-chair narcissists denigrating psychology when they don't have the balls to admit they have no clue how to approach the subject because it's too hard for them to understand.
And I'm sick of people who come up with absolutely illogical defenses of the indefensible.
Re: (Score:2)
Sure, that's a noble goal. But that wasn't my fucking point. The implication of that comment was that since we don't have a good way of doing something right now, that must mean that our current (and ridiculously flawed) way of going about it is good.
Otherwise, why would he attack people for not putting forth alternate solutions? You don't have to put forth an alternate solution to correctly recognize that our current solution is garbage, and that psychology shouldn't be taken seriously until this is fixed.
Re: (Score:2)
The part you've missed is that far too often "hard science" has the same problems.
Not nearly as often. Usually the hard sciences aren't busy trying to measure people's emotional states and asking people how they feel (among other things), which are inherently subjective. While problems in the "hard sciences" obviously exist, psychology (and others) take it to a whole new level. They're not even comparable.
And then the news writes about every nice sounding psychology study that comes out, and people use it as a reason to support the alteration of laws or public policy (Video games cause/d
Re: (Score:2)
* See his Slashdot alias, then laugh. It's funny.
Re: (Score:2)
Must be mixed carefully. It's OK to use network cables for bondage, just don't put them back in the network afterwards.
Re: (Score:2)
There are a lot of people in society with mistaken ideas and if science like this can be used to push their repugnant ideas out of the mainstream then all the better.
If their ideas are truly repugnant, then science can do the job of showing why they are mistaken. You don't need to use fake science. Fake science used the way you describe is more succinctly known as lying.
Re: (Score:1)
The failure of the rational market economists is that they just study large, very well organized markets dominated by professionals that are now mostly run by computers - the financial markets (because there's a shit load of publicly available data). So of course, the market looks completely rational.
In other words, their "failure" is that they use models which are descriptive of the markets that they study. That sounds more like science in action than failure to me.
They then extrapolated their finding to everything else - even to people buying that new house that they just "fell in love with".
This is a strawman. They don't actually do this.
Re: (Score:2)
Unfortunately, it's not a strawman. Not exactly. The researchers themselves may not do that extrapolation, but those they convince of their finding do. The GP post, however, is historically inaccurate. The "rational market theory" originated some time before 1950, and was dominant during the 1950s and later. It has recently been challenged by people doing actual research that proved it an invalid model.
IIRC it originated with an economic school that was ideologically committed to the Free Market, despite the obvious fact that never throughout the course of history has there ever BEEN a free market.
Re: (Score:2)
The researchers themselves may not do that extrapolation, but those they convince of their finding do.
I believe the number one lesson of economics is that people including the economists themselves have a huge capacity to rationalize all sorts of things, sometimes in very elaborate ways, when their interests are at stake. This has nothing to do with the rational market model.
The "rational market theory" originated some time before 1950, and was dominant during the 1950 and later. It has recently been challenged by people doing actual reasearch that proved it an invalid model.
Except that the model hasn't been proven to be invalid. It works well for describing stock markets, for example.
IIRC it originated with an economic school that was ideologically committed to the Free Market, despite the obvious fact that never throughout the course of history has there ever BEEN a free market.
At some point, there was no public sanitation, and no canals, railroads, or electronic computers either. We didn't let the obvious
Re: (Score:2)
No. The rational man theory does NOT work on the stock market. It doesn't even work well for those sections that are computer driven, because the models are always based on incorrect presumptions.
OTOH, I will agree that it OFTEN works on the stock market. This is a far different statement. But much of the stock market is driven by gambling fever, often played with "other people's money". (And in that sense, since the player doesn't risk much, I suppose you could call it rational from his point of view.)
Re: (Score:2)
OTOH, I will agree that it OFTEN works on the stock market. This is a far different statement.
And there we go. It's not a far different statement. I didn't claim the rational market model perfectly modeled the stock market.
But much of the stock market is driven by gambling fever, often played with "other people's money".
While this observation isn't entirely irrelevant, it remains that traders who come to markets to gamble lose money to traders who don't. And it's still a lot better than many of the economic alternatives, such as having the above gamblers in control of a central planning bureau or rent seeking.
Re: (Score:2)
And I've seen applications that are ridiculous.
Then give an example rather than just make empty allegations. Henry Ford wasn't an economist.
Re: (Score:2)
Re: (Score:2)
Yeah, you are going to be seriously confused if you think "rational actor" economics assumes a Straw Vulcan who won't buy the chocolate ice cream which he likes better if the vanilla is a cent cheaper. But the fault, dear AC, is not in the economics, but in your own skull.
replication = good (Score:2)
Replicating scientific results (or failing to) is a good thing.
Being rude about it, as was apparently the case here, is plain old asshattery.
Re: (Score:3)
No, the asshat is not saying that if you cannot get the same results it's not science (in fact the exact opposite), but rather that if you cannot demonstrate that the experiment itself is replicable then it is not science. The contention in the article that in social sciences this lack of replication of experiments may just be a reality up with which we must put IS the reason why, whatever you want to call it, it is not science.
If you can't replicate it... (Score:1)
Re: (Score:3)
Define 'replicate'.
Re:Define 'replicate' (Score:3)
To replicate an experiment, you take the description of the conditions, tasks, environment, fixed independent and dependent variables, analytical method and results provided by the original experimenter in the (peer-reviewed) paper they published.
If you can show the same results, with the same statistical significance, then it's reasonable to assume that the experiment shows a valid scientific phenomenon.
If you can't then one of the two experiments got it wrong and more work is needed.
The basic proble
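To make that operational definition concrete, here is a toy sketch (Python/numpy; all effect sizes and sample sizes are assumed) of checking whether a replication's estimate is consistent with the original's:

    import numpy as np

    rng = np.random.default_rng(42)

    def run_study(true_effect, n):
        # One two-group study: returns the effect estimate and its SE.
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
        return diff, se

    orig_diff, _ = run_study(true_effect=0.5, n=40)
    rep_diff, rep_se = run_study(true_effect=0.5, n=40)
    low, high = rep_diff - 1.96 * rep_se, rep_diff + 1.96 * rep_se
    print(f"original {orig_diff:.2f}; replication 95% CI [{low:.2f}, {high:.2f}]")
    print("consistent" if low <= orig_diff <= high else "failed to replicate")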
Re: (Score:2)
If you can't then one of the two experiments got it wrong and more work is needed.
Actually it woul
Simple solution (Score:3)
Have a journal, call it Debunker's Weekly if you want, that is divided evenly between papers on replication and papers reporting negative results, at least at the start. Pay authors a nominal amount, according to the thoroughness of the work as judged by referees. Provide the journal free to University libraries. Submit summaries of major stories to Slashdot, The Guardian, various Skeptical societies and other places likely to raise the extreme ire of dodgy researchers. In fact, the more ire, the better.
The journal doesn't have to last long. Just long enough to force bad researchers to improve or quit, force regular journals to publish a wider range of findings to avoid humiliation, and to correct dangerously erroneous beliefs. Since there must be a stockpile of unpublished papers of this sort, you should probably be able to get six or seven bumper editions out before anyone notices the dates, and maybe another two before the journal is sued into oblivion for defamation.
That would be plenty to make some major course corrections and to "out" a few frauds.
Re: (Score:2)
"Pay authors"
The journal doesn't have to last long
Don't worry, it won't. I'd reckon on one edition.
Of course, what this whole field of study needs is a rich uncle (or sugar daddy) to provide funding for specific, basic pieces of research. You'd think that for all the money they've made from social media, some of the FB/Twitter/others founders or major beneficiaries could put their hands in their pockets.
Or maybe they are the *last* people who want to make this subject rigorous and scientific?
Re: (Score:2)
It needs to be funded the same way as the British BBC, by license fee, or the same way as public utilities, by tax.
It needs to have a charter guaranteeing payment in advance for the requested service and guaranteeing immunity for any actions provided within the terms of the charter. (If it's not chartered, you'll have every drug company and its brother suing you for publishing the suppressed papers Ben Goldacre keeps talking about.)
If it's not free, it won't have readers. Negative results aren't as desirab
Re: (Score:2)
Because research is expensive and governments are cheap. If a researcher has been humiliated a couple of times, publicly, their papers become worthless to the big names financing the work. The corporations cut funding, so the universities cut funding. The researcher has a job, technically, but no office, no lab, no work. Further, the job isn't guaranteed. Tenure can be withdrawn for gross malpractice. Being exposed as a fraud probably qualifies. So, no job either.
Tenure is poorly understood. It does not mea
Economics (Score:2)
If you think Psychology has a replication problem, get a load of Economics.
When it comes to "hard" sciences, Economics is basically remote viewing with a political agenda.
Re: (Score:2)
Nonsense, economics has quantities and flows that can be measured. Psychology has none of that.
Re: (Score:2)
It's not the data that's the problem with Economics, it's the postulates that are formed from whole cloth, and "laws" that are similarly made up. In fact, even the data in a lot of Economics is just hokum, based upon opinions more than anything measurable. MMT is a good example.
I would suggest that Psychology's reputation has caused researchers to become a lot more rigorous. The opposite has happened in Economics.
Re: (Score:2)
MMT is "hokkum" because you say so? Some economists find it useful model, others don't. That proves exactly nothing about economics having hard data to deal with, whereas psychology has nothing.
Re: (Score:2)
You are funny. You do know that "trickle down economics" was coined by humorist Will Rogers in the Depression? No such thing is taught in a serious economics class.
Without replication, science goes nowhere. (Score:2)
Without replication, science doesn't build on previous results. It just thrashes around. Psychology (and theology) are like that. They change, but don't improve much.
There's a practical problem. Without repeatable scientific results, a technology cannot be built based on the science. "Science is prediction, not explanation." - Fred Hoyle.
The problem with soft science experiments (Score:4, Interesting)
There are plenty of good psychology experiments/case studies that produce a lot of really useful information and are repeatable (albeit over a very long period of time). The problem is there are also a lot of complete and utter ass psychology experiments. It is really really hard to produce a good study that provides useful results in soft sciences, and in cases of psychology, they take a very long time and sometimes a lot of money to complete. Yes, they have to account for a lot of variables and exclude them via statistical analysis, but the ones that do it right do it exceptionally well.
I used to think negatively on those types of studies until I actually took the time to read one while helping my girlfriend with a paper. I was amazed at the level of detail and the amount of effort they took to isolate the results into meaningful data.
Re: (Score:2)
Tom? Is that you?
Do you know where I can get a copy of Dianetics? I've heard it's da bomb!
Abreaction (Score:2)
Re: (Score:2)
Sorry, real life is messy.
1 - Some replicable tests are a good idea
Some people see Aliens at Roswell when they are there at night and take drugs.
This is a replicable experiment - is it because they have taken drugs or because Aliens are sometimes there?
Generally (sadly) if you have a randomised double-blind controlled experiment that controls for the likely deciding factors, you can decide whether or not it is more likely because people take drugs (happily you cannot be sure about the presence or absence of aliens) - a toy version of such a trial is sketched after this comment.
2 - Some replicable tests are a bad idea
Do the really expensive cancer|baby-saving|Alzheimer's etc. drugs we use really help?
This is also a replicable experiment.
Give some people the drug and some a placebo.
Not too ethical even if you disclose that there might be a placebo
3 - Some things cannot be replicated
Was it right to have QE - did we have the right amount of QE
This is not replicable.
You don't get to re-run an economy for the last 6 years - all you can do is watch and measure and argue about causation afterwards.
In the scope of psychology, you get a mix of all 3 experiment types. All these questions are very good questions.
What troubles me is that there will be a growing tendency to not attempt to answer the hard ones.
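As a concrete illustration of the trials in cases 1 and 2, here is a toy randomised, controlled comparison (Python standard library; the effect size and outcome model are invented):

    import random
    import statistics

    random.seed(7)

    def outcome(on_drug):
        # Assumed model: the drug adds 0.4 to a noisy baseline response.
        return random.gauss(0.0, 1.0) + (0.4 if on_drug else 0.0)

    subjects = list(range(200))
    random.shuffle(subjects)                       # the randomisation step
    drug_group, placebo_group = subjects[:100], subjects[100:]

    drug_scores = [outcome(True) for _ in drug_group]
    placebo_scores = [outcome(False) for _ in placebo_group]
    print("drug mean:   ", round(statistics.mean(drug_scores), 2))
    print("placebo mean:", round(statistics.mean(placebo_scores), 2))
    # Blinding (not modelled here) is what keeps the measurements honest.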
1) Occam's razor already tells you it's the drugs. Sure, unless aliens show up only when people take drugs, or we suddenly get super-alien-viewing-powers when using drugs, aliens could be there. But that is (apart from being ridiculous) such a complicated model compared to the simple "your drugs give you hallucinations" model (which we even know is true) that Occam's razor can rule the other ones out.
2) Erm.. you know that this is EXACTLY how drugs are tested every day? Not unethical. Extremely common.
3) You could run a simulation.
Falsify the simulation itself (Score:2)
[To repeat an experiment about international macroeconomics] You could run a simulation.
So how would you go about falsifying the accuracy of the simulation's model?
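One standard, if partial, answer is out-of-sample prediction: fit the model on one period and score its forecasts on a later period it never saw. A minimal sketch (Python/numpy; the series and the "model" are invented placeholders):

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(100)
    series = 0.5 * t + rng.normal(0, 5, 100)       # stand-in for a macro series

    train, test = series[:80], series[80:]
    # The "model": a linear trend fitted only to the training period.
    slope, intercept = np.polyfit(np.arange(80), train, 1)
    pred = slope * np.arange(80, 100) + intercept

    rmse = np.sqrt(np.mean((pred - test) ** 2))
    print(f"out-of-sample RMSE: {rmse:.2f}")
    # If the model's error is no better than a naive baseline's, that is
    # evidence against the model -- the closest you get to falsification here.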
Re: (Score:2)