Researchers Feel Pressure To Cite Superfluous Papers
ananyo writes "One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published in the journal Science (abstract), also suggest that journal editors strategically target junior faculty, who in turn are more willing to acquiesce. The controversial practice is not new: those studying publication ethics have for many years noted that some editors encourage extra references in order to boost a journal's impact factor (a measure of the average number of citations an article in the journal receives over two years). But the survey is the first to try to quantify what it calls 'coercive citation,' and shows that this is 'uncomfortably common.' Perhaps the most striking finding of the survey was that although 86% of the respondents said that coercion was inappropriate, and 81% thought it damaged a journal's prestige, 57% said they would add superfluous citations to a paper before submitting it to a journal known to coerce. However, figures from Thomson Reuters suggest that social-science journals tend to have more self-citations than basic-science journals."
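For reference, the two-year impact factor is just a ratio, which is what makes it so easy to game with a handful of extra citations. A minimal sketch with hypothetical counts (Python 3; the numbers are invented for illustration):

# Two-year impact factor: citations received this year to items the
# journal published in the previous two years, divided by the number
# of citable items published in those two years. Hypothetical counts.
citations_to_2009_2010_items = 450   # citations received in 2011
citable_items_2009_2010 = 180        # articles and reviews from 2009-2010
impact_factor_2011 = citations_to_2009_2010_items / citable_items_2009_2010
print(impact_factor_2011)            # 2.5

Each coerced citation bumps the numerator while leaving the denominator untouched, which is the whole incentive.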
Next up, writing superfluous papers... (Score:4, Insightful)
The surest way to get something on Wikipedia is to get something published and then cite it. Accuracy notwithstanding.
Re: (Score:2, Insightful)
Yes. I remember a lot of articles or sections of articles citing Medical Hypotheses and other junk journals. They were eventually challenged and killed, but new ones would always crop up. It felt like a losing battle :\
Re: (Score:2)
Maybe more in terms of TFA, I'm a biochemist and on the last paper I wrote I got the opposite pushback from the
All too familiar. (Score:5, Funny)
Mod parent up! (Score:2)
Who modded this flamebait? It's right on point.
Re:Mod parent up! (Score:5, Funny)
Who modded this flamebait? It's right on point.
[citation needed]
Re: (Score:2)
No, no it's not. Academic journals are doing it because a paper's worth is measured by how many times it's cited by other academic papers. Nobody cares about Wikipedia citations.
Re: (Score:3)
http://www.youtube.com/watch?v=dQw4w9WgXcQ [youtube.com]
Re: (Score:3, Insightful)
Not necessarily ... Wikipedia is an encyclopedia, not a research platform. Also, decent traditional encyclopedias cite where they get their knowledge from, or will tell you on request. So for an encyclopedia, citations are everything (and the number of citations could be a decent measure).
However, the number of citations is not a good measure for articles that present original research, as the field they research might be rather narrow, or the article might be written as an answer to another article, verifying o
Re: (Score:2)
Now come on, Wikipedia (like all encyclopedias) is not a peer-reviewed journal; you can't expect them to check the scientific value of their sources.
Re: (Score:2)
Nobody's career is riding on getting their Wikipedia edits accepted.
Re: (Score:2)
Maybe they just need their own "social media" site, but change "social" to "academic", and make "media" be the papers they publish.
Re: (Score:1)
Very much, yes.
I recently completed my M.S., and we had one professor in particular who was all about superfluous cites. Basically, if you wanted to pass his class, you had to meet his ridiculous requirements.
LK
Re: (Score:2)
I had to deal with a PhD like that who always ended up being PI on the biggest, fattest, juiciest grants. If you wanted to get paid, you had to learn to play. At least, that's how he liked to phrase things.
It always bugged me that 90% of the PhDs I met were barely scraping by, living in shit-shacks or couch surfing, driving the nastiest junkers (if they could even afford such a thing), comparison shopping for clothes at the thrift store, and eating like beggars when they were pumping out their best research.
Re: (Score:1)
It would seem that one can only do so much: either you pour your heart and soul into your work, or you put that effort into playing the game. It's easy to get drawn into playing the game. IMHO, all too often the people at the university who have decision-making authority are game players who don't see the value in honest, pure research. It's a downward spiral: more game-playing, less work.
LK
Metrics (Score:5, Insightful)
This is what happens when you have metrics. You create a metric like "impact factor", and before long people will figure out ways to maximize "impact factor" that have nothing to do with what the metric was originally supposed to measure. Hyperfocusing on metrics like that ends up undermining the things you really value in favor of increasing your scores.
This happens all over the place. Players in every game find ways to increase their score in ways that the game designers wouldn't consider valid. Universities do things simply to make their "US News" ratings go up, not because those things make them better. Students figure out ways to raise their grades that have nothing to do with mastering the material of the course. Heck, the entire US (and world?) economy suffers from this; the most reliably rich people are the ones who manipulate money transactions and do absolutely nothing with the underlying reality that money is supposed to be an abstract representation of.
People strive to improve the things that they are rewarded for and that they are evaluated on. When you focus too much on the wrong thing, people will do the wrong things in response.
Re: (Score:2)
It's like Google ranking. Academics have found a way to hack their own system.
Re: (Score:2, Insightful)
"their own system" nope.
The perceived word-of-mouth reputation of journals preceded the impact factor. I do not know who decided to use impact factor as the main measure of a journal, but I take it as an insult that you called them "academics".
Re: (Score:3)
This is one of the most truly insightful comments I have read on /. in 10 years.
As usual with great insights and discoveries, it strikes you with the triviality of a great thought that nevertheless never crossed your mind (that, as I learned from my scientific advisor, is the sign of greatness).
Re:Metrics (Score:4, Interesting)
As a reviewer I have often suggested citations that, IMO, were missing. In fact, some scientists deliberately leave out citations that carry inconvenient viewpoints/data/results; I have such a paper that one side of the debate just pretends does not exist. Out of all the times I have reviewed and suggested citations, I have suggested one of my own papers only once. Also, I typically don't require anonymous review, i.e. I give my name when permitted.
Re: (Score:2)
Well, after a couple of years as a postdoc I can tell you there is no good metric for success. Except looking like you were appreciated and considered good by your senior peers. Maybe you published only four papers in crappy journals, but it so happens that somebody who matters thinks you are a genius. And there you go. You can win against highly cited people who are considered morons by the powers that be.
That is the way of the scientist. Never forget you are in fact a monk. There might be sacrifices.
I usu
Re: (Score:2)
I am skeptical of the article's claim.
When an editor reads a paper and feels that the author is misinformed or making incorrect assumptions, they may respond by requesting that the author cite certain papers, with the intention that, by being forced to read those papers, the author will become better informed and correct their erroneous assumptions.
Meanwhile, the author, not believing themselves to be wrong, refuses to acknowledge the suggested references and considers the material "superfluous" to the work. As
100 reasons not to go to graduate school (Score:1)
Not exactly on topic but nevertheless an interesting read:
http://100rsns.blogspot.com/ [blogspot.com]
Re:100 reasons not to go to graduate school (Score:4, Interesting)
Here is one that might be relevant:
Reason 33. There is too much academic publishing.
http://100rsns.blogspot.com/2010/11/33-there-is-too-much-academic.html [blogspot.com]
When one's job is on the line (as it is for tenure-track faculty), people will do almost anything.
Not Being One That Must Publish (Score:2)
Re:Not Being One That Must Publish (Score:5, Informative)
There's a tricky balance between saying "we build on this work, and so necessarily reference it" and "this is related to that other work" (or may incorporate results from it, even if I'm not directly involved in that research). The aim is to give credit where it is due, but not too much.
Science struggles with how to cite widely dispersed facts. What is the speed of light? Right. You *can* cite the people who precisely measured it, but most research that relies on that number doesn't really need to cite it, because it isn't building on it. If I'm looking at spectra from a gas or a star, the speed of light is really, really, really important to my work, but how the actual number was arrived at isn't that important. If you're a young researcher, you want your work to look like it is related to a lot of things (think resume padding), so you cite more than you need to; and if you're new to the field, you want your work cited as a measure of how much impact it has. But judging what counts as impact is not always clear, and it's pretty easy to be persuaded that it is better to over-cite than to under-cite and risk being called out for plagiarism.
Another example: I'm working on a current project. One of the relevant facts to the work is the history of the battleship Bismarck (the Nazi one), because the documentation about her history is relevant to how to quantify the statistics of the ship. But what is a valid reference for that, if anything? How about the mere existence of the ship? What factual information am I digging for that isn't sufficiently well known to be on Wikipedia? Do I cite comparable information about the dozen or so other ships and aircraft she actually fought with, or do I just take for granted that the ship had 38 cm guns (which maps directly to the problem I'm talking about: balancing the relative combat power of the ship)? If it were a history paper, it would be obvious that the historical work is to be cited; discovering (or rediscovering) that information would be a worthwhile historical paper. But what about something that is only tangentially related, which is trying to define those statistics in a game?
It hasn't been uncommon for scientists to base research on 3 or 4 papers (possibly one or two of which were from their own group), and then, when they're getting ready to publish, to look for papers in the target journal that are related and can be cited as well (this is like self-censorship, or self-coercion: rather than being asked to do it, you do it on your own first). It's not really a good practice, but I'm not sure it's as bad as the article makes it out to be. You really are legitimately looking for work that might be related to what you did, especially if you didn't do your literature search very well (which is harder than it sounds sometimes), and you are, as you say, citing authors you're extrapolating from (or at the very least doing related work to).
Re: (Score:2)
Oh, I meant before submission. If you realize a reviewer is going to tell you to do a better literature review it's probably a good idea to do that before you send in the paper. Then we're back to discussing what really warrants being cited and how.
"Pressure" (Score:2)
There will also be some bias in terms of what makes a citation "superfluous." A good editor is concerned about ensuring that the submitter's statements are accurate and well-sourced. On good legal journals, editors actually go and look at every cited source (hundreds per article) to see if it contains the proposition it is cited for. When a journal is well-sourced, it is more reliable as a source for practitioners and as a research tool for academics, as well as being a better stepping-off point for furt
Misuse of the word "However" (Score:3)
So Basically (Score:3)
I'm shocked [3][12][21]! (Score:5, Funny)
Shocked [3][12][21], I tell you! [4][7]! Studies [14][17][31] have shown [11][15] that this [26] never [21][22] happens [25] with reputable [5][14][24] papers [19]! How could this [32] have happened? [12][16]
Happened to Me (Score:4, Insightful)
I put together an economics paper and sent it around to a few PhDs I know. Two of them came back with the exact response this article describes: "It needs more references if you want to get published." I asked if the math, logic, or conclusions were off; both responded that they were not, but that this was not the point. They made it clear that to get published it had to have more references to existing work, regardless of the content.
I can come up with arguments for why such a policy has some merit -- keeping wacky stuff like modern monetary theory's hypothesis that there is no such thing as too much debt from distracting researchers, for example -- but good, bad, or indifferent, the fact that there is a barrier to papers which do not pay homage to existing academics is very real.
Re:Happened to Me (Score:4, Insightful)
Yes, but that's not what the problem is. It's not about "paying homage", it's about being honest about the novelty of your work. Academic publishing is not a no-holds-barred debate. Every paper is expected to present a balanced view of the subject at hand, even if the author has a particular point of view they want to get across. This is why we have a peer review process. There is a reason for accurately representing previous work.
The problem that the article discusses is specifically *irrelevant* citations added for suspicious reasons. That is a different problem.
No... (Score:2)
It's discussing *superfluous* citations, not irrelevant ones. The authors are the ones deciding whether they are superfluous, and authors will always think that having to add a citation is superfluous. That doesn't mean they should be able to ramble on for a forty-page paper with fewer than an absolute minimum of, say, 120 footnotes. Is it possible that the paper is perfectly correct without them? Sure. But if I'm putting a journal's name on them, and I'm responsible for the journal's reputation, I'd like
Re: (Score:1)
There is a difference between randomly sprinkling a paper with references in a superficial effort to make it look "serious" and conform to the usual academic mold; and actively researching, citing and discussing earlier relevant references in comparison to your own work in a balanced way. The latter is how good quality academic writing should be done. The former tends to give rise to papers with pointless laundry lists of citations. I hope your friends were suggesting the latter way. Even if they were not a
Re:Happened to Me (Score:4, Interesting)
Including copious references is not only a way to "pay homage to existing academics". It makes sure that you went through the literature to see if your contribution is really a novelty, and forces you to compare your work against others', which is great both for the expert in the field to better understand your contribution, and for the non-expert, who gets pointers to better grasp some parts or to navigate towards the important papers of a field of research. These are still very important, even if you think your work is technically sound.
I'm talking out of my ass now, and it depends on the research area and the paper, but "it needs more references if you want to get published" might be a polite way for your acquaintances to ask: have you provided sufficient motivation for the problem you are solving? Have you thoroughly explored the literature for related proposals? Have you compared your ideas against other papers'? And so on.
Re: (Score:2)
So, suppose the guy adds 47 references to prove to the reviewers that the material is a genuine contribution (as if that even made sense - any number of citations does not prove that you didn't miss a complete previous duplication)? The reviewer is satisfied that the guy has done his homework and gives the OK. Why not then remove the references before publication? If the purpose is to ensure that due diligence is done then removing the references would have no negative effect at all.
No, references are pr
Re: (Score:1)
You are off base with this idea that citations are just a way to pay homage. They are very important. And you want to remove citations after the author has gone through the great effort of reviewing other people's work, comparing their ideas, etc.? That makes no sense.
The very particular case you point out, of citations to old well-known papers, is pretty irrelevant, and unrelated to the problem that TFA is discussing. While that might be purely homage, what is the big problem of giving credit to a given researcher if
Re: (Score:2)
The suggestion that was raised was that it was important to have citations to prove that the author did their homework and was aware of related work. My point was that if that were REALLY the goal then it wasn't important to publish the citations. Obviously it is silly to add them and then remove them, but the obvious solution to that is to never add them in the first place.
What in your opinion is the true importance of citations? I certainly don't think they should be used to test the author's knowledge
Re: (Score:1)
Re: (Score:2)
It isn't supposed to be a test of the author's abilities - just of the accuracy of the paper. The author could be an idiot but if the paper accurately describes the experimental results and their validity then the paper should be published.
Sample Size Errors (Score:5, Informative)
Re:Sample Size Errors (Score:5, Funny)
I did RTFA. The authors of the paper surveyed 54,000 academics, and about 1,300 responded to say, "Yes we felt pressured." That's 2.5%. Only 1/3 of those named a single journal that pressured them. Another 2.5% said, "We've heard that others have been pressured, but never us." 7.5% said, "We've never heard of it." And 87.5% didn't respond. The survey shows extreme self-selection as 7 of 8 academics did not respond. So before someone gets excited that 20% of academics are pressured, note that under 13% of academics responded.
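A quick back-of-the-envelope check of those figures (Python; all numbers are taken from the parent post, and the "one in five" headline presumably comes from dividing the pressured group by the respondents rather than by everyone surveyed):

surveyed = 54000
pressured = 1300                                   # "yes, we felt pressured"
respondents = surveyed * (0.025 + 0.025 + 0.075)   # the three groups that answered
print(pressured / surveyed)                        # ~0.024: about 2.5% of everyone surveyed
print(pressured / respondents)                     # ~0.19: roughly the "one in five" figure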
... because the other ~87% were pressured to keep silent?
Re: (Score:2)
No, they're already repeatedly published, so it's their papers being cited. XD
Re: (Score:2)
I've already posted - someone else mod up please.
Re:Sample Size Errors (Score:5, Funny)
I'm sorry, but it's hard for me to take this post seriously without at least one citation.
Re: (Score:2)
Sigh... wrong lesson learned. (Score:3)
They're being asked to pad their papers because the actual evidence being cited might not look that convincing in its own right, and many of the conclusions aren't properly supported. Come on, we all had that experience writing papers. You've got a deadline, you're trying to get from point A to point B, and you just don't have enough to make it all the way. So you make a statement you don't have support for and then link it to source material you know no one will read. It looks like your conclusion is supported when in fact it isn't. You don't care, though, because the point is to get from point A to point B... and the only person you have to fool is the teacher, or in this case the peer reviewer, who probably doesn't care that much anyway. Also... everyone else is doing it... and for the teacher to actually verify all those citations would be pretty much impossible.
The only thing you have to be careful of is not to say something the teacher knows is false or will think is false. If you do that, they might check the citation. But if you go outside their knowledge, forcing them to check basically everything or nothing... or stick closely to whatever the teacher is likely to believe anyway... you can get away with it about 99 times out of a hundred. And the one time you're caught... a slap on the wrist or a small hit to your grade.
Now I have no experience with what happens when you actually start publishing things. I fully admit my ignorance here. But I'd be surprised if an academic history conditioned by this environment didn't predispose graduates to try the same thing. And really, who is going to stop them? They've had their whole academic career to perfect the best ways to scam the system. All those years they weren't just learning the subject but they were also learning how the subject is taught, how it is graded, the social characteristics of their judges, human psychology as it relates to auditing, etc. We learn all this stuff naturally.
Anyway, that there is fraud in academia isn't shocking. All human interactions involve fraud. If there's a benefit in deceiving someone then we probably do it and we get very good at it. This is indifferent to morality. It has more to do with intelligence. If you're clever whether you're a good person or a bad person... you learn to lie. Even if you don't use it for evil it's just a skill you acquire.
If there is anything I find bothersome here, it is the conspiratorial aspect, where someone is encouraged to deceive others. This sort of thing is marginally less offensive when it's kept isolated to individuals, even if everyone is doing it. And really, what people SHOULD be doing, rather than finding bogus sources, is finding ACTUAL sources.
It's actually not useful to anyone if it's fifty percent bullshit. I don't care if it's half brilliant and half bullshit; even ten percent bullshit isn't acceptable. Strip out everything that's backed by nothing but bullshit. If you can't get from point A to point B without using bullshit sources, then maybe those two dots don't actually connect. I know you need to make a connection, and maybe you are even required to make that specific connection because your peers won't tolerate anything short of it. But that isn't science and it isn't academically useful. Sure, you get your grade, or your degree, or your job, or you get paid. You get what you want. But you do it at the expense of the system's integrity.
I don't know... it's hard to audit this stuff without investing unreasonable numbers of man hours.
Re:Sigh... wrong lesson learned. (Score:4, Interesting)
The overwhelming majority of papers are read once and then never read again. I know that much.
This is especially true of papers drawing conclusions rather than reporting data.
It's actually odd that they focus on asking graduate students to draw novel conclusions, when it would probably be more useful to ask them to discover novel data. No interpretation. Just come up with an experiment, or find something that has never been measured before, then report everything in enough detail that it can be repeated or remeasured.
Most such reports would probably be more useful.
There's nothing wrong with spending your life collecting dots for other people to connect. It's an absolutely vital part of science. Too often the sciences get too theoretical... too full of conjecture. Science is supposed to be about empiricism, which requires 99 percent data and 1 percent conclusion.
Internet Reference Counter Points? (Score:3)
Maybe there is some sort of Internet Reference Counter worming its way through the Web. It looks at papers, and gives points to people who get referenced a lot. People who reference you are tallied as your friends, so you will know to reference them. People who publish in your area, but don't reference you, are foes, and get negative points. You can buy or sell references or points on eBay and pay for them with Bitcoins. People with lots of points are "Blue Chip" in the points futures markets. Points can be used to suppress rival research.
Hey, doesn't Facebook or Google do this already . . . ? . . . for an extra fee ? . . . ?
Re: (Score:2)
Hey, doesn't Facebook or Google do this already . . . ? . . . for an extra fee ? . . . ?
Google does a lot of it for free via Google Scholar. Almost all the rest is done through various funding agencies, who have long used bibliometrics to determine scientific relevance (at its core the same idea as Google's ranking algorithm). The exception is the friends/foes part, which is left to you to organize.
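To make the "same core idea" concrete: eigenvector-style bibliometrics, like Google's PageRank, score a node as important if important nodes point at it. A minimal power-iteration sketch on a toy citation graph (the graph, damping factor, and scores are invented for illustration; this is not any agency's actual metric):

# Toy citation graph: paper -> papers it cites (hypothetical data).
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # D cites C, but nothing cites D
}
papers = sorted(cites)
damping = 0.85
rank = {p: 1.0 / len(papers) for p in papers}

for _ in range(50):  # power iteration until the scores settle
    new = {p: (1 - damping) / len(papers) for p in papers}
    for p, refs in cites.items():
        share = damping * rank[p] / len(refs)   # split p's score among its references
        for q in refs:
            new[q] += share
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "C" comes out on top: it is cited by everyone, including the well-cited "A",
# while "D" scores lowest because nothing cites it.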
Of course! That happens when ... (Score:3)
money and career depend on what a brainless bean-counter adds up in a spreadsheet.
Can not confirm that (for physics) (Score:5, Interesting)
In my opinion/experience (former researcher who left for industry), it's not the editors who apply the pressure, but the possibility that you ignored the work of somebody who is important enough to referee for Nature or Science. There are several components to this phenomenon:
a) Maybe the work really is important, and you did not know it because it was published too long ago. There is usually nothing wrong with a referee saying "hey, that is similar to what [xyz] did", even if they are on the list of authors of the reference in question.
b) Some referees don't react positively to not getting cited, and will shoot down, on other pretexts, any paper not referring to *their* theory (I believe that happened to me once).
c) In the abstract (which is the part actually read by the editors before the refereeing process) you compare your paper to previous publications. Authors are under the impression that comparing your work to previous important papers makes a better impression; how far this is true I can't judge. I found the editor stage *before* the refereeing in Nature and Science the most opaque thing I have experienced as an author. Unlike the refereeing process, there is no way to appeal, and no information on what the editors disliked enough to reject the paper outright. (There is a saying that once you have had Nature/Science papers it becomes more likely you will pass this stage, and I have seen at least one example of a paper passed along to Nature's referees which would surely have been rejected by the editors had it come from a less important group in the field.)
kids get paper published without spurious refs (Score:2)
In fact, these kids got their research published without any references *at all*. http://rsbl.royalsocietypublishing.org/site/misc/BlackawtonBees.xhtml [royalsocie...ishing.org]
I particularly like the section headings "once upon a time" and "the puzzle duh duh duhhhhh". I think, however, that in the context of this article, the following excerpt from the background puts the corruption highlighted by TFA to shame:
"So what follows is a novel study (scientifically and conceptually) in ‘kids speak’ without references
No, we feel pressure to publish superfluous papers (Score:1)
In the researcher world, it's all about publishing, publishing, publishing. 90% of the papers are crap, and at least 75% of the conferences are crap. People fake results, or publish even before they have the actual results, only to get noticed. I'm in the middle of it, and I sometimes feel so ashamed by the quality of what I'm forced to publish that I feel like stepping out.
Not true in the biological sciences (Score:2)
In my field, I usually end up struggling prior to submission to cut references, because many of the journals I submit to have limits on the number of citations. Often this means citing a review rather than the primary literature (because one review can take the place of multiple primary papers), or citing a recent work using the most current methods and dropping citations of the earlier ground-breaking work in the field.
I have never seen this (Score:1)
Re:Social Science is an oxymoron (Score:5, Insightful)
I have bad news for you ... dig deep enough, and you find out all the other 'sciences' suffer from exactly the same problems.
Re:Social Science is an oxymoron (Score:4, Insightful)
dig deep enough, and you find out all the other 'sciences' suffer from exactly the same problems.
As a recovering Mathematician I take issue with that.
Hint: There is one science founded (mostly) on logic that doesn't let reality get in the way of quality navel gazing.
Re: (Score:3)
Yeah, let's all just accept Peano's axioms, shall we? Mathematics is founded on air.
Re: (Score:1)
I won't dispute the thin-air plucking of axioms - but that wasn't what the original poster alluded to, and with which you swept all of science under the same rug:
Their only connection to science is that they attempt to use statistics, often improperly.
My point remains that mathematical logic (alone?) is a science concerned with tautological proofs, not based on "close enough" statistical significance.
Re:Social Science is an oxymoron (Score:5, Interesting)
Yeah, but if you go down that path, you wind up with mathematics being nothing but a tool, more a part of the process of science than science itself. Mathematics alone tells us nothing about the universe, other than that mathematics can be derived in it (and then you get back to whether or not you've really derived anything since the foundation is ultimately belief).
Re: (Score:2)
Yeah, but if you go down that path, you wind up with mathematics being nothing but a tool, more a part of the process of science than science itself.
It's an interesting point that gave me pause. Without doubt it is a tool, but I'm not sure that it's a given that research around the tool isn't science.
Your suggestion started to make me consider science as only being related to subjects with probabilities where we can measure and test how close we are getting - but my gut is saying that view is too narrow.
Re: (Score:2)
Indeed, science itself is poorly defined! (Or perhaps, there are just a lot of opinions about what it is.)
I have only a BS-level math background, and so I struggle with the issue myself ... Math is very different from everything else in 'science', almost a category unto itself. If you throw everything else out of the science category for being probability-based, why not just call math math (seems more convenient) and put everything else in a new category, which we could call science for the sake of argument?
Re:Social Science is an oxymoron (Score:5, Insightful)
Basically, Science is:
Observable
Repeatable
Falsifiable and
Communicated.
I've always found this definition useful.
Re: (Score:2)
Axioms are unproven; that's what I have against them.
Re: (Score:2)
While your second claim is fine, the parallel in your first is not.
A hypothesis is a thing we wish to discover the truth of. An axiom is a thing we accept on faith before investigating the 'truth' of other things. If mathematicians want to be taken seriously, they need to acknowledge that they are no better than religious philosophers debating how many angels can dance on the head of a pin based on the truth of the Bible.
Re: (Score:2)
I have bad news for you ... dig deep enough, and you find out all the other 'sciences' suffer from exactly the same problems.
In all sciences, reviewers will often ask authors to cite additional papers and provide some commentary on them. [How do your results differ?] Whether the papers are relevant is difficult to answer.
I have been on both sides of this. As a reviewer, I have often asked that additional papers be cited, sometimes my own. After all, I wouldn't be a reviewer if I didn't have some expertise. As an author, usually I agree the additional citations are relevant. Even if I don't think so, I 'suck it up' in orde
Re: (Score:3)
I can tolerate most of the other fields, with the excepti
Re:Social Science is an oxymoron (Score:5, Informative)
Not all economists habitually claim that wealth inequality and deregulation are good things. Also, there is a strong movement to get economics away from purely math-based models and start using more empirical data. For example, if you take any decent class on international trade, you should find countless examples that disprove the basic models that have been around for ages. The Heckscher-Ohlin and Ricardian models (two of the most basic models upon which trade theory is built) have been tested and found wanting, especially H-O. Ricardian has been modified and updated somewhat, but it still contains flaws. Newer models such as New Trade Theory (inventive naming schemes abound! there's also a New New Trade Theory) and the Gravity Model have come into vogue, especially with the advent of FTAs in the past 20 years and the WTO.
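For the curious, the gravity model mentioned above says, in its simplest textbook form, that trade between two countries scales with the product of their GDPs and inversely with the distance between them. A toy sketch (the constant and all figures are invented for illustration):

def gravity_trade(gdp_i, gdp_j, distance, g=1.0):
    """Simplest-form gravity model: T_ij = G * GDP_i * GDP_j / D_ij."""
    return g * gdp_i * gdp_j / distance

# The same hypothetical pair of economies, near versus far apart:
near = gravity_trade(20e12, 2e12, 1000.0)
far = gravity_trade(20e12, 2e12, 10000.0)
print(near / far)   # 10.0: ten times the distance, a tenth of the predicted trade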
As for monetary policy, it has certainly been the unfortunate case that in America and the UK (and to a certain degree Canada - although with Harper and his cronies with their goddamned majority it's sure to increase) the Austrian school has been prominent. However, people are starting to see that regulation needs to be enforced. If we look at the differences in banking between Canada and the US for example, we can see that Canada has fewer regulations, but enforcement is stricter and penalties more severe. I'm not, by the way, advocating that the Canadian banking system is without flaws, just that it tends to be less volatile, and their practices are less risky due to the regulations being observed. Now obviously the US is not going to adopt the Canadian banking system wholesale, but there are lessons to be learned.
There is a growing understanding that economists need to look at bigger pictures, and so political economy and comparative governance are big fields of study now.
I know that being on a board dominated by engineers and "hard" scientists this post will probably get downmodded, but it's important to recognize that there is a great diversity in the opinions of professional economists, and many of the younger generation will not buy into the simple ideas of "deregulate, lower taxes, and eliminate minimum wage".
Re: (Score:2)
So, assuming you study this stuff: please explain why "deregulate, lower taxes, and eliminate minimum wage" is bad?
Re: (Score:3)
So, assuming you study this stuff: please explain why "deregulate, lower taxes, and eliminate minimum wage" is bad?
Deregulation - do you really need me to provide examples of why unfettered deregulation is bad? The banking example I provided above shows that simplifying regulation but ensuring enforcement can be much more effective than just straight deregulation.
Lower taxes - I'm assuming that you like civilization and the benefits provided to you by your government - things like roads, sewage treatment, fire and police services, courts of law etc.? These things generally cost money and since society as a whole benefit
Re:Social Science is an oxymoron (Score:4, Insightful)
Because an economy is a lot like an ecosystem. Eliminate the regulators (like, say, the predators) in an ecosystem, and you get population booms followed by population collapses when the resources are depleted - sometimes to extinction. Regulation is good in that it prevents this boom-and-bust scenario. Guess what the economic system advocated by conventional economists looks like? Boom and bust the whole way through. The periods of stability in between have usually come as governments start enforcing regulations after a particularly bad boom-and-bust event has left a good part of the population in hardship. Then, over time, people forget. Little by little, regulations are dropped, and of course there arrives a period of apparent prosperity (the boom phase) during which we are lured into dropping more and more regulations, because obviously regulations are hampering this prosperity. Of course, it can't last. Without the moderation provided by regulation, we enter a boom phase that, once all the suckers at the bottom of this pyramid scheme have contributed, can only end in a bust. Then we start talking about regulation again (which is needed).
Until we get rid of the models that advocate the "deregulate" point of view, we have no chance of getting out of this history of boom and bust. Current economic models encourage waste (we've squandered, in a couple of generations, all the energy wealth stored up over millions of years). That's hardly "economic" if it's wasteful. Boom-and-bust cycles and chronic waste are sure proof that the accepted economic models are broken.
Re: (Score:1)
So, in which regulated economy was there no boom-bust cycle? Even the Communist economies in Eastern Europe had them, because they were connected to the world market; but when the last one came, the whole system crashed instead of only some companies crashing.
The "boom-bust cycle" is given exaggerated importance: for the last 200 years, and with very few exceptions, the variations were about 7 percent, and the overall curve was ascending.
Re: (Score:3)
Got any data to back that up?
no other science uses "bad" and "good" to (Score:3)
evaluate its theories, and whether or not they match reality. Because, as mentioned elsewhere, economics is the only 'science' where nobody cares about actually measuring reality. They come up with a bunch of theories, get paid fat 'consultant' bonuses by corrupt leaders, and live in a state of suspended, deluded animation.
A great example is from Ferguson's film "Inside Job", where he flat-out proves that professors took money to lie about the state of Iceland's banking system right before it fell into the
Re: (Score:2)
Someone hasn't been paying attention for the past 30 years. Reaganomics doesn't work. Why it doesn't work is an academic question, but simple reality is that it has been disastrous for this country.
Re: (Score:2)
I would agree that half-assed libertarianism probably doesn't work. I disagree that a system as complex as human society can ever be summed up into a "simple reality".
Re: (Score:2)
Thus proving that it's not science, but simply people pushing their personal opinions draped in the prestige of science.
If they don't buy into it, then they will be ignored, because the libertarian hands-off mantra better serves the interests of the vultures who want to loot the socie
Re:Social Science is an oxymoron (Score:5, Insightful)
Thus proving that it's not science, but simply people pushing their personal opinions draped in the prestige of science.
Oh I forgot there are no differing opinions in science. I guess Stephen Jay Gould and Richard Dawkins agree completely on how evolution works.
Re: (Score:1)
"movement to start using empiricism" (Score:2)
Wow, congratulations: after 150+ years, this 'science' is finally starting to maybe think about using the idea that you should use actual data from the real world in your formulation of theory. Brilliant concept. Let me know how it works out.
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
This comment is clearly written by someone who has no idea what anthropology, sociology and psychology are. The "non rigorous" comment is particularly laughable, because clearly you have no knowledge of epistemology and the evolution (ahem!) of your own theories of what makes knowledge. Sociologists and anthropologists must study research methods, be examined on that knowledge, and better yet, understand their own role in the production of knowledge.
I'd take a sociologist over a scientist any day when it c
Re: (Score:2)
http://xkcd.com/435/ [xkcd.com]
Re: (Score:1)
Researchers misunderstand confidence intervals (Score:4, Informative)
Researchers Misunderstand Confidence Intervals and Standard Error Bars.
Belia, Sarah;Fidler, Fiona;Williams, Jennifer;Cumming, Geoff
Psychological Methods, Vol 10(4), Dec 2005, 389-396.
Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p < .05). Results from 473 respondents suggest that many leading researchers have severe misconceptions about how error bars relate to statistical significance, do not adequately distinguish CIs and SE bars, and do not appreciate the importance of whether the 2 means are independent or come from a repeated measures design. Better guidelines for researchers and less ambiguous graphical conventions are needed before the advantages of CIs for research communication can be realized.
(http://psycnet.apa.org/journals/met/10/4/389/)
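The misconception the paper documents is easy to demonstrate numerically: two independent 95% confidence intervals can overlap even when the difference between the means is significant at p < .05. A minimal sketch with invented means and standard errors (normal approximation, Python):

import math

m1, se1 = 0.0, 1.0   # mean and standard error of group 1 (hypothetical)
m2, se2 = 3.0, 1.0   # mean and standard error of group 2 (hypothetical)

# 95% CIs: mean +/- 1.96 * SE
ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)   # (-1.96, 1.96)
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)   # ( 1.04, 4.96)
print("CIs overlap:", ci1[1] > ci2[0])     # True

# Two-sided p-value for the difference of independent means.
z = abs(m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)   # ~2.12
p = math.erfc(z / math.sqrt(2))                     # ~0.034
print("significant:", p < 0.05)                     # True, despite the overlap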
Re: (Score:1)
Yeah, you're right, us social and behavior scientists are just running a huge con on the world and we deserve your derision. Why stop here? Soft sciences like biology are pretty much bunk too. I mean, what the hell? When has a biologist ever been able to tell you, mathematically, how a giraffe is going to act or how a protein will interact with a peptide? Pure crap.
More to your point, when has anyone in social science done anything for the advancement of the species? Never.
So, good show sir. You have sussed