Science

Why Published Research Findings Are Often False

Hugh Pickens writes "Jonah Lehrer has an interesting article in The New Yorker reporting that all sorts of well-established, multiply confirmed findings in science have started to look increasingly uncertain because they cannot be replicated. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology. In medicine the phenomenon seems extremely widespread, affecting not only anti-psychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants. 'One of my mentors told me that my real mistake was trying to replicate my work,' says researcher Jonathon Schooler. 'He told me doing that was just setting myself up for disappointment.' For many scientists, the effect is especially troubling because of what it exposes about the scientific process. 'If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?' writes Lehrer. 'Which results should we believe?' Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential because they allowed us to 'put nature to the question,' but it now appears that nature often gives us different answers. According to John Ioannidis, author of 'Why Most Published Research Findings Are False,' the main problem is that too many researchers engage in what he calls 'significance chasing': finding ways to interpret the data so that they pass the statistical test of significance, the ninety-five-per-cent boundary invented by Ronald Fisher. 'The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy.'"
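Ioannidis's "significance chasing" is easy to demonstrate. The sketch below is an illustration of the general idea, not anything from the article; the batch size and stopping rule are my own choices. It simulates a researcher who re-runs a t-test after every new batch of subjects and stops the moment p < 0.05, even though there is no real effect:

```python
# Significance chasing via optional stopping: peek at the p-value after
# every batch and stop as soon as it dips below 0.05. There is NO real
# effect here, yet far more than 5% of "studies" end up significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, false_positives = 2000, 0

for _ in range(trials):
    a, b = [], []
    for _ in range(20):                  # up to 20 peeks at the data
        a.extend(rng.normal(0, 1, 10))   # both groups drawn from the
        b.extend(rng.normal(0, 1, 10))   # same distribution: pure noise
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1         # stop and "publish"
            break

print(f"false-positive rate: {false_positives / trials:.0%}")  # ~20%, not 5%
```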
  • Yes it does. (Score:2, Insightful)

    by Dr_Ken ( 1163339 ) on Sunday January 02, 2011 @12:31PM (#34737670) Journal
The article says "this phenomenon doesn't yet have an official name," but it actually does. It's called "lying."
  • by Anonymous Coward on Sunday January 02, 2011 @12:32PM (#34737678)

The scientists' favorite song:

    The best things in life are free
    But you can keep 'em for the birds and bees
    Now give me money (that's what I want)
    That's what I want (that's what I want)
    That's what I want (that's what I want), yeah
    That's what I want

  • by girlintraining ( 1395911 ) on Sunday January 02, 2011 @12:33PM (#34737688)

After years of speculation, a study has revealed that scientists are, in fact, human. The poor wages, long hours, and relative obscurity that most scientists dwell in have apparently caused widespread errors, making them almost pathetically human and just like every other working schmuck out there. Every major news organization south of the Mason-Dixon Line in the United States and many religious organizations took this to mean that faith is better, as it is better suited to slavery, long hours, and no recognition than science, a relatively new kind of faith that has only recently received any recognition. In other news, the TSA banned popcorn from flights on fears that the strong smell could cause rioting from hungry and naked passengers who cannot be fed, go to the bathroom, or leave their seats for the duration of the flight for safety reasons....

  • by water-vole ( 1183257 ) on Sunday January 02, 2011 @12:38PM (#34737730)
I'm a scientist myself. It's quite clear from where I'm standing that to get good jobs, research grants, etc., one needs plenty of published articles. Whether the conclusions of those are true or false is not something that hiring committees will delve into too much. If you are young and have a family to support, it can be tempting to take shortcuts.
  • by Pecisk ( 688001 ) on Sunday January 02, 2011 @12:42PM (#34737750)

The New Yorker article is well written and informative. It clearly isn't claiming that there is something wrong with the scientific method; it just asks: could there be? There is an excellent reply by George Musser at Scientific American: http://cot.ag/hWqKo2 [cot.ag]

    This is what I call interesting and engaging public discussion and journalism.

  • Re:Hmmmmm (Score:5, Insightful)

    by Joce640k ( 829181 ) on Sunday January 02, 2011 @12:43PM (#34737754) Homepage

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively. Whenever results have a human element there's always the possibility of experimental bias.

    Triply so when the phrase "one of the fastest-growing and most profitable pharmaceutical classes" appears in the business plan.

Fortunately for science, the *real* truth usually rears its head in the end.

  • Re:Yes it does. (Score:5, Insightful)

    by shadowofwind ( 1209890 ) on Sunday January 02, 2011 @12:45PM (#34737772)

I agree. Though it's not lying by the Clintonesque definition of lying that most people use. It's more lying by omission: distorting the meaning of the results by not putting them in their complete context. At least that's how it is with the papers I've read and known enough about to have an educated opinion on. Although the misrepresentation is usually at least partially intentional, I don't think it's all intentional.

  • by rennerik ( 1256370 ) on Sunday January 02, 2011 @12:47PM (#34737784)
    > 'Which results should we believe?'

What a ridiculous question. How about the results that are replicated, accurately, time and time again, rather than the ones that aren't based on sound scientific theory, or that rest on failed attempts at one?
  • Re:Hmmmmm (Score:4, Insightful)

    by gilleain ( 1310105 ) on Sunday January 02, 2011 @12:50PM (#34737824)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively.

    Alternatively, this article is almost unbearably stupid. It starts off heavily implying that reality itself is somehow changeable - including a surfing professor who says something like "It's like the Universe doesn't like me...maaan".

This is just a journalistic tactic, though. Start with a ridiculous premise to get people reading, then break out what's really happening: poor use of statistics in science. What was really the point of implying that truth can change?

  • Already debunked (Score:5, Insightful)

    by mangu ( 126918 ) on Sunday January 02, 2011 @12:52PM (#34737836)

    Is it possible that there has always been error, but it is just more noticeable now given that reporting is more accurate?

    Precisely. As mentioned in a Scientific American [scientificamerican.com] blog:

    "The difficulties Lehrer describes do not signal a failing of the scientific method, but a triumph: our knowledge is so good that new discoveries are increasingly hard to make, indicating that scientists really are converging on some objective truth."

  • Publish or Perish (Score:4, Insightful)

    by 0101000001001010 ( 466440 ) on Sunday January 02, 2011 @12:56PM (#34737874)

    This is the natural outcome of 'publish or perish.' If keeping your job depends almost solely on getting 'results' published, you will find those results.

    Discovery is more prestigious than replication. I don't see how to fix that.

  • Re:It's simple. (Score:1, Insightful)

    by shadowofwind ( 1209890 ) on Sunday January 02, 2011 @12:56PM (#34737882)

Yes, but of course it's only a matter of time until global warming deniers, creationists, geocentrists, and/or hollow earth believers hijack this discussion and use it as grounds to dismiss every objectionable fact that's ever been established.

  • Re:Yes it does. (Score:5, Insightful)

    by IICV ( 652597 ) on Sunday January 02, 2011 @01:07PM (#34737944)

It's only lying if you do it intentionally. If ten labs independently, without knowing of one another, perform essentially the same experiment, and one of them gets a statistically significant result, is that lying? The other nine won't get published because, unfortunately, people only rarely (and for large or controversial experiments [nytimes.com]) publish negative results, but the one anomalous study will.

    The vast majority of science is performed with all the good will in the world, but it's simply impossible for scientists to not be human. That's why we do replicate experiments - hell, my wife just published a paper where she tried to replicate someone else's results and got entirely different ones, and analyzed why the first guy got it wrong.
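The arithmetic behind that ten-labs scenario is worth spelling out. A back-of-envelope check (mine, assuming independent experiments each tested at the conventional alpha = 0.05):

```python
# If ten labs independently test a true null hypothesis at alpha = 0.05,
# the chance that at least one gets a publishable "significant" result
# is 1 - (1 - alpha)^10, i.e. roughly 40%.
alpha, labs = 0.05, 10
print(f"P(at least one false positive) = {1 - (1 - alpha) ** labs:.2f}")  # 0.40
```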

  • by toppavak ( 943659 ) on Sunday January 02, 2011 @01:17PM (#34738014)
Scale is also an important factor. With better statistical methodology, more rigorous epidemiology, and the growing use of bio-statisticians in the interpretation of results, we're seeing that weak associations once considered significant cannot be replicated in larger experiments with more subjects and more quantitative, accurate measurements. Unlike in many, many other fields (particularly theology), when scientific theories are overturned it is a success of the methodology itself.

    That's not to say that individual scientists don't sometimes dislike the outcome and ultimately attempt to ignore and/or discredit the counter-evidence, but in the long run this can never work since hard data cannot be hand-waved away forever.
  • by dachshund ( 300733 ) on Sunday January 02, 2011 @01:22PM (#34738036)

    Whether the conclusions of those are true or false is not something that hiring committees will delve into too much. If you are young and have a family to support, it can be tempting to take shortcuts.

Yes, the incentive to publish, publish, publish leads to all kinds of problems. But more importantly, the incentives for detailed peer review and for repeating others' work just aren't there. Peer review in most cases is just a drag, and while it's somewhat important for your career, nobody's going to give you tenure on the basis of your excellent journal reviews.

The incentives for repeating experiments are even worse. How often do top conferences/journals publish a result like "Researchers repeat non-controversial experiment, find exactly the same results"?

  • Re:Yes it does. (Score:4, Insightful)

    by patjhal ( 1423249 ) on Sunday January 02, 2011 @01:26PM (#34738066)
I agree. I was a science major and saw quite a willingness to fudge/manipulate data, and I believe it has worked its way into general research. During a brief PhD stint I redid some experiments that showed the opposite of what other students had found. Mine showed some significance while theirs had not. Funny thing was, my data was ugly while theirs was pretty. This was from an experiment where organisms were growing in media and had to be counted via microscope and measured with a spectrograph at set time periods. My guess is their data was pretty because they fudged it, claiming they took the samples at exactly the prescribed intervals, whereas I recorded the actual elapsed time (the procedure was complicated, and the time it took me to complete the tasks varied, sometimes running past the next checkpoint). I also guess that the student wanted pretty-looking data because he thought it would look better to his boss (the professor who ran the lab). Even if the scientists themselves are not doing this out of pressure to advance, their underlings might be doing it to look "impressive." Part of the problem is that science is no longer something people do because they love it. It is too commoditized and has become just a job at the low end and a vicious battle for survival at the high end.
  • Re:Hmmmmm (Score:5, Insightful)

    by causality ( 777677 ) on Sunday January 02, 2011 @01:28PM (#34738078)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively. Whenever results have a human element there's always the possibility of experimental bias.

    Triply so when the phrase "one of the fastest-growing and most profitable pharmaceutical classes" appears in the business plan.

    The pharmaceutical industry is easily one of the most corrupt industries known to man. Perhaps some defense contractors are worse, but if so, then just barely. It's got just the right combination of billions of dollars at play, strong dependency on the part of many of its customers, a basis on intellectual property, financial leverage over most of the rest of the medical industry, and a strong disincentive against actually ever curing anything since it cannot make a profit from healthy people. Many of the tests and trials for new drugs are also funded by the very same companies trying to market those drugs.

Fortunately for science, the *real* truth usually rears its head in the end.

Sure, but only after the person suggesting that all is not as it appears has been laughed at, ridiculed, cursed, and given the old standby of "I doubt you know more than the other thousands of real scientists, mmmkay?" for daring to question the holy sacred authority of the Scientific Establishment, for daring to suggest that it could ever steer us wrong, that it, too, is unworthy of 100% blind faith, or that it may have the same problems that plague other large institutions. The rest of us, who have been willing to entertain less mainstream, more "fringe" theories that are easy to demagogue by people who have never investigated them, already knew that the whole endeavor is pretty good, but not nearly as good as it is made out to be by people who really want to believe in it.

  • Huh? (Score:5, Insightful)

    by SpinyNorman ( 33776 ) on Sunday January 02, 2011 @01:32PM (#34738110)

    Did you even read the article?

    This is basically about poorly designed clinical drug trials without sufficient controls. Sloppy work, even if it seemed rigorous enough at the time.

The sensationalistic "scientific method in question" stuff is pure BS, but after all, this is The New Yorker we're talking about, so one wouldn't expect too much scientific literacy. It was the scientific method of "predict and test" that caught these erroneous results, so the method itself is fine. The "scientist" who designed a sloppy experiment is to blame, not the method.

    However, I'm not sure that psychiatric drug trials even deserve to be called science in the first place. The principle of GIGO (Garbage In - Garbage Out) applies. This is touchy-feely soft science at best. How do you feel today on a scale of 1-10? Do the green pills make you happy?

  • by __aapspi39 ( 944843 ) on Sunday January 02, 2011 @01:39PM (#34738186)

It looks like a lot of the studies that suffer from this effect are concerned with people and their behavior. Personally I don't think it's a matter of whether the science is hard or soft, but just that the domain has some issues that are not so important in other fields, e.g. the structure of a galaxy or the behavior of a gas with respect to pressure.

The main problem is that when you're looking at anything that has something to do with humans, the tool with which you carry out the investigation is in part the very thing you are investigating (the mind). This increases the potential for bias no end, and in the opinion of some, renders the whole exercise a completely futile and confounded endeavor. But I would tend to believe that this problem is the exact reason why one should study the mind, exactly because it is the lens through which we view the universe.

    In many respects it's a flawed tool for research. Not only filters but active perceptual mechanisms are at work, and function in such a way as to ensure that people seem to create a large part of the reality that they live in. This shouldn't stop scientists from investigating imho, but means that in looking at an area such as the mind, humility is indeed appropriate.

Soft science, as you call it, should not be conflated with astrology. Like many other practices, astrology is closer to a very ancient and wonderful art (that of separating people from their money) than it is to scientific investigation. But then perhaps I would say that, being a Virgo.

  • Re:Hmmmmm (Score:5, Insightful)

    by arikol ( 728226 ) on Sunday January 02, 2011 @01:40PM (#34738198) Journal

Nahh, the problem is a misunderstanding of statistics: thinking that post-hoc analysis, with its fishing for statistical significance, is as valid as proper hypothesis testing. The proper way is for the hypothesis to be fully pre-formed and then tested. The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis.

    The need to publish (a scientist's income relies on what he publishes in most cases) as well as funding issues force scientists to try to find some usable results from their science, and by trawling through their data they can often salvage what would otherwise have been a failed bit of research. Except this salvaging operation may actually be absolutely worthless. This is most often not done on purpose but rather due to only partly understanding what statistics and significance testing tell us.

    So, a capitalistic, fully performance based (with results being the performance metric) environment does not seem to work well for science.
    Surprised?
    Me neither.
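As a rough illustration of that salvaging operation (the setup and numbers are mine, not the commenter's): trawl a data set of pure noise for the predictor with the best p-value, and you will nearly always find one that clears the nominal significance bar:

```python
# Fishing for significance: 100 candidate predictors, none of which has
# any relationship to the outcome. The best nominal p-value among them
# is nevertheless almost always below 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 50, 100
X = rng.normal(size=(n, k))   # 100 predictors: pure noise
y = rng.normal(size=n)        # outcome, unrelated to all of them

pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(k)]
print(f"best nominal p-value: {min(pvals):.4f}")                    # usually < 0.05
print(f"'significant' predictors: {sum(p < 0.05 for p in pvals)}")  # ~5, by chance
```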

  • Re:Hmmmmm (Score:5, Insightful)

    by khallow ( 566160 ) on Sunday January 02, 2011 @02:03PM (#34738388)

Yet another reason for the routine use of 95% frequentist statistics to be replaced with Bayesian methods.

    Frequentist statistics is a reasonable approximation of Bayesian statistics when applied to large numbers. You may well be right above, but my impression is that these sorts of statistical mistakes don't come from inappropriate use of frequentist methods, but rather from conclusions based on poor evidence and a heaping helping of observer bias. Do enough studies and eventually you will see spurious artifacts. If your study is to duplicate someone's study, you may feel pressure (conscious or not) to duplicate the results as well.
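That convergence claim can be seen in a toy example (mine: a binomial proportion with a flat Beta(1, 1) prior). The frequentist confidence interval and the Bayesian credible interval disagree noticeably for small samples and become practically identical for large ones:

```python
# 95% Wald confidence interval vs. 95% Bayesian credible interval
# (flat Beta(1, 1) prior) for an observed proportion of 30%.
import math
from scipy import stats

for n in (20, 200, 20000):
    k = int(0.3 * n)                               # 30% successes observed
    p_hat = k / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)  # frequentist interval
    cred = stats.beta(k + 1, n - k + 1).ppf([0.025, 0.975])  # posterior interval
    print(n, [round(x, 3) for x in wald], [round(float(x), 3) for x in cred])
```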

  • by HungryHobo ( 1314109 ) on Sunday January 02, 2011 @02:25PM (#34738532)

When doing research in one particular area, I did find that even in a field with very few people working in it, many of them had a large number of citations for papers by:

1. themselves
2. their former/current students
3. their former/current teachers

But at the same time, that isn't very surprising. Citing yourself can make sense when expanding on an earlier piece of research, and you're vastly more familiar with the work being done by people you know well, like your students or teachers.

  • Re:Hmmmmm (Score:5, Insightful)

    by burnin1965 ( 535071 ) on Sunday January 02, 2011 @02:26PM (#34738538) Homepage

It looked to me much more like an acknowledgement of widespread difficulties with randomness, scale, and human fallibility. Exactly the kinds of things that would make someone who's a staunch defender of "science as a means to truth" disregard valuable critical information about it.

The problem is actually the opposite. Note the E.S.P. experiment cited in the article. Rhine's initial experiment suggested to him that E.S.P. was real. Before publishing his results he did the right thing and reran the tests, and the results "proving" E.S.P. were not repeatable.

The next part is his absolute failure to understand the scientific method and statistics. He concluded that "extra-sensory perception ability has gone through a marked decline." In fact what he experienced was Regression toward the mean [wikipedia.org].

    Taking a well understood principle, renaming it with a term that suggests an action is taking place, then arguing that you have found some new phenomenon that proves science doesn't work is not critical information about anything.

It is ignorance that will be dismissed for obvious reasons. Too much time and energy is wasted repeatedly addressing these attacks on science by people who want so badly for their pseudo-science or supernatural beliefs to be true. In a perfect world, somebody who stumbles upon regression to the mean without knowing it would do additional research to understand what they are observing, rather than conclude that their initial experiment was correct and that the supernatural ability they detected was "declining," instead of accepting the alternative: it was never there in the first place.
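Regression toward the mean is easy to reproduce in simulation. A sketch of the Rhine episode described above (parameters mine: Zener-card guessing gives a 1-in-5 chance per card, 25 cards per run):

```python
# 1,000 subjects guess 25 Zener cards by pure chance (P = 0.2, mean = 5).
# Select the high scorers as "having E.S.P.", retest them, and their
# scores fall straight back to the chance mean: regression toward the
# mean, not a fading psychic power.
import numpy as np

rng = np.random.default_rng(42)
first = rng.binomial(25, 0.2, size=1000)           # first run: pure guessing
gifted = first >= 10                               # "E.S.P. detected"
second = rng.binomial(25, 0.2, size=gifted.sum())  # retest the "gifted" group

print(f"gifted group, first-run mean: {first[gifted].mean():.1f}")  # ~10
print(f"gifted group, retest mean:    {second.mean():.1f}")         # ~5
```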

  • Re:Hmmmmm (Score:5, Insightful)

    by arikol ( 728226 ) on Sunday January 02, 2011 @02:35PM (#34738600) Journal

No, it's not that they do it on purpose; at least, most of them don't.
It's that most scientists have only a very basic understanding of statistics (except statisticians, obviously) and don't understand the implications of those shortcuts. They genuinely believe that their results (after trawling through data to find some statistical link) are strong, and feel confident in presenting them, especially as they have nice and shiny statistics to back the results up.

    The capitalistic and performance based system is what pushes scientists into taking these shortcuts to begin with, thinking that it's no big problem ("dude, I can see a link here, and it's at .002!! YAY!").

So, two problems:
1. Not everybody is an expert at statistics (and 1a: numbers look impressive when you only half understand them...).
2. The pressure to put bread on their tables pushes scientists to try to find usable results somewhere in their research; otherwise they don't get funding for further research and may lose their jobs.

  • Re:Yes it does. (Score:5, Insightful)

    by burnin1965 ( 535071 ) on Sunday January 02, 2011 @02:53PM (#34738720) Homepage

    Are you serious? Many thousands of people are dead simply because a few people were trying to stay gainfully employed to support their families?

I am truly sorry if this comes off as offensive as I think it does, but if you believe that there would have been mass suffering from unemployment had we not bombed the shit out of Iraq, and that this was the basis for the lies that resulted in many thousands losing their lives, then you are seriously deluded.

    As a U.S. citizen I found Clinton's actions and lies embarrassing, but the lies from Bush transferred billions, if not trillions, of public funds into the hands of a few and resulted in the deaths of many thousands of people.

    Comparing lies about a blow job to lies resulting in debt and death is absurdity on a grand scale.

  • Re:Hmmmmm (Score:5, Insightful)

    by syousef ( 465911 ) on Sunday January 02, 2011 @03:15PM (#34738880) Journal

Nahh, the problem is a misunderstanding of statistics: thinking that post-hoc analysis, with its fishing for statistical significance, is as valid as proper hypothesis testing. The proper way is for the hypothesis to be fully pre-formed and then tested. The numbers and statistics apply ONLY TO THE HYPOTHESIS being tested, so you cannot hunt for statistical significance just somewhere in the data and then re-formulate your hypothesis.

The significance of this fundamental mistake cannot be overstated. It seems to be prevalent in the medical literature, and there was a doctor doing the rounds lecturing about this a couple of years back. I wish I could recall exactly which podcast, but he covered all sorts of common fundamental errors in medical research statistics and did it in a very accessible way. The key thing to remember is that if you have enough variables there WILL, by complete coincidence, be correlations between some of them in any given sample. So to test a hypothesis properly, not only must you formulate it in advance without looking for any correlation within the data, but you must look at more than one data set to verify your findings.
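The second-data-set point is the crux, and it is simple to demonstrate (a continuation of the fishing sketch above; the setup is mine): the variable that looks most significant in one batch of noise shows nothing when re-tested on fresh data:

```python
# Fish the best predictor out of one noise data set, then re-test that
# same predictor on independent data: the "finding" evaporates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, k = 50, 100
X, y = rng.normal(size=(n, k)), rng.normal(size=n)

best = min(range(k), key=lambda j: stats.pearsonr(X[:, j], y)[1])
print(f"discovery p-value:   {stats.pearsonr(X[:, best], y)[1]:.4f}")   # small

X2, y2 = rng.normal(size=(n, k)), rng.normal(size=n)  # fresh, independent data
print(f"replication p-value: {stats.pearsonr(X2[:, best], y2)[1]:.4f}")  # typically large
```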

  • Re:It's simple. (Score:2, Insightful)

    by shadowofwind ( 1209890 ) on Sunday January 02, 2011 @03:26PM (#34738956)

    Yes, I'm not arguing for government action to address global warming. I also see it as a power grab. I'm sorry I wasn't clear. It seems you're trying to have an argument with me as if I'm taking a position that I'm not taking. But political motivations aside, if you look at the science, there is warming. And the scientific arguments are often dismissed based on the political dimension, rather than on the grounds of scientific merit, which was my whole point.

  • by Skippy_kangaroo ( 850507 ) on Sunday January 02, 2011 @03:27PM (#34738958)

    Consider also that most researchers run more than 20 regressions when testing their data. That means that the 95% significance level is grossly overstated.

The key is that the 95% level applies only if you don't data snoop beforehand and only if you run the regression once and only once. The true significance level of many studies that claim a 95% level is likely closer to 50% when you consider all the pretesting and data snooping that goes on in reality, not the rather idealised setup reported in the journal publication.
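That deflation is just the family-wise error rate at work. Making the arithmetic explicit (the Bonferroni line is my addition, as one standard remedy):

```python
# Running 20 independent tests at alpha = 0.05 inflates the chance of at
# least one false positive to about 64%; dividing alpha by the number of
# tests (Bonferroni) restores roughly the intended 5% level.
alpha, m = 0.05, 20
print(f"effective rate over 20 tests: {1 - (1 - alpha) ** m:.2f}")      # 0.64
print(f"with Bonferroni alpha/m:      {1 - (1 - alpha / m) ** m:.3f}")  # 0.049
```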

  • Re:Hmmmmm (Score:5, Insightful)

    by arth1 ( 260657 ) on Sunday January 02, 2011 @04:24PM (#34739268) Homepage Journal

    No, that doesn't solve the problem, it increases it.
    The consistent lack of results is a result, and a very useful one too.
    The logical next step is to ban marketing of humbug until and unless the snake oil sellers can show valid scientific theories and peer reviews for their remedies.

Likewise, capitalist-funded research needs to stop rewarding only positive findings, start treating all results as equally valid science, and stop punishing scientists who produce negative or inconclusive results. That's good science, which is what they should pay for.

Consistently publishing more positive results than randomness would dictate is a clear indication of bad science, and should be punished, not rewarded.

  • Re:Hmmmmm (Score:5, Insightful)

    by Bowling Moses ( 591924 ) on Sunday January 02, 2011 @05:30PM (#34739616) Journal
I'm a biochemist. Since earning my PhD five years ago I've been working in academia, but my funding's about to run out and I'm applying for jobs at biotech and pharmaceutical companies. Do you think I had the empathy and morality centers of my brain removed or something? Do you think that every single person working in those sectors underwent the same procedure, or was blessed from birth with complete amorality? The reality is that science is hard. The reality is that science is expensive. The reality is that our knowledge is incomplete and we do the best we can with the limited resources at our disposal. If we're lucky, that means we can turn a life-destroying illness into something treatable. Take cancer, for example. There's no magic pill to take it away and probably never will be, because cancer is a large family of diseases caused by different breakdowns of cellular mechanisms, many of which we don't understand very well and which are very hard to tease apart. That's why cancer, and diseases in general, tend to end up with treatments and not one-pill cures, not because big pharma's hiding it.

My brother went through 10 months of chemotherapy. 10 months of being nauseous, 10 months of not wanting to eat, 10 months without a sense of smell, 10 months with no sense of taste, 10 months of physical weakness, 10 months of diminished mental capacity, 10 months of needles, 10 months of IVs full of chemicals that burned when they went in, 10 months of doctors prodding and poking. He's now cancer-free and has been for 12 years. Back when he went through that, his odds of surviving Hodgkin's lymphoma were about 80%. Current treatment has reached 90%, and a recent experimental treatment is at 98%. They're all still unpleasant and take months. Do you honestly think that if I had the ability to jump in with a magic pill and spare my brother those 10 months I wouldn't do it because it might hurt the corporate bottom line? Fuck the bottom line. Fuck having a job if it came to it. That's the prevailing attitude in biotech and pharmaceutical companies, because they're made up of people like me, people who have seen loved ones go through horrible illness, and not the monsters your fantasy requires.
  • Re:Hmmmmm (Score:2, Insightful)

    by Anonymous Coward on Sunday January 02, 2011 @10:56PM (#34740884)

Nope, nope, nope. I'm guessing you are not from the science world. Let me explain: in order to get money from governments (the places you list), you have to publish. No one cares if your research is improving humankind. If your publication count is high, you can get the funding. This is the capitalistic force in the research business; the currency here is the publication count.

I have been in the science business for almost 15 years now, and I have observed the following shortcuts for increasing the publication count:
1. The statistics game: more or less the things explained in TFA.
2. Forming gangs: a gang controls conferences and journals, and dedicates 60% of the papers to be published to members of the gang.
3. Writing a lot; the idiot reviewers will not get it. Simple obfuscation. For a conference, a reviewer might get around 20 papers. These reviewers are mostly professors who don't have time, so they forward the reviews to their PhD students. A PhD student (especially at the beginning), in my opinion, cannot distinguish between good work and obfuscation.
4. Solving part of the problem and writing the paper as if the whole problem is solved. Here good language skills are important. This is related to the previous case, but here there is actually some work.
These mostly happen in research funded by governments (or government agencies). I think (my two cents) this is mostly because governments et al. do not much care whether the research is meaningful. Governments have too much money compared to a company. They fund, say, 500 research projects; if 2-3 of these end up being used, they can have huge benefits to the economy. The rest serves to increase scientific reputation and keep the economic wheel turning. With the funding you can hire some PhD students, who try to live on their salary. This, in turn, can boost development in underdeveloped regions of a country.

Reputation is measured by citation count and publication count, and all the above-mentioned methods inflate those metrics. Hence, people publish just to publish; in the end they get a good reputation. A good reputation means more funding, and if you get funding, you can feed yourself. You also help the government.

However, for research funded by industry, you simply cannot publish just to increase your count. The company has limited resources and wants to use the results; as a scientist you cannot waste resources. This makes a huge difference in why you do research: you have a concrete problem, and you have to find a nice, concrete solution. If your solution is general, you can publish.

Seeing all these publishing shortcuts discourages my PhD students. They do not continue in research; they just get the title, go to a company, and do something completely unrelated to their field. Some stay in academia and learn the publication game. We all need money, so we need to publish to get funding; ~70% of the funding comes from the government.

BTW: I'm a Slashdot reader, not usually a commenter. But I just wanted to write about my observations.
