Some of the World's Most-Cited Scientists Have a Secret That's Just Been Exposed (sciencealert.com) 88

Long-time Slashdot reader schwit1 quotes Science Alert: Among the 100,000 most cited scientists between 1996 and 2017, there's a stealthy pocket of researchers who represent "extreme self-citations and 'citation farms' (relatively small clusters of authors massively citing each other's papers)," explain the authors of the new study, led by physician turned meta-researcher John Ioannidis from Stanford University.

Ioannidis helps to run Stanford's meta-research innovation centre, called Metrics, which looks at identifying and solving systemic problems in scientific research. One of those problems, Ioannidis says, is how self-citations compromise the reliability of citation metrics as a whole, especially at the hands of extreme self-citers and their associated clusters. "I think that self-citation farms are far more common than we believe," Ioannidis told Nature. "Those with greater than 25 percent self-citation are not necessarily engaging in unethical behaviour, but closer scrutiny may be needed."

  • Goodhart's law (Score:5, Insightful)

    by VeryFluffyBunny ( 5037285 ) on Saturday August 31, 2019 @11:44AM (#59144066)

    Just another example of Goodhart's law in action:

    "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."

    In other words, when a measure becomes a target, it ceases to be a good measure. In the publish-or-perish world of academia, where the pressure to publish & get good metrics is intense, the scientists were just doing what the metrics system asked of them. Yes, there are extreme cases like the ones mentioned in the article, but there's an "understanding" between many researchers that citing each other's papers is a good thing to do. It's reciprocal altruism. Rather than pointing the finger at individuals, why not change the system so that we can have better quality science?

    • Citation needed. ;-P

      • Re:Goodhart's law (Score:4, Informative)

        by VeryFluffyBunny ( 5037285 ) on Saturday August 31, 2019 @11:55AM (#59144100)

        There you go:

        Goodhart, C. (1981). Problems of Monetary Management: The U.K. Experience. In A. S. Courakis (Ed.), Inflation, Depression, and Economic Policy in the West (pp. 111–146). Rowman & Littlefield.

        • I was being ironic, but thanks.

          • I was being ironic, but thanks.

            Strange thing is that I have known this for at least 30 years. Amazing that scientists have "kept this a secret" for so long.

            There have always been citation sluts and those who implant their co-authorship on every paper they can get away with. We always laughed at them.

            In the end, it doesn't make much difference. It's the science version of people who collect friends on Facebook.

            • by ceoyoyo ( 59147 )

              I do medical research. During my PhD I was told to include someone as a co-author because 'we might want to collaborate with him in the future.'

              A few journals now require authors to provide a statement of their contribution to the paper. Generally these go something like this:

              1st author: conceived and designed study, collected data, performed statistical analysis, wrote and revised paper.

              2nd author: collected data, revised paper

              subsequent half dozen plus authors: "contributed to study design a

            • This. I thought everyone in academia knew about this for years.
              It's one of the things that woke me up to the fact that academia has its share of scam artists and the rest are completely aware of it and fine with it.
              • This. I thought everyone in academia knew about this for years. It's one of the things that woke me up to the fact that academia has its share of scam artists and the rest are completely aware of it and fine with it.

                Well - academia is made up of humans, so suffers from all of the veracity issues that other humans suffer from.

                In a peer group, the sluts are more ridiculed than praised. Vanity means nothing. And it isn't that the rest are fine with it. It's just not a crime, so snicker, shake your head, and get on with the work at hand.

                It backfires when the citation slut puts their name on a paper that gets retracted. That often cures them. Oh yeah.

        • You are fucking hilarious!
      • by TWX ( 665546 )

        Citation provided [jalopnik.com].

    • Re:Goodhart's law (Score:5, Insightful)

      by hey! ( 33014 ) on Saturday August 31, 2019 @12:00PM (#59144124) Homepage Journal

      There's also Campbell's Law [wikipedia.org]:

      The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

      Campbell's Law is why structuring education reform around high stakes testing doesn't work as well as one might reasonably anticipate. Testing is an integral part of controlling any process, and in itself is a good thing. But as soon as you try to use it to oversimplify what is an inherently complex problem, things go wrong. People start gaming the system because the system is structured as a game.

      Using citation counts to rank researchers and journals is more or less the same mistake. It's something worth looking at, but you simply can't boil down a researcher's value to a single number, and if you try to you effectively gamify the system.

      • by Izuzan ( 2620111 )

        I remember when MCSE courses first came out. After the first two years or so, an MCSE certificate was worthless, as people were gaming the system by memorizing the books and not doing any of the actual work. So they aced the tests but couldn't do the actual work, because they had never worked on the system before.

    • That's a good way to put it. The publication focus has become science's version of 'teach to the test'. The problem is that science cannot be quantified. You can't just assign a number to how much a person or project contributes to human knowledge or bettering the world through science, which in the end is all that really matters. But that's what we're trying to do, that's what matters so darned much for science as a profession, and I don't think it is at all beneficial.
    • That is very well put. I didn't know Goodhart either.
      The citation metrics as a reputation measure lead to gaming the system, not only in how citations are collected and traded but also in what is considered worth researching and how it is published.
      Focusing on the excesses is a bad approach. Tightening the citation system is a bad approach. But who is going to reduce the usage of reputation measures?

      I'm reminded of David Bohm who pointed out that in an experiment they were letting monkeys make painti

      • Re:Goodhart's law (Score:4, Interesting)

        by Ol Olsoc ( 1175323 ) on Saturday August 31, 2019 @04:31PM (#59144688)

        The citation metrics as a reputation measure lead to gaming the system, not only in how citations are collected and traded but also in what is considered worth researching and how it is published.

        And virtually everyone involved knows exactly what the citation sluts are doing, and ridicules them - discreetly, of course. It means virtually nothing other than vanity.

        • In some sectors it will mean little. But when reputation metrics become a measure on which your funding depends then whether you like it or not, you're going to use them. Apart from a principled minority.

    • "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."

      This sounds like something from quantum mechanics. Once you observe the system closely it changes its behavior.

  • in most press conferences, while announcing their findings, that they weren't wearing pants.

  • "I think that self-citation farms are far more common than we believe," Ioannidis told Nature.

    Such unprecedented attacks on the credibility of Scientists and Science weren't possible before the Trump era. Why do you hate Science?!

    Lrnu, n shaal gebyy, vf abg vg?

    • Um hmm... ;-)

      Seriously, though, I know you realize this, but many people don't: criticizing scientific corruption is not attacking science, it is defending science.

      Trump has little to do with it, anyhow. Dr. Ioannidis is most famous for this seminal paper, published eleven years before President Trump was elected:
      Why Most Published Research Findings Are False [plosmedicine.org], by John P. A. Ioannidis, 2005. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124.

      Yes, you read that right: the peer-reviewed literature s

      • That Ioannidis article appears to claim we have a 'Millikan epidemic': measurements suffering from strong historical bias.

        One should add that the more you're able to verify yourself (because you have the time and the competence), the less you have to rely on trust. For non-scientists, trust in science is a reasonable approach. For non-climate-scientists, trust in climate scientists is a reasonable approach. It's not guaranteed to work but it is reasonable. The polarized response to switch all trust to opponent

      • published eleven years before President Trump was elected

        WHAT! We can't blame Trump! I thought we blamed him for everything?

  • The reason Science is often wrong is the fallible, greedy, agenda-driven, political, socially and economically compromised humans that are part of the community.

    When we attach the word professional to someone, we should only be saying this person's knowledge is greater, but too often people use it for gatekeeping, where people without a diploma or "professional" experience are expected to only listen and not be able to challenge the professional.

    I have met several professionals in multiple fi

    • by Uecker ( 1842596 ) on Saturday August 31, 2019 @01:32PM (#59144376)

      While scientists may be wrong very often, scientific knowledge is actually extremely reliable. The problem is that it may be difficult for non-scientists to know what is established science and what is not. If you get your information about science only from headlines in the news, which prematurely report on every questionable study they think might interest their readers (so there is a selection bias for unlikely results), and then often misunderstand a lot of things, then it is easy to get the impression that science is a complete mess. But it is often very obvious to scientists what is reliable knowledge and what is not. Good sources of information are consensus statements published by big scientific societies.

  • Citation scores are the equivalent of saying "trust me" - the exact opposite of science. They were introduced for petty administrators to use for determining pay offers to staff, and naturally that means people who want to be paid manipulate them.

    • by Uecker ( 1842596 )

      Citation scores are not the equivalent of saying "trust me". They are the equivalent of others (!) saying "there is something interesting". This is why you might want to consider removal of the self-citations for the purpose of rankings. Although I do not think it matters too much.
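      A minimal sketch of what "removal of the self-citations for the purpose of rankings" might look like, assuming a toy dataset of papers and citation pairs (all paper IDs and author names below are hypothetical, and a citation counts as a self-citation when the citing and cited papers share at least one author, matching the coauthor-inclusive definition the study describes):

```python
# Sketch: count citations per author, with and without self-citations.
# A citation is treated as a self-citation when the citing and cited
# papers share at least one author. All data here is made up.

def citation_counts(papers, citations):
    """papers: {paper_id: set of author names}
    citations: list of (citing_paper_id, cited_paper_id) pairs.
    Returns {author: (total_citations, non_self_citations)}."""
    totals = {}
    for citing, cited in citations:
        # Shared author between citing and cited paper => self-citation.
        is_self = bool(papers[citing] & papers[cited])
        for author in papers[cited]:
            total, non_self = totals.get(author, (0, 0))
            totals[author] = (total + 1, non_self + (0 if is_self else 1))
    return totals

papers = {
    "p1": {"alice", "bob"},
    "p2": {"alice"},
    "p3": {"carol"},
}
citations = [
    ("p2", "p1"),  # alice citing her own earlier work (self-citation)
    ("p3", "p1"),  # independent citation from carol
]

counts = citation_counts(papers, citations)
# alice gets 2 citations in total for p1, but only 1 from outside
# her own papers - the ranking metric would use the second number.
```

      Note that under this coauthor-inclusive definition, bob's count is also reduced by alice's self-citation, which is exactly the kind of edge case the thread above argues makes the metric tricky in small fields.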

  • by JoshuaZ ( 1134087 ) on Saturday August 31, 2019 @02:16PM (#59144456) Homepage

    I'm not sure that this level of self-citation is a cause for concern by itself, and I'm especially unconvinced given that they are including papers of people who are co-authors. In very narrow fields, or in areas where a small number of people are doing a lot of the work, this doesn't strike me as at all strange. Another factor might be that some people are more willing to list tangentially related works in general, which also includes some of their own works; there's a lot of variation in how closely connected people think a work should be before it gets cited.

    At the risk of being a little egotistical, I'm going to look at some of my own papers. I had a paper which improved a certain technical result about odd perfect numbers http://math.colgate.edu/~integers/vol18.html [colgate.edu] . A followup paper to that one is currently under review, where I improve on the bounds even further. That's a self-citation, but it would be extremely odd not to cite that. Similarly, I had a paper which was just recently published about the 2nd largest prime factor of an odd perfect number. I'm working with two other authors now on a paper which does something similar with the third largest prime factor. Not citing that would be very odd.

    I'm particularly concerned about the discussion of groups of people engaging in citations. How does one distinguish between groups of people citing each other because they are engaging in mutual promotion and groups citing each other because they are all working in the same, possibly narrow, area? Similarly, the article acknowledges that the inclusion of coauthors in the self-citation index has potential issues.

    I'm more comfortable with Ioannidis acknowledging in the article that this isn't a perfect metric and that it may be a reason to look closer more than anything else, and not a specific cause for concern.

    One thing the article does not discuss that would be worth discussing is whether the higher percentage of self-citation in some countries is due in part to language and cultural barriers. The authors may be writing very good works that aren't getting noticed outside their own countries, so the percentage of self-citations ends up looking higher.

    • by Uecker ( 1842596 ) on Saturday August 31, 2019 @04:25PM (#59144678)

      Self-citing is completely normal and acceptable (and, as you point out, often required), and the paper by Ioannidis does not state anything else. The Slashdot headline and summary are again a bit misleading.

      • Self-citation is a bad thing to include when using citations to measure the importance or impact of a piece of research.

        As the GP pointed out, it would be very odd for him NOT to cite his own earlier work when improving upon it. But even if he writes 100 more papers, each improving on the previous, it means nothing if no one else ever cites any of his papers.

        Of course, citation counts are a terrible way to measure the importance or impact of research papers in the first place, but it is the metric that is us

  • by Jerry ( 6400 ) on Saturday August 31, 2019 @05:32PM (#59144794)
    a group of scientists shamelessly "peer-reviewed" their own articles. This tactic was revealed in the CRU 2009 zip files released by a whistle-blower, who followed up with a 2011 release as well. http://file.wikileaks.org/file... [wikileaks.org] Among the 1,072 emails in the zip file is a sequence of exchanges which discusses:

    1) how to get the editors of certain periodicals replaced with ones friendly to the claims the CRU were making;
    2) how to prevent certain researchers' papers from being published;
    3) setting up a website to respond to published papers that contradict their claims, without having to publish letters or papers.

    They have been rather successful in reaching their goals. To make matters worse, this group has, for many years, been methodically changing or throwing out historical data in order to create "historical" data that fits their theories. All of their research is designed to support their theories and in turn give ammunition to political groups which want to use their "science" to change or create public policies, which is their real goal.

    Translated from the Swiss-German magazine NZZ, "New Journal of Zürich" https://translate.google.com/t... [google.com]:

    "But one has to say clearly: we are effectively redistributing world wealth through climate policy. That the owners of coal and oil are not enthusiastic is obvious. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy..." - Ottmar Edenhofer, 2010

    From 2008 to 2015 Edenhofer served as one of the co-chairs of the Intergovernmental Panel on Climate Change (IPCC) Working Group III, "Mitigation of Climate Change".

    When I was in grad school we were taught to design experimental tests of our theories using a null hypothesis. In other words, the design of our experiment was to attempt to prove our hypothesis wrong, not right. We would assume it was right; otherwise why create a null hypothesis test? To deliberately design a test with the sole purpose of proving your theory right was considered unprofessional, if not unethical.

    Cooking, trimming, fudging, padding, or making up data out of thin air has been going on for decades, if not centuries. About 30 years ago NOVA had an episode titled "Do Scientists Cheat?". Two federally paid research scientists guided the program. It featured the whistle-blowers who exposed the cheaters. The program stated that about 48% of all scientific papers involved some sort of cheating. After the show's release several scientists lost their jobs. One PhD was pulled back by the awarding university, all of the whistle-blowers were punished for being honest, and the two federal scientists were posted to meaningless desk jobs in remote areas like Alaska.

    What drives all the cheating is Federal grant money, which is awarded to applicants who propose research proving what the bureaucrats running the grant money programs want the research to prove.
    • The program stated that about 48% of all scientific papers involved some sort of cheating.

      I knew that when it started talking about whistle-blowers and scientists living high on the hog with government contracts, it was going to be about Global Warming. It couldn't be ANYTHING else, nope, every scientist that wanted to cheat just for some reason said: "I want to study clouds, mom and dad."

    • by Uecker ( 1842596 )

      This issue was investigated by several committees which looked into this in detail. They all came to the conclusion that there was no misconduct. Are you saying they are part of a huge conspiracy?

  • by joe_frisch ( 1366229 ) on Saturday August 31, 2019 @07:15PM (#59144950)

    Citations are fine. Self-citations are fine - nothing wrong with citing your own work if it is the best, or at least most convenient reference. The problem comes from using citations to somehow measure the quality / importance of a publication.

    Some survey publications may be extremely valuable as a reference but have nothing really new in them. Other times the very first paper in a field may represent a truly new discovery, but later papers that fill in details are better references for future work.

    I don't think there is an objective way to measure the performance of a scientist without having a deep knowledge of the field. Imagine trying to rank art, music or literature based on the number of viewers.
