Science

How Many Scientists Does It Take To Write a Paper? Apparently, Thousands

An anonymous reader writes: The Wall Street Journal takes a look at the current spike in the number of contributors credited in scientific journals. The problem is highlighted by a recent physics paper which credits 5,154 researchers. The journal reports: "In fact, there has been a notable spike since 2009 in the number of technical reports whose author counts exceeded 1,000 people, according to the Thomson Reuters Web of Science, which analyzed citation data. In the ever-expanding universe of credit where credit is apparently due, the practice has become so widespread that some scientists now joke that they measure their collaborators in bulk—by the 'kilo-author.'"
  • by Anonymous Coward
    How many editors does it take to spell-check TFS?
    • by Anonymous Coward

      According to samzenpus...zero.

      • It is a problem (Score:5, Interesting)

        by The Real Dr John ( 716876 ) on Tuesday August 11, 2015 @10:24AM (#50293667) Homepage

        I publish journal articles and reviews fairly regularly, and we only rarely include authors who did literally nothing other than be in the lab or consult. It happens occasionally when a young researcher has been working hard on a related project, but still does not have nearly enough data for a whole new paper. There are some cases where a researcher in another lab provides a reagent you can't get anywhere else, but even then they often review the final manuscript and make suggestions. However, I think that in larger labs this may occur much more often. Our lab is small (3 scientists and 2 student researchers). On one recent paper several collaborating scientists at NIST worked extensively on a part of the project that did not pan out, so they are in the author list even though their work does not appear in the paper. But you can't say they didn't do any work. There are always issues like this when deciding on authorship that are unique to each situation.

        However, the physics and genetic articles that have thousands of authors are much harder to justify, and absurd to even think that anyone would go through the list. All but the first few authors are lost in any citation list when the paper is cited. No one will ever see the other names. Plus, it really messes with citation software like Reference Manager (newer versions of Endnote seem to handle it well).

The bigger journals now require author contributions to be listed, which is a good thing (and would be very difficult on a paper with 1500 "authors").

        • Look here: Our US scientists should clearly be using Imperial technology.

          3,000 contributors should be designated as either:

          20.83 gross contributors

          ...or...

0.3 myriad contributors

          Damned metric boot-lickers.

          Imperial: "Because MURICA!"

        • However, the physics and genetic articles that have thousands of authors are much harder to justify, and absurd to even think that anyone would go through the list.

          Consider high energy physics, where the papers require the combination of major efforts by more than a thousand physicists: People who designed and built the detectors, people who operated them 24/7 for runs lasting several months, people who wrote the triggering and data analysis code, people who conducted the data analysis, etc.

          All of these different aspects are major contributions, deserving co-authorship, and the papers draw on all those parts. Leave any out. You can't write a paper using only the dat

          • Physicists are very smart people. They have to be able to figure out a way to simplify the authorship issue on projects that large. It requires a new type of attribution system for high density projects. Maybe working groups could be credited, with online attribution to each team. Teams would range from a handful, to dozens of members. This way the teams would get credit, and the authors are the members of those teams.

            • Physicists are very smart people. They have to be able to figure out a way to simplify the authorship issue on projects that large.

              If large authorship were a problem, the kinds of fixes you suggest might be in order, but what is the problem? As publishing moves from dead trees to electrons, why is it a problem to list everyone who made a significant contribution to a large project as an author?

  • by umafuckit ( 2980809 ) on Tuesday August 11, 2015 @07:22AM (#50292309)
    Yes, there's a trend going upwards but there are only 1,400 papers with 50 or more authors. In 2009 about 1 million biomedical papers were published [quora.com]. So if we make the unlikely assumption that all the high author number papers are biomedical, that means that a whopping ~0.15% of the papers published each year have more than 50 authors. Not exactly a big deal.
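
    A quick back-of-the-envelope check of that percentage (a minimal sketch in Python, assuming the commenter's figures of 1,400 such papers and roughly 1 million papers published in a year):

        # Share of papers with 50+ authors, using the figures quoted above
        # (1,400 such papers; ~1,000,000 biomedical papers published in 2009).
        high_author_papers = 1400
        total_papers = 1_000_000
        share = high_author_papers / total_papers
        print(f"{share:.2%}")  # prints 0.14%, i.e. roughly the ~0.15% quoted above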
    • by Z00L00K ( 682162 )

      But the question is actually if it makes sense to have many authors on a paper. If you have 10 or more it should already raise a warning flag.

      • by Anonymous Coward

Depends on the field. On a genetics paper, you could easily have:

* The principal investigator and whoever in their lab obtained/prepared the samples.
* The lab that performed the sequencing, sometimes even in another institution, with plenty of people who might have handled the samples/nursemaided the machines at some point, plus whoever leads that lab.
        * The bioinformatics teams that processed the data, including checking for quality control and early analysis.
        * In-depth analysis from a different team in the

        • by arth1 ( 260657 ) on Tuesday August 11, 2015 @07:57AM (#50292469) Homepage Journal

          But are all of those authors, and not data contributors?

          • by guruevi ( 827432 )

It is common practice for more renowned professors to include grad students, PhDs in the lab, etc. on papers just to get their names out there. Since your worth as a scientist is often based on the number of papers on your resume, it's a great way of starting a career by having your professor include your name in grad school and beyond. I've seen papers with 20+ names on them where most of them, I know for a fact, have done practically nothing for the paper.

            • It typically works the other way. The grad student writes the paper and his adviser wants his name included because he "advised".

              • The grad student writes the paper and his adviser wants his name included because he "advised".

I've read enough PhD theses to know that very few students write their own papers. The difference between the chapters that have been accepted for publication in a journal and those whose submission is still pending is easily determined. Never mind the unseen contributions refining the research topic, the methodology, and the analysis.

                There's a reason science still works by the apprenticeship model. If you think you wrote your own Ph.D. thesis, you're either pathologically egotistical or your advisor died in y

If you mean wrote it alone, I concur. But the third option I have seen is likewise quite common, which is the disinterested advisor. But research papers in general, and the model for contribution, need work. As does judging a researcher's value by volume of publications. A similar problem has arisen in placement testing: volume over substance. The longer the essay, the higher the score, whether the grammar and grasp operated on the post-graduate level or the post-Kindergarten level. Quantity is rar
It is common practice for more renowned professors to include grad students, PhDs in the lab, etc. on papers just to get their names out there.

It is far more common for the grad students to be included because they did the actual work, and the "renowned professor" is a freeloader who didn't even check the data, and may not have even read the paper.

            • by IgnitusBoyone ( 840214 ) on Tuesday August 11, 2015 @08:35AM (#50292709)

It is common practice for grad students to include more renowned professors in the lab, etc. on papers out of respect for their funding. Since your worth as a Principal Investigator (grant holder) is based on the number of papers on your resume, it is the only way of having a career. I've seen papers with 20+ PhDs on them where most of them, I know for a fact, have done practically nothing for the paper.

I needed to fix that for you. In general a well-funded lab is so busy that the PIs do almost nothing but advise and review to make sure the grant work stays on track. In reality grads and postdocs do 90% of the work in hopes of one day moving to a career in management, a.k.a. a PI position. I've never met a PI that has time to work the day-to-day minutiae of scientific discovery; in fact they often lament that if they had wanted to keep doing science they would never have started a lab.

This has been my experience as well - dating back to the late '80s. A principal investigator is basically a grant-writing machine who has built enough of a reputation to get his grants funded. He also has to come up with new areas of inquiry and prod his students and post-docs into getting enough data to write a grant for that line of investigation.

                They visit the lab, but pretty much never get to do the bench work. And the distance from the lab means that they cannot remember how long things take, so the

            • by Gr8Apes ( 679165 )

It is common practice for more renowned professors to include grad students, PhDs in the lab, etc. on papers just to get their names out there. Since your worth as a scientist is often based on the number of papers on your resume, it's a great way of starting a career by having your professor include your name in grad school and beyond. I've seen papers with 20+ names on them where most of them, I know for a fact, have done practically nothing for the paper.

              Common practice or not, those people don't belong in the list. It demeans the value of those that actually did work on the paper.

          • by Plumpaquatsch ( 2701653 ) on Tuesday August 11, 2015 @08:09AM (#50292537) Journal

            But are all of those authors, and not data contributors?

            Collecting the data is the actual work. Any idiot with a computer can make the analysis. And draw the wrong conclusions from that.

            • by tburkhol ( 121842 ) on Tuesday August 11, 2015 @11:12AM (#50294099)

              Collecting the data is the actual work. Any idiot with a computer can make the analysis. And draw the wrong conclusions from that.

              It's a shame that so many people seem to agree with this. I would put it exactly the opposite: any idiot can be trained to collect data; knowing what data to collect, why to collect it, how it fits in with 100 years of pre-existing data, and how to condense all of that into a concise but readable story is the difficult (and creative) part.

Maybe it's learned from student science labs, where you spend a lot of time getting the mechanics of an experiment to work, then plug the numbers into a pre-set template for analysis. Of course, those labs are exactly about training students to collect data, and not so much about doing science. Maybe it's learned from the media, where an uninformed reporter picks a bit of data out of a paper and concludes that green coffee extract is a cure-all for fat. Maybe it's just that it takes a lot of work to be able to distinguish good work that advances our state of knowledge from a mountain of data that doesn't really say anything.

Nice try, but even if you ignore all the work to get the data, it still only takes one guy (with a computer) to do the analysis. And that one guy can only get a correct result if the data was taken correctly. But he can still screw up the result all on his own.
                • by delt0r ( 999393 )
Maybe the data you analyze. But we need more than one guy if we want to finish in anything called a reasonable time period. And we use hundreds of thousands of CPU hours doing it. Don't assume the world is as small as your experience.
              • Collecting the data is the actual work. Any idiot with a computer can make the analysis. And draw the wrong conclusions from that.

                It's a shame that so many people seem to agree with this. I would put it exactly the opposite: any idiot can be trained to collect data; knowing

Both are hard. I've done both and I can assure you that I find gathering data is often not easy. It obviously depends what you're doing, but in my field a good experimentalist is highly regarded. Things often don't work, and debugging your protocols takes brains. I agree that it's stupid to say "Any idiot with a computer can make the analysis". I'd love the fool who said that to come and look over my shoulder for a day.

        • by KGIII ( 973947 )

I am listed in a number of physics papers but I have nothing to do with physics - my graduate degree was in Applied Mathematics. At the time we were doing more and more with computers (four years in the early 1980s and then another four years at the end of the decade and into the next). I crunched a bunch of numbers for them - some of it was verifying the computer's work - and did this on a number of occasions and for other departments. Thus my name ended up in a number of papers. It was common enough that I had t

I find a greater red flag to be a large number of reference sources without specific denotations of which data is pulled from which source, making it impossible for the reader to establish the context in which the source data was gathered and therefore whether it is properly employed. But that strays from the topic at hand.

        References;
        1. Me
        • Why would they give you the exact denotations when all you're going to do is try to find something wrong with it?
      • by umafuckit ( 2980809 ) on Tuesday August 11, 2015 @07:50AM (#50292425)

        But the question is actually if it makes sense to have many authors on a paper. If you have 10 or more it should already raise a warning flag.

One of the examples they highlight is CERN, where thousands of researchers do indeed contribute. Since the current way of assigning credit in academic circles is to put people on a paper, it's hard to see what else can be done. Yes, it's a bit weird, but it happens very rarely. I don't see what the "warning flag" is being raised in aid of. There's no reason to think the science is worse in a paper with a large author count. If I were interviewing someone with authorship only on high-author-count papers then I'd ask the appropriate questions. Then again, I'd likely ask those questions anyway. I've interviewed first authors (of papers with under 5 authors) who couldn't explain the analyses described in the article. There are warning flags, but the number of authors likely isn't one of them.

Absolutely agree that it's much ado about nothing. AND bad statistics! Using CERN as an example is a lot of nonsense... it's a HUGE project with a HUGE population of PhDs, grad students, undergrads, managers, technicians, and everybody else, all working towards a common goal. And the science developed by those thousands of authors is an enormous collaboration, enabled by... yeah, you guessed it, the World Wide Web. Which was INVENTED at CERN to enable... collaboration.

          WSJ, you look like a bunch of idiots. Stick to talking about stocks and rich people stuff. You suck at science.

        • I think it started with institutes like CERN, but there are more and more large "instruments" that require hundreds, if not thousands of people to get at a few very interesting results. The main reason is that for instruments like that, the boundary between engineer and researcher is very vague, as every part is a unique design and requires quite a bit of R&D to create.

          As a result the papers that do come out of research like that are going to have large lists of authors/contributors.

          Pushing the boundari

      • But the question is actually if it makes sense to have many authors on a paper. If you have 10 or more it should already raise a warning flag.

Root cause analysis: why is the paper being written in the first place?

I would not treat putting thousands of authors on policy-manipulating research papers as beyond reproach; it can serve as obfuscation when there are profits to secure.

        Let's not pretend we've never heard of a lobbyist before, or why they exist.

      • by rgmoore ( 133276 )

        But the question is actually if it makes sense to have many authors on a paper.

        That depends on the nature of the paper. Yes, a large number of authors is suspicious on a paper that represents the amount of work that could be carried out by a few researchers in a reasonable length of time. But there are more and more papers out there that come from and could only practically be produced by large-scale collaborations. For those papers, there is no longer a single dominant contributor, or even a small group

    • Yes, there's a trend going upwards but there are only 1,400 papers with 50 or more authors. In 2009 about 1 million biomedical papers were published [quora.com]. So if we make the unlikely assumption that all the high author number papers are biomedical, that means that a whopping ~0.15% of the papers published each year have more than 50 authors. Not exactly a big deal.

      Not exactly a big deal?

      Guess that depends on just how much of that ~0.15% is used to drive change and affect policy for millions of citizens.

      Don't dismiss what these papers are used for. We're not exactly gathering thousands of minds together to document how to build a lemonade stand.

      • Not exactly a big deal?

        Guess that depends on just how much of that ~0.15% is used to drive change and affect policy for millions of citizens.

        Don't dismiss what these papers are used for. We're not exactly gathering thousands of minds together to document how to build a lemonade stand.

I think you're talking about the impact of big science on society and funding. This is an interesting question, but it's a different one from how many authors are on the papers, which is what the article is about. The article implies that we're entering an era where huge multi-author papers are common. This isn't so, because the phenomenon they are describing accounts for a small fraction of a percent of all published papers. The phenomenon they are describing is not a big deal. The content of the science itsel

  • Considering the classic trend to have lots of people do the work for you while taking all the credit and delaying their graduation from grad school as long as possible, it's probably better this way.

  • by Anonymous Coward

More clickbait. The article in question is built on work done at the ATLAS collaboration and the CMS collaboration, two of the biggest physics experiments around. Unsurprising that there are so many authors (they need to reference everyone who worked on the project).

    Article reference info (since professional media shops are no longer able to do the minimal leg work required):
    Title: "Combined Measurement of the Higgs Boson Mass in pp Collisions at s=7 and 8 TeV with the ATLAS and CMS Experiments"
    Published: P

More clickbait. The article in question is built on work done at the ATLAS collaboration and the CMS collaboration, two of the biggest physics experiments around.

      You're confusing the article with the example. Work done at the ATLAS collaboration was one of the examples given of massively-multiple-author science papers. It's not "the article".

      The interesting part of the article is the graph.

  • Recognition Creep (Score:5, Insightful)

    by Overzeetop ( 214511 ) on Tuesday August 11, 2015 @07:44AM (#50292405) Journal

When everyone has been credited on dozens or hundreds of scientific papers, it dilutes the perceived value of the average researcher's involvement in each paper on which they claim credit. In the competitive worlds of grant applications and academic positions, this means you have to be more worried about getting credit on as many studies as possible than about actually doing meaningful research on a single issue, in order to make it through the initial review/HR screening process.

  • by Antique Geekmeister ( 740220 ) on Tuesday August 11, 2015 @07:53AM (#50292439)

The Human Genome Project, assembling work from thousands of researchers, developers, and technicians worldwide, had hundreds of authors.

http://www.nature.com/nature/j... [nature.com]

    It can make sense in such a large project to list as many of the contributors as possible.

    • Think of it a bit like movie credits. A small indie production probably only has a few people involved. A big blockbuster (like the Human Genome Project) easily has thousands of people making meaningful contributions. The size of the credits should match accordingly.

With academic papers the author list is usually a flat list. Sometimes the first authors are the key ones, but many papers simply have the authors in alphabetical order. Combine that with a writing style that emphasizes what was done over who did it, and it's pretty difficult to figure out who the main drivers of the project were and who were just workers.

Contrast that to movie or videogame credits, where there is a structure that tells you who played (or, in the case of videogames, voiced), who the people drivin

  • by 140Mandak262Jamuna ( 970587 ) on Tuesday August 11, 2015 @07:54AM (#50292443) Journal
It mainly happens in high energy physics. The high energy collider labs are run by a large team of scientists, and everyone in the control room who pushed the buttons wants to be credited as a contributor to the experiment.

Scientists have played pranks with co-author names. The famous Alpha-Beta-Gamma paper [wikipedia.org] comes to mind. Low-temperature physicist Hetherington included his cat [wikipedia.org] as a co-author so that he could use "we" in the paper.

The spoof paper authored by S. Candlestickmaker [harvard.edu], written by a student of S. Chandrasekhar, was very famous. The student later lamented that the spoof paper was his best-known contribution to science, more so than his PhD or his entire research career. It is telling that I remember S. Candlestickmaker but not the student's name.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Also, the Cox-Zucker Machine [wikipedia.org]. When they met in graduate school they immediately decided that they needed to collaborate. Although this was an actual collaboration.

  • by Anonymous Coward

Big reports like the IPCC assessment reports compile hundreds or thousands of papers. In that context, it makes sense to cite thousands of authors. It's also possible that massive efforts requiring collaboration on such a scale, like analyzing LHC data or sequencing a genome, might actually involve hundreds or thousands of people making significant contributions to the work. I don't really see that as being the real issue with citations.

    The real issue is the abuse of the peer review and funding proposal revie

  • Writing papers by the dump truck load. It's become the religion of science. It's all bullshit.
  • Search science papers online, find "I. Abstract" to be a very prolific author with expertise in practically all scientific fields. A true renaissance man (or woman, or whatever).

We are in the age of big science. The papers with enormous author lists are often the ones dealing with terabytes of data - think supercolliders and the like - so indeed there are huge numbers of people working on them. This is not the great conspiracy that "failure machine" samzenpus is trying to portray it as here. Rather, the fact of the matter is that very little science is done by individuals any more; the big questions we seek to answer now require more resources and time than what an individual can p
    • Lol, Terabytes.

      I think you mean Petabytes.
      Terabytes is what we did in the nineties.

      - Radioastronomy (and probably other disciplines).

  • Thank you, "failure machine" samzenpus. I was wondering how long you would make it this week before touting the anti-science agenda. It would certainly be terrible if people on slashdot had an appreciation for how big science works in the 21st century, it is great that they can count on people like you to blindly demonize it instead.
  • by Goldsmith ( 561202 ) on Tuesday August 11, 2015 @09:30AM (#50293187)

    Science today is judged by two metrics: papers published and students graduated.

    It's important to actually understand that statement if you want to understand some of the quirks and problems with scientific culture.

    You do not get credit for projects, advancements, talks, transition to industry, programs, results, etc. The government granting agencies only track papers published and students graduated when comparing different granting offices. Put another way, the government internally sets funding targets for each sub-field based on papers published and students graduated. Thus only papers published and students graduated are meaningful to science (again, not results, but papers).

    Papers and # of PhDs became the currency of science, and are used to judge everything from the readiness of a student to graduate to the differential societal contribution of different scientific fields.

    This has led to a situation where if you want to graduate students in fields like particle physics, you need to include them on the very rare papers that come out. Failing to graduate students would lead to a decrease in funding. For a student to get "credit" for working at CERN or NASA, that student needs to be on a paper. It's as simple as that.

    • The US National Science Foundation does now have a policy of allowing researchers to list other types of products, such as the ones that you mention, on their CVs when applying for grants.
      • That's not what I mean. Internally, how does NSF set its budget? Do the PMs get their money based on "other products" or papers published? If the "selection pressure" on program officers is to stick to the metrics (papers published), then they're going to pass that down to their performers.

        I have been a program officer in other government agencies (not NSF), and I speak from my experience there. Our mission was technology transition, but our personal performance metrics were papers published. We rarely

    • Science today is judged by two metrics: papers published and students graduated.

      It's important to actually understand that statement if you want to understand some of the quirks and problems with scientific culture.

You do not get credit for projects, advancements, talks, transition to industry, programs, results, etc.

      Wrong.

      First, the National Science Foundation only allows you to list ten papers on your biography for grant applications, so whether you published 10 papers or 1000 papers, you still can only list ten on your biographical sketch.

Also, regarding things other than papers: you are required to include in your grant applications a report on the results (including "broader impacts" to society) of all your previously funded research projects. People get big credit when applications of their work are picked up by in

      • Maybe you didn't understand what I was talking about. Your grant proposals and tenure review processes are secondary effects here. The primary driver is how the government sets its budget internally.

        I've been a grant officer. Of course we like seeing practical applications! It's wonderful to see people somehow get past the system to develop something that we can at least pretend is practical. The bottom line is, though, that papers published and students graduated are the hard metrics used to bash weak

        • Maybe you didn't understand what I was talking about. Your grant proposals and tenure review processes are secondary effects here. The primary driver is how the government sets its budget internally.

          Thank you for the clarification. That context wasn't clear to me in your original comment and this makes it much clearer. I thought you were talking about how individual scientists' work is judged, but you were talking about how science is judged on a much larger scale, and in that context you are correct.

  • Go look at the credits for a movie from the 1950s and compare it to what you see for any modern film. Not just the ones dripping with CGI, but simple dramas (i.e. compare "Casablanca" with "The Fault in Our Stars").

  • Am I really the first here to link to this classic from the Journal of Irreproducible Results? http://www.improbable.com/airc... [improbable.com]
To answer the headline, I don't know. But no amount of sheer quantity will ever displace the paper by Alpher, Bethe, and Gamow for the most entertaining authors list. (I swear I heard a version where someone tried to recruit a Delter, too, but a quick Google search isn't turning that part of my theory up for me.)

This need for crediting every breathing thing that remotely touched the article makes me think of movie credits, where the kid fetching the sandwiches also gets a place due to union rules.

    • Ah, then which jobs would you include or not include? Make-up? Costume design? Modelling? Lighting? Sound?

      What would be your criteria for being an important enough contribution?

      • by JigJag ( 2046772 )

        I'd remove them all, even actors.

Then, if I were a movie industry professional (director, producer, award giver, etc.), I would look up the make-up artist who did such a fabulous job, or the lighting engineer who really achieved the goal, using a movie referral tool like IMDb Pro or something like that to locate that person.
And to make everyone happy, instead of the long-winded credit list at the end, just one long (say 30-second) frame with a link or QR code to that IMDb page (or whatev

  • What matters is how many other papers cite your groundbreaking work.

    Not the number of collaborators you have.

5,000 people is more people than you could even know by name. That is probably more scientists than are in your entire field of study. You could not even have had small conversations with all of these people during the course of the research project. That is just too many people for it to ever make sense for most of them to have had an impact on the paper. If I were to credit every single historic scientist who enabled me to finally conduct my research, it probably would not even equal 5K people.
