Science

Nobel Winner Schekman Boycotts Journals For 'Branding Tyranny'

An anonymous reader writes "One of this year's winners of the Nobel Peace prize has declared a boycott on leading academic journals after he accused them of contributing to the 'disfigurement' of science. Randy Schekman, who won the Nobel Prize in physiology or medicine, said he would no longer contribute papers or research to the prestigious journals, Nature, Cell and Science, and called for other scientists to fight the 'tyranny' of the publications." And if you'd rather not listen to the sound of auto-playing ads, you'll find Schekman's manifesto at The Guardian.
  • crossing fingers. (Score:5, Insightful)

    by jythie ( 914043 ) on Tuesday December 10, 2013 @10:13AM (#45650525)
    I suspect most academics and researchers at this point are fed up with the way journals work; I have yet to hear one of them actually praise the current system of publication. I am not sure how it could be restructured, but what is happening today is retarding research and frustrating a lot of good people who would rather just be doing what they are supposed to be doing: teaching and research.
    • Re:crossing fingers. (Score:5, Interesting)

      by hubie ( 108345 ) on Tuesday December 10, 2013 @10:29AM (#45650705)

      I'm not so sure what he is complaining about is a big problem, because not too many places can keep chasing the fad topics. To keep your lab alive, you need to establish some kind of expertise. It is after you've set up a self-sustaining lab that you can afford to repeatedly chase after the hot topic du jour. In other words, you've most likely got your tenure.

      I can't say how familiar I am with the machinations of those particular journals, but I think most of the blame for the things that cause the issues you mention lies with the colleges and universities who put so much emphasis on publication counts and impact factors.

      An interesting aside, to me at least, is that I only recently installed Ghostery and when I went to the article linked in the summary, I was notified of 88 different tracking entities that were blocked. Eighty-eight on one web page!

      • by jythie ( 914043 ) on Tuesday December 10, 2013 @10:45AM (#45650857)
        Though that touches on one of the other major problems, the one I would argue is bigger than the publishing one. Setting up labs with expertise is a nightmare, since you are not allowed to have a 'war chest'. If you have a 6 month grant, a month gap, then a 6 month grant, you lose all your people between the two grants. Unless you are one of the tenured people who is immune to the gaps, working in university research is riskier than corporate, which causes a significant brain drain and leads to inferior research, since keeping experienced people over time is difficult.
        • What field are you talking about? In Schekman's field (basic biomedical research), startup packages tend to carry you for a few years, and the core funding mechanism is a 5 year grant (NIH R01).

          • by jythie ( 914043 )
            Most of the funding I have worked with came from NSF or DARPA. One nasty bit we have discovered over the years is that a multi-year grant is not as multi-year as one would hope: it often disappears after the first spiral, or, even worse, the paperwork is slow and leaves a gap between blocks of funding, and without an active grant in the bank the university doesn't fund the lab.
      • Re:crossing fingers. (Score:4, Interesting)

        by Adam Ricketson ( 2821631 ) on Tuesday December 10, 2013 @12:38PM (#45652239)

        I can't say how familiar I am with the machinations of those particular journals, but I think most of the blame for the things that cause the issues you mention lies with the colleges and universities who put so much emphasis on publication counts and impact factors.

        It's a symbiotic network of publications, promotions, and grant awards -- and those journals are one of the core components of the network. Those journals are not just passive beneficiaries of this system; they actively promote their role in the system (by publicizing their impact factor, for instance). On top of that, these journals have made some major mistakes. I could add more examples to Schekman's list of bad publications. They are not being responsible powerholders; therefore it is urgent that we remove their power.

        I think Schekman's point is to break the link between "high profile" work and those journals, so that universities cannot use publication in those journals as a proxy for work being interesting.

      • by Anonymous Coward

        Part of the point of tenure is to be secure enough to not have to chase after the topic du jour, but to pursue other topics.

    • It's pretty simple. Publish online.

      • by jythie ( 914043 ) on Tuesday December 10, 2013 @10:40AM (#45650817)
        That has its own problems. As much as bloggers try to claim otherwise, publishing online has generally been a rather poor substitute for peer review and generally allows for a lot of really bad science to get wide attention. While journals are not perfect, they do (usually) maintain some minimum bars and filters for the material that goes into them.
        • by garutnivore ( 970623 ) on Tuesday December 10, 2013 @12:32PM (#45652161)

          As much as bloggers try to claim otherwise, publishing online has generally been a rather poor substitute for peer review and generally allows for a lot of really bad science to get wide attention.

          What random, unidentified bloggers think about publishing has no bearing whatsoever on what scientists think about scientific publishing. Publishing online does not necessitate that peer review be dispensed with. I've not ever met an academic, be it in the sciences or elsewhere, who ever argued that print peer-reviewed publications should be replaced by online publications that are not peer reviewed.

          You're attacking a strawman.

          • I've not ever met an academic, be it in the sciences or elsewhere, who ever argued that print peer-reviewed publications should be replaced by online publications that are not peer reviewed.

            Strictly speaking, this is correct, but there certainly are serious scientists arguing for peer review taking place after publication, not before - under this scheme, we would simply post our raw manuscripts online (e.g. on arXiv or a similar server). But the ultimate goal is to have more peer review, not less, with the part

            • by lgw ( 121541 )

              That sort of system could work quite well, but you'd need some commonly accepted system whereby you'd know when a paper had been peer reviewed by some reasonable number of reviewers. arXiv or similar could certainly provide that service easily enough, as well as some way to know that the reviewers were in fact reasonable choices themselves, but it all runs the risk of becoming dependent on a new central authority.

        • Re:crossing fingers. (Score:5, Informative)

          by bitt3n ( 941736 ) on Tuesday December 10, 2013 @12:40PM (#45652279)

          While journals are not perfect, they do (usually) maintain some minimum bars and filters for the material that goes into them.

          "Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper." [retractionwatch.com]

    • by cranky_chemist ( 1592441 ) on Tuesday December 10, 2013 @12:02PM (#45651715)

      The problem is that Schekman's argument is off base.

      From the article (yes, I read it):

      "These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships."

      His argument appears to revolve around these three high-impact journals serving as the gatekeepers of "good" science. But his ire is misdirected. If funding and appointment panels are giving undue weight to publications in these journals, then THE PROBLEM LIES WITH THE FUNDING AND APPOINTMENT PANELS, not the journals.

      His argument is tantamount to "Scientists shouldn't publish in these journals because they're too highly regarded."

      • His argument is tantamount to "Scientists shouldn't publish in these journals because they're too highly regarded."

        Perhaps those trees are overwatered, and some others could use a little love.

    • Re:crossing fingers. (Score:4, Interesting)

      by interkin3tic ( 1469267 ) on Tuesday December 10, 2013 @12:15PM (#45651881)
      Because it's not set up for the benefit of the researchers or the research. It's set up to benefit the people handing out the grants or hiring researchers, to determine who is good and who is bad: number of publications in the top journals. It's a terrible metric, but the other ones also have problems.

      You can't really determine who the best researcher is by assessing the quality of the research itself. If you have 50 grant applications and 10 grants to award, how do you decide who to give them to? Are you going to read through the entire body of work of all 50 applicants in the one weekend you're given to decide who gets funded?

      An adjusted citation index would probably be the best option, but that gets back to the top journals, which are more likely to be read and cited than lower-impact journals, so you arrive back at comparing where one has published. Perhaps citation indexes should be adjusted to factor out the journal brand name effect (a toy sketch at the end of this comment shows the idea), but that won't ever happen, since it would be penalizing the current top researchers who hold the reins. And it's probably a stupid idea anyway.

      Cronyism is the preferred alternative to looking at where one has published, but obviously that has its problems and is worse than simply looking at journal brand name. Although whether you get published in a great journal often depends on cronyism as well.

      So all the realistic options are shitty.
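
      To make that brand-name adjustment concrete anyway, here's a toy sketch in Python; the journal names, averages, and citation counts are all invented for illustration:

        journal_avg = {"BigName": 40.0, "SolidSpecialist": 8.0}  # invented per-journal mean citations

        papers = [("BigName", 45), ("SolidSpecialist", 30)]      # (journal, citations), also invented

        for journal, cites in papers:
            # Dividing by the journal's mean strips out the brand's visibility boost:
            # a paper merely riding a famous journal's readership scores close to 1.0.
            print(journal, round(cites / journal_avg[journal], 2))

        # Output: BigName 1.12, SolidSpecialist 3.75 -- the paper in the "lesser"
        # journal outperforms its venue by far more.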
  • by TWX ( 665546 ) on Tuesday December 10, 2013 @10:14AM (#45650541)
    This is just a symptom of college and university boards wanting to attract attention to their institutions, which pushes tenure-track professors and researchers into 'flashier' research to help their case for tenure, which then drives what gets submitted to journals.

    Either make tenure easier to get so that professors are less likely to pursue fad or headline-grabbing science in order to achieve it, or encourage more grants to scientists that aren't affiliated with particular schools, so that they don't have to dance for their boards...

    Unfortunately most major companies aren't conducting basic research like IBM, Xerox, Bell, and other big organizations did fifty+ years ago, so getting grants from big entities is harder than it once was.
    • by femtobyte ( 710429 ) on Tuesday December 10, 2013 @10:30AM (#45650719)

      According to Schekman's argument, journals --- specifically the highest-impact-factor "luxury" journals --- do play a causal rather than merely symptomatic role in the process. Such journals court papers that are "flashy," which will get lots of citations and attention (thus lots of journal subscriptions), possibly because they are wrong and focused more on attention-getting controversial claims than scientific rigor. This provides feedback on the other side of the tenure-seeking "publish or perish" culture to shape what sort of articles the tenure-seeking professors are pressured to churn out. If a scientist wants to establish their reputation by publishing ground-breaking, exciting discoveries, there's nothing a priori wrong with that; the failure comes when joined with impact-factor-seeking journals applying distorted lower standards for scientific rigor for "attention-getting" work (while rejecting solid but "boring" research papers).

  • by hubie ( 108345 ) on Tuesday December 10, 2013 @10:15AM (#45650549)
    So many people call every Nobel prize the Peace Prize.
    • Re: (Score:3, Funny)

      by ColdWetDog ( 752185 )

      The moon in your eye,
      like a big Peace-a Prize.

      No, I guess not.

    • by mcmonkey ( 96054 )

      It's time to start a boycott of Slashdot. These summaries are getting too bad to ignore. Weeklong hours? Peace prize in physiology or medicine?

      I might have to resosrt to doing work to pass the day.

      • You could just work on installing a spell checker. That might keep you off the streets for a while....

        Don't do anything you would regret for the rest of your life.

    • by Anonymous Coward

      Maybe it's part of the branding.

      Francis Crick once remarked that people who met him for the first time sometimes said, "Oh, I thought your name was Watson-Crick".

  • Contradiction (Score:5, Informative)

    by kruach aum ( 1934852 ) on Tuesday December 10, 2013 @10:15AM (#45650559)

    I don't know why I need to point this out, but the Nobel Peace Prize and the Nobel Prize for Physiology or Medicine are not the same thing. Schekman has only won the latter, not the former.

    • "One of this year's winners of the Nobel Peace prize"
      Taking into account that only about 1/4 of peace prices are shared and that this year's was not, that sentence makes the post wrong from the first one or two words.

      I just wanted to point out that this might be a record. A feat deserving 2013's "Fastest Error Award", also called Nobel Peace Price by some.

      • Exactly how much is the "Nobel Peace Price"? I'd like to mod you Funny, but I'm out of points...
  • by umafuckit ( 2980809 ) on Tuesday December 10, 2013 @10:32AM (#45650727)
    Obviously it's easy for someone in his position to take this stance (it would be suicide for most early career scientists), but it's still laudable. I've seen instances of this go the other way: Nobel appears and the person turns it into a licence to publish craptacular papers in top tier journals. When this happens it's bad on every level: harms the field, harms the first author, harms the journal.
    • It's not an issue of early-career vs established scientists -- it's an issue of pedigreed vs. self-made scientists.

      Schekman is saying that he won't support a student's desire to submit a paper to SNC. His students will still have the benefit of being associated with a Nobel Prize winner. I see this as a sort of unilateral disarmament from someone whose influence is assured. Schekman and his people have already been noticed, so he's letting everyone else have a chance at getting noticed too (by publishing in

  • I have sympathy for Schekman's argument, but it's the same story throughout publishing, not just for science. Publishers build their reputation on brilliant authors, but I don't know a single publisher that only publishes brilliant books/journals. Where his argument wobbles for me is when he mentions eLife as being free to view but sponsored by industry. How will he and his sponsors measure the success of that venture? The cynic in me suggests that as well as readership figures and brilliance of content, reputat
  • by bradley13 ( 1118935 ) on Tuesday December 10, 2013 @10:40AM (#45650815) Homepage

    Not only are many (most?) academics fed up with the big journals, we are also generally fed up with publication pressure. Our school is just now going through a review. The accreditation people want publication counts. It doesn't matter what you wrote about, or whether you had anything useful to say; it's just numbers.

    Did anyone read about the University of Edinburgh physicist? He just won the Nobel Prize, and has published a total of 10 papers in his entire career. As he said, today he wouldn't even get a job.

    I understand that school administrations want some way to measure faculty performance. But just as student reviews are a dumb way to assess teaching quality (because demanding teachers may be rated as poorly as incompetent teachers), number of publications is a dumb way to assess research quality.

    • by umafuckit ( 2980809 ) on Tuesday December 10, 2013 @10:49AM (#45650897)

      Did anyone read about the University of Edinburgh physicist? He just won the Nobel Prize, and has published a total of 10 papers in his entire career. As he said, today he wouldn't even get a job.

      You mean Peter Higgs?

    • Looking at things like the impact factor of the journal or the number of times the article is cited requires reading/counting* skills most deans don't seem to have--at least based on how most of them seem unable to read contracts or faculty handbooks. (*It seems skills learned while counting beans do not transfer well.)
  • by sinij ( 911942 ) on Tuesday December 10, 2013 @10:49AM (#45650899)
    Journals are only partially to blame for the dysfunction of scientific publishing. By far the most harmful factor is the pressure to publish papers regardless of quality, sometimes even fraudulently.

    "Publish or perish" is a unique pressure on mid-career academics to churn out publications. It is an administrative metric that, when applied, can lead to career-ending outcomes for academics who are deemed "unproductive." This highly arbitrary metric looks at the number of papers published and sometimes the journal impact factor, but it fails to measure scientific contribution to the field. Application of this metric is linked to all kinds of scientific misconduct - from correlation fishing expeditions, to questionable practices in formulating research questions, to outright 'data cooking' and fraud.
    • by umafuckit ( 2980809 ) on Tuesday December 10, 2013 @11:19AM (#45651209)
      Saying "Publish or perish must go" is great, we all like the sound of that. But then what do you replace it with? Any metric that you come up with will be gamed if the people being measured know how you're measuring them. It's easy to point the finger at journals, funding committees, and hiring committees and say that the publish or perish mentality is their fault. But it's also the fault of researchers who choose to play the game. Researchers choose to break down papers into many smaller ones in order to increase publication count. Researchers choose to waste everyone's time by gambling and submitting to progressively lower tier journals until the paper sticks, rather than being honest with themselves and pitching the manuscript correctly from the start. Researchers choose to publish the shit stuff they barely believe anyway, wherever it'll get in, rather than consign it to the scrap heap and start over.
      • by MarkvW ( 1037596 )

        Properly evaluating other people's work is very hard. The comparative evaluation of that work with other people's work is even harder. But in an optimal system, such evaluation is essential. This is one of the fundamental problems of leadership--and universities suck at it.

        Publication isn't even the most important category of work output--teaching quality is. But teaching gets shunted aside, because nobody is really taking the time to carefully evaluate the quality of the teaching. Prospective students

      • by Atmchicago ( 555403 ) on Tuesday December 10, 2013 @12:08PM (#45651807)
        The way it should be is that the metrics for performance are the aggregate quality and impact of the work, not the number of publications or the impact factors of the journals they go into. Why doesn't this work? Because administrators generally don't understand the science that they are "administering." A possible solution would be to make sure that the people running the show behind the scenes are knowledgeable and competent, but we all know that's never going to happen...
        • On the ground, hiring decisions are made by an academic department's faculty (with oversight from deans, etc. who are themselves researchers)

        • The way it should be is that the metrics for performance are the aggregate quality and impact of the work, not the number of publications or the impact factors of the journals they go into. Why doesn't this work? Because administrators generally don't understand the science that they are "administering."

          That isn't why it doesn't work. It doesn't work because there's no particularly good objective metric for "quality" or "impact." For "impact" you have number of citations and where the work was published. If you want to get fancy, you can make a metric that takes into account what those citations were. Quality is pretty much impossible to judge objectively, particularly if you want to compare across fields. It doesn't matter how competent or knowledgeable your assessors are--that's not the limiting factor.

        • by Xylantiel ( 177496 ) on Tuesday December 10, 2013 @02:15PM (#45653371)
        Have you ever heard of the H-index [wikipedia.org]? It combines rate and impact (measured by number of citations) in a way that also de-emphasizes one-off flukes. I actually tend to compare H-index per year, which is a useful measure of contribution rate. But, that said, there are massive variations between even sub-fields in the same discipline due to different publishing and citation cultures.
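
        For anyone who hasn't seen the definition, it's simple enough to compute in a few lines (an illustrative Python sketch; the citation counts are made up):

          def h_index(citations):
              # h = the largest h such that h papers have at least h citations each
              ranked = sorted(citations, reverse=True)
              h = 0
              for rank, cites in enumerate(ranked, start=1):
                  if cites >= rank:
                      h = rank
                  else:
                      break
              return h

          print(h_index([42, 18, 7, 6, 6, 3, 1, 0]))  # -> 5: five papers with >= 5 citations

        Note how the single 42-citation paper barely moves the result, which is exactly the de-emphasis of one-off flukes described above.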
      • by Prune ( 557140 ) on Tuesday December 10, 2013 @12:44PM (#45652315)
        I don't think you can really blame academics. It seems to be, rather, that universities have succumbed to the same general trend that made MBAs and other business/management types infuse institutions beyond just the corporate world with a management style and optimization strategies that look only at narrowly defined metrics (usually revolving around financials, PR, etc.). Academic institutions seem to be run more like businesses these days than places of learning and research, and this is reflected in their employment distribution: in just one example, "employment of administrators jumped 60 percent from 1993 to 2009, 10 times the growth rate for tenured faculty" (source: http://www.businessweek.com/articles/2012-11-21/the-troubling-dean-to-professor-ratio [businessweek.com] ). I remember reading about this trend of falling faculty-to-administrator ratio quite a few years ago, along with the claim that it's been going on since at least the 1970s; it really struck home, however, when I noticed it affecting the very schools I had attended. With the falling powers of faculty associations (like unions in general), I doubt that researchers and instructors could have stemmed this.
        • I blame everyone, including academics. Academics are probably at the bottom of the blame list, but they're still on the blame list. Perhaps there are too many administrators now; but if so, where did they come from and why isn't something being done about it? If academics don't like the way they're being assessed then they need to get together and come up with a viable alternative instead of continuing to play the game.
        • Yeah, but if scientists don't stem this, then who's going to? Science is as scientists do, to a large degree; they're the talent without which there is no science. They're MORE than mere employees, and their commitment has to be something greater than "keeping your job". At some point, they need to raise collective consciousness and take responsibility for the forces that are affecting their lives and the process of doing science.
      • by epine ( 68316 )

        Saying "Publish or perish must go" is great, we all like the sound of that. But then what do you replace it with?

        Duh, it's not so hard. The scientists could actually bother to replicate more than a tiny sliver of all results published, and citations of papers not replicated could be treated as damning with faint praise.

        One thing peer review cannot catch is chance aberration in the experimental data (structural aberration is a different matter).

        Without actually replicating the significant results, it all d

        • Duh, it's not so hard. The scientists could actually bother to replicate more than a tiny sliver of all results published, and citations of papers not replicated could be treated as damning with faint praise.

          Firstly, you're addressing a different problem--repeatability in science--not how to evaluate the performance of an individual. Saying, for example, that a researcher can't get tenure until their results are replicated is too high a bar. Furthermore, what credit do the people doing the direct replication experiments get? Not a whole lot. Neither can you wait for the field as a whole to validate a researcher's findings; that can take decades. Science often works on a longer time scale than one person's

      • The answer is the same as it always is. Don't measure what's easy; measure for the results you want to see. Do you want the janitor to check the restaurant bathroom every 15 minutes, or do you want the bathroom to be spotless after the janitor leaves every 15 minutes? One is easy: just have him sign a sheet in the bathroom. The other requires actual work, spot checks after the janitor leaves the bathroom for instance. Similarly, a factory worker may be easy to assess. Throughput without errors or inj

  • by Anonymous Coward

    From TFA:

    "I have now committed my lab to avoiding luxury journals"

    This is bad news for his students and postdocs who wish to get a job in the future. Publishing in a 'luxury' journal is almost a requirement for getting a permanent position in scientific research. However much I may agree with his point of view, he also has a responsibility to advance the careers of the promising students and postdocs in his group.

  • by oldhack ( 1037484 ) on Tuesday December 10, 2013 @11:21AM (#45651241)
    Nobel editorial prize.
    • by Anonymous Coward

      Nobel editorial prize.

      You mean the Nobel Peace Editorial Prize?

  • by Spinalcold ( 955025 ) on Tuesday December 10, 2013 @12:19PM (#45651935)
    Pretty much every physics paper is pre-submitted to arXiv, and after it is published in a journal the final copy is resubmitted. The arXiv archive is there for peer review too, so papers go through two rounds of peer review. This has been the case for a decade now; I don't understand why this hasn't been taken up by other fields by now.
    • No. ArXiv papers are generally not resubmitted after being peer reviewed (that would be against the rules of most journals), and no peer review is done on ArXiv.
      • I don't have statistics on this, but resubmitting after peer review is the standard way of doing things in my field (cosmology). That doesn't mean submitting the version that appears in the actual journal, with its formatting etc, but the version that passed the peer review, with all the reviewer's comments addressed.

        As supporting evidence, here is the license of one of the most heavily used pieces of software in my field, camb:

        You are licensed to use this software free of charge on condition that:

        Any publication using results of the code must be submitted to arXiv at the same time as, or before, submitting to a journal. arXiv must be updated with a version equivalent to that accepted by the journal on journal acceptance.
        If you identify any bugs you report them as soon as confirmed

        Journals would not be in a position to try to fight this - nobody reads the jour

    • I don't understand why this hasn't been taken up by other fields by now.

      Speaking as a member of another field: a mixture of disorganisation and Stockholm syndrome.

  • by Jeremy Erwin ( 2054 ) on Tuesday December 10, 2013 @12:34PM (#45652189) Journal

    Shouldn't that be "Peace Prize in Economic Sciences in Memory of Alfred Nobel for Medicine and Physiology"

  • In my field (electrochemistry), the last 5-10 years have seen a great many researchers move away from the "traditional" journals (Journal of the Electrochemical Society, Solid State Letters, Electrochimica Acta) to the flashier, more general publications (ACS and RSC publications, mostly). These journals are more widely read, so their impact factor is much higher. But most of their content is irrelevant, and since the public reading them is not a real expert in my field, what is important is t

  • by umafuckit ( 2980809 ) on Tuesday December 10, 2013 @01:43PM (#45653051)
    Here's a way of doing it differently:

    Articles are submitted anonymously to a central site. Perhaps rough statistics on the author's past work can be included, but nothing more. Each paper sits there for a fixed time period, maybe 3 or 4 weeks. Editors scour the site and bid for which papers they want to put through peer review at their journal. The community can assign ratings (1 to 10 stars in 2 or 3 different categories) to papers to help guide editors.

    At the end of the 3 or 4 weeks, the authors choose which of the journals that bid should get their submission. The journal sends the paper to reviewers. Reviewers know which journal sent them the paper but obviously don't know the author names. Reviewers aren't allowed to reject a paper for not being novel (the journal already made that value judgement); they can only make objective scientific critiques.

    If the paper fails to get in, the authors can send it and (optionally) the reviews to the next journal on the list. That journal is not allowed to ask for new reviewers if the authors have already supplied reviews and addressed the criticisms; adding too many reviewers invariably results in unrealistic demands on authors. The final anonymous reviews are made available as supplemental info following publication; this may decrease the incidence of shitty, biased reviews.

    So this is somewhat like arXiv, but papers not accepted get pulled down (they can be resubmitted) and it's intended to be a gateway to publication.
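
    To pin down the moving parts, here's a minimal sketch of that flow in Python; every class, function, and rule in it is invented for illustration, not an existing system:

      from dataclasses import dataclass, field

      @dataclass
      class Submission:
          paper_id: str                                # authors stay anonymous throughout
          bids: list = field(default_factory=list)     # journals that bid during the window
          reviews: list = field(default_factory=list)  # reviews travel with the paper

      def close_bidding(sub, bidding_journals):
          # After the fixed 3-4 week window, record which journals want the paper.
          sub.bids = list(bidding_journals)

      def route(sub, author_preference):
          # Authors pick their favorite among the journals that actually bid;
          # reviewers at that journal critique validity only, never novelty.
          for journal in author_preference:
              if journal in sub.bids:
                  return journal
          return None  # no acceptable bids: pull the paper down and resubmit later

      # e.g. close_bidding(sub, ["J. Foo", "J. Bar"]) followed by
      # route(sub, ["J. Bar", "J. Foo"]) returns "J. Bar".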

  • by Goldsmith ( 561202 ) on Tuesday December 10, 2013 @01:57PM (#45653179)

    Who are the editors at these journals? They're largely former researchers from popular academic research groups.

    Who are the government program managers looking at journal statistics to judge research quality? They're largely former researchers from popular academic groups.

    Who are the university administrators creating the publish or perish environment? They're largely former researchers from popular academic groups.

    These relationships are the defining characteristic of modern scientific research. Despite the heartache and frustration the system causes, it also produces a huge amount of value for the rest of us.

    Over the last 30 years, the commercial labs, defense contractors and government facilities have all become subordinate to university R&D. This has combined the metrics university research has traditionally used with the competition of the private sector. If we want to change things, we need to change the basic structure of how we do research again.

    We didn't like using private funding as a success metric. Now we don't like using citations as a success metric. Ok, what else can we use?
