Study Attempts To Predict Scientists' Career Success

First time accepted submitter nerdyalien writes "In the academic world, it's publish or perish; getting papers accepted by the right journals can make or break a researcher's career. But beyond a cushy tenured position, it's difficult to measure success. In 2005, physicist Jorge Hurst suggested the h-index, a quantitative way to measure the success of scientists via their publication record. This score takes into account both the number and the quality of papers a researcher has published, with quality measured as the number of times each paper has been cited in peer-reviewed journals. H-indices are commonly considered in tenure decisions, making this measure an important one, especially for scientists early in their careers. However, this index only measures the success a researcher has achieved so far; it doesn't predict their future career trajectory. Some scientists stall out after a few big papers; others become breakthrough stars after a slow start. So how do we estimate what a scientist's career will look like several years down the road? A recent article in Nature suggests that we can predict scientific success, but that we need to take into account several attributes of the researcher (such as the breadth of their research)."
  • Teaching? (Score:5, Insightful)

    by theNAM666 ( 179776 ) on Monday September 17, 2012 @02:18PM (#41365251)

    Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science, doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!

    • by Anonymous Coward
      In my country, lecture attendance is not obligatory. Even when lectures are offered, often the majority of students never come in until the day of the exam. Making a talented scholar spend his precious little time in front of a classroom, and making the students come in every morning when they could otherwise have decided their own schedule to study, is a waste of everyone's time in so many fields. There's really little that an academic can add to the excellent textbooks already published, but there's a lot
      • Re:Teaching? (Score:5, Insightful)

        by Anonymous Coward on Monday September 17, 2012 @02:35PM (#41365429)

        There's really little that an academic can add to the excellent textbooks already published...

        You have obviously never:

        a) Had to learn from papers, rather than textbooks
        b) Had a GOOD lecture

        If you need an up-to-date view of a field, a "lecture-ised" literature review from someone who knows what they are talking about can save you literal weeks of sifting through papers. All the best lecture courses don't teach textbooks but go through the primary literature, and that takes a good academic to do.

        • Re:Teaching? (Score:5, Interesting)

          by rmstar ( 114746 ) on Monday September 17, 2012 @02:55PM (#41365657)

          b) Had a GOOD lecture

          Good lectures make lazy students.

          A student must feel truly abandoned, left to his own devices in an unjust game. Only students like that go to libraries, ask around, and make an effort to actually acquire skills by themselves. Only those students have a long-term chance of achieving anything.

          The products of "good lectures"? Like fat and complacent castrated cats that never learned to fend for themselves. Useless.

          I do teach at a university. I give good lectures, and produce lots of useless, well-fed, lazy fat cats. They rate me as a good prof and everybody is happy. I do what I get paid for - but I know better.

          • Re:Teaching? (Score:5, Insightful)

            by The Dancing Panda ( 1321121 ) on Monday September 17, 2012 @04:05PM (#41366571)
            I'm just going to put this out there: you're wrong. A good lecture should produce students who want to learn more about the subject on their own. Not because they have to in order to pass your class, but because they want to, because you made the subject interesting. If you're not doing that, but rather just telling the kids all they need to know to pass your test, it's not a good lecture, it's a good study session.
            • No, sir/mme, you are dead WRONG!

              If the student takes a sincere interest in the subject matter, and came from a background where he/she knows that working hard will help avoid:
              1) A low paying career
              2) A meaningless job
              3) A lifetime of misery

              Then that student either forces himself or herself to show a committed interest in the field, or finds one where he or she can naturally develop such an interest.

              You seem to be of the impression that lectures should instill a DESIRE in students. My bac
          • by sjames ( 1099 )

            What good then is the University? A good public library is much cheaper.

        • Yet a LOT of textbooks just go over the same things over and over and over again. Sadly, a large number of books does not necessarily mean an equally large diversity.

          It is very hard nowadays to write a good textbook, mostly because the classical subjects have been more or less covered by excellent books (those that survived the all-judging test of time), and because the new subjects are really, really hard. For my dissertation I had to go through hundreds of papers, which I tried to group and summarize in the early

    • Why not? Clearly no one involved (students, teachers, universities) values good teaching skills as highly as research grants and name-brand recognition. If you want good teaching, look at Khan Academy or a smaller school.

      Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.
      • Re: (Score:2, Insightful)

        by CanHasDIY ( 1672858 )

        Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.

        Actual knowledge is not the reason to go to school - that piece of paper you get at the end, which confirms you possess said knowledge (or rather, confirms you paid for said piece of paper), which is essential to getting a job in about 90% of the market, is.

        There are a lot of us 'self-taught' scientists, engineers, etc., who get stuck with entry-level wages because we lack the aforementioned written credentials, actual skills and know-how notwithstanding.

      • Well... my first-semester Physics experience, which was quite fortunate, was 10 students for 6 hours of class a week, plus 6-10 hours of lab, in Morley's old basement facility, taught by a theoretician and an experimentalist, with a dedicated TA and an undergrad assistant.

        Only 3 of us became professional physicists; one of us works in theatre. But each of us derived E=mc^2 from experiment on our own-- that's great teaching, which you can't do on your own. We learned how to think; we learned tha

    • Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science, doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!

      Skill at teaching and skill at researching may not be correlated. And if not, someone who is very good at research should spend their time on discovery, and let someone else do the teaching.

    • Agreed with all of the above, though I've known a few people who were great at balancing both. My point was that 'success' should not be measured solely in terms of the research itself, but in activities that broadly contribute to the advancement of science/knowledge.

  • by RichMan ( 8097 ) on Monday September 17, 2012 @02:20PM (#41365279)

    I am interested in how anyone would predict the successful contributions of people who have been hiding in the patent office for several years, being denied promotions for their lack of credentials.

    Exceptions are exceptionally hard to predict.

    • by Sir_Sri ( 199544 )

      Well, his problem was a lack of credentials as a patent officer. I'm getting a PhD in computer science, in a specific branch of it (AI/Games). If I needed money and got a job working at IBM on computer languages, I'd be years behind my colleagues going straight into it from PhDs in languages; I'd even be behind some undergrads, because I've done fuck all with the theory of languages in 5 years.

      The other thing to keep in mind with this is that 'success' sometimes means 'can recognize proj

    • by radtea ( 464814 )

      I am interested in how anyone would predict the successful contributions of people who have been hiding in the patent office for several years, being denied promotions for their lack of credentials.

      I believe the general statement is "Prediction is hard, especially with regard to the future".

      As to that famous patent clerk, he couldn't get a job not because of lack of credentials (although he didn't complete his doctoral degree until 1905) but at least in part because he belonged to some religious or ethnic classification that was moderately unpopular at the time.

      Finally, based on this metric, I must be enormously successful in academia, because I've published (in good journals) on everything from pure

  • by Githaron ( 2462596 ) on Monday September 17, 2012 @02:21PM (#41365295)
    Even if they start successfully predicting individuals' careers, wouldn't the system eventually break down, since professors would probably change their behaviour based on the results of the prediction?
    • From TFA:

      If promotion, hiring or funding were largely based on indices (h-index, the model used here or any other measure), then some scientists would adapt their behaviour to maximize their chances of success. Models such as ours that take into account several dimensions of scientific careers should be more difficult for researchers to game than those that focus on a single measure.

    • by drooling-dog ( 189103 ) on Monday September 17, 2012 @02:55PM (#41365659)

      It's worse than that. If such an index were used widely in hiring decisions, then its success would be a self-fulfilling prophecy. It would be guaranteed to work amazingly well, because only scientists scoring highly on it would be allowed to succeed. And if you don't secure a high rating for yourself by the age of 28 or so, then you can just forget it and move on to something else.

      Of course, the world pretty much works that way already, without reducing hiring criteria to a single number. The evil is that HR people will use it to minimize risk and simplify decision making, and so every employer will in effect be using the same hiring criteria. There might as well be a hiring monopoly to ensure that no "square pegs" get through all of the identical round holes.

    • Even if they start successfully predicting individuals' careers

      That's a very big 'if'. For example, in experimental particle physics we publish in large collaborations. My h-index score is better than Dirac's, but Dirac was a far greater scientist than I! (and that is just the theory/experiment difference in the same field!) Similarly, I have papers in a large variety of journals, but so do the majority of us in the field. In fact, to accurately assess experimental particle physicists you need to rely more on what they do inside their collaborations than the publishing r

  • "Publish or Perish" is a great way to put it. In the current world one has to to go above and beyond to reach people through various social media and internet channels to get their information out. People now find the information that they want to read instead of just trusting the media to find it for them. Well said :)
  • by strangeattraction ( 1058568 ) on Monday September 17, 2012 @02:26PM (#41365337)
    Yes, if you had the top thesis advisor, went to the best schools, and worked in a lab with good funding, you do well. What a surprise! This would probably ignore patent clerks who discover Relativity, however. I recall one paper that claimed to be able to predict your whereabouts from some kind of cell-phone info. I can predict it without any data: 90% of the population spends 90% of their time within 1/4 mile of their place of residence or employment/school etc. Wow, that was hard. Can I get a grant for that?
    • Sure you can. If you read the paper (and any other papers relevant to this) and tell us why you believe you can do better, I am sure you can get a grant for it. Believe it or not, this is how the scientific community works. If the author of the paper you are quoting made sure nobody else has tried this before, and he publishes his results, he has pretty much earned his grant. His results (presented in a conference or somewhere) will inspire other researchers to do better. Someone else like you will do better and publ

  • The summary is pretty inaccurate. The h-index was proposed by Jorge Hirsch, not Jorge Hurst. Rather than give a vague description, why not simply provide an exact definition? The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more papers. This is easier to understand if you look at the picture in the Wikipedia entry for h-index [wikipedia.org].
    • The summary is pretty inaccurate. The h-index was proposed by Jorge Hirsch, not Jorge Hurst. Rather than give a vague description, why not simply provide an exact definition? The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations]. This is easier to understand if you look at the picture in the Wikipedia entry for h-index [wikipedia.org].

      Made a minor clarification to your definition [in brackets].

      IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts you into spamming your papers with self-cites just to drive your index up.

      If anyone is interested, if you can find an author's profile on Google Scholar it will show their h-index, plus a modified h-index that only counts citations in the past five years. It will al

      • IMO the definition should be modified to exclude self-citations.

        Also it doesn't consider the quality of the venue you publish in. There are journals out there that will publish *anything*.

      • by RGuns ( 904742 )

        The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations].

        Made a minor clarification to your definition [in brackets].

        Oops, thanks for the correction!

        IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts you into spamming your papers with self-cites just to drive your index up.

        Good point. IMO excluding self-citations is good practice for pretty much all citation-based indicators.

      • IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts you into spamming your papers with self-cites just to drive your index up.

        That wouldn't work. Where do you draw the line? Do you not count citations from papers with the same first-author? If you do that then savvy scientists will rotate authorship on papers from their lab. Do you make it so that no citations count when there are any common authors between the citer and citee paper? That's even more unworkable considering how much scientists move around and collaborate across institutions. The only smart thing for a scientist to do then would be to strategically omit authors off
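
        For concreteness, here is a minimal sketch (plain Python; the data layout and names are made up for illustration, not from TFA or any real citation database) of the strictest rule above: a citation counts only when the citing and cited papers share no authors.

            def independent_citations(cited_authors, citing_author_sets):
                # Count citing papers that share no author with the cited paper.
                cited = set(cited_authors)
                return sum(1 for citers in citing_author_sets
                           if not cited & set(citers))

            # Hypothetical example: one shared author disqualifies the citation.
            print(independent_citations(
                ["A. Smith", "B. Jones"],
                [["C. Wu"], ["B. Jones", "D. Lee"]]))  # -> 1

        Under this rule, a lab that rotates first-authorship still loses every internal citation, which is exactly why such filters become unworkable once collaborations span institutions.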

    • The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more papers. [nb. excluding self-citations].

      And that definition shows that the results of this paper are in fact so trivial as to be meaningless.

      h-index is cumulative. Their results were "Five factors contributed most to a researcher’s future h-index: their total number of published articles to date, the number of years since their first article was published, the total number of distinct journals in which they had published, the number of articles they had published in “prestigious” journals (such as Science and Nature), and their
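
      For illustration only, the kind of multi-factor linear fit the article describes could be sketched as below. The feature order follows the quote above (which is cut off before its final factor, so that one is omitted here); the weights and intercept are made up, not the published coefficients.

          import numpy as np

          # Features: [articles to date, years since first article,
          #            distinct journals, articles in prestigious journals]
          w = np.array([0.3, -0.1, 0.3, 0.5])  # hypothetical weights
          b = 1.0                              # hypothetical intercept

          def predict_future_h(x):
              # Linear score of the general form the article describes.
              return b + float(w @ np.asarray(x, dtype=float))

          print(predict_future_h([40, 10, 12, 3]))  # hypothetical researcher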

    • That seems like a pretty useless measure.

      Researcher 1: 3 papers each cited 700 times, H=3

      Researcher 2: 10 papers each cited 10 times, H=10

      I'd still be picking researcher number 1.
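
      A minimal sketch (plain Python, nothing here is from TFA) of the definition quoted upthread, which reproduces both of those numbers:

          def h_index(citations):
              # Largest h such that at least h papers have h or more citations each.
              h = 0
              for i, c in enumerate(sorted(citations, reverse=True), start=1):
                  if c >= i:
                      h = i
                  else:
                      break
              return h

          print(h_index([700, 700, 700]))  # Researcher 1 -> 3
          print(h_index([10] * 10))        # Researcher 2 -> 10
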
  • by Anonymous Coward

    This reminds me of some research about artists which found that you could divide the most 'successful' artists into two rough categories: those who made a big splash right away and those whose classic work did not emerge until much later.

    http://www.nber.org/papers/w8368

  • This reminds me of a relevant comic - where once this system is worked out, scientists will be trying to game the system:

    http://www.smbc-comics.com/index.php?db=comics&id=1624#comic

  • by docmordin ( 2654319 ) on Monday September 17, 2012 @03:02PM (#41365743)

    It should be noted that the usefulness of h-indices varies from field to field. For example, in various branches of pure mathematics, a heavily-referenced paper is one that, maybe, garners 25 to 100 citations. In applied mathematics and certain subsets of statistics, the threshold would be an order of magnitude larger.

    Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.

    To elaborate, at least from my own experiences, in certain portions of applied mathematics that bleed over into computer vision, machine learning, and pattern recognition, I've seen papers that are relatively mathematically prosaic, but possibly easy to understand or where the code is made available, be heralded and heavily cited for a period. In contrast, I've come across papers that provide a much more sound, but complicated, framework along with better results, possibly after the topic is no longer in vogue, and go unnoticed or scarcely acknowledged.

    In a different vein, there are times when a problem is so niche, but nevertheless important to someone or some funding agency, that there would be little reason for anyone else to cite it.

    Touching on an almost completely opposite factor, there are times when the generality of the papers, coupled with the subject area, artificially inflates certain scores. For instance, if a researcher spends his or her career developing general tools, e.g., in the context of computer vision, things like optical flow estimators, object recognition schemes, etc., those papers will likely end up more heavily cited than those dealing with a specific, non-niche problem, e.g., car detection in traffic videos, since general tools can be applied in many ways. Furthermore, the propensity for certain disciplines to favor almost-complete bibliographies can skew things too.

    Finally, albeit rare, there are those papers that introduce and find the "best" way to solve a problem, such that no further discussion is warranted.

    • by hweimer ( 709734 )

      Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.

      Suppose you have a position to fill and get several hundred applications. What do you do?

      • In the (incomplete) hypothetical situation you proposed, there are myriad paths I'd take depending on various circumstances.

        To elaborate on just one, if I was reviewing candidates for tenure-track junior faculty and research positions in an area that I was familiar with, I'd sit down and read through all of their publications. (Since I am a prolific reader, I can easily go through 300 full-length journal papers, in my spare time, every 2-3 weeks; considering prospective faculty review

  • by Okian Warrior ( 537106 ) on Monday September 17, 2012 @03:04PM (#41365753) Homepage Journal

    Unless you need publishing cred for your job, I can't see why anyone would bother going that route.

    It's only really useful for tenure in a teaching position, and *slightly* useful for other job prospects. If you're not pursuing either of those, why bother?

    1) Your information is owned by the publisher, you can't reprint or send copies to friends.
    2) You make no money from having done the work.
    3) The work gets restricted to a small audience - the ones who can afford the access fees
    4) It's rife with politics and petty, spiteful people
    5) The standard format is cripplingly small, confining, and constrained.
    6) The standard format requires jargonized cant [wikipedia.org] to promote exclusion.

    A website or blog serves much better as a means to disseminate the information. It allows the author to bypass all of the disadvantages, and uses the world as a referee.

    Alternately, you could write a book (cf: Quantum Electrodynamics [amazon.com] by Feynman). There's no better way to tell if your ideas are good than by writing a book and submitting it to the world for review.

    Alternately, you could just not bother. For the vast majority of people, even if they discover a new process or idea, publishing it makes no sense. There's perhaps some value in patenting, but otherwise there's no real value in making it public.

    Today's scientific publishing is just a made-up barrier with made-up benefits. In the modern world it's been supplanted by better technology.

    • by Anonymous Coward

      I'm not familiar with other fields, but certainly in the realm of computer science research, there is great value in publication, and your assertion that information is locked away just isn't true.

      The primary value in publishing, from the world's perspective, is that there is a permanent, cataloged copy of your work that others can find and read. Web pages and blog posts come and go over time, but published research papers are placed in organized repositories that are available for decades (hopefully indef

    • and soon they'll determine your potential success metric in the womb based on how many papers your parents published and just abort you if you (they) don't measure up. "papers! where are your papers?"
    • by RGuns ( 904742 )

      1) Your information is owned by the publisher, you can't reprint or send copies to friends.

      This is a sweeping generalization at best and wrong in most cases. It is perfectly possible to publish in a 'gold' open access journal like the ones owned by PLoS, in which case your information is published under, for instance, some CC license. Even most 'traditional' journals nowadays allow self-archiving preprints and/or postprints in an institutional repository like Harvard's [harvard.edu] or in ArXiv [arxiv.org] ('green' open access).

      3) The work gets restricted to a small audience - the ones who can afford the access fees

      Not necessarily true, for the reasons outlined above.

      4) It's rife with politics and petty, spiteful people

      The same goes for Slashdot, Wikipedia, the l

  • What if this person and their articles are cited as an example of a moron?

    Yeah, I know; peer reviewed articles tend not to drag colleagues through the muck, so to speak. Citations are made to build your own case, not so much to cut others down.

  • In 2005, physicist Jorge Hurst suggested the h-index, a quantitative way to measure the success of scientists via their publication record. This score takes into account both the number and the quality of papers a researcher has published, with quality measured as the number of times each paper has been cited in peer-reviewed journals.

    That sounds like exactly what Google does with its pagerank search algorithms. Though I suspect Google is much, much further along in thwarting people's attempts to game th

  • ... quality is now going to be measured by popularity?

    I see the little narcissists are growing up and setting policy now...

  • This is going to be slightly off-topic, but it is something I have been mulling over.

    Rating the research of people who are supposed to also be teaching puts them firmly in publish-or-perish mode, and that's not good for students. Universities are both research institutions and teaching facilities; really, they should choose one. There was a time you needed all three together because of the cost and learning efficiency; however, that time has come and gone as the number of students entering all three segments has in

  • There should be some rule on /. about posting articles that you can only see if you pay someone money...grrr.

  • It's too damned hard to succeed as a scientist. These men and women are already the best of the best, and they've chosen a field where you need to be the best of the best of the best to succeed. They've already gone through trials to get where they are, but it's still not enough to guarantee a permanent position or a decent wage. There's so much pressure and the competition for tenure is so tough. How is one supposed to distinguish oneself when everyone is a genius and a workaholic? It's true that competiti

  • ...cynical about a career as a Scientist/Academic Researcher.

    IMHO, there is absolutely no legitimate way of quantifying the "success of a scientist". It comes down to: 1) how a particular study stands the test of time; and 2) extended studies that confirm the accuracy of the original results, which will make the original investigating scientist a true success. The best examples I can provide are Prof Higgs... even Prof Einstein.

    All this 'publish-and-perish' claptrap will do is dilute the quality of academic research, discou
