Study Attempts To Predict Scientists' Career Success
First time accepted submitter nerdyalien writes "In the academic world, it's publish or perish; getting papers accepted by the right journals can make or break a researcher's career. But beyond a cushy tenured position, success is difficult to measure. In 2005, physicist Jorge Hurst suggested the h-index, a quantitative way to measure the success of scientists via their publication record. This score takes into account both the number and the quality of papers a researcher has published, with quality measured as the number of times each paper has been cited in peer-reviewed journals. H-indices are commonly considered in tenure decisions, making this measure an important one, especially for scientists early in their career. However, the index only measures the success a researcher has achieved so far; it doesn't predict a future career trajectory. Some scientists stall out after a few big papers; others become breakthrough stars after a slow start. So how do we estimate what a scientist's career will look like several years down the road? A recent article in Nature suggests that we can predict scientific success, but that we need to take into account several attributes of the researcher (such as the breadth of their research)."
Teaching? (Score:5, Insightful)
Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!
Re: (Score:1)
Re:Teaching? (Score:5, Insightful)
There's really little that an academic can add to the excellent textbooks already published...
You have obviously never:
a) Had to learn from papers, rather than textbooks
b) Had a GOOD lecture
If you need an up-to-date view of a field, a "lecture-ised" literature review from someone who knows what they are talking about can save you literal weeks of sifting through papers. The best lecture courses don't teach from textbooks but go through the primary literature, and that takes a good academic to do.
Re:Teaching? (Score:5, Interesting)
Good lectures make lazy students.
A student must feel truly abandoned, left to his own devices in an unjust game. Only students like that go to libraries, ask around, and make an effort to actually acquire their skills by themselves. Only those students have a long-term chance of achieving anything.
The products of "good lectures"? Like fat and complacent castrated cats that never learned to fend for themselves. Useless.
I do teach at a university. I give good lectures, and produce lots of useless, well-fed, lazy fat cats. They rate me as a good prof and everybody is happy. I do what I get paid for - but I know better.
Re:Teaching? (Score:5, Insightful)
NO! (Score:2)
If the student takes a sincere interest in the subject matter, and comes from a background where he/she knows that working hard will help avoid:
1) A low paying career
2) A meaningless job
3) A lifetime of misery
Then that student either forces himself or herself to show a committed interest in the field, or finds one where he or she can naturally develop such an interest.
You seem to be of the impression that lectures should instill a DESIRE in students. My bac
Re: (Score:2)
What good then is the University? A good public library is much cheaper.
Re: (Score:2)
Yet a LOT of textbooks just go over the same things over and over and over again. Sadly, a large number of books does not necessarily mean an equally large diversity.
It is very hard nowadays to write a good textbook, mostly because the classical subjects have been more or less covered by excellent books (those that survived the all-judging test of time), and because the new subjects are really, really hard. For my dissertation I had to go through hundreds of papers, which I tried to group and summarize in the early
Re: (Score:2)
Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.
Re: (Score:2, Insightful)
Or better yet, teach yourself. Science classes are for memorizing facts, which is not exactly science. I'm skeptical that one can really teach a roomful of people how to think scientifically in 3 hours a week for a semester.
Actual knowledge is not the reason to go to school - that piece of paper you get at the end, which confirms you possess said knowledge (or rather, confirms you paid for said piece of paper), which is essential to getting a job in about 90% of the market, is.
There are a lot of us 'self-taught' scientists, engineers, etc., who get stuck with entry-level wages because we lack the aforementioned written credentials, actual skills and know-how notwithstanding.
Re: (Score:2)
Well... my first-semester Physics experience, which was quite fortunate, was 10 students for 6 hours of class a week, plus 6-10 hours of lab, in Morley's old basement facility, taught by a theoretician and an experimentalist, with a dedicated TA and an undergrad assistant.
Only 3 of us became professional physicists; one of us works in theatre. But each of us derived E=mc^2 from experiment on our own -- that's great teaching, which you can't do on your own. We learned how to think; we learned tha
Re: (Score:2)
Ah, nah, what was I thinking. Whether someone produces future scientists or students who know science doesn't matter one bit. Let's continue to fetishize publication, and the system of duchies it rests on!
Skill at teaching and skill at researching may not be correlated. And if not, someone who is very good at research should spend their time on discovery, and let someone else do the teaching.
Re: (Score:2)
Agreed on all the above, though I've known a few people who were great at balancing both. My point was that 'success' should not be measured solely in terms of the research itself, but in activities that broadly contribute to the advancement of science/knowledge.
predicting success is hard (Score:5, Interesting)
I am interested in how anyone would predict the successful contributions of people who have been hiding in the patent office for several years, being denied promotions for their lack of credentials.
Exceptions are exceptionally hard to predict.
Re: (Score:3)
Well, his problem was lack of credentials as a patent officer. I'm getting a PhD in a specific branch of computer science (AI/games); if I needed money and got a job working at IBM on computer languages, I'd be years behind colleagues going straight into it from PhDs in languages. I'd even be behind some undergrads, because I've done fuck all with the theory of languages in 5 years.
The other thing to keep in mind with this is that 'success' sometimes means 'can recognize proj
Re: (Score:2)
I am interested in how anyone would predict the successful contributions of people who have been hiding in the patent office for several years, being denied promotions for their lack of credentials.
I believe the general statement is "Prediction is hard, especially with regard to the future".
As to that famous patent clerk, he couldn't get a job not because of lack of credentials (although he didn't complete his doctoral degree until 1905) but at least in part because he belonged to some religious or ethnic classification that was moderately unpopular at the time.
Finally, based on this metric, I must be enormously successful in academia, because I've published (in good journals) on everything from pure
Successful Predictions Feedback Loop Overload (Score:5, Interesting)
Re: (Score:2)
If promotion, hiring or funding were largely based on indices (h-index, the model used here or any other measure), then some scientists would adapt their behaviour to maximize their chances of success. Models such as ours that take into account several dimensions of scientific careers should be more difficult for researchers to game than those that focus on a single measure.
Re:Successful Predictions Feedback Loop Overload (Score:5, Interesting)
It's worse than that. If such an index were used widely in hiring decisions, then its success would be a self-fulfilling prophecy. It would be guaranteed to work amazingly well, because only scientists scoring highly on it would be allowed to succeed. And if you don't secure a high rating for yourself by the age of 28 or so, then you can just forget it and move on to something else.
Of course, the world pretty much works that way already, without reducing hiring criteria to a single number. The evil is that HR people will use it to minimize risk and simplify decision making, and so every employer will in effect be using the same hiring criteria. There might as well be a hiring monopoly to ensure that no "square pegs" get through all of the identical round holes.
Different Fields Different Standards (Score:2)
Even if they start successfully predicting individuals' careers
That's a very big 'if'. For example, in experimental particle physics we publish in large collaborations. My h-index is better than Dirac's, but Dirac was a far greater scientist than I! (And that is just the theory/experiment difference within the same field!) Similarly, I have papers in a large variety of journals, but so do the majority of us in the field. In fact, to accurately assess experimental particle physicists you need to rely more on what they do inside their collaborations than the publishing r
Brilliant (Score:1)
Excluding Patent Clerks (Score:4, Insightful)
Re: (Score:2)
It is certainly not beyond the realm of the possible, provided that you employ language that is at once more erudite and less accessible.
Re: (Score:3)
Sure you can. If you read the paper (and any other papers relevant to this) and tell us why you believe you can do better, I am sure you can get a grant for it. Believe it or not, this is how the scientific community works. If the author of the paper you are quoting made sure nobody else has tried this before, and he publishes his results, he has pretty much earned his grant. His results (presented at a conference or somewhere) will inspire other researchers to do better. Someone else like you will do better and publ
Inaccurate (Score:2)
Re: (Score:2)
The summary is pretty inaccurate. The h-index was proposed by Jorge Hirsch, not Jorge Hurst. Rather than give a vague description, why not simply provide an exact definition? The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations]. This is easier to understand if you look at the picture in the Wikipedia entry for h-index [wikipedia.org].
Made a minor clarification to your definition [in brackets].
IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts spamming your papers with self-cites just to drive your index up.
If anyone is interested, if you can find an author's profile on Google Scholar it will show their h-index, plus a modified h-index that only counts citations in the past five years. It will al
Re: (Score:2)
IMO the definition should be modified to exclude self-citations.
Also it doesn't consider the quality of the venue you publish in. There are journals out there that will publish *anything*.
Re: (Score:1)
The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more [citations].
Made a minor clarification to your definition [in brackets].
Oops, thanks for the correction!
IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts spamming your papers with self-cites just to drive your index up.
Good point. IMO excluding self-citations is good practice for pretty much all citation-based indicators.
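To make the quoted definition concrete, here is a minimal sketch in Python (my own illustration, not code from the paper or from any citation database):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: only three papers have 4+ citations, so h stops at 3.
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

Excluding self-citations, as suggested above, would just mean filtering each paper's count before it goes into this function.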
Re: (Score:2)
IMO the definition should be modified to exclude self-citations. Scientists like to cite their earlier work (and should, if it is on the same topic), but the h-index as currently defined tempts spamming your papers with self-cites just to drive your index up.
That wouldn't work. Where do you draw the line? Do you not count citations from papers with the same first author? If you do that, then savvy scientists will rotate authorship on papers from their lab. Do you make it so that no citations count when there are any common authors between the citing and cited papers? That's even more unworkable, considering how much scientists move around and collaborate across institutions. The only smart thing for a scientist to do then would be to strategically omit authors off
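Purely for illustration, here is what those two policies look like in code (hypothetical data layout; as argued above, either rule is easy to game):

```python
def is_self_cite(citing_authors, cited_authors, policy):
    """Decide whether a citation counts as a 'self-cite' under one of the
    two policies debated above. Both fit in one line; neither is robust."""
    if policy == "same_first_author":
        return citing_authors[0] == cited_authors[0]
    if policy == "any_common_author":
        return bool(set(citing_authors) & set(cited_authors))
    raise ValueError(policy)

cites = [
    (["Smith", "Jones"], ["Smith", "Lee"]),  # same first author
    (["Chen", "Smith"], ["Smith", "Lee"]),   # shared non-first author
    (["Park"], ["Smith", "Lee"]),            # fully independent
]
for policy in ("same_first_author", "any_common_author"):
    kept = sum(1 for a, b in cites if not is_self_cite(a, b, policy))
    print(policy, "keeps", kept, "of", len(cites))  # keeps 2, then 1
```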
the results are not surprising (Score:2)
The h-index of a scientist is the largest number h, such that he/she has at least h papers each of which have received h or more citations. [nb. excluding self-citations].
And that definition shows that the results of this paper are in fact so trivial as to be meaningless.
The h-index is cumulative. Their results were: "Five factors contributed most to a researcher’s future h-index: their total number of published articles to date, the number of years since their first article was published, the total number of distinct journals in which they had published, the number of articles they had published in “prestigious” journals (such as Science and Nature), and their
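For context, a predictor of this sort is essentially a regression over career statistics. Below is a generic least-squares sketch on synthetic data; this is not the paper's data, model, or coefficients, and the features simply mirror the four predictors named in the quote (its fifth factor is cut off above):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # synthetic "researchers"

# Columns mirror four of the quoted predictors; all values are fabricated.
X = np.column_stack([
    rng.integers(1, 80, n),   # total published articles to date
    rng.integers(1, 30, n),   # years since first article
    rng.integers(1, 25, n),   # distinct journals published in
    rng.integers(0, 10, n),   # articles in "prestigious" journals
]).astype(float)

# Fabricated target standing in for a future h-index.
y = X @ np.array([0.2, 0.4, 0.1, 1.0]) + rng.normal(0, 3, n)

A = np.hstack([X, np.ones((n, 1))])            # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
print("fitted coefficients (last = intercept):", np.round(coef, 2))
```

The objection above applies regardless of the fitting method: because the h-index can only grow, "current output predicts future h-index" is close to tautological.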
Re: (Score:2)
Researcher 1: 3 papers each cited 700 times, H=3
Researcher 2: 10 papers each cited 10 times, H=10
I'd still be picking researcher number 1.
Young Geniuses Versus Old Masters (Score:2, Interesting)
This reminds me of some research about artists which found that you could divide the most 'successful' artists into two rough categories: those who made a big splash right away and those whose classic work did not emerge until much later.
http://www.nber.org/papers/w8368
SMBC - gaming the science publishing system (Score:1)
This reminds me of a relevant comic - where once this system is worked out, scientists will be trying to game the system:
http://www.smbc-comics.com/index.php?db=comics&id=1624#comic
Why bother? (Score:3)
It should be noted that the usefulness of h-indices varies from field to field. For example, in various branches of pure mathematics, a heavily referenced paper is one that garners maybe 25 to 100 citations. In applied mathematics and certain subsets of statistics, the threshold would be an order of magnitude larger.
Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.
To elaborate, at least from my own experience, in certain portions of applied mathematics that bleed over into computer vision, machine learning, and pattern recognition, I've seen papers that are mathematically prosaic, but easy to understand or accompanied by available code, be heralded and heavily cited for a period. In contrast, I've come across papers that provide a much more sound, but complicated, framework along with better results, possibly appearing after the topic is no longer in vogue, that go unnoticed or scarcely acknowledged.
In a different vein, there are times when a problem is so niche, but nevertheless important to someone or some funding agency, that there would be little reason for anyone else to cite it.
Touching on an almost completely opposite factor, there are times when the generality of the papers, coupled with the subject area, artificially inflates certain scores. For instance, if a researcher spends his or her career developing general tools, e.g., in the context of computer vision, things like optical flow estimators, object recognition schemes, etc., those papers will likely end up more heavily cited, as they can be applied in many ways, than those dealing with a specific, non-niche problem, e.g., car detection in traffic videos. Furthermore, the propensity for certain disciplines to favor almost-complete bibliographies can skew things too.
Finally, albeit rare, there are those papers that introduce and solve a problem in the "best" way, such that no further discussion is warranted.
Re: (Score:2)
Also, as a preference, I tend to ignore metrics like h-indices when evaluating a researcher, as they provide very little evidence of his or her capabilities, let alone the quality of the work.
Suppose you have a position to fill and get several hundred applications. What do you do?
Re: (Score:2)
In the (incomplete) hypothetical situation you proposed, there are myriad paths I'd take depending on various circumstances.
To elaborate on just one: if I were reviewing candidates for tenure-track junior faculty and research positions in an area I was familiar with, I'd sit down and read through all of their publications. (Since I am a prolific reader, I can easily go through 300 full-length journal papers in my spare time every 2-3 weeks; considering prospective faculty review
Why is publishing useful? (Score:4, Interesting)
Unless you need publishing cred for your job, I can't see why anyone would bother going that route.
It's only really useful for tenure in a teaching position, and *slightly* useful for other job prospects. If you're not pursuing either of those, why bother?
1) Your information is owned by the publisher; you can't reprint or send copies to friends.
2) You make no money from having done the work.
3) The work gets restricted to a small audience - the ones who can afford the access fees
4) It's rife with politics and petty, spiteful people
5) The standard format is cripplingly small, confining, and constrained.
6) The standard format requires jargonized cant [wikipedia.org] to promote exclusion.
A website or blog serves much better as a means to disseminate the information. It allows the author to bypass all of the disadvantages, and uses the world as a referee.
Alternately, you could write a book (cf: Quantum Electrodynamics [amazon.com] by Feynman). There's no better way to tell if your ideas are good than by writing a book and submitting it to the world for review.
Alternately, you could just not bother. For the vast majority of people, even if they discover a new process or idea, publishing it makes no sense. There's perhaps some value in patenting, but otherwise there's no real value in making it public.
Today's scientific publishing is just a made-up barrier with made-up benefits. In the modern world it's been supplanted by better technology.
Re: (Score:1)
I'm not familiar with other fields, but certainly in the realm of computer science research, there is great value in publication, and your assertion that information is locked away just isn't true.
The primary value in publishing, from the world's perspective, is that there is a permanent, cataloged copy of your work that others can find and read. Web pages and blog posts come and go over time, but published research papers are placed in organized repositories that are available for decades (hopefully indef
Re: (Score:2)
Re: (Score:1)
1) Your information is owned by the publisher, you can't reprint or send copies to friends.
This is a sweeping generalization at best and wrong in most cases. It is perfectly possible to publish in a 'gold' open access journal like the ones owned by PLoS, in which case your information is published under, for instance, a CC license. Even most 'traditional' journals nowadays allow self-archiving preprints and/or postprints in an institutional repository like Harvard's [harvard.edu] or in arXiv [arxiv.org] ('green' open access).
3) The work gets restricted to a small audience - the ones who can afford the access fees
Not necessarily true, for the reasons outlined above.
4) It's rife with politics and petty, spiteful people
The same goes for Slashdot, Wikipedia, the l
Often cited? (Score:2)
What if this person and their articles are cited as an example of a moron?
Yeah, I know; peer reviewed articles tend not to drag colleagues through the muck, so to speak. Citations are made to build your own case, not so much to cut others down.
Talk to Google (Score:2)
That sounds like exactly what Google does with its PageRank search algorithms. Though I suspect Google is much, much further along in thwarting people's attempts to game th
So let me see if I get this right... (Score:2)
... quality is now going to be measured by popularity?
I see the little narcissists are growing up and setting policy now...
Re: (Score:2)
Divorce Research from Undergraduate Education (Score:2)
This is going to be slightly off-topic, but it's something I have been mulling over in my head.
Rating the research of people who are also supposed to be teaching puts them firmly in publish-or-perish mode, and that's not good for students. Universities are both research institutions and teaching facilities; really, they should choose one. There was a time you needed all three together because of the cost and learning efficiency; however, that time has come and gone as the number of students entering all three segments has in
Friggin' paywalls... (Score:2)
There should be some rule on /. about posting articles that you can only see if you pay someone money...grrr.
depressing (Score:2)
It's too damned hard to succeed as a scientist. These men and women are already the best of the best, and they've chosen a field where you need to be the best of the best of the best to succeed. They've already gone through trials to get where they are, but it's still not enough to guarantee a permanent position or a decent wage. There's so much pressure and the competition for tenure is so tough. How is one supposed to distinguish oneself when everyone is a genius and a workaholic? It's true that competiti
Re: (Score:2)
Yeah. This probably cuts down on our national scientific output quite a bit.
I for one... (Score:1)
...cynical about a career as a Scientist/Academic Researcher.
IMHO, there is absolutely no legitimate way of quantifying the "success of a scientist." It comes down to: 1) how a particular study stands the test of time; and 2) extended studies that confirm the accuracy of the original results, which make the original investigating scientist a true success. The best examples I can provide are Prof Higgs... even Prof Einstein.
All this 'publish-or-perish' claptrap will do is dilute the quality of academic research, discou