How Scientists Know An Idea Is a Good One
Physicist Chris Lee explains one of the toughest judgment calls scientists have to make: figuring out if their crazy ideas are worth pursuing. He says:
"Research takes resources. I don't mean money—all right, I do mean money—but it also requires time and people and lab space and support. There is a human and physical infrastructure that I have to make use of. I may be part of a research organization, but I have no automatic right of access to any of this infrastructure. ... This also has implications for scale. A PhD student has the right to expect a project that generates a decent body of work within those four years. A project that is going to take eight years of construction work before it produces any scientific results cannot and should not be built by a PhD student. On the other hand, a project that dries up in two years is equally bad. ... the core idea also needs to be structured so, should certain experiments not work, they still build something that can lead to experiments which do work. Or, if the cool new instrument we want to build can't measure exactly what I intended, there are other things it can measure. One of those other things must be fairly certain of success. To put it bluntly: all paths must lead to results of some form."
For certain values of "good" (Score:5, Insightful)
That's not a description of a good idea. That's a description of an idea that fits into an arbitrary 4-year timescale that fits with a PhD program's average length.
Re:For certain values of "good" (Score:1)
Mods... before you rate this as insightful... read the article, maybe?
Re:For certain values of "good" (Score:5, Funny)
read the article, maybe?
No time! I need to post within an arbitrary slashdot timescale that fits with getting modded up!
Re:For certain values of "good" (Score:2)
There's also some wiggle room in either direction. Some physics projects may necessarily take 8 years or so, but with many projects, at least in cell biology, you can speed things up or shape them to your timescale. Publishing as-is, without all the experiments that could be done, also happens, leaving work for the next student, a postdoc, or the lab head to finish up. A project that only lasts 2 years doesn't happen very often: there are almost always more experiments to do, pushing the knowledge further. I can't think of any biologist who could say "And that's basically all there is to research in this particular project: DONE."
When I choose what to research, length of study is not really a big concern.
Re:For certain values of "good" (Score:2, Interesting)
I can only speak for my own field (physics), but the national average length of a Ph.D. is almost 7 years, according to an AIP study I read about 8 years ago. There is a large spike at 5 years (theorists) and a long tail on the high end (experimentalists). Also, during the first two years, before you qualify for candidacy, you are rather inundated with classwork. In that case, aiming for a 4-year project sounds about right. It allows for a bit of a buffer for when things break.
Re:For certain values of "good" (Score:2, Informative)
It is highly dependent on local conditions. In France, PhDs are by definition 3 years long.
The main point is unaffected by the value of this number, though, just that it exists and is hopefully a small fraction of a person's career.
Re:For certain values of "good" (Score:2)
In Switzerland at ETH Zurich a PhD usually takes approx. 5 years. Starting a PhD program here requires a Master's degree which is usually obtained after 5 years of studying (master and bachelor together).
Re:For certain values of "good" (Score:3)
Almost everybody I know who got both a masters and PhD in physics did it for one of two reasons-- 1) they started grad work at a smaller school that didn't have a PhD program (or not much of one) and switched to a larger/better program 2) they expected to work at large companies (e.g. 3M) where the pay scale gave you slightly more money if you had a masters+PhD than PhD alone, even if all you did for the masters was fill out a few extra forms and bind up some intermediate result (that you had anyway) into a thesis.
Or, a series of small interrelated projects (Score:2)
Or, a series of small interrelated projects.
That is the customary approach I've seen the last decade.
Failures are very necessary part of science (Score:4, Insightful)
To quote Thomas A. Edison, "If I find 10,000 ways something won't work, I haven't failed. I am not discouraged, because every wrong attempt discarded is another step forward".
Re:Failures are very necessary part of science (Score:1)
Nothing breeds lack of funding like failure.
If only that were true for "War on terror", "War on Drugs", and a host of "Great Society" programs.
Re:Failures are very necessary part of science (Score:2)
Those are very successful corruption-study hypotheses. I fail to see your point.
Yeah, it's a pathetic way to do science, but... (Score:2)
... it gets even worse: http://www.pdfernhout.net/to-james-randi-on-skepticism-about-mainstream-science.html#Some_quotes_on_social_problems_in_science [pdfernhout.net]
Re:Failures are very necessary part of science (Score:3)
While you are theoretically correct, you are real-world dead-in-the-water. A big problem with getting science funding these days is what I'll call the Golden Fleece Award Effect (for Sen. William Proxmire's Golden Fleece Award; look it up on Wikipedia). While funding organizations are well aware that a solid negative result in a difficult research area is just as pertinent and useful as a positive one, Congress (the source of all funding) doesn't understand it and doesn't like it. Money out needs to be balanced by success in. I know many researchers who do 90% of the research needed for a given NSF (or NASA) proposal before they propose it, so they can (a) show it will indeed result in success, and (b) have it succeed so they can get more NSF funding. Nothing breeds lack of funding like failure. This is a dumb-ass way to do science, but since all funding comes from the Kingdom of the Dumbasses you get what you'd expect.
You hit the nail on the head.
With basic research, projecting milestones is impossible, and everyone from the researchers to the project managers is well aware of that fact. Thus you end up with people proposing research that is already well past proof-of-concept (90% is atypical in my field because of the large overheads) and listing milestones for research that is already being written up. The absurdity is that these results had to be funded from another grant, where you promised to do other research that was already done, so you wind up committing fraud on two counts: 1) doing research outside of what you proposed in order to seed results for your next grant, and 2) promising to do research that you've already done. (Problem 1 is exacerbated outside the US, where continuing grants--renewals--are less common.) Open-ended, "prove your concept" funding is almost unheard of these days--everything is based on deliverables.
It's an absurd system to begin with, because a funding body (Congress, the DoD, the European Commission, a parliament, a private company, whatever) is investing money in research with the expectation of seeing a return on that investment. While it can directly measure the return by looking at the commercial success of a technology that started with research grants, that process often takes decades, and it cannot accurately estimate the return up front. During the Cold War, Congress adopted the Infinite Horizon model, which said that simply placing funds with qualified scientists was sufficient to drive discovery. But that time has passed, and we're left with an arbitrary set of key performance indicators (KPIs) to justify spending all this taxpayer money on research. Science, by and large, hasn't changed (have idea, test idea, and if it works, poop out some technology); instead, scientists have had to change the way they construct proposals to make it appear that they are hitting KPIs. Thus you wind up in a situation where projects have to be stretched, compressed, or rearranged into bite-sized PhD theses, with a postdoc sprinkled here and there, that can generate papers at a regular rate. It's worse in countries with fixed PhD contracts than in the US, where PhD projects are still flexible.
The root of the problem, as with most problems in modern science, is the publish-or-perish model. Combined with the lack of any high-impact "journals of negative results," you either have to play along or find another career; there is no room for principles in the Age of Austerity. And what's worse is that the funding agencies know exactly what is going on, all the way down to the level of project manager, but they look the other way... unless an audit comes down the pike, in which case the scientists are on the hook. It's an entire system where everyone obeys the letter of the rules while violating their spirit. The worst consequence, in my opinion, is the rise of "research for show," where projects get funded based on their ability to generate papers and media hype rather than their contributions to science. It also tends to lock people into monetarily successful lines of research instead of branching out or taking chances.
Re:Failures are very necessary part of science (Score:1)
The problem is expecting to get funding from someone else! If they give you money then they own you, and your research.
Instead get rich on something else, and then fund yourself. Then maybe you can get somewhere. And maybe get rich again.
Rinse and repeat...
Re:Failures are very necessary part of science (Score:2)
If there are 10,000 ways of doing it and only one of them is right, then finding a wrong one only gives you about 0.00014 bits of information, while finding the right one would give you 13.29 bits. While a negative isn't a waste of time, its value in general is not the same as that of a positive.
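To make that arithmetic concrete, here's a quick information-theoretic sketch. It assumes a uniform prior over the 10,000 candidate ways (an idealization, of course; real search spaces aren't uniform):

```python
import math

N = 10_000  # candidate ways, exactly one of which works

# With a uniform prior, the total uncertainty is log2(N) bits.
# Ruling out one wrong candidate shrinks the pool to N - 1.
bits_per_failure = math.log2(N) - math.log2(N - 1)

# Finding the right candidate resolves all the uncertainty at once.
bits_for_success = math.log2(N)

print(f"one failure: {bits_per_failure:.5f} bits")  # ~0.00014
print(f"the success: {bits_for_success:.2f} bits")  # ~13.29
```

Edison's 10,000 failures together are worth the same 13.29 bits as the one success, but each individual failure is worth almost nothing on its own.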
4 years.. (Score:5, Informative)
A PhD student has the right to expect a project that generates a decent body of work within those four years.
Four years? Ha! That's a good one!
Re:4 years.. (Score:4, Interesting)
A PhD student has the right to expect a project that generates a decent body of work within those four years.
Four years? Ha! That's a good one!
The easiest way to enforce that is for the awarding institution to say that if it isn't done in 4 years, it will be taken as a complete failure. Suddenly, people find that it is possible to write up in time. (Seriously, if you can't stop pissing around "doing just one more experiment" or "reading just one more paper" and write up your thesis, you're a failure as a researcher and should be publicly branded as such.)
Re:4 years.. (Score:5, Insightful)
Four years? Ha! That's a good one!
The easiest way to enforce that is for the awarding institution to say that if it isn't done in 4 years, it will be taken as a complete failure.
No, that rule would result in a lot of thesis committees approving completely crap theses. Believe it or not, thesis committee members are human and have a lot of difficulty telling kids that their last four (or five, or eight) years of work are worth no recognition and please leave. Thesis advisors become emotionally attached to their students and want to see them succeed/graduate, even if those students are incompetent. Sometimes you can compensate for the incompetence with time. Only rarely will a thesis committee 'over-rule' the advisor, and their input generally takes the form of 'this would become acceptable if the student adds [foo] over the next year or so.' Mandated time to completion is a recipe for diminishing the quality of theses and migrating the PhD from someone prepared for reasonably independent work to a glorified MS. We're probably already moving in that direction, as many PhDs aren't really ready to work independently until they've finished two or more post-doctoral internships.
Re:4 years.. (Score:2)
Four years? Ha! That's a good one!
The easiest way to enforce that is for the awarding institution to say that if it isn't done in 4 years, it will be taken as a complete failure.
No, that rule would result in a lot of thesis committees approving completely crap theses. Believe it or not, thesis committee members are human and have a lot of difficulty telling kids that their last four (or five, or eight) years of work are worth no recognition and please leave. Thesis advisors become emotionally attached to their students and want to see them succeed/graduate, even if those students are incompetent. Sometimes you can compensate for the incompetence with time. Only rarely will a thesis committee 'over-rule' the advisor, and their input generally takes the form of 'this would become acceptable if the student adds [foo] over the next year or so.' Mandated time to completion is a recipe for diminishing the quality of theses and migrating the PhD from someone prepared for reasonably independent work to a glorified MS. We're probably already moving in that direction, as many PhDs aren't really ready to work independently until they've finished two or more post-doctoral internships.
Except in systems where PhD contracts are fixed (e.g., most of Europe). When PhD students start with the certain knowledge that they will have to submit a thesis in four years, they get super motivated to start pushing papers out around the halfway point, and advisers aren't so flippant about writing up results, because they know they have a fixed amount of time to milk their students for results.
Re:4 years.. (Score:2)
I think that you underestimate the value of soft modes of failure, particularly in maintaining quality standards. Examiners are human and it is much easier to say "not yet" than it is to say "and you're out of here".
Read the literature... or not (Score:4, Insightful)
A big part of the problem is that there are few negative results in the scientific literature. Ever found a paper with a clear negative outcome? I haven't. This "positive bias" in scientific publications is probably a major blow to the efficiency of scientific research.
Re:Read the literature... or not (Score:4, Insightful)
There is a reason why you are wrong. There aren't enough forests to support publishing all possible negative results, or enough time to read them. More aptly, there are plenty of "negative" results in the scientific literature: if you count the number of scientific papers that are in disagreement on a particular point, there are a great many of them. Science works best when there is actual evidence gathered to accept or reject a particular scientific hypothesis. A purely negative result can be obtained without taking any data at all and hence is of little value in advancing science.
Re:Read the literature... or not (Score:2)
Let me put it another way: assume you are starting a research project just now (perhaps you are starting your PhD), and some wizard would approach you and ask you if you'd like to receive, instantly, complete knowledge of all negative results in your field. Would your answer be "yes" or "no"?
Science is not about "value" or "usefulness". Science just expands knowledge about the universe, regardless of whether it is good or bad knowledge, whatever that may mean.
When an astronomer reports about a star that collapsed into a black hole, is that good or bad? Is that failure or success? Of course, it would probably be more helpful if the researcher reported a method that prevented the star from collapsing. Nevertheless, something can be learned from the report. Perhaps, if the researcher details the mechanisms that lead to the collapse of the star, somebody else can later find a "cure" for collapsing stars.
Similarly, if I report a failed experiment, it represents something others can learn from, and it may help them do experiments the right way.
Re:Read the literature... or not (Score:2)
The correct answer to that wizard is no. Also, if someone tells you you can have all the knowledge in the world, this is a nasty trick they are playing on you.
This is because knowledge is largely useless; only relevant knowledge is valuable. Also, the number of negative results in any field is probably infinite and uncountable. Although it is useful to know that this or that theory which had some data supporting it is wrong (a false positive, and a publication-worthy negative result), it is not useful to know that this other theory, which no one ever thought was right except perhaps you in your lab, is in fact wrong.
Re:Read the literature... or not (Score:2)
You seem to misunderstand what one can learn from a negative result.
A negative result is, in a sense, like determining that a function is non-linear. Knowing that some phenomenon behaves as a linear function is a very tight constraint upon what one can subsequently infer about that phenomenon, since there is only one way to interpret linearity. On the other hand, simply knowing that a function is non-linear doesn't place much of a constraint at all, since there are an infinite number of ways of being non-linear, none of which may necessarily be related to one another. One needs to know more about the precise nature of the function to reach any kind of conclusion, which, of course, requires a subsequent experiment. It does NOT tell others "something" about the phenomenon (other than that it is non-linear). The same is true of a null hypothesis that cannot be rejected. The outcome tells you NOTHING about the phenomenon you are attempting to study, except the fact that the experiment failed to produce a significant finding. Unlike a rejection of a null hypothesis, the acceptance of a null hypothesis tells us nothing about the nature of the error. If you have no estimate of error, then you are in effect making no scientific statement.
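The point that "failing to reject" carries almost no information can be shown with a quick simulation. This is a sketch with made-up numbers: a real effect of half a standard deviation, but only five samples per group, so the experiment is badly underpowered:

```python
import math
import random

random.seed(1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

trials, hits = 2000, 0
for _ in range(trials):
    a = [random.gauss(0.5, 1) for _ in range(5)]  # a real effect exists
    b = [random.gauss(0.0, 1) for _ in range(5)]
    if abs(welch_t(a, b)) > 2.306:  # rough 95% two-sided critical value, df ~ 8
        hits += 1

power = hits / trials
print(f"fraction of experiments reaching significance: {power:.2f}")
```

With these numbers, the vast majority of runs fail to reject the null even though the effect is real, so "we accepted the null" tells you essentially nothing about the phenomenon.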
Re:Read the literature... or not (Score:4, Informative)
Anyway, I was taught early on this is one of the main reasons to attend conferences -- after seeing an interesting presentation (or even poster) about stuff close to yours, you go for a beer or two with the presenter and hear all the failures they suffered and the wrong turns they took on the way. And share your own, too.
The body of science is so much more than just the published papers, you know.
Re:Read the literature... or not (Score:4, Interesting)
Anyway, I was taught early on this is one of the main reasons to attend conferences -- after seeing an interesting presentation (or even poster) about stuff close to yours, you go for a beer or two with the presenter and hear all the failures they suffered and the wrong turns they took on the way. And share your own, too.
And that's just one of the reasons I left academic science: people quit doing that. As funding dried up, people dried up. In fact, there were labs with a reputation for getting their postdocs and grad students to 'hoover' the conference, looking for ideas, strategies, and concepts, then bringing them back and working on some of the more likely leads. If such a lab has eight postdocs and ten grad students, they can generally beat your solo effort if they so choose. So you didn't say much. Not much fun.
That and the beer. Man, I hate beer.
Re:Read the literature... or not (Score:2)
Usually, this isn't much of a problem, since grants are so competitive these days that they must be based on previously published results that the potential grantee seeks money to extend and explore the consequences of. No "Nobel Laureate" would be caught dead attempting to copy an idea already developed in published work and call it their own. It would be a pointless exercise that would only make them look foolish to their peers.
Re:Read the literature... or not (Score:2)
Actually, that's why clinical trials are now supposed to be registered (clinicaltrials.gov), so that when they end, we get to know what they found (or didn't find). This way, pharma cannot avoid bad publicity, for example. It doesn't work perfectly, since I'm not aware of anyone actually verifying that studies do get published, but the mechanism is there, and if agency "X" decides to have a look, it can quickly get an idea of who studied what. The situation is much less well documented when it doesn't concern real patients, but most funding agencies do want to know what you did with their money, including not finding stuff.
Re:Read the literature... or not (Score:3)
The Michelson-Morley experiment was a negative outcome, wasn't it? It's one of the classic modern-physics experiments. In general, tests of Lorentz invariance are experiments with a "negative outcome": many have tried to find a violation of Lorentz's dictum, and all have failed.
Re:Read the literature... or not (Score:2)
There are many negative results in clinical medicine. For example, all drugs that don't work in a phase III trial deserve their own publication. This is a costly failure for pharma, but less costly than failing post-marketing and being sued by everyone.
Anyway, the term "negative result" is rather vague. A negative result coming from a well-designed and well-powered experiment can be very exciting (say, not finding the Higgs boson despite adequate design) because it makes us reconsider current theories. In my domain, for example, showing beyond reasonable doubt that smoking does NOT cause cancer would be a result of profound significance in preventive medicine. This kind of negative result is interesting, but rare. On the other hand, most of the time when a result goes against a very well established theory, the method is probably flawed or underpowered, or the interpretation is incomplete. This is the frequent kind of negative result, the one that most PhD students fear. There is yet another kind of negative result, also frustrating: when your new code/algorithm proves to be inferior to the competition. At least in this case you do contribute something new that might be of use in specific circumstances, or in designing a better version in the future.
So, what I'm saying is that most of the time unexpected negative results come from bad methodology, which is why everyone hates them. True negative results are great but require extreme rigor and luck.
Re:Read the literature... or not (Score:2)
A big part of the problem is that there are few negative results in scientific literature. Ever found a paper with a clear negative outcome? I didn't.
Perhaps you should publish this finding.
Re:Read the literature... or not (Score:2)
A big part of the problem is that there are few negative results in scientific literature. Ever found a paper with a clear negative outcome? I didn't. This "positive bias" in scientific publications is probably causing a major blow to the efficiency of scientific research.
I think the problem is that there aren't any reputable journals that publish negative results. If you could drive your h-index publishing all the stuff that didn't work, the paucity of negative results in the literature would vanish overnight. But it's a chicken-and-egg problem to make that happen.
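For anyone unfamiliar with the h-index mentioned above: it's the largest h such that h of your papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers each have >= h citations."""
    h = 0
    # Rank papers by citation count, most-cited first.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([0, 0, 0]))         # 0
```

Which makes the parent's point concrete: a stack of uncited negative-result papers moves this number not at all, so there's no career incentive to write them.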
Luck... (Score:4, Insightful)
...and the ability to think on your feet.
It is not possible to plan 4 years ahead to ensure success. What you get instead is a PhD project plan wrapped in a set of general concepts (AKA escape routes) in case you hit a dead end. I'm currently doing a life-science PhD and changed tack at the halfway point. A number of my colleagues have too, often quite drastically, whether for reasons of practical feasibility or time constraints.
If we knew accurately what we were going to work on that far in advance, it would probably already have been done.
Re:Luck... (Score:4, Informative)
Yeah, the trick is that you should always try to get funding for projects you have already completed, thus claiming a 100% success rate. Of course, this only happens in very large labs, and it has a bootstrap problem.
On the other hand, the biological sciences are especially tough because experiments are hard, expensive, and unreliable, and those doing them are typically not so sophisticated with data analysis. Or else you are doing bioinformatics, which is either algorithmic research or also costly and generally inconclusive unless you do in vivo validation, in which case you are back to problem number one.
But seriously, if you work with old-school biologists, do the world a favour, and teach them that a Gaussian error on a number of cells is dumb and wrong.
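To spell out why a Gaussian error on a count is wrong (a sketch with a made-up observation of 3 cells): counts are Poisson-distributed, so they are non-negative and skewed, and the usual "N ± 1.96·sqrt(N)" interval produces nonsense at low N.

```python
import math

N = 3  # observed count, e.g. cells in a field of view

# Naive Gaussian 95% interval: N ± 1.96 * sqrt(N)
lo = N - 1.96 * math.sqrt(N)
hi = N + 1.96 * math.sqrt(N)
print(f"Gaussian 95% CI: [{lo:.2f}, {hi:.2f}]")  # lower bound is negative!

# A count can't be negative; the Poisson distribution is the right model.
# Even if the true mean really were 3, observing zero isn't that unlikely:
p_zero = math.exp(-3.0)  # Poisson P(X = 0 | mean = 3)
print(f"P(0 counts | mean 3) = {p_zero:.3f}")
```

The symmetric interval dips below zero, which no count can ever do; a Poisson (or exact) interval stays positive and is asymmetric around the observation.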
Re:Luck... (Score:5, Interesting)
But seriously, if you work with old-school biologists, do the world a favour, and teach them that a Gaussian error on a number of cells is dumb and wrong.
I think that entry into either medicine or the biological sciences should require a passing grade in a graduate-level statistics course. Only then do you stand a chance in hell of starting to move away from a century of misconstrued numbers. In medicine, it's still painfully obvious that most researchers couldn't get past Stats 101, and that's even after they've had the manuscript reviewed by a biostatistician (who is probably shivering in a basement closet, hoping that the next group of researchers gives up looking for him and goes to a bar).
Of course, I'd still be fixing cars for a living, but that might have been a better outcome for myself and society....
Re:Luck... (Score:3)
90% of MDs don't understand conditional probabilities. That's probably about the same as the general public (see the Monty Hall problem), but in their case it has very real consequences.
But then I don't expect much from MDs anyway.
But for researchers, not understanding what a model is (never mind a statistical one) is a sin.
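The conditional-probability trap alluded to above is the base-rate fallacy. A sketch with invented but typical screening-test numbers:

```python
# Hypothetical screening test: 99% sensitivity, 95% specificity,
# and a disease prevalence of 1% in the screened population.
sens, spec, prev = 0.99, 0.95, 0.01

# Bayes' theorem: P(disease | positive test)
p_positive = sens * prev + (1 - spec) * (1 - prev)  # total positive rate
ppv = sens * prev / p_positive
print(f"P(disease | positive) = {ppv:.2f}")  # ~0.17, not 0.99
```

Most people, MDs included, guess something near 99%; the correct answer is about 17%, because the few true positives are swamped by false positives from the healthy majority.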
Re:Luck... (Score:3)
On the other hand, the biological sciences are especially tough because experiments are hard, expensive and unreliable, and those doing them typically not so sophisticated with data analyses.
Try low temperature physics...
When I was in grad school I used to ride bikes with a guy who was a biology PhD-- I can't remember if he was a postdoc or staff somewhere. One time we were out and he asked, "How many experiments do you do a week?" I almost fell off my bike laughing. I ran my experiment 3 times in 6 years (all in the last 1.5 years), and each run lasted no less than 4 weeks (I think the longest was 12 or 13 weeks). Everything up to the first successful run (as in: all the engineering worked and it was possible to get data) was design, build, test, fix (hardware, software, and electronics).
Spoken like a true manager (Score:1)
Chris Lee may be a physicist when he dons that hat, but in TFA he speaks as a people/resource manager, not as a physicist. In the science of physics, the only thing that determines whether an idea has merit is the scientific method, and that's very well documented.
Resource management is much more about cost and "return on investment" than about physics, even in the hard sciences. He wasn't speaking as a physicist in any way that's relevant to the science of his field.
The Persian method (Score:5, Funny)
Ancient Persians would debate ideas twice - once sober and once drunk. It had to sound feasible in both states to be a good idea.
Re:The Persian method (Score:3)
"Man, we should totally invade Greece! That Alexander is a real sissy and needs a lesson."
"Hear, hear! Now let's drink so we can evaluate this proposal more thoroughly!"
"Good for PhD" is not "good science"t (Score:3)
I'm afraid the title of your note is misleading. Good science, much more than good engineering, involves testing new or old theories to find how they work in previously untested ways, or to make sure that a previous test was really valid and caught all the important factors. A good graduate-school project involves a constrained question that can be reasonably tested in a few years, that involves something of interest to the adviser, and that with good luck can be turned into a career of related questions.
The key is to make the initial question relatively simple, so that the concept can be expanded into tests or other related fields as time and funding permits. This isn't asking the "right size" of question, it's asking a question with enough related, interesting implications but that still has relevance if only the simplest parts can be addressed. Let me take an example of something I'd love to find a good thesis for: the cost of using different sorting algorithms.
The maximum computational costs of complex sorting algorithms are well understood (and well described at Wikipedia). But the additional computational cost of maintaining registers is not factored in, especially for small or modest data sets, and neither is the cost of the comparison _itself_ between different formats, or between positive and negative numbers. Nor is the cost of a partial sort that has to be started over from scratch, or the benefit of algorithms that can be used when the data is partially sorted. There is _wonderful_ material for a thesis in that kind of question, and even material for almost immediate application to industry. The preliminary survey and testing work with computational models could be done within a year by someone competent, but testing it against different CPU or software environments would be even more valuable and could easily fill out the rest of a graduate program, even leading to a career in optimization of computational algorithms.
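As a toy version of the proposed measurements: simply counting comparisons (rather than quoting big-O bounds) already separates algorithms on partially sorted input. Insertion sort is adaptive, so on already-sorted data it does only n-1 comparisons, while on shuffled data it degrades to roughly n²/4:

```python
import random

def insertion_sort_comparisons(data):
    """Sort a copy of `data` with insertion sort; return the comparison count."""
    a = list(data)
    comps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break  # element has reached its place; adaptivity comes from here
    return comps

n = 1000
already_sorted = list(range(n))
random.seed(0)
shuffled = random.sample(range(n), n)

print(insertion_sort_comparisons(already_sorted))  # 999 (n - 1)
print(insertion_sort_comparisons(shuffled))        # ~250,000 (~ n^2 / 4)
```

Instrumenting real implementations this way, across data distributions and hardware, is exactly the kind of measurement the thesis idea above calls for.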
Re:"Good for PhD" is not "good science"t (Score:2)
Wow, good thing you're not (...just guessing here...) in a position to hand out PhD thesis tasks. That type of grindwork sounds like a fine thing to foist off on a high-school summer intern. Not that doing thesis research doesn't involve a lot of tedious grinding on sub-tasks; however, you seem to be confusing "immediately useful for industry" with "good thesis project."
You also (...just assuming here...) don't seem to have ever gotten deeper into the study and practice of programming than reading Wikipedia pages. For speed- or resource-critical programming tasks, yes, there actually is a software engineer looking at every detail of those "additional computational costs" --- counting cycles in assembly code and checking cache misses in memory. And this is exactly where your proposed research belongs: the person figuring out the fastest assembly routine for sorting 3- to 11-character Unicode strings on an ATmega8535 microcontroller is the engineer tasked with building such a device, not some poor sucker of a grad student grinding through every conceivable hardware/task configuration.
Re:"Good for PhD" is not "good science"t (Score:3)
Even though your example was a throwaway, it demonstrates the problem with your thinking about what a PhD should be. A PhD isn't about producing a technically skilled, hard-working worker who can do work worthy of a 6-figure salary in industry. There are engineering and vocational degrees/educations that provide that --- the ability to clearly articulate and crunch through the necessary steps to solve known engineering problems. While degree inflation and high unemployment have turned the PhD into the new BA/Masters in industry, it really ought to be considered a different (not better or more useful, just different) approach.
PhD research is about working at the edges of knowledge, doing experiments where there is no established "best practices" approach to the problem --- originality and "figuring it out the hard way from first principles" are the key skills, rather than comprehensive real-world technology knowledge. A useful industry engineer, by contrast, is super-skilled and knowledgeable at applying known state-of-the-art methods (they're the kind of people who would spend a year collecting, cataloging, and benchmarking sort algorithms that a PhD computer scientist invented decades ago but never bothered to turn into marketable products). It's a pity when PhD programs are turned into industry vocational mills, because it both devalues the expertise of highly skilled workers who don't have a PhD, and ruins the potential of what an academic, bleeding-edge-research PhD program can produce.
failure is a result (Score:1)
That's easy (Score:2)
Good ideas are discovered after the fact! (Score:5, Interesting)
Good ideas are hard to determine, and sometimes you find out something was actually a really bad idea only after several years, as with trans fats or saccharin.
The results of scientific discovery are diminished by classifying them as success/failure. The only two classifications should be "A Truth Discovered" or "Pseudo Science".
Any lab experiment which is conducted to seek the truth, even if it does not yield a commercially viable result, is still a truth discovered. A so-called failed experiment is still a success at discovering a method which does not work to achieve the desired results, and discovering what does not work can in some cases be more important than finding out what does, and is an actual truth discovered.
Any experiment performed to skew results in a particular direction, or where evidence that does not agree with your ideas is tossed, is nothing but pure Pseudo Science. Unfortunately we have so much of this that it has made people distrust scientists, because they have proven that they are just as opportunistic as normal people and will do just about any dishonest thing for a buck! True Science be damned!
Re:Good ideas are discovered after the fact! (Score:1)
The problem is that there are an infinite number of truths to be discovered. There are already hundreds of dissertations and results that will never be read.
For example, in any subfield of mathematics there are infinitely many theorems, and probably infinitely many useful ones as well. However, only the small fraction that lends itself to solving a goal common to the community gets published.
Basically they don't. At least they shouldn't (Score:5, Interesting)
But that is theory. In practice, having some realistic goals based on available resources of money and time is common to all fields, not just science.
[* Chandrasekhar was not bitter about Eddington; he credits being forced to change fields in his late 20s with teaching him how to learn, and he deliberately abandoned his field of study about every ten years and continued to be productive into his late 70s. If you find the spoof paper written in his style, "The Imperturbability of Elevator Operators" by S. Candlestickmaker, written by one of his grad students, it makes hilarious reading for the geeks.]
Four Years??! (Score:4, Informative)
Four years? Not in Canada - and presumably not in the US either. The department average in my program was more like 6 (I took about 6.5), and I've known people who have taken as long as 10 to complete their PhD.
From some document I found on startpage: http://careerchem.com/CAREER-INFO-ACADEMIC/Frank-Elgar.pdf
"Median time-to-completion of the PhD has nearly doubled during the last three decades (from 6.5 to 11 years)."
Re:Four Years??! (Score:1)
I'm a little shocked that 10 or 11 years is at all normal. In my experience, anything beyond 7 years is the stuff of horror stories. 10+ years must be in fields where a grad student has to do their research after working as a TA for 30+ hours per week to afford their ramen.
Hindsight (Score:5, Insightful)
wrong premise (Score:2)
"Scientists" isn't some coherent group that "knows" something. Some take guesses, some succeed, some fail. Many get it wrong too, and quite frequently.
Test It (Score:3)
Scientists tell if an idea is a good one by trying to prove it wrong, over and over again, in as logically thought-out a way as possible, until they give up. This is known as "science", and the fact that they do it this way is why we call them "scientists".
the economics of science (Score:2)
PhD = right-size research projects (Score:2)
And then there's charisma... (Score:2)
If I had a dollar for every time some moron with a really stupid idea was able to get other people to part with their money for it, I'd be a rich man. George Carlin summed it up nicely when he said "You nail two pieces of wood together that have never been nailed together before and some schmuck will buy it from you." I would extend that further by saying "If you have charisma, you are able to convince people that the words coming out of your mouth are pure gold." In my experience, as with role-playing games, if someone has high charisma, chances are they're also a moron, but people won't see it until it's too late. Chances are also high that such a person has the ability to blame their miserable failures on something or someone else, often a smart person with low charisma.
Re:What? (Score:2)
There's a little math that should be done when faced with the old "It can be done, but should it be done?" question:
$= (time + obtanium) / desire * beer
Re:What? (Score:2, Funny)
It is obvious that you're a mathematician. Your equation is dimensionally wrong.
Re:What? (Score:1)
It's a philosofmatic equation, not a complaint; a complaint looks like a whine and makes you sound like a lil girl.
Bring a solution.
Re:What? (Score:2)
I see you're not familiar with the curriculum of the Subgenius Foundation.
All Slack flows from Bob to those who've paid their dues.
Re:What? (Score:5, Funny)
It is obvious that you're a mathematician. Your equation is dimensionally wrong.
No, it's correct. Let's do the analysis: $= (time + obtanium) / desire * beer
time is in seconds
obtanium is in seconds (how long to obtain it)
desire is in seconds/liter (the longer you wait, the more you want)
beer is in dollars/liter
so we have (seconds + seconds)/(seconds/liter) * (dollars/liter) = dollars
Q.E.D.
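For the skeptics, that unit bookkeeping can even be checked mechanically. Here's a toy Python sketch (the `mul`/`div` helpers and unit names are invented for illustration, not taken from any real units library) that tracks each quantity's units as a dict mapping unit to exponent:

```python
from collections import Counter

def mul(a, b):
    # Multiply two unit signatures (dicts mapping unit -> exponent):
    # exponents of like units add; zero exponents are dropped.
    out = Counter(a)
    for unit, exp in b.items():
        out[unit] += exp
    return {u: e for u, e in out.items() if e != 0}

def div(a, b):
    # Division is multiplication by the inverted signature.
    return mul(a, {u: -e for u, e in b.items()})

seconds = {"s": 1}
time = seconds                        # seconds
obtanium = seconds                    # seconds (how long to obtain it)
desire = div(seconds, {"L": 1})       # seconds/liter
beer = div({"$": 1}, {"L": 1})        # dollars/liter

# Addition requires matching units; time + obtanium is still seconds.
assert time == obtanium

dollars = mul(div(time, desire), beer)
assert dollars == {"$": 1}            # (s / (s/L)) * ($/L) = $, Q.E.D.
```

Run it and the asserts pass silently, which is about as much rigor as this thread deserves.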
Re:What? (Score:2)
Is beer in the denominator? Because then the limit of $ as beer approaches infinity is zero, which is contradicted by reality.
Re:What? (Score:3)
The triangle of supply and demand works in this case as well.
Good/ Fast/ Cheap
Pick any two. Then use the profit algorithmic function to determine if the time utilized is an asset or boat anchor.
Re:What? (Score:3)
"Good/ Fast/ Cheap
Pick any two. Then use the profit algorithmic function to determine if the time utilized is an asset or boat anchor."
Fast and cheap may be easy to measure, good on the other hand is not so easy.
For example, during the early years of the cold war it was thought that nukes would be a fast and cheap way to deal with a Russian invasion of Europe.
(And it would kill plenty of commies, so it was obviously good as well; however, the radiation and nuclear winter effects would have killed most of the rest of us, but they didn't know that at the time.)
Re:What? (Score:2)
More akin to quantum mechanics: its outcome is tied to the Mandelbrot set, dependent on the researcher involved and the brand/volume of beer consumed.
If it's fast and cheap, it won't be good
If it's good and fast it won't be cheap
If it's good and cheap, it won't be fast.
Motivational modifiers from the other equation can then be applied.
Is the profit $ equal to or greater than the time and raw material, divided by your personal interest and the amount of beer you can expend on the project?
Utilizing these mathematics we can proceed with a quantum magnet that reduces the chance answer from a Schrödinger's magic 8-ball to either "Chances are good" or "Options look bleak".
To explore abstractions like ideas with statistically derived methods to define the problem is folly, in that a bigger picture is needed rather than a detail, and it will result in the equivalent of urinating into a north wind on a cold day.
Relativistically
Nothing hard about decisions to proceed, with some simple math first.
It came from the same classes you and I had in Gravitational Yoga and Round Earth Theory.
Re:What? (Score:2)
That's why everything is fast and cheap.
Re:What? (Score:2)
For example, during the early years of the cold war it was thought that nukes would be a fast and cheap way to deal with a Russian invasion of Europe.
And they were correct. It's worth noting that the USSR after the Second World War was far less aggressive militarily than the one previous to the war. They didn't invade another country directly until Hungary in 1956, while they had invaded quite a number prior to the war (and were a huge contributor to starting the Second World War).
Re:What? (Score:2)
Nope.
After the revolutionary period (including the Russo-Polish War), until WWII, the Soviet Union was not at all adventurous militarily. This changed in 1939, when Germany presented a real danger to the Soviets. Stalin's first idea was to ally with France and Britain, but this attempt was not going well when Stalin allied with Hitler instead, intending to use that alliance to buy time to re-equip (and doing a bad job of it). This started some Soviet aggression, largely to gain physical buffer areas.
After the German invasion and its failure, the Soviets wound up marching through Eastern Europe, not really by choice, and setting up Communist governments. The border of Soviet vs. Western influence was decided months before the first atomic bomb was ready. (It's worth noting that Communism gained legitimacy by being involved in most effective resistance operations; in fact, the most important legitimizing events were caused by German aggression rather than anything smart the Communists did.) Similarly, the Soviets launched offensives against Japan that were settled by negotiations with the West, attacking the Japanese in Manchuria on the exact date agreed on.
After the first two cities were nuked, the Soviets increased their levels of peacetime aggression, including intervention in Eastern Europe and more or less proxy fighting in various areas.
What the nukes did was make wars like WWII impossible, which I suppose counts as dealing with a Soviet invasion of Western Europe. They allowed a lot more minor aggression on the part of superpowers, since they eliminated the threat of it escalating into full-scale warfare.
Re:What? (Score:2)
After the revolutionary period (including the Russo-Polish War), until WWII, the Soviet Union was not at all adventurous militarily.
First, you're speaking of a rather short 12-year period between the first conquest of territory (1920-1924) and the second bout starting in 1936. That period was dominated by a long and bitter power struggle after Lenin's death. Even after Stalin's rise to power, he still spent several years clearing the government (and society in general) of those who might oppose him (or perhaps just because he could). This also was the period of the Holodomor, the deliberate starvation of millions of rebellious Ukrainians.
But in 1936, Stalin was secure in power and the USSR absorbed considerable territory in the Middle East, creating 5 more Soviet republics. He also dabbled in the Spanish civil war (1936-1939) on the Republican side. Then the USSR partitioned Poland with Germany in 1939, absorbed the Baltic states, and declared war on Finland as well. After the Second World War, the USSR kept most of the territory it had conquered as the "Eastern Bloc".
So before the Second World War, we have the only pause in the USSR's conquests being a horrific period when it was killing many millions of its own citizens. You can continue to make excuses for this brutal and aggressive regime, but I see nuclear weapons as being the only thing that prevented a third world war between the USSR, and western Europe and the US.
Re:What? (Score:2)
I'm speaking of the period from about the end of the Russian Civil War and its associated wars to 1939, not 1936. I went out and looked for evidence that the Soviet Union went for territorial aggrandizement in 1936-39, and found nothing. All I found was a reorganization of already-controlled territory in 1936. So, we're talking about a period in which the Soviet Union had by far the biggest tank forces on the planet and did nothing with them.
I find it hard to understand what's so aggressive about supporting the legitimate government of Spain in its civil war. I don't think Spain would have become an SSR no matter what happened there.
Now, look at 1939. Stalin was primarily looking at security from an obviously aggressive Germany. His first idea was an alliance with France and Britain, although he was justifiably afraid of their possible game of "Let's You And Him Fight". It's unclear whether this alliance ever would have worked, but the British in particular showed no enthusiasm, and Molotov-Ribbentrop was signed on the day the British envoy got authorization to say something on his own more than proposing a toilet break.
Stalin knew at this time that he was in a bad position, and set about to improve it. The Soviets occupied eastern Poland, the Baltic States, part of Romania, and fought a thoroughly inept war against Finland. This is aggression, but it isn't the same as going out and conquering for the sake of conquest. The later war naturally involved Soviet military control over much of eastern Europe and Manchuria. The Soviets wound up in military control of the places they'd agreed on with the Western Allies, no more.
At this point, the first nuclear weapons were used, and the Soviet aggression continued, although without the same question of ensuring safety. This includes several proxy wars, some of them quite large, and direct if not acknowledged participation. It looks to me like nukes did nothing to deter low-level Soviet aggression, and may well have encouraged it. (It's also possible that the Soviets took their new-found position and ran with it - WWII did quite a bit to legitimize Communism, after all.) It's very likely that the nukes prevented another large-scale war in Europe, but it's not certain. I know some of the Soviet reaction to WWII, but I don't know how it shaped high-level decision making. Were they determined to avoid anything like WWII again, as seems likely?
I have no idea why you think I'm making excuses for the Soviet Union, when I'm trying to put it into historical context. I see no reason to get into their iniquities (it would make this post much longer, for example), when the main point is whether nukes deter aggression.
Obviously, it's not possible to militarily conquer a country with nukes, or to conquer what a country considers its absolutely vital interests. That says nothing about lower-level aggression, including the conquest in Vietnam and attempted conquest in Afghanistan. In fact, having some sort of safety net encourages some forms of aggression. According to Luttwak, the Egyptian plan in 1973 was to advance into the Sinai as far as they safely could, and wait for the UN cease-fire that was sure to come. Without knowing that the UN would bail them out before things could get too bad, would they have attacked?
Re:What? (Score:1)
It's more like the square of supply and demand:
Price, Performance, Product & Delivery.
Pick two.
Re:What? (Score:2)
I like that. It applies more to the luthiery division of my company.
Re:What? (Score:2)
goodness = (1 - p(random explosion))
Iron triangle (Score:2)
The triangle of supply and demand, where one of the sides isn't supply and neither of the others is demand?
Don't repeat stuff you've misheard while listening to the grownups. It makes you look stupid.
Re:Iron triangle (Score:2)
Supply and demand is the institution.
The triangle is the modifier to competitive practice.
So go back, lock the bathroom door, and practice.
He tells us... (Score:2)
Re:He tells us... (Score:2, Informative)
Re:He tells us... (Score:2)
A project that is going to take eight years of construction work before it produces any scientific results cannot and should not be built by a PhD student.
ATLAS took over 8 years of construction work before it produced scientific results. Some PhD students worked on detector construction and physics simulations - in fact, it required agreement with universities in some cases that they would allow a PhD with Monte Carlo studies instead of a real physics result. Yes, their projects were shorter in scope than ATLAS as a whole, but those projects, while essential to the success of the overall ATLAS experiment, did not produce real, experimental scientific results because there was no data.
So, contrary to what the article claims, you can do a PhD on a project which takes more than 8 years before scientific results come from it. This is not misconstruing the article - what the article says is provably wrong for the field of particle physics.
Re:He tells us... (Score:2)
Re:but ... (Score:4, Insightful)
The good ones need ink as well.
yet Math is applied Logic (Score:4, Interesting)
Math is applied Logic
Logic is applied Philosophy
Philosophy is applied Sociology
and "the circle is now complete."
Re:yet Math is applied Logic (Score:1)
Re:Advisors cherry pick PhD projects? (Score:4, Informative)
"A PhD student has the right to expect a project that generates a decent body of work within those four years."
For a Masters degree, this is acceptable. For a PhD, they had better be coming up with their own idea, a plan, funding, and then have their advisor and committee evaluate during the prospectus defense. Having their topic/project dropped in their lap so they can turn the crank is not what a PhD is all about.
Funding?
There are areas of physics where the cycle time for proposals is 2 years (from announcement to release of funds) with a success rate of less than 10% for even senior people (NIH has an even lower funding rate, and an expectation that most things get proposed a couple times before being funded). Many, if not most, graduate students in science can easily get funding to cover their salary through fellowships/RA positions/TA positions, etc, but the chances of a grad student writing their own grant proposal in most subfields is pretty small. Sure, there are areas where you can do good science with dimestore materials (and a few places that specialize in that), but that's a pretty narrow slice of science in almost any field. Some of the most successful faculty I've known at one of the top science/engineering universities in the world are successful because they let their post-docs be PI on proposals (which is relatively uncommon). Then if the project is awarded the post-doc starts the work as a post-doc and manages to spin it into a faculty job.
Re:Advisors cherry pick PhD projects? (Score:2)
"A PhD student has the right to expect a project that generates a decent body of work within those four years."
For a Masters degree, this is acceptable. For a PhD, they had better be coming up with their own idea, a plan, funding, and then have their advisor and committee evaluate during the prospectus defense. Having their topic/project dropped in their lap so they can turn the crank is not what a PhD is all about.
Funding?
There are areas of physics where the cycle time for proposals is 2 years (from announcement to release of funds) with a success rate of less than 10% for even senior people (NIH has an even lower funding rate, and an expectation that most things get proposed a couple times before being funded). Many, if not most, graduate students in science can easily get funding to cover their salary through fellowships/RA positions/TA positions, etc, but the chances of a grad student writing their own grant proposal in most subfields is pretty small. Sure, there are areas where you can do good science with dimestore materials (and a few places that specialize in that), but that's a pretty narrow slice of science in almost any field. Some of the most successful faculty I've known at one of the top science/engineering universities in the world are successful because they let their post-docs be PI on proposals (which is relatively uncommon). Then if the project is awarded the post-doc starts the work as a post-doc and manages to spin it into a faculty job.
Given the recent surge in "professional grant writing consultants" you'd have to be insane to let someone write a proposal for their own PhD research (in the US that means a bachelors student!). With funding rates around 10% excellent, well-written proposals are already routinely rejected. Being able to ask postdocs to write proposals is a luxury of being a professor at a top school where you can attract postdocs that are that competent.
Re:Advisors cherry pick PhD projects? (Score:2)
University of Chicago used to have at least a few people doing it-- granular media experiments can be done pretty cheaply, and there was someone there doing theory and experiments on migration of suspensions in droplets as they evaporate that started out with a bunch of experiments using coffee droplets. Poke around 4 year colleges that have good physics departments and there's probably someone doing good physics on the cheap.
Re:Advisors cherry pick PhD projects? (Score:1)
Usually a PhD program is about seeing if the student is capable of doing their own research, and communicating results in a clear and complete way. It is not a complete training for going into academia (maybe it varies with field, but my experience is with physics). As such, you rarely see any effort for the student to get their own funding. Getting funding yourself looks good on your resume (it might not really help with a postdoc position, but after that it would), but sometimes that comes down to doing things like putting travel grants and other small funding awards on the CV to show you did something along those lines. Even at the postdoc level, it seems rare to see researchers doing their own, complete grant proposals. At that point you would need to be hired by a group anyways, because I don't think I've seen a school hire a generic postdoc and let them get funding for their own project. You can still try to get funding for a side project, and it would look good. But typically, it is not until you are a beginning researcher in a tenure-track position that you begin the funding grind.
As far as what to work on, it often is somewhere in between being told exactly what to do and having to come up with your own complete project. There is a reason they have advisors, who are there to advise on feasibility, potential directions of research, potential problems and questions that need to be answered, and to keep things on a decent time track. It is not just supposed to be a sink-or-swim process; it is supposed to be educational, so the students build up experience, much of which they lack at that point (e.g. it is easy to get over-optimistic when younger about how long some things can take to do). Additionally, the group and their grants usually have some bigger-picture issue they are working toward, and there may be limits to what they can use their resources for; they also have a good idea of what would be most helpful to add to the team's effort.
The process I've seen for such students is usually more of a gradient in self-design. An incoming grad student may be given at first a simple, cookbook project that is supposed to take a month or little more, with the point of giving them a chance to learn the code base, or the lab layout, or the general work going on around the experiment. Then the advisor will let them know several of the big questions they have and some of the work that could contribute to that. The student usually has a fair bit of choice in which direction they want to go, and then they start off with some basic work. From there, from the results, and from talking to the advisor, the thesis project evolves, with the student having quite a bit of say, and the role of the advisor becoming more about pointing out gaps that need to be filled in, or keeping projects from getting over-ambitious if time starts running short. A more concrete example would be what I've seen done on various plasma physics experiments, where you have a single large machine, and then a multitude of diagnostics run by different people and subgroups. A student, after some introduction to the experiment, would be given options like: no one currently is running magnetic diagnostics and we need a person there, and here are some unresolved questions you could work toward; we also have several people working on spectroscopy at the moment, but there are some other questions and a lot of data there that could still support another person or two; the x-ray diagnostic just got a new person and we're not sure if much could be added by a second person on that subproject, so we don't think that would be a productive area for you, although it might still be an option for using the data in combination with another diagnostic...
Re:They get grant money, thats how. (Score:3)
If you think that "scientists" are mostly after money, then you don't know anything about how science works or where funding for science is actually spent.