Study Can't Confirm Lab Results For Many Cancer Experiments (apnews.com) 126
Hmmmmmm shares a report from the Associated Press: Eight years ago, a team of researchers launched a project to carefully repeat early but influential lab experiments in cancer research. They recreated 50 experiments, the type of preliminary research with mice and test tubes that sets the stage for new cancer drugs. The results reported Tuesday: About half the scientific claims didn't hold up. [...] For the project, the researchers tried to repeat experiments from cancer biology papers published from 2010 to 2012 in major journals such as Cell, Science and Nature. Overall, 54% of the original findings failed to measure up to statistical criteria set ahead of time by the Reproducibility Project, according to the team's [two studies] published online Tuesday by eLife. The nonprofit eLife receives funding from the Howard Hughes Medical Institute, which also supports The Associated Press Health and Science Department.
Among the studies that did not hold up was one that found a certain gut bacteria was tied to colon cancer in humans. Another was for a type of drug that shrunk breast tumors in mice. A third was a mouse study of a potential prostate cancer drug. The researchers tried to minimize differences in how the cancer experiments were conducted. Often, they couldn't get help from the scientists who did the original work when they had questions about which strain of mice to use or where to find specially engineered tumor cells. "I wasn't surprised, but it is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful," said Michael Lauer, deputy director of extramural research at the National Institutes of Health. NIH will try to improve data sharing among scientists by requiring it of grant-funded institutions in 2023, Lauer said.
Pretty Common (Score:5, Insightful)
Re: Pretty Common (Score:5, Insightful)
Re: (Score:3, Insightful)
Another thing would be to reduce perverse incentives such as the competitive 'publish or perish' conditions
They're largely the result of the mistrust of science by the general populace, who elect politicians that demand taxpayer-funded universities produce results at all times, while giving free rein to the military-industrial complex to waste billions with no repercussions.
The bulk of researchers' time is spent preparing grant applications instead of doing research.
Re: (Score:3)
It's a result of the high demand that professionals who are given lab space, staff, equipment, and authority over their research produce something verifiable. It can be very burdensome for a competent theoretical or practical scientist to also negotiate for funding, but that is a distinct requirement from publishing their work frequently.
Re: Pretty Common (Score:5, Insightful)
Re: (Score:2)
Re: (Score:3)
The incentive of publish or perish tends to produce quantity over quality. This means reviewing the research on a particular topic can be particularly burdensome when reviewers have to wade through piles of low-quality crap to find a few decent quality papers. If you read meta-studies, you'll see that they usually pointedly enumerate how many papers they evaluated & how many they had to reject because they were of such poor quality. Sometimes, what should be single coherent, cohesive research papers get
Re: Pretty Common (Score:4, Insightful)
Basically, we're describing Goodhart's law: When a metric becomes a target, it ceases to be a good metric.
It is a good metric if it is what you are actually trying to accomplish.
For years, DARPA funded robotics research the traditional way: By giving out money based on grant proposals. The result was money going to people who were good at writing grant proposals but not much meaningful research.
Then DARPA changed course and instead issued a "grand challenge" with a series of contests. The money went to the team with the best result, not the best proposal.
A similar "money for results" approach may also produce better batteries, solar panels, and high-temperature superconductors.
I'm not sure how competitive funding would apply to cancer treatments.
Re: (Score:3)
Re: Pretty Common (Score:4, Informative)
Biomedicine has that to some extent, due to the massive budget of NIH. However, that massive budget doesn't go as far as you might think considering that NIH has a hard-on for larger, more expensive studies, and doesn't like to look at smaller grants. In my experience, it's $1 million or up. Smaller studies, if they can't be linked together into a larger study, are unlikely to get funded by NIH. There is also the novelty criterion. Replication work, if it adds nothing new, is also of little interest to NIH.
In other fields, like mine, the vast majority of research dollars come from private industry funding research they think they can profit off of in the next 2 to 5 years. It works out OK when there are two groups working at cross purposes, so that pissing off one group doesn't matter because you win over the other in the process. But in more narrow areas, where there is no one who stands to gain from proving something does/doesn't work, it is much harder to find funding.
Re: (Score:2)
"Replication work, if it adds nothing new, is also of little interest to NIH."
These are great examples of some of the problems in science biasing the outcomes. There is also political bias interfering in some areas. Showing a failure to replicate or strong
Re: (Score:2)
I think something similar to speeding tickets would be in order here. It would be nice if we re-routed a few hundred billion dollars from say defense into things that actually save and improve many lives like science, health & medicine, heck even education research. And in doing so could provide perhaps 1-3 follow-up studies for some of these things to prove validity and discourage cheating. Since that's not going to happen perhaps the various institutions, govt orgs, or publishers could fund a set of
Re: (Score:2)
Re: (Score:3)
1) poor design of experiment so the results don't really correlate to the hypothesis
2) assuming that data that shows the hypothesis to be true is "proof"
Re:Pretty Common (Score:5, Insightful)
Publishing negative studies is EXTREMELY important. A lot of this work is done by PhD students and normally you need 2-3 publications to graduate. If you don't get them, you don't get your PhD. You don't know ahead of time if the project you are working on for years is going to work or not. Putting in years of effort and then not getting the degree because the process didn't work but you did learn why is really bad. It is also wrong because it sends the message that science has to always work. Even worse, PhD students don't normally choose their exact topics, so something your advisor decided on can determine whether you get the PhD at all.
Being able to publish negative results more easily would remove a lot of these screwed-up incentives. I also think it would make it easier to get scientists to cooperate, because they probably know which results are false but can't say so.
Publish or perish creates some really nasty problems. If I told you that at the end of 5 years you would either get to move on to another job or waste the 5 years and leave with nothing of value, and the entire determinant was whether your project worked, you would make sure it "worked" regardless of whether it actually did. This has created a lot of safe science, where people take something they already know works and make a minor improvement they are sure will work ahead of time, or they take more risks and find a way to make it look like it worked even if it failed.
Re: (Score:2)
Publishing negative studies is EXTREMELY important. A lot of this work is done by PhD students and normally you need 2-3 publications to graduate... Putting in years of effort and then not getting the degree because the process didn't work but you did learn why is really bad. It is also wrong because it sends the message that science has to always work.
I totally agree, and I think we need to find ways to convince the average person of the value of negative studies. First, eliminating that which doesn't work increases the likelihood of finding something that does work. Second, negative studies can discover approaches that may actually be harmful, and do so under controlled conditions where the potential harm can be limited and mitigated. Success is almost built on a foundation of necessary failures - that's why good science is often tedious and always pain
Re:Pretty Common (Score:4, Informative)
Publishing negative studies is EXTREMELY important.
Yep.
My thesis didn't get published because it was a negative result. I demonstrated that a "novel" mathematical software package a major player in the field was using had no computational benefit, and produced no better results than not using it. However, it was flashy and hard to understand, and had a couple of good papers backing it up. Those papers work because they carefully constructed the situations in which they demonstrated it to work. When applied to almost any other situation, it's garbage.
Lots of people are still using that software package today, and there's no reason to. There's no reason to spend the time tweaking it and configuring it and trying to figure out how it does its magic. The time and effort put into fucking with it would be far, FAR better spent on other things.
The papers they put out make it look like a fucking sonic screwdriver, and in reality it's more like a lockpick. Handy in very limited circumstances, but not useful for most applications.
Re: (Score:3)
This is something I sympathize with. I have worked with methods that are supposed to be amazing and found they only worked under REALLY narrow circumstances that are basically identical to what the paper shows.
The ones I have found more annoying are dealing with optimization algorithms. So many papers show off their amazing algorithm but if you look really carefully often the problem is carefully chosen to look hard but not to actually be hard.
Re: (Score:2)
Tenure was supposed to be the balancing force -- once you had your degree, then you could take the big chances. But it hasn't worked in practice.
Re: (Score:2)
Except that tenure is much rarer now, most of the research is done by PhD students, and they have no such protection.
Re: (Score:2)
Exactly. Therein lies part of why it doesn't fix the problem. Part.
Re: (Score:2)
It is normal for vaccine effectiveness to drop over time. There is strong selective pressure to mutate around current immune adaptations, so you would expect any rapidly mutating virus to erode vaccine effectiveness over time.
Re: (Score:2)
That's not what his video shows. What it clearly shows is that if you pick a large number of different unconnected percentages and then put them in order you get a descending sequence which likely starts at 100 or in the high 90s and then likely descends well below 50.
Nothing more, nothing less.
At a meta level there is more however. In a sense it's a murderous fraud attempting to kill people by keeping a virus live in the world that we could otherwise eliminate. In a sense it's a piece of art showing how
Re: (Score:2)
The vaccine is very effective against the strain it was developed for. Delta and Omicron have many mutations which reduce the effectiveness. If we had enough people not get vaccinated for measles the effectiveness of the vaccine would also drop over time. Covid is a recent zoonotic virus and it is still undergoing rapid mutation. The more people that are infected and the longer they are infected the more mutations we will get. If this goes on long enough we will likely need a completely new vaccine.
Re: (Score:2)
The evidence indicates that natural immunity is generally more effective and durable than vaccination. Which isn't surprising: the resistance gained in this way will be more varied and triggered by differing features of the virus. This suggests that allowing natural spread among low risk populations would curtail the virus sooner and reduce the time it has to mut
Re: (Score:2)
I can't agree with this conclusion. The problem is that even though low risk populations are unlikely to die and may develop better long term resistance they also still end up with long term damage. Even for young and healthy people many end up with damage that so far looks permanent. There is damage to the lungs, heart and neural tissue. You could easily have mild symptoms and still end up permanently damaged.
Also given what this virus infects it is likely we will see further long term damage. So far it lo
Re: (Score:2)
I can see the argument with the vaccine appearing to mitigate some of the long term damage. The statement that young and healthy may end up with the long term damage is somewhat misleading because it masks the reduction in probability and severity of known lingering effects in those populations. Youth still offers a greater contribution to reduced risk of long term effects than vaccination.
I'm not sure I can agree that intentionally inflicting widespread genetic
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
It's funny, if we applied the scientific method (necessary and sufficient falsifiable hypothesis statement) against our own political tendencies, we might actually break out of the bubbles we're sorted into. It can start off with just simple falsification criteria:
For pro-mandaters, "what evidence would convince you that the vaccines aren't safe for specific types of individuals?"
For anti-mandaters, "what evidence would convince you that the vaccines should be mandated on most humans on the planet?"
For me,
Re: Pretty Common (Score:3)
"What evidence would convince you that the vaccines aren't save for specific types of individuals?"
Framing questions is one of the harder parts of science. For instance, this is terrible framing. You're pre-supposing the reason to avoid mandates must be dangerous vaccines. This ignores the entire panoply of reasons people don't get vaccines. It ignores the gigantic impact of public health policy on the spread of infectious disease. It doesn't care if natural immunity is better than vaccine immunity.
This is
Re: (Score:2)
'For pro-mandaters, "what evidence would convince you that the vaccines aren't safe for specific types of individuals?"'
Perhaps could be revised to
'For pro-mandaters, "what evidence or criteria would convince you that forced vaccination isn't the sole and best response to the continued existence of SARS-CoV-2?"'
I would expect for many who oppose mandates the answer would be obvious. There would need to be strong evidence the virus represented a civilization end
Re: (Score:2)
Re: Pretty Common (Score:2)
No profit motive. Biosciences is dominated by pharmaceutical bros looking to cash in on specious research.
Re: (Score:2)
To be fair, physics hasn't had the same scale of reproducibility problem that the social sciences and medicine have had.
There's a significant difference between physics and social sciences/medicine: humans.
Physics gives you precise results. Human responses to questions (in psychology, sociology, etc.) are very imprecise.
Experiments in medicine are difficult because most people consider it unethical to inject drugs or cut up humans in whatever way you feel like.
Re: (Score:2)
It is highly questionable to expand this framework to many dimensional higher order topics which may have no underlying
Re: (Score:2)
Re: Pretty Common (Score:2)
I want to start a science journal called Null that only publishes studies that prove the null hypothesis. With a focus on studies debunking older studies. Give researchers an incentive to dismantle bad science, and a much needed avenue to publication of results that haven't been doctored to show "interesting" results.
Re: (Score:2)
Re: (Score:2)
It's not unfortunate. It's the way it's supposed to work. Journal papers are really the modern equivalent to writing to your colleagues and saying "hey, I did this and I saw this. How about you?"
What's unfortunate is that we've somehow decided a published paper in a journal is somehow "true."
What we *do* need to get better at is actually replicating experiments, and doing so in such a way that we produce negative results. The vast majority of those "negative results" that are never published are not negativ
The Failure of Peer Review (Score:5, Insightful)
People often talk about peer review as if it's a panacea, but this and many other reports have shown that the current state of peer review is an utter failure. The reasons span from there being no money aimed at peer review to there being no glory in peer review...nobody makes a name for themselves by confirming study results.
Re:The Failure of Peer Review (Score:4, Interesting)
There are other incentives than that. The people reviewing the paper also know that the ability of the PhD students involved to graduate is often tied to the paper being published. They know that the person working on it didn't get to choose whether reality would cooperate, and that an honest negative paper covering exactly what did not work and why will probably not be publishable.
Would you want to lose all the work you put into a PhD and leave with nothing just because something didn't work? You still learned a lot and expanded our understanding of a subject, and there was no way to know ahead of time if the idea would work or not. That encourages either zero-risk science or falsified results when something does not work.
Re: (Score:2)
"There are other incentives than that. "
Yes, I believe I alluded to that when I said that the reasons span...certainly there are others I didn't itemize.
Re:The Failure of Peer Review (Score:4, Insightful)
...nobody makes a name for themselves by confirming study results.
I do not think that has to be the case. Someone could make a hell of a name for themselves debunking bad studies. Especially right now.
Re: (Score:2)
Could being the key word. It hasn't happened, and when I said the "current state", I implied that it's not irreparable.
Re: (Score:2)
Re: (Score:2)
Not quite what we're talking about though. This is someone chasing down fraud, vs. bad studies where the researchers just did a bad job (of which there are many).
Re: (Score:2)
Re: (Score:2)
Exactly. Maybe at some point we should require that the budget for any study include funds for an independent peer review.
Re: (Score:2)
Peer review mostly works as intended. The purpose is to filter out obvious garbage. It's *important* to publish things that are probably wrong. Journals are a conversation, not a bible containing absolute truth.
Re: (Score:2)
Then why are there countless articles about it not working? Feel free to google.
Re: (Score:2)
There are countless articles about Elvis being sighted too.
Peer review has problems for sure. This isn't one of them.
Re: (Score:2)
People often talk about peer review as if it's a panacea, but this and many other reports have shown that the current state of peer review is an utter failure
Peer review is a low-pass filter. It ensures that the really bad stuff has been filtered out. It improves the quality of papers that get published (basically by proofreading). It doesn't ensure that a study is reproducible.
That's what you get... (Score:5, Funny)
Concerning part (Score:2)
Others mention the issues at work here. However, when those issues lead to scientists not helping each other reproduce results, that will be the death of meaningful science.
I wonder if someone can summarize why they failed, giving a breakdown. I am sure it's covered in the paper.
Re: (Score:2)
Re: (Score:2)
And in both cases something is dreadfully wrong with science... If you're the head of a lab and you are too busy, fine, but consider delegating. This project published results showing that a bunch of experiments failed to reproduce, which in itself seems like quite the achievement. Having my lab "called out" in this way would be embarrassing, though I do not think the publication goes about it in that manner. Not supporting reproducibility in your own research is like admitting it's horse shit.
Btw, I say this as a
From the referenced article: (Score:2)
Stock tip: digital storage manufacturers and resellers
This is why IBM Watson failed (Score:5, Interesting)
as a professional scientist in industry, this number is not surprising at all. However, putting this in a bigger picture, the current attempts to apply machine learning to biology keeps on failing miserably because of exactly this point. The ML people just feed literature into their algo, not understanding that the signal-to-noise ration is so high nothing reliable ever comes out
Re: (Score:3)
The ML people just feed literature into their algo, not understanding that the signal-to-noise ration is so high nothing reliable ever comes out
I think you meant to say "the signal-to-noise ratio is so low". Personally, I always strive for as much signal and as little noise as possible... ;-)
Re: (Score:2)
That failure of Watson could be used as an algorithmic test of whether the problem is fixed in the future: when a machine chewing on all the collected results starts returning consistent results, then we could believe that there's enough signal to trust the aggregate body of knowledge.
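A minimal sketch of that failure mode, assuming a toy dataset and scikit-learn (everything here is hypothetical; it only illustrates the point about training on noisy claims, not any real literature-mining pipeline): if roughly half the "findings" a model learns from are wrong, the labels carry almost no information.

```python
# Sketch: with ~50% of training labels flipped (half the published claims
# don't hold up), a model trained on them performs near chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)     # the real underlying signal

# corrupt ~50% of labels, standing in for claims that don't replicate
flip = rng.random(len(y_true)) < 0.5
y_noisy = np.where(flip, 1 - y_true, y_true)

X_tr, X_te, yn_tr, _, _, yt_te = train_test_split(
    X, y_noisy, y_true, test_size=0.5, random_state=0)

clf = LogisticRegression().fit(X_tr, yn_tr)      # trained on the noisy labels
print("accuracy vs. ground truth:", clf.score(X_te, yt_te))   # hovers near 0.5
```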
"Publish or perish" at its best (Score:4, Interesting)
"Publish or perish" at its best. If you spend months on a study, you must publish some significant result. "We didn't find anything" is not publishable, so the scientists fudge their data and twist the statistics into pretzels. It's no surprise that they aren't helpful, when someone is trying to reproduce their results, because they know what's going to happen.
Reproducing studies is important work. Not only because of academic malfeasance, but because studies can also simply be mistaken, through a variety of legitimate error sources.
No paper should be published without full and complete data. What went into the study? How was it conducted? What raw data was produced? What computer code was used? Etc. That should all be permanently available, and checked by the reviewers, so that anyone can attempt to reproduce the study. Incomplete or inconsistent information means no publication.
Re: (Score:2)
Sometimes you find something negative that is really important. I have seen people do some really amazing work that proved that something we had long thought was true was actually wrong, and exactly why. It really added a lot of useful information, but that stuff is very hard to get published. If you want to graduate you had better turn that into a success in some way or you are screwed.
But is the science bad? (Score:5, Insightful)
There's non-reproducible and then there's bad science. Those are frequently one and the same but not always. Data from the LHC for example can't be reproduced anywhere else but that doesn't make it bad science. That's an obvious example, but the same issue appears more subtly in other fields. Almost all experiments have a set of variables that are under control, and ones that aren't under control but may or may not make a difference in your final result. Many labs have spent years finding those uncontrolled variables and getting them under control, allowing them to get data with higher signal to noise than if they let those variables go uncontrolled. Some of that is "institutional knowledge" passed from grad student to grad student (e.g. "we found that we need to autoclave this spatula between uses to prevent bacterial crossover, even if you don't think it needs it"; or "we found that the hplc-grade water from supplier A is higher purity, so we use that"). Ideally this is all laid out in papers, but not always.
So the question is whether lab #2 that tried to reproduce the result had the same level of expertise and institutional knowledge, or if their data was noisier due to lack of that knowledge.
Either way, there's also the possibility that Lab #1 had great data, or Lab #2 had bad data because there was an uncontrolled variable in both cases that biased their results.
My takeaway from this sort of study is that there's either a lot of finesse to getting these results right, or there are unknown variables that need to get controlled to determine if the original results are correct.
I'm dismayed by the lack of support from previous researchers and hope they are explicitly called out.
Re: (Score:2)
I've been told "reproducibility" is a requirement of good "science". Otherwise it's anecdotal.
What you're saying is similar to the Vienna Beef story about moving buildings. But the goal of science is understanding the why. So if an experiment is not reproducible, then the results are not valid because you haven't ascertained the cause.
If the second lab doesn't have the institutional knowledge to reproduce the first lab's results, then it's up to the first lab to help the second lab gain the knowledge, becau
Re: (Score:2)
Most experiments do have reproducibility built in, i.e. they do the same experiment on 10 mice instead of just one. Or they will repeat the same experiment over and over. So in that regard it's usually reproduced. The problem is when the same experiment is reproduced in a different lab. Unfortunately writing something like that into a grant proposal (having two labs do nominally the same thing) would probably be a non-starter, albeit admirable.
I'm reminded of a photoelectron spectroscopy battle I had in
Re: (Score:2)
This is not actually always true. For instance, if you are trying to breed yeast to produce and survive higher concentrations of ethanol, it is common to use random mutation and selective breeding. Even if the odds are extremely low, it only has to work once for you to make more of that organism. Anyone can take your sample, grow it, and verify it works, but if they try the same experiment they probably won't end up with the same result, as sketched below.
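A toy sketch of that point (all numbers are made up for illustration; this is not a model of any real protocol): the beneficial mutation only has to appear once, and when it appears is random, so identical runs end up in very different places.

```python
# Toy mutate-and-select loop: a rare beneficial mutation, once it appears,
# gets amplified by selection, so the outcome depends heavily on luck.
import random

def run_selection(rounds=30, pop=1000, p_mut=3e-5, seed=None):
    rng = random.Random(seed)
    resistant = 0
    for _ in range(rounds):
        # random mutation: each non-resistant cell has a tiny chance to adapt
        resistant += sum(rng.random() < p_mut for _ in range(pop - resistant))
        # selection: resistant cells outgrow the others between rounds
        resistant = min(pop, resistant * 2)
    return resistant

for seed in range(5):
    print(f"run {seed}: {run_selection(seed=seed)} resistant cells out of 1000")
```

Anyone handed the winning culture can grow it again, but rerunning the selection from scratch won't reliably land on the same strain.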
Progress (Score:2)
And it's not the same across all fields of science. Results in the harder sciences like chemistry and physics are much more reproducible. Of course, with the proliferation of journals, and publications, most of them aren't worth repeating anyhow.
Re: (Score:2)
Amgen and Bayer have done the same things. Many companies have at this point.
I find that engineering journals have a FAR higher rate of being correct. Usually above 95% in my experience. Many scientists don't like publishing in engineering journals because they get low citations and thus don't help their careers very much. There is a lot that goes into getting hired: journals have scores, your points get added up, and they become part of your resume for getting jobs.
If you can't replicate the results of your peers (Score:2)
try searching for a developer exit! (old Mario Maker wisdom). ;)
Social "Science" is complete quackery (Score:2)
Not news I would want to hear (Score:2)
If I'm a cancer patient seeking a cure, the last thing I want to hear is that the treatment they're prescribing may not work because they skipped a few steps.
Oh wait, we're doing that with a pandemic right now so I guess it'll just be the new normal.
Re: (Score:2)
Don't confuse basic science results in cancer research with approved therapies. Randomized clinical trials are very expensive, but they're also the only way to be sure a therapy actually works. We do not want to set the bar that high for more basic results (i.e. the ones that don't get injected into cancer patients) because nothing would ever get done.
Re: (Score:2)
Disclosure is disclosure, and it's up to the patient to decide what risks they're willing to take vs. the potential benefits of a therapy. What's astonishing from TFA is the lack of reproducibility; to me that means a lack of professional standards. Medicine isn't an exact science, but at least you should be assured that a therapeutic will actually help the condition, not just placebo it.
Re: (Score:2)
Research studies test on very few people, and they can't control for a lot of factors because of that. If you hold them to the same standards as a clinical trial, almost none of this research will get done anymore and most new medicines will stop being created.
Re: (Score:2)
This isn't medicine, it's biology. As I said in my post, don't confuse basic science results in cancer research with approved therapies.
I don't know what you think disclosure is or what it has to do with this topic.
Half don't hold up (Score:2)
is another way of saying that bullshit on the edge of statistical significance is being published.
Let's make a useful analogy. Picture a bell curve representing the natural noise in any measurement. If the bell curve is dead center on the threshold called "statistical significance" (derived from a prior noise model and some assumption about underlying reality), then exactly half the stuff will be "right" and half will be "wrong" just from experimental error.
How can the bell curve of experimental er
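For what it's worth, a quick simulation of that picture (assuming a plain two-sample t-test and a true effect chosen to sit right at the detection threshold; the numbers are illustrative, not taken from the eLife papers):

```python
# Sketch: when the true effect sits at the edge of statistical significance,
# roughly half of honest, identical replications come out "significant"
# and half don't, purely from sampling noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, reps = 20, 0.64, 10000   # effect size gives ~50% power at alpha=0.05

hits = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    hits += stats.ttest_ind(treated, control).pvalue < 0.05

print(f"fraction of replications reaching p < 0.05: {hits / reps:.2f}")  # ~0.5
```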
No penalties for lying in academia (Score:2)
Re: (Score:2)
Actually this is false. If you are caught committing academic fraud your career is normally over. If that fraud was used to get your degree then your degree is normally also revoked. The penalties are quite harsh.
Wait, what? (Score:2)
Your pessimism doesn't match the real world stats (Score:5, Insightful)
I don't donate any money to cancer charities because of shit like this. The worst thing that can hold back actually finding more effective treatments for cancer is bad research wasting everyone's time trying to make something work when it doesn't.
I remember growing up in the 80s and 90s when diagnosis of the big C was still basically death (the 70s and before were of course worse) and I think you are really throwing the baby out with the bathwater here. Cancer research works, the proof is in the survival rates. Over the last decade, mortality rates for all cancers combined have decreased by around a tenth (9%) in the UK (source: CRUK). That's not just a number CRUK can make up, it's based on statistics from our public health system here. Yes there is crap research out there, but most of it is pretty damn good stuff.
Re: (Score:2)
This study shows that most of it is not damn good stuff. 54% of it is shit, and their researchers are shady.
Re: (Score:2)
No, the research can be good, but they can still fail to reproduce the results. They may have failed to reproduce the actual test by failing to understand the test conditions, or because the original research doesn't state the conditions accurately enough (as in the discussion about which mice to use).
The problem is two-fold, bad research making it into the accepted canon, but also valid research being thrown out because it can't be reproduced.
I do agree with your point about early detection/changes in beha
Re:Your pessimism doesn't match the real world sta (Score:5, Interesting)
And you've got people like Elizabeth Bik, on her own time, doing simple things like finding duplicated and touched up images in research papers that can be nothing but fraud. And she gets attacked for exposing all the shady shit that medical researchers do. That is what I find worth funding, and in this case I put my money where my mouth is.
Re: (Score:2)
Oh, I understand completely where you are coming from and agree. I was just trying to provide some nuance.
Re:Your pessimism doesn't match the real world sta (Score:4, Informative)
Medical Research is Really Bad (Score:2)
Medical research is particularly bad because it's often run by "Doctors" who think that being a doctor means they know how to do research. But they are not PhDs and know nothing about setting up good studies. They just get paid by vendors to do "research." I've seen studies, vendor-funded for their laser machine, that had just 6 participants.
Re: (Score:2)
For many of the studies they couldn't reproduce, the original researchers would not help them at all. The project reached out to them; they didn't just discard a result at the first sign of it not being reproduced. The fact that this happens is incredibly shady.
This is dishearteningly common in cancer research.
Re: (Score:2)
I'm not sure if I understand the concept of "valid research that can't be reproduced".
I mean, I get it, if you're talking about studying, say, the entire planet earth and cloud albedo due to cosmic ray fluctuations, it's not like you can spin up another planet earth as a control - but "reproduced" is more than "run the same experiment again", it's also "literally run the python code used to process the dataset" (although you might need to make sure you run it on the same processor type too in order to get i
Re: Your pessimism doesn't match the real world st (Score:2)
Re: (Score:2)
Most of that is due to early detection. The rest is a constant push to reduce cigarette and alcohol consumption, plus sunscreen use, and maybe varied diets.
You know as well as I do that diet and screening are unlikely to explain a difference of such magnitude, especially as if you go back 20 years it is a drop of nearly 25% (see Francis Collins' recent interview). We have had screening programs for decades and they've done a lot of good. Unfortunately, for many types of cancer, no matter how early you catch it, if you don't have an effective treatment there's nothing you can do. Childhood leukaemias are a classic example: 60 years ago almost nobody lived, now it's
Re: (Score:2)
You know as well as I do that diet
You know as well as I do that I put diet last, and said "maybe", and that I listed other more important things before that.
Given that smoking, alcohol, and UV are the causes of the most common cancers, reduced intake of or awareness around those can easily account for the drop. Early detection of very common cancers, like skin, lung, and bowel cancers, helps reduce the numbers a lot.
It is also a problem with how we fund and train scientists, in a survival of the fittest model. I can guarantee that many of the unhelpful responses they get are because the corresponding authors don't actually know the answer.
And throwing money at researchers who don't know how to research is going to solve the problem?
That
Re: (Score:2)
One of the issues is that the success of many experiments relies on the lab skills of the experimenters. In chemistry it is very well known that the same reaction can be carried out successfully by skilled practitioners but fail for unskilled ones. There are many manual operations and steps.
It is like saying most people fail when they try to sculpt statues, so the art school projects are not reproducible.
Re: (Score:2)
Over the last decade, mortality rates for all cancers combined have decreased by around a tenth (9%) in the UK (source: CRUK).
So if it was basically a 100% chance of death in the 80s, it's 91% death in the 2020s? Hmm.
No, he said 'decade', so if it was a 100% chance of death in 2010, it's a 91% chance of death in 2021 .... exceeept ... it isn't, because a cancer diagnosis wasn't a 100% chance of death in 2010 in the first place. When dealing with something as resilient, complex and lethal as cancer, I'd say a 10% reduction in mortality rate over a decade is a pretty good result.
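To put illustrative numbers on it (not CRUK's actual figures): a 9% relative drop in the mortality rate means something like 280 cancer deaths per 100,000 people per year falling to about 255 per 100,000 (280 x 0.91 ≈ 255), not a patient's chance of dying falling from 100% to 91%.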
Re: (Score:2)
Many experiments take years to do. You would have to give the PhD just for reproducing the work, depending on how much effort it took. Most of the new ideas are not whacky at all. I have worked on topics where everything we knew ahead of time told us that a certain method should work. It took almost a year to find out why it would not work, and it revealed a lot of important information on the subject. The reasons it would not work were subtle and not remotely obvious. That information is used today in industry
Re: (Score:3, Funny)
Still holding out on that ivermectin study done in Egypt?
Re: (Score:2)
True. Even though some studies use completely made up data, some studies move us along. But the faulty studies can also cause a lot of harm.
Re: (Score:2)
My take from this is that the cancer people probably aren't publishing enough.
You want most published papers to be wrong. If they're not, it means that the many years long work you mentioned is all being done in secret in one lab or among a small group of labs that chat through backchannels.
Re: (Score:2)
Can the gift sets be at least used to confirm or infirm lab results for many cancer experiments? You know, maybe try to get your spam right, we have standards here.