Computer Modeling Failed During the Ebola Outbreak 193
the_newsbeagle writes: Last fall, the trajectory of the Ebola outbreak looked downright terrifying: Computational epidemiologists at Virginia Tech predicted 175,000 cases in Liberia by the end of 2014, while the CDC predicted 1.4 million cases in Liberia and Sierra Leone. They were way off. The actual tally as of January 2015: A total of 20,712 cases in Guinea, Liberia, and Sierra Leone combined, and in all three countries, the epidemic was dying down. But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.
wrong is right (Score:1)
Re:wrong is right (Score:5, Insightful)
Well, IF there hadn't been a very robust response, it could easily have been that bad.
Re: (Score:2)
Exactly. Do those asking the question "did the modelers get it wrong?" think that the models can actually account for the level of response there will be from every country in the world that has the ability to help mitigate the spread of the disease?
I can see it now... epidemiologists sit down, come up with a model of the outbreak based on what they know about how the disease spreads, and where it's starting from, and then ask themselves "OK, now what's the World Aid Fudge Factor?".
Re: (Score:3)
Goes without saying that I haven't read the article, but that's my question. If the model was "this is what will happen if no action is taken" then the model may or may not have been right. But you certainly can't say that it's wrong because...action was taken. What would be interesting is akin to what you say: not a "fudge factor," but aid as another independent variable in the model. "Here's the outcome with no aid. Here's the outcome with these different types and quantities of aid."
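For what it's worth, aid-as-a-variable is exactly how you'd structure it. Here's a toy sketch (every number is invented for illustration; this has nothing to do with the actual Virginia Tech or CDC models):

```python
def sir_outbreak(beta, gamma=0.1, pop=1_000_000, i0=10, days=365):
    """Toy discrete-time SIR model. beta is the daily transmission rate.
    All parameters are made up for illustration, not fitted to Ebola."""
    s, i, total = pop - i0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i / pop   # new infections this day
        s -= new_inf
        i += new_inf - gamma * i       # a fraction gamma recovers each day
        total += new_inf
    return total

# Aid enters the model as its own input that reduces transmission,
# rather than as a fudge factor applied to the output afterwards.
for aid in (0.0, 0.3, 0.6):           # fraction of risky contacts prevented
    print(f"aid={aid:.0%}: projected cases ~ {sir_outbreak(0.25 * (1 - aid)):,.0f}")
```

Same model, three scenarios: the "1.4 million cases" headline and the "it fizzled out" reality can both be outputs of one correct model, just with different values of the aid variable.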
Re:wrong is right (Score:5, Insightful)
But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.
so.... how about those climate models out there????
So, you're saying that climate models that do not reflect the mobilization of international efforts mean that we should not attempt to push for international efforts to ensure that those worst-case predictions do not happen?
Climate science is always evolving. Scientists learn more about the planet and how different aspects of our planet's behavior interact, and they discover new aspects through this process. I don't think there's a lot of argument that humans are taking huge carbon deposits that are the result of plants using carbon from the air as building material in their structures and reintroducing that carbon into the atmosphere again. The debate is what that does to climate.
Re: (Score:2)
But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.
so.... how about those climate models out there????
So, you're saying that climate models that do not reflect the mobilization of international efforts mean that we should not attempt to push for international efforts to ensure that those worst-case predictions do not happen?
Climate science is always evolving. Scientists learn more about the planet and how different aspects of our planet's behavior interact, and they discover new aspects through this process. I don't think there's a lot of argument that humans are taking huge carbon deposits that are the result of plants using carbon from the air as building material in their structures and reintroducing that carbon into the atmosphere again. The debate is what that does to climate.
I think the more salient point is our call to action should take into account uncertainties within the models. If the actions we call people to are costly, people should reasonably expect that the evidence brought forward is certain enough to justify the cost...
Modelling climate is insanely challenging: the scope is our entire planet, the components involved number in the hundreds or thousands, and the interactions between them are almost universally dependent upon one another. Climate models ar
Re: (Score:1)
Re: (Score:2)
Excuse me, it is politically incorrect to doubt the climate change models.
The OP didn't mention climate models nor provide any basis for doubting climate models.
Besides which, most people who cast aspersions on the accuracy of climate models fail to recognise the consequences of that argument. If climate models are inaccurate then there can be no basis for the denialist claim that changing the concentration of greenhouse gases in the atmosphere won't impact the climate. Denialists need models, and they need those models to agree with them.
Re: (Score:2)
Denialists need models, and they need those models to agree with them.
No, they don't. All they need to do is take the model output and compare it to reality.
Re: (Score:2)
Luckily for them, nobody believes their lies.
Re: (Score:2)
I suppose you might panic, but saying the models are wrong doesn't make me panic, because it appears the situation is much better than predicted.
Feel free to point us to a reputable journal in which you've published your analysis of the present situation, including, specifically, how you extrapolated from current observations to a prediction of future events.
I don't suppose anybody believes the lies,
That's quite possible.
I've engaged in conversations where denialists make an assertion, and when rebutted, come back a few days later and make the same assertion again. It's impossible to exhibit those behavioural traits without deliberately concealing that you know you are wrong.
You're rig
Re: (Score:2)
.... and being downmodded to 'troll' proves my assertion.
Re: (Score:2)
.... and being downmodded to 'troll' proves my assertion.
There isn't a "-1 delusional fucking idiot" mod option on slashdot.
Re: (Score:2)
.... and being downmodded to 'troll' proves my assertion.
There isn't a "-1 delusional fucking idiot" mod option on slashdot.
Thank you for your well-thought-out and courteous reply.
If you think it's politically correct to question climate change models (the converse of my assertion), try it and see what happens.
Re: (Score:2)
Re: (Score:2)
No, he didn't. The idea that true science obeys strict Popperian laws is false.
Re: (Score:2)
No, he didn't. It would appear that you're the one who is confused.
Re: (Score:1, Flamebait)
Computer models when taking into account no action obviously do not match computer models taking into account action. Obviously idiot lead head (lead head being an interesting failure of greedy corporations to forecast the outcome of lead in fuels used in cities) conservatives do not realise this, hardly a surprise. Normal computer modelling practice is to create several models that reflect different scenarios and responses. To look at the models and ignore the different forecasted possible responses and s
Re:wrong is right (Score:5, Insightful)
This is almost a cliche, it's exactly what happened after Y2K. We saw a potential threat, a huge one, and a way to prevent it. We mustered great resources to prevent it - and succeeded. But unlike in the movies those who prevented the threats were not celebrated - immediately afterwards they were accused of having made up the threat to justify the resources.
It's a fundamentally stupid failure of logic, but it happens over and over. If you manage to prevent a threat from realizing, people claim the threat was never real.
Re: (Score:3)
This is almost a cliche, it's exactly what happened after Y2K.
Some time back in 1996, an employee at a British supermarket handling deliveries of goods tried to enter the best-before date of a can of beans. Tinned beans can be stored safely for a very, very long time. The beans that had been delivered were the first goods that supermarket received with a best-before date in 2000. The employee couldn't enter that date.
It was obvious then that if no action was taken, the supermarket would be in bad trouble in December 1999 and in deep shit in January 2000. It's also
Re: (Score:2)
Re: (Score:2)
But how is that any different from moving the goalposts? If they predicted something that didn't come to pass they're still wrong. Now it might be that they caused more work to be done than would have been, but it's still an incorrect model.
I predict if a brick falls on your head it will hurt. If you move your head out of the way I'm still correct. You seem to have some comprehension problems. In another model I predict if you move your head the brick won't fall on it and you won't get hurt. Pick a model.
Re: wrong is right (Score:2)
You're dead wrong, because it is falsifiable. Just look at what happened where nothing was done, while controlling for likely impact (that means excluding anybody who was largely unaffected or who didn't need to act due to upstream efforts). That shows a different picture, including one nuclear power plant that actually shut down.
There are very few legitimate examples. Most who did nothing were using systems like Unix which were never at risk. They did nothing because they didn't have to. Many of the rest got helped
Re: (Score:2)
Oh great resources WERE mustered in third world countries.
I was one of them, and I live in a third world country.
In CS, there is a thing known as ... (Score:2, Insightful)
Garbage in ... Garbage out - GIGO
That computer simulation failed simply because the inputs the program was fed were erroneous
Re:In CS, there is a thing known as ... (Score:5, Insightful)
Certainly not all the input was inaccurate. There could have been incorrect accounting for the effectiveness of education or news efforts. The medical personnel may have improved faster than predicted. Mobility limitations might have reduced the spread to a manageable rate. Or it could simply be the outcome was a 1:20 chance that beat the predicted odds. There are way too many variables to even know which was the least accurate, but I wouldn't claim any were as bad as "garbage".
Re: (Score:3)
If there are way too many variables, then it probably is a really poor candidate for simulation in the first place, and it is just garbage in, garbage out.
Re:In CS, there is a thing known as ... (Score:4, Insightful)
Complex systems have a lot of variables; that doesn't make them poor candidates for modeling. On the contrary, you simply have to rerun the model as you learn more and better data.
The prediction of "a million victims" was made in the earlier stages of the outbreak and grabbed the attention of the world, so we more clearly remember it as it was on that day. But just because we remember what the media said in October of 2014 doesn't mean they weren't continually working on it. After the model started reversing course the headlines stopped being so alarmist, and so the general public barely remembers the much less dramatic follow-on news "Ebola trending downwards", "revised estimates", etc.
Was this a deliberate attempt by the people generating the model to drum up public support? Was this simply the media grabbing on to the worst case scenario because it made the best headlines? Was it an honest mistake in reporting? Perhaps the "garbage in -- headlines out" came from a source other than the model's data.
Re: (Score:2)
If there are way too many variables, then it probably is a really poor candidate for simulation in the first place, and it is just garbage in, garbage out.
What nonsense. On that basis we wouldn't even bother trying to do weather forecasts at all. But, in fact, we are steadily improving their accuracy.
Re:In CS, there is a thing known as ... (Score:5, Interesting)
There could have been incorrect accounting for the effectiveness of education or news efforts. The medical personnel may have improved faster than predicted.
The educational effort was far more effective than expected. Medical personnel had very little to do with it. Ebola was stopped by convincing people that they should wash their hands with soap. That turned out to be easier than expected, in spite of low literacy rates.
Re: (Score:1)
Computational epidemiology (Score:4, Insightful)
People who work in population dynamics know that the models are based on a very crude understanding of the disease and ultimately general epidemiological assumptions. The fact that there are basic assumptions is sometimes disguised in the process of making fancy computer models.
These models may be the best predictions that people can make. However, GIGO (garbage in, garbage out) still applies. However, sometimes the best predictions are not good enough since they can be very misleading.
(I used to work on similar models and became disenchanted; I need to post anonymously.)
Not surpising. (Score:5, Interesting)
I've been involved in contracts that had public health modeling components. Being "way off" is not necessarily a proof the model is no good when you're modeling a chaotic process which depends on future parameters that aren't predictable. In our case it was the exact timing of future rainfall. In their case it probably had to do with human behavior. A small thing, like an unseasonable rainstorm, or an infected person showing up in an unexpected place, can have immense consequences.
You look at all the data you have, and you think, "Hey, this is a lot of data, I should be able to predict stuff from it," but the truth is while it looks like a lot of data it's a tiny fraction of all the data that's out there in the world -- and not even a representative sample. So you have to guess "plausible" values, and if they're wrong you might not see the kind of result that eventually happens, even after many model runs.
So in most cases you can't expect a computer model to have the power to predict specific future events. It can do other things, like generate research questions. One of our models suggested that having a lot of infected mosquitoes early in the season reduced human transmission of a certain mosquito borne disease later in the season, which was a surprising result. When we looked at it, it turned out that the reason was that the epidemic peaked in the animal population early in the season before people were out doing summer stuff and getting bit. Does that actually happen? We had no idea, but it sounded plausible. The model didn't give us any answers, it generated an interesting question.
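That run-to-run spread is easy to see even in a toy branching-process model (every parameter below is invented for illustration, not estimated from any real disease):

```python
import random

def outbreak_size(p_transmit, contacts=4, generations=12, rng=random):
    """Toy Galton-Watson branching process: each active case exposes
    `contacts` people, each of whom is infected with probability
    p_transmit. Returns the total number of cases."""
    active, total = 1, 1
    for _ in range(generations):
        new = sum(1 for _ in range(active * contacts)
                  if rng.random() < p_transmit)
        total += new
        active = new
        if active == 0:        # the chain of transmission died out
            break
    return total

rng = random.Random(42)
runs = sorted(outbreak_size(0.3, rng=rng) for _ in range(1000))
# Identical inputs, wildly different outcomes: most chains fizzle, a few explode.
print("median:", runs[500], "95th pct:", runs[950], "max:", runs[-1])
```

With these numbers each case infects 1.2 others on average, yet most simulated outbreaks die out almost immediately while a minority grow large. That's the "1:20 chance that beat the predicted odds" problem in miniature: even a correct model can't tell you which run reality will pick.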
Re: (Score:2)
"I've been involved in contracts that had public health modeling components. Being "way off" is not necessarily a proof the model is no good when you're modeling a chaotic proces"
Well, in fact, it is. If you know you are working with chaotic dynamics, you know any prediction aside from the existence of an attractor is moot.
Either your model was not chaotic but was still trying to model a chaotic phenomenon, in which case the model was wrong; or the model was chaotic but was presented as if it were not, being
Re: (Score:2)
Or ... the chaotic bits may lie outside your nice linear model. Either way, prediction of future events is off the table.
Re: (Score:2)
Look, weather might be chaotic, but we can say it will rain next week,
... or even better: we are able to say with almost 100% certainty that it will rain sometime next year (we're just unable to tell the specific days, obviously...)
So, even if some aspects of a chaotic system are not predictable (almost by definition...), others are.
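Exactly. The classic demo of this is the logistic map, a textbook chaotic system (not a weather model, just the standard illustration): a tiny nudge to the starting point destroys any specific forecast, but the long-run average stays predictable.

```python
def logistic_traj(x0, r=4.0, n=100):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = logistic_traj(0.2)
b = logistic_traj(0.2 + 1e-9)          # same start, nudged by one part in a billion
print("step 10 difference:", abs(a[9] - b[9]))    # still tiny
print("step 50 difference:", abs(a[49] - b[49]))  # order 1: the trajectory is lost

# Yet a statistical property remains predictable: for r=4 the long-run
# mean is known to be 0.5, regardless of the (non-special) starting point.
long_run = logistic_traj(0.2, n=10_000)
print("long-run mean:", sum(long_run) / len(long_run))
```

Specific states: unpredictable. Averages over the attractor: predictable. Both statements are true of the same system.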
MORE importantly, in this case the modeling was (Score:5, Insightful)
done precisely in order to encourage behavior that would change the inputs to the model.
Nobody looks at cigarettes today and says, "Gosh, nobody smokes anyway and death rates are coming down, there was no need for all that worry, smoke away!"
The whole point of the data in that case (and in this one) was to encourage the world to change behavior (i.e. alter the inputs) to ensure that the modeled outcome didn't occur.
To peer at the originally modeled outcome after the fact and say that it was "wrong" makes no sense.
When we tell a kid, "finish high school or you're going to suffer!" and then they finish high school and don't suffer a decade down the road, we don't say "well, you didn't suffer after all, guess there was no point in you finishing high school!"
That would be silly. As is the idea that the modeling was wrong after the modeling itself led to a change in the behavior being modeled.
Re: (Score:2)
And all the climate change deniers are considered nuts for thinking that the scientists don't have the climate models right?
Without proof? Yes, this amounts to conspiracy theory.
Also climate deniers tend to make a very specific prediction with regard to climate: that the climate will remain pretty much the same regardless of the concentrations of greenhouse gas, or alternatively, that negative feedback will effectively overwhelm any positive feedback (leading to climate staying essentially the same). These assertions require proof, and this proof requires predictive modelling.
Re: (Score:3)
Re: (Score:2)
Then rigorously show how this is the case. It's really that simple. The fact it has not been done yet should tell you something.
I'm not sure of the 'this' that needs to be proven, but I'm going to interpret it as the GP statement "scientists don't have the climate models right" as something that isn't nuts or crazy.
From Chapter 9 [www.ipcc.ch] of the IPCC AR5, complete with more than a half dozen citations:
For instance, maintaining the global mean top of the atmosphere (TOA) energy balance in a simulation of pre-industrial climate is essential to prevent the climate system from drifting to an unrealistic state. The models used in this report almost
Re: (Score:2)
And all the climate change deniers are considered nuts for thinking that the scientists don't have the climate models right?
Proof positive that ANY computer model can be inaccurate. What is more dynamic and chaotic than the atmosphere?
Let's just give up on doing science altogether. After all, gravity and evolution and relativity are just theories not FACTS, so if I believe that the universe was created 6000 years ago it's equally as right as any so-called scientific theory.
The most important thing we've learned from this (Score:1)
Re: (Score:1)
Well Stated, Reasoned and Accurate. No surprise that someone modded you overrated.
Anyway, just to add to the statement: everyone has biases, computer models tend to amplify those biases to absurd proportions, and you can only hope whoever is running the model is doing a good job of validation and doesn't have an agenda.
Re: (Score:2)
Re: (Score:2)
You really don't know how any of this works, do you? This is becoming more and more obvious with every post you make. Learned the difference between sea ice and land ice yet? :)
:D, why don't you show the good people my errors.
Re:The most important thing we've learned from thi (Score:5, Insightful)
No, it doesn't show that. The point of the computer models is not to predict exactly how bad the outbreak will be. What good would that do? All you have to do to find out how bad the outbreak will be is wait. What computer models do is give us some kind of idea of how seriously we should take the situation. For that, the models did a fine job. They probably shouldn't have been bandied about so much on the news, but that's not a problem with the science--that's a problem with the science reporting, which is a well known problem.
But it's really, really frustrating when people predict a possible bad outcome and suggest steps be taken to prevent it, and then steps are taken, and then the bad outcome doesn't happen, possibly because the steps were taken (it's never possible to know for sure) and then somebody says "you cried wolf." No. Crying wolf is when you lie about a threat you know doesn't exist. The Y2K threat wasn't crying wolf, and this wasn't crying wolf. What both things were were attempts to mitigate a very real risk the severity of which was uncertain. The fact that we didn't have a massive breakdown in 2000, and that we didn't have an Ebola pandemic, are both really good outcomes.
Re: (Score:2)
this shows just how bad an idea it is to put too much trust in computer models
What's this? What exactly did the output of their model harm?
If anything, it was a reality check reminding people who don't study the spread of disease just how bad things can get if something this harmful goes unchecked.
Feature (Score:2)
"Its not a bug...... Its a feature!"
This is like a car manufacturer claiming that their car will have 1,000 horsepower, and after several months/years the people who have preordered it finally get theirs and find out it has 20 horsepower, and the manufacturer says it's a good thing because it makes the car safer.
Re: (Score:3)
This is like a car manufacturer claiming that their car will have 20 horsepower, but much more if they look after it, and after several months/years the people who have preordered it finally get theirs, and having been careful about looking after it, find out it has 1,000 horsepower, and the manufacturer says it's a good thing because it encouraged them to be careful.
Re: (Score:2)
"Its not a bug...... Its a feature!"
This is like a car manufacturer claiming that their car will have 1,000 horsepower, and after several months/years the people who have preordered it finally get theirs and find out it has 20 horsepower, and the manufacturer says it's a good thing because it makes the car safer.
Thanks for adding to the glorious slashdot pantheon of completely fucking stupid car analogies.
Y2K not as bad as predicted either (Score:5, Insightful)
The problem with blaming prediction models after the fact is that the models were based on the assumption of current and continued support. Ebola, just like the Y2K bug, would have been every bit the disaster it was predicted to be, had it not been for the efforts involved in preventing it.
End result, all the people who did the hard work making the world aware of the problem are blamed for crying wolf.
Re: (Score:3, Insightful)
You don't know what "chaotic" means, and you've flown on airplanes designed by computer models. When's the last time you saw a new wind tunnel being built?
Re: (Score:3)
they still do wind testing
Re: (Score:2)
The Wind Shear's Full Scale Rolling Road was built in 2008, but I guess that also isn't currently under construction so you still might cling to that...
The underlying important point remains. When it comes to engineering planes and automobiles, computer modelling is heavily used, but real-world tests like wind tunnels are also still in use because the models are not yet perfect.
Re: (Score:2)
Some of the time, when you do something to avert what you perceive as an impending negative consequence, it doesn't help. Doing nothing in response to external stimuli meant to conjure up angst in you helps an alarmingly small to infinitesimal percentage of the time.
Doing is statistically likely to elicit a better outcome for you than not doing.
Re: (Score:2)
It just shows that you can not predict the future, from psychics to computer modeling it all fails because the future is inherently unpredictable.
Indeed, there's no better than a 50/50 chance the Sun will rise tomorrow.
Clown.
The modelers should learn from Paul Ehrlich (Score:3, Interesting)
To quote my beloved Bryan Caplan (http://econlog.econlib.org/archives/2015/06/the_sum_of_all.html):
"I didn't think there was anything more to say about infamous doomsayer Paul Ehrlich. Until he decided to justify his career to the New York Times. Background:
'No one was more influential -- or more terrifying, some would say -- than Paul R. Ehrlich, a Stanford University biologist... He later went on to forecast that hundreds of millions would starve to death in the 1970s, that 65 million of them would be Americans, that crowded India was essentially doomed, that odds were fair "England will not exist in the year 2000." Dr. Ehrlich was so sure of himself that he warned in 1970 that "sometime in the next 15 years, the end will come." By "the end," he meant "an utter breakdown of the capacity of the planet to support humanity."'
Okay, here's Ehrlich's side of the story:
After the passage of 47 years, Dr. Ehrlich offers little in the way of a mea culpa. Quite the contrary. Timetables for disaster like those he once offered have no significance, he told Retro Report, because to someone in his field they mean something "very, very different" from what they do to the average person.
In the video interview, Ehrlich elaborates:
'I was recently criticized because I had said many years ago, that I would bet that England wouldn't exist in the year 2000. Well, England did exist in the year 2000. But that was only 14 years ago... One of the things that people don't understand is that timing to an ecologist is very very different than timing to an average person.'"
Re: (Score:2)
He was confident enough about his dates to put money on it. [wikipedia.org] He lost very, very badly. Sorry, the business about not meaning those exact dates is just an alibi from a guy who missed very badly.
Opened Eyes (Score:2, Insightful)
...maybe the crazy predictions opened enough eyes to get the ball rolling on containment...therefore nullifying the predictions?
Pandemic threats are always overhyped. (Score:2)
Failed logic.... (Score:2, Troll)
So the ends justifies the means. Got it.
Most have a longer timeframe... (Score:1)
Better than missing the other way (Score:2)
computer models can't be wrong (Score:1)
Computers are infallible. If a computer 'says' something, we know that it is correct. The link between the assumptions that make the model and the model itself is seamless, and is science. In this 'science', experiment is not needed and is possibly even harmful.
You learn the truth about the world from building your computer model. This isn't the 19th century anymore. Now we have computers. Actually doing something out there in meatspace to observe what happens when you try it is crude and unnecessary and in an
Re: (Score:2)
1. People start dying from Ebola
2. Scientists model the outbreak, making predictions based on what is currently happening
3. The world's governments and NGOs see the model's predictions and decide that's not an acceptable outcome, and make an effort to reduce it
4. The effort is so effective that the outcome doesn't match the predictions which triggered the effort to be made in the first place
5. The prediction then doesn't match the outcome, so idiots on Slashdot think the models were inaccurate and useless.
You ever wonder (Score:1)
If math errors by computers, due to hardware or software, lead us to adopt wrong assumptions? Especially in physics or other high-end math fields.
Numbers reported incorrectly (Score:5, Insightful)
The way the model results are reported needs to change. The worst case results were presented to the public as the expected outcome. This is something between highly deceptive and unethical. (think yelling "Fire!" in a crowded movie theater.) The best, worst and average outcomes from the model need to be reported. Perhaps even two sets of best, worst and average outcomes. One with large scale intervention and one with zero intervention.
A very simple way to know when the model has failed: the model has failed when it makes 100 predictions with 95% certainty and more than 5 of the actual outcomes fall outside the bounds defined by the best and worst outcomes. Note: I said SIMPLE.
The modelers need to be careful about what they say. Next time they predict armageddon, no one will take them seriously.
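For what it's worth, the "simple" failure test above is just a coverage check, and it's only a few lines of code (the numbers here are synthetic, not the Ebola model's actual output):

```python
import random

def coverage(intervals, outcomes):
    """Fraction of actual outcomes that fell inside the (best, worst)
    interval each prediction claimed with 95% certainty."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, outcomes))
    return hits / len(outcomes)

rng = random.Random(0)
# Hypothetical: 100 predictions, each a 95% interval around a true value,
# with the outcome scattered around that value with unit noise.
truths = [rng.gauss(0, 1) for _ in range(100)]
intervals = [(t - 1.96, t + 1.96) for t in truths]   # well-calibrated width
outcomes = [t + rng.gauss(0, 1) for t in truths]
print(f"coverage: {coverage(intervals, outcomes):.0%}")
```

A well-calibrated 95% model should score near 95% here; a model that reports only its worst case as "the" prediction will fail this check badly, which is the reporting problem the parent is complaining about.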
Re: Numbers reported incorrectly (Score:1)
When it comes to an outbreak of Ebola 5% is not acceptable. Look up ROC curves where one sets a liberal or conservative criterion.
irrational rants pissing me off (Score:1)
What a bunch of idiotic posts. Model results are associated with a predicted range of probabilities (e.g. 80 percent chance of rain). We depend on weather reporting even when it's wrong on occasion. Why? ...because our weather models are pretty good (despite chaos).
Re: irrational rants pissing me off (Score:1)
Yet we can say that Spring is often wetter than Summer.
The rest of your post is just you spewing bullshit.
It usually does... (Score:2)
Computer modeling is vastly overrated. It is mostly based on the abstraction of trend lines, which is the assumption that existing trends will continue. That is less a prediction of the future than a picture of the present.
Look at the growth trend line of a six year old... then graph that out... in 10,000 years think how big he'll be!
Right?... that's what trend lines do... They're only useful if people that know what they are and how they work use them. Often as not, people that aren't educated or knowledge
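The six-year-old example really is the whole argument in miniature. A naive least-squares trend line on made-up (but roughly realistic) growth-chart numbers:

```python
# Naive linear extrapolation of a child's height trend.
# Hypothetical data, loosely based on typical growth-chart values.
ages =    [2, 3, 4, 5, 6]          # years
heights = [86, 95, 102, 109, 115]  # cm

# Least-squares slope and intercept, computed by hand (no libraries needed).
n = len(ages)
mx, my = sum(ages) / n, sum(heights) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ages, heights))
         / sum((x - mx) ** 2 for x in ages))
intercept = my - slope * mx

for age in (7, 18, 10_000):
    print(f"age {age}: predicted height {intercept + slope * age:,.0f} cm")
```

The fit is excellent over the observed range, plausible one year out, and predicts a 720-metre-tall adult at age 10,000. The trend line isn't wrong about the data; the extrapolation is wrong about the world.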
Re: (Score:2)
Computer modeling is vastly overrated.
I'm not convinced.
It is mostly based on the abstraction of trend lines.
Can you provide an example of modelling based on abstraction of trend lines?
Which is the assumption that existing trends will continue. That is less a prediction of hte future than a picture of the present.
In my experience, this false assumption (that arbitrarily short trends will continue, rather than the trends at the granularity of the model) seems to be the assumption made by people asserting that modelling is useless. Mind you, my experience mostly comes from dealing with morons.
Re: (Score:2)
examples of trend line based modeling...
Finance, population statistics, various biological modeling applications, and basically all weather modeling works this way.
Take the 2008 credit crunch. At the time, the finance industry was using a market model that abstracted all risk to a single variable.
So an investment would have a risk number and the higher that risk number the riskier the investment.
The problem with it was that it over simplified risk and it could only boil it down to a single variable by makin
Re: (Score:2)
Finance, population statistics, various biological modeling applications, and basically all weather modeling works this way.
Fascinating. So based on 4 examples of modelling based on abstraction of trend lines, you feel confident to declare that all modelling is overrated. Even though your examples don't include the model from the article.
As to long trends versus short trends, that's all subjective. What is long or short is arbitrary.
No it isn't.
And that is another big problem with trend lines, they do not show causation. They show correlation. Getting causation from a trend line is almost impossible.
Pretty sure nobody is graphing a trend line to find the cause of Ebola. We already knew what caused Ebola before anybody drew a line.
Re: (Score:2)
Because of four examples? Fuck off. What is your arbitrary number of examples you'd consider credible? 5? 50? How many do I need?
Give me a break.
As to the difference between long and short being arbitrary. It literally is actually. Jesus. Are you another idiot that doesn't know what the word arbitrary means?
I ran into about dozen of you fuckwits in the metric versus imperial discussion.
As to causation, people try to show causation with trend lines all the time.
They'll say "we did this thing at time X" and y
Re: (Score:2)
Look at the growth trend line of a six year old... then graph that out... in 10,000 years think how big he'll be!
As an aside, this appears to be the model used for valuing Apple shares. There seems to be an underlying assumption that we will soon discover FTL travel and find whole new alien galactic empires in need of iPhones, and watches with a day's battery life.
Re: (Score:2)
Exactly.
Ironically around the time most people start focusing on the graph it is usually about due to change.
That isn't a hard and fast rule or anything. I've just noticed that often, by the time anyone even notices a pattern, the pattern tends to change. It has happened so many times that whenever I hear someone talking about one pattern or another, I think "oh, well then it's going to flip"
I've actually made a lot of money on the stock market by doing that. By low and sell high... When people start running a
Re: (Score:2)
As I said, they work fine when people understand the data and they understand what they're talking about.
You do that and you understand that the kid isn't going to keep growing at the same rate for the next 10,000 years.
Trend lines are mostly useful for showing what happened, not what will happen.
As Mark Twain said, there are lies, damned lies, and statistics.
And that is just to explain that specious manipulation of numbers has been a thing in political and economic discourse for generations. It is
Um... so the model was correct (Score:4, Insightful)
Re: (Score:2)
In that case, you have no need to object to them.
But I suppose you would still allow that they have a negative effect.
Hard to predict (Score:2)
These kinds of events often follow a power law in how bad they get. It's notoriously hard (maybe impossible) to guess the magnitude of a power-law event in advance. Also, my guess is the models can't accurately account for adaptation in the behavior of ordinary people in the disease zone. That's the sort of factor that will almost by definition be under-predicted.
Models are for fear mongering, nothing more (Score:2)
Just wondering if the models came with a prediction score or some measure of their accuracy. As with climate predictions, the untold story is that the models are no more accurate than their inputs and the validity of the theories used to create them. You might expect models to come with warnings, but they don't, at least nothing that gets transmitted to the public.
As the world embraces 'big data' and the modeling it spawns, this should be a bit of a lesson. The worry should be: how many times can models
I disagree with this statement: (Score:2)
But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.
Sure, this worked this time in everyone's favor. But what about the next epidemic? Let's say the modeling is better next time (which it should be) and it predicts another disaster. What then? People will look to the modeling on Ebola and say "it's not going to be that bad" and regard the next warning more lightly. This does nobody any good.
A good ex
Here's the deal (Score:2)
If you keep making Chicken Little "the sky is falling" predictions, with computer models as your proof, to bring about some desired political change, then eventually you will lose all credibility and no one will listen to you anymore. You damage the reputation of the sciences, and certainly of computer models, when you do this. Knock it off.
Re: (Score:2)
If you keep making Chicken Little "the sky is falling" predictions, with computer models as your proof, to bring about some desired political change, then eventually you will lose all credibility and no one will listen to you anymore. You damage the reputation of the sciences, and certainly of computer models, when you do this. Knock it off.
I bet you're the sort of person who complains about people bringing politics into everything, by which they mean politics they don't agree with.
Re: (Score:1)
Sounds glorious, a reduction of population by at least 6 billion.
No- Re:Crying Wolf (Score:2)
TWENTY THOUSAND PEOPLE died and you claim this as 'crying wolf'???
If the international community hadn't jumped on this, it could have been way, way, way worse.
Second leading cause of death in the US... (Score:2)
...and the leading cause among children and young adults (5-24) is cars, with 33,804 deaths in 2013 alone.
6,510 of those were in the 15-24 age range.
Now... Maybe you're out there, in the night, prowling garages and parking lots, killing all those cars in their sleep in order to prevent further deaths.
Which would explain why you missed OP's point.
Which was that those "worst-case scenarios that mobilized international efforts" predicting "175,000 cases in Liberia by the end of 2014... [and] 1.4 million cases
Re: (Score:2)
Cars aren't a contagious disease that grows exponentially.
You might think there's a big difference between 20,000 deaths and 200,000 deaths, but with exponential phenomena you have to take logarithms: it's the difference between 4.3 and 5.3; the models were only off by 20%.
If the international response hadn't been what it was, it could have been 6 or 7, tens of millions of deaths.
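A back-of-the-envelope check of that arithmetic. The 20,712 figure is the actual case count from the summary; 200,000 is a round stand-in for the worst-case projections:

```python
import math

actual = 20_712      # reported cases as of January 2015 (from the summary)
predicted = 200_000  # round figure near the worst-case projections

# On a linear scale the prediction looks off by roughly a factor of ten...
linear_ratio = predicted / actual

# ...but for an exponential process the natural comparison is logarithmic:
log_actual = math.log10(actual)        # ~4.32
log_predicted = math.log10(predicted)  # ~5.30
relative_log_error = (log_predicted - log_actual) / log_predicted  # ~0.19

print(round(log_actual, 2), round(log_predicted, 2),
      round(relative_log_error, 2))
```

On the log scale the miss is about 20%, which is the parent's point: for a process that multiplies every generation, order of magnitude is the honest unit of error.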
Really, this whole post is the most dangerously stupid thing I've ever read. The modelling didn't fail, and even if you live in
Re: (Score:2)
Cars aren't a contagious disease that grows exponentially.
You might think there's a big difference between 20,000 deaths and 200,000 deaths, but with exponential phenomena you have to take logarithms: it's the difference between 4.3 and 5.3; the models were only off by 20%.
And if you look at it from a really high altitude it is practically the same number.
Or on a scale that only counts complete, round, millions - where 20 thousand and 200 thousand are exactly ZERO.
Just because the error SEEMS smaller when you decide to count only orders of magnitude, that does NOT mean the resulting error isn't HUGE.
Had the same model been used to predict whether a certain building code will produce earthquake-proof buildings, rating them a Richter 7-7.9 instead of 6-6.9 - it w
Re: (Score:2)
You're completely wrong on every point, flu is fucking scary to epidemiologists. I had swine flu, that was *awful*; but that was only slightly worse than normal flu.
But the flu killed more people in 1918 than the whole of WWI, and there's no reason to think that's the worst case. The 1918 flu took fit, healthy soldiers and civilians and left them dead within a day, surrounded by blood they'd coughed up. It had something like a 10% death rate.
Nobody can even predict earthquakes right now. An accurate series of prediction
Re: (Score:2)
Re: (Score:2)
Exactly. This is just like the Facebookers that (Score:4, Insightful)
spend their days posting, "Why in the hell do we vaccinate against polio? It's a scam! After all, how many people do *you* know that have ever had it? Hmmmm? Case closed!"
Re: (Score:2)
Do you think Y2K would have gone so quietly had the entire IT industry simply ignored the problems created during the prior four decades of programming? Do you think the ebola outbreak would have been stopped so quickly had the world's health care organizations simply ignored the problem?
So yes, it was Y2K all over again. Some people noticed a huge looming threat, they brought it to the attention of the world, the world eventually responded with enough resources to solve the problem.
Re: (Score:2)
You don't, and I don't, but it should be obvious from discussion here that a lot of people do!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Software people are unable to understand the real world. But they're convinced we can build Mars colonizing spaceships based on decades-old fantasies...
I maintain our ancestors came from Mars after an environmental catastrophe and colonised Earth. Why are we trying to return to a dead planet from which we escaped?
It's always nice to see someone post a good, solid theory grounded in facts and confirmed observations on slashdot.
Maybe there'll be one tomorrow.
Re: (Score:2)
Because the Earth may explode and form a new asteroid belt at any moment.
You're thinking too small. The correct space fan argument is "because in just ten billion years the Sun will die and with it all life on Earth, so we need to have colonised other galaxies by then".