Medicine Stats Technology

Computer Modeling Failed During the Ebola Outbreak

the_newsbeagle writes: Last fall, the trajectory of the Ebola outbreak looked downright terrifying: Computational epidemiologists at Virginia Tech predicted 175,000 cases in Liberia by the end of 2014, while the CDC predicted 1.4 million cases in Liberia and Sierra Leone. They were way off. The actual tally as of January 2015: A total of 20,712 cases in Guinea, Liberia, and Sierra Leone combined, and in all three countries, the epidemic was dying down. But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.
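
For a sense of why a worst-case projection and the eventual tally can differ by orders of magnitude, here is a minimal, purely illustrative growth sketch in Python. It is not the CDC's or Virginia Tech's actual model; the reproduction numbers, serial interval, and starting case count are all assumptions chosen only to show the shape of the gap.

    # Crude cumulative-case projection (illustrative assumptions only; not the real models).
    # Cases are assumed to multiply by the effective reproduction number R once per
    # serial interval; a strong response pushes R below 1 and growth stalls.
    def project_cases(r_eff, days, serial_interval_days=15.0, initial_daily_cases=500):
        growth_per_day = r_eff ** (1.0 / serial_interval_days)
        daily_new = float(initial_daily_cases)
        cumulative = daily_new
        for _ in range(days):
            daily_new *= growth_per_day
            cumulative += daily_new
        return int(cumulative)

    # Unchecked spread versus an effective response, projected about four months ahead.
    print("No additional intervention (R = 1.8):", project_cases(1.8, days=120))
    print("Strong response (R = 0.8):           ", project_cases(0.8, days=120))

With these made-up numbers the unchecked run lands above a million cases while the controlled run stays in the tens of thousands, which is roughly the kind of divergence the summary describes; the point is the sensitivity to R, not the specific figures.
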
This discussion has been archived. No new comments can be posted.

  • wrong is right... in the end.
    • Re:wrong is right (Score:5, Insightful)

      by Anonymous Coward on Monday June 08, 2015 @09:19PM (#49872663)

      Well, IF there hadn't been a very robust response, it could easily have been that bad.

      • Exactly. Do those asking the question "did the modelers get it wrong?" think that the models can actually account for the level of response there will be from every country in the world that has the ability to help mitigate the spread of the disease?

        I can see it now... epidemiologists sit down, come up with a model of the outbreak based on what they know about how the disease spreads, and where it's starting from, and then ask themselves "OK, now what's the World Aid Fudge Factor?".

        • Goes without saying that I haven't read the article, but that's my question. If the model was "this is what will happen if no action is taken" then the model may or may not have been right. But you certainly can't say that it's wrong because...action was taken. What's interesting is akin to what you say. Not a "fudge factor," but as another independent variable in the model. "Here's the outcome with no aid. Here's the outcome with these different types and quantities of aid."

  • by Anonymous Coward

    Garbage in ... Garbage out - GIGO

    That computer simulation failed simply because all the input the program was fed was erroneous

    • by plover ( 150551 ) on Monday June 08, 2015 @09:35PM (#49872775) Homepage Journal

      Certainly not all the input was inaccurate. There could have been incorrect accounting for the effectiveness of education or news efforts. The medical personnel may have improved faster than predicted. Mobility limitations might have reduced the spread to a manageable rate. Or it could simply be the outcome was a 1:20 chance that beat the predicted odds. There are way too many variables to even know which was the least accurate, but I wouldn't claim any were as bad as "garbage".

      • If there are way too many variables, then it probably is a really poor candidate for simulation in the first place, and it is just garbage in, garbage out.

        • by plover ( 150551 ) on Tuesday June 09, 2015 @12:26AM (#49873395) Homepage Journal

          Complex systems have a lot of variables; that doesn't make them poor candidates for modeling. On the contrary, you simply have to rerun the model as you learn more and better data.

          The prediction of "a million victims" was made in the earlier stages of the outbreak and grabbed the attention of the world, so we more clearly remember it as it was on that day. But just because we remember what the media said in October of 2014 doesn't mean they weren't continually working on it. After the model started reversing course the headlines stopped being so alarmist, and so the general public barely remembers the much less dramatic follow-on news "Ebola trending downwards", "revised estimates", etc.

          Was this a deliberate attempt by the people generating the model to drum up public support? Was this simply the media grabbing on to the worst case scenario because it made the best headlines? Was it an honest mistake in reporting? Perhaps the "garbage in -- headlines out" came from a source other than the model's data.
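
          A minimal sketch of the "rerun it as better data arrives" point: refit a simple exponential growth rate to the latest few weeks of counts and re-project each time. The weekly numbers below are invented for illustration and the fitting is deliberately naive; this is not how the actual outbreak models were built.

            import math

            weekly_cases = [90, 140, 220, 310, 380, 360, 290, 210]   # hypothetical weekly counts

            def log_growth_per_week(window):
                # Least-squares slope of log(cases) against week index.
                logs = [math.log(c) for c in window]
                n = len(logs)
                xbar, ybar = (n - 1) / 2.0, sum(logs) / n
                num = sum((i - xbar) * (y - ybar) for i, y in enumerate(logs))
                den = sum((i - xbar) ** 2 for i in range(n))
                return num / den

            for week in range(4, len(weekly_cases) + 1):
                window = weekly_cases[week - 4:week]        # refit on the latest 4 weeks only
                r = log_growth_per_week(window)
                projection = weekly_cases[week - 1] * math.exp(r * 8)
                print(f"after week {week}: growth/week = {r:+.2f}, naive 8-week projection ~ {projection:,.0f}")

          Early refits project continued explosive growth; once the counts turn over, the same procedure projects a fizzle, which is roughly the trajectory the follow-on headlines traced.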

        • If there are way too many variables, then it probably is a really poor candidate for simulation in the first place, and it is just garbage in, garbage out.

          What nonsense. On that basis we wouldn't even bother trying to do weather forecasts at all. But, in fact, we are steadily improving their accuracy.

      • by ShanghaiBill ( 739463 ) on Monday June 08, 2015 @10:59PM (#49873111)

        There could have been incorrect accounting for the effectiveness of education or news efforts. The medical personnel may have improved faster than predicted.

        The educational effort was far more effective than expected. Medical personnel had very little to do with it. Ebola was stopped by convincing people that they should wash their hands with soap. That turned out to be easier than expected, in spite of low literacy rates.

    • So, you're saying that both of these (different) models were accurate (curious how you'd know) but that the data they were operating on was faulty. A good reminder to ignore ACs!
  • by Anonymous Coward on Monday June 08, 2015 @09:20PM (#49872673)

    People who work in population dynamics know that the models are based on a very crude understanding of the disease and ultimately general epidemiological assumptions. The fact that there are basic assumptions is sometimes disguised in the process of making fancy computer models.

    These models may be the best predictions that people can make. However, GIGO (garbage in, garbage out) still applies, and sometimes even the best predictions are not good enough, since they can be very misleading.

    (I used to work on similar models and became disenchanted; I need to post anonymously.)

  • Not surprising. (Score:5, Interesting)

    by hey! ( 33014 ) on Monday June 08, 2015 @09:22PM (#49872683) Homepage Journal

    I've been involved in contracts that had public health modeling components. Being "way off" is not necessarily a proof the model is no good when you're modeling a chaotic process which depends on future parameters that aren't predictable. In our case it was the exact timing of future rainfall. In their case it probably had to do with human behavior. A small thing, like an unseasonable rainstorm, or an infected person showing up in an unexpected place, can have immense consequences.

    You look at all the data you have, and you think, "Hey, this is a lot of data, I should be able to predict stuff from it," but the truth is while it looks like a lot of data it's a tiny fraction of all the data that's out there in the world -- and not even a representative sample. So you have to guess "plausible" values, and if they're wrong you might not see the kind of result that eventually happens, even after many model runs.

    So in most cases you can't expect a computer model to have the power to predict specific future events. It can do other things, like generate research questions. One of our models suggested that having a lot of infected mosquitoes early in the season reduced human transmission of a certain mosquito borne disease later in the season, which was a surprising result. When we looked at it, it turned out that the reason was that the epidemic peaked in the animal population early in the season before people were out doing summer stuff and getting bit. Does that actually happen? We had no idea, but it sounded plausible. The model didn't give us any answers, it generated an interesting question.
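
    One hedged way to picture the "guess plausible values and run it many times" step: sample the uncertain parameter from an assumed range, run a toy branching process for each draw, and report the spread of outcomes rather than a single number. The parameter range and the toy process below are illustrative assumptions, not anyone's real disease model.

      import numpy as np

      rng = np.random.default_rng(0)

      def toy_outbreak(r_eff, generations=8, seed_cases=10):
          # Crude branching process: each case infects Poisson(r_eff) new cases.
          new, total = seed_cases, seed_cases
          for _ in range(generations):
              new = int(rng.poisson(r_eff, size=new).sum()) if new > 0 else 0
              total += new
          return total

      # Draw a "plausible" effective reproduction number for each run, then look at the spread.
      totals = np.array([toy_outbreak(rng.uniform(0.9, 2.0)) for _ in range(1000)])
      print("5th pct:", int(np.percentile(totals, 5)),
            " median:", int(np.median(totals)),
            " 95th pct:", int(np.percentile(totals, 95)))

    The informative output is the width of that spread, not any single run, which is one reason a lone headline number is a poor summary of such a model.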

    • "I've been involved in contracts that had public health modeling components. Being "way off" is not necessarily a proof the model is no good when you're modeling a chaotic proces"

      Well, in fact, it is. If you know you are working with chaotic dynamics, you know any prediction aside from the existence of an attractor is moot.

      Either your model was not chaotic but was still trying to model a chaotic phenomenon, in which case the model was wrong; or the model was chaotic, but then it was presented as if it were not.

      • by hey! ( 33014 )

        Or ... the chaotic bits may lie outside your nice linear model. Either way, prediction of future events is off the table.

    • by aussersterne ( 212916 ) on Tuesday June 09, 2015 @12:14AM (#49873359) Homepage

      The modeling was done precisely in order to encourage behavior that would change the inputs to the model.

      Nobody looks at cigarettes today and says, "Gosh, nobody smokes anyway and death rates are coming down, there was no need for all that worry, smoke away!"

      The whole point of the data in that case (and in this one) was to encourage the world to change behavior (i.e. alter the inputs) to ensure that the modeled outcome didn't occur.

      To peer at the originally modeled outcome after the fact and say that it was "wrong" makes no sense.

      When we tell a kid, "finish high school or you're going to suffer!" and then they finish high school and don't suffer a decade down the road, we don't say "well, you didn't suffer after all, guess there was no point in you finishing high school!"

      That would be silly. As is the idea that the modeling was wrong after the modeling itself led to a change in the behavior being modeled.

  • If nothing else, this shows just how bad an idea it is to put too much trust in computer models. There are always factors that we either don't know about or don't know how to include properly and getting even one of them wrong can throw the whole model off. Yes, computer simulations and models can be very, very useful but you have to take the results with a grain of salt and remember that they're only approximations at best.
    • Well stated, reasoned, and accurate. No surprise that someone modded you overrated.

      Anyway, just to add to the statement: everyone has biases, and computer models tend to amplify those biases to absurd proportions. You can only hope whoever is running the model is doing a good job of validation and doesn't have an agenda.

      • by dave420 ( 699308 )
        You really don't know how any of this works, do you? This is becoming more and more obvious with every post you make. Learned the difference between sea ice and land ice yet? :)
        • You really don't know how any of this works, do you? This is becoming more and more obvious with every post you make. Learned the difference between sea ice and land ice yet? :)

          :D, why don't you show the good people my errors.

    • by mellon ( 7048 ) on Monday June 08, 2015 @11:23PM (#49873187) Homepage

      No, it doesn't show that. The point of the computer models is not to predict exactly how bad the outbreak will be. What good would that do? All you have to do to find out how bad the outbreak will be is wait. What computer models do is give us some kind of idea of how seriously we should take the situation. For that, the models did a fine job. They probably shouldn't have been bandied about so much on the news, but that's not a problem with the science--that's a problem with the science reporting, which is a well known problem.

      But it's really, really frustrating when people predict a possible bad outcome and suggest steps be taken to prevent it, and then steps are taken, and then the bad outcome doesn't happen, possibly because the steps were taken (it's never possible to know for sure), and then somebody says "you cried wolf." No. Crying wolf is when you lie about a threat you know doesn't exist. The Y2K threat wasn't crying wolf, and this wasn't crying wolf. Both were attempts to mitigate a very real risk whose severity was uncertain. That we didn't have a massive breakdown in 2000, and that we didn't have an Ebola pandemic, are both really good outcomes.

    • this shows just how bad an idea it is to put too much trust in computer models

      What's this? What exactly did the output of their model harm?

      If anything, it was a reality check reminding people who don't study the spread of disease just how bad things can get if something this harmful goes unchecked.

  • "Its not a bug...... Its a feature!"

    This is like a car manufacturer claiming that their car will have 1,000 horsepower, and after several months/years the people who have preordered it finally get theirs and find out it has 20 horsepower, and the manufacturer says it's a good thing because it makes the car safer.

    • by gringer ( 252588 )

      This is like a car manufacturer claiming that their car will have 20 horsepower, but much more if they look after it, and after several months/years the people who have preordered it finally get theirs and are careful about looking after it, and find out it has 1,000 horsepower, and the manufacturer says it's a good thing because it encouraged them to be careful.

    • "Its not a bug...... Its a feature!"

      This is like a car manufacturer claiming that their car will have 1,000 horsepower, and after several months/years the people who have preordered it finally get theirs and find out it has 20 horsepower and the manufacturer says its a good thing because it makes the car safer.

      Thanks for adding to the glorious slashdot pantheon of completely fucking stupid car analogies.

  • by thegarbz ( 1787294 ) on Monday June 08, 2015 @09:29PM (#49872745)

    The problem with blaming prediction models after the fact is that the models were based on the assumption of current and continued support. Ebola, just like the Y2K bug, could have been every bit the disaster it was predicted to be had it not been for the efforts involved in preventing it.

    End result, all the people who did the hard work making the world aware of the problem are blamed for crying wolf.

  • by Anonymous Coward on Monday June 08, 2015 @09:32PM (#49872761)

    To quote my beloved Bryan Caplan (http://econlog.econlib.org/archives/2015/06/the_sum_of_all.html):

    "I didn't think there was anything more to say about infamous doomsayer Paul Ehrlich. Until he decided to justify his career to the New York Times. Background:

    'No one was more influential -- or more terrifying, some would say -- than Paul R. Ehrlich, a Stanford University biologist... He later went on to forecast that hundreds of millions would starve to death in the 1970s, that 65 million of them would be Americans, that crowded India was essentially doomed, that odds were fair "England will not exist in the year 2000." Dr. Ehrlich was so sure of himself that he warned in 1970 that "sometime in the next 15 years, the end will come." By "the end," he meant "an utter breakdown of the capacity of the planet to support humanity."'

    Okay, here's Ehrlich's side of the story:

    After the passage of 47 years, Dr. Ehrlich offers little in the way of a mea culpa. Quite the contrary. Timetables for disaster like those he once offered have no significance, he told Retro Report, because to someone in his field they mean something "very, very different" from what they do to the average person.

    In the video interview, Ehrlich elaborates:

    'I was recently criticized because I had said many years ago, that I would bet that England wouldn't exist in the year 2000. Well, England did exist in the year 2000. But that was only 14 years ago... One of the things that people don't understand is that timing to an ecologist is very very different than timing to an average person.'"

    • He was confident enough about his dates to put money on it. [wikipedia.org] He lost very, very badly. Sorry, the business about not meaning those exact dates is just an alibi from a guy whose predictions missed very badly.

  • Opened Eyes (Score:2, Insightful)

    by Anonymous Coward

    ...maybe the crazy predictions opened enough eyes to get the ball rolling on containment...therefore nullifying the predictions?

  • The media love it because scary headlines attract eyeballs, but scare tactics can backfire: Researchers might get more funding, but they also might get their research banned: http://victimsofdsto.byethost3... [byethost31.com]
  • But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.

    So the ends justify the means. Got it.

  • I think computer models only work when the target data is too far out to verify.
  • I would rather we end up with fewer losses than expected, than more. We learned something from this problem, and the cost was not as bad as it would have been had the error been in the other direction.
  • Computers are infallible. If a computer 'says' something we know that it is correct. The link between the assumptions that make the model and the model itself is seamless and is science. In this 'science' experiment is not needed and is possibly even harmful.

    You learn the truth about the world from building your computer model. This isn't the 19th century anymore. Now we have computers. Actually doing something out there in meatspace to observe what happens when you try it is crude and unnecessary and in an

    • by dave420 ( 699308 )

      1. People start dying from Ebola
      2. Scientists model the outbreak, making predictions based on what is currently happening
      3. The world's governments and NGOs see the model's predictions and decide that's not an acceptable outcome, and make an effort to reduce it
      4. The effort is so effective that the outcome doesn't match the predictions which triggered the effort to be made in the first place
      5. The prediction then doesn't match the outcome, so idiots on Slashdot think the models were inaccurate and useless.

  • What if math errors by computers, due to hardware or software, make us adopt the wrong assumptions? Especially in physics or other high-end math fields.

  • by ebonum ( 830686 ) on Monday June 08, 2015 @10:28PM (#49873039)

    The way the model results are reported needs to change. The worst case results were presented to the public as the expected outcome. This is something between highly deceptive and unethical. (think yelling "Fire!" in a crowded movie theater.) The best, worst and average outcomes from the model need to be reported. Perhaps even two sets of best, worst and average outcomes. One with large scale intervention and one with zero intervention.

    A very simple way to think about when you know the model has failed: The model has failed when it makes 100 predictions with 95% certainty and more than 5 of the actual outcomes are outside the bounds defined by the best and worst outcomes. Note: I said SIMPLE.
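
    The bookkeeping behind that simple check is easy to write down. The intervals and outcomes below are invented purely to show the counting; they are not real forecasts:

      # Count how many realized outcomes fall outside their claimed 95% bounds.
      predictions = [
          # (lower bound, upper bound, actual outcome): hypothetical values
          (100, 500, 320),
          (200, 900, 950),    # miss: above the upper bound
          (50, 300, 120),
          (400, 1200, 390),   # miss: below the lower bound
          (10, 80, 45),
      ]

      misses = sum(1 for lo, hi, actual in predictions if not (lo <= actual <= hi))
      print(f"{misses}/{len(predictions)} outcomes fell outside their stated bounds; "
            f"a well-calibrated 95% interval should miss about 1 in 20 over many forecasts.")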

    The modelers need to be careful about what they say. Next time they predict armageddon, no one will take them seriously.

  • What a bunch of idiotic posts. Model results are associated with a predicted range of probabilities (e.g., an 80 percent chance of rain). We depend on weather reporting even when it is wrong on occasion. Why? Because our weather models are pretty good (despite chaos).

  • Computer modeling is vastly overrated. It is mostly based on the abstraction of trend lines. Which is the assumption that existing trends will continue. That is less a prediction of the future than a picture of the present.

    Look at the growth trend line of a six year old... then graph that out... in 10,000 years think how big he'll be!

    Right?... that's what trend lines do... They're only useful if people who know what they are and how they work use them. Often as not, people who aren't educated or knowledgeable
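
    To make the six-year-old example above concrete, here is a minimal sketch: fit a straight line to some rough, invented early-childhood heights and extrapolate it. The data and the fit are fine; what breaks is the assumption that the trend continues outside the range it was fit on.

      # Naive linear trend fit to illustrative childhood heights, then extrapolated.
      ages = [1, 2, 3, 4, 5, 6]                      # years
      heights = [75, 87, 96, 103, 110, 116]          # cm, rough made-up values

      n = len(ages)
      mean_x, mean_y = sum(ages) / n, sum(heights) / n
      slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, heights)) / \
              sum((x - mean_x) ** 2 for x in ages)
      intercept = mean_y - slope * mean_x

      for age in (18, 50, 10_000):
          print(f"age {age:>6}: trend-line height ~ {slope * age + intercept:,.0f} cm")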

    • Computer modeling is vastly overrated.

      I'm not convinced.

      It is mostly based on the abstraction of trend lines.

      Can you provide an example of modelling based on abstraction of trend lines?

      Which is the assumption that existing trends will continue. That is less a prediction of the future than a picture of the present.

      In my experience, this false assumption (that arbitrarily short trends will continue, rather than the trends at the granularity of the model) is the one made by people asserting that modelling is useless. Mind you, my experience mostly comes from dealing with morons.

      • examples of trend line based modeling...

        Finance, population statistics, various biological modeling applications, and basically all weather modeling works this way.

        Take the 2008 credit crunch. At the time, the finance industry was using a market model that abstracted all risk to a single variable.

        So an investment would have a risk number and the higher that risk number the riskier the investment.

        The problem with it was that it oversimplified risk, and it could only boil it down to a single variable by makin

        • Finance, population statistics, various biological modeling applications, and basically all weather modeling works this way.

          Fascinating. So based on 4 examples of modelling based on abstraction of trend lines, you feel confident declaring that all modelling is overrated. Even though your examples don't include the model from the article.

          As to long trends versus short trends, that's all subjective. What is long or short is arbitrary.

          No it isn't.

          And that is another big problem with trend lines, they do not show causation. They show correlation. Getting causation from a trend line is almost impossible.

          Pretty sure nobody is graphing a trend line to find the cause of Ebola. We already knew what caused Ebola before anybody drew a line.

          • Because of four examples? Fuck off. What is your arbitrary number of examples you'd consider credible? 5? 50? How many do I need?

            Give me a break.

            As to the difference between long and short being arbitrary: it literally is, actually. Jesus. Are you another idiot who doesn't know what the word arbitrary means?

            I ran into about a dozen of you fuckwits in the metric versus imperial discussion.

            As to causation, people try to show causation with trend lines all the time.

            They'll say "we did this thing at time X" and y

    • Look at the growth trend line of a six year old... then graph that out... in 10,000 years think how big he'll be!

      As an aside, this appears to be the model used for valuing Apple shares. There seems to be an underlying assumption that we will soon discover FTL travel and find whole new alien galactic empires in need of iPhones, and watches with a day's battery life.

      • Exactly.

        Ironically, around the time most people start focusing on the graph, it is usually about due to change.

        That isn't a hard and fast rule or anything. I've just noticed that often, by the time anyone even notices a pattern, the pattern tends to change. It has happened so many times that whenever I hear someone talking about one pattern or another, I think "oh, well then it's going to flip"

        I've actually made a lot of money on the stock market by doing that. Buy low and sell high... When people start running a

  • by rsilvergun ( 571051 ) on Monday June 08, 2015 @11:14PM (#49873153)
    The model predicted what would happen if we let things go to shit like we normally do. Because of this we didn't let things go to shit, and so the model was wrong...? How is this a bad thing, or a failure of the model? This is like when people say regulations aren't needed anymore because the abuses they were put in place to stop have stopped. They stopped for a reason, people. Figure it out; it's not that difficult.
  • These kinds of events often follow a power law for how bad they get. It's notoriously difficult (maybe impossible) to guess the magnitude of a power-law event in advance. Also, my guess is they are not able to accurately account for adaptation in the behavior of normal people in the disease zone. That's the sort of factor that will almost by definition be under-predicted.
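
    A small sketch of why heavy tails make magnitude so hard to guess in advance: draw event sizes from a Pareto distribution and compare a typical draw with a tail draw. The tail exponent here is an arbitrary illustrative choice, not an estimate for epidemics.

      import random

      random.seed(42)
      alpha = 1.5                    # assumed tail exponent; smaller means a heavier tail
      sizes = sorted(random.paretovariate(alpha) for _ in range(10_000))

      median = sizes[len(sizes) // 2]
      p999 = sizes[int(len(sizes) * 0.999)]
      print(f"median event size: {median:.1f}   99.9th percentile: {p999:.1f}   ratio ~ {p999 / median:.0f}x")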

  • Just wondering if the models came with a prediction score or some measure of their accuracy. As with climate predictions, the untold story is that the models are no more accurate than their inputs and the validity of the theories used to create them. You might expect models to come with warnings, but they don't, at least nothing that gets transmitted to the public.

    As the world embraces 'big data' and the modeling it spawns, this should be a bit of a lesson. The worry should be: how many times can models

  • But the modelers argue that this really wasn't a failure, because their predictions served as worst-case scenarios that mobilized international efforts.

    Sure, this worked this time in everyone's favor. But what about the next epidemic? Let's say the modeling is better next time (which it should be) and it predicts another disaster. What then? People will look to the modeling on Ebola and say "it's not going to be that bad" and regard the next warning more lightly. This does nobody any good.

    A good ex

  • If you keep making Chicken Little "the sky is falling" predictions, with computer models as your proof, to bring about some desired political change, then eventually you will lose all credibility and no one will listen to you anymore. You damage the reputation of the sciences, and certainly of computer models, when you do this. Knock it off.

    • If you keep making Chicken Little "the sky is falling" predictions, with computer models as your proof, to bring about some desired political change, then eventually you will lose all credibility and no one will listen to you anymore. You damage the reputation of the sciences, and certainly of computer models, when you do this. Knock it off.

      I bet you're the sort of person who complains about people bringing politics into everything, by which they mean politics they don't agree with.
