Faulty modeling studies led to overstated predictions of Ebola outbreak

March 31, 2015

ANN ARBOR—Frequently used approaches to understanding and forecasting emerging epidemics—including the West African Ebola outbreak—can lead to big errors that mask their own presence, according to a University of Michigan ecologist and his colleagues.

“In the early days of the Ebola outbreak, a lot of people got into the forecasting business. They did it using appealingly simple mathematical models, and the result was a series of warnings that alerted the world, quite rightly, to the seriousness of the situation,” said Aaron King, an associate professor in the U-M Department of Ecology and Evolutionary Biology.

“But in the end, most of those predictions turned out to be overstated.”

On March 23, exactly one year after it announced there was an Ebola outbreak in Guinea, the World Health Organization released a situation update stating that there had been 24,842 Ebola cases, including 10,299 deaths, to date in Sierra Leone, Liberia and Guinea.

Last September, the U.S. Centers for Disease Control and Prevention estimated—based on computer modeling—that Liberia and Sierra Leone could see up to 1.4 million Ebola cases by January 2015 if the viral disease kept spreading without effective methods to contain it. Belatedly, the international community stepped up efforts to control the outbreak, and the explosive growth slowed.

“Those predictions proved to be wrong, and it was not only because of the successful intervention in West Africa,” King said. “It’s also because the methods people were using to make the forecasts were inappropriate.”

In a paper published online March 31 in Proceedings of the Royal Society B, King and his colleagues suggest several straightforward and inexpensive ways to avoid those pitfalls when the next big infectious disease outbreak strikes. Their suggestions pertain to disease transmission models, sophisticated systems of equations that use data from the early stages of an outbreak to predict how it will unfold.

“It’s just a matter of time before the next outbreak, and we want to make sure that we know how to provide reliable forecasts to guide the public health response when it happens,” King said.

Many of last year’s Ebola forecasts were made using common, off-the-shelf transmission models called deterministic models. Such models don’t account for the random elements in disease transmission—how many people are infected by each transmission event, for example—and are incapable of accurately communicating uncertainty.

King and his colleagues say deterministic models should be avoided. So-called stochastic models, which account for randomness and which can more precisely communicate uncertainty, should be used instead.
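To see the distinction, consider a minimal susceptible-infected-recovered (SIR) sketch in Python. This is not the model from the paper, and the transmission and recovery rates below are invented for illustration. The deterministic version always returns the same single trajectory, while repeated runs of the stochastic version produce a spread of outcomes from which uncertainty can be read directly:

```python
import numpy as np

# Minimal SIR sketch (not the paper's models): compare a deterministic
# trajectory with stochastic simulations of the same process.
# Parameters are illustrative, not fitted to Ebola data.
beta, gamma = 0.3, 0.1      # transmission and recovery rates (per day)
N, I0, days = 10_000, 5, 200

def deterministic_sir():
    """Single trajectory from the mean-field equations, stepped daily."""
    S, I, cases = N - I0, I0, [I0]
    for _ in range(days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I = S - new_inf, I + new_inf - new_rec
        cases.append(cases[-1] + new_inf)
    return np.array(cases)

def stochastic_sir(rng):
    """One realization with binomial draws for infections and recoveries."""
    S, I, cases = N - I0, I0, [I0]
    for _ in range(days):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        cases.append(cases[-1] + new_inf)
    return np.array(cases)

rng = np.random.default_rng(1)
runs = np.array([stochastic_sir(rng) for _ in range(500)])
print("deterministic final size:", int(deterministic_sir()[-1]))
print("stochastic final size, 2.5th-97.5th percentile:",
      np.percentile(runs[:, -1], [2.5, 97.5]).astype(int))
```

The deterministic model yields a single final size and nothing more, while the stochastic ensemble yields a distribution of outcomes, which is what a forecast needs in order to say how confident it is.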

In addition to using deterministic models, many Ebola forecasters fit those models to the total number of cases that had accumulated since the start of the outbreak. The result, according to King and his colleagues, was forecasts that overestimated the eventual size of the outbreak and greatly understated their own uncertainty.
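The statistical problem with cumulative counts can be illustrated with a short simulation. This is a hypothetical sketch, not the paper's analysis, using an invented growth rate and Poisson reporting noise. Because each cumulative data point contains all of the noise in the points before it, the residuals of a cumulative fit behave like a random walk rather than independent errors, so standard least-squares confidence intervals come out far too narrow:

```python
import numpy as np

# Illustration (not the paper's analysis) of the cumulative-data pitfall:
# residuals of cumulative case counts are a random walk, not independent
# noise, so ordinary least-squares uncertainty estimates do not apply.
# The growth rate and reporting noise below are invented.
rng = np.random.default_rng(2)
true_r, weeks = 0.15, np.arange(40)
mean_incidence = 5.0 * np.exp(true_r * weeks)    # expected weekly cases

incidence = rng.poisson(mean_incidence)          # noisy weekly counts
cumulative = np.cumsum(incidence)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series (mean removed)."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Residuals around the known mean curve for each data type.
resid_inc = incidence - mean_incidence
resid_cum = cumulative - np.cumsum(mean_incidence)

print("lag-1 autocorrelation, incidence residuals:  %.2f" % lag1_autocorr(resid_inc))
print("lag-1 autocorrelation, cumulative residuals: %.2f" % lag1_autocorr(resid_cum))
```

Weekly incidence residuals are essentially uncorrelated, while cumulative residuals are strongly correlated from one week to the next, so a model fit to cumulative counts looks far more certain than the data justify.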

“Deterministic models are easier and faster to work with, and the results look really good,” King said. “But when you use them, it’s a double whammy. Not only are you wrong, you are very sure that you are right.”

Co-authors of the paper are Pejman Rohani, Matthieu Domenech de Celles and Felicia M.G. Magpantay of the U-M Department of Ecology and Evolutionary Biology. King and Rohani also have appointments at the U-M Center for the Study of Complex Systems. King is an associate professor of mathematics, and Rohani is a professor of epidemiology at the School of Public Health. King and Rohani are supported by the Department of Homeland Security and the National Institutes of Health.
