Formerly devastating bacterial epidemics have become largely a matter of historical interest as we have taken preventive measures (suppressing rodent reservoirs and fleas in the case of plague) and deployed early detection and prompt treatment of emerging cases. Viral infections pose a greater challenge. The rapid diffusion of smallpox, caused by the variola virus, was responsible for the well-known collapse of aboriginal American populations, which lacked any immunity before their contact with European conquerors. Vaccination eventually eradicated this infection: the last natural outbreak in the US was in 1949, and in 1980 the World Health Organization declared smallpox eradicated globally. But there is no prospect of an early elimination of viral influenza, which returns annually in the form of seasonal epidemics and unpredictably as recurrent pandemics.
Seasonal outbreaks are related to latitude (Brazil and Argentina have infection peaks between April and September); they affect between 10% and 50% of the population and result in widespread morbidity and significant mortality among the elderly. Annual US means amount to some 65 million illnesses, 30 million medical visits, 200,000 hospitalizations, 25,000 (10,000–40,000) deaths, and up to $5 billion in economic losses (Steinhoff 2007). As with all airborne viruses, influenza is readily transmitted in droplets and aerosols by respiration, and hence its spatial diffusion is aided by higher population densities and by travel, as well as by the short incubation period, typically just 24–72 hours. Epidemics can take place at any time of the year, but in temperate latitudes they occur with a much higher frequency during winter, with dry air and more time spent indoors being the two leading promoters.
Epidemics of viral influenza bring high morbidity, but in recent decades they have caused relatively low overall mortality, with both rates highest among the elderly. Understanding of the key factors behind seasonal variations remains limited, but absolute humidity might be the predominant determinant of influenza seasonality in temperate climates (Shaman et al. 2010). Recurrent epidemics require the continuous presence of a sufficient number of susceptible individuals, and while infected people recover with immunity, they become vulnerable again as rapidly mutating viruses undergo antigenic drift, creating a new supply of susceptible individuals (Axelsen et al. 2014). That is why epidemics persist even with mass-scale annual vaccination campaigns and the availability of antiviral drugs. Because of the recurrence and costs of influenza epidemics, considerable effort has gone into understanding and modeling their spread and eventual attenuation and termination (Axelsen et al. 2014; Guo et al. 2015).
The growth trajectories of seasonal influenza episodes form complete epidemic curves whose shape conforms most often to a normal (Gaussian) distribution or to a negative binomial function, whose course shows a steeper rise of new infections and a more gradual decline from the infection peak (Nsoesie et al. 2014). More virulent infections follow a rather compressed (peaky) normal curve, with the entire event limited to no more than 100–120 days; in comparison, milder infections may produce only a small fraction of those infection totals, but their complete course may extend to 250 days. Some events display a normal distribution with a notable plateau, or a bimodal progression (Goldstein et al. 2011; Guo et al. 2015).
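The two canonical shapes can be illustrated with a short sketch. Everything here is assumed for illustration and not fitted to any surveillance data: a compressed, symmetric Gaussian pulse stands in for a virulent event, and a gamma-type pulse stands in for the negative binomial's steep rise and gradual decline.

```python
import math

# Illustrative sketch (all shapes and parameters assumed, not fitted
# to any dataset): a symmetric Gaussian pulse vs. a right-skewed
# curve with a steep rise and a more gradual decline.
days = range(250)

# Compressed, symmetric "virulent" event centered on day 60.
gaussian = [math.exp(-0.5 * ((d - 60) / 15) ** 2) for d in days]

# Skewed "milder" event: gamma-type pulse, mode at (shape - 1) * scale = day 50.
shape, scale = 3.0, 25.0
skewed = [d ** (shape - 1) * math.exp(-d / scale) for d in days]
peak = max(skewed)
skewed = [v / peak for v in skewed]  # normalize to a peak of 1.0

peak_g = gaussian.index(max(gaussian))   # day 60
peak_s = skewed.index(max(skewed))       # day 50

# Days with at least 10% of peak incidence reveal the asymmetry:
# the skewed curve climbs to its peak faster than it fades away.
active = [d for d in days if skewed[d] > 0.1]
rise, fall = peak_s - active[0], active[-1] - peak_s
print(peak_g, peak_s, rise, fall)
```

The skewed curve's long right tail is what stretches a mild event toward the 250-day mark even though its ascent is quick.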
But the epidemic curve may also follow a far less regular trajectory, as shown by the diffusion of influenza at the local level. This progression was studied in great detail during the diffusion of the H1N1 virus in 2009. Between May and September 2009, Hong Kong had a total of 24,415 cases, and the epidemic growth curve, reconstructed by Lee and Wong (2010), had a small initial peak between the 55th and 60th day after onset, then a brief nadir followed by a rapid ascent to an ultimate short-lived plateau on day 135 and a relatively rapid decline: the event was over six months after it began (figure 2.4). The progress of seasonal influenza can be significantly modified by vaccination, notably in such crowded settings as universities, and by the timely isolation of susceptible groups (closing schools). Nichol et al. (2010) showed that the total attack rate of 69% in the absence of vaccination was reduced to 45% with a preseason vaccination rate of just 20%, and to less than 1% with preseason vaccination at 60%; the rate was cut even when vaccinations were given 30 days after the outbreak’s onset.
We can now get remarkably reliable information on an epidemic’s progress in near real-time, up to two weeks before it becomes available from traditional surveillance systems: McIver and Brownstein (2014) found that monitoring the frequency of daily searches for certain influenza- or health-related Wikipedia articles provided an excellent match (difference of less than 0.3% over a period of nearly 300 weeks) with data on the actual prevalence of influenza-like illness obtained later from the Centers for Disease Control. Wikipedia searches also accurately estimated the week of the peak of illness occurrence, and their trajectories conformed to the negative binomial curve of actual infections.
Seasonal influenza epidemics cannot be prevented, and their eventual intensity and human and economic toll cannot be predicted—and these conclusions apply equally well to the recurrent worldwide diffusion of influenza viruses that causes pandemics infecting all of the world’s inhabited regions. These concerns have been with us ever since we came to understand the course of virulent epidemics, and they were only deepened by the emergence of the H5N1 virus (bird flu) in 1997 and by a brief but worrisome episode of severe acute respiratory syndrome (SARS). In addition, judging by the historical recurrence of influenza pandemics, we might be overdue for another major episode.
We can identify at least four viral pandemics during the 18th century, in 1729–1730, 1732–1733, 1781–1782, and 1788–1789, and there have been six documented influenza pandemics during the last two centuries (Gust et al. 2001). In 1830–1833 and 1836–1837, the pandemics were caused by an unknown subtype originating in Russia. In 1889–1890, the pandemic was traced to subtypes H2 and H3, most likely coming again from Russia. In 1918–1919, it was an H1 subtype of unclear origin, either in the US or in China; in 1957–1958, subtype H2N2 from southern China; and in 1968–1969, subtype H3N2 from Hong Kong. We have highly reliable mortality estimates only for the last two events, but there is no doubt that the 1918–1919 pandemic was by far the most virulent (Reid et al. 1999; Taubenberger and Morens 2006).
The origins of the 1918–1919 pandemic have been contested. Jordan (1927) identified the British military camps in the United Kingdom (UK) and France, Kansas, and China as the three possible sites of its origin. China in the winter of 1917–1918 now seems the most likely region of origin, with the infection spreading as previously isolated populations came into contact with one another on the battlefields of WWI (Humphries 2013). By May 1918 the virus was present in eastern China, Japan, North Africa, and Western Europe, and it had spread across the entire US. By August 1918 it had reached India, Latin America, and Australia (Killingray and Phillips 2003; Barry 2005). The second, more virulent wave took place between September and December 1918; the third, between February and April 1919, was again more moderate.
Data from the US and Europe make it clear that the pandemic had an unusual mortality pattern. Annual influenza epidemics have a typical U-shaped age-specific mortality (with young children and people over 70 being most vulnerable), but age-specific mortality during the 1918–1919 pandemic peaked between the ages of 15 and 35 years (the mean age for the US was 27.2 years) and virtually all deaths (many due to viral pneumonia) were in people younger than 65 (Morens and Fauci 2007). But there is no consensus about the total global toll: minimum estimates are around 20 million, the World Health Organization put it at upward of 40 million people, and Johnson and Mueller (2002) estimated it at 50 million. The highest total would be far higher than the global mortality caused by the plague in 1347–1351. Assuming that the official US death toll of 675,000 people (Crosby 1989) is fairly accurate, it surpassed all combat deaths of US troops in all of the wars of the 20th century.
Pandemics have also been drivers of human genetic diversity and natural selection, as some genetic differences have emerged to regulate infectious disease susceptibility and severity (Pittman et al. 2016). Quantitative reconstruction of their growth is impossible for events before the 20th century, but good-quality data on new infections and mortality make it possible to reconstruct the epidemic curves of the great 1918–1919 pandemic and of all subsequent pandemics. As expected, they conform closely to a normal distribution or to a negative binomial regardless of affected populations, regions, or localities. British weekly data for combined influenza and pneumonia mortality between June 1918 and May 1919 show three pandemic waves: the smallest, almost symmetric and peaking at just five deaths/1,000, in July 1918; the highest, a negative binomial peaking at nearly 25 deaths/1,000, in October; and an intermediate wave (again a negative binomial, peaking at just above 10 deaths/1,000) in late February 1919 (Jordan 1927).
Perhaps the most detailed reconstruction of epidemic waves traces not only transmission dynamics and mortality but also the age-specific timing of deaths, for New York City (Yang et al. 2014). Between February 1918 and April 1920, the city was struck by four pandemic waves (as well as by a heat wave). Teenagers had the highest mortality during the first wave, and the peak then shifted to young adults, with total excess mortality for all four waves peaking at the age of 28 years. Each wave spread with a comparable early growth rate, but the subsequent attenuations varied. The virulence of the pandemic is shown by daily mortality time series for the city’s entire population: the second wave’s peak reached 1,000 deaths per day, compared to the baseline of 150–300 deaths (figure 2.5). When compared by using the fractional mortality increase (the ratio of excess mortality to baseline mortality), the trajectories of the second and the third waves came closest to a negative binomial distribution, while the fourth wave displayed a very irregular pattern.
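The fractional mortality increase used in this comparison is a simple ratio, as a minimal sketch shows; the daily figures below are only indicative, taken from the ranges quoted above (the 250 deaths/day baseline is an assumed midpoint, not a published value).

```python
# Fractional mortality increase: excess mortality expressed as a
# fraction of baseline mortality. Input figures are illustrative.
def fractional_increase(observed_deaths, baseline_deaths):
    """Return (observed - baseline) / baseline."""
    return (observed_deaths - baseline_deaths) / baseline_deaths

# Second-wave peak of ~1,000 deaths/day against an assumed
# midpoint baseline of 250 deaths/day:
print(fractional_increase(1000, 250))  # 3.0
```

Dividing by the baseline is what makes waves in populations of different sizes, or in different seasons, directly comparable.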
Very similar patterns were demonstrated by analyses of many smaller populations. For example, a model fitted to reliable weekly records of influenza incidence reported from Royal Air Force camps in the UK shows two negative binomial curves, the first peaking about 5 weeks and the second about 22 weeks after the infection outbreak (Mathews et al. 2007). The epidemic curve for the deaths of soldiers in the city of Hamilton (Ontario, Canada) between September and December 1918 shows a perfectly symmetrical principal wave peaking in the second week of October and a much smaller secondary wave peaking three weeks later (Mayer and Mayer 2006).
Subsequent 20th-century pandemics were much less virulent. The death toll of the 1957–1958 outbreak was about 2 million, and the low mortality (about 1 million people) of the 1968–1969 event is attributed to the protection conferred on many people by the 1957 infection. None of the epidemics during the remainder of the 20th century grew into a pandemic (Kilbourne 2006). But new concerns arose with the emergence of new avian influenza viruses that could be transmitted to people. By May 1997, a subtype of the H5N1 virus had mutated in Hong Kong’s poultry markets into a highly pathogenic form (able to kill virtually all affected birds within two days) that claimed its first human victim, a three-year-old boy (Sims et al. 2002). The virus eventually infected at least 18 people, causing six deaths and leading to the slaughter of 1.6 million birds, but it did not spread beyond South China (Snacken et al. 1999).
WHO divides the progression of a pandemic into six phases (Rubin 2011). First, an animal influenza virus circulating among birds or mammals has not infected humans. Second, the infection occurs, creating a specific potential pandemic threat. Third, sporadic cases or small clusters of disease exist but there are no community-wide outbreaks. Such outbreaks mark the fourth phase. In the next phase, community-level outbreaks affect two or more countries in a region, and in the sixth phase outbreaks spread to at least one other region. Eventually the infections subside and influenza activity returns to levels seen commonly during seasonal outbreaks. Clearly, the first phase has been a recurrent reality, and the second and third phases have taken place repeatedly since 1997. But in April 2009, triple viral reassortment between two influenza lineages (that had been present in pigs for years) led to the emergence of swine flu (H1N1) in Mexico (Saunders-Hastings and Krewski 2016).
The infection progressed rapidly to the fourth and fifth phases, and by June 11, 2009, when the WHO announced the start of an influenza pandemic, there were nearly 30,000 confirmed cases in 74 countries (Chan 2009). By the end of 2009, there were 18,500 laboratory-confirmed deaths worldwide, but models suggest that the actual excess mortality attributable to the pandemic was between 151,700 and 575,400 deaths (Simonsen et al. 2013). The disease progressed everywhere in typical waves, but their number, timing, and duration differed: there were three waves (spring, summer, fall) in Mexico, two waves (spring–summer and fall) in the US and Canada, and three waves (September and December 2009, and August 2010) in India.
There is no doubt that improved preparedness (due to the previous concerns about H5N1 avian flu in Asia and the SARS outbreak of 2002–2003)—a combination of school closures, antiviral treatment, and mass-scale prophylactic vaccination—reduced the overall impact of this pandemic. Overall mortality remained small (only about 2% of infected people developed a severe illness), but the new H1N1 virus preferentially infected younger people, those under the age of 25 years, while the majority of severe and fatal infections were in adults aged 30–50 years (in the US, the average age of laboratory-confirmed deaths was just 37 years). As a result, in terms of years of life lost (a metric taking into account the age of the deceased), the maximum estimate of 1.973 million years was comparable to the toll of the 1968 pandemic.
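The years-of-life-lost metric simply weights each death by the remaining life expectancy at the age of death. A minimal sketch (using an assumed flat life expectancy of 78 years rather than actuarial tables) shows why a pandemic killing 37-year-olds can rival a deadlier one that kills mainly the elderly:

```python
# Years of life lost (YLL): each death weighted by remaining life
# expectancy at the age of death. The flat 78-year life expectancy
# is an illustrative assumption, not a published parameter.
def years_of_life_lost(deaths_by_age, life_expectancy=78):
    """deaths_by_age: iterable of (age_at_death, number_of_deaths)."""
    return sum(n * max(life_expectancy - age, 0) for age, n in deaths_by_age)

# One death at 37 (the US average in 2009) vs. one death at 75:
print(years_of_life_lost([(37, 1)]), years_of_life_lost([(75, 1)]))  # 41 3
```

By this metric a single death at 37 counts for more than a dozen deaths at 75, which is how a pandemic with modest total mortality can match the 1968 toll in life-years.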
Simulations of an influenza pandemic in Italy by Rizzo et al. (2008) provide a good example of the possible impact of the two key control measures, antiviral prophylaxis and social distancing. In their absence, the epidemic on the peninsula would follow a Gaussian curve, peaking about four months after the identification of the first cases at more than 50 cases per 1,000 inhabitants, and it would last about seven months. Antivirals for eight weeks would reduce the peak infection rate by about 25%, and social distancing starting at the pandemic’s second week would cut the spread by two-thirds. Economic consequences of social distancing (lost school and work days, delayed travel) are much more difficult to model.
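The qualitative effect of such control measures can be reproduced with a toy SIR (susceptible-infectious-recovered) model. This is not the Rizzo et al. simulation: every parameter below (the transmission rate, a four-day infectious period, a 40% transmission cut standing in for combined interventions) is an assumption chosen only to show why reduced transmission lowers and delays the epidemic peak.

```python
def sir_peak(beta, gamma=0.25, n=1_000_000, i0=10, days=400):
    """Discrete-day SIR sketch; returns (peak day, peak infectious count).
    A toy with assumed parameters, not the Rizzo et al. (2008) model."""
    s, i = n - i0, i0
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # recoveries (1/gamma-day infectious period)
        s -= new_inf
        i += new_inf - new_rec
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i

base_day, base_peak = sir_peak(beta=0.5)        # unmitigated spread
mit_day, mit_peak = sir_peak(beta=0.5 * 0.6)    # transmission cut by 40%
print(base_day < mit_day, mit_peak < base_peak)  # peak comes later and lower
```

Flattening and delaying the peak, rather than eliminating infections, is precisely what the antiviral and social-distancing scenarios in the Italian simulation achieve.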
As expected, the diffusion of the influenza virus is closely associated with population structure and mobility, and superspreaders, including health-care workers, students, and flight attendants, play a major role in disseminating the virus locally, regionally, and internationally (Lloyd-Smith et al. 2005). The critical role played by schoolchildren in the spatial spread of pandemic influenza was confirmed by Gog et al. (2014), who found that the protracted spread of influenza across the US in fall 2009 was dominated by short-distance diffusion (partially promoted by school openings) rather than, as is usually the case with seasonal influenza, by long-distance transmission.
Modern transportation is, obviously, the key superspreading conduit. Scales range from local (subways, buses) and regional (trains, domestic flights, especially high-volume connections such as those between Tokyo and Sapporo, Beijing and Shanghai, or New York and Los Angeles that carry millions of passengers a year) to intercontinental flights that enable rapid global propagation (Yoneyama and Krishnamoorthy 2012). In 1918, the Atlantic crossing took six days on a liner typically carrying between 2,000 and 3,000 passengers and crew; now it takes six to seven hours on a jetliner carrying 250–450 people, and more than 3 million passengers travel annually just between London’s Heathrow and New York’s JFK airport. The combination of flight frequency, speed, and volume makes it impractical to prevent the spread by quarantine measures: to succeed, they would have to be instantaneous and enforced without exception.
And the unpredictability of this airborne diffusion of contagious diseases was best illustrated by the transmission of the SARS virus from China to Canada, where its establishment among vulnerable hospital populations led to a second unexpected outbreak (PHAC 2004; Abraham 2005; CEHA 2016). A Chinese doctor, infected with severe acute respiratory syndrome (caused by a coronavirus) after treating a patient in Guangdong, travelled to Hong Kong, where he stayed on the same hotel floor as an elderly Chinese Canadian woman who became infected and brought the disease to Toronto on February 23, 2003.
As a result, while none of the other large North American cities with daily flights to Hong Kong (Vancouver, San Francisco, Los Angeles, New York) was affected, Toronto experienced a taxing wave of infections, with some hospitals closed to visitors. Transmission within Toronto peaked during the week of March 16–23, 2003, and the number of new cases was down to one by the third week of April; a month later, the WHO declared Toronto SARS-free. But that announcement was premature: then came the second, quite unexpected wave, whose contagion rate matched the March peak by the last week of May before it rapidly subsided.