# The Amateur Epidemiologist II

I am interested in testing my understanding of the simplest SIR epidemiological model, and also in praising a critique of an effort to extend the model and guide policy developed by some very smart economic theorists.

First, the useful point: this post by Noah Smith is brilliant. As is typical, Smith argues that the useful implications of economic models depend on strong assumptions, so economic theory isn't very useful. He praises simple empirical work instead.

I will discuss Smith contra Acemoglu, Chernozhukov, Werning and Whinston and Smith pro Sergio Correia, Stephan Luck and Emil Verner after the jump, but really Smith is better at presenting Smith than I am.

It made me wonder. In the simplest model, herd immunity stops an epidemic when 1 - 1/R0 of people have been infected. R0, as I recently learned and everyone now knows, is the number of people who would catch a pathogen from one infected person if no one had any resistance. Over time people develop resistance, so Rt < R0. If 1 - 1/R0 of people are resistant, then Rt = 1. A bit later Rt < 1, so each infected person leads to a geometrically decreasing series of expected infections, and total infections would be 1 - 1/R0 plus a (small) constant times the number infected at that critical time t.

The SIR model has susceptible, infected, and resistant. The idea is that if one has not been exposed, one is susceptible. If one becomes infected, one carries and sheds the pathogen for a while and then recovers. After one recovers, one is immune and won't get it again. The key assumption in the model is that for every infected person, R0 people are exposed (and infected if not immune), and that those people are chosen at random out of the entire population. It is necessary to assume that spread is equally likely from Mr A to Ms B whether they share a house or live on opposite sides of the country.

This is a silly assumption, and the model is the old model used to teach kids, not, I'm sure, current research. It is also the model always used to guide public policy decisions (see me contra benchmark models http://rjwaldmann.blogspot.com/2016/10/benchmarks-model-and-hypotheses.html ). In population biology and evolutionary biology the silly assumption is called "panmictic"; in economics it is called "random matching". The assumption is made very often because doing without it can get one stuck in really hard math.

I would like to put a few minutes of effort into trying to figure out whether the random matching assumption affects the level of infection needed for herd immunity (of course everyone knows it matters a lot). Below I will always assume R0 = 3.
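As a sanity check on the random-matching arithmetic, here is a minimal calculation (a sketch, assuming the textbook random-mixing SIR final-size relation z = 1 - exp(-R0·z)). At R0 = 3 the herd-immunity threshold is 2/3, but an uncontrolled epidemic overshoots that threshold, because the people infected at the moment Rt crosses 1 keep infecting others:

```python
import math

R0 = 3.0

# Herd-immunity threshold in the random-mixing SIR model: 1 - 1/R0.
threshold = 1 - 1 / R0  # 2/3 for R0 = 3

# Final attack rate z from the standard final-size relation
# z = 1 - exp(-R0 * z), solved here by fixed-point iteration.
z = 0.5
for _ in range(200):
    z = 1 - math.exp(-R0 * z)

print(f"herd-immunity threshold: {threshold:.3f}")  # 0.667
print(f"final fraction infected: {z:.3f}")
```

The fixed point comes out around 0.94, so with pure random mixing the "constant over the threshold" is not negligible at R0 = 3.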
Model 2: the population is actually divided into N equal subpopulations and there is no spread from one to another. The disease starts with one case in one subpopulation. It will spread until a few more than two thirds of that subpopulation has been infected. Spread will stop when roughly 2/(3N) of the whole population is infected. The relaxation of the random matching assumption reduces the incidence needed for herd immunity by the factor N. This works for any N.

Model 3 is very like Model 2. Half of people have innate immunity to the virus. People transmit the virus to, on average, 6 other people (on average 3 of whom have innate immunity). The virus will spread until 5 of 6 are immune. That means (5/6) - (1/2) = 1/3 must acquire immunity (by getting infected). So 1/3, not 2/3.

OK, can we be sure that the number who will get infected is less than 2/3? Consider Model 4. People live one to a square of an invisible chess board (which is a really big square). They transmit the pathogen to those with whom they share an edge. R0 = 3 (I get it from 1 neighbor and, early in the epidemic, give it to my other 3 not-yet-infected neighbors). How many people get infected? All of them, Katy. The currently infected are always in the border zone between the resistant and the susceptible.

So R0 = 3 implies herd immunity will stop the spread at some level which ranges from 2/(3N), for N as big as I like, to 100%. R0 = 3 and a priori reasoning, without arbitrary assumptions which we know are false and make for convenience, tell us nothing at all. Without some assumption about mixing, matching, and population structure, the core SIR assumptions have no implications. Maybe economists and epidemiologists have more in common than we thought.
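The chessboard claim is easy to check by simulation. A minimal sketch (the grid size n and the deterministic nearest-neighbor transmission are illustrative assumptions): starting from one case, the infected front sweeps the whole board, so the final attack rate is 100% even though each square has only a handful of neighbors.

```python
from collections import deque

# A small n x n "chessboard": one person per square, transmission only
# across shared edges (von Neumann neighbors). Deterministic spread:
# every infected square infects all of its still-susceptible neighbors.
n = 25
infected = {(n // 2, n // 2)}  # one initial case in the middle
frontier = deque(infected)
while frontier:
    x, y = frontier.popleft()
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in infected:
            infected.add((nx, ny))
            frontier.append((nx, ny))

print(len(infected) / (n * n))  # 1.0 -- everyone is eventually infected
```

The front never runs out of susceptible neighbors until the board's edge, which is the point: local structure defeats the herd-immunity threshold entirely.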

OK, now I consider Smith's criticisms of Acemoglu et al. He is right: their conclusions depend entirely on arbitrary assumptions made for convenience. The four include at least two of the very smartest economic theorists in the world (no, I won't name them). The problem is the poverty of theory.

I don’t agree with all of Smith’s criticisms. He notes that A et al assume one can only get infected once “before a vaccine is developed”. Infection and recovery confer stronger immunity than any vaccine (basically, without acquired immunity one doesn’t recover). The data now show almost all convalescent patients have neutralizing antibodies. I think the discussion was based on the small-c conservatism of science, in which one must stress that something isn’t proven even when it is very likely.

In particular, the alleged examples of losing immunity are no such thing. We get many colds because there are hundreds of different viruses which cause colds (very different viruses, with RNA and with DNA genomes, with and without envelopes, pretty much the whole array of viruses, which share the same evolutionary trick of being heat sensitive so that they stay in our noses and don’t kill us before we spread them).

Seasonal flu occurs because the flu mutates a lot and has a sloppy hemagglutinin protein whose sequence doesn’t matter for the virus but does matter for our antibodies. Coronaviruses have spike proteins whose sequences are critical (determining, for example, whether they infect bats, humans, or both).

Over decades immunity can fade. Over decades SARS CoV2 may mutate to evade the immunity of those who have recovered. Neither is likely to happen before a vaccine is developed (I can cheat and say the vaccine which will be massively used exists already – we just don’t know which of the dozens of candidate vaccines will be selected).

Smith also says that A et al ignore the structure of the population. Heh, indeed: that’s what I discussed above. A et al propose isolating the old and letting the young get infected. They assume it is possible to isolate the old from the young — not just from the few percent infected under lockdown followed by containment, but from the much larger share of young people needed for herd immunity.

He correctly notes that when discussing policy they ask what society should do, not what policymakers should do given that people break the rules and sue and stuff.

I agree that the main A et al weakness is the economics, not the epidemiology. They assume that lockdown causes productivity to go to 0 (so how did they manage to be so productive during a lockdown?). More importantly, they assume reopening implies no effect of Covid 19 on the economy. This is a very strong assumption. Smith presents data which pretty much prove it false. Fear of Covid 19 *before* governors spoke already had an impact roughly on the order of the lockdown.

Importantly, this does *not* mean that lockdowns have a small effect on the number infected before a vaccine is deployed. A small effect on behavior implies a small effect on Rt, but if a lockdown gets Rt down from 1.1 to 0.9, it has a huge effect on the caseload a few months from now (also weeks from now).
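The compounding point can be made with toy numbers (the starting caseload and the roughly 5-day generation time are assumptions for illustration):

```python
# Expected infections per generation under a constant Rt, starting from
# the same caseload. Infections compound geometrically: cases * Rt**g.
cases = 100_000
generations = 18  # about 3 months at an assumed ~5 days per generation

projected = {rt: cases * rt ** generations for rt in (1.1, 0.9)}
for rt, c in projected.items():
    print(f"Rt = {rt}: ~{c:,.0f} cases in generation {generations}")

ratio = projected[1.1] / projected[0.9]
print(f"ratio: ~{ratio:.0f}x")  # roughly a 37-fold difference
```

A 0.2 difference in Rt, straddling 1, is the difference between steady growth and steady decline, which is why "small effect on behavior" does not mean "small effect on caseload".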

Since US Rt appears to be very close to 1, the gain from making lockdowns somewhat stricter is likely to be huge, and the cost of partially opening up is also likely to be huge (except it won’t last – we will lock down again with more cases, and probably more days of lockdown and higher economic costs from the lockdown).

Anyway, the patient reader will notice that I don’t mind typing a lot on topics about which I know little.

The model with invisible squares is flat earth. N needs to go to infinity. Karl Marx pointed out the impossibility.

I is the immune; we do not know how much immunity we have. This is a version of the common cold with severe allergic reaction. Common colds usually have an immune period of about a third to half a year. The model does not count deaths, so N remains constant, which introduces about a 4% error in the medium term.

So, in all cases the model will be in error: the agents do not notice a dropping N in the latter case, and agents assume an infinite N in the former case.

This is the Lucas critique boiled down. The Lucas critique wants all agents to know the state of the system at equilibrium, no hidden information. In the maths this becomes a maladjustment for N, which is really a fuzzy constant.

The way to overcome this: some modelers have used infected, uninfected, and a tridac response from a third party. The tridac closes the surface and meets the Lucas critique. Lucas is all about open or closed. Closed means the agents are N-adapted.

The tridac response actually alters the square size in the flat earth model; it changes the square such that a tridac response can enter a square and reduce transmission to the next square to some very small probability. When that point is reached, any infected or uninfected person will seem to be an independent event, with no observable method to improve by observing a conditional.

With the tridac response, immunity is not infinite. Infected go back to immune, the immune go back to infected. The tridac makes the square size match the rates of these two variables. It is as if you were doing finite element summation but adjusted your finite element size to be the inverse of local volatility to reduce computer time.

To finish.

The real outcome of the studies would be how much moving around people need to stabilize the triadic response in N. We can get the equilibrium for the triadic pretty easily, but we cannot get the various paths people follow to reach equilibrium. There is no ‘trick’ method to get people onto a non-equilibrium path.