Relevant and even prescient commentary on news, politics and the economy.

Hydroxychloroquine After Action Report

I was a vehement advocate of prescribing hydroxychloroquine (HCQ) off label while waiting for the results of clinical trials. I wasn’t all that much embarrassed to agree with Donald Trump for once. Now I feel obliged to note that my guess was totally wrong. I thought that the (uncertain) expected benefits were greater than the (relatively well known) costs.

The cost is that HCQ affects the heartbeat, prolonging the QT interval (from when the ventricles begin to depolarize to when they have repolarized and are ready to go again). This can cause arrhythmia, especially in people who already have heart problems. I understood that one might argue that all people with Covid 19 have heart problems, but didn’t consider that argument decisive (I probably should have).

The positive expected value of the uncertain benefits was based on strong in vitro evidence that HCQ blocks SARS Cov2 infection of human cells in culture (a publication in the world’s top general science journal).

Already in early May, there was evidence that any effect of HCQ on the rate of elimination of the virus must be small. In this controlled trial conducted in China, the null of no effect is not rejected. Much more importantly, the point estimates of the effects over time are all almost exactly zero. I considered the matter settled (although the painfully disappointed authors tried to argue for HCQ and that their study was not conclusive).

There are now four large retrospective studies all of which suggest no benefit from HCQ and two of which suggest it causes increased risk of death. I am going to discuss the two studies most recently reported.

One is a very large study (fairly big data goes to the hospital) published yesterday in The Lancet. In this study, patients who received HCQ had a significantly higher death rate, with a hazard of dying 1.335 times as high. The estimate comes from a proportional hazards model with a nonparametric baseline hazard and takes into account many risk factors, including, crucially, initial disease severity. It is also important that only patients who were treated within 48 hours of diagnosis were considered.

I am, of course, dismayed by this result. I am also puzzled, because it is quite different from the result obtained in a smaller retrospective study published in JAMA.

I think the practical lessons are that it seems unwise to give Covid 19 patients HCQ. Also maybe Robert Waldmann should be more humble. After the jump, I will discuss the two studies in some detail and propose an explanation of the difference in results.


The Amateur Epidemiologist II

I am interested in critiquing my understanding of the simplest SIR epidemiological model and also praising a critique of an effort to extend the model and guide policy developed by some very smart economic theorists.

First the useful point is that this post by Noah Smith is brilliant. As is typical, Smith argues that the useful implications of economic models depend on strong assumptions, so economic theory isn’t very useful. He praises simple empirical work instead.

I will discuss Smith contra Acemoglu, Chernozhukov, Werning and Whinston and Smith pro Sergio Correia, Stephan Luck and Emil Verner after the jump, but really Smith is better at presenting Smith than I am.

It made me wonder. In the simplest model, herd immunity stops an epidemic when 1-1/R0 of people have been infected. R0, as I recently learned and everyone now knows, is the number of people who would catch a pathogen from one infected person if no one had any resistance. Over time people develop resistance, so Rt < R0. If 1-1/R0 of people are resistant, then Rt = 1. A bit later Rt < 1, so each infected person will lead to a geometrically decreasing series of expected infections, so total infections would be 1-1/R0 plus a (small) constant times the number infected at that critical time t.

The SIR model has susceptible, infected and resistant. The idea is that if one has not been exposed, one is vulnerable. If one becomes infected, one carries and sheds the pathogen for a while and then one recovers. After one recovers, one is immune and won't get it again. The key assumption in the model is that for every infected person, R0 people are exposed (and infected if not immune) and that those people are chosen at random out of the entire population. It is necessary to assume that spread is equally likely from Mr A to Ms B whether they share a house or live on opposite sides of the country. This is a silly assumption, and the model is the old model used to teach kids and not, I'm sure, current research. It is also the model always used to guide public policy decisions (see me contra benchmark models http://rjwaldmann.blogspot.com/2016/10/benchmarks-model-and-hypotheses.html ). In population biology and evolutionary biology the silly assumption is called "panmictic"; in economics it is called "random matching". The assumption is made very often because doing without it can get one stuck in really hard math.

I would like to put a few minutes of effort into trying to figure out if the random matching assumption affects the level of infection needed for herd immunity (of course everyone knows it matters a lot). Below I will always assume R0 = 3.
Model 2: the population is actually divided into N equal subpopulations and there is no spread from one to the other. The disease starts with one case in one subpopulation. It will spread until a few more than two thirds of that subpopulation has been infected. Spread will stop when about 2/(3N) of the whole population is infected. The relaxation of the random matching assumption reduces the incidence needed for herd immunity by the factor N. This works for any N.

Model 3 is very like model 2. Half of people have innate immunity to the virus. People transmit the virus to on average 6 other people (of whom, on average, 3 have innate immunity). The virus will spread until 5 of 6 are immune. That means (5/6)-(1/2) = 1/3 must acquire immunity (by getting infected). So 1/3, not 2/3.

OK, can we be sure that the number who will get infected is less than 2/3? Consider Model 4. People live one to a square of an invisible chess board (which is a really big square); they transmit the pathogen to those with whom they share an edge. R0 = 3 (I get it from 1 neighbor and, early in the epidemic, give it to my other 3 not yet infected neighbors). How many people get infected? All of them, Katy. The currently infected are always in the border zone between the resistant and the vulnerable.

So R0 = 3 implies herd immunity will stop the spread at some level which ranges from 2/(3N), for N as big as I like, to 100%. R0 = 3 and a priori reasoning, without arbitrary assumptions which we know are false and make for convenience, tells us nothing at all. Without some assumption about mixing, matching, and population structure, the core SIR assumptions have no implications. Maybe economists and epidemiologists have more in common than we thought.
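For what it's worth, the two extreme cases can be checked in a few lines of code. This is a sketch under the stated assumptions (deterministic spread to all four neighbors on the chess board; the standard final-size equation for the random matching benchmark), not a serious epidemiological model:

```python
import math

def lattice_final_fraction(n=51):
    """Model 4: deterministic SIR on an n x n grid. Each infected square
    passes the pathogen to all susceptible edge-neighbors, then recovers.
    Returns the fraction of the population ever infected."""
    S, I, R = 0, 1, 2
    state = [[S] * n for _ in range(n)]
    state[n // 2][n // 2] = I
    frontier = [(n // 2, n // 2)]
    while frontier:
        nxt = []
        for (i, j) in frontier:
            state[i][j] = R
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n and state[a][b] == S:
                    state[a][b] = I
                    nxt.append((a, b))
        frontier = nxt
    return sum(row.count(R) for row in state) / (n * n)

def well_mixed_final_fraction(R0=3.0):
    """Random matching benchmark: solve the final-size equation
    z = 1 - exp(-R0 * z) by fixed-point iteration."""
    z = 0.5
    for _ in range(200):
        z = 1.0 - math.exp(-R0 * z)
    return z

print(lattice_final_fraction())   # 1.0: on the chess board, everyone gets it
print(well_mixed_final_fraction())  # ~0.94 for R0 = 3 (overshoots the 2/3 threshold)
print(1 - 1 / 3.0)                # the herd immunity threshold itself
```

The same R0 = 3 gives 100% infected on the lattice and about 94% under random matching (more than the 2/3 threshold because of overshoot), which is the point of the post.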


Antiviral Rumors

Tired: Remdesivir
Wired: Merimepodib
Inspired: Both

Merimepodib (of which I just read for the first time) is an inhibitor of an enzyme used to make guanosine. Viruses need a lot of guanosine (and other nucleosides) to reproduce, so it is an antiviral. It can be taken orally and there is a known safe dose.

A preprint asserts that a combination of Remdesivir and Merimepodib completely blocks SARS-CoV-2 replication in vitro.

Here is the abstract

The IMPDH inhibitor merimepodib provided in combination with the adenosine analogue remdesivir reduces SARS-CoV-2 replication to undetectable levels in vitro [version 1; peer review: awaiting peer review]
Natalya Bukreyeva, Rachel A. Sattler, Emily K. Mantlo, Timothy Wanninger, John T. Manning, Cheng Huang, Slobodan Paessler, Jerome B. Zeldis


Abstract
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the novel coronavirus responsible for the ongoing COVID-19 pandemic, which has resulted in over 2.5 million confirmed cases and 170,000 deaths worldwide as of late April 2020. The pandemic currently presents major public health and economic burdens worldwide. No vaccines or therapeutics have been approved for use to treat COVID-19 cases in the United States despite the growing disease burden, thus creating an urgent need for effective treatments. The adenosine analogue remdesivir (REM) has recently been investigated as a potential treatment option, and has shown some activity in limiting SARS-CoV-2 replication. We previously reported that the IMPDH inhibitor merimepodib (MMPD) provides a dose-dependent suppression of SARS-CoV-2 replication in vitro. Here, we report that a 4-hour pre-treatment of Vero cells with 2.5µM MMPD reduces the infectious titer of SARS-CoV-2 more effectively than REM at the same concentration. Additionally, pre-treatment of Vero cells with both REM and MMPD in combination reduces the infectious titer of SARS-CoV-2 to values below the detectable limit of our TCID50 assay. This result was achieved with concentrations as small as 1.25 µM MMPD and 2.5 µM REM. At concentrations of each agent as low as 0.31 µM, significant reduction of viral production occurred. This study provides evidence that REM and MMPD administered in combination might be an effective treatment for COVID-19 cases.


Remdesivir VIII

There is a severe Remdesivir shortage

On March 2 2020, I warned you that this was going to happen.

I did not warn about the opaque and arbitrary Trump administration policy, because the Trump administration is always “worse than you imagine possible even taking into account the fact that it is worse than you imagine possible” Brad DeLong 2003 or so referring to the last Republican presidency.

When are Americans going to notice the pattern?


The Amateur Epidemiologist

I frequently read a debate about whether, when assessing anti Covid 19 performance, one should look at deaths per capita or deaths plotted against days since the 1000th death. Like everything involving Americans, this has become a pro v contra Trump debate: clearly he wants deaths per capita (and the absolute number of tests performed).

The arguments are as follows. For the number of deaths against time since a certain number was reached, it is argued that all countries are at the stage where a negligible fraction of people are resistant (so growth is naturally exponential), so the relevant variable is the rate of growth of cases (or deaths): cases now divided by cases a week ago, and not by population.

The counterargument is that, come on, it’s obvious.

I think that it is natural to expect a transition from roughly the same growth (no matter what the population is) to cases (very roughly) proportional to population. All of this is during the negligible-fraction-resistant phase.

I am going to set up a straw man and knock him down with a silly, super super simple model. The straw man is that it is reasonable to assume that if two countries have the same number of cases at time t, then they will have similar numbers later. The silly model is that people live on a giant chess board (1000 squares on a side) and infect people who share an edge. This gives R_0 between 2 and 3. So say we start with two cases, one in each country. The straw man says there should be the same number of cases in each country in each subsequent period.

OK, now country one is the upper right quadrant and country two is the rest of the board. The straw man predicts the same number of cases. Or what if all is the same, but I draw the border so country 2 is the lower left quadrant and country 1 is the rest? Again the same number.

So straw man concludes that there are never any cases in the lower right or upper left. This can’t be right.

Now I will discuss a model which is slightly less silly. Assume most transmission is local, so the infected and the infector are in the same country. Assume people are infectious for one period and that, during that time, each infected person infects n nearby people. Also assume a lower rate of distant infection, so an infected person infects someone chosen at random in the whole world with probability m, where m < 1 < n.

This distant infection seeds a new outbreak with a new patient 1.

Assume that at t=1, each country has the same number of infected people.

There are countries indexed by i with caseload x_(i,t).

x_(i,t+1) = n x_(i,t) + m (sum_j x_(j,t)) (population_i / (sum_j population_j))

If m is much less than n, then at first the rate of growth in all countries is roughly n. But eventually x_(i,t) becomes proportional to population_i.

The reason is that, in each country, there is the same number of people infected in the outbreaks that had already started at time 1. However, the number of new outbreaks is proportional to population (each comes from someone chosen at random in the whole world). So the (expected) number of people infected in outbreaks which started after t=1 is proportional to population.

As t goes to infinity, the fraction of infected people infected in the outbreaks which had already started at t=1 goes to zero. So in the medium run (after a lot of long distance transmission but before there is a significant fraction of resistant people) the infection rates per capita converge.
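A minimal sketch of this two-regime behavior, iterating the recursion above. The values of n and m are made up for illustration; all that matters is that m is much less than n:

```python
def simulate(pops, x0, n=1.5, m=0.15, T=100):
    """Iterate x_{i,t+1} = n*x_{i,t} + m*(sum_j x_{j,t})*pop_i/total_pop.
    The dynamics are linear, so we renormalize each period: only the
    shares of world caseload across countries matter."""
    total = sum(pops)
    x = list(x0)
    for _ in range(T):
        s = sum(x)
        x = [n * xi + m * s * p / total for xi, p in zip(x, pops)]
        scale = sum(x)
        x = [xi / scale for xi in x]  # keep numbers bounded; shares unchanged
    return x

# Country 2 has 9x the population of country 1; both start with equal caseloads.
pops = [1.0, 9.0]
shares = simulate(pops, [1.0, 1.0])
per_capita = [s / p for s, p in zip(shares, pops)]
print(per_capita[0] / per_capita[1])  # approaches 1: per capita rates converge
```

Early on, both countries grow at roughly rate n from equal starts; in the medium run the new-outbreak term, proportional to population, dominates, and the per capita rates converge as the post argues.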

The prediction that initial growth is similar, conditional on similar numbers infected at t=1, sure fits the data (where t=1 is the time when the number of infections passes, say, 1000). Thus people could talk about “days behind Italy” and accurately predict the number of cases (and not change how many days behind different countries are).

But on the other hand, after a while, similar countries have rates roughly proportional to population. So, for example, the number of cases in the USA is similar to the number of cases in Europe.

The alternative is to claim this figure illustrates a pure coincidence.


Evidence Based Medicine

Here Trisha Greenhalgh, an actual expert, writes what I have been trying to write. In a Twitter thread.

Please click the link.

Two key tweets

But the principle of waiting for the definitive RCT [randomized controlled trial] before taking action should not be seen as inviolable, or as always defining good science. On the contrary, this principle, inappropriately applied, will distort our perception of what “good science” is.

This explains in less than 280 characters what I was trying to say in “What has science established”. I am quite sure that she is correctly describing a widespread view that “waiting for a definitive RCT” defines “good science”. Clearly this is a category error. Science does not tell us what to do. It might tell us what will happen if we do things, but it is not a moral code. “First do no harm” is ethics, not science. Good science requires recognizing what is not known; it does not have anything to do with the argument “we don’t know and therefore we should”. The principle, however applied, is not consistent with knowing what science is, let alone what good science is.

The scientific method, professional ethics and the Federal Food, Drug, and Cosmetic Act are all sets of rules that some people should follow. But they aren’t different aspects of the same entity; they aren’t different parts of one organic whole. Following Adam Smith, I think laws are internalized by moving from fear and obedience, to respect, to internalization, so that what was the letter of the law becomes the voice of conscience.

But, of course, the main point is

More specifically, taking a “primum non nocere” (= don’t act till we’ve got RCT certainty) stance when hundreds are dying daily makes no scientific or moral sense. It is neither scientifically nor morally reckless to try out policies that have a plausible chance of working


Accelerating Vaccine Development

How can deployment of a Sars Cov2 vaccine be sped up?

One important step is to mass produce candidate vaccines while testing for effectiveness. Usually, there is testing, then approval, then production. Since producing a lot and then testing makes no business sense, it is a project for states or charities. Unsurprisingly, the Gates Foundation is on it, committing to mass produce 6 candidate vaccines.

A remaining problem is that proof of effectiveness takes a long time (and a huge sample). Fortunately, even without a vaccine, most people, even in the highest risk occupations, don’t get Covid 19. Therefore it is hard to prove that vaccinated people do even better. An effectiveness trial will take time.

The most advanced candidate is the Oxford adenovirus based vaccine. Because a very similar vaccine against (I forget) MERS or Sars Cov1 is known to be safe, they can go straight to testing effectiveness. A phase 1/2 trial is recruiting volunteers. A phase III (that means very big) trial is being organized.

The phase 1/2 trial has an estimated primary completion date of May 2021. One step in the trial is to follow the subjects for 6 months to see if they catch Covid 19. I am citing the entry at clinicaltrials.gov.

One shortcut, clearly unethical, is to vaccinate and then challenge with the virus. The hope is that the experimental subjects won’t be killed. Acting on such a hope is no longer allowed (nor should it be). I am willing to volunteer to participate (a safe offer, because it won’t happen, and they would use young volunteers if it did).

Of course most people have few qualms about doing this with animals and there is already a vaccine (good old killed virus vaccine) known to protect Rhesus monkeys.

I think another approach would be to vaccinate and then test serum. One test would be whether the serum blocks infection in vitro, that is, contains neutralizing antibodies. It is known that the serum of (most) people who have had Covid 19 contains such neutralizing antibodies.

Another would be a trial of serum of vaccinated people as therapy for people with Covid 19. Serum from people who have recovered is being used; it is in short supply. A therapeutic trial comparing serum from unexposed people, from vaccinated people, and from people who have recovered from Covid 19 as therapy for people with Covid 19 makes sense. As we have seen, therapeutic trials don’t have to take very long (the course of the disease is within 28 days; the course of not catching the disease has been all of time until now).

Serum of recovered patients has not been proven to be effective, so this trial would be legal.


Remdesivir 7

The NIAID Trial of Remdesivir has closed early, because they concluded it was not ethical to treat people with placebo given what they consider proof that Remdesivir is effective. This is huge news (I am surprised that the Dow Jones only went up 2% not that I care about the Dow Jones).

This is a large double blind randomized controlled trial. The null of no effectiveness was rejected using the principal outcome measure. This is the sort of outcome which causes the FDA to approve drugs.

“The data shows that remdesivir has a clear-cut, significant, positive effect in diminishing the time to recovery,” said the institute’s director, Dr. Anthony Fauci. Results from the preliminary trial show remdesivir improved recovery time for coronavirus patients from 15 to 11 days.

Also, preliminary results from the Gilead 5 days vs 10 days study were announced. This is an odd study, as there was no control group. The result is that the null that 5 days of treatment are as good as 10 days of treatment was not rejected. This is very useful information, since there is likely to be a lot of excess demand for Remdesivir very soon. Many wondered why do a study without a control group. I think the aim was to get Remdesivir into as many people as possible as soon as possible. The study has 6,000 participants. This is in addition to the controlled trials, the expanded access program and individuals who have obtained compassionate use.

The scientific result is not critical. If Remdesivir doesn’t work, then 5 days are as good as 10 days and no one cares, as 0 days are also just as good, at 0 cost and with 0 side effects. However, managers at Gilead believed (with, it appears, good reason) that the net cost of the trial was hugely negative. Certainly people beg to participate. I think that in extreme cases it can be a good idea to use drugs based on preliminary (even in vitro) evidence while waiting for the results of the phase III controlled trial. The Gilead 5 day v 10 day trial is one example of this, and I applaud their clever approach to dealing with regulations.

Also, last and least, the disappointing Chinese study has been published in The Lancet. This is the study which caused widespread dismay and headlines including “Remdesivir fails”. Given the results of the NIAID trial, it appears that some people misunderstood the brief note accidentally published by WHO. More people correctly asserted that the question was still open. However, I think misunderstanding of the note (maybe also by the person who wrote it) is a good example of what happens when people try to use mathematical statistics but do not understand the Neyman-Pearson framework, that is, don’t know what a null hypothesis means or what failure to reject a null hypothesis implies. This is a very common elementary error (actually more universal than common).

I am not going to provide links, but many articles and especially many headlines contained the completely incorrect claim that the study showed that Remdesivir failed to perform better than a placebo. This is simply and obviously false (this is obvious without looking at the data just based on an understanding of what data can and can not imply). The correct statements are that the Chinese trial failed to show that Remdesivir performed better than a placebo or (equivalently) that Remdesivir failed to perform statistically significantly better than a placebo.
Removing the words “statistically significantly” makes a true statement absolutely false. It is not acceptable even in a headline.

In fact, in the study, on average patients treated with Remdesivir recovered more quickly than patients treated with placebo; however, the difference was not large enough to reject the null of no benefit at the 5% level. The effect was not huge, with a hazard ratio for improvement of 1.2. Notably, the ratio of 1.31 in the NIAID trial is not huge either. The difference between headline success and headline failure is almost entirely due to the sample size. This is a failure to understand what it means to test a null hypothesis against an alternative hypothesis. The statement that the Chinese study was underpowered does not even begin to approach a demonstration of an understanding of elementary mathematical statistics. I will try to explain after the jump.
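To illustrate how the same point estimate can be headline failure in a small trial and headline success in a large one, here is a rough sketch using the standard large-sample approximation SE(log HR) ≈ sqrt(1/d1 + 1/d2), where d1 and d2 are events per arm. The event counts below are made up for illustration, not taken from either trial:

```python
from math import log, sqrt
from statistics import NormalDist

def two_sided_p_for_hazard_ratio(hr, events_per_arm):
    """Approximate two-sided p-value for a hazard ratio, using the
    large-sample normal approximation for the log hazard ratio."""
    se = sqrt(2.0 / events_per_arm)       # sqrt(1/d1 + 1/d2) with d1 = d2
    z = log(hr) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Identical point estimate (HR = 1.2), different sample sizes:
print(two_sided_p_for_hazard_ratio(1.2, 100))  # p ~ 0.2: "drug fails"
print(two_sided_p_for_hazard_ratio(1.2, 500))  # p ~ 0.004: "drug works"
```

Nothing about the estimated effect changes between the two lines; only the sample size does, which is the point about the Chinese and NIAID headlines.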

An even more alarming headline result of the Chinese study was that Remdesivir did not have a statistically significantly greater effect on the viral load. The failure to help patients statistically significantly more could occur even if Remdesivir blocked viral replication in people as it does in vitro. The explanation is that late in the disease the often fatal trouble is due to the patients’ immune response not the virus directly. Cytokine storm can kill in the absence of a virus.

However, a failure to reduce the viral load would be terrible news. Of course the study didn’t show that. It showed that the reduction was not statistically significant, not that it was zero. Now there are two ways to compare treatments: the viral load after n days of treatment, or the reduction of the viral load from when treatment started. In principle, and with large enough samples, this makes little difference. The two treatments are randomized, so the average viral load at the beginning of the trial for the two treatment groups will go to the same number by the law of large numbers, which is true asymptotically (recall my personal slogan “asymptotically we’ll all be dead”). It did not apply in this case.
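A quick simulation of how randomization can fail to balance a wildly skewed baseline variable in samples of this size. The distribution (log10 viral loads normal with mean 6 and standard deviation 2) and the arm size are my assumptions for illustration, not the trial's data:

```python
import random
from statistics import mean

def baseline_imbalance(n_per_arm=79, sd_log10=2.0, trials=2000, seed=1):
    """Repeatedly randomize patients into two arms and measure the gap
    between the arms' mean log10 viral loads at baseline. Returns the
    fraction of simulated trials with at least a 3x and a 10x gap
    between the arms' geometric mean viral loads."""
    rng = random.Random(seed)
    over_3x = over_10x = 0
    for _ in range(trials):
        a = [rng.gauss(6.0, sd_log10) for _ in range(n_per_arm)]
        b = [rng.gauss(6.0, sd_log10) for _ in range(n_per_arm)]
        gap = abs(mean(a) - mean(b))   # difference in log10 units
        over_3x += gap >= 0.477        # log10(3)
        over_10x += gap >= 1.0         # log10(10)
    return over_3x / trials, over_10x / trials

print(baseline_imbalance())  # a 3x baseline gap is common; 10x is rare but happens
```

Under these assumptions, a threefold gap in baseline geometric means shows up in a sizeable minority of randomizations, and a tenfold gap is rare but not impossible: the law of large numbers is an asymptotic statement, and 79 per arm is not the asymptote.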

The average lower respiratory viral load (from now on, just viral load) at the beginning of the trial was not similar for those treated with Remdesivir and those treated with placebo. It was roughly 10 times higher for those treated with Remdesivir. Notice that they recovered more quickly in spite of having an initial viral load on average 10 times higher.

In this case it seems reasonable to look at each patient’s viral load after n days of treatment divided by his or her viral load after 0 days of treatment. The FDA will not allow doing what seems reasonable after looking at the data (nor should they). To avoid cherry picking, the test must be described *before* the data are collected. I think that, reasonably assuming the distributions of initial loads would be similar for the two groups, the researchers said they would look at viral load on day n, not at that ratio, so the placebo group started out with (on average) a huge head start.

Also, Remdesivir caught up. After 2 days of treatment, the average viral load was lower in Remdesivir treated patients. The ratio changed (roughly, eyeballing) 100 fold. This is the raw data which was reported as “Remdesivir fails to reduce the viral load”.

The authors tested whether this (apparently huge difference, but I am cherry picking after 2 days) was statistically significant and got a p level of 0.0672. This means that, even if allowed to divide by initial levels, they would not have rejected the null of no benefit at the 5% level. It would have been reported as “Remdesivir failed to affect the viral load”. This would have been crazy.

I think the particular issue of divide by initial load or not is less important than the point that a p level of 0.0672 is not the same as a p level of 0.5, yet it is treated as the same.

5% is not a scientific concept. Calling 5% statistically significant is an arbitrary choice due to the fact that the (smallest) 95% interval of a normal distribution is about 4 standard deviations wide. The idea that science requires one to look only at whether a number is greater or less than 0.05 is crazy and extremely influential.
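The arithmetic behind that arbitrary convention, sketched in a couple of lines:

```python
from statistics import NormalDist

# The central 95% of a standard normal distribution ends at z = 1.96,
# so the interval runs from -1.96 to +1.96 standard deviations.
z = NormalDist().inv_cdf(0.975)
print(z)       # ~1.96
print(2 * z)   # ~3.92: the "about 4 standard deviations" of the text
```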

The correct brief description of the results of the Chinese study is that, in the study, patients on Remdesivir did better than patients on placebo, but the difference was not significant at the 5% level. The direction the results point depends on the point estimate. Technicalities like p values are also important, as are technicalities like the power function. But considering p values the main result and power a technicality is an error. It led people to conclude that a ratio which was 100 after 2 days of treatment had been shown to be constant at 1.

I should point out that everything I am writing about hypothesis testing is elementary statistics which statisticians try again and again and again to explain.

After the jump I wonder why and type more about statistics.


Does recovering from Covid 19 cause immunity to new infection by Sars Cov2?

The WHO warns that it is not proven that we acquire immunity to Sars Cov2. If we don’t, we are pretty much doomed. However, I don’t see how people could recover without developing immunity, or develop immunity without immune memory, given how human immune systems work. Here, as often, the burden of proof is placed on the optimistic hypothesis. Here again (third time) I dare to be optimistic (third time’s a charm).

At Daily Kos, Mark Sumner has an excellent post on the topic. I cut and pasted my comment below. I agree with everything he wrote except the part about the common cold (which is a syndrome caused by one of dozens or hundreds of different known viruses). In particular, a key open question is whether all people who recover are immune but some are more immune than others (as he notes, it is known that some have low anti Sars Cov2 antibody levels). Also, it isn’t known if and when there will be new strains which evade the currently existing neutralizing antibodies.

His post is brief and too good to excerpt. Just click this link (then come back to read my post or not — it’s not at the same level)

First, there is strong evidence that exposure to Covid causes people to make neutralizing antibodies; you mention this. Here is my first google hit. In that article, the cell making such an antibody was immortalized by fusion with a leukemic cell, and so there is a neutralizing monoclonal which can, in principle, be mass produced and used as a treatment. The problem (as usual) is that it isn’t proven that it works, and such proof takes a lot of time and money.

I don’t think there are viruses which infect us again and again. We are infected by new viruses (here I include mutants of the old virus, as in seasonal flu). The common cold is a syndrome, not a virus. There are dozens to hundreds of known viruses which cause colds (with similar symptoms). By your definition we get many different viral diseases with similar symptoms.

There is certainly a risk that Sars Cov2 will mutate in a way such that it is no longer neutralized by currently neutralizing antibodies,   and so there will be a new epidemic even if enough people are vaccinated to provide herd immunity to Sars Cov2 1.0 (influenza does this which is why there is seasonal flu).  It is also possible that this won’t happen (there wasn’t a smallpox 2.0 which infected people who were vaccinated).

Importantly, antisera against Sars Cov2 can be tested (in cell culture) against isolates from different continents. So far, I think, the evidence is that sera (polyclonal antibodies) which block one strain block all existing strains (which sure doesn’t mean all strains which will ever exist). There is a vaccine (a good old fashioned killed virus vaccine) which has protected Rhesus macaques from infection by Sars Cov2 squirted into their lungs.

If our immune system can’t stop Sars Cov2, then how do people recover? Why has anyone gone from positive to negative?

Again, we don’t get the same common cold virus again and again. There are many different cold viruses. Most are picornaviruses (small, single stranded RNA genome, like polio); some are coronaviruses, and some are adenoviruses (which cause fierce colds).

I think the WHO is putting the burden of proof on the hypothesis that whatever happens in us to beat one Sars Cov2 infection lasts for a while. If this were not true, it would be unique (I think).

update: there is now more direct evidence in a non peer reviewed preprint here

Neutralizing antibody responses to SARS-CoV-2 in a COVID-19 recovered patient cohort and their implications
Fan Wu, Aojie Wang, Mei Liu, Qimin Wang, Jun Chen, Shuai Xia, Yun Ling, Yuling Zhang, Jingna Xun, Lu Lu, Shibo Jiang, Hongzhou Lu, Yumei Wen, Jinghe Huang

“neutralizing antibodies (NAbs)”

Plasma collected from 175 COVID-19 recovered patients with mild symptoms were screened using a safe and sensitive pseudotyped-lentiviral-vector-based neutralization assay. [skip]
SARS-CoV-2-specific NAbs were detected in patients from day 10-15 after the onset of the disease and remained thereafter. [skip]
Notably, among these patients, there were ten patients whose NAb titers were under the detectable level of our assay (ID50: < 40)

So according to Wu et al., 165 out of 175 people had produced antibodies which prevent infection of cells. So this is evidence that people recover because of acquired immunity. On the other hand, not all subjects had measurable quantities of the antibodies (and the measure is what dilution blocks virus entry). So an exposed-and-immune passport would be unreliable: those 10 people are assessed as still vulnerable. This isn’t the end of humanity (165/175 is enough for herd immunity to get R under 1). But it matters for policy (as WHO insisted). Also, “remained thereafter” doesn’t mean forever. It can only mean up until now, which is, at most, months after recovery (even if the blood was collected as soon as some people had recovered).
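A back-of-the-envelope sketch of that herd immunity arithmetic, assuming R0 = 3 and treating "no detectable NAbs" as "not immune" (which, as noted above, is far from certain):

```python
R0 = 3.0
p_nab = 165 / 175        # fraction of recovered patients with detectable NAbs
threshold = 1 - 1 / R0   # immune fraction needed so that Rt = R0*(1 - immune) < 1
infected_needed = threshold / p_nab
print(threshold)         # 2/3 of the population must be immune
print(infected_needed)   # ~0.71 must be infected if only 165/175 become immune
```

Under these (loud) assumptions, imperfect seroconversion raises the infected fraction needed for herd immunity from 2/3 to about 71%, which is why the 10 non-responders matter for policy but are not the end of humanity.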

Good news.
