Thinking a bit more, I'm not sure this argument works, though I might have misunderstood.
In London, 5-10% have been infected. Prevalence currently is ~1 in 2000, R = 1, and let's assume transmission time is 1 week. That means that in 6 months' time, about another 1.5% of people will have been infected (30/2000).
If I get infected now, then there will be an extra chain of infections 30 people long.
I don't see how the overall prevalence levels block the chain I cause. If in 6 months, another 1.5% of people have been infected, that's not enough to meaningfully change R.
If 5% of people were infected now, and R = 1, then we'd be saturated and reach herd immunity in a matter of weeks, which would cut off the chain. But instead, the prevalence is sufficiently low that it seems like it is possible for each individual to cause a long chain.
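A minimal sketch of this arithmetic (using the figures above: ~1/2000 infectious prevalence, R = 1, a 1-week serial interval, and a 6-month window; the ~26-week chain is rounded to ~30 in the text):

```python
# Back-of-the-envelope chain arithmetic under the stated assumptions.
prevalence = 1 / 2000   # fraction of the population currently infectious
R = 1.0                 # each infection causes ~1 more on average
weeks = 26              # ~6 months at a 1-week serial interval

# With R = 1, one extra infection now seeds roughly one new case per week.
chain_length = R * weeks            # ~26 extra infections (rounded to ~30 above)

# Additional share of the population infected over the same window:
extra_share = prevalence * weeks    # ~1.3%: far too small to push R down much
```

The point of the sketch: the 1.3% of extra immunity accumulated over 6 months is nowhere near enough to break the chain, so the chain's expected length is set almost entirely by the time until the pandemic ends.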
comment by Linch · score: 2 (1 votes)
Right, I think the argument as written may not hold for the UK (and other locations with very low prevalence but R ~=1). My intuitions, especially in recent months, have mostly been formed from a US context (specifically California), where R has never been that far away from 1 (and current infectious prevalence closer to 0.5%).
That said, here are a bunch of reasons to argue against "Alice, an EA reading this forum post, being infected in London means Alice is responsible for 30 expected covid-19 infections (and corresponding deaths at 2020/08 levels)."
(For simplicity, this comment assumes an Rt ~= 1, a serial interval of ~one week, and a timeframe of consideration of 6 months)
1. Notably, an average Rt ~= 1 means that the median/mode is very likely 0. So there's a high chance that any given chain will terminate either before Alice infects anybody else, or soon afterwards. Of course, as EAs with aggregative ethics, we probably care more about the expectation than the median, so the case has to be made that we're less likely on average to infect others. Which brings us to...
2. Most EAs taking some precautions are going to be less likely to be infected than average, so their expected Rt is likely <1. See Owen's comment [EA(p) · GW(p)] and responses. Concretely, if you have a 1% annualized covid budget (10,000 microcovids), which I think is a bit on the high side for London, then you're exposing yourself to roughly 200 microcovids a week. Against a baseline population risk of 500 microcovids a week, that makes you ~40% as likely as the average person to catch covid-19 in a given week, which (assuming a squared term) means P(Alice infects others | Alice is infected) is also ~40%.
Notably a lot of your risk comes from model uncertainty, as I mentioned in my comment to Owen [EA(p) · GW(p)], so the real expected Rt(Alice) > 0.4
As I write this out, under those circumstances I think a weekly budget of 200 microcovids is possibly too high for Alice.
However, given that I live in Berkeley, I strongly suspect that E(number of additional people infected, other than Linch | Linch is infected) < 1 (especially if you ignore housemates).
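The microcovid arithmetic in point 2 can be sketched as follows (all figures are the comment's assumptions, not measured values):

```python
# Hypothetical risk-budget arithmetic for "Alice" under the stated assumptions.
annual_budget = 10_000                  # microcovids/year, i.e. ~1% annualized risk
weekly_budget = annual_budget / 52      # ~192, rounded to ~200 in the comment

baseline_weekly = 500                   # assumed average weekly risk, in microcovids
relative_risk = 200 / baseline_weekly   # 0.4: ~40% as likely as average to be infected

# "Squared term": assume the same caution applies to onward transmission,
# so P(Alice infects others | Alice is infected) is also ~0.4.
expected_Rt = relative_risk * 1.0       # with baseline Rt ~= 1, Rt(Alice) ~= 0.4
```

Note this is exactly where the model-uncertainty caveat in the next paragraph bites: the 0.4 is a lower bound conditional on the budget numbers being right.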
3. If your contacts are also cautious-ish people, many of whom are EAs and/or have read this post, they are likely to also take more precautions than average, so P(Alice's child nodes infecting others | Alice's child nodes being infected) is also lower than baseline.
4. There's also the classist aspect here, where most EAs work desk jobs and aren't obligated to expose themselves to lots of risks like being essential workers.
5. Morally, this will involve a bunch of double-counting. E.g., if you imagine a graph where Alice infects one person, her child node infects another person, etc., for the next 6 months, you have to argue that Alice is responsible for 30 infections, her child node for 29, etc. Both fully counterfactual credit assignment and proposed alternatives have some problems in general [EA · GW], but in this specific covid case I don't think an aggregate responsibility of 465 infections, when only 30 people will actually be infected, makes a lot of sense. (Sam made a similar point here, which I critiqued because I think there should be some time dependence, but I don't think time dependence should be total.)
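The double-counting arithmetic in point 5, sketched explicitly (a 30-link chain where each node is held fully responsible for everyone downstream):

```python
# If Alice's chain is 30 links long and each node in the chain is assigned
# full responsibility for all infections downstream of it, total attributed
# responsibility is 30 + 29 + ... + 1.
chain = 30
total_attributed = sum(range(1, chain + 1))   # 465 attributed vs 30 actual infections
```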
6. Empirical IFR rates have gone down, and are likely to continue doing so as a) medical treatment improves, b) people make mostly reasonable decisions with their lives (self-selecting on risk levels), and c) there's a reasonable probability of viral doses going down due to mask usage and the like.
7. As a related point to #3 and #6, I'd expect Alice's child nodes to be not just more cautious but also healthier than baseline (they are not randomly drawn from the broader population!).
8. There's suggestive evidence of substantial behavioral modulation (which is a large factor keeping Rt ~=1). If true, this means any marginal infection (or lack thereof) has less than expected effect as other people adjust behavior to take less or more risks.
Counterarguments, to argue that E(# of people infected | Alice is infected) >> 30:
1. Maybe there's a nontrivial number of worlds where London infections spike again. In those worlds, assuming a stable Rt~=1 is undercounting. (and at 0.05% prevalence, a lot of E(#s infected) is dominated by the tails).
2. Maybe 6 months is too short of an expected bound for getting the pandemic under control in London (again tail heavy).
3. Reinfections might mess up these numbers.
> In London, 5-10% have been infected
Where are you getting this range? All the estimates I've seen for London are >10%, eg this home study and this convenience sample of blood donors.
comment by Benjamin_Todd · score: 4 (2 votes)
These seem like interesting points, but overall I'm left thinking there is still a significant chance of setting off a long chain that wouldn't have happened otherwise. (And even a lowish probability of a long chain means the bulk of the damages fall on other people rather than yourself.)
I think the argument applies to California too. Suppose that 20% have already been infected, and 0.5% are infected currently, and R = 1.
Then in 6 months, an extra 0.5% * 24 = 12% will have been infected, so 32% will have had it in total. That won't be enough to create herd immunity & prevent a long chain.
An extra infection now would in expectation cause a chain of 6 * 4 * 1 = 24 infections, and if a vaccine then came and the disease were stamped out, then those 24 people wouldn't have had the disease otherwise.
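The California scenario above, as a sketch (assumed figures: 20% already infected, 0.5% infectious per week, R = 1, 6 months at 4 weeks/month):

```python
# Back-of-the-envelope for the California scenario under the stated assumptions.
already_infected = 0.20      # share who have already had it
weekly_incidence = 0.005     # 0.5% infected per week
weeks = 6 * 4                # 6 months * 4 weeks/month = 24 weeks

extra_share = weekly_incidence * weeks        # 0.5% * 24 = 12%
total_share = already_infected + extra_share  # 32%: still short of herd immunity

chain_from_one_infection = weeks * 1          # R = 1 -> ~24 extra infections
```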
What seems to matter is that we're in a "slow burn" scenario: we're a decently long way from ending it, R ~ 1, and we're not sure we're going to reach herd immunity as the end game.
PS My figure for London was a rough ballpark from memory - your figures are better. (Though like I say I don't think the argument is very sensitive to whether 10% or 30% have already had it.)
comment by Linch · score: 2 (1 votes)
> And even a lowish probability of a long chain means the bulk of the damages fall on other people rather than yourself
Sure, but how large? At an empirical IFR of 0.5%, and an expected chain size of 5 (which I think is a bit of an overestimate for most of my friends in Berkeley), you get to ~2.5% expected fatalities (assuming personal risk is negligible).
If you assume the local IFRs of your child nodes are smaller than the global IFR, you can easily cut this by another 2-5x.
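The expected-fatality arithmetic above, sketched with the comment's assumed figures:

```python
# Expected deaths caused per infection, under the stated assumptions.
ifr = 0.005                          # empirical IFR of 0.5%
chain_size = 5                       # assumed expected downstream infections
expected_deaths = ifr * chain_size   # 0.025, i.e. ~2.5% in expectation

# If the local IFR of your child nodes is 2-5x below the global IFR:
low = expected_deaths / 5            # ~0.5%
high = expected_deaths / 2           # ~1.25%
```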
These are all empirical questions, before we even get to double-counting concerns in moral aggregation.