Posts

Are there other events in the UK before/after EAG London? 2019-08-11T06:38:12.163Z · score: 9 (7 votes)

Comments

Comment by michaela on Are there other events in the UK before/after EAG London? · 2019-08-12T09:33:53.406Z · score: 1 (1 votes) · EA · GW

Thanks, I'll post there too!

Comment by michaela on Invertebrate Welfare Cause Profile · 2019-08-07T10:49:48.250Z · score: 2 (2 votes) · EA · GW

Very interesting post!

Minor point: It seems to me that Charity Entrepreneurship could be worth mentioning in the section on this cause's level of attention vs neglectedness in the EA space. Invertebrates aren't the main focus of their animal-related work, but they did investigate "wild bugs" and rank them as "high priority", and they released a report on ethical pest control focusing on both insecticides and rodenticides. Also, two of their top charity ideas, "institutional ask research" and "animal careers - experiments", are animal-general and thus could potentially help with invertebrate welfare, though they aren't necessarily focused on it.

Comment by michaela on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-08-03T06:16:47.798Z · score: 1 (1 votes) · EA · GW

This continues to be a very interesting series - thanks for writing it.

Two minor corrections (I think):

  • I believe the formulas for percent change of precipitation (stated as -0.0485 * (teragrams of smoke) + -0.938) and for percent change of temperature (stated as -0.276 * (teragrams of smoke) + -5.55) are the wrong way around. Here I've shown what values the formulas generate, alongside the values you drew from Toon et al., to illustrate that the generated "temperature" values line up fairly well with Toon et al.'s precipitation values and vice versa (see also the quick check sketched after this list): https://docs.google.com/spreadsheets/d/1OaXi3Fpr4q1WfwfkodVWTeUeR_Zoj9CYKZ0Jtgr1-jc/edit?usp=sharing
  • It seems that the wrong Toon et al. figure (Figure 6, about fatalities) is shown where Figure 12 (about smoke) is meant to be (after "...the amount of smoke generated by a nuclear exchange of a given size can be represented reasonably well by simple algebraic functions"). I imagine the Figure 12 shown in Appendix B is the one that was meant to be used in that earlier place where Figure 6 currently appears.
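
For what it's worth, here's a minimal Python sketch of the check behind the first point. It just evaluates the two formulas exactly as stated in the post, at a few illustrative smoke amounts (the inputs and function names are mine, not from the post), so the outputs can be compared against Toon et al.'s values as in the linked spreadsheet:

```python
# The two linear fits exactly as stated in the post. If the labels are
# swapped, the "precipitation" formula should track Toon et al.'s
# temperature values, and vice versa.
def precipitation_change_pct(smoke_tg):
    """Percent change in precipitation, as stated in the post."""
    return -0.0485 * smoke_tg - 0.938

def temperature_change_pct(smoke_tg):
    """Percent change in temperature, as stated in the post."""
    return -0.276 * smoke_tg - 5.55

# Illustrative smoke amounts in teragrams (my choice, not from the post)
for smoke_tg in (5, 50, 150):
    print(smoke_tg,
          round(precipitation_change_pct(smoke_tg), 2),
          round(temperature_change_pct(smoke_tg), 2))
```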

Comment by michaela on How many people would be killed as a direct result of a US-Russia nuclear exchange? · 2019-07-20T01:08:52.382Z · score: 1 (1 votes) · EA · GW

Very minor point - in the table in Appendix A, a value of “Likely small (3)” is shown where I imagine it should be “Likely small (1)”. (This is for “The US, France, and the UK all keep their nuclear weapons…”) I imagine fixing this would also change the overall “score” from -1.5 to -0.5, because the multiplication for that row would change from -0.5 * 3 to -0.5 * 1.

Comment by michaela on How many people would be killed as a direct result of a US-Russia nuclear exchange? · 2019-07-20T01:04:38.347Z · score: 1 (1 votes) · EA · GW

Thanks for this series - it's very interesting so far.

Whether a counterforce second strike by Russia would actually cause fewer deaths than a first strike is conditional on 1) the US striking first, 2) Russia choosing not to launch on warning, and 3) Russia being substantially under-prepared for a first strike. My best guess is that the probability of all three of these being the case is fairly low. If we naively assume that the probability that the US strikes first is 50%, the probability that Russia chooses not to launch on warning is also 50%, and that the US counterforce strike destroyed the ‘center value’ of the range for the number of nuclear weapons that might be destroyed (870), or 79% of the number of warheads I expect Russia would use against the US during a counterforce _first_ strike (1,100), I would expect that about 5% fewer deaths would be caused by a Russian second strike than by a Russian first strike (0.5 * 0.5 * 0.21).

There’s a good chance I’m just misunderstanding this, but shouldn’t that be 19.75% fewer deaths in expectation? 0.5 * 0.5 * 0.79 (= 0.1975), rather than 0.5 * 0.5 * 0.21 (= 0.0525), because the number of weapons used (and thus the number of deaths, if we stick with your assumption of linearity) would go down by 79%, to 21% of the original, rather than going down by only 21%. (Again, it’s very possible I’m misunderstanding the maths here.)
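
To make the comparison concrete, here's a minimal sketch of the two calculations (the probabilities and the 79% figure come from the quoted passage; the variable names are mine):

```python
p_us_strikes_first = 0.5   # assumed probability the US strikes first
p_rides_out_attack = 0.5   # assumed probability Russia doesn't launch on warning
fraction_destroyed = 0.79  # 870 of ~1,100 warheads destroyed, per the post

# The post's calculation multiplies by the *surviving* fraction (0.21)...
original_reduction = p_us_strikes_first * p_rides_out_attack * (1 - fraction_destroyed)

# ...but the reduction in weapons used (and, assuming linearity, in deaths)
# should be the *destroyed* fraction (0.79)
suggested_reduction = p_us_strikes_first * p_rides_out_attack * fraction_destroyed

print(round(original_reduction, 4))   # 0.0525, i.e. ~5% fewer deaths
print(round(suggested_reduction, 4))  # 0.1975, i.e. 19.75% fewer deaths
```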

Also, wouldn't it actually be that a Russian counterforce strike in general (not a Russian counterforce second strike) would cause 19.75%* fewer deaths in expectation, given that a second strike may involve fewer nuclear weapons? Put another way, would a Russian counterforce second strike actually cause 39.5% (two times 19.75%) fewer deaths in expectation than a Russian counterforce first strike? I ask because the first multiplication by 0.5, representing a 50% chance of the US striking first, seems to account for restricting to the half of possible worlds in which Russia strikes second, and thus seems unnecessary if you're already conditioning on a Russian second strike. This seems relevant because, if both that and the above are the case, then the model of the total number of deaths expected from both sides' weapons in an exchange should be adjusted to reduce the deaths from Russian weapons by 19.75%. (As opposed to reducing them by just 9.875%, which is what you'd do if the 19.75% represented the reduction in weapons used conditional on Russia striking second, rather than in general.)
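
And here's a sketch of the conditional-vs-unconditional distinction I'm trying to draw (same numbers and caveats as in the sketch above):

```python
p_us_strikes_first = 0.5   # only relevant before conditioning on who strikes first
p_rides_out_attack = 0.5
fraction_destroyed = 0.79

# Conditional on Russia striking second, the US has already struck first, so
# only the launch-on-warning uncertainty remains:
reduction_given_second_strike = p_rides_out_attack * fraction_destroyed
print(round(reduction_given_second_strike, 4))  # 0.395, i.e. 39.5% fewer deaths

# Unconditionally (a Russian counterforce strike "in general"), halve again
# for the 50% chance that the US strikes first at all:
reduction_in_general = p_us_strikes_first * reduction_given_second_strike
print(round(reduction_in_general, 4))  # 0.1975, i.e. 19.75% fewer deaths
```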

Apologies if I haven't explained that very clearly or I'm misunderstanding your reasoning.

*Or ~5%, if my calculations to get 19.75% are mistaken.

Comment by michaela on Is EA Growing? EA Growth Metrics for 2018 · 2019-07-12T04:58:00.033Z · score: 1 (1 votes) · EA · GW

Thanks for this post.

One question: Was the last sentence of footnote 35 meant to be the last sentence of footnote 34? The sentence is "This would suggest that the EA Survey growth might stagnate or decline in the future as sources of people finding out about the EA Survey also stagnate", and the survey is the focus of footnote 34, but not of footnote 35.

Comment by michaela on Which World Gets Saved · 2018-12-12T00:16:24.369Z · score: 1 (1 votes) · EA · GW

I was also using the nuclear war example just to illustrate my argument. You could substitute in any other catastrophe/extinction event caused by violent human actions. Again, the same idea - that "human nature" is variable and (most importantly) malleable - would suggest that the potential for this extinction event provides relatively little evidence about the value of the long-term future. And I think the same would go for anything else determined by other aspects of human psychology, such as short-sightedness rather than violence (e.g., ignoring the consequences of AI advancement or carbon emissions), because, again, that wouldn't show we're irredeemably short-sighted.

Your mention of "one's beliefs about technological development" does make me realise I'd focused only on what the potential for an extinction event might reveal about human psychology, not on what it might reveal about other things. But most relevant other things that come to mind seem like they'd collapse back to human psychology, and thus my argument would still apply in a somewhat modified form. (I'm open to hearing suggestions of things that wouldn't, though.)

For example, the laws of physics seem to me likely to determine the limits of technological development, but not whether its tendency is to be "good" or "bad". That seems much more up to us and our psychology, and is thus a tendency that could change if we change ourselves. The same goes for things like whether institutions are typically effective; that isn't a fixed property of the world, but rather a result of our psychology (as well as our history, current circumstances, etc.), and is thus changeable, especially over very long time scales.

The main way I can imagine being wrong is if we turn out to be essentially unable to substantially shift human psychology. But it seems to me extremely unlikely that that'd be the case over a long time scale, especially if we're willing to do things like change our biology if necessary (and obviously with great caution).

Comment by michaela on Which World Gets Saved · 2018-12-11T09:07:30.940Z · score: 1 (3 votes) · EA · GW

Very interesting post.

But it seems to me that this argument assumes a relatively stable, universal, and fixed "human nature", and that's a quite questionable assumption.

For example, the fact that a person was going to start a nuclear war that would've wiped out humanity may not give much evidence about how people tend to behave if, in reality, behaviours are strongly influenced by situations. Nor would it give much evidence about how people in general tend to behave if behaviours vary substantially between different people. And even if behavioural patterns are quite stable and universal, if they're at least quite manipulable, then the fact that that person would've started that war only gives strong evidence about current behavioural tendencies, not about what we're stuck with in the long term. (I believe this is somewhat similar to Cameron_Meyer_Shorb's point.)

Under any of those conditions, the fact that that person would've started that war provides little evidence about typical human behavioural patterns in the long term, and thus little evidence about the potential value of the long-term future.

I suspect that there's at least some substantial stability and universality to human behaviours. But, on the other hand, there's certainly evidence that situational factors are often important and that different people vary substantially (https://www.ncbi.nlm.nih.gov/pubmed/20550733).

Personally, I suspect the most important factor is how manipulable human behavioural patterns are. The article cited above seems to show the huge degree to which "cultural" factors influence many behavioural patterns, even things we might assume are extremely basic or biologically determined, like susceptibility to optical illusions. And such cultural factors typically aren't even purposeful interventions, let alone scientific ones.

It's of course true that a lot of scientific efforts to change behaviours fail, and that even when they succeed they typically don't succeed for everyone. But some things have worked on average. And the social sciences working on behavioural change are very young in the scheme of things, and their methods and theories are continually improving (especially after the replication crisis).

Thus, it seems very plausible to me that even within a decade we could develop very successful methods of tempering violent inclinations, and that in centuries far more could be done. And that's all just focusing on our "software" - efforts targeting our biology itself could conceivably accomplish far more radical changes. That is, of course, if we don't wipe ourselves out before this can be done.

I recently heard someone on the 80,000 Hours podcast (I can't remember who or which episode, sorry) discussing the idea that we may not yet be ready, in terms of our "maturity" or wisdom, for some of the technologies that seem to be around the corner. They gave the analogy that we might trust a child with scissors but not with an assault rifle. (That's a rough paraphrase.)

So I think there's something to your argument, but I'd also worry that weighting it too heavily would be somewhat akin to letting the child keep the gun based on the logic that, if something goes wrong, that shows the child would've always been reckless anyway.