Posts

Key points from The Dead Hand, David E. Hoffman 2019-08-09T13:59:09.864Z · score: 62 (29 votes)

Comments

Comment by kit on Key points from The Dead Hand, David E. Hoffman · 2019-08-14T07:14:38.794Z · score: 1 (1 votes) · EA · GW

I stand corrected. I think those quotes overstate matters a decent amount, but indeed the security of fissile material is a significantly more important barrier to misuse.

Comment by kit on Key points from The Dead Hand, David E. Hoffman · 2019-08-10T08:41:03.791Z · score: 9 (4 votes) · EA · GW

Thanks! Here are some places you might start. (People who have done deeper dives into nuclear risk might have more informed views on what resources would be useful.)

  • Baum et al., 2018, A Model For The Probability Of Nuclear War makes use of a more comprehensive list of (possible) close calls than I've seen elsewhere.
  • FLI's timeline of close calls is a more (less?) fun display, which links on to more detailed sources. Note that many of the sources are advocacy groups, and they have a certain spin.
  • A project which might interest you: pick a few case studies that seem important and follow the citations to the most direct historical accounts, to better understand how close a call each really was.
  • I thought this interview with Samantha Neakrase of the Nuclear Threat Initiative was helpful for understanding what things people in the nuclear security community worry about today.

Some broader resources

  • The probability of nuclear war is only one piece of the puzzle – even a nuclear war would probably not end the world, thankfully. I found the recent Rethink Priorities nuclear risk series (#1, #2, #3, #4, #5, especially #4) very helpful for putting more of the pieces together.
  • This Q&A with climate scientist Luke Oman gets across some key considerations very efficiently.

I'm also glad that you interpret the discussion of the Petrov incident as 'some evidence against'. That's about the level of confidence I intended to convey.

Comment by kit on How urgent are extreme climate change risks? · 2019-08-08T19:24:11.404Z · score: 2 (2 votes) · EA · GW

Open Phil (then GiveWell Labs) explored climate change pretty early on in their history, including the nearer-term humanitarian effects. Giving What We Can also compared climate change efforts to health interventions. (Each page is a summary page which links to other pages going into more detail.)

Comment by kit on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-03T12:14:48.786Z · score: 16 (12 votes) · EA · GW

I'm very excited to see people doing empirical work on which of the things we care about are in fact dominated by their extremes. At least after adjusting for survey issues, statements like

The bottom 90% accounts for 30% of incidents

seem to be a substantial improvement on theoretical arguments about properties of distributions. (Personal views only.)

Comment by kit on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-03T12:13:30.322Z · score: 4 (4 votes) · EA · GW

I'm less optimistic about the use of surveys on whether people think tryptamines will/did work:

  • 'And do they work?' doesn't seem like a question that will be accurately answered by asking people whether it worked for them. (Reversion to the mean being my main concern.)
  • Non-users are asked whether tryptamines 'could be effective for treating your cluster headaches', which could be interpreted as a judgement on whether it works for anyone or whether it will work for them (for which the correct answer seems to be 'maybe'). Users are asked whether it worked for them specifically. Directly computing the difference between these answers doesn't seem meaningful.

Comment by kit on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T08:40:58.163Z · score: 15 (7 votes) · EA · GW

Huh. The winning response, one of the six early responses, also engages explicitly with the arguments in the main post in its section 1.2 and section 2. This one discussed things mentioned in the post without explicitly referring to the post. This one summarises the long-term-focused arguments in the post and then argues against them.

I worry I'm missing something here. Dismissing these responses as 'cached arguments' seemed stretched already, but the factual claim made to back that decision up, that 'None of these engaged with the pro-psychedelic arguments I made in the main post', seems straightforwardly incorrect.

Comment by kit on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T22:32:58.055Z · score: 19 (8 votes) · EA · GW

I also came to note that the request was for 'the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour'.

However, I also wrote one of the six responses referred to, and I contest the claim that

None of these engaged with the pro-psychedelic arguments I made in the main post

The majority of my response explicitly discusses the weakness of the argumentation in the main post for the asserted effect on the long-term future. To highlight a single sentence which seems to make this clear, I say:

I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is.

I also referred to arguments made by Michael Plant, which in my amateur understanding appeared to be stronger than those in the post. To me, it seems good that others engaged primarily with arguments such as Michael's, because engaging with stronger arguments tends to lead to more learning. When I drafted my submission, I considered whether it was unhealthy to primarily respond to what I saw as weaker arguments from the post itself. Yet, contra the debrief post, I did in fact do so.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-28T14:10:05.951Z · score: 5 (3 votes) · EA · GW

On the specific questions you're asking about whether empirical data from the Kuwaiti oil field destruction is taken into account: it seems that the answer to each is simply 'yes'. The post says that the data used is adapted from Toon et al. (2007), which projects how much smoke would reach the stratosphere specifically. The paper explicitly considers that event and what the model would predict about it:

Much interest in plume rise was directed at the Kuwaiti oil fires set by Iraqi forces in 1991. Small (1991) estimated that oil well fires produce energy at a rate of about 300 MW. Since the wells were separated by roughly 1 km, they represent a very small energy source relative to either forest fires or mass fires such as occurred in Hiroshima. Hence these oil well smoke plumes would be expected to be confined to the boundary layer, and indeed were observed within the boundary layer during the Persian Gulf War.

The details of the paper could be wrong – I'm a complete amateur and would be interested to hear the views of people who've looked into it, especially given substantial reliance on this particular paper in the post – but it seems to have already considered the things you raise.

However, this still got me thinking. Why look at smoke from burning oil fields, with their much lower yields, when one could look at smoke from Hiroshima or Nagasaki? It's a grim topic, but more relevant for projecting the effects of other nuclear detonations. After a surprisingly long search, I found this paper, which attempts to measure the height of the 'mushroom cloud' over Hiroshima (not quite what we're looking for). Fortunately for me, they seem to think that Photo '(a) Around Kurahashi Island' is another photo of the 'mushroom cloud', but in fact it appears to be the cloud produced by the resulting fires. This explains their surprising result:

The height of the cloud in Figure 1 (a) is estimated to be about 16 km. This largely exceeds the 8 km that was previously assumed.

16km (range 14.54-16.88km) is well into the stratosphere across Russia and most of the US, so it seems that history is compatible with theories which say that weapons on the scale of 'Little Boy' (13–18kt) are likely to cause substantial smoke in the stratosphere.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-28T14:07:19.427Z · score: 1 (1 votes) · EA · GW

On your general point about paying attention to political biases, I agree that's worthwhile. A related quibble which might matter to you: the Wikipedia article you're quoting seems to attribute the incorrect predictions to TTAPS, but I could only trace them to Sagan specifically. I could be missing something due to dead/inaccessible links.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-20T18:49:11.058Z · score: 21 (8 votes) · EA · GW

There are a whole bunch of things I love about this work. Among other things:

  • An end-to-end model of nuclear winter risk! I'm really excited about this.
  • The quantitative discussions of many details and how they interact are very insightful. e.g. ones which were novel for me included how exactly smoke causes agriculture loss, and roughly where the critical thresholds for agricultural collapse might be. The concrete estimates for the difference in smoke production between counterforce and countervalue, which I knew the sign of but not the magnitude, are fascinating and make this much clearer.
  • I really appreciate the efforts to make the (huge) uncertainty transparent, notably the list of simplifying assumptions, and running specific scenarios for heavy countervalue targeting. Most of all, though, the Guesstimate model is remarkably legible, which makes absorbing all this info so much easier.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-20T18:41:44.054Z · score: 20 (7 votes) · EA · GW

I have one material issue with the model structure, which I think may reverse your bottom line. The scenario 'full-scale countervalue attack against Russia' has a median smoke estimate of 60Tg and a scenario probability of 0.27 x 0.36 = ~0.1. This means the probability of total smoke exceeding 60Tg has to be >5%, but 'Total smoke generated by a US-Russia nuclear exchange' calculates a probability of only 0.35% for >60Tg smoke.

What seems to be going on is that the model incorporates estimated smoke from each countervalue targeting scenario as {scenario probability x scenario amount of smoke} in all Monte Carlo samples, when I figure you actually want it to count {scenario amount of smoke} in the appropriate proportion of samples. This would give a much more skewed distribution.
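
To illustrate the difference, here is a minimal sketch of the two sampling approaches. The 10% scenario probability echoes the figure above, but the lognormal shapes and medians are hypothetical stand-ins rather than the model's actual distributions; the point is only that expectation-mixing collapses the tail that scenario sampling preserves.

```python
import random

random.seed(0)
p_scenario = 0.1  # probability of the countervalue scenario (illustrative)
n = 100_000

def smoke_if_scenario():
    return random.lognormvariate(4.1, 0.5)   # median ~60 Tg, illustrative shape

def smoke_otherwise():
    return random.lognormvariate(2.3, 0.5)   # median ~10 Tg, illustrative shape

# Expectation-mixing: every sample blends in p_scenario x (scenario smoke)
mixed = [p_scenario * smoke_if_scenario() + (1 - p_scenario) * smoke_otherwise()
         for _ in range(n)]

# Scenario sampling: the scenario actually occurs in ~10% of samples
sampled = [smoke_if_scenario() if random.random() < p_scenario else smoke_otherwise()
           for _ in range(n)]

frac = lambda xs: sum(x > 60 for x in xs) / len(xs)
print(f"P(>60 Tg), expectation-mixing: {frac(mixed):.3%}")
print(f"P(>60 Tg), scenario sampling:  {frac(sampled):.3%}")
```

With these made-up inputs, expectation-mixing puts almost no mass above 60Tg, while scenario sampling puts roughly half the scenario probability (~5%) there, mirroring the discrepancy described above.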

Sampling properly (as I see it) seems to be a bit fiddly in Guesstimate, but I put something together for Smoke that would be generated as a result of countervalue targeting against the US in an 'Alternative 3.4.4' section here. (I figured making a copy would be the easiest way to communicate the idea.)

I also redirected the top-level smoke calculation to point to the above alternate calculation to see what difference it makes. (Things I've added are marked with [KH] in the copy to make the differences easy to spot.) Basically every distribution now has two humps: either there is a countervalue strike and everything has a high chance of collapsing, or there isn't and things are awful but probably recoverable. Some notable conclusions that change:

  • ~15% chance of getting into the 50Tg+ scenarios that you flag as particularly concerning, up from ~1%.
  • ~13% chance that corn cultivation becomes impossible in Iowa, and 6% chance that Ukraine cannot grow any of the crops you focus on, both from <1%. I don't know whether still being able to grow some amount of barley helps much.
  • Your bottom-line ~5% chance of 96% population collapse jumps to ~16%, with most of that on >99% collapse. On the bright side, expected deaths drop by ~1bn.

Obviously, all these numbers are hugely unstable. I list them only to illustrate the difference made by sampling in this way, not to suggest that the actual numbers should be taken super seriously.

As above, these changes are just from adjusting the sampling for Smoke that would be generated as a result of countervalue targeting against the US. Doing the same adjustment for Smoke that would be generated as a result of countervalue targeting against Russia would add additional risk of extreme nuclear winter. For example, I think your model would imply a few % chance of all the crops you focus on becoming impossible to grow in both Iowa and Ukraine.

Before exploring your work, I hadn't understood just how heavily extinction risk may be driven by the probability of a full-scale countervalue strike occurring. This certainly makes me wonder whether there's anything one can do to specifically reduce the risk of such strikes without too significantly increasing the overall risk of an exchange. In general, working through your model and associated text and sources has been super useful to my understanding.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-20T09:54:37.618Z · score: 2 (2 votes) · EA · GW

Neat. Happy to be a little bit helpful!

Comment by kit on How many people would be killed as a direct result of a US-Russia nuclear exchange? · 2019-06-19T19:26:39.212Z · score: 3 (2 votes) · EA · GW

Agreed. The discussion of the likelihood of countervalue targeting throughout this piece seems very important if countervalue strikes would typically produce considerably more soot than counterforce strikes. In particular, the idea that any countervalue component of a second strike would likely be small seems important and is new to me.

I really hope the post is right that any countervalue targeting is moderately unlikely even in a second strike for the countries with the largest arsenals. That one ‘point blank’ line in the 2010 NPR was certainly surprising to me. On the other hand, I'm not compelled by most of the arguments as applied to second strikes specifically.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-19T18:09:54.986Z · score: 19 (6 votes) · EA · GW

This is fascinating, especially with details like different survivability of US and Russian SLBMs. My main takeaway is that counterforce is really not that effective, so it remains hard to see why it would be worth engaging in a first strike. I'd be interested to hear if you ever attempt to quantify the risk that cyber, hypersonic, drone and other technologies (appear to) change this, or if this has been attempted by someone already.

Relatedly:

If improvements in technology allowed either country to reliably locate and destroy those targets, they would be able to eliminate the others’ secure second strike, thereby limiting the degree to which a nuclear war could escalate.

Perhaps reading into this too much, but I wondered if you think the development of some kinds of effective counterforce are net positive in expectation from an extinction risk perspective. My amateur impression is that these developments are kind of all bad (most prominently because the ability to destroy weapons seems to force ‘launch on warning’ to be the default, making accidental escalation (from zero) more likely), but I'm potentially generalising too much.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-19T18:09:37.574Z · score: 12 (4 votes) · EA · GW

Quibbles/queries:

The one significant thing I was confused about was why the upper bound survivability for stationary, land-based ICBMs is only 25%. It looks like these estimates are specifically for cases where a rapid second strike (which could theoretically achieve survivability of up to 100%) is not attempted. Do you intend to be taking a position on whether a rapid second strike is likely? It seems like you are using these numbers in some places, e.g. when talking about ‘Countervalue targeting by Russia in the US’ in your third post, when you might be using significantly larger numbers if you thought a rapid second strike was likely. The reason I’m interested in this question is that it seems likely to feed into your planned research into nuclear winter, which I particularly look forward to.

Also, maybe you intend for your adjustment for US missile defence systems to be negating 15% of the lost warheads rather than adding 15% to the total arsenal? The current calculation suggests that missile defences reduce counterforce effectiveness by ~61%, which seems like not your intention given what you’ve said about interceptor success rates and diminishing returns on a counterforce strike. (I think this change would decrease surviving, deployed US warheads by ~163, so possibly has moderately large implications for your later work.)
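
To make the difference between the two readings concrete, here is a toy calculation. The arsenal size and warheads lost are made-up round numbers, chosen only so the ratios resemble the ones discussed above, not figures from the model:

```python
arsenal = 1400          # hypothetical deployed warheads
lost_without_def = 350  # hypothetical warheads destroyed absent missile defences

# Reading the calculation seems to use: add 15% of the *total arsenal* to survivors
surviving_a = (arsenal - lost_without_def) + 0.15 * arsenal

# Suggested reading: defences negate 15% of the warheads that *would be lost*
surviving_b = (arsenal - lost_without_def) + 0.15 * lost_without_def

# Under the first reading, defences negate this fraction of the counterforce kill
effectiveness_cut = (0.15 * arsenal) / lost_without_def

print(surviving_a - surviving_b)      # gap = 0.15 * (warheads that survive anyway)
print(round(effectiveness_cut, 2))
```

With these inputs the first reading implies defences negate ~60% of the counterforce kill, and the gap between readings is on the order of 150+ warheads, which is why the choice matters for the later posts.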

Comment by kit on Which nuclear wars should worry us most? · 2019-06-19T17:15:41.037Z · score: 11 (6 votes) · EA · GW

This series (#2, #3) has started off as the most interesting-to-me content on the Forum in a long time. Thanks very much. If you have written or do write about how future changes in arsenals may change your conclusions about what scenarios to pay the most attention to, I'd be interested in hearing about it.

In case relevant to others, I found your spreadsheet with raw figures more insightful than the discrete system in the post. To what extent do you think the survey you use for the probabilities of particular nuclear scenarios is a reliable source? (I previously distrusted it for heuristic reasons like the authors seeming to hype some results that didn’t seem that meaningful.) I'm interested because, as well as the numbers you use it for, the survey implies ~15% chance of use of nuclear weapons conditional on a conventional conflict occurring between nuclear-armed states, which seemed surprisingly low to me and would change my thinking about conflicts between great powers in general if I believed it.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-26T09:05:30.544Z · score: 12 (6 votes) · EA · GW

effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor

Let's go. Upside 1:

effect from boosting efficacy of current long-termist labor

Adding optimistic numbers to what I already said:

  • Let's say EAs contribute $50m† of resources per successful drug being rolled out across most of the US (mainly contributing to research and advocacy). We ignore costs paid by everyone else.
  • This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn't trade off against the rollout of any other important drug.
  • At any one time, about 100 EAs†† use the now-well-understood, legal drug, and their baseline productivity is average for long-term-focused EAs.
  • This improves their productivity by an expected 5%††† vs alternative mental health treatment.
  • Bottom line: your $50m buys you about 100 x 5% x 3 = 15 extra EA-years via this mechanism, at a price of $3.3m per person-year.

Suppose we would trade off $300k for the average person-year††††. This gives a return on investment of about $300k/$3.3m = 0.09x. Even with optimistic numbers, upside 1 justifies a small fraction of the cost, and with midline estimates and model errors I'd expect more like a ~0.001x multiplier. Thus, this part of the argument is insignificant.
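
For transparency, the arithmetic above in a few lines (same optimistic inputs as in the bullets):

```python
cost = 50e6          # EA resources per successful rollout ($)
users = 100          # EAs using the drug at any one time
uplift = 0.05        # expected productivity improvement vs alternatives
years_earlier = 3    # rollout brought forward by this many years

extra_ea_years = users * uplift * years_earlier   # extra EA-years bought
cost_per_year = cost / extra_ea_years             # price per person-year
value_per_year = 300e3                            # value of an average person-year
roi = value_per_year / cost_per_year              # return on investment

print(extra_ea_years, round(cost_per_year), round(roi, 2))
```

That is, 15 extra EA-years at ~$3.3m each, for a ~0.09x multiplier even on optimistic numbers.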

-----

Also, I've decided to just reply to this thread, because it's the only one that seems decision-relevant.

† Various estimates of the cost of introducing a drug here, with a 2014 estimate being $2.4bn. I guess EAs could only cover the early stages, with much of the rest being picked up by drug companies or something.
†† Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.
††† This looks really high but what do I know.
†††† Pretty made up but don't think it's too low. Yes, sometimes years are worth more, but we're looking at the whole population, not just senior staff.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T19:22:57.805Z · score: 4 (3 votes) · EA · GW
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved.

Pointing out that there are two upsides is helpful, but I had just made this claim:

The math for [the bold part] seems really unlikely to work out.

It would be helpful if you could agree with or contest that claim before we move on to the other upside.

-----

Rationality projects: I don't care to arbitrate what counts as EA. I'm going to steer clear of present-day statements about specific orgs, but you can see my donation record from when I was a trader on my LinkedIn profile.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T19:01:44.000Z · score: 8 (2 votes) · EA · GW

I'm not arguing against trying to compare things. I was saying that the comparison wasn't informative. Comparing dissimilar effects is valuable when done well, but comparing d-values of different effects from different interventions tells you very little.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T08:33:22.825Z · score: 18 (6 votes) · EA · GW

To explicitly separate out two issues that seem to be getting conflated:

  • Long-term-focused EAs should make use of the best mental health care available, which would make them more effective.
  • Some long-term-focused EAs should invest in making mental health care better, so that other long-term-focused EAs can have better mental health care and be more effective.

The former seems very likely true.

The latter seems very likely false. You would need the additional cost of researching, advocating for and implementing a specific new treatment (here, psilocybin) across some entire geography to be justified by the expected improvement in mental health care (above what already exists) for specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.

I continue to focus on the claims about this being a good long-term-focused intervention because that's what is most relevant to me.

-----

Non-central notes:

  • We've jumped from emotional blocks & unhelpful personal narratives to life satisfaction & treatment-resistant depression, which are very different.
  • As you note, the two effects you're now comparing (life satisfaction & treatment-resistant depression) aren't really the same at all.
  • I don't think that straightforwardly comparing two Cohen's d measurements is particularly meaningful when comparing across effect types.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T21:49:52.061Z · score: 9 (6 votes) · EA · GW

I believe you when you say that psychedelic experiences have an effect of some (unknown) size on emotional blocks & unhelpful personal narratives, and that this would change workers' effectiveness by some (unknown) amount. However, even assuming that the unknown quantities are probably positive, this doesn't tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.

Nonetheless, I think your arguments should be either compelling or something of a wake-up call for some readers. For example, if a reader does not require careful, quantified arguments to justify their favoured cause area†, they should also not require careful, quantified arguments about other things (including psychedelics).

† For example, but by no means exclusively, rationality training.

[Edited for kindness while keeping the meaning the same.]

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-10T20:02:55.591Z · score: 31 (18 votes) · EA · GW

Boring answer warning!

The best argument against most things being 'an EA cause area'† is simply that there is insufficient evidence in favour of the thing being a top priority.

I think future generations probably matter morally, so the information in sections 3(a), 3(b) and 4 matter most to me. I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is. There is info about what a causal mechanism might be, but analysis of the strength is also needed. (For example, you say that psychedelic interventions are plausibly in the same ballpark of effectiveness of other interventions that increase the set of well-intentioned + capable people. I only agree with this because you use the word 'plausibly', and plausibly...in the same ballpark isn't enough to make something an EA cause area.) I think similarly about previous discussion I've seen about the sign and magnitude of psychedelic interventions on the long-term future. (I'm also pretty sceptical of some of the narrower claims about psychedelics causing self-improvement.††)

I did appreciate your coverage in section 4 of the currently small amount of funding and what is getting done as a result, which seems like it could form part of a more thorough analysis.†††

My amateur impression is that Michael Plant has made a decent start on quantifying near-term effects, though I don't think anyone should take my opinion on that very seriously. Regardless of that start looking good, I would be unsurprised if most people who put less weight on future generations than me still wanted a more thorough analysis before directing their careers towards the cause.

As I said, it's a boring answer, but it's still my true objection to prioritising this area. I also think negative PR is a material consideration, but I figured someone else will cover that.

-----

† Here I'm assuming that 'psychedelics being an EA cause area' would eventually involve effort on a similar scale to the areas you're directly comparing it to, such as global health (say ~100 EAs contributing to it, ~$10m in annual donations by EA-aligned people). If you weaken 'EA cause area' to mean 'someone should explore this', then my argument doesn't work, but the question would then be much less interesting.

†† I think mostly this comes from me being pretty sceptical of claims of self-improvement which don't have fairly solid scientific backing. (e.g. I do deep breathing because I believe that the evidence base is good, but I think most self-improvement stuff is random noise.) I think that the most important drivers of my intuitions for how to handle weakly-evidenced claims have been my general mathematical background, a few week-equivalents trying to understand GiveWell's work, this article on the optimiser's curse, and an attempt to simulate the curse to get a sense of its power. Weirdness aversion and social stuff may be incorrectly biasing me, but e.g. I bought into a lot of the weirdest arguments around transformative AI before my friends at the time did, so I'm not too worried about that.

††† I also appreciated the prize incentive, without which I might not have written this comment.

Comment by kit on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T08:07:29.483Z · score: 16 (8 votes) · EA · GW

As an aside, I wouldn't say that any Good Ventures things are 'housed under Open Phil'. I'd rather say that Open Phil makes recommendations to Good Ventures. i.e. Open Phil is a partner to Good Ventures, not a subsidiary.

Technically, I've therefore answered a different question to the one you asked: I've answered the question 'why aren't these grants on the Open Phil website'.

Comment by kit on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T08:07:05.506Z · score: 13 (7 votes) · EA · GW

From Good Ventures' grantmaking approach page:

In 2018, Good Ventures funded $164 million in grants recommended by the Open Philanthropy Project, including $74 million to GiveWell’s top charities, standout charities, and incubation grants. (These grants generally appear in both the Good Ventures and Open Philanthropy Project grants databases.)
Good Ventures makes a small number of grants in additional areas of interest to the foundation. Such grants totaled around $19 million in 2018. Check out Our Portfolio and Grants Database to learn more about the grants we've made so far.

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:17:40.483Z · score: 10 (4 votes) · EA · GW

I figured the OP was suggesting that people go to the retreat? (or maybe be generically supportive of the broader project of running retreats)

Not sure where this is going; doesn't immediately seem like it counters what I said about your comparison to specific fundraising + analysis posts, or about why readers might be confused as to why this is here.

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:00:44.264Z · score: 4 (3 votes) · EA · GW

Right. The stuff about psychedelics as Cause X was maybe a bit of a red herring. You probably know how to sell your business much better than I do, but something which I think is undervalued in general is simply opening your pitch with why exactly you think someone should care about your thing. I actually hadn't considered creative problem-solving or career choice as reasons to go on this retreat.

My earlier comment was a reply to the challenge of 'how this post is substantively different from previous content like...' and this now seems fairly obvious, so I probably have little more useful to say :)

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T07:46:59.846Z · score: 25 (13 votes) · EA · GW

I can see where you're coming from, but I think there's a lot of missing info here, and this will make the post confusing to most readers. Some* of the other posts you link to also ask things of their readers, but they also present a case for why that ask is a particularly exceptional use of resources.

I happen to know of some topics which psychedelics might be relevant to, some of which are mentioned in the post and in your later comment, e.g.

  • Potentially strong treatment for depression
  • Drug liberalisation could reduce unnecessary incarceration
  • Very speculative things like maybe psychedelics make you a better or more effective person (or increases your risk of psychosis), or maybe psychedelics could help us study sentience

but it's pretty unclear how EAs going on a psychedelic retreat is an effective way to make progress in these fields. i.e. even with what I guess is an above-median amount of context on the subject, I don't know what the case is. Given that, I think Khorton's reaction is very reasonable.

Maybe I'm missing the point and the post is just saying that there's a cool thing you can do with other EAs, not trying to claim that it's an effectively altruistic use of resources. In that case, the difference between the posts appears to be simple.

A disclosure of my own: I previously worked for CEA. Of course, these are my opinions only.

*Giving What We Can is still growing at a surprisingly good pace doesn't justify an ask, but it doesn't have an ask either.

Comment by kit on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-10T13:26:15.478Z · score: 45 (16 votes) · EA · GW

I think IASPCs handle these things well, and think there's some misinterpretation going on. What makes a strong plan change under this metric is determined by whatever 80,000 Hours thinks is most important, and currently this includes academic, industry, EA org and government roles. These priorities also change in response to new information and needs. The problem Sebastian is worried about seems more of a big deal: maybe some orgs / local groups are defining their metrics mostly in terms of one category, or that it's easy to naively optimise for one category at the expense of the others.

The part about counting impact from skill-building and direct work differently simply seems like correct accounting: EA orgs should credit themselves with substantially more impact for a plan change which has already led to impact than for one which might do so in the future, most obviously because the latter has a <100% probability of turning into the former.

I also think the metric works fine with Sebastian's point that quant trading can be competitive with other priority paths. You seem to imply that the use of IASPCs contradicts his advice, but you point to a non-priority rating for 'earn to give in a medium income career', which is not quant trading!† 80,000 Hours explicitly list quant trading as a priority path (as Seb pointed out in the post), so if an org uses IASPCs as one of their metrics they should be excited to see people with those particular skills go down that route. (If any readers land quant jobs in London, please do say hi :) )

I agree that misapplication of this or similar metrics is dangerous, and that if e.g. some local groups are just optimising for EA-branded orgs instead of at least the full swathe of priority paths, there's a big opportunity to improve. All the normal caveats about using metrics sensibly continue to apply.

All views my own.

†As a former trader, I felt the need to put an exclamation mark somewhere in this paragraph.

Comment by kit on Effective Impact Investing · 2019-03-02T17:23:14.793Z · score: 12 (5 votes) · EA · GW

I'd like to highlight the distinction between 'impact investing funds would outperform funds purely optimized for profit' and 'SRI doesn't undermine the bottom line'. In markets as efficient as I think publicly traded stocks are, the former is highly improbable and the latter is highly probable.

The blog post appears to make both claims. Habryka's complaint may seem more defensible to you if it is entirely about the former claim.

--

Two technical notes on this distinction:

  • Given the existence of some low-quality evidence for the strong (outperformance) claim, you might argue that that too is not so naive.
  • Of course, SRI typically reduces diversification, with an effect somewhere between negligible and substantial depending on the strategy, making the weak (doesn't undermine) claim misleading in some situations, even with efficient markets.
Comment by kit on Effective Impact Investing · 2019-02-28T09:18:25.403Z · score: 14 (8 votes) · EA · GW

Thanks for giving examples of advocacy efforts you might see as a good use of investor time and capital! Getting to the concrete outcomes of impact investing seems pretty key for figuring out in what situations it's a good use of time and capital to engage in.

When you say, 'shareholder advocacy, which is the primary mechanism for impact in public equity investing', I find this very plausible in the sense that it's the part which seems to have the highest potential. Interestingly, though, when I last looked into this, the vast majority of the SRI industry by capital seemed to be not engaging in shareholder advocacy.*

I would expect shareholder advocacy to be worth the time of effectiveness-minded altruists only in very specific situations (perhaps including some of the ones you named), but given that good shareholder advocacy seems so rare even in SRI, I wonder if there is room for getting the entire SRI industry to actually do the part of SRI which seems promising? Is it true that most SRI capital isn't being used for shareholder advocacy? Is it tractable to improve the industry in this way? (Is that already your main aim?)

--

*I'm not counting screening/divestment campaigns which don't involve talking to specific companies, because this generally seems not to provide clear incentives for any particular company to do anything in particular. Best-in-class screening might be an exception, but the incentives still seemed super weak to me when I last thought about this. Overall, it looks like there's ${tens of trillions} of assets considered to be SRI and a small number (on the order of 1,000 per year?) of good shareholder advocacy campaigns, which suggests a massive difference between the potential of SRI and SRI in practice today.

Comment by kit on Is Superintelligence Here Already? · 2019-01-18T18:24:28.189Z · score: 2 (2 votes) · EA · GW

Context: the report is 190 pages long and was published this month. Those who are reading it seem unlikely to reply with detailed analysis on this particular Forum post.

Object-level response: becoming excellent at chess, go, and shogi is interesting, since it is more general than being excellent at any one alone. My impression is that the AI safety community recognises the importance of milestones like this. It is simply the case that superintelligence typically means something far more general still, such as

an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills

which will not include an AI which can play a specific set of games.

Since we have now discovered that the disagreement is merely a matter of definitions, hostilities can be ceased :)

Comment by kit on Talking about EA at an investors' summit · 2019-01-15T09:47:59.508Z · score: 7 (3 votes) · EA · GW

Immediate, not hugely informed thoughts: (I've talked to ~250 finance people about EA but only attended one finance conference, and it was fintech rather than investment.)

Broadly I'd recommend looking at generic sales advice, including some conference-specific stuff. A big thing is making clear quickly why you're relevant to someone. What can you offer them? Why will they care? What is the one point you want them to remember? They'll have little time, probably being very focused on finding potential business partners, e.g. funds looking for investors, if this is the kind of investor conference I'm thinking of. You might have to be even more prepared to demonstrate relevance than others at the conference because you are not obviously part of the main theme or expected to be there.

Also, how familiar are you with how these kinds of people think? Can you frame EA in industry terms, for example? Seeking to maximise (social) return on investment seems uncontroversial and used as a concept in impact investing. I've also tried talking directly about comparing charities (comparables, alpha) though that seemed to not translate as well. (Anecdotal.)

The handbook here may also be of tangential relevance: http://eaworkplaceactivism.org

Another tip is to expect an extremely low hit rate. Figure out who seems interested in thinking about their giving and focus on them. Think about whether you want to follow up with the most promising people to point them to key resources or connect them to other interested people. Figure out if asking for contact details is normal or weird in this context.

Good luck. There don't seem to be major downsides, so even with a low hit rate, seems worth a shot.

Comment by kit on What Is Effective Altruism? · 2019-01-10T13:29:56.097Z · score: 3 (5 votes) · EA · GW

Yes, I'd be excited to always include something about epistemics, such as scientific mindset. One can then argue about evidence instead of whether something qualifies as secular, which seems only relevant insofar as it is a weak predictor of well-evidenced-ness. In particular, while I don't assign it high credence, I would not be hasty to rule out maximising 'enlightenment' as one of my end goals. Terminal goals are weird.

Notably, without an epistemic/scientific part to the definition, it is unclear how to distinguish many current EA approaches from e.g. PlayPumps, a secular project which was hyped for its extraordinary outcomes in helping people far away. Looking forward, I also think that strong epistemics are how long-term-focused EA efforts can continue to be more useful than regular futurism.

Comment by kit on [Link] The option value of civilization · 2019-01-09T09:03:29.598Z · score: 3 (2 votes) · EA · GW

I think the author has confused one type of payoff diagram (the probability density function of a variable) with another (the payoff of an option plotted against the value of the underlying variable). This results in a number of the claims in the piece being reversed. There seems to be confusion between other parameters too.

In finance, an option is the right to do something (typically some variant of 'buy asset A for price P'). The most surprising thing is that the piece doesn't establish where the optionality comes from. I think it just draws an analogy between the distribution of future outcomes and the payoff of an option, then treats the value of the future like an option. As above, I think this is incorrect.

One way the conclusions would carry is if one asserts that the future is net positive in expectation largely independently of size, and thus one should make the future bigger (a particular version of more variance). This argument is coherent but does not involve options.

A plausible source of optionality is that future generations might have some control over whether the world continues.* (Finance jargon would call this a put option on the future of the world with a low strike.)

Under this interpretation:

  • Standard options pricing theory applies when you can trade the thing you have an option on. Here that isn't the case: one cannot buy and sell the future to hedge the option delta.
  • One should instead use more straightforward expected utility calculations taking into account the expected actions of future actors. e.g. a crude simplification: P(world good)×(how good)×P(future actors let world continue | good) - P(world bad)×(how bad)×P(future actors let world continue | bad). Standard financial options pricing would give a very different formula.
  • The volatility point does hold, but for the above reasons, not the analogy drawn in the piece. One should simply be willing to trade extra potential upside (before future generations intervene) for extra potential downside (before intervention) in proportion to how much future generations have the power to stop bad outcomes closer to the time.
  • The claim that we are long a call and short a put seems false -- I think this is just drawing an incorrect analogy as noted in the first paragraph. I think the situation is more like owning an X% put on the future, where X% is the chance that future altruists have control over whether the long-term future exists. (You could alternatively see the overall situation as long X calls and long (1-X) futures.) This weakens but does not nullify the amount that a balanced upside and downside leads to focusing more on the present.
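The crude expected-utility comparison in the second bullet can be sketched numerically. This is a toy illustration only; the function name and all probabilities and payoffs below are made-up placeholders, not figures from the piece:

```python
# Hypothetical sketch of the crude expected-utility comparison above.
# All numbers are illustrative placeholders.

def future_value(p_good, value_good, p_continue_given_good,
                 p_bad, value_bad, p_continue_given_bad):
    """Expected value of the future, assuming future actors can choose
    whether the world continues (the put-like mechanism above)."""
    upside = p_good * value_good * p_continue_given_good
    downside = p_bad * value_bad * p_continue_given_bad
    return upside - downside

# If future actors are good at stopping bad outcomes
# (p_continue_given_bad is low), extra variance looks attractive.
print(future_value(0.5, 100.0, 0.9, 0.5, 100.0, 0.2))  # → 35.0
```

Note how this differs from an options-pricing formula: there is no hedging argument, just probabilities and the expected behaviour of future actors.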

*Seems unclear but worth considering.

In conclusion, I got some interesting thinking out of reading this piece, but disagree with most of it.

Comment by kit on Request For Feedback: EA Venture Fund · 2019-01-04T20:40:11.085Z · score: 1 (1 votes) · EA · GW

Wow, I'm no expert on VC, but it sounds like you could have the expertise to pull something like this off.

Counterfactuals: mostly I'm saying that most impact investing just replaces other investment. At the ludicrous end of the spectrum (which is unfortunately most of the spectrum), a lot of 'socially responsible investing' involves buying shares on a public market, simply transferring ownership without changing incentives (since the impact on capital-raising ability appears minimal and most investors don't do PR stunts, influence management or other potentially useful byproducts of owning shares). As one goes into private markets -- as you are -- I'm a bit more optimistic, since there are situations like seed funding where an investor really can make the difference between existing or not, or growing or not, and can perhaps have useful early influence. e.g. I'd guess that investing in Wave now would just be displacing another investor, while maybe an impact investor helped them get off the ground and they wouldn't have been funded by regular investors. (I don't know if that's true.) The more you can identify opportunities which you could fund which wouldn't otherwise get funded, the less confident I would be in my pessimism :) Overall, I mostly defer to the Founders Pledge report. Reading every mention of 'counterfactual' will likely cover everything I would say and much more.

You mention a few potential outcomes from this kind of work (e.g. getting impactful things capital, a platform for EA advocacy, influencing companies' behaviour*). When I have done impact analysis recently, the first step was to consider what the most important outcomes could be. Sometimes a quick estimate suggests that one of the outcomes is much more important than the others, allowing you to focus on studying that factor.

Re comparing to 80k's priority paths, I'd be surprised if doing something part-time would be optimal, just on generic advice. If that generalises to VC, I'd start by comparing running a VC full-time vs deploying the same staff in other roles. Interesting idea for this as a sub-fund of a larger group. Whatever you decide, great to hear about what you're doing.

*See commentary from the Good Technology Project on their related experiences here. The 'Advise entrepreneurs directly' section seems particularly relevant, but it all might be of interest.

Comment by kit on Request For Feedback: EA Venture Fund · 2019-01-03T20:26:14.399Z · score: 9 (4 votes) · EA · GW

Hi atlasunshrugged,

My first thought (as per a comment on another investment project) is that investment firms typically need to be managed by experienced investors. It works a bit differently for VC than algorithmic trading but the basic point applies: you'd need to convince someone with relevant experience to run it (or maybe that's you).

That's my first hurdle, but there's a broader question: how valuable are various kinds of impact investing? Founders Pledge covered a lot of ground in their impact investing report. I can't remember if they mentioned the value of having influence over important startups, but based on a summary I heard, it seemed to basically hit all the points I was aware of from my experience in finance, and more. A key point, which you might be aware of, is that the counterfactuals tend to be quite bad in impact investing. That is, impactful companies typically don't require your capital in order to exist, because there are other investors. Happily, you are at least considering working with seed funding, which seems like a key place where impact investing might be worthwhile. (It's sadly a tiny share of the commercial 'socially responsible investing' market.)

Overall, I think it's pretty likely that any of 80,000 Hours' priority paths would beat founding a small impact investing firm, even a sensibly targeted one. However, the Founders Pledge report mentions a bunch of ways to do impact investing well, so that seems worth looking into if you are a professional investor or end up working with one.

Happy exploring!

P.S. there are some particular details which seem like they might need changing if this goes ahead someday, e.g. requiring all investees to commit a large percentage of their profits to charity seems likely to limit your scope.

Comment by kit on EA Hotel Fundraiser 1: the story · 2018-12-27T13:54:15.877Z · score: 2 (2 votes) · EA · GW

Thanks! That's a very helpful summary.

Comment by kit on EA Hotel Fundraiser 1: the story · 2018-12-27T12:39:37.189Z · score: 8 (8 votes) · EA · GW

Are the 'abstracted case studies' each an anonymised description of a single current/past resident?

(I wondered if they might be hypothetical or blended after reading the possibility to 'interview some actual residents' mentioned later.)

Comment by kit on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-24T18:16:03.273Z · score: 4 (2 votes) · EA · GW

Also see https://forum.effectivealtruism.org/posts/SfMvGMPYswSwpeGDk/pursuing-infinite-positive-utility-at-any-cost

Comment by kit on Antigravity Investments: Helping the EA Community Leverage Investing to Increase Funding and Donations · 2018-12-20T14:16:57.335Z · score: 9 (4 votes) · EA · GW
Antigravity Investments does not do trading or money-management in the sense of actively managing portfolios.

My heuristic applies to money-management in the sense of managing money, or, more precisely, having control of others' investments. I agree that the strength of the heuristic varies by project, but I think it applies pretty broadly.

Relatedly, you may wish to update your website. You currently advertise Long-Term Active Investing as one of three services.

-----

Thanks for noting why one might override the heuristic I mentioned. Congratulations on setting up a legal entity and registering with the SEC. However, my impression is that the qualifications required to register are pretty basic. e.g. it took me <2 weeks of preparation to pass 3 regulatory exams in the UK when I started working in finance. I already believe you to be smart, and of course Antigravity isn't fraudulent, so passing the low hurdles the SEC requires doesn't seem like much of an update. You might want to be careful with using this as evidence: non-finance people might not understand what it means.

-----

Thanks for clarifying your previous statements. Given the private nature of 2017 discussions, I probably can't comment on your representation of previous feedback.

Thank you, also, for the offer to spend more time on Antigravity. As mentioned by chat, I'm going to focus on other things. If you believe your comparative advantage to be money-management, my recommendation would be to work at a top firm for a couple of years, focusing first on learning all the best practices and tacit knowledge and second on building respect in the industry, then to consider starting your own firm. As an aside, I would be more excited about this as an earning to give strategy than as an EA-specific company, though I think the advice applies similarly either way.

Comment by kit on Antigravity Investments: Helping the EA Community Leverage Investing to Increase Funding and Donations · 2018-12-19T19:26:08.327Z · score: 22 (10 votes) · EA · GW

Speaking as an individual who follows finance projects with interest, I'm assuming that "no one has reached out to me" refers to something like no-one reaching out after this particular post, rather than no-one having flagged significant bugs with Antigravity Investments in the past?

It is very likely most concerns are "partly or fully out of date" as you mention, likely stemming from when I talked with a lot of EAs in finance about my very first investment related project idea in 2016

I would have expected the majority of perceptions to be shaped by the public launch and marketing of Antigravity in 2017, rather than individual discussions in 2016? It seems possible that the 2017 perceptions are nonetheless out of date. However, the following claim still stood out as very surprising to me:

Within the last year, this project has received positive evaluations from... EAs in finance, and I have not received any negative feedback

Until the end of 2017, I worked as a trader, ran the London EA & finance community, and talked with financial markets people around the world. My understanding is that most traders and money managers think that a money-management project usually needs staff with previous trading experience to be viable. (I agree that this heuristic should carry a lot of weight.) Antigravity, of course, is attempting to substantially shortcut this heuristic, by launching an investment firm without any staff with previous trading experience. I am therefore very surprised by the above claim. The heuristic leads me to expect that EAs in finance will be typically negative on such a project, but the above suggests that they are consistently positive.

Comment by kit on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-08T12:47:07.757Z · score: 7 (7 votes) · EA · GW

Thanks for the food for thought. I thought I'd share three related notes on framing which might be relevant to the rest of your series:

1) Tiny probabilities appear to not be fundamental to long-termism. The mugging argument you attribute to long-termists is indeed espoused as the key argument in favour by some, especially on online forums, but many people researching these problems assign non-mugging (e.g. ~1%) probabilities to their organizations having an overall large effect. For example, Yudkowsky in the interesting piece you linked (thanks for that):

You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization, but there is nonetheless no need to go on tracking tiny probabilities when you'd expect there to be medium-sized probabilities of x-risk reduction.

It would be excellent to see someone write this up, since the divergence in argumentation by different parties interested in long-termism is large.

2) Projects tend not to have binary outcomes, and may have large potential positive and negative effects which have unclear sign on net. This makes robustness considerations (Knightian uncertainty, confidence in the sign, etc.) somewhere between quite important and the central consideration. This is the key reason why I pay little attention to the mugging arguments, which typically assume no negative outcomes without justifying this assumption. Instead, I think that the strongest case for aiming directly at improving the long-run may revolve around the robustness gained from aiming directly at where most of the expected value is (if one considers future generations to be morally relevant). Might be valuable to explore the relative merits of these approaches.

3) Consider explicitly separating the overall promisingness of a project from the marginal effect of additional resources. This is relevant e.g. for the case 'You give $7,000 to the Future of Humanity Institute, and your donation makes the difference between human extinction and long-term flourishing.' i.e. consider separating out 'your donation makes the difference' from 'the Future of Humanity Institute has a large positive impact'. When looking at marginal contributions, these things can get conflated. For example, there is a low probability that there has been or will be a distribution of bednets which would not have happened had I not donated $3,000 to AMF in 2014, but this uncertainty does not worry me. Uncertainty about whether increasing economic growth is good is a much larger deal. It looks like Eliezer summarised this well:

In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal's Wager.  Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort.  It would only be Pascal's Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.

Based on your 'Summary' section, I suspect that you are already intending to tackle some of these points in 'Sheltering in the herd' or elsewhere. Good luck!

Comment by Kit on [deleted post] 2018-11-08T12:26:45.848Z

1) Tiny probabilities (e.g. $10^{-40}$) appear to not be fundamental to long-termism. The

Comment by Kit on [deleted post] 2018-11-08T12:14:53.172Z

1) Tiny probabilities (e.g. $10^{-40}$) appear to not be fundamental to long-termism. The

Comment by kit on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-08T09:11:24.659Z · score: 1 (1 votes) · EA · GW

If you have any confident thoughts, I'd be interested to hear which funding opportunities in the space seem most promising to you. (i.e. my question above was not rhetorical.) In particular, it is not obvious to me where the funding gaps are, and you seem likely to be better placed to know.

Also, I think there are some considerations which are 1-3 orders of magnitude more important than RPTP. Your post prompted me to write up a simple quantitative comparison of factors. Would you be interested in discussing/commenting at some point?

Comment by kit on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-07T12:23:40.374Z · score: 2 (2 votes) · EA · GW

Regarding learning value being dominant (which seems plausible), you make one concrete recommendation:

In the meantime, there is nothing to do but invest and wait.

Do you think we could instead spend money on prioritization research? Some actors relevant to this space include

  • The Open Philanthropy Project
  • The Global Priorities Institute
  • The Future of Humanity Institute
  • Rethink Priorities
  • The Foundational Research Institute
  • The China Center for Global Priorities
  • Some think tanks
  • Independent researchers working on forecasting, cause prioritization, exploring specific domains that might turn out to be promising, etc.

Some of these groups are clearly very well-funded. However, some have raised funds recently, suggesting that they believe that marginal funds will accelerate prioritization work. Reasonable people can disagree about this, and it seems like a key point which would need to be resolved for the recommendation to carry.

Comment by kit on Impact Investing - A Viable Option for EAs? · 2018-07-22T18:13:14.147Z · score: 6 (6 votes) · EA · GW

Among the most significant confusions in investing is this: when you buy a stock for $10, you give up $10 now in exchange for some variable return in the future. Importantly, the seller gets back $10 and gives up the same variable return in the future. Both parties are happy with this trade (due to different preferences or beliefs about the world).

This symmetry applies to the social impact too. The buyer gets a stock producing social impact. The seller gives up a stock producing social impact. It is unclear whether this transaction has affected the total social impact in the world at all. My understanding is that the section A Rational Take on Investing does not take this into account.
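This symmetry can be sketched as a toy calculation, assuming hypothetical buyer and seller accounts (all names and numbers below are illustrative only, not from the post):

```python
# Toy illustration of the symmetry claim: a secondary-market trade
# transfers ownership but leaves the world's totals unchanged.

def trade(buyer, seller, price, impact_per_share):
    """Transfer one impact-producing share from seller to buyer for cash."""
    buyer["cash"] -= price
    buyer["impact"] += impact_per_share
    seller["cash"] += price
    seller["impact"] -= impact_per_share

buyer = {"cash": 0.0, "impact": 0.0}
seller = {"cash": 0.0, "impact": 1.0}  # seller holds one impact-producing share

total_before = buyer["impact"] + seller["impact"]
trade(buyer, seller, price=10.0, impact_per_share=1.0)
total_after = buyer["impact"] + seller["impact"]

print(total_before == total_after)  # → True
```

The buyer can now claim the share's impact, but only because the seller no longer can; nothing about the total has changed.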

The above paragraph is the typical second-order reasoning in this space, and is a more explicit version of a point kbog made.

  • First-order: I claim credit for whatever good/bad things I'm associated with, thus impact investing counts for a lot.

  • Second-order: taking counterfactuals into account, impact investing appears to have approximately zero impact.

  • The interesting stuff: controlling or launching important startups, and shareholder activism in general, seems like it might do something, and can sometimes be targeted at the most promising cause areas. Plant-based meat alternative companies are the main ones I'm aware of that seem likely to be worth EA investors' attention for this reason. Maybe holding tobacco company shares in order to be able to more easily lobby them might be attractive for EAs focused on global health. Note that these activist approaches are entirely distinct from what most of the impact investing industry is doing.

Comment by kit on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2018-02-18T22:58:29.430Z · score: 3 (5 votes) · EA · GW

The point I would most like to emphasise is that it's often unclear what will happen to an asset when cost-effectiveness goes up. If you're confident it'll go up at that time, you buy/overweight it. If you're confident it'll go down at that time, you sell/underweight it. If it could go either way, this approach is weaker. Most discussion I have seen on this topic assumes that the 'evil' asset can be expected to move in the same direction as cost-effectiveness. Finding something with reliable covariance in either direction seems like it might be most of the challenge.

For more detail on that, here are some notes on the most valuable insights and most significant errors of the original Federal Reserve paper.

My guess is that the best suggestions from this post appear in 'Applications outside of investment'. These do not fall prey to the abovementioned issues since the mechanisms are different to the investment case, directly exploiting the extra power one gains from being on the inside of an organisation rather than correlation/covariance.

(I might as well note that this comment represents my views on the matter, and no-one else's, while the main post represents the views of others, and not necessarily mine.)

Comment by kit on Contra the Giving What We Can pledge · 2016-12-05T22:24:13.129Z · score: 0 (0 votes) · EA · GW

Indeed. Knowing what the proposal is would help here.

Comment by kit on Contra the Giving What We Can pledge · 2016-12-05T09:32:57.313Z · score: 1 (3 votes) · EA · GW

GWWC is not firebombing anything, happily. War crimes are obviously bad and need no counterfactual spelled out. The principle you outline does not apply to the pledge because many people (citation) don't think the pledge is obviously bad. To engage these people in productive discourse you need to suggest at least one strategy which could be better.