Posts

Key points from The Dead Hand, David E. Hoffman 2019-08-09T13:59:09.864Z · score: 66 (33 votes)

Comments

Comment by kit on GiveDirectly plans a cash transfer response to COVID-19 in US · 2020-03-20T08:05:46.407Z · score: 0 (2 votes) · EA · GW

I would guess that the decision of which GiveDirectly programme to support† is dominated by the principle you noted, of

the dollar going further overseas.

Maybe GiveDirectly will, in this case, be able to serve people in the US who are in comparable need to people in extreme poverty. That seems unlikely to me, but it seems like the main thing to figure out. I think your 'criteria' question is most relevant to checking this.

† Of course, I think the most important decision tends to be deciding which problem you aim to help solve, which would precede the question of whether and which cash transfers to fund.

Comment by kit on GiveDirectly plans a cash transfer response to COVID-19 in US · 2020-03-19T08:14:21.881Z · score: 5 (3 votes) · EA · GW

The donation page and mailing list update loosely suggest that donations are project-specific by default. Likewise, GiveWell says:

GiveDirectly has told us that donations driven by GiveWell's recommendation are used for standard cash transfers (other than some grant funding from Good Ventures and cases where donors have specified a different use of the funds).

(See the donation page for what the alternatives to standard cash transfers are.)

If funding for different GiveDirectly projects is sufficiently separate, your donation would pretty much just increase the budgets of the programmes you wish to support, perhaps especially if you give via GiveWell. If I were considering giving to GiveDirectly, I would want to look into this a bit more.

Comment by kit on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T11:49:17.095Z · score: 1 (1 votes) · EA · GW

[Comment not relevant]

Comment by kit on A small observation about the value of having kids · 2020-01-19T11:01:12.101Z · score: 25 (11 votes) · EA · GW

For the record, I wouldn't describe having children to 'impart positive values and competence to their descendants' as a 'common thought' in effective altruism, at least any time recently.

I've been involved in the community in London for three years and in Berkeley for a year, and don't recall ever having an in-person conversation about having children to promote values etc. I've seen it discussed maybe twice on the internet over those years.

--

Additionally: This seems like an ok state of affairs to me. Having children is a huge commitment (a significant fraction of a life's work). Having children is also a major part of many people's life goals (worth the huge commitment). Compared to those factors, it seems kind of implausible even in the best case that the effects you mention would be decisive.

Then: If one can determine a priori that these effects will rarely affect the decision of whether to have children, the value of information as discussed in this piece is small.

Comment by kit on Assumptions about the far future and cause priority · 2019-11-20T16:41:42.124Z · score: 2 (2 votes) · EA · GW

In the '2% RGDP growth' view, the plateau is already here, since exponential RGDP growth is probably subexponential utility growth. (I reckon this is a good example of confusion caused by using 'plateau' to mean 'subexponential' :) )
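
(To spell out the first claim with the log model discussed in my other comment: if utility per capita = log(GDP per capita) and GDP per capita = y0 · e^(g·t), then utility per capita = log(y0) + g·t, i.e. exponential RGDP growth is only linear, hence subexponential, utility growth.)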

In the 'accelerating' view, it seems that whether there is exponential utility growth in the long term comes down to the same intuitions about whether things keep accelerating forever that are discussed in other threads.

Comment by kit on Assumptions about the far future and cause priority · 2019-11-16T13:31:05.377Z · score: 7 (5 votes) · EA · GW

Thanks!

In my understanding, [a confident focus on extinction risk] relies crucially on the assumption that the utility of the future cannot have exponential growth in the long term

I wanted to say thanks for spelling that out. It seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive:

and will instead essentially reach a plateau.

The idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. I think this will cause confusion.

I also appreciated that the piece is consistently accurate. As I wrote this comment, there were several times where I was considering writing some response, then saw that the piece has a caveat for exactly the problem I was going to point out, or a footnote which explained what I was confused about.

A particular kind of accuracy is representing the views of others well. I don't think the piece is always as charitable as it could be, but details like footnote 15 make it much easier to understand what exactly other people's views are. Also, the simple absence of gross mischaracterisations of other people's views made this piece much more useful to me than many critiques.

Here are a few thoughts on how the model or framing could be more useful:

'Growth rate'

The concept of a 'growth rate' seems useful in many contexts. However, applying the concept to a long-run process locks the model of the process into the framework of an exponential curve, because only exponential curves have a meaningful long-run growth rate (as defined in this piece). The position that utility will grow like an exponential is just one of many possibilities. As such, it seems preferable to simply talk directly in terms of the shape of long-run utility.

Model decomposition

When discussing the shape of long-run utility, it might be easier to decompose total utility into population size and utility per capita. In particular, the 'utility = log(GDP)' model is actually 'in a perfectly equal world, utility per capita = log(GDP per capita)'. i.e. in a perfectly equal world, utility = population size x log(GDP per capita).[1]

For example, this resolves the objection that

if we duplicate our world and create an identical copy of it, I would find it bizarre if our utility function only increases by a constant amount, and find it more reasonable if it is multiplied by some factor.

The proposed duplication doubles population size while keeping utility per capita fixed, so it is a doubling[2] of utility in a model of this form, as expected.
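
To make the decomposition and the duplication example concrete (the log form is illustrative only, per footnote 1):

utility = N × log(Y/N), where N is population size and Y is total GDP

Duplicating the world sends N → 2N and Y → 2Y, so Y/N is unchanged and utility doubles.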

More broadly, I suspect that the feasibility of ways to gain at-least-exponentially greater resources over time (analogous to population size, e.g. baby-universes, reversible computation[3]) and ways to use those resources at-least-exponentially better (analogous to utility per capita, no known proposals?) might be debated quite separately.

How things relate to utility

Where I disagreed or thought the piece was less clear, it was usually because something seemed at risk of being confused for utility. For example, explosive growth in 'the space of possible patterns of matter we can potentially explore' is used as an argument for possible greater-than-exponential growth in utility, but the connection between these two things seems tenuous. Sharpening the argument there could make it more convincing.

More broadly, I can imagine any concrete proposal for how utility per capita might be able to rise exponentially over very long timescales being much more compelling for taking the idea seriously. For example, if the Christiano reversible computation piece Max Daniel links to turns out to be accurate, that naively seems more compelling.

Switching costs

My take is that these parts don't get at the heart of any disagreements.

It already seems fairly common that, when faced with two approaches which look optimal under different answers to intractable questions, Effective Altruism-related teams and communities take both approaches simultaneously. For example, this is ongoing at the level of cause prioritisation and in how the AI alignment community works on multiple agendas simultaneously. It seems that the true disagreements are mostly around whether or not growth interventions are sufficiently plausible to add to the portfolio, rather than whether diversification can be valuable.

The piece also ties in some concerns about community health to switching costs. I particularly agree that we would not want to lose informed critics. However, similarly to the above, I don't think this is a real point of disagreement. Discussed simultaneously are risks from being 'surrounded by people who think that what I intend to do is of negligible importance' and risks from people 'being reminded that their work is of negligible importance'. I think this conflates what people believe with whether they treat those around them with respect, which I think are largely independent problems. It seems fairly clear that we should attempt to form accurate beliefs about what is best, and simultaneously be kind and supportive to other people trying to help others using evidence and reason.

---

[1] The standard log model is surely wrong but the point stands with any decomposition into population size multiplied by a function of GDP per capita.

[2] I think the part about creating identical copies is not the main point of the thought experiment and would be better separated out (by stipulating that a very similar but not identical population is created). However, I guess that in the case that we are actually creating identical people we can handle how much extra moral relevance we think this creates through the population factor.

[3] I guess it might be worth making super clear that these are hypothetical examples rather than things for which I have views on whether they are real.

Comment by kit on Updated Climate Change Problem Profile · 2019-10-09T11:43:42.288Z · score: 6 (5 votes) · EA · GW

I’m curious to know what you think the difference is. Both problems require greenhouse gas emissions to be halted.

I agree that both mainline and extreme scenarios are helped by reducing greenhouse gas emissions, but there are other things one can do about climate change, and the most effective actions might turn out to be things which are specific to either mainline or extreme risks. To take examples from that link:

  • Developing drought-resistant crops could mitigate some of the worst effects of mainline scenarios, but might help little in extreme scenarios.
  • Attempting to artificially reverse climate change may be a last resort for extreme scenarios, but may be too risky to be worthwhile for mainline scenarios.

For the avoidance of doubt, I think that my point about mainline and extreme risks appealing to different worldviews is sufficient reason to separate the analyses even if the interventions ended up looking similar.

if you have two problems who require $100 or $200 of total funding to solve completely, if they both have $50 of funding today, they are not equally neglected

Yep, you could use the word 'neglected' that way, but I stand by my comment that if you do that without also modifying your definition of 'scale' or 'solvability', the three factors no longer add up to a cost-effectiveness heuristic. i.e. if you formalise what you mean by neglectedness and insert it into the formula here without changing anything else, the formula will no longer cancel out to 'good done / extra person or $'.
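
For reference, the cancellation in question, with the three factors roughly as 80,000 Hours defines them:

scale = good done / % of problem solved
solvability = % of problem solved / % increase in resources
neglectedness = % increase in resources / extra person or $

scale × solvability × neglectedness = good done / extra person or $

Redefine neglectedness (e.g. in terms of distance from the total funding required) without adjusting the other two factors, and these intermediate units no longer cancel.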

Comment by kit on Updated Climate Change Problem Profile · 2019-10-08T14:13:49.736Z · score: 11 (8 votes) · EA · GW

Thanks for this. I found it interesting to think about. Here are my main comments.

Mainline and extreme risks

I think it would be better to analyse mainline risks and extreme risks separately.

  • Depending on whether or not you put substantial weight on future people, one type of risk may be much more important than the other. The extreme risks appear to pose a much larger existential threat than mainline risks, so if you value future generations the extreme risks may be much more important to focus on. The opposite may be true for people who apply high pure time discounting. (Relatedly, on most worldviews, the 'scale' factor will be materially different between the two.)
  • The ideal responses to mainline and extreme risks appear to be different. (Relatedly, the 'solvability' of those responses may differ, as might the amount of resources that are already committed to the relevant kinds of responses.)
  • The methodologies which are useful are different. Efforts to understand and act on mainline risks are amenable to standard economic approaches, while extreme risk analysis requires substantive judgements about empirical and model uncertainty.

Overall, putting the two together seems to make the analysis less clear, not more.

More expensive things are worse

Firstly, this approach will undervalue capital intensive causes where the total required investment is so large that $10 billion/year may still represent underfunding. A better model would be one which examined the total funding required to solve the problem compared to the current level of funding.

The framework 80k uses is designed to add up to a cost-effectiveness heuristic. Adjusting this by giving more expensive things higher neglectedness scores in effect takes the 'cost' out of the 'cost-effectiveness analysis'. Using a completely different framework would be fine, but making this adjustment alone causes one to depart from any notion of good done per effort put in.

If I put more time into this, I would focus on the solvability part

In your initial comments on solvability, you give a concrete set of interventions which you say would cost approximately a certain amount and achieve approximately a certain amount. If someone were to analyse these in detail, this could be the basis for a cost-effectiveness calculation. Of course, I don't know enough to say whether the Project Drawdown analysis you reference is accurate. Other people looking into this might want to focus on that since it seems crucial to the bottom line.

Qualified 'need's

When you just say 'we need to do Y', this seems to be sort of assuming the conclusion, adding little to my understanding of which actions produce the most impact. For example:

All of these solutions, and more, need to be rolled out as quickly as possible at global scale.

I found it much more helpful when you said 'if we want to achieve X, we need to do Y', improving my understanding of what actions lead to what effects. For example:

the latest UN press release (2019-10-23) states that nations need to increase their targets fivefold to meet the goal of limiting warming to 1.5C.

This part of my comment might sound like a nitpick, but I think attention to this kind of thing can make for better analysis and better communication.

All personal views only, of course.

Comment by kit on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-09-30T10:22:27.089Z · score: 13 (7 votes) · EA · GW

Regarding increasing marginal returns (IMR), which seems to be the primary contribution of this paper and not obviously addressed by replies to other types of systemic change objections:

Perhaps rather than 'Are IMR commonly found in cause areas?', I would ask 'where are IMR found?' and, for the purposes of testing the critique, 'in which cases are relevant actors not already aware of those IMR?' This is because I expect the prevalence of IMR to vary substantially between areas. (I see that you also call for concrete examples in your paper.) Some examples, sorted from strategic to individual decision-making:

  • DMR (diminishing marginal returns) at a high level within causes: you may have seen this list of reasons to expect DMR when looking at a cause area at a sufficiently high level. I would argue that, since
    • the methodology you point to is largely about high-level cause prioritisation, and
    • the neglectedness criterion works if there are roughly† logarithmic diminishing marginal returns within causes,
    the IMR objection does not apply to them.
  • IMR when starting new projects: in their 2011 post on why they are not using donation matching, GiveWell discusses 'coordination matching', where the project only goes ahead if sufficient funds are committed (like Kickstarter), as a legitimate form of donation matching. [Minor: this may be IMR only with respect to money ultimately committed. A large donor making a partial, initial commitment may in fact have outsized impact relative to later funders due to the earlier donor causing later donors to act.]
  • IMR when coordinating individual careers: this section of William MacAskill's 2017 EAGx Berlin talk encourages more attention to coordinated action. This includes things closely analogous to the coordination games you discuss in the paper, in which one person switching to their comparative advantage is a cost, but two people switching is a gain. I would guess that you would also like 80,000 Hours' writing on comparative advantage and other coordination concepts.

Of course, my examples of IMR are drawn from people who helped create the effective altruism community, so they help with 'where are IMR found?' but not with identifying points where IMR are underappreciated by the community. It certainly seems possible that there are issues I am unaware of. I see that in your paper you mention 'the problem of competition between NGOs and states' as a problem whose solution might have IMR. To rehash an old counterargument, I would guess that this is not really a conflict, in that substantially reducing disease prevalence (the relevant comparable on the scale of the funding required for other mass empowerment efforts) is hugely empowering. However, this is an off-the-cuff comment. I also know little about the example of whether corporate campaigns accelerate or decelerate abolition, but I believe that this was and probably still is an active debate among effectiveness-focused animal advocates.

† In fact, I am a little worried that the logarithmic assumption is sufficiently wrong to cause problems, while I am confident that there is DMR in general at this level.

Comment by kit on Are we living at the most influential time in history? · 2019-09-22T17:21:36.669Z · score: 1 (1 votes) · EA · GW

Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.

That seems like a sufficiently precise definition. Whether there are any interventions in that category seems like an open question. (Maybe it is a lot more narrow than Will's intention.)

Comment by kit on Are we living at the most influential time in history? · 2019-09-15T14:05:04.832Z · score: 3 (3 votes) · EA · GW

Not appropriated: lost to value drift. (Hence, yes, the historical cases I draw on are the same as in my comment 3 up in this thread.) I'm thinking of this quantity as something like the proportion of resources which will in expectation be dedicated 100 years later to the original mission as envisaged by the founders, annualised.

Comment by kit on Are we living at the most influential time in history? · 2019-09-14T22:36:10.535Z · score: 9 (5 votes) · EA · GW

Got it. Given the inclusion of (bad) value drift in 'appropriated (or otherwise lost)', my previous comment should just be interpreted as providing evidence to counter this claim:

But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year.

[Recap of my previous comment] It seems that this quote predicts a lower rate than there has ever† been before. Such predictions can be correct! However, a plan for making the prediction come true is needed.

It seems that the plan should be different to what essentially all†† the people with higher rates of (bad) value drift did. These particular suggestions (succession planning and including an institution's objectives in its charter) seem qualitatively similar to significant minority practices in the past. (e.g. one of my outside views uses the reference class of 'charities with clear founding values'. For the 'institutions through the eras' one, religious groups with explicit creeds and explicit succession planning were prominent examples I had in mind.) The open question then seems to be whether EAs will tend to achieve sufficient improvement in such practices to bring (bad) value drift down by around an order of magnitude relative to what has been achieved historically. This seems unlikely to me, but not implausible. In particular, the idea that it is easier to design a constitution based on classical utilitarianism than for other goals people have had is very interesting.

Aside: investing heavily in these practices seems easier for larger donors. The quote seems very hard to defend for donors too small to attract a highly dedicated successor.

This discussion has made me think that insofar as one does punt to the future, making progress on how to reduce institutional value drift would be a very valuable project, even if I'm doubtful about how much progress is possible.

† It seems appropriate to exclude all groups coordinating for mutual self-interest, such as governments. (This is broader than my initial carving out of for-profits.)
†† However, it seems useful to think about a much wider set of mission-driven organisations than foundations because the sample of 100-year-old foundations is tiny.

Comment by kit on Are we living at the most influential time in history? · 2019-09-13T18:12:26.542Z · score: 2 (2 votes) · EA · GW

Thanks. I agree that we might endorse some (or many) changes. Hidden away in my first footnote is a link to a pretty broad set of values. To expand: I would be excited to give (and have in the past given) resources to people smarter than me who are outcome-oriented, maximizing, cause-impartial and egalitarian, as defined by Will here, even (or especially) if they plan to use them differently to how I would. Similarly, keeping the value 'do the most good' stable maybe means something like keeping the outcome-oriented, maximizing, cause-impartial and egalitarian values stable.

For clarity, I excluded profit maximisation because incentives to pursue this goal seem powerful in a way that might never apply to effective altruism, however broadly it is construed. (The 'impartial' part seems especially hard to keep stable.) In particular, profit maximisation does not even need to be propagated: e.g. if a company does some random other stuff for a while, its stakeholders will still have a moderate incentive to maximise profits, so will typically return to doing this. A similar statement is that 'maximise profits' is the default state of things. No matter how broad our conception of 'do the most good' can be made, it seems likely to lack this property (except for lock-in scenarios).

Comment by kit on Are we living at the most influential time in history? · 2019-09-13T09:10:32.942Z · score: 29 (9 votes) · EA · GW

I was very surprised to see that 'funds being appropriated (or otherwise lost)' is the main concern with attempting to move resources 100 years into the future. Before seeing this comment, I would have been confident that the primary difficulty is in building an institution which maintains acceptable values† for 100 years.

Some of the very limited data we have on value drift within individual people suggests losses of 11% and 18% per year for two groups over 5 years. I think these numbers are a reasonable estimate for people who have held certain values for 1-6 years, with long-run drop-off for individuals being lower.
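
(For scale, compounding over the 100 years under discussion: the ~2%/year rate quoted above leaves 0.98^100 ≈ 13% of resources still aligned, while 11%/year leaves 0.89^100 ≈ 0.001% and 18%/year leaves 0.82^100 ≈ 0.0000002%.)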

A more relevant but less precise outside view is my intuitions about how long charities which have clear founding values tend to stick to those values after their founders leave. I think of this as ballpark a decade on average, though hopefully we could do better by investing time and money in increasing this.

Perhaps yet more relevant and yet less precise is the history of institutions through the eras which have built themselves around some values which they thought of as non-negotiable (in the same way that we might see impartiality as non-negotiable). For example, religious institutions. My vague, non-historian impression is that, even considering institutions founded with concrete values at their core, very few still had those values†† 100 years later, if they existed in the same form at all.

The thing I'd find most convincing in outweighing these outside views is simply an outline for how EAs can get this institutional value drift thing close to zero. I can imagine such a plan seeming obvious to others, but it currently looks like a potentially intractable problem to me.†††

† Possible example of acceptable values.

†† I'm excluding 'maximise profits' as a value!

††† This all becomes fairly simple upon the rise of any technology which would enable permanent lock-in. However, it seems that this would be a time to deploy a lot of resources immediately, so ways to move money into the future at that time seem less helpful. This seems like weak evidence for an unfortunate correlation between hingeyness and ability to move resources into the future.

Comment by kit on Are we living at the most influential time in history? · 2019-09-05T13:12:46.569Z · score: 7 (3 votes) · EA · GW

Thanks! I hadn't seen the Cotton-Barratt piece before.

Extinction risk reduction punts on the question of which future problems are most important to solve, but not how best to tackle the problem of extinction risk specifically. Building capacity for future extinction risk reduction work punts on how best to tackle the problem of extinction risk specifically, but not the question of which future problems are most important to solve. They seem to do more/less punting than one another along different dimensions, so, depending on one's definition of direct vs punting, each could be more of a punt than the other. I'm not clear on whether this means we should pick a dimension to talk about, or whether there is no meaningful single spectrum of directness vs punting.

Comment by kit on Are we living at the most influential time in history? · 2019-09-05T10:08:17.684Z · score: 13 (5 votes) · EA · GW

I agree that, among other things, discussion of mechanisms for sending resources to the future would be needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable.

However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds with which the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread mostly focuses on). I think that William_MacAskill's main arguments are broadly compatible with both of these concepts, so one could get more out of the piece by interpreting it as about the more useful concept.

Formally, one could see the thing being analysed as

Pr[century i maximises leverage of century j, across centuries j ≥ i | K_i],

where K_i is the knowledge available at the beginning of century i. If we and all future generations may freely move resources across time, and some things that are maybe omitted from the leverage definition are held constant, this expression tells us with what odds we are correct to do 'direct work' today as opposed to transfer resources one century forward. (Confusion about what 'direct work' means noted here.)

However, you seem to be right that as soon as you don't hold other very important factors (such as how well one can send resources to the future) constant, those additional terms go inside the maximisation evaluation, and hence the above expression still isn't that useful. (In particular, it can't just be multiplied by an independent factor to get to a useable expression.)

(Also, I feel like I'm mathing from the hip here, so quite possibly I've got this quite wrong.)

Comment by kit on Are we living at the most influential time in history? · 2019-09-04T18:46:18.174Z · score: 23 (11 votes) · EA · GW

This was very thought-provoking. I expect I'll come back to it a number of times.

I suspect that how the model works depends a lot on exactly how this definition is interpreted:

a time t_1 is more influential (from a longtermist perspective) than a time t_2 iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_1 rather than to a longtermist altruist living at t_2.

In particular, I think you intend direct work to include extinction risk reduction, and to be opposite to strategies which punt decisions to future generations. However, extinction risk reduction seems like the mother of all punting strategies, so it seems naturally categorised as not direct work for the purpose of considering whether to punt. Due to this, I expect some weirdness around the categorisation, and would guess that a precise definition would be productive.

(Added formatting and bold to the quote for clarity.)

Comment by kit on Are we living at the most influential time in history? · 2019-09-04T17:56:20.386Z · score: 20 (6 votes) · EA · GW

Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of max_i Pr[century i is the most leveraged]. However, what we want in order to make decisions is something more akin to max_i E[leverage of century i]. The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on expected ranking and low on expected value. I am unclear on whether a flat prior makes sense for expectancy, but it seems more reasonable than for probability.
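
A toy illustration of the difference, with invented numbers: suppose that with probability 0.5 the future ends after century 1 (century 1 has leverage 1 and there are no later centuries), and with probability 0.5 there is a long future in which century 1 has leverage 10 and century 5 has leverage 100. Century 1 is then the most leveraged century with probability 0.5, tied with century 5, yet E[leverage of century 1] = 5.5 while E[leverage of century 5] = 50. A short future inflates a century's expected ranking without adding much expected value.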

Of course, even max_i E[leverage of century i] does not accurately reflect what we are looking for. Similarly to Gregory_Lewis' comment, the decision-relevant thing (if 'punting to the future' is possible at all) is closer still to max_i [what we will assess the leverage of century i to be at the time]. i.e. whether we will have higher expected leverage in some future century according to our beliefs at that time. Thinking this through, I also find it plausible that even this does not make sense when using the definitions in the post, and will make a related top-level comment.

Comment by kit on Key points from The Dead Hand, David E. Hoffman · 2019-08-14T07:14:38.794Z · score: 1 (1 votes) · EA · GW

I stand corrected. I think those quotes overstate matters a decent amount, but indeed the security of fissile material is a significantly more important barrier to misuse.

Comment by kit on Key points from The Dead Hand, David E. Hoffman · 2019-08-10T08:41:03.791Z · score: 12 (6 votes) · EA · GW

Thanks! Here are some places you might start. (People who have done deeper dives into nuclear risk might have more informed views on what resources would be useful.)

  • Baum et al., 2018, A Model For The Probability Of Nuclear War makes use of a more comprehensive list of (possible) close calls than I've seen elsewhere.
  • FLI's timeline of close calls is a more (less?) fun display, which links on to more detailed sources. Note that many of the sources are advocacy groups, and they have a certain spin.
  • Picking a few case studies that seemed important and following the citations to the most direct historical accounts to better understand how close a call they really were might be a project which would interest you.
  • I thought this interview with Samantha Neakrase of the Nuclear Threat Initiative was helpful for understanding what things people in the nuclear security community worry about today.

Some broader resources

  • The probability of nuclear war is only one piece of the puzzle – even a nuclear war would probably not end the world, thankfully. I found the recent Rethink Priorities nuclear risk series (#1, #2, #3, #4, #5, especially #4) very helpful for putting more of the pieces together.
  • This Q&A with climate scientist Luke Oman gets across some key considerations very efficiently.

I'm also glad that you interpret the discussion of the Petrov incident as 'some evidence against'. That's about the level of confidence I intended to convey.

Comment by kit on How urgent are extreme climate change risks? · 2019-08-08T19:24:11.404Z · score: 2 (2 votes) · EA · GW

Open Phil (then GiveWell Labs) explored climate change pretty early on in their history, including the nearer-term humanitarian effects. Giving What We Can also compared climate change efforts to health interventions. (Each page is a summary page which links to other pages going into more detail.)

Comment by kit on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-03T12:14:48.786Z · score: 16 (12 votes) · EA · GW

I'm very excited to see people doing empirical work on which of the things we care about are in fact dominated by their extremes. At least after adjusting for survey issues, statements like

The bottom 90% accounts for 30% of incidents

seem to be a substantial improvement on theoretical arguments about properties of distributions. (Personal views only.)

Comment by kit on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-03T12:13:30.322Z · score: 4 (4 votes) · EA · GW

I'm less optimistic about the use of surveys on whether people think tryptamines will/did work:

  • 'And do they work?' doesn't seem like a question that will be accurately answered by asking people whether it worked for them. (Reversion to the mean being my main concern.)
  • Non-users are asked whether tryptamines 'could be effective for treating your cluster headaches', which could be interpreted as a judgement on whether it works for anyone or whether it will work for them (for which the correct answer seems to be 'maybe'). Users are asked whether it worked for them specifically. Directly computing the difference between these answers doesn't seem meaningful.

Comment by kit on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T08:40:58.163Z · score: 15 (7 votes) · EA · GW

Huh. The winning response, one of the six early responses, also engages explicitly with the arguments in the main post in its section 1.2 and section 2. This one discussed things mentioned in the post without explicitly referring to the post. This one summarises the long-term-focused arguments in the post and then argues against them.

I worry I'm missing something here. Dismissing these responses as 'cached arguments' seemed stretched already, but the factual claim made to back that decision up, that 'None of these engaged with the pro-psychedelic arguments I made in the main post', seems straightforwardly incorrect.

Comment by kit on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T22:32:58.055Z · score: 19 (8 votes) · EA · GW

I also came to note that the request was for 'the best arguments against psychedelics', not for counter-arguments to your specific arguments in favour.

However, I also wrote one of the six responses referred to, and I contest the claim that

None of these engaged with the pro-psychedelic arguments I made in the main post

The majority of my response explicitly discusses the weakness of the argumentation in the main post for the asserted effect on the long-term future. To highlight a single sentence which seems to make this clear, I say:

I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is.

I also referred to arguments made by Michael Plant, which in my amateur understanding appeared to be stronger than those in the post. To me, it seems good that others engaged primarily with arguments such as Michael's, because engaging with stronger arguments tends to lead to more learning. When I drafted my submission, I considered whether it was unhealthy to primarily respond to what I saw as weaker arguments from the post itself. Yet, contra the debrief post, I did in fact do so.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-28T14:10:05.951Z · score: 5 (3 votes) · EA · GW

On the specific questions you're asking about whether empirical data from the Kuwaiti oil field destruction is taken into account: it seems that the answer to each is simply 'yes'. The post says that the data used is adapted from Toon et al. (2007), which projects how much smoke would reach the stratosphere specifically. The paper explicitly considers that event and what the model would predict about it:

Much interest in plume rise was directed at the Kuwati oil fires set by Iraqi forces in 1991. Small (1991) estimated that oil well fires produce energy at a rate of about 300 MW. Since the wells were separated by roughly 1 km, they represent a very small energy source relative to either forest fires or mass fires such as occurred in Hiroshima. Hence these oil well smoke plumes would be expected to be confined to the boundary layer, and indeed were observed within the boundary layer during the Persian Gulf War.

The details of the paper could be wrong – I'm a complete amateur and would be interested to hear the views of people who've looked into it, especially given substantial reliance on this particular paper in the post – but it seems to have already considered the things you raise.

However, this still got me thinking. Why look at smoke from burning oil fields, with their much lower yields, when one could look at smoke from Hiroshima or Nagasaki? It's a grim topic, but more relevant for projecting the effects of other nuclear detonations. After a surprisingly long search, I found this paper, which attempts to measure the height of the 'mushroom cloud' over Hiroshima, which isn't what we're looking for. Fortunately for me, they seem to think that Photo '(a) Around Kurahashi Island' is another photo of the 'mushroom cloud', but in fact it appears to be the cloud produced by the resulting fires. This explains their surprising result:

The height of the cloud in Figure 1 (a) is estimated to be about 16 km. This largely exceeds the 8 km that was previously assumed.

16km (range 14.54-16.88km) is well into the stratosphere across Russia and most of the US, so it seems that history is compatible with theories which say that weapons on the scale of 'Little Boy' (13–18kt) are likely to cause substantial smoke in the stratosphere.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-28T14:07:19.427Z · score: 1 (1 votes) · EA · GW

On your general point about paying attention to political biases, I agree that's worthwhile. A quibble related to that which might matter to you: the Wikipedia article you're quoting seems to attribute the incorrect predictions to TTAPS but I could only trace them to Sagan specifically. I could be missing something due to dead/inaccessible links.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-20T18:49:11.058Z · score: 21 (8 votes) · EA · GW

There are a whole bunch of things I love about this work. Among other things:

  • An end-to-end model of nuclear winter risk! I'm really excited about this.
  • The quantitative discussions of many details and how they interact are very insightful. e.g. ones which were novel for me included how exactly smoke causes agriculture loss, and roughly where the critical thresholds for agricultural collapse might be. The concrete estimates for the difference in smoke production between counterforce and countervalue, which I knew the sign of but not the magnitude, are fascinating and make this much clearer.
  • I really appreciate the efforts to make the (huge) uncertainty transparent, notably the list of simplifying assumptions, and running specific scenarios for heavy countervalue targeting. Most of all, though, the Guesstimate model is remarkably legible, which makes absorbing all this info so much easier.

Comment by kit on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-20T18:41:44.054Z · score: 20 (7 votes) · EA · GW

I have one material issue with the model structure, which I think may reverse your bottom line. The scenario full-scale countervalue attack against Russia has a median smoke estimate of 60Tg and a scenario probability of 0.27 x 0.36 = ~0.1. Since 60Tg is that scenario's median, at least half of that ~10% should exceed 60Tg, so the probability of total smoke exceeding 60Tg has to be >5%, but Total smoke generated by a US-Russia nuclear exchange calculates a probability of only 0.35% for >60Tg smoke.

What seems to be going on is that the model incorporates estimated smoke from each countervalue targeting scenario as {scenario probability x scenario amount of smoke} in all Monte Carlo samples, when I figure you actually want it to count {scenario amount of smoke} in the appropriate proportion of samples. This would give a much more skewed distribution.
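
To illustrate the difference, here is a minimal sketch in Python with made-up point values (the real model would draw each scenario's smoke from its own distribution rather than use constants):

```python
import random

# Toy constants for illustration only, not the Guesstimate model's numbers:
# with probability p a full-scale countervalue strike occurs, producing
# ~60Tg of smoke; otherwise the exchange produces ~5Tg.
p_strike = 0.1
smoke_if_strike = 60.0
smoke_otherwise = 5.0
n = 100_000

# What the model effectively does now: every sample gets the
# probability-weighted average, so no sample ever comes close to 60Tg.
averaged = [p_strike * smoke_if_strike + (1 - p_strike) * smoke_otherwise
            for _ in range(n)]

# Mixture sampling: decide per sample whether the strike happens, then use
# that scenario's smoke, giving the skewed, two-humped distribution.
mixture = [smoke_if_strike if random.random() < p_strike else smoke_otherwise
           for _ in range(n)]

print(max(averaged))                      # 10.5, never near 60
print(sum(s >= 60 for s in mixture) / n)  # ~0.10
```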

Sampling properly (as I see it) seems to be a bit fiddly in Guesstimate, but I put something together for Smoke that would be generated as a result of countervalue targeting against the US in an 'Alternative 3.4.4' section here. (I figured making a copy would be the easiest way to communicate the idea.)

I also redirected the top-level smoke calculation to point to the above alternate calculation to see what difference it makes. (Things I've added are marked with [KH] in the copy to make the differences easy to spot.) Basically every distribution now has two humps: either there is a countervalue strike and everything has a high chance of collapsing, or there isn't and things are awful but probably recoverable. Some notable conclusions that change:

  • ~15% chance of getting into the 50Tg+ scenarios that you flag as particularly concerning, up from ~1%.
  • ~13% chance that corn cultivation becomes impossible in Iowa, and 6% chance that Ukraine cannot grow any of the crops you focus on, both from <1%. I don't know whether still being able to grow some amount of barley helps much.
  • Your bottom-line ~5% chance of 96% population collapse jumps to ~16%, with most of that on >99% collapse. On the bright side, expected deaths drop by ~1bn.

Obviously, all these numbers are hugely unstable. I list them only to illustrate the difference made by sampling in this way, not to suggest that the actual numbers should be taken super seriously.

As above, these changes are just from adjusting the sampling for Smoke that would be generated as a result of countervalue targeting against the US. Doing the same adjustment for Smoke that would be generated as a result of countervalue targeting against Russia would add additional risk of extreme nuclear winter. For example, I think your model would imply a few % chance of all the crops you focus on becoming impossible to grow in both Iowa and Ukraine.

Before exploring your work, I hadn't understood just how heavily extinction risk may be driven by the probability of a full-scale countervalue strike occurring. This certainly makes me wonder whether there's anything one can do to specifically reduce the risk of such strikes without too significantly increasing the overall risk of an exchange. In general, working through your model and associated text and sources has been super useful to my understanding.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-20T09:54:37.618Z · score: 2 (2 votes) · EA · GW

Neat. Happy to be a little bit helpful!

Comment by kit on How many people would be killed as a direct result of a US-Russia nuclear exchange? · 2019-06-19T19:26:39.212Z · score: 3 (2 votes) · EA · GW

Agreed. The discussion of the likelihood of countervalue targeting throughout this piece seems very important if countervalue strikes would typically produce considerably more soot than counterforce strikes. In particular, the idea that any countervalue component of a second strike would likely be small seems important and is new to me.

I really hope the post is right that any countervalue targeting is moderately unlikely even in a second strike for the countries with the largest arsenals. That one ‘point blank’ line in the 2010 NPR was certainly surprising to me. On the other hand, I'm not compelled by most of the arguments as applied to second strikes specifically.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-19T18:09:54.986Z · score: 19 (6 votes) · EA · GW

This is fascinating, especially with details like different survivability of US and Russian SLBMs. My main takeaway is that counterforce is really not that effective, so it remains hard to see why it would be worth engaging in a first strike. I'd be interested to hear if you ever attempt to quantify the risk that cyber, hypersonic, drone and other technologies (appear to) change this, or if this has been attempted by someone already.

Relatedly:

If improvements in technology allowed either country to reliably locate and destroy those targets, they would be able to eliminate the others’ secure second strike, thereby limiting the degree to which a nuclear war could escalate.

Perhaps reading into this too much, but I wondered if you think the development of some kinds of effective counterforce are net positive in expectation from an extinction risk perspective. My amateur impression is that these developments are kind of all bad (most prominently because the ability to destroy weapons seems to force ‘launch on warning’ to be the default, making accidental escalation (from zero) more likely), but I'm potentially generalising too much.

Comment by kit on Would US and Russian nuclear forces survive a first strike? · 2019-06-19T18:09:37.574Z · score: 12 (4 votes) · EA · GW

Quibbles/queries:

The one significant thing I was confused about was why the upper bound survivability for stationary, land-based ICBMs is only 25%. It looks like these estimates are specifically for cases where a rapid second strike (which could theoretically achieve survivability of up to 100%) is not attempted. Do you intend to be taking a position on whether a rapid second strike is likely? It seems like you are using these numbers in some places, e.g. when talking about ‘Countervalue targeting by Russia in the US’ in your third post, when you might be using significantly larger numbers if you thought a rapid second strike was likely. The reason I’m interested in this question is that it seems likely to feed into your planned research into nuclear winter, which I particularly look forward to.

Also, maybe you intend for your adjustment for US missile defence systems to be negating 15% of the lost warheads rather than adding 15% to the total arsenal? The current calculation suggests that missile defences reduce counterforce effectiveness by ~61%, which seems like not your intention given what you’ve said about interceptor success rates and diminishing returns on a counterforce strike. (I think this change would decrease surviving, deployed US warheads by ~163, so possibly has moderately large implications for your later work.)

Comment by kit on Which nuclear wars should worry us most? · 2019-06-19T17:15:41.037Z · score: 11 (6 votes) · EA · GW

This series (#2, #3) has quickly become the most interesting-to-me material on the Forum in a long time. Thanks very much. If you have written or do write about how future changes in arsenals may change your conclusions about what scenarios to pay the most attention to, I'd be interested in hearing about it.

In case relevant to others, I found your spreadsheet with raw figures more insightful than the discrete system in the post. To what extent do you think the survey you use for the probabilities of particular nuclear scenarios is a reliable source? (I previously distrusted it for heuristic reasons like the authors seeming to hype some results that didn’t seem that meaningful.) I'm interested because, as well as the numbers you use it for, the survey implies ~15% chance of use of nuclear weapons conditional on a conventional conflict occurring between nuclear-armed states, which seemed surprisingly low to me and would change my thinking about conflicts between great powers in general if I believed it.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-26T09:05:30.544Z · score: 12 (6 votes) · EA · GW

effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor

Let's go. Upside 1:

effect from boosting efficacy of current long-termist labor

Adding optimistic numbers to what I already said:

  • Let's say EAs contribute $50m† of resources per successful drug being rolled out across most of the US (mainly contributing to research and advocacy). We ignore costs paid by everyone else.
  • This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn't trade off against the rollout of any other important drug.
  • At any one time, about 100 EAs†† use the now-well-understood, legal drug, and their baseline productivity is average for long-term-focused EAs.
  • This improves their productivity by an expected 5%††† vs alternative mental health treatment.
  • Bottom line: your $50m buys you about 100 x 5% x 3 = 15 extra EA-years via this mechanism, at a price of $3.3m per person-year.

Suppose we would trade off $300k for the average person-year††††. This gives a return on investment of about $300k/$3.3m = 0.09x. Even with optimistic numbers, upside 1 justifies a small fraction of the cost, and with midline estimates and model errors I'd expect more like a ~0.001x multiplier. Thus, this part of the argument is insignificant.
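
(Spelled out: $50m / 15 EA-years ≈ $3.3m per EA-year, and $300k / $3.3m ≈ 0.09.)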

-----

Also, I've decided to just reply to this thread, because it's the only one that seems decision-relevant.

† Various estimates of the cost of introducing a drug here, with a 2014 estimate being $2.4bn. I guess EAs could only cover the early stages, with much of the rest being picked up by drug companies or something.
†† Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.
††† This looks really high but what do I know.
†††† Pretty made up but don't think it's too low. Yes, sometimes years are worth more, but we're looking at the whole population, not just senior staff.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T19:22:57.805Z · score: 4 (3 votes) · EA · GW

Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved.

Pointing out that there are two upsides is helpful, but I had just made this claim:

The math for [the bold part] seems really unlikely to work out.

It would be helpful if you could agree with or contest that claim before we move on to the other upside.

-

Rationality projects: I don't care to arbitrate what counts as EA. I'm going to steer clear of present-day statements about specific orgs, but you can see my donation record from when I was a trader on my LinkedIn profile.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T19:01:44.000Z · score: 8 (2 votes) · EA · GW

I'm not arguing against trying to compare things. I was saying that the comparison wasn't informative. Comparing dissimilar effects is valuable when done well, but comparing d-values of different effects from different interventions tells you very little.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T08:33:22.825Z · score: 18 (6 votes) · EA · GW

To explicitly separate out two issues that seem to be getting conflated:

  • Long-term-focused EAs should make use of the best mental health care available, which would make them more effective.
  • Some long-term-focused EAs should invest in making mental health care better, so that other long-term-focused EAs can have better mental health care and be more effective.

The former seems very likely true.

The latter seems very likely false. You would need the additional cost of researching, advocating for and implementing a specific new treatment (here, psilocybin) across some entire geography to be justified by the expected improvement in mental health care (above what already exists) for specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.

I continue to focus on the claims about this being a good long-term-focused intervention because that's what is most relevant to me.

-----

Non-central notes:

  • We've jumped from emotional blocks & unhelpful personal narratives to life satisfaction & treatment-resistant depression, which are very different.
  • As you note, the two effects you're now comparing (life satisfaction & treatment-resistant depression) aren't really the same at all.
  • I don't think that straightforwardly comparing two Cohen's d measurements is particularly meaningful when comparing across effect types.

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T21:49:52.061Z · score: 9 (6 votes) · EA · GW

I believe you when you say that psychedelic experiences have an effect of some (unknown) size on emotional blocks & unhelpful personal narratives, and that this would change workers' effectiveness by some (unknown) amount. However, even assuming that the unknown quantities are probably positive, this doesn't tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.

Nonetheless, I think your arguments should be either compelling or something of a wake-up call for some readers. For example, if a reader does not require careful, quantified arguments to justify their favoured cause area†, they should also not require careful, quantified arguments about other things (including psychedelics).

† For example, but by no means exclusively, rationality training.

[Edited for kindness while keeping the meaning the same.]

Comment by kit on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-10T20:02:55.591Z · score: 31 (18 votes) · EA · GW

Boring answer warning!

The best argument against most things being 'an EA cause area'† is simply that there is insufficient evidence in favour of the thing being a top priority.

I think future generations probably matter morally, so the information in sections 3(a), 3(b) and 4 matters most to me. I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is. There is info about what a causal mechanism might be, but analysis of the strength is also needed. (For example, you say that psychedelic interventions are plausibly in the same ballpark of effectiveness as other interventions that increase the set of well-intentioned + capable people. I only agree with this because you use the word 'plausibly', and plausibly...in the same ballpark isn't enough to make something an EA cause area.) I think similarly about previous discussion I've seen about the sign and magnitude of psychedelic interventions on the long-term future. (I'm also pretty sceptical of some of the narrower claims about psychedelics causing self-improvement.††)

I did appreciate your coverage in section 4 of the currently small amount of funding and what is getting done as a result, which seems like it could form part of a more thorough analysis.†††

My amateur impression is that Michael Plant has made a decent start on quantifying near-term effects, though I don't think anyone should take my opinion on that very seriously. Regardless of that start looking good, I would be unsurprised if most people who put less weight on future generations than me still wanted a more thorough analysis before directing their careers towards the cause.

As I said, it's a boring answer, but it's still my true objection to prioritising this area. I also think negative PR is a material consideration, but I figured someone else will cover that.

-----

† Here I'm assuming that 'psychedelics being an EA cause area' would eventually involve effort on a similar scale to the areas you're directly comparing it to, such as global health (say ~100 EAs contributing to it, ~$10m in annual donations by EA-aligned people). If you weaken 'EA cause area' to mean 'someone should explore this', then my argument doesn't work, but the question would then be much less interesting.

†† I think mostly this comes from me being pretty sceptical of claims of self-improvement which don't have fairly solid scientific backing. (e.g. I do deep breathing because I believe that the evidence base is good, but I think most self-improvement stuff is random noise.) I think that the most important drivers of my intuitions for how to handle weakly-evidenced claims have been my general mathematical background, a few week-equivalents trying to understand GiveWell's work, this article on the optimiser's curse, and an attempt to simulate the curse to get a sense of its power. Weirdness aversion and social stuff may be incorrectly biasing me, but e.g. I bought into a lot of the weirdest arguments around transformative AI before my friends at the time did, so I'm not too worried about that.

††† I also appreciated the prize incentive, without which I might not have written this comment.

Comment by kit on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T08:07:29.483Z · score: 16 (8 votes) · EA · GW

As an aside, I wouldn't say that any Good Ventures things are 'housed under Open Phil'. I'd rather say that Open Phil makes recommendations to Good Ventures. i.e. Open Phil is a partner to Good Ventures, not a subsidiary.

Technically, I've therefore answered a different question to the one you asked: I've answered the question 'why aren't these grants on the Open Phil website'.

Comment by kit on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T08:07:05.506Z · score: 13 (7 votes) · EA · GW

From Good Ventures' grantmaking approach page:

In 2018, Good Ventures funded $164 million in grants recommended by the Open Philanthropy Project, including $74 million to GiveWell’s top charities, standout charities, and incubation grants. (These grants generally appear in both the Good Ventures and Open Philanthropy Project grants databases.)
Good Ventures makes a small number of grants in additional areas of interest to the foundation. Such grants totaled around $19 million in 2018. Check out Our Portfolio and Grants Database to learn more about the grants we've made so far.

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:17:40.483Z · score: 10 (4 votes) · EA · GW

I figured the OP was suggesting that people go to the retreat? (or maybe be generically supportive of the broader project of running retreats)

Not sure where this is going; doesn't immediately seem like it counters what I said about your comparison to specific fundraising + analysis posts, or about why readers might be confused as to why this is here.

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:00:44.264Z · score: 4 (3 votes) · EA · GW

Right. The stuff about psychedelics as Cause X was maybe a bit of a red herring. You probably know how to sell your business much better than I do, but something which I think is undervalued in general is simply opening your pitch with why exactly you think someone should care about your thing. I actually hadn't considered creative problem-solving or career choice as reasons to go on this retreat.

My earlier comment was a reply to the challenge of 'how this post is substantively different from previous content like...' and this now seems fairly obvious, so I probably have little more useful to say :)

Comment by kit on Legal psychedelic retreats launching in Jamaica · 2019-04-18T07:46:59.846Z · score: 25 (13 votes) · EA · GW

I can see where you're coming from, but I think there's a lot of missing info here, and this will make the post confusing to most readers. Some* of the other posts you link to also ask things of their readers, but they present a case for why that ask is a particularly exceptional use of resources.

I happen to know of some topics which psychedelics might be relevant to, some of which are mentioned in the post and in your later comment, e.g.

  • Potentially strong treatment for depression
  • Drug liberalisation could reduce unnecessary incarceration
  • Very speculative things like maybe psychedelics make you a better or more effective person (or increase your risk of psychosis), or maybe psychedelics could help us study sentience

but it's pretty unclear how EAs going on a psychedelic retreat is an effective way to make progress in these fields. i.e. even with what I guess is an above-median amount of context on the subject, I don't know what the case is. Given that, I think Khorton's reaction is very reasonable.

Maybe I'm missing the point and the post is just saying that there's a cool thing you can do with other EAs, not trying to claim that it's an effectively altruistic use of resources. In that case, the difference between the posts appears to be simple.

A disclosure of my own: I previously worked for CEA. Of course, these are my opinions only.

*'Giving What We Can is still growing at a surprisingly good pace' doesn't justify an ask, but it doesn't have an ask either.

Comment by kit on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-10T13:26:15.478Z · score: 45 (16 votes) · EA · GW

I think IASPCs handle these things well, and think there's some misinterpretation going on. What makes a strong plan change under this metric is determined by whatever 80,000 Hours thinks is most important, which currently includes academic, industry, EA org and government roles. These priorities also change in response to new information and needs. The problem Sebastian is worried about seems like a bigger deal: maybe some orgs / local groups are defining their metrics mostly in terms of one category, or it's easy to naively optimise for one category at the expense of the others.

The part about counting impact from skill-building and direct work differently simply seems like correct accounting: EA orgs should credit themselves with substantially more impact for a plan change which has already led to impact than for one which might do so in the future, most obviously because the latter has a <100% probability of turning into the former (toy illustration below).
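
To spell out that accounting in a toy example (hypothetical helper and numbers, not 80,000 Hours' actual weights):

```python
# Toy sketch of the accounting logic: credit realised impact in full,
# and discount not-yet-realised impact by its estimated probability.
def plan_change_credit(impact_if_realised: float, p_realised: float) -> float:
    return impact_if_realised * p_realised

print(plan_change_credit(100, 1.0))  # already doing direct work -> 100.0
print(plan_change_credit(100, 0.3))  # skill-building that may convert -> 30.0
```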

I also think the metric works fine with Sebastian's point that quant trading can be competitive with other priority paths. You seem to imply that the use of IASPCs contradicts his advice, but you point to a non-priority rating for 'earn to give in a medium income career', which is not quant trading!† 80,000 Hours explicitly list quant trading as a priority path (as Seb pointed out in the post), so if an org uses IASPCs as one of their metrics they should be excited to see people with those particular skills go down that route. (If any readers land quant jobs in London, please do say hi :) )

I agree that misapplication of this or similar metrics is dangerous, and that if e.g. some local groups are just optimising for EA-branded orgs instead of at least the full swathe of priority paths, there's a big opportunity to improve. All the normal caveats about using metrics sensibly continue to apply.

All views my own.

† As a former trader, I felt the need to put an exclamation mark somewhere in this paragraph.

Comment by kit on Effective Impact Investing · 2019-03-02T17:23:14.793Z · score: 12 (5 votes) · EA · GW

I'd like to highlight the distinction between 'impact investing funds would outperform funds purely optimised for profit' and 'SRI doesn't undermine the bottom line'. In markets as efficient as I think publicly traded stocks are, the former is highly improbable and the latter is highly probable.

The blog post appears to make both claims. Habryka's complaint may seem more defensible to you if it is entirely about the former claim.

--

Two technical notes on this distinction:

  • Given the existence of some low-quality evidence for the strong (outperformance) claim, you might argue that that too is not so naive.
  • Of course, SRI typically reduces diversification, with an effect somewhere between negligible and substantial depending on the strategy, making the weak (doesn't undermine) claim misleading in some situations, even with efficient markets.
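
To make the 'negligible to substantial' point concrete, here is a toy equal-correlation model (illustrative numbers only, not a claim about any real strategy or portfolio): for n equally weighted stocks, each with volatility sigma and pairwise correlation rho, portfolio variance is sigma^2 * (1/n + (1 - 1/n) * rho), so shrinking the investable universe barely matters until the screen becomes aggressive.

```python
import math

def portfolio_vol(n_stocks, stock_vol=0.30, pairwise_corr=0.25):
    """Volatility of an equally weighted portfolio of identical stocks
    under an equal-correlation model (toy assumptions)."""
    variance = stock_vol**2 * (1 / n_stocks + (1 - 1 / n_stocks) * pairwise_corr)
    return math.sqrt(variance)

# Shrinking the investable universe costs little diversification for mild
# screens and noticeably more for aggressive ones.
for n in (500, 250, 50, 10):
    print(f"{n:>3} stocks: annualised vol ~ {portfolio_vol(n):.1%}")
```
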
Comment by kit on Effective Impact Investing · 2019-02-28T09:18:25.403Z · score: 14 (8 votes) · EA · GW

Thanks for giving examples of advocacy efforts you might see as a good use of investor time and capital! Pinning down the concrete outcomes of impact investing seems pretty key to figuring out in which situations engaging in it is a good use of time and capital.

When you say, 'shareholder advocacy, which is the primary mechanism for impact in public equity investing', I find this very plausible in the sense that it's the part which seems to have the highest potential. Interestingly, though, when I last looked into this, the vast majority of the SRI industry by capital seemed not to be engaging in shareholder advocacy.*

I would expect shareholder advocacy to be worth the time of effectiveness-minded altruists only in very specific situations (perhaps including some of the ones you named), but given that good shareholder advocacy seems so rare even in SRI, I wonder if there is room for getting the entire SRI industry to actually do the part of SRI which seems promising? Is it true that most SRI capital isn't being used for shareholder advocacy? Is it tractable to improve the industry in this way? (Is that already your main aim?)

--

*I'm not counting screening/divestment campaigns which don't involve talking to specific companies, because this generally seems not to provide clear incentives for any particular company to do anything in particular. Best-in-class screening might be an exception, but the incentives still seemed super weak to me when I last thought about this. Overall, it looks like there's ${tens of trillions} of assets considered to be SRI and a small number (on the order of 1,000 per year?) of good shareholder advocacy campaigns, which suggests a massive difference between the potential of SRI and SRI in practice today.

Comment by kit on Is Superintelligence Here Already? · 2019-01-18T18:24:28.189Z · score: 2 (2 votes) · EA · GW

Context: the report is 190 pages long and was published this month. Those who are reading it seem unlikely to reply with detailed analysis on this particular Forum post.

Object-level response: becoming excellent at chess, go, and shogi is interesting, since it is more general than being excellent at any one alone. My impression is that the AI safety community recognises the importance of milestones like this. It is simply the case that superintelligence typically means something far more general still, such as

an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills

a definition which an AI that can only play a specific set of games does not meet.

Since we have now discovered that the disagreement is merely a matter of definitions, hostilities can be ceased :)

Comment by kit on Talking about EA at an investors' summit · 2019-01-15T09:47:59.508Z · score: 7 (3 votes) · EA · GW

Immediate, not hugely informed thoughts: (I've talked to ~250 finance people about EA but only attended one finance conference, and it was fintech rather than investment.)

Broadly I'd recommend looking at generic sales advice, including some conference-specific stuff. A big thing is making clear quickly why you're relevant to someone. What can you offer them? Why will they care? What is the one point you want them to remember? They'll have little time, probably being very focused on finding potential business partners, e.g. funds looking for investors, if this is the kind of investor conference I'm thinking of. You might have to be even more prepared to demonstrate relevance than others at the conference because you are not obviously part of the main theme or expected to be there.

Also, how familiar are you with how these kinds of people think? Can you frame EA in industry terms, for example? Seeking to maximise (social) return on investment seems uncontroversial and is already used as a concept in impact investing. I've also tried talking directly about comparing charities (comparables, alpha), though that seemed not to translate as well. (Anecdotal.)

The handbook here may also be of tangential relevance: http://eaworkplaceactivism.org

Another tip is to expect an extremely low hit rate. Figure out who seems interested in thinking about their giving and focus on them. Think about whether you want to follow up with the most promising people to point them to key resources or connect them to other interested people. Figure out if asking for contact details is normal or weird in this context.

Good luck. There don't seem to be major downsides, so even with a low hit rate, it seems worth a shot.