Thanks, that's very helpful!
(I'm not surprised satisfaction was higher in 2022 than 2020 before FTX.)
Is it possible to compare overall satisfaction to previous EA surveys? As you say, all the methods for extracting the impact of FTX from this data seem a bit suspect. Satisfaction now vs. satisfaction 1-2 years ago is simpler and arguably more decision-relevant.
I'd expect this to look significantly worse if done in March rather than Dec :(
Might it be possible to re-survey a subset of people just about overall satisfaction, to see if it's moved?
I agree different comparisons are relevant in different situations.
A comparison with the median is also helpful, since it e.g. tells us the gain that the people currently doing the bottom 50% of interventions could get if they switched.
Though I think the comparison to the mean is very relevant (and hasn't had enough attention) since it's the effectiveness of what the average person donates to, supposing we don't know anything about them. Or alternatively it's the effectiveness you end up with if you pick without using data.
I think you'd need to show why this mean-over-median approach is correct to apply to strategy selection but incorrect to apply to cause area selection. Couldn't you equally argue that regression to the mean indicates we'll make errors in thinking some cause areas are 1000x more important or neglected than others?
I think regression to the mean is a bigger issue for cause selection than solution selection. I've tried to take this into account when thinking about between-cause differences, but could have underestimated it.
Basically, I think it's easier to pick the top 1% of causes than the top 1% of solutions, and there's probably also greater variance between causes.
(One way to get an intuition for this is that <0.001% of world GDP goes into targeted x-risk reduction or ending factory farming, while ~10% of world GDP is spent on addressing social issues in rich countries.)
One small extra data point that might be useful: I made a rough estimate for smallpox eradication in the post, finding it fell in the top 0.1% of the distribution for global health, so it seemed consistent.
I'd also add it would be great if there was more work to empirically analyse ex ante and ex post spread among hits-based interventions with multiple outcomes. I could imagine it leading to a somewhat different picture, though I think the general thrust will still hold, and I still think looking at spread among measurable interventions can help to inform intuitions about the hits-based case.
One example of work in this area is this piece by OP, where they say they believe they found some 100x and a few 1000x multipliers on cash transfers to US citizens by e.g. supporting advocacy for land use reform. But this involves an element of cause selection as well as solution selection; cash transfers seem likely to be below the mean; and this was based on BOTECs that will contain a lot of model error and so should be further regressed. Overall I'd say this is consistent with within-cause differences of ~10x from top to mean, and doesn't support >100x differences.
Hey, thanks for the comments. Here are some points that might help us get on the same page:
1) I agree this data is missing difficult-to-measure hits-based interventions, like research and advocacy, which means it'll understate the degree of spread.
I discuss that along with other ways it could understate the differences here:
2) Aside: I'm not sure conjunction of multipliers is the best way to illustrate this point. Each time you add a multiplier, it increases the chance the intervention doesn't work at all. I doubt the optimal degree of leverage in all circumstances is "the most possible", which is why Open Philanthropy supports interventions with a range of degrees of leverage (including none), rather than putting everything into the most multiplied thing possible (research into advocacy into research into malaria...). (Also, if adding multipliers is the right way to think about it, this data still seems relevant, since it tells you the variance of what you're multiplying in the first place.)
3) My comparison is between the ex ante returns of top solutions and the mean of the space.
Even if you can pick the top 1% of solutions with certainty, and the other 99% achieve nothing, then your selection is only ~100x the mean. And I'm skeptical we can pick the top 1% in most cause areas, so that seems like an upper bound. E.g. in most cases (esp things like advocacy) I think there's more than a 1% chance of picking something net harmful, which would already take us out of the top 1% in expectation.
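The ~100x upper bound above can be checked with a toy calculation (my own sketch, not from the comment; the solution count is an arbitrary assumption):

```python
# Toy check: if all value is concentrated in the top 1% of solutions and
# the other 99% achieve nothing, then picking the top 1% with certainty
# only gets you ~100x the mean of the whole space – an upper bound.
n = 10_000                        # hypothetical number of solutions
top = n // 100                    # the top 1%
values = [1.0] * top + [0.0] * (n - top)

mean_all = sum(values) / n        # mean over the whole space
mean_top = sum(values[:top]) / top
multiplier = mean_top / mean_all
print(multiplier)                 # 100.0
```

Any value at all in the bottom 99% only pulls the multiplier below 100x, which is why the extreme case is a bound.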
4) There are also major ways the data overstates differences in spread, like regression to the mean.
The data shows the top are ~10x the mean. If you were optimistic about getting a big multiplier on those, that maybe could get you to 1,000x. But then when we take into account regression to the mean, that can easily reduce spread another 10x, getting us back to something like 100x.
That seems plausible but pretty optimistic to me. My overall estimate for top vs. mean is ~10x, but with a range of 3-100x.
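One simple way to formalise "regress the estimate to the mean" is a normal-normal shrinkage in log-space (my own sketch; the 2/3 signal fraction is an assumption chosen to match the 1,000x → 100x figures above):

```python
import math

def regress_multiplier(estimated: float, signal_fraction: float) -> float:
    """Shrink an estimated top-vs-mean multiplier toward 1x by scaling its
    log by the assumed fraction of log-spread that is real signal rather
    than estimation noise."""
    return math.exp(signal_fraction * math.log(estimated))

# With an (assumed) signal fraction of 2/3, a 1,000x estimate regresses
# to ~100x – i.e. the further regression knocks off another ~10x.
print(round(regress_multiplier(1000, 2 / 3)))  # 100
```

The key design point is that shrinkage happens in log-space: multiplicative spreads regress multiplicatively, so a noisier estimate loses a constant *factor*, not a constant amount.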
> This also seems to potentially lead to biased comparisons between solution variance and cause level variance given how strongly differences in cause level variance are driven by expected value calculations (value of the future, etc.) that are far more extreme / speculative than what people comparing single interventions would have data on.
I agree estimates of cause spread should be regressed more than solution spread. I've tried to take this into account, but could have underestimated it.
In general I think regression to the mean is a very interesting avenue for developing a critique of core EA ideas.
A couple of comments that might help readers of the thread separate problems and solutions:
1) If you're aiming to do good in the short-term, I think this framework is useful:
expected impact = problem effectiveness x solution effectiveness x personal fit
I think problem effectiveness varies more than solution effectiveness, and is also far less commonly discussed in normal doing good discourse, so it makes sense for EA to emphasise it a lot.
However, solution effectiveness matters a lot too. It seems plausible that EAs neglect it too much.
80k covers both in the key ideas series: https://80000hours.org/articles/solutions/
If you can find a great solution to a second tier problem area, that could be more effective than working on the average solution in a top tier area.
This circumstance could arise if you're comparing a cause with a lot of effectiveness-focused people working on it (where all the top solutions are taken already) vs. a large cause with lots of neglected pockets; or due to personal fit considerations.
Personally, I don't think solution effectiveness varies enough to make climate change the top thing to work on for people focused on existential risk, but I'd be keen to see 1-5% focused on the highest-upside and most neglected solutions to climate change.
2) If you're doing longer-term career planning, however, then I think thinking in terms of specific solutions is often too narrow.
A cause is broad enough that you can set out to work on one in 5, 10 or even 20 years, and usefully aim towards it. But which solutions are most effective is normally going to change too much.
For longer-term planning, 80k uses the framework: problem effectiveness x size of contribution x fit
Size of contribution includes solution effectiveness, but we don't emphasise it – the emphasis is on finding a good role or aptitude instead.
3) Causes or problem areas can just be thought of as clusters of solutions.
Causes are just defined instrumentally, as whatever clusters of solutions are useful for the particular type of planning you're doing (because they require common knowledge and connections).
E.g. 80k chooses causes that seem useful for career planning; OP chooses causes based on what it's useful for their grantmakers to specialise in.
You can divide up the space into however many levels you like e.g.
International development -> global health -> malaria -> malaria nets -> malaria nets in a particular village.
Normally we call the things on the left 'problem areas' and the things on the right 'solutions' or 'interventions', but you can draw the line in different places.
Narrower groups let you be more targeted, but are more fragile for longer-term planning.
4) For similar reasons, you can compare solutions with similar frameworks to cause areas, including by using INT.
I talk more about that here: https://80000hours.org/articles/solutions/
This is very helpful.
Might you have a rough estimate for how much the bar has gone up in expected value?
E.g. is the marginal grant now 2x, 3x etc. higher impact than before?
Hey, I missed the lottery this year. Do you know when the next one will be?
Is this also the only one running in EA right now? Does it replace the one run by the EA Funds in the past?
That makes sense. It just means you should decrease your exposure to bonds, and not necessarily buy more equities.
I'm skeptical you'd end up with a big bond short though - due to my other comment. (Unless you think timelines are significantly shorter or the market will re-rate very soon.)
I think the standard asset pricing logic would be: there is one optimal portfolio, and you want to lever that up or down depending on your risk tolerance and how risky that portfolio is.
In the Merton share, your exposure depends on (i) expected returns of the optimal portfolio, (ii) volatility / risk, (iii) the risk-free rate over your investment horizon and (iv) your risk aversion.
You're arguing the risk free rate will be higher, which reduces exposure.
It seems like the possibility of an AI boom will also increase future volatility, also reducing exposure.
Then finally there's the question of expected returns of the optimal portfolio, which you seem to think is ambiguous.
So it seems like the expected effect would be to reduce exposure.
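To make that concrete, here's a minimal Merton-share sketch (all numbers are illustrative assumptions of mine, not from the thread):

```python
# Merton share: optimal risky exposure = (expected return - risk-free
# rate) / (risk aversion * variance). Raising either the risk-free rate
# or volatility cuts the optimal exposure.
def merton_share(mu: float, r: float, sigma: float, gamma: float) -> float:
    return (mu - r) / (gamma * sigma ** 2)

baseline = merton_share(mu=0.05, r=0.01, sigma=0.15, gamma=3)
# AI-boom scenario (assumed): same expected return, but a higher
# risk-free rate and higher volatility.
ai_boom = merton_share(mu=0.05, r=0.03, sigma=0.20, gamma=3)
print(baseline > ai_boom)  # True: optimal exposure falls
```

Under these assumed inputs, exposure drops both because the excess return (mu - r) shrinks and because variance grows, which is the combined effect described above.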
Sorry for making you repeat yourself, I'd read the appendix and the Cochrane post :)
To summarise, the effect on equities seems ambiguous to you, but it's clearly negative on bonds, so investors would likely tilt towards equities.
In addition, the Sharpe ratio of the optimal portfolio is decreased (since one of the main asset classes is worse), while the expected risk-free rate over your horizon is increased, so that would also imply taking less total exposure to risk assets.
What do you think of that implication?
One additional piece of caution is that within investing, I'm pretty sure the normal assumption is that growth shocks are good for equities – e.g. see the chapter on the growth factor in Expected Returns by Antti Ilmanen, or read about risk parity. There have been attempts to correlate the returns of different assets to changes in growth expectations.
On the other hand, I would guess theta is above one for the average investor.
What effect do you think an AI boom would have on inflation?
It seems like it would be deflationary, since it would drive down the cost of goods and labour, though it might cause inflation in finite resources like commodities and land, so perhaps the net effect could go either way?
(I partly ask because a common framework in investing for thinking about what drives asset prices is to break it into growth shocks, inflation shocks, changes in investor risk appetite and changes in interest rate policy. If AI will cause a growth shock and deflation shock, then normally that would be seen as positive for equities, ambiguous for real assets and nominal bonds, and negative for TIPS.)
I think we should go back to having a community tab.
The default front page would be for discussing how to actually use our resources to do the most good (i.e. a focus on the intellectual project of EA and object level questions).
All posts about the nature of EA as the particular group of people trying to work together would go in community. This would include criticisms of EA as a community (while criticisms of specific ways of doing good would go on the front page). It could also include org updates etc.
I think the key point is just equities will also go down if real interest rates rise (all else equal) and plausibly by more than a 20 year bond.
Just a quick addition that I think there's been too much focus on VCs in these discussions. FTX was initially aimed as a platform for professional crypto traders. If FTX went down, these traders using the platform stood to lose a large fraction of their capital, and if they'd taken external money, to go out of business. So I think they did have very large incentives to understand the downside risks (unlike VCs who are mainly concerned with potential upside).
Yeah, I agree that AGI could also make you want to save more. One factor is that higher interest rates can mean it's better to save more (depending on your risk aversion). Another is that it could increase your lifespan, or make it easier to convert money into utility (making your utility function more linear). A third is that it could reduce the value of your future income from labour.
Interesting. What would be the theoretical explanation for a negative relationship?
I think the effective duration on equities is roughly the inverse of the dividend yield + net buybacks, so with a ~2% yield, that's ~50 years.
Some more here: https://www.hussmanfunds.com/wmc/wmc040223.htm
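The duration heuristic above can be written out directly (a sketch of the rule of thumb, not a precise model):

```python
# Rule of thumb from the comment above: effective equity duration is
# roughly the inverse of the dividend yield plus net buybacks.
def equity_duration(dividend_yield: float, net_buybacks: float = 0.0) -> float:
    return 1.0 / (dividend_yield + net_buybacks)

print(equity_duration(0.02))  # 50.0 years at a ~2% combined yield
```

The intuition behind the heuristic is that a lower yield means cash flows arrive further in the future, so the price is more sensitive to changes in discount rates.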
Thanks that makes sense.
So if you implemented this with a future, you'd end up with -3.5% + 2.9% + rerating return = -0.6% + rerating.
With a 2% p.a. re-rating return over 20 years, the expected return is +1.4%, minus any fees & trade management costs.
If it happens over only 5 years, then +7.4%.
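To sanity-check that arithmetic (figures taken from the comments above; the decomposition into carry plus re-rating spread over a horizon is my own sketch):

```python
# Expected annual return on the short implemented via a future:
# carry (pay the yield, earn the cash rate) plus the total re-rating
# gain spread over the assumed number of years it takes to play out.
def short_bond_return(yield_paid: float, cash_rate: float,
                      total_rerating: float, years: float) -> float:
    carry = -yield_paid + cash_rate        # -3.5% + 2.9% = -0.6% p.a.
    return carry + total_rerating / years

# 40% total re-rating gain (2% rate rise x ~20yr duration), as above.
print(round(short_bond_return(0.035, 0.029, 0.40, 20), 3))  # 0.014 -> +1.4% p.a.
print(round(short_bond_return(0.035, 0.029, 0.40, 5), 3))   # 0.074 -> +7.4% p.a.
```

The horizon assumption does most of the work: the same total re-rating gain is far more attractive if it arrives in 5 years rather than 20.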
Thank you for the post! I'm very interested to see more work on this topic.
I feel a little bit unsure about the focus on the bonds – would be very curious to hear any reflections on the below.
As you say, if real interest rates rise, that should affect all assets with positive duration.
Perhaps then the net effect of having the view that real interest rates will rise is just that you should reduce overall portfolio duration. A 60:40 portfolio has an effective duration of ~40 years, where most of that duration comes from equities. Perhaps someone who believes this should target, say, a 20 year average duration instead (through whatever means seems least costly, which could mean holding fewer equities).
Perhaps equivalently, if real interest rates are going to rise, then all financial assets are currently overpriced, so maybe the effect would be holding fewer financial assets in general, and holding more cash / spending more.
My understanding is that an important part of the reasoning for a focus on avoiding bonds is that an increase in GDP growth driven by AI is clearly negative for bonds, but has an ambiguous effect on equities (plus commodities and real estate), so overall you should hold more equities (/growth assets) and less bonds. Is that right?
That makes sense to me, but then I still feel unsure about, having tilted towards equities, whether your overall exposure should be higher or lower.
(And tilting towards equities will increase the effective duration of your portfolio, making an increase in real interest rates worse for you all else equal.)
If we use the Merton share to estimate optimal exposure, that depends on the difference between the expected return of the asset and the expected real interest rate over your horizon. Perhaps with equities you might expect both returns and the interest rate to rise by 3%, which would cancel out, and you end up with the same exposure. But with bonds only the interest rate will rise, so you end up with much lower exposure (potentially negative exposure if your expected interest rate is higher than the expected returns). Is that basically the reasoning?
I want to suggest a bunch of caution against shorting bonds (or tips).
- The 30yr yield is 3.5%, so you make -3.5% per year from that.
- You earn the cash rate on the capital freed up from the shorts, which is 3.8% at Interactive Brokers.
- If you're right that the real interest rate will rise 2% over 20 years, and average duration is 20 years, then you make +40% over 20 years – roughly 2% per year.
- If you buy an ETF, maybe you lose 0.4% in fees.
So you end up with a +1.9% expected return per year.
This would have a third of the volatility of stocks, so you could leverage it several times, but then you'd need to pay the margin cost of ~4%.
So it doesn't seem like an amazing trade in terms of expected returns (if I've estimated this correctly).
It gets worse if you consider correlations – if we go into a recession, yields might fall 1-2%, which would mean you lose 20-40%, and you make those losses at the worst possible time – when everything else is going down.
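A rough tally of the components above, plus the recession scenario (all figures from the comment; the code and the linear duration approximation are my own sketch):

```python
# Expected annual return on the bond short, component by component.
components = {
    "short carry (pay 30yr yield)":           -0.035,
    "cash rate on freed capital":              0.038,
    "re-rating (2% rate rise / 20y horizon)":  0.02,
    "ETF fees":                               -0.004,
}
expected_return = sum(components.values())
print(round(expected_return, 3))  # 0.019 -> ~+1.9% per year

# Downside scenario: a recession where yields fall. Using the linear
# duration approximation (price change ~= -duration * yield change),
# a short position loses roughly duration * fall in yields.
duration = 20
yield_fall = 0.015                 # yields fall 1.5%, mid-range of 1-2%
loss_on_short = duration * yield_fall
print(round(loss_on_short, 2))     # 0.3 -> a ~30% loss on the short
```

The asymmetry is the worrying part: the expected gain is ~2% a year, while a single adverse yield move can cost 20-40%, and at the worst possible time.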
In addition, a neutral portfolio is something like 50% equities, 20% real assets and 30% bonds, so that should be our prior, and then you'd want to make a Bayesian update away from there based on your inside view.
In effect, in your portfolio optimizer, you could set the expected returns of long bonds to be say 1.5% rather than 3.5%. My guess is that would spit out having say 0-10% bonds rather than 30%, but not actively shorting them.
Tldr my guess is that most investors (if they believe the thesis) should just underweight bonds rather than actively short them.
I'd be very keen to hear more comments on this.
Am I being dumb or do you mean short TIPS? If real interest rates rise, TIPS go down.
I’ve been taking time off work and haven’t been looped into any of CEA’s discussions about media strategy, so here I’m speaking only for myself.
Clearly recent events have been a disaster in terms of media, which means we should reassess our strategies, so at a high level, I agree.
However, I think I mostly disagree with what I understand to be the more specific claims. I've tried to split these up as follows:
- CEA has a policy of minimising the total amount of media engagement.
- EA should have sought even more media coverage than it did in recent years.
- CEA’s policy is effectively EA’s policy.
- There should be a wider range of EA public figures, but CEA has prevented this.
- A significantly wider range of people in the community should speak to the media even if they don’t have approval or training from others in EA.
Here are some very brief comments on why I disagree or feel very unsure about these. These are big topics so it's hard to give much of my reasoning.
On 1) my understanding (not speaking for them) is that CEA had a policy of minimising engagement around 2017-2019 (along the lines of the fidelity post), but that from 2020 onwards they became significantly more pro media coverage. EA then received dramatically more media coverage in 2020-2022 than in 2017-2019. This uptick can be seen on 80k's media page: https://80000hours.org/about/media-coverage/ – and then there were also the campaigns for The Precipice, WWOTF etc.
That said, with regards to FTX they seem to have reverted to a policy of less engagement. What to do here just seems like a really hard call to me. You point out there's a high proportion of negative coverage, but I don't see a straightforward route to drowning that out with new positive coverage in the current environment. A more realistic option would be to do more to make the coverage less bad (and there is some of this happening) or to do more to tell our own narrative about what happened; but that could easily have the effect of EA being given greater prominence in the negative stories, or make whatever narrative the journalist has more credible, etc.
On 2) I feel pretty unsure even more media coverage would have been better.
One point is that it seems likely that EA is getting way more negative coverage now because it was fresh in journalists’ minds from Will’s summer media campaign. If there had been less coverage of EA over the summer, I expect there would be fewer negative articles about EA now. So overall I'm tempted to draw the lesson from this that less media is better.
Another point is that most surveys I’ve seen show that only ~5% of people come into EA via the media, but the media is how most people have heard of EA. This means it creates only a small fraction of community members, but perhaps the majority of haters. I think that suggests there’s still a lot to be said for a strategy that involves a small media footprint (i.e. maybe you get 95% of the recruitment but with only 20% of the haters).
On noisy fuckers, I think the EA Hotel is a bad look for EA (edit: I want to clarify that I don't have a problem with the EA Hotel as a project, I just think it's not a great media story). Although turning the journalist away at the door also worked out badly in hindsight, I think the bigger mistake was accepting the journalist's invite in the first place, so if anything this example updates me towards a stronger non-engagement policy.
(Also the fact that the journalist came even after you'd turned down the interview is pretty aggressive of him, suggesting the story could have been a lot worse.)
The way I agree with (2) is that it’s a big shame there wasn’t more positive coverage out about longtermism 1-2 years ahead of the launch of WWOTF, which would have meant Torres didn’t drive the discourse around it as much as they did.
More broadly, generating high-profile positive coverage is hard - pushing into more marginal opportunities can easily lead to stories that are more ambiguously positive, or simply have little reach, or require doing a ton of work, and there's always the question of opportunity costs.
On 3), I think you’re overstating CEA’s influence. The large orgs (GiveWell, OP, FP, TLYCS, ACE, 80k, Singer etc) all have their own outreach people and decide their own strategy. CEA provides advice to some of these orgs, but I wouldn’t say it’s the main driver of what’s happened in recent years. Far more funding comes from OP than CEA. CEA does not ‘appoint’ the representatives of EA.
On 4), I agree it would be great if there were more EA public figures, and having a small number of faces of EA is a big risk for the reasons you say. But my impression is that Will, CEA and 80k have all been trying to find and encourage such people. (If you’re reading this and interested in trying, please apply to 80k advising asap.)
The reason it hasn’t happened isn’t because CEA doesn’t want it, but rather because it’s a shitty and difficult job. Even if someone can match e.g. Toby in terms of communication skills and charm, very few people can tell the kind of story he can, and match the level of coverage he’s able to get. And results are heavy-tailed – it’s almost bound to be that a couple of people receive the majority of the coverage. (Unless those people step back, in which case the total amount of coverage will likely drop.)
So I would really like this to change, but I think it’s going to be a slow process.
On 5), I disagree for similar reasons that others have said in the thread. It’s pretty hard to make media go well and generate positive coverage. I think if lots more people tried without making it a major focus of theirs, the results are as likely to be bad as good.
Since media coverage affects the brand of the whole movement, I think it’s an area where it’s easy to be unilateralist, so it’s reasonable to adopt a rule of thumb like “if a significant number of people think this coverage would be bad, don’t do it.” I’m not sure how this should be implemented in practice, and maybe right now things are too centralised, but I think something like having a media team who can provide quick guidance on whether something seems good or bad seems like a reasonable way to go.
I’m sorry I don’t have more positive suggestions about how things should change going forward (and they probably should) but maybe this helps identify the best criticisms of the old approach.
Yes, maybe we should model it as $10bn Meta and $10bn other stuff, now worth $2.5bn and $7bn.
Something like that seems right.
Though I don't believe the Forbes figure for Dustin – it seems to assume that most of his wealth comes from his Meta stake, but he's said on Twitter that he'd sold a lot of his stake (and hopefully invested in stuff that's gone up). Last spring, Open Phil also said their assets were down 40% when Meta was down 60%, which could suggest Meta was about half of their assets at that point. So I expect the figure is too low.
Also seems like there might be some new donors in the last year.
The original rumour was that Alameda would have net negative assets if FTT coin collapsed. Though there's a chance it's actually OK.
Thank you for writing - seems like a good summary of what I've seen.
Also maybe of interest, I think the current EA portfolio is actually allocated pretty well in line with what this heuristic would imply:
I think the bigger issue might be that it's currently demoralising not to work on AI or meta. So I appreciate this post as an exploration of ways to make it more intuitive that everyone shouldn't work on AI.
Upvoted, though I was struck by this part of the appendix:
Appendix: Other reasons to diverge from argmax
In order of how much we endorse them:
- Value of information is usually incredibly high
- You don’t know the whole option set
- Moral uncertainty
- Concave altruism (i.e. Jensen’s inequality!)
- The optimiser’s curse
- Worldview diversification
- Principled risk aversion, as at GiveWell
- Strategic skulduggery
- Decrease variance of your portfolio for more impact compounding(?)
While I totally agree with the conclusion of the post (the community should have a portfolio of causes, and not invest everything in the top cause), I feel very unsure that a lot of these reasons are good ones for spreading out from the most promising cause.
Or if they do imply spreading out, they don't obviously justify the standard EA alternatives to AI Risk.
I noticed I felt like I was disagreeing with your reasons for not doing argmax throughout the post, and this list helped to explain why.
1. Starting with VOI, that assumes that you can get significant information about how good a cause is by having people work on it. In practice, a ton of uncertainty is about scale and neglectedness, and having people work on the cause doesn't tell you much about that. Global priorities research usually seems more useful.
VOI would also imply working on causes that might be top, but that we're very uncertain about. So, for example, it probably wouldn't imply that longtermist-interested people should work on global health or factory farming, but rather that they should spread out over lots of weirder small causes, like those listed here: https://80000hours.org/problem-profiles/#less-developed-areas
2. "You don't know the whole option set" sounds like a similar issue to VOI. It would imply trying to go and explore totally new areas, rather than working on familiar EA priorities.
3. Many approaches to moral uncertainty suggest that you factor in uncertainty in your choice of values, but then you just choose the best option with respect to those values. It doesn't obviously suggest supporting multiple causes.
4. Concave altruism. Personally I think there are increasing returns on the level of orgs, but I don't think there are significant increasing returns at the level of cause areas. (And that post is more about exploring the implications of concave altruism rather than making the case it actually applies to EA cause selection.)
5. Optimizer's curse. This seems like a reason to think your best guess isn't as good as you think, rather than to support multiple causes.
6. Worldview diversification. This isn't really an independent reason to spread out – it's just the name of Open Phil's approach to spreading out (which they believe for other reasons).
7. Risk aversion. I don't think we should be risk averse about utility, so agree with your low ranking of it.
8. Strategic skulduggery. This actually seems like one of the clearest reasons to spread out.
9. Decreased variance. I agree with you this is probably not a big factor.
You didn't add diminishing returns to your list, though I think you'd rank it near the top. I'd also agree it's a factor, though I think it's often oversold. E.g. if there are short-term bottlenecks in AI that create diminishing returns, it's likely the best response is to invest in career capital and wait for the bottlenecks to disappear, rather than to switch into a totally different cause. You also need big increases in resources to get enough diminishing returns to change the cause ranking, e.g. if you think AI safety is 10x as effective as pandemics at the margin, the AI safety community might need to grow roughly 10x in size relative to biosecurity before they'd equalise.
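One way to make the rough 10x-growth claim concrete (my own sketch, assuming logarithmic returns, so marginal impact scales as effectiveness divided by field size):

```python
# With log returns, the marginal impact of one extra worker in a field
# is (effectiveness of the field) / (current size of the field).
def marginal_impact(effectiveness: float, size: float) -> float:
    return effectiveness / size

ai_now   = marginal_impact(10, 1)   # AI safety: 10x effective, baseline size
bio_now  = marginal_impact(1, 1)    # biosecurity: baseline effectiveness & size
ai_later = marginal_impact(10, 10)  # after AI safety grows 10x relative to bio

print(ai_now / bio_now)    # 10.0 -> AI safety dominates at the margin
print(ai_later / bio_now)  # 1.0  -> marginal returns have equalised
```

Under slower-than-logarithmic diminishing returns, even more than 10x growth would be needed before the ranking flips, which is the sense in which the factor is often oversold.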
I tried to summarise what I think the good reasons for spreading out are here.
For a longtermist, I think those considerations would suggest a picture like:
- 50% into the top 1-3 issues
- 20% into the next couple of issues
- 20% into exploring a wide range of issues that might be top
- 10% into other popular issues
If I had to list a single biggest driver, it would be personal fit / idiosyncratic opportunities, which can easily produce orders of magnitude differences in what different people should focus on.
The question of how to factor in neartermism (or other alternatives to AI-focused longtermism) seems harder. It could easily imply still betting everything on AI, though putting some % of resources into neartermism in proportion to your credence in it also seems sensible.
Some more here about how worldview diversification can imply a wide range of allocations depending on how you apply it: https://twitter.com/ben_j_todd/status/1528409711170699264
Also see Brian Christian briefly suggesting a cause allocation rule a bit like this towards the end of 80k's interview with him.
We were discussing solutions to the explore-exploit problem, and one is that you allocate resources in proportion to your credence the option is best.
Isn't there a similar argument as with covid – the best case scenario is bounded at zero hours lost, while the bound on the worst case is very high (losing tens of thousands of hours), so increasing uncertainty will tend to drag up the mean?
The current forecasts try to account for a bunch of uncertainty, but we should also add in model uncertainty – and model uncertainty seems like it could be really high (for the reasons in Dan's comment). So this would suggest we should round up rather than down.
Does anyone have comments on how the huge degree of uncertainty should change our actions?
My intuition is that high uncertainty is an argument in favour of leaving town, since it seems worse to underestimate the risks (death) than to overestimate them (some inconvenience).
Or another idea might be that if the risk turns out to be lower than the best guess, you can just return to town. Whereas if it was higher, then you're dead. So leaving town is a more robust strategy.
But I could also imagine this is totally the wrong way of thinking about it. E.g. maybe if we're thinking about hours of EA work (instead of personal hours), we should be pretty risk neutral about them, and just go with expected hours lost vs. gained.
Thanks, this is helpful!
Just a heads up my latest estimate is here in footnote 15: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism#fn-15
I went for 300 technical researchers, though I'd say the estimate seems more likely to be too high than too low, so it seems like we're pretty close.
(My old Twitter thread was off the top of my head, and missing the last year of growth.)
Glad to see more thorough work on this question :)
I think of Shapley values as just one way of assigning credit in a way to optimise incentives, but from what I've seen, it's not obvious it's the best one. (In general, I haven't seen any principled way of assigning credit that always seems best.)
Good point that CFT is a more science-grounded alternative to IFS. Tim LeBon is a therapist in the UK who has seen community members, does remote sessions, and offers CFT.
This is a cool post. Though I wonder if there's some switching between longtermism as a theory of what matters vs. the idea that you should try to act over long timescales (as with a 200-year foundation).
You could be a longtermist in terms of what you think is of moral value, but believe the best way to benefit the future (instrumentally) is to 'make it to the next rung'. Indeed this seems like what Toby, Will etc. basically think.
Maybe then the relevant reference class is more something like 'people motivated to help future generations who did so by solving certain problems of the day', which seems like a very broad and maybe successful reference class - e.g. encompassing many scientists, activists etc.
PS: shouldn't the environmentalism, climate change and anti-nuclear movements be part of your reference class?
I agree the basic version of this objection doesn't work, but my understanding is there's a more sophisticated version here:
Where he talks about how the case for an individual being longtermist rests on a tiny probability of shifting the entire future.
I think the response to this might be that if we aggregate together the longtermist community, then collectively it's no longer Pascalian. But this feels a bit arbitrary.
Anyway, I partly wanted to post this paper here for further reading, and am partly interested in responses.
Short update on the situation: https://twitter.com/ben_j_todd/status/1561100678654672896
where you can dilute the philosophy more and more, and as you do so, EA becomes "contentless" in that it becomes closer to just "fund cool stuff no one else is really doing."
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or from the other direction, I think it's possible to move in the direction of taking utilitarianism more seriously, without having to accept all of the most wacky implications.
So you just keep going, performing the arbitrage. In other moral theories, which aren’t based on arbitrage, but perhaps rights, or duties (just to throw out an example), they don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions.
I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that's into maximising).
However, I don't think it's easy to avoid this by switching to rights or duties. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that's a good thing to do. And that if you can save 100 lives with the same cost, that's an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.
Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.
If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
I don't see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.
I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there's a good chance you'd end up estimating that GiveWell's charities save present lives more cheaply than donating to x-risk does. So the claim 'neartermists should donate to x-risk' seems likely wrong.
I agree with Carl the US govt should spend more on x-risk, even just to protect their own citizens.
I think the typical person is not a neartermist, so might well end up thinking x-risk is more cost-effective than GiveWell if they thought it through. Though it would depend a lot on what considerations you include or not.
From a pure messaging pov, I agree we should default to opening with "there might be an x-risk soon" rather than "there might be trillions of future generations", since it's the most important message and is more likely to be well-received. I see that as the strategy of The Precipice, or of pieces directly pitching AI x-risk. But I think it's also important to promote longtermism independently, and/or mention it as an additional reason to prioritise x-risk a few steps after opening with it.
Thanks, I made some edits!
This seems plausible to me but not obvious, in particular for AI risk the field seems pre-paradigmatic such that there aren't necessarily "low-hanging fruit" to be plucked; and it's unclear whether previous efforts besides field-building have even been net positive in total.
Agreed, though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
I just wanted to leave a very quick comment (sorry I'm not able to engage more deeply).
I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.
My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn't one. In particular, we can impose limits on utilitarianism, but they're arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren't standard (e.g. only a couple of percent of US philanthropy is spent on evidence-backed global health at best, and << 1% on ending factory farming + AI safety + ending pandemics).
More theoretically, I see EA as being about something like "maximising global wellbeing while respecting other values". This is different from regular sensible do-gooding in being more impartial, more wellbeing-focused and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is in being more skeptical, open to weird ideas, and trying harder to take a Bayesian, science-aligned approach to finding better ways to help. (Cf. the key values of EA.)
However, it's also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.
(Another way to understand EA is the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)
You could respond that there's arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.
But I think all moral theories imply crazy things ("poison") if taken to extremes (e.g. not lying to the axe murderer as a deontologist; deep ecologists who think we should end humanity to preserve the environment; people who hold the person-affecting view in population ethics and say there's nothing bad about creating a being whose life is only suffering).
So imposing some level of arbitrary cut-off on your moral views is unavoidable. The best we can do is think hard about the trade-offs between different useful moral positions, and try to come up with an overall course of action that's non-terrible on balance.
I agree thinking xrisk reduction is the top priority likely depends on caring significantly about future people (e.g. thinking the value of future generations is at least 10-100x the present).
A key issue I don't see discussed very much is diminishing returns to x-risk reduction. The first $1bn spent on xrisk reduction is (I'd guess) very cost-effective, but over the next few decades, it's likely that at least tens of billions will be spent on it, maybe hundreds. Additional donations only add at that margin, where the returns are probably 10-100x lower than the first billion. So a strict neartermist could easily think AMF is more cost-effective.
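To put rough numbers on that, here's a simple log-returns sketch (the model and the spending levels are my assumptions for illustration): if total impact grows like log(spending), the marginal return at spending level x is proportional to 1/x, so the margin after tens of billions is far worse than the margin after the first billion.

```python
# Under impact ~ log(spending), the marginal return at spending level x
# is proportional to 1/x, so the ratio of margins is just later/first.
first_margin = 1e9    # margin after the first $1bn (hypothetical)
later_margin = 50e9   # margin after ~$50bn cumulative spending (hypothetical)

ratio = (1 / first_margin) / (1 / later_margin)
print(f"Marginal returns at the later margin are {ratio:.0f}x lower")
```

That gives 50x, which sits inside the 10-100x range I guessed above.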
That said, I think it's fair to say it doesn't depend on something like "strong longtermism". Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.
I wrote about this in an 80k newsletter last autumn:
Carl Shulman on the common-sense case for existential risk work and its practical implications (#112)
Here’s the basic argument:
- Reducing existential risk by 1 percentage point would save the lives of 3.3 million Americans in expectation.
- The US government is typically willing to spend over $5 million to save a life.
- So, if the reduction can be achieved for under $16.5 trillion, it would pass a government cost-benefit analysis.
- If you can reduce existential risk by 1 percentage point for under $165 billion, the cost-benefit ratio would be over 100 — no longtermism or cosmopolitanism needed.
Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not.
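As a sanity check, the arithmetic behind those figures fits in a few lines. The 330M US population, 7.8B world population and ~$3,000-per-life GiveWell figure are the assumptions implied by the numbers, not stated in the original argument:

```python
risk_reduction = 0.01                  # 1 percentage point of existential risk

us_lives = 330e6 * risk_reduction      # ~3.3 million American lives in expectation
breakeven = us_lives * 5e6             # at $5M/life: $16.5 trillion break-even budget
ratio_100 = breakeven / 100            # $165 billion for a 100:1 benefit-cost ratio

world_lives = 7.8e9 * risk_reduction   # ~78 million lives worldwide in expectation
givewell_parity = world_lives * 3_000  # ~$234 billion to match ~$3,000 per life

print(f"US lives: {us_lives:,.0f}")
print(f"Break-even: ${breakeven / 1e12:.1f}T; 100:1 budget: ${ratio_100 / 1e9:.0f}B")
print(f"GiveWell-parity budget: ${givewell_parity / 1e9:.0f}B")
```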
Toby Ord, author of The Precipice, thinks there's a 16% chance of existential risk before 2100. Could we get that down to 15%, if we invested $234 billion?
I think yes. Less than $300 million is spent on the top priorities for reducing risk today each year, so $200 billion would be a massive expansion.
The issue is marginal returns, and where the margin will end up. While it might be possible to reduce existential risk by 1 percentage point now for $10 billion — saving lives 20 times more cheaply than GiveWell's top charities — reducing it by another percentage point might take $100 billion+, which would be under 2x as cost-effective as GiveWell top charities.
I don’t know how much is going to be spent on existential risk reduction over the coming decades, or how quickly returns will diminish. [Edit: But it seems plausible to me it'll be over $100bn and it'll be more expensive to reduce x-risk than these estimates.] Overall I think reducing existential risk is a competitor for the top issue even just considering the cost of saving the life of someone in the present generation, though it's not clear it's the top issue.
My bottom line is that you only need to put moderate weight on longtermism to make reducing existential risk seem like the top priority.
(Note: I made some edits to the above in response to Eli's comment.)
Hey, just a quick comment to say something like this line of objection is discussed in footnote 3.
I'm going to propose the following further edits:
- Compare with terrorism deaths over 50 years from 1970.
- Mention HIV/AIDS in the main text and some other tweaks.
- Add further discussion in the footnote.
I downvoted this post – I think it's unhelpful to write a polemic complaining that "X isn't being done" without taking basic steps to find out what's already being done, or first writing a post asking what's being done. For example:
- Next week a major PR campaign to promote Will's book will begin.
- Open Phil, CEA, 80k and others are already advised by a professional PR agency.
- There are many in-progress efforts to get EA more into the media (e.g. it's a key focus for Longview).
- There's a communications strategy being drafted.
There are huge differences in how well-grounded different EA claims are; we should be much more mindful of these differences. “Donations to relieve poverty go much further in the developing world than in the developed world” or “If you care about animal welfare, it probably makes more sense to focus on farmed animals than pets because there are so many more of them” are examples of extremely well-grounded claims. “There’s a >5% chance humanity goes extinct this century” or “AI and bio are the biggest existential risks” are claims with very different epistemic status, and should not be treated as similarly solid.
One thing I struggle with is switching back and forth between the two types of claims.
If we have a bunch of ideas that we think are really important and not widely appreciated ('type 1' claims), it's hard to trumpet those without giving off the vibe that you have everything figured out – I mean you're literally saying that other people could have 100x the impact if only they realised.
But then when you make type 2 claims, I'm not sure that emphasising how unsettled they are really 'undoes' the vibe created by the type 1 claims.
This is compounded by type 1 claims stated clearly being much easier to spread and remember, while hedging tends to be forgotten.
I'm sure there are ways to handle this way better, but I find it hard.