Posts

Despite billions of extra funding, small donors can still have a significant impact 2021-11-23T11:20:20.194Z
What does the growth of EA mean for our priorities and level of ambition? 2021-11-15T09:07:18.535Z
How are resources in EA allocated across issues? 2021-08-08T12:52:41.604Z
Is effective altruism growing? An update on the stock of funding vs. people 2021-07-29T11:47:26.747Z
[Link] 80,000 Hours Nov 2020 annual review 2021-05-15T16:28:20.975Z
How much does performance differ between people? 2021-03-25T22:56:32.660Z
Careers Questions Open Thread 2020-12-04T12:05:34.775Z
A new, cause-general career planning process 2020-12-03T11:35:38.121Z
What actually is the argument for effective altruism? 2020-09-26T20:32:10.504Z
Judgement as a key need in EA 2020-09-12T14:48:20.588Z
An argument for keeping open the option of earning to save 2020-08-31T15:09:42.865Z
More empirical data on 'value drift' 2020-08-29T11:44:42.855Z
Why I've come to think global priorities research is even more important than I thought 2020-08-15T13:34:36.423Z
New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z
80,000 Hours annual review released 2017-12-27T20:31:05.395Z
The case for reducing existential risk 2017-10-01T08:44:59.879Z
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z
Can one person make a difference? 2017-04-04T00:57:48.629Z
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z
If you want to disagree with effective altruism, you need to disagree one of these three claims 2016-09-25T15:01:28.753Z
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z
Why we need more meta 2015-09-26T22:40:43.933Z
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z
A new response to effective altruism 2015-09-12T04:25:43.242Z
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z
The career questions thread 2015-06-20T02:19:07.131Z
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z
Two interviews with Holden 2014-10-03T21:44:12.163Z
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z
Show me the harm 2013-08-06T04:00:52.000Z

Comments

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-30T12:29:54.318Z · EA · GW

Thanks, fixed. (https://twitter.com/ben_j_todd/status/1462882167667798021)

Comment by Benjamin_Todd on A Red-Team Against the Impact of Small Donations · 2021-11-26T00:00:25.837Z · EA · GW

It's hard to know – most valuations of the human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it's plausible that would be higher than the current income on the capital, but I think it could also be a lot less.

Comment by Benjamin_Todd on A Red-Team Against the Impact of Small Donations · 2021-11-25T15:00:37.979Z · EA · GW

Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.

First off, I agree with this:

I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

I say this in the introduction (and my EA Global talk). The point I'm trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do, while these other options might be in, say, the 99.5-99.9th percentile. I agree my post might not have made this sufficiently salient. It's really hard to correct one misperception without accidentally encouraging one in the opposite direction.

The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.

Before going into your specific points, I’ll try to describe an overall model of what happens when more funds come into the community, which will explain why more money means more but diminishing impact.

Very roughly, EA donors try to fund everything above a ‘bar’ of cost-effectiveness (i.e. value per dollar). Most donors (especially large ones) are reasonably committed to giving away a certain portion of their funds unless cost-effectiveness drops very low, which means that the bar is basically set by how impactful they expect the ‘final dollar’ they give away in the future to be. This means that if more money shows up, they reduce the bar in the long run (though capacity constraints may make this take a while). Additional funding is still impactful, but because the bar has been dropped, each dollar generates a little less value than before.

Here’s a bit more detail of a toy model. I’ll focus on the longtermist case since I think it’s harder to see what’s going on there.

Suppose longtermist donors have $10bn. Their aim might be to buy as much existential risk reduction over the coming decades as possible with that $10bn, for instance, to get as much progress as possible on the AI alignment problem.

Donations to things like the AI alignment problem have diminishing returns – the returns curve is probably roughly logarithmic. Maybe the first $1bn has a cost-effectiveness of 1000:1. This means that it generates 1000 units of value (e.g. utils, x-risk reduction) per $1 invested. The next $10bn returns 100:1, the next $100bn returns 10:1, the next $1,000bn is 2:1, and additional funding after that isn't cost-effective. (In reality, it's a smoothly declining curve.)

If longtermist donors currently have $10bn (say), then they can fund the entire first $1bn tranche and $9bn of the next one. This means their current funding bar is 100:1 – so they should aim to take any opportunities above this level.

Now suppose some smaller donors show up with $1m between them. Now in total there is $10.001bn available for longtermist causes. The additional $1m goes into the 100:1 tranche, and so has a cost-effectiveness of 100:1. This is a bit lower than the average cost-effectiveness of the first $10bn (which was 190:1), but is the same as marginal donations by the original donors and still very cost-effective.

Now instead suppose another mega-donor shows up with $10bn, so the donors have $20bn in total. They’re able to spend $1bn at 1000:1, then $10bn at 100:1 and then the remaining $9bn is spent on the 10:1 tranche. The additional $10bn had a cost-effectiveness of 19:1 on average. This is lower than the 190:1 of the first $10bn, but also still worth doing.
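(To put the toy model in code – a minimal sketch, where the tranche sizes and cost-effectiveness ratios are the made-up numbers above, not real estimates:)

```python
# Toy model: tranches of longtermist funding opportunities.
# Each tranche is (size in $bn, units of value per $1) – made-up numbers from above.
TRANCHES = [(1, 1000), (10, 100), (100, 10), (1000, 2)]

def allocate(total_bn):
    """Fill tranches from most to least cost-effective; return (total value, marginal ratio)."""
    value, remaining, marginal = 0, total_bn, 0
    for size, ratio in TRANCHES:
        spent = min(size, remaining)
        value += spent * ratio
        remaining -= spent
        if spent:
            marginal = ratio  # cost-effectiveness of the last dollar placed
    return value, marginal

v10, bar10 = allocate(10)
v20, bar20 = allocate(20)
print(v10 / 10, bar10)                       # 190.0 100 – first $10bn averages 190:1, bar at 100:1
print((v20 - v10) / 10, bar20)               # 19.0 10 – next $10bn averages 19:1, bar drops to 10:1
print(round(allocate(10.001)[0] - v10, 3))   # 0.1 – an extra $1m buys value at ~100:1
```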

How does this play out over time?

Suppose you have $10bn to give, and want to donate it over 10 years.

If we assume hinginess isn’t changing & ignore investment returns, then the simplest model is that you’ll want to donate about $1bn per year for 10 years.

The idea is that if the rate of good opportunities is roughly constant, and you’re trying to hit a particular bar of cost-effectiveness, then you’ll want to spread out your giving. (In reality you’ll give more in years where you find unusually good things, and vice versa.)

Now suppose a group of small donors show up who have $1bn between them. Then the ideal is that the community donates $1.1bn per year for 10 years – which requires dropping their bar (but only a little).

One way this could happen is for the small donors to give $100m per year for 10 years (‘topping up’). Another option is for the small donors to give $1bn in year 1 – then the correct strategy for the megadonor is to only give $100m in year 1 and give $1.1bn per year for the remaining 9 (‘partial funging’).
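(A quick sketch to show the two schedules come to the same thing from the community's point of view – figures in $100m units so the arithmetic stays exact:)

```python
# Community giving per year, in $100m units ($11bn over 10 years either way).
topping_up      = [10 + 1] * 10        # megadonor $1bn/yr + small donors $0.1bn/yr
partial_funging = [1 + 10] + [11] * 9  # year 1: megadonor $0.1bn + small donors' $1bn
assert sum(topping_up) == sum(partial_funging) == 110
assert topping_up == partial_funging   # identical $1.1bn/yr community schedule
```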

A big complication is that the set of opportunities isn’t fixed – we can discover new opportunities through research or create them via entrepreneurship. (This is what I mean by ‘grantmaking capacity and research’.)

It takes a long time to scale up a foundation, and longtermism as a whole is still tiny. This means there's a lot of scope to find or create better opportunities. So donors will probably want to give less at the start of the ten years and more towards the end, once these opportunities have been found (earning investment returns in the meantime).

Now I can use this model to respond to some of your specific points:

At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn't OpenPhil just fund it?

Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This doesn’t, however, mean donating to CEPI has no value. I think CEPI could make a meaningful contribution to biosecurity (and given my personal cause selection, is likely at least as effective as donating to GiveWell-recommended charities).

An opportunity can be below Open Phil’s current funding bar if Open Phil expects to find even better opportunities in the future (as more opportunities come along each year, and as they scale up their grantmaking capacity), but that doesn’t mean it wouldn’t be ‘worth funding’ if we had even more money. 

My point isn’t that people should donate to CEPI, and I haven’t thoroughly investigated it myself. It’s just meant as an illustration of how there are many more opportunities at lower levels of cost-effectiveness. I actually think both small donors and Open Phil can have an impact greater than funding CEPI right now.

(Of course, Open Phil could be wrong. Maybe they won’t discover better opportunities, or EA funding will grow faster than they expect, and their bar today should be lower. In this case, it will have been a mistake not to donate to CEPI now.)

In general, my default view for any EA cause is always going to be:

If this isn't funded by OpenPhil, why should I think it's a good idea?

If this is funded by OpenPhil, why should I contribute more money?

It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness down to negligible levels in all causes of interest. In fact, Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.

In the extreme version of this view, a donation to AMF doesn't really buy more bednets, it's essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.

You’re right that donations to AMF probably don’t buy more bednets, since AMF is not the marginal opportunity any more (I think – not sure about that). Rather, additional donations to global health get added to the margin of GiveWell donations over the long term, which Open Phil and GiveWell estimate has a cost-effectiveness of about 7x GiveDirectly, i.e. saving the life of a child under 5 for $4,500.

You’re also right that as additional funding comes in, the bar goes down, and that might induce some donors to stop giving altogether (e.g. maybe people are willing to donate above a certain level of cost-effectiveness, but not below it).

However, I think we’re a long way from that point. I expect Dustin Moskovitz would still donate almost all his money at GiveDirectly-levels of cost-effectiveness, and even just within global health, we’re able to hit levels at least 5x greater than that right now.

Raising everyone in the world above the extreme poverty line would cost perhaps $100bn per year (footnote 8 here), so we’re a long way from filling everything at a GiveDirectly level of cost-effectiveness – we’d need about 50x as much capital as now to do that, and that’s ignoring other cause areas.

There seem to be a few reasonable views:

1. OpenPhil will fund the most impactful things up to $Y/year.

2. OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.

3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.

I think view (2) is closest, but this part is incorrect:

What about the second view? In that case, you're not freeing up any money since OpenPhil just stops donating once it's filled the available capacity.

What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)

Why doesn’t Open Phil drop its bar already, especially given that they’re only spending ~1% of available capital per year? Ideally they’d be spending perhaps more like 5% of available capital per year. The reason this isn’t higher already is that growth in grantmaking capacity, research and the community will make it possible to find even more effective opportunities in the future. I expect Open Phil will scale up its grantmaking several-fold over the coming decade. It looks like this is already happening within neartermism.

One way to steelman your critique would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable. If the ratio became sufficiently extreme, additional capital would start to have relatively little value. However, I think we could actually deploy billions more without any additional people and still achieve reasonable cost-effectiveness. It’s just that if we had more labour (especially the types of labour that are most complementary with funding), the cost-effectiveness would be even higher.

Finally, on practical recommendations, I agree with you that small donors have the potential to beat Open Phil’s current funding bar by pursuing strategies similar to those you suggest (that’s what my section 3 covers – though I don’t agree that grants with PR issues are a key category). But simply joining Open Phil in funding important issues like AI safety and global health still does a lot of good.

In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.


Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-25T10:49:56.299Z · EA · GW

There isn't a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At 10-100x differences, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can't.

Sometimes people also suggest that e.g. existential risk reduction is 'astronomically' more effective than other causes (e.g. 10^10 times), but I don't agree with that for a lot of reasons.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T16:23:27.702Z · EA · GW

That's fair - the issue is there's a countervailing force in that OP might just fill 100% of their budget themselves if it seems valuable enough. My overall guess is that you probably get less than 1:1 leverage most of the time.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T12:39:02.008Z · EA · GW

I think this dynamic has sometimes applied in the past.

However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit.

There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.

Comment by Benjamin_Todd on AI Safety Needs Great Engineers · 2021-11-24T12:16:18.362Z · EA · GW

+1 to this!

If you're a software engineer considering transitioning into AI Safety, we have a guide on how to do it, with an attached podcast interview.

There are also many other ways software engineers can use their skills for direct impact, including in biosecurity and by transitioning into information security, building systems at EA orgs, or in various parts of govt.

To get more ideas, we have 180+ engineering positions on our job board.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T00:04:32.076Z · EA · GW

There are no sharp cut offs - just gradually diminishing returns.

An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cut off, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1000 to that will be about equally as effective as the last $1000 donated.

You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.

But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.

I think what's happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T22:12:36.720Z · EA · GW

Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T18:24:44.036Z · EA · GW

This is the problem with the idea of 'room for funding'. There is no single amount of funding a charity 'needs'. In reality there's just a diminishing return curve. Additional donations tend to have a little less impact, but this effect is very small when we're talking about donations that are small relative to the charity's budget (if there's only one charity you want to support), or small relative to the EA community as a whole if you take a community perspective.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T18:21:54.169Z · EA · GW

Makes sense - have added a note to the list.

Comment by Benjamin_Todd on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T18:20:43.541Z · EA · GW

I agree that's better - have changed it.

Comment by Benjamin_Todd on We need alternatives to Intro EA Fellowships · 2021-11-20T00:20:35.382Z · EA · GW

One quick comment is that people who are more self-motivated can easily progress via reading books, online content, podcasts etc. - and they don't need a fellowship at all.

Besides reading material, the main extra thing they need is ways to meet suitable people in the community – once they have some connections, they'll discuss the ideas naturally with those connections.

To get these people, you mainly need to:

1. Reach them with something interesting

2. Get them subscribed to something (e.g. newsletter, social media), so you can periodically remind them about it

3. Introduce them to some ways to learn more

4. Make some one-on-one introductions, or send them to EAG, or local group socials.

I suspect there are a bunch of other ways we could be doing the above, which, if done well, would find new people more cheaply than fellowships - especially the most talented and proactive people.

Comment by Benjamin_Todd on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-17T16:43:27.139Z · EA · GW

Applied Divinity Studies and Rossa O'Keeffe-O'Donovan both pointed out that talking about a single 'bar' can sometimes be misleading.

For instance, it can often be worth supporting a startup charity that has, say, a 10% chance of being above the bar, even if the expected value is that they're below the bar. This is because funding them provides value of information about their true effectiveness.

It can also be worth supporting organisations that are only a little above the bar but might be highly scalable, since that can create more total giving opportunities above the bar in the longer term.

As a quick summary, I think it's reasonable to say something like: "funders would like to eventually donate to as many opportunities above the bar as possible."

To that end, when assessing specific opportunities, they'll want to consider all of:

  • What's the probability it's above the bar?
  • If it turns out to be above the bar, how far above the bar is it?
  • If it turns out to be above the bar, how scalable is it?

In each case, the higher the better.

The eventual goal is something like "donating as many $ above the bar weighted by cost-effectiveness as possible".

Comment by Benjamin_Todd on A Model of Patient Spending and Movement Building · 2021-11-16T11:26:59.598Z · EA · GW

We should keep reminding ourselves that FTX's value could easily fall by 90% in a big bear market.

Comment by Benjamin_Todd on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-15T20:42:25.129Z · EA · GW

Normally with the podcasts we cut the filler words in the audio. This audio was unedited so ended up with more filler than normal. We've just done a round of edits to reduce the filler words.

Comment by Benjamin_Todd on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-15T20:37:25.506Z · EA · GW

I'm not a funder myself, so I don't have a strong take on this question.

I think the biggest consideration might just be how quickly they expect to find opportunities that are above the bar. This depends on research progress, plus how quickly the community is able to create new opportunities, plus how quickly they're able to grow their grantmaking capacity.

All the normal questions about optimal timing are also relevant (e.g. is now an unusually hingey time or not; the expected rate of investment returns).

The idea of waiting 10 years while you gradually build a team & do more research, and maybe double your money via investing, seems like a pretty reasonable strategy, unless you think now is very hingey. This is basically the strategy that Open Phil has taken so far. Though you could also argue that now *is* hingey, or that we're only deploying 1% of capital per year (which seems too low) – both of which would be reasons to deploy more rapidly.

Comment by Benjamin_Todd on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-15T17:01:00.480Z · EA · GW

Hey – it seems like I mis-spoke in the talk (or there's a typo in the transcript). I think it should be "current bar of funding with global development".

I think in general new charities need to offer some combination of (i) the potential for cost-effectiveness similar to or higher than AMF's, and (ii) scalability. Exactly how to weigh those two is a difficult question.

Comment by Benjamin_Todd on A Model of Patient Spending and Movement Building · 2021-11-11T15:08:34.809Z · EA · GW

Attempt to summarise the key points on Twitter:

Comment by Benjamin_Todd on A Model of Patient Spending and Movement Building · 2021-11-11T14:57:17.204Z · EA · GW

A hacky solution is just to bear in mind that 'movement building' often doesn't look like explicit recruitment – it can include many things that look a lot like object-level work.

We can then consider two questions:

  • What's the ideal fraction to invest in movement building?
  • What are the highest-return movement building efforts? (where that might look like object-level work)

This would ignore the object-level value produced by the movement-building efforts, but that would be fine unless the two are of comparable value.

For most interventions, either the movement-building effects or the object-level value is going to dominate, so we can just treat each intervention as one or the other.

Comment by Benjamin_Todd on Good news on climate change · 2021-11-08T14:06:11.682Z · EA · GW

That all makes sense, thank you!

Comment by Benjamin_Todd on Good news on climate change · 2021-11-06T23:03:02.690Z · EA · GW

I had a similar question. I've been reading some sources arguing for strong action on climate change recently, and they tend to emphasise tipping points.

My understanding is that the probability of tipping points is also accounted for in the estimates of equilibrium climate sensitivity, and is one of the bigger reasons why the 95% confidence interval is wide.

It also seems like if ultimately the best guess relationship is linear, then the expectation is that tipping points aren't decisive (or that negative feedbacks are just as likely as positive feedbacks).

Does that seem right?

Comment by Benjamin_Todd on Good news on climate change · 2021-11-06T22:59:35.774Z · EA · GW

This is a useful post and updated my estimate of the chance of lots of warming (>5 degrees) downwards.

Quick question: Do you have a rough sense of how the different emission scenarios translate into concentration of CO2 in the atmosphere?

The reason I ask is that I had thought there's a pretty good chance that concentrations double compared to preindustrial levels, which would suggest the long-term temperature rise will be roughly 2–5°C with 95% confidence – using the latest estimate of ECS.

However, the estimates in the table are mostly lower than this. Are they lower because:

  • Concentrations won't double on these emission scenarios?
  • The world will still be warming in 2100, and won't have yet reached equilibrium?
  • Something else I'm not understanding?

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-04T18:58:10.747Z · EA · GW

I don't mean to imply that, and I agree it probably doesn't make sense to think longtermist causes are top and then not donate to them. I was just using 10x GiveDirectly as an example of where the bar is within near termism. For longtermists, the equivalent is donating to the EA Long-term or Infrastructure Funds. Personally I'd donate to those over GiveWell-recommended charities. I've edited the post to clarify.

Comment by Benjamin_Todd on EA Forum engagement doubled in the last year · 2021-11-04T12:15:54.451Z · EA · GW

Would be useful to see the number of unique users over time, rather than just engagement hours.

Comment by Benjamin_Todd on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-02T22:14:41.697Z · EA · GW

Is the aim here to generate a bunch of PR for EA, or to actually convince Elon Musk to do more EA-aligned giving?

If the latter, I doubt trying to publicly pressure him into donating to an EA global poverty charity as part of a twitter debate is the best way to do it. (In fact, he already knows several EAs and has donated to EA orgs before.)

The 'get PR' angle (along the lines of what Fin is saying below) seems more promising – ideally we'd have more 'public intellectuals' focused on getting EA into the media & news cycle. The main reason we don't is that the best candidates are doing things that seem even higher value, but I would like to fix this.

Comment by Benjamin_Todd on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-02T22:05:36.666Z · EA · GW

I'd actually say there's a lot of work done on recruiting HNW donors - it's just mainly done via one-on-one meetings so not very visible.

That said, Open Philanthropy, Effective Giving, Founders Pledge, Longview & Generation Pledge all have it as part of their mission.

There would be even more work on it, but right now the bottleneck seems to be figuring out how to spend the money we already have (we're only deploying ~$400m p.a. out of $40bn+, i.e. under 1%). If we had a larger number of big, compelling opportunities, we could likely get more mega-donors interested.

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-02T18:39:51.905Z · EA · GW

It's super rough but I was thinking about jobs that college graduates take in general.

One line of thinking is based on a direct estimate:

  • Average college grad income ~$80k, so 20% donations = $16k per year
  • Mean global income is ~18k vs. GiveDirectly recipients at $500
  • So $1 to GiveDirectly creates value equivalent to increasing global income by ~$30
  • So that's ~$500k per year equivalent
  • My impression is very few jobs add this much to world income (e.g. here's one piece of reading about this). Maybe just people who are both highly paid and do something with a lot of positive externalities, like useful R&D or something like that.
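(Putting that arithmetic together – a rough sketch that assumes log utility in income, i.e. a marginal dollar is worth ~1/income; this is my gloss on how the ~$30 multiplier comes about:)

```python
# Back-of-envelope behind the bullets above, assuming log utility in income
# (so a marginal dollar is worth roughly 1/income).
mean_global_income = 18_000   # ~mean global income, per the bullet above
recipient_income   = 500      # ~income of a GiveDirectly recipient

multiplier = mean_global_income / recipient_income   # 36.0 – the comment rounds this to ~30
donations  = 0.20 * 80_000                           # $16k/yr: 20% of an ~$80k graduate salary

print(donations * multiplier)   # ~$580k/yr – the "~$500k per year equivalent" above
```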

Another line of thinking is that earning to give for GiveDirectly is a career path that has already been heavily selected for impact i.e. it contributes to global development, which is one of the most pressing global problems, it's supporting an intervention and org that's probably more effective than average within that cause, and it involves a strategy with some leverage (i.e. earning to give). So, we shouldn't expect it to be easy to find something a lot better.

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-02T16:29:16.423Z · EA · GW

I think that's roughly right - though some of the questions around timing donations get pretty complicated.

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-02T11:46:47.529Z · EA · GW

I was wrong about that. The next step for GiveWell would be to drop the bar a little bit (e.g. to 3-7x GiveDirectly), rather than drop all the way to GiveDirectly.

https://twitter.com/moskov/status/1455210000855359490

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-02T11:45:31.379Z · EA · GW

I agree there's a substantial signalling benefit.

People earning to give might well have a bigger impact via spreading EA than through their donations, but one of the best ways to spread EA is to lead by example. Making donations makes it clear you're serious about what you say.

Comment by Benjamin_Todd on What's the role of donations now that the EA movement is richer than ever? · 2021-11-01T18:15:36.248Z · EA · GW

Quick attempt to summarise:

  1. Earning to give is still impactful – probably more impactful than 98%+ of jobs. The current funding bar in e.g. global health, set by GiveWell, is about 10x GiveDirectly, so marginal donations still have about that level of impact. In longtermism, the equivalent bar is harder to quantify, but you can look at recent examples of what's been funded by the EA Infrastructure and Long Term Funds (the equivalent of GiveDirectly is something like green energy R&D or scaling up a big disease monitoring program). Small donors can probably achieve a similar or better level of effectiveness to GiveWell. Going forward, the crucial question is how quickly more opportunities at that level can be found. It might be possible to keep the bar at 10x; otherwise it's likely to drop a bit so that more funds can be deployed, e.g. to ~5x GiveDirectly. If that happens, the value of earning to give in absolute terms will go down 2x, but still be very high.

  2. Roles that help to deploy large amounts of funding above the current funding bar are more impactful than before (e.g. grantmaking, research, organisation building, entrepreneurship, movement building, plus supporting and feeder roles for these). This means their value has gone up relative to earning to give. (This is what we should expect, because funding and labour are partially complementary, so as the amount of funding increases, the value of labour increases.) This means that if you can find a good opportunity that's a good fit within one of these paths, it's seriously worth considering switching, and the case for switching is stronger than before.

  3. If you're earning to give and not going to switch, you could consider trying to add extra value by doing more active grantmaking (e.g. exploring new causes) rather than just topping up large pots. However, you still need to be able to do this better than e.g. the EA Funds, and it might still be more efficient just to earn extra money and delegate your grantmaking. Entering a donor lottery is another good option. It might also be better to focus on community building, or on gaining career capital that might make you happy to switch to direct work in the future (e.g. saving money).

Comment by Benjamin_Todd on Many Undergrads Should Take Light Courseloads · 2021-10-31T10:43:31.158Z · EA · GW

Thanks for the article, I've added a link to our page:

https://80000hours.org/articles/advice-for-undergraduates/

I'd be curious to hear thoughts on when you should take more courses. The main situations that came to mind for me were: (i) you're learning something you might actually use (e.g. programming), or (ii) you want to open up extra grad school options (e.g. taking extra math courses to open up economics).

Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-10-31T10:14:37.323Z · EA · GW

We can use my estimates of the dropout rate to estimate the equilibrium size of the movement.

If ~3% of the more engaged people drop out each year, and the flow of new members stays constant at ~300 per year (14% of 2300), then the number of highly engaged members will tend towards 10,000, which is 4-fold growth from today.

If the ratios stay the same, the number of people at the slightly broader definition of membership will tend towards 30,000.

This process will take ~25 years.

We'll hopefully be able to grow the flow of people entering in that time(!). If we double the rate of new people entering each year, then we'll double the long-term equilibrium size.

This also assumes that the dropout rate doesn't change, but large changes are likely depending on fashion. I've also seen evidence that dropout rates tend to decline the longer people have been involved, though on the other hand, eventually people will start to retire, increasing the dropout rate.
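(The stock-and-flow arithmetic behind these numbers, as an illustrative sketch:)

```python
# Equilibrium: the stock stops growing when inflow equals outflow.
dropout, inflow = 0.03, 300    # ~3%/yr dropout; ~300 new highly engaged members/yr
print(inflow / dropout)        # 10000.0 – ~4x today's ~2,300

# Convergence is slow: simulate the stock year by year.
n = 2300
for year in range(25):
    n += inflow - dropout * n  # net growth shrinks as n approaches equilibrium
print(round(n))                # ~6,400 after 25 years (well over half-way);
                               # doubling the inflow doubles the equilibrium
```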

Comment by Benjamin_Todd on List of EA funding opportunities · 2021-10-28T13:30:39.204Z · EA · GW

Should GiveWell's incubation grants be listed?
https://www.givewell.org/research/incubation-grants

(And there are other adjacent programmes like Evidence Action.)

What about Charity Entrepreneurship?

Comment by Benjamin_Todd on Future Funding/Talent/Capacity Constraints Matter, Too · 2021-10-25T12:31:32.932Z · EA · GW

I agree the future constraints are what mostly matter - I speculate about them in the original post.

I also agree earning to give is still useful - simply investing the money and donating when there is more capacity seems like a decent option; and medium donors can play a useful role as angel donors and people matching OP.

I think I'm less confident than you there will be convergence in the next 10yr. I think it's fairly likely that another 1-3 multibillionaires start significantly funding EA issues, which could mean the amount of funding continues to grow rapidly.

The number of people needs to grow significantly faster than the amount of funding to significantly decrease the absolute size of the gap.

I do expect some convergence eventually - it seems easier to 3x or 10x the number of people than the amount of capital - though it's not obvious.

I also agree there's a decent chance we discover a fairly effective longtermist money-pit.

Comment by Benjamin_Todd on An update in favor of trying to make tens of billions of dollars · 2021-10-24T20:22:59.841Z · EA · GW

Definitely - but that could make the point even stronger. If it's such an outlier, maybe that means it's become easier to do something like this, which is an update in favour of trying.

Comment by Benjamin_Todd on An update in favor of trying to make tens of billions of dollars · 2021-10-15T22:07:36.765Z · EA · GW

I agree Alameda seemed like an unusually good opportunity at the time.

Comment by Benjamin_Todd on An update in favor of trying to make tens of billions of dollars · 2021-10-15T14:26:25.463Z · EA · GW

It's definitely stronger evidence of that :) Though I've also noticed EAs advancing ahead of my expectations in other areas, like government.

Comment by Benjamin_Todd on An update in favor of trying to make tens of billions of dollars · 2021-10-15T10:55:23.938Z · EA · GW

I basically agree with the core point.

I think recent events have been an update in favour of people in effective altruism being super talented, which means we should aim at the very top.

I also think I agree with the arguments that lower risk-aversion means we should aim higher.

I wonder if these arguments especially bite at the ~$1bn+ project level, i.e. there are a lot of startup founders aiming to found a unicorn and make $100m for themselves, but there's very little personal benefit in going from, say, a $10bn company to a $100bn one.

My main push back is that I'm not sure people should be aiming to become billionaires, given the funding situation. I'd prefer to see people aim at the top in other paths e.g. winning a nobel prize, becoming president, founding a 'megaproject' non-profit etc.

(Though, it seems like the distribution of wealth might be one of the most heavy-tailed, so the rewards of aiming high there might be better than other paths, and EAs seem perhaps unusually good at earning money.)

PS here are two threads from Sam on this topic:

https://twitter.com/sbf_ftx/status/1337250686870831107

https://twitter.com/sbf_ftx/status/1337904412149182464

Comment by Benjamin_Todd on The Cost of Rejection · 2021-10-10T21:31:23.008Z · EA · GW

Yes, my figures were proportional rather than absolute.

I was mainly responding to:

  • EA organizations are growing slower or at pace with the overall EA population

This sounds like a proportional claim to me. My take is that they're growing at the same pace as, or faster than, the overall EA population.

It's true that if they both grow the same proportionally, the absolute number of people not able to get jobs will grow. It's less obvious to me something is 'going wrong' if they both grow at the same rate, though it's true that the bigger the community, the more important it is to think about culture.

Comment by Benjamin_Todd on The Cost of Rejection · 2021-10-09T19:38:08.314Z · EA · GW

Hey there, 

I agree with your main point that rejection is painful, has negative effects on the culture, and we should think about how to minimise it. 

But I wanted to add that in my post about whether EA is growing, I estimate that the number of people employed in EA orgs and the number of engaged EAs have both been growing at around 20% p.a. since 2015.

If anything, in the last 1-2 years I'd guess that the number of jobs has been growing faster than the total number of people.

There was a period maybe around 2015-2018 when the number of people was more likely to have been growing faster than the number of jobs, but I don't think that's happening right now.

Comment by Benjamin_Todd on Snapshot of a career choice 10 years ago · 2021-09-27T13:19:25.322Z · EA · GW

I agree more case studies would be great. Unfortunately I don't think producing them is going to be at the top of our stack at 80k for at least a year - right now we're focused on producing content aimed at attracting new readers, and we haven't generally found this type of material is the best for that.

If someone on the forum would like to write a case study of their own career (or interview someone else), though, I think that could be a pretty useful piece of content. We'd be interested in incorporating case studies like that into our planning process, which could really use more worked examples (and they could later develop into the practical kind of book you're outlining).

Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-09-22T12:21:33.311Z · EA · GW

I made a mistake in counting the number of committed community members.

I thought the Rethink estimate of the number of ~7,000 'active' members was for people who answered 4 or 5 out of 5 on the engagement scale in the EA survey, but actually it was for people who answered 3, 4 or 5.

The number of people who answered 4 or 5 is only ~2,300.

I've now added both figures to the post.

Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-09-20T20:30:16.054Z · EA · GW

Hi Aidan, the short answer is that global poverty seems the most funding-constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta – e.g. at the top of the 'implications' section I said:

The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.

That said, I still think global poverty is 'talent constrained' in the sense that:

  • If you can design something that's several-fold more cost-effective than GiveDirectly and moderately scalable, you have a good shot of getting a lot of funding. Global poverty is only highly funding constrained at the GiveDirectly level of cost-effectiveness. 
     
  • I think people can often have a greater impact on global poverty via research, working at top non-profits, advocacy, policy etc., rather than via earning to give.

Comment by Benjamin_Todd on How are resources in EA allocated across issues? · 2021-09-20T20:24:50.050Z · EA · GW

I agree that figure is really uncertain. Another issue is that the mean is driven by the tails.

For that reason, I mostly prefer to look at funding and the percentage of people separately, rather than the combined figure - though I thought I should provide the combined figure as well.

On the specifics:

I'd guess >20 people pursuing direct work could make >$10 million per year if they tried earning to give

That seems plausible, though just to be clear, the relevant reference class is the 7,000 most engaged EAs rather than the people currently doing (or about to start doing) direct work. I think that group might in expectation donate several-fold less than the narrower reference class.

Comment by Benjamin_Todd on Gifted $1 million. What to do? (Not hypothetical) · 2021-09-03T18:29:55.977Z · EA · GW

2) I agree you should consider your future income: the percentage should be calculated as a percentage of current assets + the NPV of future income.

1) I agree the approach of "work out if the community is above or below the optimum level of giving vs. saving, and then either donate everything, or save everything" makes a lot of sense for small donors. I'd feel pretty happy if someone wanted to try to do that. (Another factor is that it could be a good division of labour for some to specialise in giving soon and others to specialise in investing.)

But I feel a bit unsure about recommending it.

  • One issue is that it assumes you're pretty coordinated with the rest of the community, which might not be true, especially for new people.
  • It's very hard to know what the optimal level for the community as a whole should be, and if someone gets it wrong, then they might donate everything, which is irreversible. The advice of "give about 4% per year" seems less likely to be drastically wrong for them.
  • From a community pov, 'everyone donate what they think the ideal percentage should be' vs. 'donate all or nothing depending on your best guess at whether you're above or below the optimum' would both end up converging to the ideal percentage donated.
  • Donating more gradually means you can take advantage of learning (and as you say, radical worldview shifts).
  • It also lets you self-insure if your personal circumstances end up worse than you expected.

PS A third approach would be more bottom up: do a search for the best thing you can donate to right now, then compare it to your best investment, and try to think about which is better.

Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-30T15:29:22.203Z · EA · GW

The estimates are aiming to take account of the counterfactual i.e. when I say "that person generates value equivalent to extra donations of $1m per year to the movement", the $1m is accounting for the fact that the movement has the option to hire someone else.

In practice, most orgs are practicing threshold hiring, where if someone is clearly above the bar, they'll create a new role for them (which is what we should expect if there's a funding overhang).

Comment by Benjamin_Todd on Gifted $1 million. What to do? (Not hypothetical) · 2021-08-30T12:21:38.717Z · EA · GW

The advice below is only about where to donate it, but that's only one of the key questions.

It's also worth thinking hard about how much you want to give, and how to time your giving.

Even if you decide you want to use the entire amount for good, Phil Trammell's model of giving now vs. giving later suggests that you should donate x% of the capital per year, where x is mainly given by your discount rate. In general people think the community as a whole should donate 1-10% per year, so I'd suggest, as a starting point, you could pick a percentage in that range to donate each year.

(There are lots of complications. A few examples: (i) If you think the community is too tilted towards investment, and you've found an unusually good opportunity to donate now, then it could be justified to donate the entire amount fairly soon – though I think a good prior is to donate something close to the optimal percentage for the community as a whole. (ii) If you think AI is coming soon, or lots of big donors are going to enter EA in the coming years, you could argue for a lot of urgency. (iii) Likewise if you anticipate you might give up on donating in the future. (iv) On the other hand, you might radically change your views of which causes are best in the coming years – if you donate everything now, you can't take advantage of that learning. (v) There are also questions about psychology and practicalities, e.g. whether you'll still be willing to donate in the future, how much time you have for research, and taxes.)

There's also the question of how much to give to charity vs. spend on yourself. As one consideration: once you've donated the money, that's irreversible - if you run into financial hardship later, you could really regret it. Disbursing more slowly is not only probably optimal for impact, but it also preserves this option value.

I think a procedure that makes a lot of sense is to work out how much you'd like to live on per year, and how that might change over time. Then you can donate a significant fraction of any income you earn above that.

For instance, if you'd like to live on $50k, need to save $20k per year for retirement, get $70k of income from work, and earn $20k of income from your investments, then your 'excess' that year is $20k, so maybe you put that $20k into your 'altruism' fund, and then donate 1-10% per year of that. (While bearing in mind your altruism fund could be used as emergency funds.) I think this is a pretty good procedure from a tax pov too.

As a rough rule of thumb, I think you can plan to withdraw ~2% per year from money invested in the global index indefinitely, if you want your spending to keep pace with nominal GDP. The 2% is chosen to be roughly the dividend yield on global stocks, accounting for net buybacks, so this percentage will go up and down over time. But right now this means your $1m can provide an indefinite income of ~$20k (pre-tax).
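(To put the running example and the withdrawal rule of thumb in one place – a minimal sketch using the figures from this comment:)

```python
# The worked example above (all figures from the comment).
living, retirement_saving = 50_000, 20_000
work_income, investment_income = 70_000, 20_000

excess = (work_income + investment_income) - (living + retirement_saving)
print(excess)             # 20000 – this year's contribution to the 'altruism fund'

# Rough sustainable-withdrawal rule of thumb: ~2% of invested wealth per year
# (roughly the dividend yield on global stocks, net of buybacks).
wealth = 1_000_000
print(0.02 * wealth)      # 20000.0 – ~$20k/yr of indefinite pre-tax income
```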

The above was all framed in terms of income, but it probably makes sense to have some wealth goals as well, e.g. maybe you should put aside enough for retirement + runway + some income insurance, and then donate excess income after those goals are met.

Related: it's worth thinking about whether you can use the money to empower yourself to do something high-impact. E.g. you now have the freedom to take 1-2 years off work – should you consider retraining, studying, or trying to make a big career change? You might be able to think of ways to increase your long-term impact a lot, perhaps with higher rates of return than investing the money in the stockmarket.

Comment by Benjamin_Todd on How are resources in EA allocated across issues? · 2021-08-27T12:30:05.884Z · EA · GW

Seems reasonable.

Salaries are also lower than in AI.

You could make a similar argument about animal welfare, though, I think.