Is effective altruism growing? An update on the stock of funding vs. people

post by Benjamin_Todd · 2021-07-29T11:47:26.747Z · EA · GW · 98 comments

Contents

  Which growth metrics matter?
  How many funds are committed to effective altruism?
    Side note for EA investors
  How quickly have committed funds grown?
  How much funding is being deployed each year?
  How quickly have deployed funds grown?
  How many engaged community members are there?
  How quickly has the number of engaged community members grown?
  What about the skill level of the people involved?
  How much labour is being deployed?
  How quickly has deployed labour increased?
  Changes in the overhang: how quickly has funding grown compared to people?
  What about the future of the balance of funding vs. people?
    Financial investment returns
    New donors vs. new members
    Human capital investment returns
  Implications for career choice
    Which roles are most needed?
    How should individuals respond to these needs?
    What does this mean for earning to give?
    Other jobs
  What’s next

This is a cross-post from 80,000 Hours. See part 2 [EA · GW] on the allocation across cause areas.

In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang’, creating skill bottlenecks for the roles needed to deploy this funding.

A couple of years ago, I wondered if this trend was starting to reverse. There hadn’t been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up.

However, the spectacular asset returns of the last few years and the creation of FTX seem to have shifted the balance back towards funding. Now the funding overhang seems even larger, in both proportional and absolute terms, than it was in 2015.

In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time.

This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people.

This analysis also made me make a small update in favour of giving now vs. investing to give later.

Here’s a summary of what’s coming up:

To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven’t checked them with the relevant donors, so they might not endorse these estimates. However, I think they’re better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I’d be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article.

Which growth metrics matter?

Broadly, the future[1] impact of effective altruism depends on the total stock of:

  1. Funding
  2. Labour
  3. Ideas

(In economic growth models, this would be capital, labour, and productivity.)

You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour.

In this post, I’m going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether the expected return of our best way of using resources is going up or down, with some kind of adjustment for diminishing returns.)

For both funding and labour, we can look at the growth of the stock of that resource, or the growth of how much of that resource is deployed (i.e. spent on valuable projects) each year.

If we want to estimate how quickly effective altruism is growing, then I think the stock is most relevant, since that determines how many resources will be deployed in the long term.

It’s true there’s no point having a big stock of resources if it’s not being deployed, so we should also want to see growth in deployed resources. However, there can be good reasons to delay deployment while the stock is still growing, such as: (i) to gain better information about how to spend it, (ii) to build up grantmaking capacity, or (iii) to accumulate investment returns and career capital. So, if forced to choose between stock and deployment, I’d choose the stock as the best measure of growth.

Both the stock of resources and the amount deployed each year are also more important than ‘top-of-funnel’ metrics (like Google search volume for ‘effective altruism’) though we should watch the top-of-funnel metrics carefully – especially insofar as they correlate with future changes in the stock.

Finally, I think it’s very important to try to make an overall estimate of the total stock of resources. It’s possible to come up with a long list of EA growth metrics, but different metrics typically vary by one or two orders of magnitude in how important they are. Typically most growth is driven by one or two big sources, so many metrics can be stagnant or falling while the total resources available are exploding.

How many funds are committed to effective altruism?

Here are some very, very rough figures:

Chart of funds committed to effective altruism

I’ve tried to focus on funds that are already ‘committed’. I mostly haven’t adjusted them for the chance the person gives up on EA (except for GWWC), but I’ve also ignored the net present value of likely commitments from new future donors.

I’m aware of at least one new donor who is pushing ahead with plans to donate to longtermist issues at around $100 million per year, with perhaps a net present value in the tens of billions.

There are several other billionaires who seem sympathetic to EA (e.g. Reid Hoffman has donated to GPI) – these are ignored.

I’m also ignoring people like Bill Gates who donate to things that EAs would often endorse.

Bear in mind that these figures are extremely volatile – e.g. the value of FTX could easily fall 80% in a market crash, or if a competitor displaces it. Many of the stakes that the wealth comes from are also fairly illiquid – if the owners tried to sell a significant fraction, it could crash the price.

Side note for EA investors

As an individual EA who’s fairly value-aligned with other EA donors, you should invest in order to bring the overall EA portfolio in line with the ideal EA portfolio, and to prefer assets that are uncorrelated with other EAs. The current EA portfolio is highly tilted towards Facebook and FTX, and more broadly towards Ethereum/decentralised finance and big U.S. tech companies. This overweight is much more significant if we risk-weight rather than capital-weight. For instance, Ethereum and FTX equity are probably about 5x more volatile and risky than Facebook stock, and so account for the majority of our risk allocation. This means you should only hold assets highly correlated to these if you think this overweight should be increased even further. It seems likelier to me that most EAs should underweight these assets in order to diversify the portfolio.
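To illustrate why risk-weighting changes the picture so much, here’s a minimal sketch. The capital weights are made up for illustration; only the ‘roughly 5x more volatile’ ratio comes from the paragraph above, and the calculation ignores correlations between the assets:

```python
# Capital weight vs. a naive risk weight (capital weight x relative volatility).
# Hypothetical portfolio; the ~5x volatility ratio is from the text above.
holdings = {
    "Facebook stock": (0.60, 1.0),         # (capital weight, relative volatility)
    "FTX equity + Ethereum": (0.40, 5.0),  # assumed ~5x riskier
}

total_risk = sum(w * vol for w, vol in holdings.values())
for name, (w, vol) in holdings.items():
    share = w * vol / total_risk
    print(f"{name}: {w:.0%} of capital, {share:.0%} of risk")
```

Even as a minority of the capital, the riskier assets dominate the risk budget (about 77% in this made-up example), which is the sense in which the portfolio is ‘overweight’ them once you risk-weight rather than capital-weight.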

How quickly have committed funds grown?

The committed funds are dominated by Good Ventures and FTX, so to estimate total growth, we mainly need to estimate how much they’ve grown:

  1. In 2015, Forbes estimated Moskovitz’s net worth was $8 billion, so it has grown by 2.6x since then (about 20% per year). This is probably due to (i) Facebook stock price appreciation and (ii) the Asana IPO.

  2. FTX didn’t exist in 2015.

The impact of these two sources alone is growth of $33 billion since 2015. The new total of $41 billion from both is about five-fold growth compared to $8 billion in 2015.

The other sources make up a minority of the funds, but my rough estimate is they have grown around 2.5x since 2015.

For instance, GiveWell donors (excluding Open Phil) were giving $80 million per year in 2019, up from about $40 million in 2015. We don’t yet have the finalised figures for 2020, but it seems to be significantly higher – perhaps $120 million (see below).

Many of the sources have grown faster. As one example, in 2015 David Goldberg estimated the value of pledges made by Founders Pledge members at $64 million, compared to over $3 billion today.

In total, I’d guess that committed funds in 2015 were about $10 billion, so have grown 4.6x. This is 37% per year over five years.
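As a sanity check on the annualised figure, here’s the arithmetic as a quick sketch (the $10 billion base and 4.6x multiple are the rough estimates above):

```python
# Annualised growth implied by ~4.6x growth in committed funds over ~5 years.
funds_2015 = 10e9          # rough estimate of committed funds in 2015
growth_multiple = 4.6      # estimated growth from 2015 to 2021
years = 5

funds_2021 = funds_2015 * growth_multiple           # ~$46bn
annual_rate = growth_multiple ** (1 / years) - 1    # compound annual growth rate

print(f"Committed funds now: ${funds_2021 / 1e9:.0f}bn")
print(f"Annualised growth: {annual_rate:.0%}")      # ~36% per year, close to the ~37% above
```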

You might worry that most of this growth was concentrated in the earlier years, and that recent growth has been slow. My guess is that if anything the opposite is the case – growth has been concentrated in the last 1–2 years, in line with the recent boom in technology stocks and cryptocurrencies, and the creation of FTX.

The situation for each cause could be different. My impression is that the funds available for longtermist and meta grantmaking have grown faster than those for global health.

How much funding is being deployed each year?

In early 2020, I estimated that the EA community was deploying about $420 million per year.

Around 60% was through Open Philanthropy, 20% through other GiveWell donors, and 20% from everyone else. The Open Phil grants were based on an average of their giving 2017–2019, which helps to smooth out big multi-year grants.

$420 million per year would be just over 1% of committed capital.

Even those who are relatively into patient philanthropy think we should aim to donate over 1% per year, and at the 2020 EA Leaders Forum, the median estimate was that we should aim to donate 3% of capital per year.

So, if we’re now at 1% per year, that’s one argument that we should aim to tilt the balance towards giving now rather than investing to give later. In contrast, in early 2020, I thought that longtermist donors were giving more like 3% of capital per year, so it wasn’t obvious whether this was too low or too high. (This argument is fairly weak by itself – the quality of the particular opportunities and our ability to make good grants are also big factors.)

How quickly have deployed funds grown?

Since 60% comes via Open Philanthropy, we can mainly look to their grants.

Around 2014–2015, Open Philanthropy was only making grants of around $30 million per year, which rapidly grew to a new plateau of $200–$300 million by 2017.

At that point, they decided to hold deployed funds constant for several years, in order to evaluate their progress and build staff capacity before trying to scale further.

Dustin Moskovitz and Cari Tuna have said they want to donate everything within their lifetimes. This will require hitting around $1 billion deployed per year fairly soon, which I expect to happen. The Metaculus community agrees, forecasting donations of over $1 billion per year by 2030 in a median scenario.

Note that the grants are very lumpy year-to-year. One reason for this is that Open Philanthropy sometimes makes three- or five-year commitments which are all recorded in the first year. For instance, I think 2017 is unusually high due to the grants to OpenAI and malaria gene drives. Open Philanthropy has also reduced how much it donates to GiveWell-recommended charities since 2017 (see the next chart below), due to a decision to allocate more to its longtermist bucket, which distributes funds more slowly than global health. You’ll get a more accurate impression from taking a three-year (or five-year) moving average, which currently stands at ~$240 million. (The chart below is from Applied Divinity Studies [EA · GW].)

Open Phil by Year

FTX is new, so the founders have only been giving millions per year. Their money is not yet highly liquid, and they haven’t created a foundation, so we should expect it to remain low for a while, but eventually increase to hundreds of millions.

Money moved by GiveWell (excluding Open Philanthropy) hit a flat period from 2015–2017, but seems to have started growing again in 2018, by over 30% per year. I believe the 2020 figures are on track to be even better than 2019, but aren’t shown on this chart (or included in my deployed funds estimate).

Money moved by category

My impression from the data I’ve seen is that funds donated by GWWC members, EA Funds, Founders Pledge members, Longview Philanthropy, SFF, etc. have all grown significantly (i.e. more than doubling) in the last five years.

Overall, I estimate the community would have been deploying perhaps $160 million per year in 2015, so in total this has grown 2.6-fold, or 21% per year over five years – somewhat slower than the growth of the stock of committed capital, but roughly in line with the number of people.

Looking forward, my best guess is that this rate of growth continues for the next 5-10 years.

How many engaged community members are there?

The best estimate I’m aware of is by Rethink Priorities [EA · GW] using data from the 2019 EA Survey:

We estimate there are around 2,315 highly engaged EAs and 6,500 (90% CI: 4,700–10,000) active EAs in the community overall.

‘Highly engaged’ is defined as those who answered 5/5 for engagement in the survey, and ‘active’ is those who answered 4 or 5.

This is a fairly high bar for engagement — e.g. it’s someone who would seriously consider changing career to have a greater impact (many significant plan changes we track at 80,000 Hours only report ‘4’ on this scale).

In 2020, I estimate about 14% net growth, bringing the total number of active EAs to 7,400.

You can see some more statistics on what these people are like in the EA Survey [EA · GW].

If we were to consider the number of people interested in effective altruism, it would be much higher. For instance, at 80,000 Hours we have about 150,000 people on our newsletter, and over 100,000 people have bought a copy of Doing Good Better.

How quickly has the number of engaged community members grown?

Unfortunately, it’s still very hard to estimate the growth rate in the number of committed people, since the data are plagued with selection effects and lag effects.

For instance, the data I’ve seen shows that it often takes several years for someone to go from having first heard about EA to filling out the EA Survey, and from there to reporting themselves as ‘4’ or ‘5’ for engagement. This means that many of the new members from the last few years are not yet identified – so most ways of measuring this growth will undercount it.

In mid-2020, I made six estimates of the annual growth rate in committed members, which fell in the range of 0–30% over the previous 1–2 years. My central estimate was around 20% (roughly +900 people per year at ‘4’ or ‘5’ on the survey’s engagement scale).

More recently, we were able to re-use the method Rethink Priorities used in the analysis above, but with data from the 2020 EA Survey rather than 2019. This analysis found [EA(p) · GW(p)] the total number of engaged EAs has grown about 14% in the last year, so would now be 7,400.

This is fairly uncertain, and there’s a reasonable chance the number of people didn’t grow in 2020.

The percentage growth rate would have been a lot higher in 2015–2017, since the base of members was much smaller, and I also think those were unusually good years for getting new people into EA.

Around 2017, there was a shift in strategy from reaching new people to getting those who were already interested into high-impact jobs. This meant that ‘top-line’ metrics — such as web reach and media impressions — slowed down.

My take is that this shift in strategy was at least partially successful, insofar as the number of committed EAs and their influence has continued to grow, despite flattish top-line metrics. (Though there’s a reasonable chance EA could have grown even faster if the top-line growth had continued.)

Going forward, we’ll eventually need to get the top-of-funnel metrics growing again, or the stock of ‘medium’ engaged people will run out, and the number of ‘highly’ engaged people will stop growing. It seems like several groups are prioritising outreach to new people more highly going forward.

What about the skill level of the people involved?

This is a big uncertainty because one influential member (e.g. in a senior position at the White House) can achieve what it might take thousands of others to achieve.

My sense is that the typical influence and skill level of members has grown a lot, partly just because people have grown older and advanced their careers. For example, there are now a number of interested people in senior government positions in the U.K. and U.S. who weren’t there in 2015. The average age of community members is several years higher.

In terms of the level of ‘talent’ of new members, we don’t have great data. Impressions seem to be split between the level being similar to the past and being a bit lower. So if we averaged the two, in expectation there would be a small decrease.

How much labour is being deployed?

It’s hard to estimate how much labour is being ‘deployed’. People can at most deploy one year of labour per year, but a proper estimate should also account for factors like the fraction of people focused on building career capital rather than on immediate impact.

Overall, my guess is that we’re only deploying 1–2% of the net present value of the labour of the current membership. This could be an argument for shifting the balance a bit more towards immediate impact rather than career capital – though this is a really complicated question. Young people often have great opportunities to build career capital, and if those opportunities increase their lifetime impact, they should take them, no matter what others in the community are doing.

How quickly has deployed labour increased?

If the percentage of people focused on career capital vs. impact is similar over time, then it should track the stock – so we can try to track that going forward.

To claw together some rough data, according to the 2019 EA Survey, about 260 people said they’re working at ‘an EA org’ (which includes object-level charities). With an estimated 40% response rate, that would imply 650 people in total, which seems a lot higher than what I would have guessed in 2015.

My impression is that most of the most central EA orgs have also grown headcount ~20% per year (e.g. CEA, MIRI, 80K), roughly doubling since 2015, and keeping pace with growth in the number of people.

Working at an EA org is only one option, and a better estimate would aim to track the number of people ‘deployed’ in research, policy, earning to give, etc. as well.

Changes in the overhang: how quickly has funding grown compared to people?

In 2015, I argued there was a funding overhang within meta and longtermist causes. (Though note it’s less obvious there’s a funding overhang within global health, and to a lesser extent animal welfare.) How has this likely evolved?

During 2017–2019, I thought the number of people might have been catching up, but in 2020, it seemed like the growth of each had been roughly similar. A similar rate of proportional growth would mean the absolute size of the overhang was increasing.

As of July 2021 and the latest FTX deal, I now think the amount of funding has grown faster than people, making the growth of the overhang even larger.

Here are some semi-made-up numbers to illustrate the idea:

Suppose that, in 2015:

  1. There was $10 billion of committed capital
  2. There were around 2,500 engaged community members
  3. 30% of them wanted to work at EA organisations, at an average cost of $100,000 per person per year

In that case, it would take $75 million per year to employ them. But $10 billion can generate perpetual income of $200 million,[2] so the overhang is $125 million per year.

Suppose that in 2021:

  1. There is $50 billion of committed capital
  2. There are around 7,500 engaged community members
  3. The same 30% and $100,000 figures apply

Then, it would take $225 million to employ 30% of them, but you can generate $1,000 million of income with that capital, so the overhang is $775 million per year.
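The illustration above can be written out as a quick calculation (a sketch; the 2% perpetual payout rate, the ~$100k cost per person, and the member counts are the semi-made-up figures implied by the numbers in the text):

```python
def overhang(capital, members, fraction_employed, cost_per_person, payout_rate=0.02):
    """Annual overhang: sustainable income from capital minus the cost of
    employing the members who want to work at EA organisations."""
    income = capital * payout_rate
    employment_cost = members * fraction_employed * cost_per_person
    return income - employment_cost

# 2015 illustration: $10bn capital, ~2,500 members, 30% employed at ~$100k each
print(overhang(10e9, 2_500, 0.30, 100_000))  # ~$125m per year

# 2021 illustration: $50bn capital, ~7,500 members, same 30% and $100k
print(overhang(50e9, 7_500, 0.30, 100_000))  # ~$775m per year
```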

Another way to quantify the overhang is to estimate the financial value of the labour and compare it to the committed funding. My rough estimate is that the labour is worth $50k–$500k per person per year. If, after accounting for drop-out, the average career has 20 years remaining, that would be $1m–$10m per person. (If this seems high, note that it’s driven largely by outliers.) If there are 7,400 people, that would be $7.4bn–$74bn in total (central estimate around $20bn). In comparison, I estimated there is almost $50bn of committed capital, so the value of the labour is most likely lower. In contrast, in the economy as a whole, human capital is normally thought to be worth more than physical capital, so the situation in effective altruism is most likely the reverse of the norm.
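That estimate can be reproduced directly (a sketch using the figures from the paragraph above; the geometric mean is one reasonable way to pick a central value for a range spanning an order of magnitude):

```python
import math

members = 7_400
value_per_person_year = (50_000, 500_000)  # rough value of one person's labour per year
career_years = 20                          # average remaining career after drop-out

low, high = (v * career_years * members for v in value_per_person_year)
central = math.sqrt(low * high)            # geometric mean of the range

print(f"Total labour value: ${low / 1e9:.1f}bn to ${high / 1e9:.0f}bn")
print(f"Central estimate: ~${central / 1e9:.0f}bn")  # ~$23bn, near the ~$20bn above
```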

Note that if there’s an overhang, the money can be invested to deploy later, or spent employing people outside of the community (e.g. funding academic research), so it’s not that the money is wasted – it’s more that we’ll end up missing some especially great opportunities that could have been taken otherwise. These will especially be opportunities that are best tackled by people who deeply share the EA mindset. I’ll talk more about the implications later.

What about the future of the balance of funding vs. people?

Financial investment returns

A major driver of the stock of capital will be the investment returns of Facebook, Asana, FTX, and Ethereum.

If there’s a crash in tech stocks and cryptocurrencies (which seems fairly likely in the short term), the balance could move somewhat back towards people.

In the longer term, I’ll leave it to the reader to forecast the future returns of a portfolio like the above.

Personally, I feel uneasy projecting that U.S. tech stocks will return more than 1–5% per year, due to their high valuations. I expect cryptocurrencies will return more, but with much higher risk.

New donors vs. new members

If we assume that effective altruism will keep growing, and won’t collapse, and that key existing donors will remain supporters, does it seem harder to grow the number of donors or the number of members who aren’t donors?

As noted above, I think it’s more likely than not that another $100 million per year ($20 billion NPV) donor enters the community within the coming years. This would be roughly 40% growth in the total stock, compared to 15% per year growth in the number of people, which would shift the balance even more towards funding.

The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.

After this, I expect it’ll become harder to grow the pool of committed funds at current rates.

Going from $60 billion to $120 billion would require convincing someone worth over $100 billion, like Jeff Bezos, to give a large fraction of their net worth to EA-aligned causes, or might require convincing around 10 ‘regular’ billionaires.

That said, it seems possible. For instance, the total pledged by all members of the Giving Pledge is around $600 billion, so if 20% of them were into EA, that would be $120 billion; four-fold growth from today.

The total U.S. philanthropic sector is $400 billion per year, so if 1% of that was EA aligned, that would be $4 billion per year, which is 10-fold growth from today, and three-fold growth from where I expect us to be in 5–10 years.

Expanding the number of committed community members from around 5,000 to around 50,000 seems somewhat more achievable given enough time.

If it seems easier to grow the number of people 10-fold than to grow the committed funds 10-fold, then I expect the size of the overhang will eventually decrease, but this could easily take 20 years, and I expect the overhang is going to be with us for at least the next five years.

A big uncertainty here is what fraction of people will ever be interested in EA in the long term – it’s possible its appeal is very narrow, but happens to include an unusually large fraction of wealthy people. In that case, the overhang could persist much longer.

Human capital investment returns

One other complicating factor is that, as noted, people’s productivity tends to increase with age, and many community members are focused on growing their career capital.

For instance, if someone goes from a masters student to a senior government official, then their influence has maybe increased by a factor of 1,000. This could enable the community to achieve far more, and to deploy far more funds, even if the number of people doesn’t grow that much.

Implications for career choice

Here are some very rough thoughts on what this might mean for people who want high-impact careers and feel aligned with the current effective altruism community. I’m going to focus on longtermist and meta causes, since they’re what I know the best and where the biggest overhang exists.

Which roles are most needed?

The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.[3]

We could break down some of the key leadership positions needed to deploy these funds as follows:

  1. Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
  2. EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
  3. Grantmakers able to evaluate these projects

These correspond to bottlenecks in ideas, management, and vetting, respectively.

Given that many of the most promising projects involve research and policy, I’d say there’s a special need to have these skills within those sectors, as well as within the causes longtermists are most focused on, such as AI and biosecurity (e.g. someone who can lead an AI research lab; the kind of person who can found CSET). That said, I hope that longtermists expand into a wider range of causes, and there are opportunities in other sectors too.

Putting the funding overhang aside, the skill sets listed above would still be valuable: as an illustration, these skills also seem very valuable within global health – and typically more valuable than earning to give – though there’s less obviously an overhang there.

But the presence of the overhang makes them even more valuable. Finding an extra grantmaker or entrepreneur can easily unlock millions of dollars of grants that would otherwise be left invested.[4]

I’ve thought these roles were some of the most needed in the community since 2015, and now that the overhang seems even bigger — and seems likely to remain big for 10 years — I think they’re even more valuable than I did back then.

Personally, if given the choice between finding an extra person who’s a good fit for one of these roles, or finding someone donating $X million per year, then for the two options to seem similarly valuable, X would typically need to be over three, and often over 10 (though this hugely depends on fit and circumstances).

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million per year (and the value of information will be very high if you can determine your fit within a couple of years).

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. For each person in a leadership role, there’s typically a need for at least several people in the more junior versions of these roles or supporting positions — e.g. research assistants, operations specialists, marketers, ML engineers, people executing on whatever projects are being done, etc.

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year (again, with huge variance depending on fit).

The bottleneck for supporting roles has, however, been a bit smaller than you might expect, because the number of these roles was limited by the number of people in leadership positions able to create these positions.

I think for the more junior and supporting roles there was also a vetting bottleneck [EA · GW]. I’m unsure if there were infrastructure or coordination bottlenecks beyond the factors mentioned, but it seems plausible.

How should individuals respond to these needs?

If you might be able to help fill one of these key bottlenecks, there’s a good chance it’ll be the highest impact thing you can do.

Ideally, you can shoot for the tail outcome of a leadership role within one of these categories (e.g. becoming a grantmaker, manager, or someone who finds a new cause area). Aiming for a leadership position also sets you up to go into a highly valuable supporting or more junior equivalent role (e.g. being a researcher for a grantmaker, or being an operations specialist working under a manager).

Your next step will likely involve trying to gain career capital that will accelerate you in this path. Depending on what career capital you focus on, there could be many other strong options you could switch to otherwise (e.g. government jobs).

Be aware that the leadership-style roles are very challenging – besides being smart and hardworking, you need to be self-motivated, independently minded, and maybe creative. They also typically require deep knowledge of effective altruism, and a lot of trust from — and a good reputation within — the community. It’s difficult to become trusted with millions of dollars or a team of tens of people.

So, no one should assume they’ll succeed, and everyone should have a backup plan.

The ‘supporting’ roles are also more challenging than you might expect. Besides also requiring a significant amount of skill and trust (though less than the leadership roles), there’s a lack of mentorship capacity, and their creation is limited by the number of people in leadership roles.

On our job board, I made a quick count of 40 roles like these within our top recommended problem areas posted within the last two months, so there are perhaps 240 per year.

This compares to about 7,400 engaged community members, of which perhaps about 1,000 are early career and looking to start these kinds of jobs.

So there are a significant number of opportunities, and given their impact I think many people should pursue them, but it’s important to know there’s a reasonable chance it doesn’t work out.

If you’re unsure of your chances of eventually being able to land a supporting role, then build career capital towards those roles, but focus on ways of gaining career capital that also take you towards 1–2 other longer-term roles you find attractive.

I want to be honest about the challenges of these roles so that people know what they’re in for, but I’m also very concerned about being too discouraging.

We meet many people who are under-confident in their abilities, and especially their potential to grow over the years.

I think it’s generally better to aim a bit high than too low. If you succeed, you’ll have a big impact. If it doesn’t work out, you can switch to your plan B instead.

Trying to fill the most pressing skill bottlenecks in the world’s most pressing problems is not easy, and I respect anyone who tries.

What does this mean for earning to give?

The success of FTX is arguably a huge vindication of the idea of earning to give, and so in that sense it’s a positive update.

On balance, however, I think the increase in funding compared to people is an update against the value of earning to give at the margin.

This doesn’t mean earning to give has no value:

  1. Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.
  2. You can save money, invest it, and spend when the funding overhang has decreased, or in order to practice patient philanthropy more generally.
  3. You could support causes that seem more funding constrained, like global health.

But I do think the relative value of earning to give has fallen over time, as the overhang has increased.

Overall, I would encourage people early in their career to very seriously consider options besides earning to give first.

If you’re already earning to give — and especially if you don’t seem to have a chance of tail outcomes (e.g. startup exit) — I’d encourage you to seriously consider whether you could switch.

That said, there are definitely people for whom earning to give remains their overall top option, especially if they have personal constraints, can’t find another role that’s a good fit, have unusually high earnings, or are learning a lot from their job (and might switch out later).

Other jobs

I’ve focused on earning to give and jobs working ‘directly’ to deploy EA funds, but I definitely don’t want to give the impression these are the only impactful jobs.

I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital.

For instance, it would be possible for the community to have an absolutely massive impact via improving government policy around existential risks, and this doesn’t require anyone to get a job ‘in EA’.

I don’t discuss them more here because they don’t require EA funding to pursue, so their expected impact isn’t especially affected by the size of the funding overhang. I’d still encourage readers to consider them.

Read part 2: The allocation of resources across cause areas [EA · GW].

What’s next

If you think you might be able to help deal with one of the key bottlenecks mentioned, or are interested in switching out of earning to give, we’ve recently ended the waitlist for our one-on-one advice, and would encourage you to apply.

You might also be interested in:

Stay up to date on new research like this by following me on Twitter.


  1. Looking backwards, the main thing we care about is actual impact. Personally I think the EA community has had a lot of success doing things like turning AI safety into an accepted field, funding malaria prevention, scaling up cage-free campaigns etc., though this is a matter of judgement. ↩︎

  2. I assume a perpetual withdrawal rate of 2%. Studies of the market in the past often find that it's possible to withdraw 2–3% from a portfolio that’s mainly equities and not decrease your capital in real terms. 2% is also in line with the current dividend yield of global stocks – the capital should roughly track global nominal GDP, so the 2% represents what can be withdrawn each year. This 2% figure could be very conservative – if EA donors can earn higher returns than say an 80% equity portfolio, as they have historically, then it’ll be possible to withdraw a lot more. This would make the funding overhang a lot larger. I’ve also compared a perpetual withdrawal rate with the current stock of people, but the typical career of a member will only last 30 years, so it might have been better to assume we also want to spend down the capital over 30 years. In that case, you could likely withdraw 3–4% (a typical retirement safe withdrawal rate for a 30-year retirement), which would up to double the available income. ↩︎
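The footnote's withdrawal arithmetic can be made concrete with a quick sketch. This uses the ~$46bn estimate of committed funds discussed elsewhere in this post and the footnote's own 2% and 3–4% rates; the figures are rough guesses, not a model.

```python
def annual_withdrawal(capital: float, rate: float) -> float:
    """Sustainable annual spending at a given withdrawal rate."""
    return capital * rate

committed = 46e9  # ~$46bn committed funds (rough estimate from this post)

# 2%: roughly preserves capital in real terms (perpetual)
perpetual = annual_withdrawal(committed, 0.02)

# 3.5%: midpoint of the 3-4% range for a ~30-year spend-down
spend_down = annual_withdrawal(committed, 0.035)

print(f"Perpetual (2%):    ${perpetual / 1e9:.2f}bn per year")
print(f"Spend-down (3.5%): ${spend_down / 1e9:.2f}bn per year")
```

As the footnote says, moving from a perpetual rate to a 30-year spend-down roughly doubles the available annual income.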

  3. For instance, there’s a pre-existing community doing conventional biosecurity, which made it easier to turn money into progress on biosecurity despite a lack of EA community members working in the area. In contrast, there was no existing AI safety community, which has meant it has taken longer to deploy funds there. ↩︎

  4. The more reason for urgency, the bigger these bottlenecks. If you’d like to see more patient philanthropy, then it might be fine just to keep all the funding invested to spend later. ↩︎

98 comments

Comments sorted by top scores.

comment by Jan-WillemvanPutten · 2021-08-02T08:42:30.817Z · EA(p) · GW(p)

One comment regarding:

But the presence of the overhang makes them even more valuable. Finding an extra grantmaker or entrepreneur can easily unlock millions of dollars of grants that would otherwise be left invested.

If we really think that this is the case for EA / charity entrepreneurs I think we should consider the following:

We spend too little effort on recruiting entrepreneurial types into the movement. As someone relatively new to the movement (coming in as an entrepreneur), I think we should foster a more entrepreneurial culture than we currently do. I know some fellow entrepreneurs who dropped out of / didn’t enter the movement because they felt EA is an intellectual endeavour with too little focus on actually doing something.

Adjacent to this argument I think that we should spend more resources on upskilling entrepreneurial EAs. Charity Entrepreneurship is doing a great job with their incubation program, but their current capacity is limited and there is definitely room for growth given the large interest in the program. In addition to this we should also encourage cheap tests of EA entrepreneurship within national / local chapters. Currently the focus is mainly on community building and running fellowships.

Entrepreneurial projects at local chapters are currently considered nice-to-haves and a way to attract people to the community. But if Ben’s statement is true we should consider national groups as the breeding ground for entrepreneurs. They are the first part of the EA entrepreneur pipeline, with a next possible step being CE’s incubation program or starting a charity right away. In this model, local and national group leaders should support these aspiring entrepreneurs with advice and connections to other people in the movement.

Replies from: Benjamin_Todd, oagr, Ben_West, Benjamin_Todd, Manuel_Allgaier
comment by Benjamin_Todd · 2021-08-02T21:44:42.648Z · EA(p) · GW(p)

I agree - people able to run big EA projects seem like one of our key bottlenecks right now. That was one of my motivations for writing this post, and this mini profile.

I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.

I also agree it seems plausible that the culture of the movement is a bit biased against entrepreneurship, so we're not attracting as many people with this skillset as we could given our current reach. I'd be keen to do more celebrating of people who have tried to start new things.

This said, it might be even more pressing simply to reach 2x as many people, and then we'll find a bunch of founders among them.

I'd also want to be cautious about using the term 'entrepreneur' to describe what we're looking for, since I think that tends to bring to mind a particular Silicon Valley type, which is often pretty different from the people who have succeeded running big projects in EA. E.g. classic entrepreneurship is often about quickly testing lots of things, whereas many EA projects require really good judgement. That's why I couched it in terms of 'people who could run big projects in EA' (leaving it open about exactly which skills are most needed there).

To give a concrete example, I mention the example of 'the type of person who could found CSET' - and the skills there seem pretty different from the people who typically self-identify as entrepreneurs on HN etc.

Replies from: Charles He, Jan-WillemvanPutten
comment by Charles He · 2021-08-03T02:46:56.199Z · EA(p) · GW(p)

I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.

Do you think it is useful to speculate about what these orgs could be, in any sense (cause area, purpose, etc.)?

Maybe this speculation could be useful to give some sense/hint/structure to how these orgs can be fostered (as opposed to directly encouraging someone to create such an org). For example, it may guide focus on certain smaller orgs or promoting some kind of cultural change.

To give a concrete example, I mention the example of 'the type of person who could found CSET' - and the skills there seem pretty different from the people who typically self-identify as entrepreneurs on HN etc.

To try to be helpful, here’s a sample of some founders from orgs who received the 3 largest Open Phil grants.

CSET - Jason Matheny - https://en.wikipedia.org/wiki/Jason_Gaverick_Matheny

OpenAI - Sam Altman - https://en.wikipedia.org/wiki/Sam_Altman

Malaria Consortium - Sylvia Meek - https://www.malariaconsortium.org/sylvia-meek/dr-sylvia-meek-1954-2016.htm

I'd also want to be cautious about using the term 'entrepreneur' to describe what we're looking for, since I think that tends to bring to mind a particular silicon valley type, which is often pretty different from the people who have succeeded running big projects in EA.

Indeed, at their current life stage (Sam Altman was an SV founder), these people are very different from the "move fast and break things" startup style.

Touching on @Ben_West's comment, many of these founders seem similar in profile to founders at middle or larger size companies and also have significant scientific experience.

Matheny was a scientist and manager of research, and Malaria Consortium's founding team has multiple strong scientists. At the same time, these are people who have very high human capital in the form of executive experience. Their profile seems normal for "CEOs".

While many CEOs do have scientific degrees, the level of scientific prestige and activity among this group might be uncommon. 

This pattern could be useful in some way (most obviously, you could just ask the current senior research leaders of EA aligned orgs/think tanks if they have a vision for a useful project).

Replies from: tamgent
comment by tamgent · 2021-08-08T08:07:08.072Z · EA(p) · GW(p)

Do you think it is useful to speculate about what these orgs could be, in any sense (cause area, purpose, etc.)?

This is being done here: https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/what-ea-projects-could-grow-to-become-megaprojects [EA · GW]

Replies from: Charles He
comment by Charles He · 2021-08-09T00:20:13.739Z · EA(p) · GW(p)

Thanks for pointing this out!

comment by Jan-WillemvanPutten · 2021-08-03T04:58:38.599Z · EA(p) · GW(p)

Thanks for your response, Benjamin (and Ben West for asking a question).

Sorry for not being completely clear about this, but I pointed towards the profile of an (EA-style) charity entrepreneur, which is indeed different from the regular SV co-founder (although there are similarities, but let’s not go into the details). I think the mini profile you wrote about a nonprofit entrepreneur is great and I am happy to see that 80k pushes this. Hopefully the Community Building Program will follow, since national and local chapters are for many people the first point of entrance into EA. It would be good if this program also encouraged local and national chapters to run cheap, valuable tests of nonprofit entrepreneurship.

I am also very happy that you acknowledge that reaching out to get 2x as many people in is probably desirable. Also here I think that the “common EA opinion” shifted quite a lot over the ~two years I’ve been involved in EA, great to see!

comment by Ozzie Gooen (oagr) · 2021-08-03T16:15:02.294Z · EA(p) · GW(p)

As someone who's spent a fair amount of time with the SV startup scene (have cofounded multiple companies) and the EA scene, I'd flag that the cultures of at least these two are quite different and often difficult to bridge. 

Most of the large EA-style projects I'd be excited about are ones that would require a fair amount of buy-in and trust from the senior EA community. For example, if you're making a new org to investigate AGI safety, bio safety, or expand EA, senior EAs would care a lot about the leadership having really strong epistemics and understanding of existing EA thinking on the topic.

One problem is that entrepreneurship culture can present a few challenges:
1) There's often a lot of overconfidence and weird epistemics
2) Often there's not much spare time to learn about EA concepts
3) Leaders often seem to grow egos

The key thing, to me, seems to be some combination of humility and willingness to begin at the bottom for a while. I think that becoming well versed in EA/longtermism enough to found something important, can often require beginning in a low-level research role or similar. 

One strategy some people give is something like, "I don't care about buy-in from the EA community, I could start something myself quickly, and raise a lot of other money". In sensitive areas, this can get downright scary, in my opinion.

Of my current successful entrepreneur friends, I can't see many of them going the 'go low-status for a few years' route, but I could see some. Most people I know don't seem to want to go down a few status and confidence levels for a while.

There are definitely some prominent examples in EA of people who have done similar things (I'd flag Ben West, who seems to have pulled off a successful transition, and is discussed in these comments), but there aren't all too many.

The FHI RSP program was a nice introductory program, but was definitely made more for researchers than entrepreneurs. I could imagine us having similar transitional programs for entrepreneur-types in the future. There are probably some ways more programs and work in this area could make things easier; for instance, they could be made to seem really prestigious (flashy branding), in part to make it more palatable for people taking status decreases for a while.

If there are successful entrepreneurs out there reading this interested in chatting, I'd of course be happy to (just message me), though I'm sure 80k and other groups would be interested as well.

 

(Note: I think Charity Entrepreneurship gets around this a bit by first, focusing on younger people with potential to be entrepreneurs, rather than people who are already very successful, and second, focusing on particular interventions that can be done more independently.)

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-03T21:14:23.361Z · EA(p) · GW(p)

A lot of this rings true to me.

comment by Ben_West · 2021-08-02T21:04:35.104Z · EA(p) · GW(p)

I feel like these conversations often get confusing because people mean different things by the term "entrepreneur", so I wonder if you could define what you mean by "entrepreneur" and what you think they would do in EA?

Even with very commercializable EA projects like cellular agriculture, my experience is that the best founders are closer to scientists than traditional CEOs, and once you get to things like disentanglement research the best founders have almost no skills in common with e.g. tech company founders, despite them both technically being "entrepreneurs" in some sense.

comment by Benjamin_Todd · 2021-08-03T21:05:14.363Z · EA(p) · GW(p)

One extra thought is that there was a longtermist incubator project for a while, but they decided to close it down. I think one reason was they thought there weren't enough potential entrepreneurs in the first place, so the bigger bottleneck was movement growth rather than mentoring. I think another bottleneck was having an entrepreneur who could run the incubator itself, and also a lack of ideas that can be easily taken forward without a lot more thinking. (Though I could be mis-remembering.)

Replies from: tamgent
comment by tamgent · 2021-08-08T07:59:48.693Z · EA(p) · GW(p)

I think they were pretty low profile, and the types of things that Jan-WillemvanPutten is suggesting are about being more present/visible in EA in order to attract a subculture to develop more. I think this example supports his main point more actually, because movement growth is quite driven by culture and attractors for different subcultures.

(As an aside, I was engaged with the longtermist incubator and found it helpful/useful.)
(Another aside, I can think of a few downsides of Jan-WillemvanPutten's specific suggestion, but I think the important part is the visibility and culture building aspect.)

comment by Manuel_Allgaier · 2021-08-08T22:07:48.116Z · EA(p) · GW(p)

Agree that this seems neglected. EA Germany (and I personally) are happy to support EA projects that have potential to grow into impactful EA organisations. If you have ideas on how to better do that (within the limited capacity of national group organisers), feel free to get in touch!

(I also agree on the importance of having founders that are value-aligned and have good epistemics, which I think some entrepreneurs are but many others may not be)

comment by HaydnBelfield · 2021-07-31T14:59:47.469Z · EA(p) · GW(p)

How much funding is committed to effective altruism (going forward)? Around $46 billion.

For reference, the Bill & Melinda Gates Foundation is the second largest charitable foundation in the world, holding $49.8 billion in assets.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-02T19:29:09.765Z · EA(p) · GW(p)

Though also note that most of Gates' and Buffett's wealth hasn't yet been put into the foundation.

comment by Benjamin_Todd · 2021-08-03T09:34:55.859Z · EA(p) · GW(p)

An extra thought is that this seems like a positive update on the cost-effectiveness of past meta work.

Here's a rough and probably overoptimistic back of the envelope to illustrate the idea:

  • I'd guess that maybe $50m was spent on formal movement building efforts in 2020. This is intended to include things like OP & GiveWell's spending on staff, most of FHI and MIRI, plus all of the explicit movement building orgs like CEA and 80k. If that started at 0 in 2010, then it might add up to $250m over the decade (assuming straight line growth).

  • If the average cost of employing someone was $50k, that would imply about 5000 person-years of work were invested.

  • Let's assume that formal movement building efforts receive 1/3 of the credit for resources raised (with the other 2/3 going to informal efforts like personal connections and to the original founders).

  • Then, the formal efforts have 'raised' $15bn and 146,000 person-years (assuming 20yr per person & growth of 7300 people).

  • So that's an average return of 60 dollars per 1 dollar in, and 29 person years per 1 year in. You also get a lot of research thrown in for free.

Individual projects will vary a lot, but that's a nice base rate!
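Ben's back of the envelope above can be replicated in a few lines. Every number here is his stated rough guess (not data), reproduced just to make the arithmetic easy to check or tweak.

```python
# Spending side: ~$50m/year in 2020, straight-line growth from 0 in 2010
spend_2020 = 50e6
total_spend = spend_2020 * 10 / 2          # triangle area -> ~$250m over the decade

cost_per_person_year = 50e3
person_years_in = total_spend / cost_per_person_year   # ~5,000 person-years invested

# Returns side: formal efforts credited with 1/3 of the ~$46bn stock,
# and with ~7,300 engaged members at ~20-year careers each
credit_share = 1 / 3
funds_raised = credit_share * 46e9          # ~$15bn
person_years_out = 7300 * 20                # 146,000 person-years

print(funds_raised / total_spend)           # ~60 dollars out per dollar in
print(person_years_out / person_years_in)   # ~29 person-years out per year in
```

As Ben notes, this is probably overoptimistic, but the two headline ratios ($60 per $1, and ~29 person-years per year) fall straight out of the assumptions.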

comment by Neel Nanda · 2021-07-29T21:56:13.964Z · EA(p) · GW(p)

Thanks a lot for the thorough post! I found it really helpful how you put rough numbers on everything, and made things concrete, and I feel like I have clearer intuitions for these questions now.

My understanding is that these considerations only apply to longtermists, and that for people who prioritise global health and well-being or animal welfare this is all much less clear, would you agree with that? My read is that those cause areas have much more high quality work by non EAs and high quality, shovel ready interventions.

I think that nuance can often get lost in discussions like this, and I imagine a good chunk of 80K readers are not longtermists, so if this only applies to longtermists I think that would be good to make clear in a prominent place.

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

Replies from: Benjamin_Todd, Chi
comment by Benjamin_Todd · 2021-07-30T09:40:57.375Z · EA(p) · GW(p)

Hey, I agree the situation is more unclear outside of longtermism and meta (I flag that a couple of times). It's also pretty complicated, so I didn't want to put it in the post and hold up publication.

Here are some quick thoughts:

The money available to global health and animal welfare has grown a lot as well (perhaps 2-3x) - e.g. see the comment below about Moskovitz, but it's not just that - so a similar dynamic could apply.

Focusing on global health, there is a funding gap for GiveWell-recommended charities more effective than GiveDirectly, though this gap seems more filled than in the past. If you look at the RFMF estimates, additional donations now mostly go towards operations in ~3 years' time rather than the next 1-2 years. And you'd want to think about things like Vitalik's recent $54m donation.

My guess is that for interventions more cost-effective than current GiveWell-recommended charities (excluding GiveDirectly), there is a big funding overhang (in the sense that GiveWell/OP would happily fund a lot more of this stuff asap if it existed).

For interventions similar to the GiveWell top charities, it's fairly balanced.

And then if you're OK to drop cost-effectiveness by 10-20x, GiveDirectly could ultimately absorb billions, so there's still a funding gap there.

I also still think that many people working on global health could have more impact via jobs in research, policy, nonprofit entrepreneurship etc. than through earning to give.

Turning to animal welfare, Lewis Bollard said in our 2017 podcast that animal welfare seemed to have more of a funding overhang / be more talent constrained, and this wasn't only driven by OP entering the space.

If the growth in funding has kept pace with the growth of people working on animal welfare, then the size of the overhang should be even bigger today in absolute terms.

Moreover, many animal welfare people are now focused on clean meat, which often doesn't need philanthropic funding in the first place.

On the other hand, many animal welfare non-profits still seem more likely to say that if they had more funding, they'd hire a bunch more people, and salaries also seem fairly low. I'm not sure exactly what's going on there.

Replies from: Linch
comment by Linch · 2021-07-31T03:12:58.906Z · EA(p) · GW(p)

(For context I work in an org where currently >50% of researchers do subcause prioritization within animal welfare, though I'm not an animal welfare researcher myself. Speaking only for myself, just one person's take, etc, etc)

Some quick personal thoughts on animal welfare funding vs talent constraints:

Naively I would guess that the median research hire in animal welfare for RP would contribute >$1 million/year of counterfactual value solely in terms of improving the quality and quantity of grantmaker decisions within animal welfare. For example, I would naively ascribe somewhat higher numbers for Neil's EU work [EA · GW], or if quality of the moral weights work is improved by the equivalent of additional thinking of a median researcher-year.

(Note that this is fairly BOTEC and there are obvious biases for someone to think that their work and that of their coworkers is especially important). 

Moreover, many animal welfare people are now focused on clean meat, which often doesn't need philanthropic funding in the first place.

I think this is wrong or at least may be easy to misinterpret for the typical reader. The biggest bottleneck I see for clean meat is strategic clarity along the lines of "can we honestly and accurately have a coherent roadmap, enough to persuade funders and others that the research we're currently doing is on the pathway to eventually making cost-competitive clean meat that will have widespread regulatory and consumer adoption" (A coherent and technically sound response to Humbird 2020 is necessary but not sufficient here). 

But if you're convinced that current work in clean meat is on the path to producing clean meat with the relevant desiderata, then in those worlds clean meat is much more bottlenecked by funding than by technical talent (compared to, say, research on malaria vaccines or universal viral sequencing). As a sanity check, all the cultured meat research in academia probably looks like <$10 million/year(?), and maybe another ~1.5 orders of magnitude more in industry, well over an order of magnitude less than the valuation of a single plant-based startup alone.

So in worlds where clean meat research is tractable, I'd expect philanthropic funding to be significant in improving the field, e.g. by drawing people away from plant-based meats, biopharma, yeast germline manufacture, etc. I suspect (though I don't have direct evidence) that many people in the latter two categories would take 30% pay cuts to work in something that's technically interesting and with high potential altruistic value like clean meat, but not >75% pay cuts.

 

Replies from: Charles He
comment by Charles He · 2021-07-31T06:25:54.633Z · EA(p) · GW(p)

Moreover, many animal welfare people are now focused on clean meat, which often doesn't need philanthropic funding in the first place.

I think this is wrong or at least may be easy to misinterpret for the typical reader.

It is confusing but my guess is that Benjamin Todd meant plant based meat, for the reasons you indicate (size of the recently popular PBM industry, where recent valuations of a single company  is many times all funding in FAW, as opposed to in vitro lab grown meat, which is much farther away from commercialization).

Replies from: Benjamin_Todd, Linch
comment by Benjamin_Todd · 2021-07-31T11:48:14.351Z · EA(p) · GW(p)

Yes, sorry I was thinking of meat substitutes broadly. I agree clean meat is more funding constrained than plant based meat, because it's further from commercialisation.

comment by Linch · 2021-07-31T10:20:37.951Z · EA(p) · GW(p)

Hmm yeah maybe(?) he just misspoke. I do think "clean meat" usually refers to in vitro lab grown meat rather than "all meat alternatives", both within EA and more broadly, so if clean meat was a standin for PBM I'd stand by my assertion "may be easy to misinterpret for the typical reader"

FWIW I looked into PBM much less than clean meat but I would guess it would be overconfident to assume that replacing all (or most) slaughter-based meat via scaling up existing systems is inevitable and I would guess progress is at least somewhat amenable to philanthropic funding, though not necessarily on parity with top farmed animal welfare interventions like corporate campaigns.

Btw, I too find myself confused about this point by Benjamin_Todd and also am not sure exactly what's going on here.

On the other hand, many animal welfare non-profits still seem more likely to say that if they had more funding, they'd hire a bunch more people, and salaries also seem fairly low. I'm not sure exactly what's going on there.

Replies from: Charles He
comment by Charles He · 2021-07-31T18:06:12.892Z · EA(p) · GW(p)

Btw, I too find myself confused about this point by Benjamin_Todd and also am not sure exactly what's going on here.

On the other hand, many animal welfare non-profits still seem more likely to say that if they had more funding, they'd hire a bunch more people, and salaries also seem fairly low. I'm not sure exactly what's going on there.

I think Benjamin_Todd is saying that

  1. There is currently “room for funding” in (farmed) animal welfare, maybe specifically in talent and salaries
  2. There was a reported overhang of funding in farmed animal welfare. Extrapolating from growth in Good Ventures, this overhang could even have increased
  3. 1 and 2 seem to be a contradiction
     

Some quick thoughts of mine that may be low quality:

  • I know some people in the farmed animal welfare space; funding is being thoughtfully deployed and there is attention to talent and appropriate compensation.
  • There’s a lot of actual on-the-ground, operational activity in animal welfare, compared to meta or longtermist cause areas. In my personal bias/perspective/worldview, this activity is inherently less cohesive and noisier, and that’s normal. The noise can make it a little harder to get a signal about funding gaps.
  • Increasing salaries or significantly improving the stream of talent are inherently delicate and slow processes involving changes in culture
  • I think 2017 is a long time in the EA movement. It seems reasonable to get newer information about funding. Note that clearly 80,000 hours has hosted important leaders in farmed animal welfare since 2017.

I'm more sure that actual on the ground work, operations and implementation, is precious and can be hard to communicate or make visible.

comment by Chi · 2021-07-29T22:27:43.700Z · EA(p) · GW(p)

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

+1

I think I often hear longtermists discuss funding in EA and use the $22 billion number from Open Philanthropy. And I think people often make some implicit mental move thinking that's also the money dedicated to longtermism, even though my understanding is very much that that's not all available to longtermism.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-07-30T09:23:08.673Z · EA(p) · GW(p)

In the recent podcast with Alexander Berger, he estimates it'll be split roughly 50:50 longtermism vs. global health and wellbeing.

This means that the funding available to global health and wellbeing has also grown a lot too, since Dustin Moskovitz's net worth has gone from $8bn to $25bn.

comment by Aidan Alexander · 2021-09-19T03:42:58.812Z · EA(p) · GW(p)

Hi there! 

I'm a bit confused about the claim that the bottleneck is ways to deploy funding rather than funding itself. 

In global poverty and health cause areas for example, there are highly scalable EA-endorsed interventions like insecticide-treated bed nets, deworming and cash transfers, and there are still plenty of people with malaria, children to deworm, and folks below the poverty line who could receive cash transfers. As far as I'm aware, AMF, Deworm the World / SCI and GiveDirectly could deploy more funds, and to the extent that they needed to hire more people to do so, I hypothesise they could do so easily, given that, as I understand it, there is a lot of competition for jobs at organisations like these. What am I missing?

Thanks in advance!

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-09-20T20:30:16.054Z · EA(p) · GW(p)

Hi Aidan, the short answer is that global poverty seems the most funding constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta e.g. at the top of the 'implications section' I said:

The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.

That said, I still think global poverty is 'talent constrained' in the sense that:

  • If you can design something that's several-fold more cost-effective than GiveDirectly and moderately scalable, you have a good shot of getting a lot of funding. Global poverty is only highly funding constrained at the GiveDirectly level of cost-effectiveness. 
     
  • I think people can often have a greater impact on global poverty via research, working at top non-profits, advocacy, policy etc. rather than via earning to give.
comment by tylermaule · 2021-08-17T03:03:22.170Z · EA(p) · GW(p)

For each person in a leadership role, there’s typically a need for at least several people in the more junior versions of these roles or supporting positions — e.g. research assistants, operations specialists, marketers, ML engineers,...I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year

 

If this is true, why not spend way more on recruiting and wages? It's surprising to me that the upper bound could be so much larger than equivalent salary in the for-profit sector.

I might be missing something, but it seems to me the basic implication of the funding overhang is that EA should convert more of its money into 'talent' (via Meta spending or just paying more).

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-17T09:06:35.947Z · EA(p) · GW(p)

This is a big topic, and there are lots of factors.

One is that paying very high salaries would be a huge PR risk.

That aside, the salaries at many orgs are already good, while the most aligned people are not especially motivated by money. My sense is that e.g. doubling the salaries from here would only lead to a small increase in the talent pool (like maybe +10%).

Doubling costs to get +10% labour doesn't seem like a great deal - that marginal spending would be about a tenth as cost-effective as our current average. (And that's ignoring the PR and cultural costs.)

Some orgs are probably underpaying, though, and I'd encourage them to raise salaries.

Replies from: RyanCarey, tylermaule
comment by RyanCarey · 2021-08-17T22:08:38.495Z · EA(p) · GW(p)

This kind of ambivalent view of salary-increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.

If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you'd expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but much fewer do. You'd expect EAs to be nearly indifferent toward pay in job choice, but they're not. You'd expect that when you increase EAs' salaries, they'd just donate a large portion on to great tax-deductible charities, so >75% of the salary increase would be refunded on to other effective orgs. But when you say that the spending would be only a tenth as effective (rather than ~four-tenths), clearly you don't.

Although some EAs are insensitive to money in this way, 90% seems too high. Rather, with doubled pay, I think you'd see some quality improvements from an increased applicant pool, and some improved workforce size (>10%) and retention. Some would buy themselves some productivity and happiness. And yes, some would donate. I don't think you'd draw too many hard-to-detect "fake EAs" - we haven't seen many so far. Rather, it seems more likely to help quality than hurt on the margin.

I don't think the PR risk is so huge at <$250k/yr levels. The closest thing I can think of is commentary regarding folks at OpenAI, but it's a bigger target, with higher pay. If the message gets out that EA employees are not bound to a vow of poverty, and are actually compensated for >10% of the good they're doing, I'd argue that would enlarge and improve the recruitment pool on the margin.

(NB. As an EA worker, I'd stand to gain from increased salaries, as would many in this conversation. Although not for the next few years at least given the policies of my current (university) employer.)

Replies from: Gregory_Lewis, Benjamin_Todd
comment by Gregory_Lewis · 2021-08-20T22:31:34.889Z · EA(p) · GW(p)

[Predictable disclaimers, although in my defence, I've been banging this drum [EA(p) · GW(p)] long before I had (or anticipated to have) a conflict of interest.]

I also find the reluctance to wholeheartedly endorse the 'econ-101' story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:

  • EA-land tends to be sympathetic to using 'econ-101' accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for first-pass analysis, and we'd need persuading to depart greatly from them.
  • Considerations for why 'econ-101' won't (significantly) apply here don't seem to extend to closely analogous cases: we don't fret (and typically argue against others fretting) about other charities paying their staff too much; we don't think (cf. the reversal test) that Google could improve its human capital by cutting pay and keeping the 'truly committed googlers'; we're generally sympathetic to public servants getting paid more if they add much more social value (and don't presume these people are insensitive to compensation beyond some limit); we prefer simple market mechanisms over more elaborate tacit transfer systems (e.g. just give people money); etc.
  • The precise situation makes the 'econ-101' intervention particularly appetising: if you value labour much more than the current price, and you are sitting atop an ungodly pile of lucre so vast you earnestly worry about how you can spend big enough chunks of it at once, 'try throwing money at your long-standing labour shortages' seems all the more promising.
  • Insofar as it goes, the observed track record looks pretty supportive of the econ-101 story - besides all the points Ryan mentions, compare "price suppression results in shortages" to the years-long (and still going strong) record of orgs lamenting they can't get the staff.

Perhaps the underlying story is that, as EA-land is generally on the same team, one might hope to do better than taking one's cue from 'econ-101', given the typically adversarial/competitive dynamics it presumes between firms, and between employee and employer. I think this hope is forlorn: EA-land might be full of aspiring moral saints, but aspiring moral saints remain approximately homo economicus. So the usual stories about the general benefits of economic efficiency prove hard to better, and (play-pumps style) attempts to try feel apt to backfire (1 [EA(p) · GW(p)], 2 [EA(p) · GW(p)], 3 [EA(p) · GW(p)], 4 [EA(p) · GW(p)] - ad nauseam).
 
However, although I don't think 'PR concerns' should guide behaviour (if X really is better than ¬X, the costs of people reasonably - if mistakenly - thinking less of you for doing X are typically better borne than strategising to hide this disagreement), many things look bad because they are bad.

In the good old days, I realised I was behind on my GWWC pledge so used some of my holiday to volunteer for a week of night-shifts as a junior doctor on a cancer ward. If in the future my 'EA praxis' is tantamount to splashing billionaire largess on a lifestyle for myself of comfort and affluence scarcely conceivable to my erstwhile beneficiaries, spending my days on intangible labour in well-appointed offices located among the richest places heretofore observed in human history, an outside observer may wonder what went wrong. 

I doubt they would be persuaded that my defence is any better than obscene: "Not all heroes wear capes; some nobly spend thousands on yuppie accoutrements they deem strictly necessary for them to do the most good!". Nor would they be moved by my remorse: self-effacing acknowledgement is not expiation, nor complaisance to my own vices atonement. I still think jacking up pay may be good policy, but personally, perhaps I should doubt myself too.   

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-20T23:06:55.129Z · EA(p) · GW(p)

I'm just saying that when we think offering more salary will help us secure someone, we generally do it. This means that further salary raises seem to offer low benefit:cost. This seems consistent with econ 101.

Likewise, it's possible to have a lot of capital, but for the cost-benefit of raising salaries to be below the community bar (which is something like 'invest the money for 20 years and then spend at OP's last dollar' - a pretty high bar). Having more capital increases the willingness to pay for labour now to some extent, but tops out after a point.

To be clear, I'm sympathetic to the idea that salaries should be even higher (or we should have impact certificates or something). My position is more that (i) it's not an obvious win (ii) it's possible for the value of a key person to be a lot higher than their salary, without something going obviously wrong.

comment by Benjamin_Todd · 2021-08-18T10:32:15.962Z · EA(p) · GW(p)

I definitely agree EAs are motivated somewhat by money in this range. 

My thought is more about how it compares to other factors.

My impression of hiring at 80k is that salary rarely seems like a key factor in choosing us vs. other orgs (probably under 20% of cases). If we doubled salaries, I expect existing staff would save more, donate more, and consume a bit more; but I don't think we'd see large increases in productivity or happiness.

My impression is that this is similar at other orgs who pay similarly to us. Some EA orgs still pay a lot less, and I think there's a decent chance this is a mistake – though you'd need to weigh it against the current cost-effectiveness of the project.

I think the PR risks for charities paying high salaries are pretty big - normal people hate the idea of charities paying a lot. Paying regular employees $200k in London would make them higher paid than the CEOs of most regular charities, including pretty big ones where the staff are typically middle aged. EA has also had a lot of kudos from the 'living on not very much to donate' meme. Most people aiming to do good are assumed to be full of shit, and living on not very much is a hard-to-fake symbol that shows you're morally serious. I agree that meme has some serious downsides relative to 'you can earn a bunch of money doing good' meme, but giving up that kudos is a major cost – which makes the trade off ambiguous to me. Maybe it's possible to have some of both by paying a lot but having some people donate most of it, or maybe you get the worst of both worlds.

Replies from: RyanCarey
comment by RyanCarey · 2021-08-18T11:12:39.545Z · EA(p) · GW(p)

Agree that we shouldn't expect large productivity/wellbeing changes. Perhaps a ~0.1SD improvement in wellbeing, and a single-digit improvement in productivity - small relative to effects on recruitment and retention.

I agree that it's been good overall for EA to appear extremely charitable. It's also had costs though: it sometimes encouraged self-neglect, portrayed EA as 'holier than thou', EA orgs as less productive, and EA roles as worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding base, it's been beneficial to de-emphasise sacrifice, in order to place more emphasis on effectiveness. It better reflects what we're currently doing, who we want to recruit, too. So long as we take care to project an image that is coherent, and not hypocritical, I don't see a problem with accelerating the pivot. My hunch is that even apart from salaries, it would be good, and I'd be surprised if it was bad enough to be decisive for salaries.

Replies from: Sean_o_h
comment by Sean_o_h · 2021-08-19T11:31:37.219Z · EA(p) · GW(p)

I think there are a few other considerations that may point in the direction of slightly higher salaries (or at least, avoiding very low salaries). EA skews young in age as a movement, but this is changing as people grow up 'with it' or older people join. I think this is good. It's important to avoid making it more difficult for people to join/remain involved who have other financial obligations that come in a little later in life, e.g.
- child-rearing
- supporting elderly parents
Relatedly, lower salaries can be easier to accept for longer for people who come from wealthier backgrounds and have better-off social support networks or expectations of inheritance etc (it can feel very risky if one is only in a position to save minimally, and not be able to build up rainy day funds for unexpected financial needs otherwise).  

comment by tylermaule · 2021-08-17T13:22:31.721Z · EA(p) · GW(p)

Doubling costs to get +10% labour doesn't seem like a great deal

 

I agree in principle, but in this case the alternative is eliminating $400k-4M of funding, which is much more expensive than doubling the salary of e.g. a research assistant.

To be clear, I am more skeptical of this valuation than I am actually suggesting doubling salaries. But conditional on the claim that one engaged donor entering the non-profit labor force is worth >$400k, it seems like the right call.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-20T23:10:41.912Z · EA(p) · GW(p)

Not sure I follow the maths.

 

If there are now 10 staff, each paid $100k, and each generating $1m of value p.a., then the net gain is $10m - $1m = $9m, and the benefit:cost ratio is 10:1.

 

If we double salaries and get one extra staff member, we're now paying $2.2m to generate $11m of value. The excess is $8.8m. The average benefit:cost ratio has dropped to 5:1, and the ratio on the marginal $1.2m of spending was actually below 1.
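For concreteness, the arithmetic can be checked in a few lines (a sketch using only the hypothetical figures from this comment, not any real org's budget):

```python
# Toy model: staff on a common salary, each generating a fixed value per year.
def org_stats(n_staff, salary, value_per_staff=1_000_000):
    """Return (total cost, total value, excess value) per year."""
    cost = n_staff * salary
    value = n_staff * value_per_staff
    return cost, value, value - cost

# Baseline: 10 staff at $100k, each generating $1m/year -> $9m excess.
base_cost, base_value, base_excess = org_stats(10, 100_000)

# Doubled salaries buy one extra hire: $2.2m cost, $11m value -> $8.8m excess.
new_cost, new_value, new_excess = org_stats(11, 200_000)

marginal_cost = new_cost - base_cost     # $1.2m of extra spending
marginal_value = new_value - base_value  # $1.0m of extra value: ratio below 1
```

So the org stays strongly net positive overall, but the marginal spend destroys value relative to the baseline.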

Replies from: tylermaule
comment by tylermaule · 2021-08-22T14:54:32.452Z · EA(p) · GW(p)

Agreed, just a function of how many salaries you assume will have to be doubled alongside to fill that one position

(a) Hopefully, doubling ten salaries to fill one is not a realistic model. Each incremental wage increase should expand the pool of available labor. If the EA movement is labor-constrained, I expect a more modest raise would cause supply to meet demand.

(b) Otherwise, we should consider that the organization was paying only half of market salary, which perhaps inflated their ‘effectiveness’ in the first place. Taking half of your market pay is itself an altruistic act, which is not counted towards the org’s costs. Presumably if these folks chose that pay cut, they would also choose to donate much of their excess salary (whether pay raise from this org, or taking a for-profit gig).

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-22T16:21:58.825Z · EA(p) · GW(p)

On b), for exactly that reason, our donors at least usually focus more on the opportunity costs of the labour input to 80k rather than our financial costs - looking mainly at 'labour out' (in terms of plan changes) vs. 'labour in'. I think our financial costs are a minority of our total costs.

On a), yes, you'd need to hope for a better return than the 'doubling leads to +10% labour' estimate I made.

If we suppose a 20% increase is sufficient for +10% labour, then the new situation would be:

Total costs: $1.32m

Impact: $11m

So, the excess value has increased from $9m to $9.7m, and the benefit:cost ratio on the marginal $320k is about 3:1. This would be worth doing, though the marginal cost-effectiveness is about a third of the original average. (In our case at least, I don't think a +20% increase to salaries would lead to +10% more hires though.)

It looks like the breakeven point in this simplified model is roughly an 80% increase in salaries to gain 10% more labour (i.e. the benefit:cost ratio on the marginal ~$1m is around 1:1). In reality I don't think we'd want to go that close to the breakeven point: there may be better uses of the money, unusually high salaries carry reputation costs, and salaries are harder to lower than to raise (so if uncertain, it's better to undershoot).
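Under the same simplified model (again, purely hypothetical figures: 10 staff at $100k, $1m of value each, and a raise buying one extra hire), the marginal ratio as a function of the raise can be sketched as:

```python
# Marginal value per marginal dollar when a salary raise buys extra hires.
def marginal_ratio(raise_frac, base_staff=10, base_salary=100_000,
                   value_per_staff=1_000_000, extra_hires=1):
    base_cost = base_staff * base_salary
    new_cost = (base_staff + extra_hires) * base_salary * (1 + raise_frac)
    marginal_value = extra_hires * value_per_staff
    return marginal_value / (new_cost - base_cost)

# 20% raise: marginal value is ~3x marginal cost ("about a third of before").
r20 = marginal_ratio(0.20)   # 3.125
# ~80% raise: roughly breakeven, marginal ratio close to 1.
r80 = marginal_ratio(0.80)
```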

Replies from: tylermaule
comment by tylermaule · 2021-08-25T23:51:48.140Z · EA(p) · GW(p)

In reality I don't think we'd want to go that close to the breakeven point - because there may be better uses of money, due to the reputation costs of unusually high salaries, and because salaries are harder to lower than to raise (and so if uncertain, it's better to undershoot).

Good points, I agree it would be better to undershoot.

Still, even with the pessimistic assumptions, the high end of that $0.4-4M range seems quite unlikely.

Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work? If so, that would be a major update to my thinking.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-26T14:30:40.122Z · EA(p) · GW(p)

Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work?

 

It depends on what you mean by 'entry level' & relative fit in each path, but the short answer is yes. 

If someone was earning $1m per year and didn't think that might grow a lot further from there, I'd encourage them to seriously consider switching to direct work. 

I.e. I think it would be worth doing a round of speaking to people at the key orgs, making applications and exploring options for several months (esp insofar as that can be done without jeopardising your current job). Then they could compare what comes up with their current role. I know some people going through this process right now.

If someone was already doing direct work and doing well, I definitely wouldn't encourage them to leave if they were offered a $1m/year earning to give position.

The issue for someone already in earning to give is the probability that they can find a role like that which is a good fit for them, which is a long way from guaranteed. 

Replies from: tylermaule
comment by tylermaule · 2021-08-30T14:31:05.280Z · EA(p) · GW(p)

That all seems reasonable.

Shouldn’t the displacement value be a factor though? This might be wrong, but my thinking is (a) the replacement person in the $1M job will on average give little or nothing to effective charity (b) the switcher has no prior experience or expertise in non-profit, so presumably the next-best hire there is only marginally worse?

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-30T15:29:22.203Z · EA(p) · GW(p)

The estimates are aiming to take account of the counterfactual i.e. when I say "that person generates value equivalent to extra donations of $1m per year to the movement", the $1m is accounting for the fact that the movement has the option to hire someone else.

In practice, most orgs are practicing threshold hiring, where if someone is clearly above the bar, they'll create a new role for them (which is what we should expect if there's a funding overhang).

comment by Benjamin_Todd · 2021-08-17T09:08:46.073Z · EA(p) · GW(p)

Readers might be interested in this twitter thread on megaprojects, and forum discussion of ideas [EA · GW].

comment by Charles He · 2021-07-29T22:56:27.315Z · EA(p) · GW(p)

This comment is a generic, low information poke at the excellent article:

One of the takeaways of this article is that there has been a dramatic expansion in EA funding, increasing the overhang of “money” (over “talent”).

I think this reasonably creates the impression that EA funding is now very abundant.

I'm interested or worried about the unintended side effects of this impression:

For an analogy, imagine making a statement about the EA movement needing more “skill in biology”. In response, this updates conscientious, strong EAs who change careers. However, what was actually needed was world class leaders in biology whose stellar careers involve special initial conditions. Unfortunately, this means that the efforts made by even very strong EAs were wasted.

I think such misperceptions can occur unintentionally. This motivates this comment.

With this motivation, it might be useful to interrogate these statements to try to get at the “qualia” or less tangible character behind the impression given of the new funding.

I’m not sure how to do this interrogation well. I’ve written speculative and likely unfairly aggressive scenarios about side effects from this impression of funding:

  • It undermines Earning to Give efforts that can give very large and hidden value to the movement (development of deep operational skills and benevolent coordination among EAs in industry, government and policy makers)
  • The concentration undermines nimbler funding of smaller, nascent organizations. To explain: Open Phil funding can be hard to get (this is offset by the thoughtful, generous creation of orgs such as EA Funds). These issues may increase with greater centralization, and the perception of ample money may also undermine the funding of small orgs.
  • The major new driver of funding is related to cryptocurrency. There are few industries as volatile or uncertain. While you specifically flag this, I’m worried this will be buried by the "top line" statement which reads EA funding has increased—if this turns out not to be true, choices and actions may have been made that decrease access to funding.
  • This concern is slightly different and related to alignment: there may be shifts in the EA movement as a result of new, very large funders. This concern increases due to the unusually conscientious, modest nature of the EA community, who update readily, and because new funders may focus on relatively few and less established cause areas. While we should expect some cultural change as a result of a major increase in funding, these factors may make the community unstable or reduce cohesion.
  • Normal concerns about diversity of funding.

My first sentence, about this comment being low information and maybe unfounded, was not rhetorical. 

I am happy to be completely wrong in every way! 

It’s an absolute good if EAs are updated by changing resources, and articles like this are invaluable.

Does anyone else have any comments about this?

Replies from: Benjamin_Todd, Kevin Kuruc
comment by Benjamin_Todd · 2021-07-30T11:22:50.170Z · EA(p) · GW(p)

Agree it's good to think about these things. Our past messaging wasn't nuanced enough - I tried to correct for those issues in the main post, but there are probably going to be new messaging issues.

One quick comment is that I'm pretty worried about issues in the opposite direction e.g. that people aren't being ambitious enough:

Most EA orgs are designed to use at most tens of millions of dollars per year of funding, but we should be trying to think of projects that could deploy $100m+ per year.

comment by Kevin Kuruc · 2021-07-30T16:22:32.176Z · EA(p) · GW(p)

For an analogy, imagine making a statement about the EA movement needing more “skill in biology”. In response, this updates conscientious, strong EAs who change careers. However, what was actually needed was world class leaders in biology whose stellar careers involve special initial conditions. Unfortunately, this means that the efforts made by even very strong EAs were wasted.

This doesn't immediately strike me as a bad outcome, ex-ante. It's very hard to know (1) who will become world class researchers or (2) if non-world-class people move the needle by influencing the direction of their field ever-so-slightly (maybe by increasing the incentives to work on an EA-problem by increasing citations here, peer-reviewing these papers, etc.). I, by no means, am world class, but I've written papers that (I hope) pave the way for better people to work on animal welfare in economics; participate in and attend conferences on welfare economics; signed a consensus statement on research methodology in population ethics; try to be a supportive/encouraging colleague of welfare-economists working on GPR topics; etc. I also worked under a world-class researcher in grad school and now sometimes serve as a glorified assistant (i.e., coauthor) who helps him flesh out and get more of his ideas to paper. In your example, if the community 'needs more people in biology' I think the scaffolding of the sorts I try to provide, is probably(?) still impactful. (Caveat: I'm almost certainly over-justifying my own impact, so take this with a grain of salt.)

If 80K was pushing people into undesirable careers with little earnings potential, this might be a legitimate problem. But I think most of the skills built in these hits-based careers are transferable and won't leave you in a bad spot. 

Replies from: Charles He
comment by Charles He · 2021-07-30T17:18:19.859Z · EA(p) · GW(p)

Hi Kevin!

I saw your excellent posts as an economics professor [EA · GW] and also cutting WIFI [EA · GW].

Both were great. It's great to hear from your perspective as an economics professor and hear about your work!

Also, thanks for your comment. I think I get what you’re saying:

  • (It's not clear why anyone should listen to my opinions about their life choices) but yes, it seems perfectly valid to go into any discipline, and you can have a huge value and generate impact in many paths of life.
  • Also, there's a subthread here about elitism that is difficult to unpack, but it seems healthy to discuss "production functions", skill and related worldviews explicitly at some point.

To be frank, by giving my narrative example, I was trying to touch on past messaging issues that actually happened. 

These messaging issues are alluded to in this article, also by Benjamin Todd:

https://80000hours.org/2018/11/clarifying-talent-gaps/

Basically, the problem is as suggested in my example—in the past, the need for very specific skills or profiles was misinterpreted as a need for general talent. This did result in bad outcomes.

I chose to give my narrative instead of directly pointing to a past instance of the issue. 

By doing this, I hoped to be more approachable to those less familiar with the history. It is also less confrontational while making the same point.

Replies from: Kevin Kuruc
comment by Kevin Kuruc · 2021-07-30T21:07:18.414Z · EA(p) · GW(p)

Thanks for writing back -- and for the unnecessary compliments on my inaugural posts :) -- Charles! I only know the context of mis-messaging around skills at a high level, so it is hard for me to respond without knowing what 'bad outcomes' look like. I don't doubt that something like this could happen, so I now see the point you were trying to make.

I was responding as someone who read your (intentionally not fleshed out) hypothetical and thought the appropriate response might actually be for someone well-suited for 'biology' to work on building those broad skills even with a low probability of achieving the original goal. 

comment by Chi · 2021-07-29T22:29:52.062Z · EA(p) · GW(p)

edit: no longer relevant since OP has been edited since. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).

Just to clarify, that's the EV of the path per year, right?

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

I assume this is also per year?


Clarifying because I think numbers like this are likely to be quoted/vaguely remembered in the future, and it's easy to miss the per year part.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-07-30T09:16:38.570Z · EA(p) · GW(p)

Yes, they're all per year. I'll add them.

comment by albrgr · 2021-08-13T19:52:44.724Z · EA(p) · GW(p)

Really liked this post, thanks.


Minor comment, wanted to flag that I think "Open Philanthropy has also reduced how much they donate to GiveWell-recommended charities since 2017." was true through 2019, but not in 2020, and we're expecting more growth for the GW recs (along with other areas) in the future.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-16T08:51:35.291Z · EA(p) · GW(p)

Thanks! I probably should have just used the 2020 figure rather than the 2017-2019 average.

My estimate was an $80m allocation by Open Phil to global health, but this would suggest $100m.

comment by Benjamin_Todd · 2021-08-03T09:25:14.198Z · EA(p) · GW(p)

David Goldberg adds on linkedin that FP pledge value is now $5.7bn, rather than $3.1bn in the table (I was using an old figure).

If we use the 25% to EA-aligned charities figure, that would be $1.4bn NPV rather than $0.8bn.

That 25% figure is also especially uncertain for FP. It could perhaps be anywhere from 2.5% to 50%.

comment by Stefan_Schubert · 2021-08-03T23:34:36.314Z · EA(p) · GW(p)

I find this sort of post very useful and interesting. Thanks for writing it.

It would be great to have a similarly detailed and well-judged post on the growth of AI safety (including AI governance, etc). Seb Farquhar published  a good post on that topic [EA · GW] in 2017, demonstrating very rapid growth, but I haven't seen anything similar since (please correct me if I'm wrong).

comment by kokotajlod · 2021-07-29T13:13:58.119Z · EA(p) · GW(p)

Thanks, this data is really helpful -- and it also is reassuring to know that people in the EA community are on top of this stuff. I would be disappointed if no one was.

I'm curious as to how the 3% per year number could be justified (via models, rather than by aggregating survey answers). It seems to me that it should be substantially higher.

Suppose you have my timelines (median 2030). Then, intuitively, I feel like we should be spending something like 10% per year. If you have 2055 as your median, then maybe 3% per year makes sense...

EXCEPT that this doesn't take into account interest rates! Even if we spent 10% per year, we should still expect our total pot of money to grow, leaving us with an embarrassingly large amount of money going to waste at the end. (Sure, sure,  it wouldn't literally go to waste--we'd probably blow it all on last-ditch megaprojects to try to turn things around--but these would probably be significantly less effective per dollar compared to a world in which we had spread out our spending more, taking more opportunities on the margin over many years.) And if we spent 3%...

Idk. I'm new to this whole question. I'd love for people to explain more about how to think about this.

Replies from: Benjamin_Todd, MichaelA
comment by Benjamin_Todd · 2021-07-29T13:29:36.282Z · EA(p) · GW(p)

It's a very difficult question. 3% was just the median. IIRC the upper quartile was more like 7%, and some went for 10%.

The people who gave higher figures usually either: (i) had short AI timelines - like you suggest (ii) believe there will be lots of future EA donors - so current donors should give more now and hope future donors can fill in for them.

For the counterargument, I'd suggest our podcast with Phil Trammel and Will on whether we're at the hinge of history [EA · GW]. Skepticism about the importance of AI safety and short AI timelines could also be an important part of the case (e.g. see our podcast with Ben Garfinkel).

One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!

Replies from: kokotajlod
comment by kokotajlod · 2021-07-29T14:30:43.887Z · EA(p) · GW(p)

I hadn't even taken into account future donors; if you take that into account then yeah we should be doing even more now. Huh. Maybe it should be like 20% or so. Then there's also the discount rate to think about... various risks of our money being confiscated, or controlled by unaligned people, or some random other catastrophe killing most of our impact, etc.... (Historically, foundations seem to pretty typically diverge from the original vision/mission laid out by their founders.)

I've read the hinge of history argument before, and was thoroughly unconvinced (for reasons other people explained in the comments).

One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!

Hmmm, toy model time: Suppose that our overall impact is log(whatwespendinyear2021)+log(whatwespendinyear2022)+log(whatwespendinyear2023)... etc. up until some year when existential safety is reached or x-risk point of no return is passed.
Then is it still the case that going from e.g. a 10% interest rate to a 20% interest rate means we should spend less in 2021? Idk, but I'll go find out! (Since I take this toy model to be reasonably representative of our situation)
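One way to check numerically, cutting the toy model down to two periods (the names W, r, c1, c2 are just illustrative, not from anything above):

```python
import math

# Two-period version of the log-utility toy model: choose first-period
# spending c1 to maximise log(c1) + log(c2), where unspent money earns
# interest, so c2 = (W - c1) * (1 + r). Grid search over c1.
def best_first_period_spend(W=100.0, r=0.10, steps=100_000):
    best_c1, best_u = None, float("-inf")
    for i in range(1, steps):
        c1 = W * i / steps
        u = math.log(c1) + math.log((W - c1) * (1 + r))
        if u > best_u:
            best_c1, best_u = c1, u
    return best_c1

# With log utility the optimum is c1 = W/2 regardless of r: the interest
# rate scales up second-period spending but doesn't move the split.
```

So at least under log utility, a higher interest rate alone doesn't argue for spending less now in this toy model (income and substitution effects cancel); other utility shapes would behave differently.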

Replies from: Wayne_Chang, Benjamin_Todd
comment by Wayne_Chang · 2021-08-02T02:16:47.504Z · EA(p) · GW(p)

I highly recommend the Founder's Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.  

Replies from: Owen_Cotton-Barratt2, kokotajlod
comment by Owen_Cotton-Barratt2 · 2021-08-02T07:27:29.746Z · EA(p) · GW(p)

I think it's implausible that the optimal giving rate today could be 0%. This is because many giving opportunities function as a form of investment, and we're pretty sure that the best of those outperform the financial market. (I wrote more about ~this in this post: https://forum.effectivealtruism.org/posts/Eh7c9NhGynF4EiX3u/patient-vs-urgent-longtermism-has-little-direct-bearing-on [EA · GW] )

Replies from: Wayne_Chang
comment by Wayne_Chang · 2021-08-03T16:10:21.798Z · EA(p) · GW(p)

Hi Owen, even if you're confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give can still be desirable. That's because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher-impact opportunities in the future.

The secretary problem comes to mind (not a perfect analogy but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accept the next applicant that's better than all the ones we've seen. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. Otherwise, we should continue rejecting opportunities. This allows us to better understand the extent of impact that's actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
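For reference, the classical ~37% stopping rule can be checked with a short simulation (a hypothetical illustration; in the classical formulation the objective is maximizing the chance of hiring the single best candidate):

```python
import math
import random

def secretary_success_rate(n, trials=50_000, seed=0):
    """Simulate the classical strategy: reject the first n/e candidates,
    then accept the first candidate better than everything seen so far
    (falling back to the last candidate if none appears). Returns the
    fraction of trials in which the overall best candidate was chosen."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # 0 = best candidate, n-1 = worst
        rng.shuffle(ranks)
        best_seen = min(ranks[:cutoff]) if cutoff else n
        pick = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (pick == 0)
    return wins / trials

print(secretary_success_rate(100))  # ≈ 0.37, i.e. about 1/e
```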

Replies from: Owen_Cotton-Barratt2, RyanCarey, kokotajlod, kokotajlod
comment by Owen_Cotton-Barratt2 · 2021-08-03T23:59:12.499Z · EA(p) · GW(p)

But the investment-like giving opportunities also preserve optionality! This is the sense in which they are investment-like. They can result in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values than if we just make financial investments now.

Replies from: Wayne_Chang
comment by Wayne_Chang · 2021-08-04T03:15:13.283Z · EA(p) · GW(p)

Thanks for the clarification, Owen! I had misunderstood 'investment-like' as simply having return-compounding characteristics. To truly preserve optionality though, these grants would need to remain flexible (can change cause areas if necessary; so grants to a specific cause area like AI safety wouldn't necessarily count) and liquid (can be immediately called upon; so Founder's Pledge future pledges wouldn't necessarily count). So yes, your example of grants that result "in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values" certainly qualifies, but I suspect that's about it. Still, as long as such grants exist today, I now understand why you say an optimal giving rate of (exactly) 0% is implausible.

comment by RyanCarey · 2021-08-03T21:11:36.172Z · EA(p) · GW(p)

If I recall correctly (and I may well be wrong), the secretary problem's solution only applies if your utility is linear in the ranking of the secretary that you choose - I've never come across a problem where this was a useful assumption.

comment by kokotajlod · 2021-08-03T20:17:35.175Z · EA(p) · GW(p)

Interesting! The secretary problem does seem relevant as a model, thanks!

Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. 

FWIW, many of us do think that. [LW · GW] I do, for example.

comment by kokotajlod · 2021-08-03T19:56:04.587Z · EA(p) · GW(p)

Interesting! How does it work when you are uncertain about the date of extinction but think there's a 50% chance that it's within 10 years? (For concreteness, suppose that every decade there's a 50% chance of extinction.) (This is more or less what I think)
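A rough back-of-envelope for this case (my own arithmetic, and only a loose fit to the secretary analogy, which strictly assumes a known horizon):

```python
# Constant 50%-per-decade extinction risk: how front-loaded is the
# probability-weighted future?
p_survive = 0.5                          # assumed chance of surviving each decade
first_decade_weight = 1 - p_survive      # P(extinction falls in decade 1) = 0.5
expected_decades = 1 / (1 - p_survive)   # mean lifetime of a geometric horizon = 2
print(first_decade_weight, expected_decades)
```

On this crude reading the first decade already carries 50% of the probability-weighted future, which is past the ~37% exploration threshold, so the stopping rule would say to start accepting strong opportunities now.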
 

comment by kokotajlod · 2021-08-02T08:58:22.967Z · EA(p) · GW(p)

Thanks Wayne, will read!

comment by Benjamin_Todd · 2021-07-29T14:54:12.929Z · EA(p) · GW(p)

That toy model is similar to Phil's, so I'd start by reading his stuff. IIRC with log utility the interest rate factors out. With other functions, it can go either way.

However, if your model is more like impact = log(all time longtermist spending before the hinge of history), which also has some truth to it, then I think higher interest rates will generally make you want to give later, since they mean you get more total resources (so long as you can spend it quickly enough as you get close to the hinge).

I think the discount rate for the things you talk about is probably under 1% per year, so doesn't have a huge effect either way. (Whereas if you think EA capital is going to double again in the next 10 years, then that would double the ideal percentage to distribute.)

Replies from: kokotajlod
comment by kokotajlod · 2021-07-29T16:14:36.385Z · EA(p) · GW(p)

Will do, thanks!

comment by MichaelA · 2021-08-03T13:58:42.068Z · EA(p) · GW(p)

(Just want to say that I did find it a bit odd that Ben's post didn't mention timelines to transformative AI - or other sources of "hingeyness" - as a consideration, and I appreciate you raising it here. Overall, my timelines are longer than yours, and I'd guess we should be spending less than 10% per year, but it does seem a crucial consideration for many points discussed in the post.)

comment by MichaelA · 2021-08-03T14:09:54.809Z · EA(p) · GW(p)

The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.

I think that that question would count Sam Bankman-Fried starting to give at the scale Good Ventures is giving as a positive resolution, and that some forecasters have that as a key consideration for their forecast (e.g., Peter Wildeford's comment suggests that). Whereas I think you're using this as evidence that there'll be another donor at that scale, in addition to both Good Ventures and the FTX team? So this might be double-counting?

(But I only had a quick look at both the Metaculus question and the relevant section of your post, so I might be wrong). 

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-03T21:07:57.927Z · EA(p) · GW(p)

Ah, good point. I only found the Metaculus questions recently and haven't thought about them as much.

comment by Jan-WillemvanPutten · 2021-08-02T08:02:15.367Z · EA(p) · GW(p)

Thanks for this great post; I think it's a must-read for everyone working in the EA meta space.

Some thoughts on the following: 

"I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital."

I think we sometimes forget that these jobs in developing countries usually pay quite well. I wouldn't see earning to give and working in these institutions as opposites. There are jobs that give career capital with earning-to-give potential and that have the ability to have impact (probably after some years). But we should do some more research into the most relevant roles and organisations outside of EA organisations. E.g., I would expect a massive difference in expected impact potential between working for the US Department of Education and the US State Department.

I know the Effective Institutions Project works on a framework to help us make thoughtful judgments about which institutions’ decisions we should most prioritize improving, as well as what strategies are most likely to succeed at improving them. But I think that is just the start: besides resources for (communication of) research on the above-mentioned topics, we also need to upskill EAs (and their colleagues) to make an impact in these jobs and to accelerate their careers from a starter role to an impactful position. This would also enable growth of the EA movement as a whole, since there are plenty of positions offering career capital, earning-to-give potential, and impact potential.

comment by MichaelA · 2021-08-03T14:10:21.825Z · EA(p) · GW(p)

Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.

This reminded me of the following post, which may be of interest to some readers: Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation [EA · GW]

comment by MichaelA · 2021-08-03T14:09:02.275Z · EA(p) · GW(p)

Thanks for this really interesting post! 

Overall I think all the core claims and implications sound right to me, but I'll raise a few nit-picks in comments.

We could break down some of the key leadership positions needed to deploy these funds as follows:

  1. Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
  2. EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
  3. Grantmakers able to evaluate these projects

I agree with all that, but think that that's a somewhat too narrow framing of how researchers can contribute to deploying these funds. I'd also highlight their ability to:

  • Help us sift through the existing ideas for projects, cause areas, "intermediate goals", etc. to work out what would be high-priority/cost-effective (or even just what seems net-positive overall)
  • Generate or sharpen insights, concepts, and/or vocabulary that can help the entrepreneurs, grantmakers, etc. do their work
    • E.g., as a (very new and temporary) grantmaker, I think I've probably done a better job because other people had previously developed the following concepts and terms and some analysis related to them: 
      • information hazards
      • the unilateralist's curse
      • disentanglement research
      • value of movement growth
      • talent constraints vs funding constraints vs vetting constraints [? · GW]
      • (a bunch of other things)
  • Maybe helping refine precise ideas for cause areas, projects, etc. (but I'm less sure what I mean by this)

(That said, I think some other people are more pessimistic than me either about how much research has helped on these fronts or how much it's likely to in future. See e.g. some other parts of Luke's post [EA · GW] or some comments on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? [EA · GW])

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-08-03T21:10:38.431Z · EA(p) · GW(p)

I agree there are lots of forms of useful research that could feed into this, and in general better ideas feel like a key bottleneck for EA. I'm excited to see more 'foundational' work and disentanglement as well. Though I do feel like, at least right now, there's an especially big bottleneck for ideas for specific shovel-ready projects that could absorb a lot of funding.

comment by jared_m · 2021-07-30T10:11:11.041Z · EA(p) · GW(p)

I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital.

 

Those looking to work at the intersection of academia, biorisk, biotech, global health/infectious disease, and philanthropic institutions may wish to look at roles at leading academic medical centers. A few years at Charité; Cleveland Clinic; one of the Harvard affiliates (e.g., the Brigham or MGH); JHU; Mayo Clinic; Toronto General; UCH in London; or another leading institution could give one some surprising flexibility to support EA projects within a well-resourced academic institution.

The following link from this week lists a number of new strategy jobs at Mayo Clinic.  I suspect these roles would have career capital / impact benefits beyond what the brief job descriptions suggest.  https://www.linkedin.com/feed/update/urn:li:activity:6825490031046639616/

comment by Josh Jacobson (joshjacobson) · 2021-07-29T17:49:49.561Z · EA(p) · GW(p)

In 2020, I estimate about 14% net growth, bringing the total number of active EAs to 7,400.

  1. Do you think this growth rate applies to the "Highly-Engaged EAs" classification as well, of which there were estimated to be 2,315 in the 2019 Rethink Priorities analysis?

  2. Is this an estimate for the "Active EAs" at the end of 2020, or as of July 2021?

(Caveat to others that if you look at these estimates in Rethink Priorities' initial 2019 report, you'll find that while they are well-informed, they are quite rough, so precise estimates have limited value.)

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-07-29T21:17:11.035Z · EA(p) · GW(p)
  1. Yes - I wasn't trying to distinguish between the two.

  2. Probably best to think of it as the estimate for 2020 (specifically, it's based on the number of EA survey respondents in the 2019 survey vs. the 2020 survey).

This estimate is just based on one method. Other methods could yield pretty different numbers. Probably best to think of the range as something like -5% to 30%.

Replies from: joshjacobson
comment by Josh Jacobson (joshjacobson) · 2021-07-30T05:16:31.035Z · EA(p) · GW(p)

Thanks, that’s helpful info.

comment by ofer · 2021-08-01T13:08:34.735Z · EA(p) · GW(p)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10 (where this hugely depends on fit and the circumstances).

(Maybe you already think so, but...) it probably also depends a lot on the identity of that "someone" who is donating the $X (even if we restrict the discussion to, say, potential donors who are longtermism-aligned). Some people may have a comparative advantage with respect to their ability to donate effectively such that the EV from their donation would be several orders of magnitude larger than the "average EV" from a donation of that amount.

Replies from: Linch, Benjamin_Todd
comment by Linch · 2021-08-02T21:56:22.591Z · EA(p) · GW(p)

Some people may have a comparative advantage with respect to their ability to donate effectively such that the EV from their donation would be several orders of magnitude larger than the "average EV" from a donation of that amount.


This seems like a fairly surprising claim to me, do you have a real or hypothetical example in mind? 

EDIT: Also I feel like in many such situations, such people should almost certainly become grantmakers!

Replies from: ofer
comment by ofer · 2021-08-03T10:06:16.655Z · EA(p) · GW(p)

This seems like a fairly surprising claim to me, do you have a real or hypothetical example in mind?

Imagine that all the longtermism ~aligned people in the world participate in a "longtermism donor lottery" that will win one of them $1M. My estimate is that the EV of that $1M, conditional on person X winning, is several orders of magnitude larger for X=[Nick Bostrom] than for almost any other value of X.

[EDIT: following the conversation here with Linch I thought about this some more, and I think the above claim is too strong. My estimate of the EV for many values of X is very non-robust, and I haven't tried to estimate the EV for all the relevant values of X. Also, maybe potential interventions that cause there to be more longtermism-aligned funding should change my reasoning here.]

EDIT: Also I feel like in many such situations, such people should almost certainly become grantmakers!

Why? Do you believe in something analogous to the efficient-market hypothesis for EA grantmaking? What mechanism causes that? Do grantmakers who make grants with higher-than-average EV tend to gain more and more influence over future grant funds at the expense of other grantmakers? Do people who appoint such high-EV grantmakers tend to gain more and more influence over future grantmaker-appointments at the expense of other people who appoint grantmakers?

Replies from: Linch
comment by Linch · 2021-08-03T10:13:07.240Z · EA(p) · GW(p)

My estimate is that the EV of that $1M, conditional on person X winning, is several orders of magnitude larger for X=[Nick Bostrom] than for almost any other value of X.

I doubt this is literally true fwiw. If Bostrom, a very high-status figure within longtermist EA, has really good donation opportunities to the tune of 1 million, I doubt it'd be unfunded. I also feel like there are analogous experiments from the past where relatively low-oversight grantmaking power was given to certain high-prestige longtermist EA figures (e.g. here and here). You can judge for yourself whether impact "several orders of magnitude higher" sounds right; personally, I very much doubt it.

Why? Do you believe in something analogous to the efficient-market hypothesis for EA grantmaking

I meant "should" as a normative claim, not an empirical claim. Sorry if I miscommunicated. 

Replies from: MichaelDickens, ofer
comment by MichaelDickens · 2021-08-04T21:21:05.769Z · EA(p) · GW(p)

Some evidence in this direction: Eliezer Yudkowsky recently wrote on a Facebook post:

This is your regular reminder that, if I believe there is any hope whatsoever in your work for AGI alignment, I think I can make sure you get funded.

This implies that all the really good funding opportunities Eliezer is aware of have already been funded, and any that appear can get funded quickly. Eliezer is not Nick Bostrom, but they're in similar positions.

(Note: Eliezer's Facebook post is publicly viewable, so I think reposting this quote here is ok from a privacy standpoint.)

comment by ofer · 2021-08-03T16:33:20.265Z · EA(p) · GW(p)

If Bostrom, a very high-status figure within longtermist EA, has really good donation opportunities to the tune of 1 million, I doubt it'd be unfunded.

Even 'very high-status figures within longtermist EA' can control only a limited amount of funding, especially for requests that are speculative/weird/non-legible from the perspective of the relevant donors. I don't know what the bar for "really good donation opportunities" is, but the relevant thing here is to compare the EV of that $1M in the hands of Bostrom to the EV of that $1M in the hands of other longtermism-aligned people.

Less importantly, you rely here on the assumption that being "a very high-status figure within longtermist EA" means you can influence a lot of funding, but the causal relationship may mostly go in the other direction. Bostrom (for example) probably got his high status in longtermist EA mostly from his influential work, and not from being able to influence a lot of funding.

I also feel like there are analogous experiments from the past where relatively low-oversight grantmaking power was given to certain high-prestige longtermist EA figures (e.g. here and here). You can judge for yourself whether impact "several orders of magnitude higher" sounds right; personally, I very much doubt it.

To be clear, I don't think my reasoning here applies generally to "high-prestige longtermist EA figures". Though this conversation with you made me think about this some more and my above claim now seems to me too strong (I added an EDIT block).

Replies from: Linch
comment by Linch · 2021-08-03T18:18:29.284Z · EA(p) · GW(p)

Though this conversation with you made me think about this some more and my above claim now seems to me too strong (I added an EDIT block).

I'm glad to cause an update! Hopefully it's in the right direction! :) 

Even 'very high-status figures within longtermist EA' can control a limited amount of funding, especially for requests that are speculative/weird/non-legible from the perspective of the relevant donors. I don't know what's the bar for "really good donation opportunities", but the relevant thing here is to compare the EV of that $1M in the hands of Bostrom to the EV of that $1M in the hands of other longtermism aligned people.

Again, there are proxies you can look at, like what Carl Shulman donates to vs what actual winners of the donor lottery donate to. But maybe you don't consider this much evidence, if you posit that Nick Bostrom specifically has unusually high discernment, specifically enough to donate to things in the band of activities that are "speculative/weird/non-legible" from the perspective of the relevant donors, but not speculative/weird/non-legible enough that the donor lottery administration won't permit this.

I guess my rejoinder here is just an intuitive sense of disbelief? Several (say >=3?) orders of magnitude above 1 million gets you >1B, and as can be deduced in the figures in the post above, this is already well over the annual long-termist spending every year. If we believe that Nick Bostrom can literally accomplish much more good with 1 million than money allocated by the rest of the longtermist EA movement combined (including all money sent to FHI, where he works), isn't this really wild? Also why aren't we sending more money to Nick Bostrom to regrant?

(Though perhaps you came to the same conclusion by now). 

 Less importantly, you rely here on the assumption that being "a very high-status figure within longtermist EA" means you can influence a lot of funding, but the causal relationship may mostly be going in the other direction

I'm confused about what you're saying here. P(B|do A) is not evidence against P(A|B), except in very rare circumstances.

Replies from: ofer
comment by ofer · 2021-08-04T17:24:18.700Z · EA(p) · GW(p)

But maybe you don't consider this much evidence, if you posit that Nick Bostrom specifically has unusually high discernment, specifically enough to donate to things in the band of activities that are "speculative/weird/non-legible" from the perspective of the relevant donors, but not speculative/weird/non-legible enough that the donor lottery administration won't permit this.

My reasoning here is indeed based specifically on the track record of Nick Bostrom. (Also, I'm imagining here a theoretical donor lottery where the winner has 100% control over the money that they won.)

I guess my rejoinder here is just an intuitive sense of disbelief? Several (say >=3?) orders of magnitude above 1 million gets you >1B, and as can be deduced in the figures in the post above, this is already well over the annual long-termist spending every year. If we believe that Nick Bostrom can literally accomplish much more good with 1 million than money allocated by the rest of the longtermist EA movement combined (including all money sent to FHI, where he works), isn't this really wild?

I was not comparing $1M in the hands of Bostrom to $1B in the hands of a random longtermism-aligned person. (The $1B would plausibly be split across many grants, and it's plausible that Bostrom would end up controlling way more than $1M out of it.)

As an aside, without thinking about it much, it seems to me that the EV from the publication of the book Superintelligence is plausibly much higher than the total EV from everything else that was accomplished by the rest of the longtermist EA movement so far. (I can easily imagine myself updating away from that if I try to enumerate the things that were accomplished by the longtermist EA movement).

Also why aren't we sending more money to Nick Bostrom to regrant?

To answer this I think that the word "we" should be replaced with something more specific. Why don't grantmakers at longtermism-aligned grantmaking orgs send more money to Bostrom to regrant? One response is that there is probably nothing analogous to the efficient-market hypothesis for EA grantmaking (see the last paragraph here [EA(p) · GW(p)]). Also, the grantmakers are in implicit competition with each other over influence on future grant funds. A grantmaker who makes grants that are speculative, weird, non-legible or have a high probability of failing may tend to lose influence over future grant funds, and perhaps reduce the amount of future longtermist funding that their org can give.

Imagine that Bostrom uses the additional $1M to hire another assistant, or some manager for FHI, which simply results in Bostrom being a bit more productive. Looking at this through the lens of the grantmakers' incentives, how would that $1M grant compare to the average LTFF grant?

I'm confused about what you're saying here. P(B| do A) is not evidence against P(A|B), except in very rare circumstances.

If we estimate P(A|B) based on a correlation that we observe between A and B, then the existence of a causal relationship from A to B is indeed evidence that should update our estimate of P(A|B) towards a lower value.

comment by Benjamin_Todd · 2021-08-03T09:21:33.508Z · EA(p) · GW(p)

In the hypothetical setup, it's in theory the same person each time – since the comparison is between the same person earning to give or working in one of these roles.

Replies from: Linch
comment by Linch · 2021-08-05T00:23:57.906Z · EA(p) · GW(p)

I don't think this addresses ofer's objection if I understand it correctly (but then again the length of our back and forth comments is maybe strong evidence against me understanding the objection correctly!)

comment by notarealusername · 2021-07-31T23:07:52.696Z · EA(p) · GW(p)

Hi Ben. I came across this article sort of at random and wanted to weigh in. 

I'm senior management at a for-profit (non-EA-affiliated) company. In principle, the idea of EA is very appealing to me. I absolutely agree that doing good "correctly" is really important. Prior to the last couple of years, I could absolutely have seen myself joining an EA org.

But over those years, as my exposure within Silicon Valley and the kind of groups that overlap heavily with EA (e.g. "rationalists") has grown, I've become more reluctant to support it. Bluntly, I don't trust groups with very high proportions (perhaps majorities?) of people who believe in some form of 'scientific racism' to solve the problems of the world's most vulnerable, who are almost entirely of races they consider biologically incapable of governing themselves. Nor do I trust groups that are so eager to deny what is (to me) self-evident problems with the local culture to solve problems within other cultures (especially ones they consider inferior).

"Effective" altruism requires a notion of what "effect" is. And when I find myself surrounded by people who seem so determined to ignore the stated and clear needs of the people around them, I am concerned that the "effect" they want is not the "effect" I want. How can I trust someone who won't see sexism right in front of him (choice of pronoun very much intentional) to not exacerbate sexism through his efforts? How can I trust someone who thinks Africans are genetic cretins to have the cultural respect to help them build a functioning society within the structures that make sense to them? Paternalism and a refusal to understand conditions on the ground is the king of all altruistic failure-modes.

I get that there's a lot of stupid takes on the broader tech world (which I would consider EA to be a part of) and on the kind of people in it. All that "you can't reduce feelings to numbers" nonsense is dumb. All the "white men are trying to help brown people and that's racist" takes are dumb. I don't want to throw the baby totally out with the bathwater. But the longer I'm around, the bigger I think these problems are, and the less welcome I feel, both for what I am and for what I believe. Management is a social discipline, and its practitioners do not particularly enjoy having the importance of social factors (which we know are vital even in very small organizations, much less in societies of millions) dismissed.

I don't know that I have a solution to offer you here. For myself, I'm increasingly of the view that founder effects have unfortunately tainted the movement beyond repair. But maybe my voice can at least give you a sense of where some of your problems with finding people-oriented talent lie.

Replies from: tessa, Khorton, evelynciara
comment by tessa · 2021-08-02T17:39:17.611Z · EA(p) · GW(p)

Appreciate you sharing why you have a negative impression of the Effective Altruism movement and aren't interested in joining an EA org; you might be getting downvoted under the "clear, on-topic, and kind" comment guideline, but I'm not sure. In my own experience, there sure are lots of frustrating Silicon Valley memes that are overly dismissive of social factors (or of sexism and racism) out in the world, but they aren't dominant among people actually doing direct EA-affiliated work. As a few recent examples that demonstrate a sensitivity to the importance of social factors, I enjoyed this 80,000 Hours Podcast with Leah Garcés on strategic and empathetic communication for animal advocacy and this post on surprising things learned from a year of working on policy to eliminate lead exposure in Malawi, Bostwana, Madagascar and Zimbabwe [EA · GW].

comment by Khorton · 2021-08-05T22:50:05.866Z · EA(p) · GW(p)

I'm surprised to see this so heavily downvoted - I've also had concerns about EA culture with regards to sex and race, and I wouldn't be surprised if it puts off people with some of the soft skills EA is missing. This comment definitely exaggerates and I'm not happy about that, but the underlying idea - that people who are good at navigating social dynamics are wary of EA, which is contributing to the talent gap - is pretty interesting.

comment by evelynciara · 2021-08-02T18:19:06.496Z · EA(p) · GW(p)

Hi! Like Tessa, I appreciate you sharing your concerns about the EA movement. I downvoted because some of your criticisms seem off the mark to me. Specifically, in the two years I've been highly involved in EA, I haven't heard a single person say that non-white people are "biologically incapable of governing themselves." The scientific consensus is that "claims of inherent differences in intelligence between races have been broadly rejected by scientists on both theoretical and empirical grounds" (Wikipedia), so it seems like a bizarre thing for an EA to say. Do you mind telling us where you've heard someone in the EA community say this?

Replies from: notarealusername
comment by notarealusername · 2021-08-02T21:33:17.056Z · EA(p) · GW(p)

Sure. To take one concrete example, I know this is an explicit belief of Scott Alexander's (author of SlateStarCodex/AstralCodexTen and major LessWrong contributor - these are the two largest specific sources of EA growth beyond generics like "blog" or 80k hours itself, per this breakdown [EA · GW], and were my own entry point into awareness of EA). This came out through a series of leaked emails which cite people like Steve Sailer (second link under "1. HBD is probably partially correct") in a general defense of neoreactionaries. Yes, these emails are old, but (a) he's made no effort to claim they're incorrect and (b) he's very recently defended people like Steve Hsu, who explicitly endorse HBD on the grounds that that is a valid theory that deserves space for advocacy. I also know Scott and his immediate associates personally, and their defenses of his views to me personally made no effort to pretend his views were otherwise.

When this fact came out, I was quite horrified and said as much. I assumed this would be a major shock. Instead, I was unable to find a single member of the Berkeley rationalist community who had a problem with it. I asked quite a few, and all of them (without exception) endorsed a position that I can roughly sum up as "well sure, the fact is that black people are/probably are genetically stupid, but we're not mean and just stating a fact so it's fine". This included at least one person involved heavily with planning EA Global events here in the Bay Area, and included every single person I know personally who has even the loosest affiliation with EA. To my knowledge, not one of these people has a problem with explicit endorsement of the belief that black people are genetically stupider than white people.

To be clear, I don't think that makes them insincere. I believe they believe what they're saying, and I believe that they are sincerely motivated to make the world better. That's why I was part of that community in the first place - the people involved are indeed very kind and pleasant in the day to day, to the point that this ugliness could hide for a long time. So I don't think stuff like "it seems like a bizarre thing for an EA to say" applies: I think they basically think that being effective requires facts and that 'scientific racism' is a fact or at least probable fact. There's nothing inconsistent about that set of beliefs, abhorrent though it is to me.

Replies from: SamiM
comment by SamiM · 2021-08-05T16:04:10.210Z · EA(p) · GW(p)

Hey, I thought this discussion could use some data. I also added some personal impressions.

These are the results of the 2020 SSC survey.

For the question "How would you describe your opinion of the [sic] the idea of "human biodiversity", eg the belief that races differ genetically in socially relevant ways?"

On a scale where 1 is "Very unfavorable" and 5 is "Very favorable", 20.8% answered 4 and 8.7% answered 5.

The answers look similar for 2019.

Taking that at face value, about 30% of Scott's readers think favorably of "HBD".

(I guess you could look at it as "80% of SSC readers fail to condemn scientific racism". But that doesn't strike me as charitable.)

From the same survey, 13.6% identified as EAs, and 33.4% answered "sorta EA".

I should mention that the survey has some nonsensical answers (IQs of 186, verbal SATs of 30). And respondents appear to lean liberal (identifying as liberals, thinking favorably of feminism and more open borders, while thinking unfavorably of Trump).


A while ago, Gwern wrote [LW(p) · GW(p)]

“... If HBD is true, then all the existing correlational and longitudinal evidence immediately implies that group differences are the major reason why per capita income in the USA are 3-190x per capita income in Africa, that group differences are a major driver of history and the future, that intelligence has enormous spillovers totally ignored in all current analyses. This has huge implications for historical research, immigration policy (regression to the mean), dysgenics discussions (minor to irrelevant from just the individual differences perspective but long-term existential threat from HBD), development aid, welfare programs, education, and pretty much every single topic in the culture wars touching on 'sexism' or 'racism' where the supposedly iron-clad evidence is confounded or based on rational priors.”
 

I’m trying to imagine which global development charities EAs who believe in HBD donate to, and I’m having a hard time.
Assuming this implies that some EAs (1-5%?) believe it, I would reckon they're more focused on x-risks or animal welfare. It would be helpful to see how the people who identified as EAs answered this question.

Finally, regarding Scott's email (the sharing of which I think was a horrible violation of privacy): the last sentence is emblematic of the attitude lots of people in the community (including myself) have. My Goodreads contains lots of books I expect to disagree with or be offended by (Gyn/Ecology, The Bell Curve), but I still think it's important to look into them.

Valuing new insights sometimes means looking into things no one else would, and that has been very useful for the community (fish/insect welfare, longtermism). But one risk is that at least some people will come out believing (outrageously) wrong things. I think that risk is worth it.

On a personal note, I’m black, and a community organizer, and I haven't encountered anything but respect and love from the EA community.

 

Replies from: MaxRa
comment by MaxRa · 2021-08-06T11:48:01.105Z · EA(p) · GW(p)

Great comment!

I’m trying to imagine what global development charities EAs who believe HBD donate to, and I’m having a hard time.

I don’t totally follow why "the belief that races differ genetically in socially relevant ways" would lead one not to donate to, for example, the Against Malaria Foundation or GiveDirectly. Assuming there were, for example, a (slightly?) lower average IQ, it seems to me that less malaria or more money would still do most of what one would hope for, and what the RCTs say they do, even if you might expect (slightly?) lower economic growth potential and, in the longer term, (slightly?) less potential for those regions to become hubs of highly specialized skilled labor.

Replies from: SamiM
comment by SamiM · 2021-08-06T12:45:57.100Z · EA(p) · GW(p)

I think you're right. I guess I took Gwern's comment at face value and tried to figure out how development aid would look different due to the "huge implications", which was hard.