New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z · score: 99 (50 votes)
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z · score: 142 (69 votes)
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z · score: 63 (64 votes)
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z · score: 17 (17 votes)
80,000 Hours annual review released 2017-12-27T20:31:05.395Z · score: 10 (10 votes)
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z · score: 9 (9 votes)
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z · score: 18 (20 votes)
If you want to disagree with effective altruism, you need to disagree with one of these three claims 2016-09-25T15:01:28.753Z · score: 31 (24 votes)
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z · score: 13 (15 votes)
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z · score: 12 (14 votes)
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z · score: 13 (13 votes)
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z · score: 4 (14 votes)
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z · score: 14 (16 votes)
Why we need more meta 2015-09-26T22:40:43.933Z · score: 22 (34 votes)
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z · score: 10 (9 votes)
A new response to effective altruism 2015-09-12T04:25:43.242Z · score: 3 (3 votes)
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z · score: 6 (6 votes)
The career questions thread 2015-06-20T02:19:07.131Z · score: 13 (13 votes)
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z · score: 17 (18 votes)
Two interviews with Holden 2014-10-03T21:44:12.163Z · score: 7 (7 votes)
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z · score: 5 (5 votes)
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z · score: 9 (6 votes)
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z · score: 0 (0 votes)
Show me the harm 2013-08-06T04:00:52.000Z · score: 3 (3 votes)


Comment by benjamin_todd on Collection of good 2012-2017 EA forum posts · 2020-07-11T15:53:26.205Z · score: 13 (8 votes) · EA · GW

I agree the old posts get neglected, thanks for putting this together.

I'd also nominate more of Greg's old posts, such as this one:

Comment by benjamin_todd on EA Survey 2019 Series: How many people are there in the EA community? · 2020-06-26T13:08:40.055Z · score: 11 (5 votes) · EA · GW

Thank you for writing this up!

Comment by benjamin_todd on MathiasKirkBonde's Shortform · 2020-06-12T21:24:48.347Z · score: 10 (6 votes) · EA · GW

Some thoughts here on how quick it is to learn:

In there, I guess that 6-18 months of full-time study in the country is enough to get to conversational fluency.

I've seen other estimates that it takes a couple of thousand hours to get fluent e.g. here:

My guess is that it's more efficient to study full time while living in the country. Living there increases motivation, teaches you the vocabulary you actually need, lets you learn a lot 'passively', gives you far more conversation practice (which is better than most book learning), and immerses you in the culture. So I'd guess someone would make more progress living there for a year than doing an hour a day for ~4 years, and enjoy it more.

That said, if you use the hour well, you could learn a lot of vocab and grammar. You could then get a private tutor to practice conversation, or go to China (or Taiwan) later, building on that base.

Comment by benjamin_todd on Does 80,000 Hours focus too much on AI risk? · 2020-06-06T14:14:02.961Z · score: 2 (1 votes) · EA · GW

Also see this clarification of how much we focus on different causes.

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-06-04T12:35:03.620Z · score: 6 (3 votes) · EA · GW

Yes, I agree with that too - see my comments later in the thread. I think it would be great to be clearer that the arguments for xrisk and longtermism are separate (and neither depends on utilitarianism).

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-06-03T22:04:22.512Z · score: 2 (1 votes) · EA · GW

I agree!

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-06-03T16:17:09.739Z · score: 10 (6 votes) · EA · GW

FWIW I'd still favour two posts (or if you were only going to write one, focusing on longtermism). I took a quick look at the original list, and I think the arguments divide up pretty well, so you wouldn't end up with many that should appear on both lists. I also think it would be fine to have some arguments appear on both lists.

In general, I think conflating the case for existential risk with the case for longtermism has caused a lot of confusion, and it's really worth pushing against.

For instance, many arguments that undermine existential risk actually imply we should focus on (i) investing & capacity building (ii) global priorities research or (iii) other ways to improve the future, but instead get understood as arguments for working on global health.

Comment by benjamin_todd on [updated] Global development interventions are generally more effective than Climate change interventions · 2020-06-03T16:08:11.422Z · score: 8 (2 votes) · EA · GW

Great, thank you for this! Look forward to seeing more work also.

And just a quick thought: if we know the SCC (social cost of carbon) for Africa (looks like ~$10/tonne), and it's defined in the way you say, then we could also do the comparison directly with the Africa-SCC figure, rather than converting into US equivalents first, e.g.:

  • 1 tonne of CO2 averted -> equivalent to $10 of consumption in Africa
  • If it costs $1 to avert a tonne, then $1 -> $10 of consumption
  • $1 cash transfer -> $1 of consumption in Africa (or maybe ~$5 to a GiveDirectly recipient)
  • $1 to AMF -> ~$50 of African-consumption-equivalent (thinking of it as 10x GiveDirectly)

So with these figures, carbon offsets are better than cash transfers, but AMF is 5x better than carbon offsets.
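A minimal sketch of the consumption-equivalent comparison above (every dollar figure is a rough assumption from this thread, not measured data):

```python
# Rough consumption-equivalent comparison; all figures are assumptions
# taken from the comment above, not measured data.
scc_africa = 10        # $ of African consumption lost per tonne of CO2
cost_per_tonne = 1     # assumed $ cost to avert one tonne
givedirectly_mult = 5  # ~$5 of consumption per $1 to a GiveDirectly recipient
amf_mult_vs_gd = 10    # AMF assumed ~10x GiveDirectly

offset_value = scc_africa / cost_per_tonne      # $10 of consumption per $1
cash_value = givedirectly_mult                  # $5 of consumption per $1
amf_value = givedirectly_mult * amf_mult_vs_gd  # $50 of consumption per $1

print(offset_value, cash_value, amf_value)  # 10.0 5 50
print(amf_value / offset_value)             # 5.0: AMF ~5x carbon offsets
```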

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-06-02T16:08:06.181Z · score: 12 (4 votes) · EA · GW

Strongly agree - I think it's really important to disentangle longtermism from existential risk from AI safety. I might suggest writing separate posts.

I'd also be keen to see more focus on which arguments seem best, rather than having such a long list (including many that have a strong counter, or are no longer supported by the people who first suggested them), though I appreciate that might take longer to write. A quick fix would be to link to counterarguments where they exist.

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-06-02T15:59:24.380Z · score: 10 (4 votes) · EA · GW

I don't think longtermism depends on either (i) valuing future people equally to presently alive people or (ii) total utilitarianism (or utilitarianism in general), so I don't think these are great counterarguments unless further fleshed out. Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

Comment by benjamin_todd on [updated] Global development interventions are generally more effective than Climate change interventions · 2020-06-02T15:50:14.007Z · score: 12 (3 votes) · EA · GW

Hey Hauke,

That makes sense.

I do think more EA work on this topic would be useful for someone to do, since I don't think it's clear from a near-termist perspective that global health is more effective than climate change.

On Guesstimate, there was an error and I was unable to save my model. If someone is looking to reproduce this, though, I'd suggest they just make their own.

On the value of money to Americans vs. GiveDirectly recipients, my personal estimate was a lower ratio, because I think we should take into account some flow-through effects, and I think these cause convergence. I don't think values like 10,000x are plausible for the all-considered tradeoff (even though the ratio could be 10,000x if we're just considering the welfare of two individuals). More here:

I was probably being unclear, but my analysis was not supposed to give confidence intervals, just my best guess and extreme scenarios.

I'm still a bit unclear how useful these are due to Rob's point.

Comment by benjamin_todd on What are the leading critiques of "longtermism" and related concepts · 2020-05-30T23:04:56.558Z · score: 20 (8 votes) · EA · GW

This is not exactly what you're looking for, but the best summary of objections I'm aware of is from the Strong Longtermism paper by Greaves and MacAskill.

Comment by benjamin_todd on [updated] Global development interventions are generally more effective than Climate change interventions · 2020-05-28T13:18:30.752Z · score: 17 (5 votes) · EA · GW

I think working this through on Guesstimate rather than multiplying point estimates is really important.

I tried doing it myself with similar figures, and I found that climate change came out ~80x better than global health (even though my point estimate was that global health is better), which suggests the title of the article could maybe use editing!

When you're dealing with huge uncertainties like these, the tails of the distribution can drive the EV, so point estimates can be pretty misleading.
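A toy Monte Carlo (not Hauke's actual model) showing the mechanism: when you multiply several wide lognormal parameters, the mean of the product ends up many times the product of the medians, so a point-estimate product understates the expected value.

```python
import math
import random
import statistics

random.seed(0)

# Three uncertain multiplicative parameters, each lognormal with median 1
# and a wide 90% interval (~10x above the median at the 95th percentile).
sigma = math.log(10) / 1.645

def draw() -> float:
    return math.exp(random.gauss(0.0, sigma))

samples = [draw() * draw() * draw() for _ in range(100_000)]

point_estimate = 1 * 1 * 1           # product of the medians
mc_mean = statistics.fmean(samples)  # expected value from the simulation
median = statistics.median(samples)

# The simulated mean is far above both the point estimate and the median:
# the right tail drives the expected value.
print(point_estimate, round(median, 2), round(mc_mean, 1))
```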

Here's a screenshot of the model:

I also tried doing the calculations in a different way that I found more intuitive, where I estimate the 'utils' of each intervention:

Some other reasons in favour of this approach:

  • Rob's point that by multiplying together extreme values, your confidence intervals are unreasonably wide.
  • Some of the confidence intervals you give for the individual parameters also seem too wide (and seem to not be mathematically possible to fit to a lognormal distribution).
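On the last bullet: a lognormal can be fitted to any strictly positive 90% interval, so the fit only becomes impossible when a bound is zero or negative. A quick sketch (the function name is mine, not from the post):

```python
import math

Z95 = 1.6448536269514722  # 95th percentile of the standard normal

def fit_lognormal_90ci(lo: float, hi: float) -> tuple[float, float]:
    """Return (mu, sigma) of the lognormal whose 5th/95th percentiles are lo/hi.

    A lognormal is strictly positive, so intervals with a non-positive
    lower bound cannot be fitted; that is the impossibility noted above.
    """
    if lo <= 0 or hi <= lo:
        raise ValueError("need 0 < lo < hi for a lognormal fit")
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * Z95)
    return mu, sigma
```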
Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-05-27T14:39:55.915Z · score: 11 (3 votes) · EA · GW

I’ve discussed this post with a couple of people, and realised it’s unclear about where I think the clearest gaps actually are, so I thought I’d add a list.

Each career service idea needs to pick (i) an audience (ii) a set of programmes (iii) a set of causes. You can make different ideas by combining these 3 factors.

Some of the gaps within each factor that we’ll likely leave unfilled include:

i) Audience – there are several audience groups that 80k won’t reach for some time. For instance, careers advice for people over 40 could be useful to increase age diversity in the movement, and find more experienced people, which is a key gap. You could also pick out an audience with a cause area e.g. ‘careers advice for EAs who want to work on global health.’ Another clear group is services for other countries we’re not going to cover (e.g. a German job board), or perhaps focused on certain career paths. Some of the most valuable audiences that we’re not ideally suited to are groups like academics and policy makers, though it’s hard to credibly work with these groups unless you’re a member of this audience. I would also like to see more people working on student groups at top universities. Each of these audiences is clearly differentiable from 80k’s focus.

ii) Programmes – It seems to me like the biggest bottlenecks are around one-on-one advice and headhunting, since we just don’t have enough staff to cover everyone worth talking to, and this means these people don’t get direct help. We’re also not going to get to specialist content outside our priority problem areas for a while, such as in-depth guides to global health careers, or a guide to how to switch career mid-career. On the other hand, we plan to continue to provide more general purpose written content (e.g. advice on high level principles like career capital).

iii) Causes – I think the highest priority is for people to fill programming gaps that we’re leaving open within our priority areas and other promising areas (e.g. better advice for people who want to work on nuclear security). However, there are also some issues we’re not going to cover for some time so you could also fill a gap by replicating one of our existing programs for one of those areas. Global health is perhaps the most obvious example, but you might also want to include some longtermist areas here, such as reducing great power conflict.

Both of the new groups I mentioned in the main post match all these factors pretty well.

Another factor is how easily new programmes can fit into and serve as a multiplier on the existing infrastructure. For instance, specialists in specific topics and causes are fairly easy to slot in, since we and others can simply link or refer people to them when someone needs help with those areas.

On the other hand, I think starting a new job board aimed at the effective altruism community is less of an obvious gap, since the 80k board does cover multiple cause areas (including listing 110 global health jobs and 50 factory farming jobs currently). This argument holds less for job boards aimed at a particular country or cause.

Of course, it may eventually be better to have a direct competitor to the 80k job board (or other core programmes), especially if most of the biggest gaps have already been filled.

Another general thought: my personal advice is to start by doing one thing well, and then broaden over time. I think 80k started by doing too many things at once, and we could have gone faster if we’d started more focused. It also makes it much easier for other groups to coordinate with you.

If you’re considering going ahead with a new organisation to solve a gap, we’d love to hear about it, in case it’s a gap we can quickly plug, e.g. we’re open to considering putting up extra articles, making edits to old articles, or making tweaks to our programmes (though we don’t have a ton of spare capacity and aren’t likely to change any of our main focuses).

Comment by benjamin_todd on [updated] Global development interventions are generally more effective than Climate change interventions · 2020-05-24T19:55:19.510Z · score: 4 (2 votes) · EA · GW

I'm still a bit worried about this.

It would have been reasonable for them to use the mean global income as the baseline, rather than dollars to the mean US citizen.

If I understand correctly, that would boost things by about a factor of 3 in favour of climate change (mean global income is about $20k, vs. mean US income of about $60k). Though, I suppose that's a fairly small uncertainty compared to the others listed here.

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-14T13:36:53.712Z · score: 4 (2 votes) · EA · GW

Yes, unfortunately the leaders forum survey didn't ask about it as its own category, so it's merged into the others you mention.

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T16:47:57.062Z · score: 13 (8 votes) · EA · GW

I agree, I don't like the near-termism vs. longtermism terms, since I think it makes it sound like a moral issue whereas it's normally more about epistemology or strategy, and like you say for most people it's a matter of degree. I hope we can come up with better terms.

I also agree people should be clear about 'causes' vs. 'worldviews'. You could be longtermist in your worldview but want to work on economic empowerment, and you could be near termist but want to work on AI GCRs.

I did my analysis in terms of causes, though my impression is that results are similar when we ask about worldviews instead (because in practice causes and worldviews are reasonably correlated).

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:12:18.051Z · score: 16 (11 votes) · EA · GW

I'm not aiming to take a stance on how important representativeness is. My goal is to get people to focus on what I see as the bigger issue we face today: how should we design a community when the new members and the "middle" have mainstream cause priorities and the "core" have (some) unusual ones?

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:07:14.099Z · score: 7 (4 votes) · EA · GW

Hey Khorton, I didn't mean to imply that. I think the last paragraphs still stand as long as you assume that we'll want some of the core of EA to work on unusual causes, rather than 100%.

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:03:44.597Z · score: 2 (1 votes) · EA · GW

I'm also not sure I know what you mean.

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:03:27.979Z · score: 31 (14 votes) · EA · GW

Hi Denise, on the second point, I agree that might be a factor (I mention it briefly in the article) among others (such as people changing their minds as in David's data). My main point is that this means the problem we face today is more like "people are bouncing off / leaving EA because the most engaged ~2000 people focus on unusual causes" rather than "the leaders don't represent the most engaged 2000 people".

Comment by benjamin_todd on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T13:59:11.385Z · score: 3 (2 votes) · EA · GW

Thank you for preparing these - very interesting!

Comment by benjamin_todd on Racial Demographics at Longtermist Organizations · 2020-05-04T23:00:12.543Z · score: 7 (6 votes) · EA · GW

Yes we do track these, and have a brief note about it in the 2019 annual review.

Comment by benjamin_todd on Racial Demographics at Longtermist Organizations · 2020-05-04T17:36:51.895Z · score: 6 (9 votes) · EA · GW

Hi there,

Thank you for the data. 80,000 Hours agrees that improving diversity is important. We made a statement to this effect in our 2017 annual review, and have given updates in 2018 and 2019.

To update on our figures, we’ll soon have 13 core full-time staff, 2 of whom will be POC (15%). We made job offers to both before February, though we’re still working on visas and one only recently started, which is why the meet the team page is out of date.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-28T12:24:05.397Z · score: 2 (1 votes) · EA · GW

The very short answer for why we prioritise it is that I think tail risks from climate change have a stronger claim to be in the longtermist portfolio than global health and factory farming (e.g. see The Precipice). I'd need to think more about what exactly has changed since 2017-2018.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-28T12:19:38.837Z · score: 2 (1 votes) · EA · GW

We have 12.7 FTE of full-time staff, and 1.4 FTE of freelancers.

FTE = full-time-equivalent.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T13:29:38.326Z · score: 39 (14 votes) · EA · GW

It’s true a lot of my reply was about the communications challenges, but that’s because they’re harder to explain; I didn’t mean to imply they were the most significant reason for our strategy. The first and probably most important tradeoff I mentioned was:

Going broader makes the community more inclusive and so potentially larger, but at the cost of more of our effort going on areas that we think have far lower expected impact.

This is a huge topic, so I’m not going to be able to debate it here, but I wanted to flag that this is probably the key driver of our views (e.g. the question of how much causes differ in effectiveness, as you mention at the end).

However, I did want to respond to your point that you think it’s clear 80k was doing something wrong in the past, and in particular that we had broken bonds of trust with our readership. I’m sad to hear you think our lack of ‘cause impartiality’ has betrayed our readers’ trust. 80k’s key mission and responsibility is to improve the lives of others as much as possible. This requires us to prioritize between causes.

I actually think that being clear about what we’re prioritizing is an important part of fulfilling our duty to readers, and more importantly, it’s central to our duty to those whose lives we’re seeking to improve.

You might think that despite that being our overall goal, the way we should be achieving it is supporting the EA community. That could mean prioritising content according to average views across the whole community rather than using our own judgement or the judgement of those working in global priorities research (as well as doing the other things you mention, which seem like good suggestions). However, I don’t think that’s the best way to pick causes, and our readership also extends beyond the EA community, so we would still run into the problem with the wider audience.

I think that problems mainly arise if you mix these two strategies, rather than picking one and being clear about it.

I agree that in the past we sometimes portrayed ourselves as a more general source of EA careers advice than we were. I regret if this led people to be disappointed, or slowed down the creation of alternative sources of advice. In recent years we’ve been clearer about our role, and the fact that our site is about our priorities. We want our readers to be able to decide for themselves whether it’s useful to them.

We’ve found the impression of our site as more general to be hard to shift among the broader community, unfortunately. That’s why I wanted to write a post here to lay out our views and priorities clearly. I hope this will make it easier for community organisers to understand what 80k can provide over the next couple of years.

I appreciate it would be simpler if 80k could be a one-stop shop for EA careers advice, though we have worked around this problem in other areas e.g. GiveWell is the largest ‘where to donate’ org, but it only covers global poverty.

On the solutions for the EA community, I should have been clearer that I think where we most need multiple orgs is when it comes to one-on-one support and specialised content. We simply don’t have capacity to cover every cause area, and won’t for some time. We still provide general purpose advice (e.g. on principles like career capital), and as noted, we’re also often happy to link to or send people to other sources of advice, so can still act as a clearinghouse.

I also have the aspiration that 80k gets broader over time, so I hope we cover some of these gaps ourselves. However, the process will be faster if there are other groups involved.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T13:24:21.276Z · score: 3 (2 votes) · EA · GW

It is a top priority, though we only have one full-time writer at the minute, so it may still take a while.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T22:08:04.544Z · score: 4 (3 votes) · EA · GW


I changed it to "mainly focused".

The EA survey is run by a different group - I would also like them to publish it soon :)

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T22:05:55.542Z · score: 12 (5 votes) · EA · GW

Hi there, I think how quickly to hire is a really complex question. It would be best to read the notes on how quickly we think we should expand each of our programmes in our annual review as well as some of the comments in the summary.

Just quickly on the comparison with GiveWell, I think we're on a fairly similar trajectory to them, except that GiveWell started 4-5 years earlier, so it might be more accurate to compare us to GiveWell in 2015. We are planning to reach ~25 staff, though it will take several more years. Another difference is that we allocate across a wider range of programmes (headhunting, advising, job board etc.), so even if we were the same size as GiveWell, we wouldn't be doing as much research and content.

The out-of-date content is a problem that bugs me, though. One improvement we've made recently is that all the bottom lines are now kept up-to-date on the key ideas page.

Comment by benjamin_todd on Empirical data on value drift · 2020-04-26T14:04:14.182Z · score: 3 (2 votes) · EA · GW

I think that's basically right, though I also have the intuition that drift from the very early days will be higher, since at that point it was undecided what EA even was, and everyone was new and somewhat flung together.

Comment by benjamin_todd on Empirical data on value drift · 2020-04-26T14:01:15.034Z · score: 4 (2 votes) · EA · GW

This is really helpful, thanks.

It's interesting to note that it's now two years later, and I don't think the picture above has really changed.

So the measured marginal drift rate is ~0%.

On the previous estimate of 25% leaving after 6.5 years, that's about 5% per year, which would have predicted 1.4 extra people leaving in two years.
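Spelled out (the sample size of 14 is my assumption, chosen so that the "1.4 extra people" figure comes out; the comment rounds the annual rate up to ~5%):

```python
# Back-of-envelope behind the prediction above. The sample size is an
# assumption on my part; the comment only implies it via "1.4 extra people".
years_observed = 6.5
fraction_left = 0.25

annual_rate = fraction_left / years_observed  # ~= 0.038, rounded up to ~5%
rounded_rate = 0.05                           # the figure used in the comment

sample_size = 14  # assumed
predicted_departures = rounded_rate * 2 * sample_size  # over two years

print(round(annual_rate, 3), round(predicted_departures, 1))  # 0.038 1.4
```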

Of course these are tiny samples, but I think our expectation should be that 'drift' rates decrease over time. My prior is that if someone stays involved from age 20 to age 30, there's a good chance they stay involved for the rest of their career; my best guess is that they then stay involved for another 10 years.

If I eyeball the group above, my guess is that this pattern also holds if we look back further i.e. there was more drift in the early years among people who were involved for less time.

One small comment on the original analysis is that in addition to how long someone has already been involved, I expect 'degree of social & identity involvement' to be a bigger predictor of staying involved than 'claimed level of dedication', e.g. I'd expect someone who works at an EA org to be more likely to stay involved than someone who says they intend to donate 50% but doesn't have any good friends in the community. It would be cool to try to do an analysis more based around that factor, and it might reveal a group with lower drop out rates. The above analysis with CEA is better on these grounds but could still be divided further.

Comment by benjamin_todd on How Much Leverage Should Altruists Use? · 2020-04-23T16:17:25.813Z · score: 2 (1 votes) · EA · GW

My estimates came from the book Global Asset Allocation by Meb Faber. I expect it's less rigorous than the paper you link to, so I suppose we should trust the paper more.

I did find the results of the paper pretty surprising, though. It just makes a lot of intuitive sense that bonds will anticorrelate with equities during recessions and real assets will anticorrelate during inflation shocks, which should reduce the risk quite a bit.

Also, all the other estimates I've seen show that adding bonds to an all-equity portfolio significantly increases the Sharpe ratio (usually from ~0.3 to ~0.4). (And adding real assets helps too, though those estimates are less common.)
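A toy illustration of that diversification effect (made-up return series, not Faber's data): mixing in an anticorrelated, lower-return asset can still raise the portfolio's Sharpe ratio.

```python
import statistics

# Hypothetical annual excess returns, constructed so bonds tend to do
# well when equities do badly (the anticorrelation discussed above).
equity = [0.15, -0.20, 0.25, 0.10, -0.05, 0.30, -0.10, 0.12]
bonds  = [0.02,  0.06, 0.01, 0.03,  0.05, -0.01, 0.04, 0.02]

def sharpe(returns: list[float]) -> float:
    """Mean excess return divided by its sample standard deviation."""
    return statistics.fmean(returns) / statistics.stdev(returns)

# A 60/40 mix: lower return than pure equity, but much lower volatility.
mixed = [0.6 * e + 0.4 * b for e, b in zip(equity, bonds)]

print(round(sharpe(equity), 2), round(sharpe(mixed), 2))  # 0.41 0.56
```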

I'm wondering if the period used in the paper might have been an unusually good time for equities. Meb Faber uses the period 1973-2013; the paper uses 1960-2017. 2017 was near a high for the equity market, whereas 2013 was more mid-cycle, which favours equities. The 60s were also a good time for equities, while the 70s were bad, so adding the 60s to the range boosts equities further.

Ideally I'd also compare the percentages in each asset. One difference is that Faber's 'GAA' allocation includes 5% gold, which usually seems to improve the Sharpe ratio quite a bit, since gold was one of the only assets that did well in the 70s*. Faber also gets similar results with what he calls the 'Arnott Portfolio', which doesn't include gold and is fairly in line with my estimate of the global capital portfolio, except using TIPS+REITs+commodities instead of private real estate.

I'd also trust the GMP (global market portfolio) a lot more out of sample, due to its theoretical underpinning.

*My theory for why gold helps the Sharpe ratio (even though it's only 1% of total wealth) is that the global capital portfolio includes a lot of private real estate that is not in the global portfolio of listed assets. This means most 'global market portfolios' are light on real assets, and adding some gold/TIPS/commodities balances this out, getting you closer to the true global capital portfolio.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-23T15:32:13.513Z · score: 62 (22 votes) · EA · GW

Hi weeatquince,

Thank you for the kind words and the feedback.

I agree that 80k should pay more attention to our impact on community culture. We listed this as a mistake in our 2019 annual review.

As to whether we should promote & work on a broader range of causes, this is certainly a difficult tradeoff. Going broader makes the community more inclusive and so potentially larger, but at the cost of more of our effort going on areas that we think have far lower expected impact.

There are also several other factors that make it challenging to promote a broader range of issues. One is that we think it’s important to honestly communicate our views on cause selection, and this necessarily means saying that we prioritize certain issues more. I can imagine a version of 80k that doesn’t rank causes, but this would involve losing one of the most informative parts of our advice.

When writing about areas we see as lower priorities, I think it would be a major communications challenge to balance being transparent about our views against the risk of (repeatedly) demoralizing a group of our users.

I feel sympathetic towards people who experienced a “bait and switch” when entering EA. This was one reason we decided to be much more up front about our views on the key ideas page, so that no-one would experience a bait and switch if they enter the community via 80k.

Unfortunately, I worry that promoting areas that our staff doesn't believe are top priorities is the kind of thing that creates the bait and switch dynamic in the first place, since our audience will (reasonably) make assumptions about our values based on our actions.

A final issue is that it’s already challenging for us to provide up-to-date advising/articles/job listings within our priority areas (e.g. we only have about 14 staff over 6 priority areas, which is 2.3 per area), and I think being focused is really important for organisations to be successful.

The solution I’d prefer is that global poverty career advice is provided by people who prioritize global poverty, animal welfare career advice is provided by people who prioritize animal welfare, and so on. If this happened, then the services provided would naturally reflect the community. Likewise, it would be great if there could be alternative introductions to EA aimed at different audiences.

Comment by benjamin_todd on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-23T13:19:23.943Z · score: 2 (1 votes) · EA · GW

Hi Louis,

We classify it as one of our second highest-priority areas:

For the overall allocation of effort:

We also aim to put ~30% of our effort into other ways of addressing our priority problems (AI, biorisk, global priorities research, building EA, nuclear security, improving institutional decision-making, extreme climate risks) or potential priority problems, some of which we might class as priority paths in the future.

More concretely, this will probably involve: (i) adding more jobs to the job boards, (ii) updating our problem profile, and (iii) having some other podcasts or articles about it (plus cross-cutting content useful to everyone).

Comment by benjamin_todd on Is anyone working on a comparative COVID-19 policy response dataset? · 2020-04-11T21:25:50.497Z · score: 4 (3 votes) · EA · GW

In addition to covid19policywatch, here is also a summary of the policy responses by country.

Comment by benjamin_todd on A quick and crude comparison of epidemiological expert forecasts versus Metaculus forecasts for COVID-19 · 2020-04-02T23:05:32.175Z · score: 7 (4 votes) · EA · GW

There have been some claims that the 538 article put the wrong date on the experts’ forecasts, and we haven’t been able to figure out whether that’s true by contacting them, so unfortunately I wouldn’t rely on the 538 article by itself.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T23:09:27.181Z · score: 19 (7 votes) · EA · GW

Interesting. My personal view is that the neglect of future generations is likely 'where the action is' in cause prioritisation, so if you exclude their interests from the cooperative portfolio, then I'm less interested in the project.

I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

The point about putting more emphasis on international coordination and improving institutions seems reasonable, though again, I'd wonder if it's enough to trump the lower neglectedness.

Either way, it seems a bit odd to describe longtermist EAs who are trying to help future generations as 'uncooperative'. It's more like they're trying to 'cooperate' with future people, even if direct trade isn't possible.

On the point about whether the present generation values x-risk reduction, one way to illustrate it is that the value of a statistical life in the US is about $5m. This means that US citizens alone would be willing to pay, I think, roughly $1.5 trillion to avoid 0.1 percentage points of existential risk.
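As a rough back-of-the-envelope check of that figure (the $5m VSL is from the comment above; the population figure and exact risk reduction are my illustrative assumptions):

```python
# Willingness-to-pay sketch: population x VSL x probability of death averted.
us_population = 330e6    # people (assumed, roughly the current US population)
vsl = 5e6                # dollars per statistical life (from the comment)
risk_reduction = 0.001   # 0.1 percentage points of existential risk

willingness_to_pay = us_population * vsl * risk_reduction
print(f"${willingness_to_pay / 1e12:.2f} trillion")  # → $1.65 trillion
```

With a slightly smaller population figure this lands on the ~$1.5 trillion quoted; either way the order of magnitude is the point.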

Will MacAskill used this as an argument that the returns on x-risk reduction must be lower than they seem (e.g. perhaps the risks are actually much lower), which may be right, but still illustrates the idea that present people significantly value existential risk reduction.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T22:46:19.515Z · score: 2 (1 votes) · EA · GW
At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a Pareto improvement is possible through cooperation.

Very interesting, thank you.
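For readers who want to see the mechanics of the kind of setup quoted above, here is a minimal sketch – the valuations and allocations are mine and purely illustrative, not taken from the linked write-up:

```python
import math

# Two funders, three causes, each cause has log returns log(1 + x).
# Funder i's utility: U_i = sum_j values[i][j] * log(1 + x_j),
# where x_j is the total funding going to cause j.
values_a = (10, 4, 0)   # funder A's valuation of causes 1-3
values_b = (0, 4, 10)   # funder B's valuation (they share the middle cause)

def utility(values, allocation):
    return sum(v * math.log(1 + x) for v, x in zip(values, allocation))

# Uncooperative outcome: each funder puts their whole $1 budget into their
# own top cause (for A, the marginal return 10/(1+x) on cause 1 always
# beats 4/(1+x) on cause 2, so this is an equilibrium).
solo = (1.0, 0.0, 1.0)

# Cooperative deal: both funders shift 0.2 of their budget into the
# shared middle cause.
coop = (0.8, 0.4, 0.8)

for values in (values_a, values_b):
    assert utility(values, coop) > utility(values, solo)  # Pareto improvement
```

Both funders end up strictly better off by their own lights, even though neither would fund the shared cause unilaterally – which is the core of the free-riding point.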

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T14:10:23.301Z · score: 7 (5 votes) · EA · GW

This is a tangent, but if you're looking for an external critic maybe making a point along these lines, then the LRB review of DGB might be better. You could see systemic change as a public good problem, and the review claims that EAs neglect it due to their individualist focus. More speculation at the end of this:

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T14:02:46.783Z · score: 20 (8 votes) · EA · GW

I also wanted to attempt to clarify 80k's position a little.

With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns.

I agree this is the thrust of the article. However, also note that in the introduction we say:

However, if you’re coordinating with others in aiming to have an impact, then you also need to consider how their actions will change in response to what you do, which adds additional elements to the framework, which we cover here.

Within the section on scale we say:

It can also be useful to group instrumental sources of value within scale, such as gaining information about which issues are most important, or building a movement around a set of issues. Ideally, one would also capture the spillover benefits of progress on this problem on other problems. Coordination considerations, as briefly covered later, can also change how to assess scale.

And then at the end, we have this section:

On the key ideas page, we also have a short section on coordination and link to:

Which advocates compromising with other value systems.

And, there's the section where we advocate not causing harm:

Unfortunately, we haven't yet done a great job of tying all these considerations together – coordination gets wedged in as an 'advanced' consideration; whereas maybe you need to start from a cooperative perspective, and totally reframe everything in those terms.

I'm still really unsure of all of these issues. How common are prisoner's dilemma style situations for altruists? When we try to factor in greater cooperation, how will that change the practical rules of thumb? And how might that change how we explain EA? I'm very curious for more input and thinking on these questions.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T13:50:23.487Z · score: 40 (15 votes) · EA · GW

I wonder if EA as it currently exists can be reframed into more cooperative terms, which could make it safer to promote. I'm speculating here, but I'd be interested in thoughts.

One approach to cause prioritisation is to ask "what would be the ideal allocation of effort by the whole world?" (taking account of everyone's values & all the possible gains from trade), and then to focus on whichever opportunities are most underinvested in vs. that ideal, and where you have the most comparative advantage compared to other actors. I've heard researchers in EA saying they sometimes think in these terms already. I think something like this is where a 'cooperation first' approach to cause selection would lead you.

My guess is that there's a good chance this approach would lead EA to support similar areas to what we do currently. For instance, existential risks are often pitched as a global public good problem, i.e. I think that on balance, people would prefer there was more effort going into mitigation (since most people prefer not to die, and have some concern for future generations). But our existing institutions are not delivering this, and so EAs might aim to fill the gap, so long as we think we have a comparative advantage in addressing these issues (at least until institutions improve enough that this is no longer needed).

I expect we could also see work on global poverty in these terms. On balance, people would prefer global poverty to disappear (especially if we consider the interests of the poor themselves), but the division into nation states makes it hard for the world to achieve that.

This becomes even more likely if we think that the values of future generations & animals should also be considered when we construct the 'world portfolio' of effort. If these values were taken into account, the world would, for instance, spend heavily on existential risk reduction & other investments that benefit the future – but it doesn't. It seems a bit like the present generation is failing to cooperate with future generations. EA's cause priorities aim to redress this failure.

In short, the current priorities seem cooperative to me, but the justification is often framed in marginal terms, and maybe that style of justification subtly encourages an uncooperative mindset.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T13:25:10.019Z · score: 5 (4 votes) · EA · GW

Thank you for the post – very interesting and thought-provoking ideas. I have a couple of points to explore further that I'll break into different replies.

I'd be curious for more thoughts on how common these situations are.

In the climate change, AI safety, conservation example, it occurred to me that if each individual thinks that their top option is 10 times more effective than the second option, it becomes clearly better again (from their point of view) to support their top option. The numbers seem to work only because AI safety is marginally better than climate change.

You point out that the problem becomes more severe as the number of funders increases. It seems like there are roughly 4 'schools' of EA donors, so if we consider a coordination problem between these four schools, it'll roughly make the issue 2x bigger, but it seems like that still wouldn't outweigh 10x differences in effectiveness.

The point about advocacy making it worse seems good, and a point against advocacy efforts in general. Paul Christiano also made a similar point here:

I'd be interested in more thoughts on how commonly we're in the prisoner's dilemma situation you note, and what the key variables are (e.g. differences in cause effectiveness, number of funders etc.).

Comment by benjamin_todd on A naive analysis on if EA is Talent constrained · 2020-03-26T00:15:19.030Z · score: 48 (18 votes) · EA · GW

I’m really sad to hear how upset you are with 80,000 Hours and how you feel it has made it harder rather than easier to find a role in which you can have impact.

It’s a real challenge for us to decide whether to share our views as they stand, or to hold off publishing until we’re more certain and can be clearer. We hope that by getting more information out there, it will let people make better decisions, but unfortunately we’re going to continue to be uncertain and unable to explain all our evidence, and our views will change over time. It’s useful to hear your feedback that we might be getting the tradeoff wrong. We’ve been trying to do a better job communicating our uncertainty in the new key ideas series, for instance by releasing: advice on how to read our advice

Thank you for collecting together all this specific information about different organisations in EA. The question of whether the issues we focus on are ‘talent constrained’ or not (though I prefer not to use this term) is a complicated one. Unfortunately, I can’t give you a full response here, though I do hope to write about it more in the future.

I do just want to clarify that I do still believe that certain skill bottlenecks are very pressing in effective altruism. Here are a couple of additional points:

  • To be specific, I think it’s longtermist organisations that are most talent constrained. Global health and factory farming organisations are much more constrained by funding, relatively speaking (e.g. GiveWell’s top recommended charities could absorb ~$100m). I think this explains why organisations like TLYCS, Charity Science and Charity Entrepreneurship say they’re more funding constrained (and also to some extent Rethink Priorities, which does a significant fraction of its work in this area).
  • Even within longtermist and meta organisations, not *every* organisation is mainly skill-constrained, so you can find counterexamples, such as new organisations without much funding. This may also explain the difference between the average survey respondents and Rethink Priorities’ view.
  • It doesn’t seem to me that looking at whether lots of people applied to a job tells us much about how talent constrained an organisation is. The successful applicants might still have been much better than the others, or the organisations might have preferred to hire even more people than they were able to.
  • Something else I think is relevant to the question of whether our top problem areas are talent constrained is that I think many community members should seek positions in government, academia and other existing institutions. These roles are all ‘talent constrained’, in the sense that hundreds of people could take these positions without the community needing to gain any additional funding. In particular, we think there is room for a significant number of people to take AI policy careers, as argued here.

There’s a lot more I’d like to say about all of these topics. I hope that gives at least a little more sense of how I’m thinking about this. Unfortunately, I’ve been focusing on responding to covid-19 so won’t be able to respond to questions. I want to reiterate though how sad it is to hear that someone has found our advice so unhelpful, not just because of the negative effect on you, but also on those you’re working to help. Thank you for taking the time to tell us, and I hope that we can continue to improve not only our advice, but also the clarity with which we express our degree of certainty in it and evidence for it.

Comment by benjamin_todd on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T19:20:37.584Z · score: 13 (9 votes) · EA · GW

Great news! I'd be keen to hear where the money ends up being allocated.

Comment by benjamin_todd on When To Find More Information: A Short Explanation · 2019-12-31T15:19:11.225Z · score: 4 (3 votes) · EA · GW

Thanks - it's useful to see your take on this!

Comment by benjamin_todd on Is mindfulness good for you? · 2019-12-30T20:29:23.455Z · score: 20 (6 votes) · EA · GW

Have you come across the book Altered Traits? It tries to sum up the existing evidence for meditation, and in the latter half of the book, each chapter looks at the evidence for and against a proposed benefit. At the start, they talk about their criteria for which studies to include, and seem to have fairly strict standards.

One significant weakness is that it's written by two fans of meditation, so it's probably too positive. However, to their credit, the authors exclude some of their own early studies for not being well designed enough.

One advantage is that they try to bring together multiple forms of evidence, including theory, studies of extreme meditators, and neuroscience as well as RCTs of specific outcomes – though the neuroscience is pretty basic. They also do a good job of distinguishing how there are many different types of meditation that seem to have different benefits; and also distinguishing between beginners, intermediates and experts.

Comment by benjamin_todd on When To Find More Information: A Short Explanation · 2019-12-30T20:20:52.838Z · score: 10 (5 votes) · EA · GW
5) Is the information that would change your mind worth the cost of gathering it? (This might be tricky, but see below.)
For the last question, usually the answer is obviously yes or no. Sometimes, however, it's unclear, and you need to think a bit more quantitatively about the value of the information. If you want to see the math for how VoI is used in practice, here are some examples, and some more, of how to do the basic quantitative work.

Thanks for the post, but this seems like the tricky bit to me. Might you be able to give some rough rules of thumb people could apply to answer this question?

Trying to do actual VOI estimates gets pretty confusing, so what would be great is something simpler than that, but better than just going with your intuition.

I think there are probably some things to say, along the lines of: "if you can spend under 10% of the time at stake in the decision, and you think it's likely you'd change your mind (say a 50% chance), then probably investigate more"; "if you're early in your career, lean towards investigating, because information is more valuable to you"; or "people typically consider too few options, so it's worth generating at least one alternative to your current options".
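The first rule of thumb can be made a little more explicit. As a rough sketch (my framing, not a full value-of-information calculation, and the numbers are illustrative):

```python
def worth_investigating(p_change_mind, investigation_cost, decision_value):
    """Crude heuristic: investigating is worthwhile when the expected gain
    from possibly switching to a better option exceeds the cost of the
    investigation. All quantities are in the same units (e.g. hours, or
    fraction of the decision's total value at stake)."""
    expected_gain = p_change_mind * decision_value
    return expected_gain > investigation_cost

# The "under 10% of the time at stake, ~50% chance of changing your mind"
# rule clears the bar comfortably:
assert worth_investigating(0.5, investigation_cost=0.1, decision_value=1.0)

# Whereas a cheap-looking investigation that's very unlikely to change
# your mind does not:
assert not worth_investigating(0.05, investigation_cost=0.1, decision_value=1.0)
```

This obviously flattens a lot (e.g. it assumes changing your mind captures the full value at stake), but it shows why both the cost of investigating and the probability of updating matter.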

Comment by benjamin_todd on Community vs Network · 2019-12-20T13:53:35.455Z · score: 4 (3 votes) · EA · GW

Just a quick clarification that 80k plan changes aim to measure 80k's counterfactual impact, rather than the expected lifetime impact of the people involved. A large part of the spread is due to how much 80k influenced them vs. their counterfactual.

Comment by benjamin_todd on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-06T01:15:55.343Z · score: 3 (5 votes) · EA · GW

Ultimately the operationalising needs to be done by the organisations & community leaders themselves, when they do their own planning, given the details of how they interact with the community, and while balancing the considerations raised at the leaders forum against their other priorities.