Posts

Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z · score: 60 (61 votes)
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z · score: 16 (16 votes)
80,000 Hours annual review released 2017-12-27T20:31:05.395Z · score: 10 (10 votes)
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z · score: 9 (9 votes)
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z · score: 18 (20 votes)
If you want to disagree with effective altruism, you need to disagree one of these three claims 2016-09-25T15:01:28.753Z · score: 31 (24 votes)
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z · score: 13 (15 votes)
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z · score: 12 (14 votes)
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z · score: 13 (13 votes)
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z · score: 4 (14 votes)
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z · score: 14 (16 votes)
Why we need more meta 2015-09-26T22:40:43.933Z · score: 22 (34 votes)
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z · score: 7 (7 votes)
A new response to effective altruism 2015-09-12T04:25:43.242Z · score: 3 (3 votes)
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z · score: 6 (6 votes)
The career questions thread 2015-06-20T02:19:07.131Z · score: 13 (13 votes)
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z · score: 15 (17 votes)
Two interviews with Holden 2014-10-03T21:44:12.163Z · score: 7 (7 votes)
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z · score: 5 (5 votes)
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z · score: 9 (6 votes)
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z · score: 0 (0 votes)
Show me the harm 2013-08-06T04:00:52.000Z · score: 3 (3 votes)

Comments

Comment by benjamin_todd on A quick and crude comparison of epidemiological expert forecasts versus Metaculus forecasts for COVID-19 · 2020-04-02T23:05:32.175Z · score: 7 (4 votes) · EA · GW

There have been some claims that the 538 article put the wrong date on the experts' forecasts, and we haven't been able to figure out whether that's true by contacting them, so unfortunately I wouldn't use the 538 article by itself.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T23:09:27.181Z · score: 10 (4 votes) · EA · GW

Interesting. My personal view is that the neglect of future generations is likely 'where the action is' in cause prioritisation, so if you exclude their interests from the cooperative portfolio, then I'm less interested in the project.


I'd still agree that we should factor in cooperation, but my intuition is that it's going to be a smaller consideration than the neglect of future generations – more about tilting things around the edges and not being a jerk than about significantly changing the allocation. I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?

The point about putting more emphasis on international coordination and improving institutions seems reasonable, though again, I'd wonder if it's enough to trump the lower neglectedness.

Either way, it seems a bit odd to describe longtermist EAs who are trying to help future generations as 'uncooperative'. It's more like they're trying to 'cooperate' with future people, even if direct trade isn't possible.


On the point about whether the present generation values x-risk, one way to illustrate it is that the value of a statistical life in the US is about $5m. This means that US citizens alone would be willing to pay, I think, $1.5 trillion to avoid 0.1 percentage points of existential risk.
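As a rough back-of-the-envelope check of that figure (assuming a US population of about 300 million; all numbers illustrative):

```python
vsl = 5_000_000              # value of a statistical life in the US, ~$5m
us_population = 300_000_000  # rough figure, assumed for illustration
risk_reduction = 0.001       # 0.1 percentage points of existential risk

# Aggregate willingness to pay: each citizen "buys" a 0.001 reduction
# in their own probability of death, valued at the VSL.
willingness_to_pay = vsl * us_population * risk_reduction
# ≈ 1.5e12, i.e. about $1.5 trillion
```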

Will MacAskill used this as an argument that the returns on x-risk reduction must be lower than they seem (e.g. perhaps the risks are actually much lower), which may be right, but still illustrates the idea that present people significantly value existential risk reduction.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T22:46:19.515Z · score: 2 (1 votes) · EA · GW
At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don't have identical valuations, a pareto improvement is possible through cooperation.

Very interesting, thank you.
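The quoted claim can be sketched numerically with a toy example (all numbers hypothetical; this is a slightly simplified two-funder, three-cause variant in which each funder values one cause at zero rather than strictly positively):

```python
import math

def utility(valuations, totals):
    """A funder's utility: their valuation of each cause times the log of
    the cause's total funding (log returns)."""
    return sum(v * math.log(x) for v, x in zip(valuations, totals) if v > 0)

# Two funders with budget 1 each, three causes.
vals_A = (1.0, 1.0, 0.0)  # funder A cares about causes 1 and 2
vals_B = (0.0, 1.0, 1.0)  # funder B cares about causes 2 and 3

# Non-cooperative equilibrium (each funder best-responds to the other):
# A gives (2/3, 1/3, 0) and B gives (0, 1/3, 2/3).
nash_totals = (2/3, 2/3, 2/3)

# Cooperative allocation maximising total utility: totals proportional
# to the summed valuations (1, 2, 1), with the combined budget of 2.
coop_totals = (0.5, 1.0, 0.5)

# Both funders are strictly better off cooperating – a Pareto improvement.
assert utility(vals_A, coop_totals) > utility(vals_A, nash_totals)
assert utility(vals_B, coop_totals) > utility(vals_B, nash_totals)
```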

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T14:10:23.301Z · score: 7 (5 votes) · EA · GW

This is a tangent, but if you're looking for an external critic maybe making a point along these lines, then the LRB review of DGB might be better. You could see systemic change as a public goods problem, and the review claims that EAs neglect it due to their individualist focus. More speculation at the end of this:

https://forum.effectivealtruism.org/posts/7DfaX75zGehPZWJTx/thread-for-discussing-critical-review-of-doing-good-better

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T14:02:46.783Z · score: 16 (6 votes) · EA · GW

I also wanted to attempt to clarify 80k's position a little.

With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns.

I agree this is the thrust of the article. However, also note that in the introduction we say:

However, if you’re coordinating with others in aiming to have an impact, then you also need to consider how their actions will change in response to what you do, which adds additional elements to the framework, which we cover here.

Within the section on scale we say:

It can also be useful to group instrumental sources of value within scale, such as gaining information about which issues are most important, or building a movement around a set of issues. Ideally, one would also capture the spillover benefits of progress on this problem on other problems. Coordination considerations, as briefly covered later, can also change how to assess scale.

And then at the end, we have this section:

https://80000hours.org/articles/problem-framework/#how-to-factor-in-coordination


On the key ideas page, we also have a short section on coordination and link to:

https://80000hours.org/articles/coordination/

Which advocates compromising with other value systems.

And, there's the section where we advocate not causing harm:

https://80000hours.org/key-ideas/#moral-uncertainty-and-moderation


Unfortunately, we haven't yet done a great job of tying all these considerations together – coordination gets wedged in as an 'advanced' consideration, whereas maybe you need to start from a cooperative perspective and totally reframe everything in those terms.

I'm still really unsure of all of these issues. How common are prisoner's dilemma style situations for altruists? When we try to factor in greater cooperation, how will that change the practical rules of thumb? And how might that change how we explain EA? I'm very curious for more input and thinking on these questions.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T13:50:23.487Z · score: 29 (11 votes) · EA · GW

I wonder if EA as it currently exists can be reframed into more cooperative terms, which could make it safer to promote. I'm speculating here, but I'd be interested in thoughts.

One approach to cause prioritisation is to ask "what would be the ideal allocation of effort by the whole world?" (taking account of everyone's values & all the possible gains from trade), and then to focus on whichever opportunities are most underinvested in vs. that ideal, and where you have the most comparative advantage compared to other actors. I've heard researchers in EA saying they sometimes think in these terms already. I think something like this is where a 'cooperation first' approach to cause selection would lead you.

My guess is that there's a good chance this approach would lead EA to support similar areas to those we support currently. For instance, existential risks are often pitched as a global public goods problem, i.e. I think that on balance, people would prefer there was more effort going into mitigation (since most people prefer not to die, and have some concern for future generations). But our existing institutions are not delivering this, so EAs might aim to fill the gap, so long as we think we have a comparative advantage in addressing these issues (and until institutions have improved to the point that this is no longer needed).

I expect we could also see work on global poverty in these terms. On balance, people would prefer global poverty to disappear (especially if we consider the interests of the poor themselves), but the division into nation states makes it hard for the world to achieve that.

This becomes even more likely if we think that the values of future generations & animals should also be considered when we construct the 'world portfolio' of effort. If these values were taken into account, the world would, for instance, spend heavily on existential risk reduction & other investments that benefit the future – but currently it doesn't. It seems a bit like the present generation is failing to cooperate with future generations. EA's cause priorities aim to redress this failure.

In short, the current priorities seem cooperative to me, but the justification is often framed in marginal terms, and maybe that style of justification subtly encourages an uncooperative mindset.

Comment by benjamin_todd on Effective Altruism and Free Riding · 2020-03-30T13:25:10.019Z · score: 5 (4 votes) · EA · GW

Thank you for the post – very interesting and thought provoking ideas. I have a couple of points to explore further that I'll break into different replies.

I'd be curious for more thoughts on how common these situations are.

In the climate change / AI safety / conservation example, it occurred to me that if each individual thinks their top option is 10 times more effective than the second option, it becomes clearly better again (from their pov) to support their top option. The numbers seem to work only because AI safety is just marginally better than climate change.

You point out that the problem becomes more severe as the number of funders increases. It seems like there are roughly 4 'schools' of EA donors, so if we consider a coordination problem between these four schools, it'll roughly make the issue 2x bigger, but it seems like that still wouldn't outweigh 10x differences in effectiveness.

The point about advocacy making it worse seems good, and a point against advocacy efforts in general. Paul Christiano also made a similar point here: https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

I'd be interested in more thoughts on how commonly we're in the prisoner's dilemma situation you note, and what the key variables are (e.g. differences in cause effectiveness, number of funders etc.).

Comment by benjamin_todd on A naive analysis on if EA is Talent constrained · 2020-03-26T00:15:19.030Z · score: 50 (17 votes) · EA · GW

I’m really sad to hear how upset you are with 80,000 Hours and how you feel it has made it harder rather than easier to find a role in which you can have impact.

It’s a real challenge for us to decide whether to share our views or to hold off publishing them until we’re more certain and clear. We hope that getting more information out there will let people make better decisions, but unfortunately we’re going to continue to be uncertain and unable to explain all our evidence, and our views will change over time. It’s useful to hear your feedback that we might be getting the tradeoff wrong. We’ve been trying to do a better job of communicating our uncertainty in the new key ideas series, for instance by releasing advice on how to read our advice.

Thank you for collecting together all this specific information about different organisations in EA. The question of whether the issues we focus on are ‘talent constrained’ or not (though I prefer not to use this term) is a complicated one. Unfortunately, I can’t give you a full response here, though I do hope to write about it more in the future.

I do just want to clarify that I do still believe that certain skill bottlenecks are very pressing in effective altruism. Here are a couple of additional points:

  • To be specific, I think it’s longtermist organisations that are most talent constrained. Global health and factory farming organisations are much more constrained by funding, relatively speaking (e.g. GiveWell’s top-recommended charities could absorb ~$100m). I think this explains why organisations like TLYCS, Charity Science and Charity Entrepreneurship say they’re more funding constrained (and also, to some extent, Rethink Priorities, which does a significant fraction of its work in this area).
  • Even within longtermist and meta organisations, not *every* organisation is mainly skill-constrained, so you can find counterexamples, such as new organisations without much funding. This may also explain the difference between the average survey respondents and Rethink Priorities’ view.
  • It doesn’t seem to me that looking at whether lots of people applied to a job tells us much about how talent constrained an organisation is. The successful applicants might still have been much better than the others, or the organisations might have preferred to hire even more people than they were able to.
  • Something else I think is relevant to the question of whether our top problem areas are talent constrained is that I think many community members should seek positions in government, academia and other existing institutions. These roles are all ‘talent constrained’, in the sense that hundreds of people could take these positions without the community needing to gain any additional funding. In particular, we think there is room for a significant number of people to take AI policy careers, as argued here.

There’s a lot more I’d like to say about all of these topics. I hope that gives at least a little more sense of how I’m thinking about this. Unfortunately, I’ve been focusing on responding to covid-19 so won’t be able to respond to questions. I want to reiterate though how sad it is to hear that someone has found our advice so unhelpful, not just because of the negative effect on you, but also on those you’re working to help. Thank you for taking the time to tell us, and I hope that we can continue to improve not only our advice, but also the clarity with which we express our degree of certainty in it and evidence for it.

Comment by benjamin_todd on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T19:20:37.584Z · score: 13 (9 votes) · EA · GW

Great news! I'd be keen to hear where the money ends up being allocated.

Comment by benjamin_todd on When To Find More Information: A Short Explanation · 2019-12-31T15:19:11.225Z · score: 4 (3 votes) · EA · GW

Thanks - it's useful to see your take on this!

Comment by benjamin_todd on Is mindfulness good for you? · 2019-12-30T20:29:23.455Z · score: 20 (6 votes) · EA · GW

Have you come across the book Altered Traits? It tries to sum up the existing evidence for meditation, and in the latter half of the book, each chapter looks at the evidence for and against a proposed benefit. At the start, they talk about their criteria for which studies to include, and seem to have fairly strict standards.

One significant weakness is that it's written by two fans of meditation, so it's probably too positive. However, to their credit, the authors exclude some of their own early studies for not being well designed enough.

One advantage is that they try to bring together multiple forms of evidence, including theory, studies of extreme meditators, and neuroscience as well as RCTs of specific outcomes – though the neuroscience is pretty basic. They also do a good job of distinguishing how there are many different types of meditation that seem to have different benefits; and also distinguishing between beginners, intermediates and experts.

Comment by benjamin_todd on When To Find More Information: A Short Explanation · 2019-12-30T20:20:52.838Z · score: 10 (5 votes) · EA · GW
5) Is the information that would change your mind worth the cost of gathering it? (This might be tricky, but see below.)
For the last question, usually the answer is obviously yes or no. Sometimes, however, it's unclear, and you need to think a bit more quantitatively about the value of the information. If you want to see the math for how VoI is used in practice, here are some examples, and some more, of how to do the basic quantitative work.

Thanks for the post, but this seems like the tricky bit to me. Might you be able to give some rough rules of thumb people could apply to answer this question?

Trying to do actual VOI estimates gets pretty confusing, so what would be great is something simpler than that, but better than just going with your intuition.

I think there are probably some things to say, along the lines of: "if you can spend under 10% of the time at stake in the decision, and you think it's likely you'd change your mind (say with 50% chance), then probably investigate more"; "if you're early in your career, lean towards investigating, because information is more valuable to you"; or "people typically consider too few options, so it's worth generating at least one alternative to your current options".
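To make that kind of rule concrete, here's a crude sketch (the function name and the assumption that switching recovers half the stakes are mine, purely illustrative):

```python
def worth_investigating(cost, stakes, p_switch, gain_if_switch=0.5):
    """Crude value-of-information check: is the expected gain from possibly
    switching to a better option worth the cost of gathering the information?
    All quantities in the same units (e.g. hours); gain_if_switch is the
    assumed fraction of the stakes recovered by switching."""
    expected_gain = p_switch * gain_if_switch * stakes
    return expected_gain > cost

# A decision with 100 hours at stake, 8 hours of research, and a 50%
# chance the research changes your choice: expected gain 25 hours > 8.
worth_investigating(cost=8, stakes=100, p_switch=0.5)  # True
```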

Comment by benjamin_todd on Community vs Network · 2019-12-20T13:53:35.455Z · score: 3 (2 votes) · EA · GW

Just a quick clarification that 80k plan changes aim to measure 80k's counterfactual impact, rather than the expected lifetime impact of the people involved. A large part of the spread is due to how much 80k influenced them vs. their counterfactual.

Comment by benjamin_todd on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-06T01:15:55.343Z · score: 3 (5 votes) · EA · GW

Ultimately the operationalising needs to be done by the organisations & community leaders themselves, when they do their own planning, given the details of how they interact with the community, and while balancing the considerations raised at the leaders forum against their other priorities.

Comment by benjamin_todd on Existential Risk and Economic Growth · 2019-11-05T00:10:09.118Z · score: 2 (1 votes) · EA · GW

Thank you!

Comment by benjamin_todd on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T13:44:26.557Z · score: 60 (26 votes) · EA · GW

Hi EarlyVelcro,

I’m happy to see more debate of how much we should prioritise AI safety. We intend to debate some of these issues on the podcast, and have already started recording with Ben Garfinkel.

However, I think you’re misrepresenting how much the key idea series recommends working on AI safety. We feature a range of other problem areas prominently and I don’t think many readers will come away thinking that our position is that “EA should focus on AI alone”.

We list 9 priority career paths, of which only 2 are directly related to AI safety, recommend a variety of other options, and say that there are many good options we don’t list.

Elsewhere on the page, we also discuss the importance of personal fit and coordination, which can make it better for an individual to enter different problem areas from those we most highlight.

The most relevant section is short, so I’d encourage readers of this thread to read the section and make up their own mind.

Comment by benjamin_todd on Existential Risk and Economic Growth · 2019-09-26T06:38:42.118Z · score: 15 (8 votes) · EA · GW

Yes, great paper and exciting work. Here are some further questions I'd be interested in (apologies if they result from misunderstanding the paper - I've only skimmed it once).

1) I'd love to see more work on Phil's first bullet point above.

Would you guess that, due to the global public good problem and impatience, people with a low rate of pure time preference will generally believe society is a long way from the optimal allocation to safety, and therefore that increasing investment in safety is currently much higher impact than increasing growth?


2) What would the impact of uncertainty about the parameters be? Should we act as if we're generally in the eta > beta (but not much greater) regime, since that's where altruists could have the most impact?


3) You look at the chance of humanity surviving indefinitely - but don't we care more about something like the expected number of lives?

Might we be in the eta >> beta regime, while humanity still has a long future in expectation (i.e. tens of millions of years rather than billions)? It might then still be very valuable to further extend the lifetime of civilisation, even if extinction is ultimately inevitable.

Or are there regimes where focusing on helping people in the short-term is the best thing to do?

Would looking at expected lifetime rather than probability of making it have other impacts on the conclusions? e.g. I could imagine it might be worth trading acceleration for a small increase in risk, so long as it allows more people to live in the interim in expectation.



Comment by benjamin_todd on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-22T21:36:33.857Z · score: 21 (10 votes) · EA · GW

Just a quick note that 'double counting' can be fine, since the counterfactual impact of different groups acting in concert doesn't necessarily sum to 100%.

See more discussion here: https://forum.effectivealtruism.org/posts/fnBnEiwged7y5vQFf/triple-counting-impact-in-ea
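A toy illustration of why counterfactual impacts needn't sum to 100% (numbers hypothetical):

```python
# A $1m grant happens only if BOTH a researcher recommends it AND a donor
# funds it. Remove either actor and the grant (and its impact) vanishes,
# so each actor's counterfactual impact is the full $1m.
impact = 1_000_000
researcher_counterfactual = impact - 0  # impact without the researcher: $0
donor_counterfactual = impact - 0       # impact without the donor: $0

total_claimed = researcher_counterfactual + donor_counterfactual
# total_claimed is 200% of the actual impact – yet each individual
# counterfactual calculation is correct.
assert total_claimed == 2 * impact
```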


Also note that you can undercount for similar reasons. For instance, if you have impact X, but another org would have done X otherwise, you might count your impact as zero. But that ignores that by doing X, you free up the other org to do something else high impact.


I think I'd prefer to frame this issue as something more like "how you should assign credit as a donor in order to have the best incentives for the community isn't the same as how you'd calculate the counterfactual impact of different groups in a cost-effectiveness estimate".

Comment by benjamin_todd on Is EA Growing? EA Growth Metrics for 2018 · 2019-06-18T02:10:22.135Z · score: 6 (3 votes) · EA · GW

Quick answer for 80k: paid traffic comes only from our free Google AdWords budget, which is fixed each month. Over the last year, about 12% of the traffic was paid, roughly 10,000-20,000 users per month. This isn't driving growth because the budget isn't growing.

Comment by benjamin_todd on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-23T11:49:56.263Z · score: 6 (4 votes) · EA · GW

Hi David,

Thank you very much for doing this (long!) analysis.

Your conclusion makes sense to me and is an interesting result.

It would be interesting to think about how the survey can be adapted to better pick up these differences in future years.

Ben

Comment by benjamin_todd on What has Effective Altruism actually done? · 2019-01-18T04:38:03.980Z · score: 8 (4 votes) · EA · GW

https://www.effectivealtruism.org/impact/

Comment by benjamin_todd on EA Survey Series 2018 : How do people get involved in EA? · 2018-12-03T08:15:12.456Z · score: 3 (2 votes) · EA · GW

This data might also be useful to cross-check against:

https://80000hours.org/2018/10/2018-talent-gaps-survey/#current-leaders-came-to-be-involved-through-a-wide-variety-of-different-channels

Comment by benjamin_todd on EA Survey Series 2018 : How do people get involved in EA? · 2018-12-01T23:28:14.897Z · score: 6 (2 votes) · EA · GW

I just noticed that there were people who report first finding out about EA from 80k in 2009 and 2010. I'd say 80k was only informally formed in early 2011, and the name was only chosen at the end of 2011, so those survey responses must be mistaken. I gather that the sample sizes for the early years are small, so this is probably just one or two people.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-28T01:03:24.173Z · score: 6 (4 votes) · EA · GW

That's a complex topic, but our starting point for conversions would be the figures in the EA leaders survey: https://80000hours.org/2018/10/2018-talent-gaps-survey/

Comment by benjamin_todd on Earning to Save (Give 1%, Save 10%) · 2018-11-27T09:53:03.904Z · score: 10 (11 votes) · EA · GW

Just adding that we made a similar suggestion: that people should cut back their donations to ~1% until they've built up at least enough savings for 6-12 months of runway.

We also suggest here that people prioritise saving 15% of their income for retirement ahead of substantial donations. If people want to donate beyond this level, that's commendable, but I don't think that's where we should set a norm.

Comment by benjamin_todd on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T23:09:16.005Z · score: 3 (2 votes) · EA · GW

Great! I was wondering if this might be it.

Comment by benjamin_todd on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T23:08:25.271Z · score: 4 (3 votes) · EA · GW

I think in practice people work on it for both reasons depending on their values.

Comment by benjamin_todd on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T21:17:04.648Z · score: 17 (7 votes) · EA · GW

Thanks for this analysis. If there's time for more, I'd be keen to see something more focused on 'level of contribution' rather than subscriber vs. identifier. I'm not too concerned about whether someone identifies with EA, but rather with how much impact they're able to have. It would be useful to know which sources are most responsible for the people who are most contributing.

I'm not sure what proxies you have for this in the survey data, but I'm thinking ideally of concrete achievements, like working full-time in EA; or donating over $5,000 per year.

You could also look at how dedicated to social impact they say they are combined with things like academic credentials, but these proxies are much more noisy.

One potential source of proxies is how involved someone says they are in EA, but again I don't care about that so much compared to what they're actually contributing.

Comment by benjamin_todd on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T21:09:33.389Z · score: 8 (4 votes) · EA · GW

Hi there, just a quick thought on the cause groupings in case you use them in future posts.

Currently, the post notes that global poverty is the cause most often selected as the top priority, but it should add that this is sensitive to how the causes are grouped, and there's no single clear way to do this.

The most common division we have is probably these 4 categories: global poverty, GCRs, meta and animal welfare.

If we used this grouping, then the identifiers would report:

GCRs: 28%

Global poverty: 27%

Meta: 27%

Animal welfare: 10%

(Plus Climate change: 13%; Mental health: 4%)

So, basically the top 3 areas are about the same. If climate change were grouped into GCRs, then GCRs would go up to 41% and be the clear leader.

Global poverty is a huge area that receives hundreds of billions of dollars of investment, and could arguably be divided into health, economic empowerment (e.g. cash transfers), education, policy-change etc. That could also be an option for the next version of the survey.

I'm glad we have the finer grained divisions in the survey, but we have to be careful about how to present the results.

Comment by benjamin_todd on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-11-26T20:54:03.907Z · score: 4 (3 votes) · EA · GW

I think it might be clearer to break up the Bay Area into SF, East Bay, North Bay and South Bay. These locations all take about an hour to travel between, which makes them comparable to London, Oxford and Cambridge (even Bristol). Including such a large area as a single category makes it much easier to rank top. Wikipedia reports that London is about 600 square miles, while the nine-county Bay Area is 7000. I appreciate that what counts as a city is not clear, but I'd definitely say the Bay Area is more than one city. (Alternatively, we could group 'Loxbridge' as one category.)

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-25T09:43:57.213Z · score: 11 (4 votes) · EA · GW

I agree it's better to give the most concrete suggestions possible.

As I noted right below this quote, we do often provide specific advice on ‘Plan B’ options within our career reviews and priority paths (i.e. nearby options to pivot into).

Beyond that and with Plan Zs, I mentioned that they usually depend a great deal on the situation and are often covered by existing advice, which is why we haven’t gone into more detail before. I’m skeptical that what EAs most need is advice on how to get a job at a deli. I suspect the real problem might be more an issue of tone or implicit comparisons or something else. That said, I’m not denying this part of the site couldn’t be greatly improved.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-25T09:40:41.011Z · score: 9 (2 votes) · EA · GW
One point of factual disagreement is that I think good general career advice is in fact quite neglected.

I definitely agree with you that existing career advice usually seems quite bad. This was one of the factors that motivated us to start 80,000 Hours.

it seems like probably I and others disappointed with the lack of broader EA career advice should do the research and write some more concrete posts on the topic ourselves.

If we thought this was good, we would likely cross-post it or link to it. (Though we’ve found working with freelance researchers tough in the past, and haven't accepted many submissions.)

I think my hope for better broad EA career advice may be better met by a new site/organization rather than by 80k.

Potentially, though I note some challenges with this and alternative ideas in the other comments.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-25T09:39:10.530Z · score: 15 (6 votes) · EA · GW

Hi Jamie,

Here are some additions and comments on some of your points.

If I remember correctly, the EA survey suggests that 80K is an important entry point for lots of people into EA.

It’s true that this means that stakes for improving 80,000 Hours are high, but it also seems like evidence that 80,000 Hours is succeeding as an introduction for many people.

3) We talk about EA movement-building not being funding constrained. If that's the case, then presumably it'd be possible to create more roles, be that at 80K or at new organisations.

Unfortunately lack of funding constraints doesn’t necessarily mean that it’s easy to build new teams. For instance, the community is very constrained by managers, which makes it hard to both hire junior people and set up new organisations. See more here.

Research/website like 80K's current career profile reviews, but including less competitive career paths (perhaps this would need to focus on quantity over quality and "breadth" over depth)

Note that we have tried this in the past (e.g. allied health, web design, executive search), but those profiles took a long time to write, never got much attention, and as far as we’re aware haven’t caused any plan changes.

I think it would also be hard to direct people to the right source of advice between the two orgs.

It seems better to try to make some quick improvements to 80,000 Hours, such as adding a list of very concrete but less competitive options to the next version of our guide. (And as noted, there are already options in earning to give and government.)

Research/website/podcasts etc like 80K's current work, but focusing on specific cause areas (e.g. animal advocacy broadly, including both farmed animals and wild animals)

Agree - I mention this in another comment.

Regular career workshops

Yes, these are already being experimented with by local effective altruism groups. However, note that there is a risk that if these become a major way people first engage with effective altruism, they could put off the people best suited for the narrow priority paths. As noted, this seems to have been a problem in our existing content, which is presumably more narrow than these new workshops would be. They’re also quite challenging to run well - often someone able to do this independently can get a full-time job at an existing organisation.

One-on-one calls seem safer, and funding someone to work independently doing calls all day seems like a reasonable use of funding to me, provided they couldn’t / wouldn't get a more senior job. (Though it was tried by ‘EA Action’ once before, which was shut down.)

Research/website/podcasts etc like 80K's current work, but focused on high school age students, before they've made choices which significantly narrow down their options (like choosing their degree).

This seems pretty similar to SHIC: https://shicschools.org/

So it seems to me that either 80K should prioritise hiring more people to take up some of these opportunities, or EA as a movement should prioritise creating new organisations to take them up.

Unfortunately, we have very limited capacity to hire. It seems better that we focus our efforts on people who can help with our main organisational focus, which is the narrow vision. So, as I note above, I think these would mainly have to be done by other organisations.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-25T09:34:32.824Z · score: 13 (11 votes) · EA · GW
So it ought not to surprise anyone that a huge fraction of them come away demoralized.

I want to quickly point out that we don’t have enough evidence to conclude that ‘a huge fraction’ are demoralized. We have several reports and some intuitive reasons to expect that some are. We also have plenty of reports of people saying 80,000 Hours made them more motivated and ambitious, and helped them find more personally meaningful and satisfying careers. It’s hard to know what the overall effect is on motivation.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-24T06:50:19.902Z · score: 4 (6 votes) · EA · GW

Hi Milan, this is a very quick response. The short answer is that we have considered it, but don't intend to do it in the foreseeable future.

The main reason is that it would take up the time of one of our key managers, and we think it would be lower impact than our current activities for the reasons listed in the main post. I also think our donors would be less keen on it, and it seems hard to make work in practice - how would you tell people which one they should use?

My guess is that it might be better for a new team to work on. One framing might be to approach the problem from a different angle, such as making a guide to contributing to politics part-time (e.g. neglected bipartisan bills you could call your congressperson about), or putting more emphasis on the GWWC pledge again. It would also be cheaper to start by just publishing a more concrete list of less competitive career options.

A slightly different project that might be worth someone taking on is an organisation focusing on global health or factory farming career advice.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-24T03:47:11.376Z · score: 8 (6 votes) · EA · GW

Hi Milan, it would depend a lot on the details, but if it were mainly due to us and they were donating to the EA Long-term Fund or equivalent, then it would roughly be a rated-10 plan change, which would mean it's in the top 150 of all time.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-23T22:02:11.913Z · score: 2 (1 votes) · EA · GW

On 2), note there’s discussion about this here.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-23T22:00:44.516Z · score: 22 (10 votes) · EA · GW

It’s not our intention to give this impression - finding someone who donates $60k per year would be seen as a significant success within the team. We also highlight an example of someone doing exactly this (working at Google and earning to give) in our key career guide article on high impact jobs. I’d be curious to hear about anything we’ve done to exacerbate the problem other than our discussions of certain very competitive paths, which I admit can be demoralizing in themselves.

I think the aspect of our advice most relevant to people who have ‘top half of Oxford’ credentials is the list of priority paths. However, even within this list of our highest priorities, there are options that don’t require that kind of academic background, such as government jobs and operations positions. We know lots of people without this background currently succeeding in these roles. What’s more, on that page we also highlight five broader paths that a significant fraction of college graduates could pursue, as well as a general step-by-step process for coming up with options.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-23T21:58:14.807Z · score: 33 (12 votes) · EA · GW

Here are some responses to your specific points:

while their career reviews provide an “ease of competition” rating on a 1-5 scale, there’s no explanation how they arrive at these ratings or what a given rating means concretely, and what information they provide on standards and expectations in different fields is frustratingly vague.

We aim to assess entry criteria, predictors of personal fit and how to test out your fit within each career review, although we admittedly do a substantially better job of this in our ‘medium depth’ reviews than in our ‘shallow’ ones. The score, along with the ‘key facts on fit’ section in the summary of each profile, is just a very quick summary of that material. For instance, you mentioned working out whether to continue with academia, and we have about four pages on assessing personal fit in academia in the relevant career review.

while 80,000 Hours occasionally mentions in passing the value of having a backup plan, their website contains almost no concrete advice or recommendations about what such a plan might entail or how to make one.

We encourage people to make a ranking of options. Their Plan B is then a less competitive option than their Plan A, which they can switch into if Plan A doesn’t work out, while Plan Z is how they can get back on their feet if a lot goes wrong. We lead people through a process to come up with their Plan B and Plan Z in our career planning tool.

What a person’s Plan B and Plan Z will be depends a great deal on their skills, interests, and existing resources, and on what Plan A they are aiming for. For that reason, in our profiles on particular career steps, we try to discuss what the highest-value roles to aim for might be, and also what other paths they open up, for example in our page on studying economics. Having said that, unfortunately (being a small team) we are not able to discuss the specifics of the vast majority of career paths. This is less bad than it could be, because Plan Zs are likely to involve building up savings or taking jobs which aren’t peculiar to effective altruists, and so are likely to be covered by other careers advice.

To ameliorate this somewhat, we also often discuss donating as a great option which allows most people to have a huge impact. While we think it’s crucial to find the most important skill bottlenecks and work out how people can train to fill them, that shouldn’t be taken to imply that we think donating to effective charities is not important.

Somebody coming to the 80000hours.org front page might start by reading the “Career Guide”, where in the section on career capital they would read that the most impactful years of one’s life are probably one’s 40s, and that in the meantime it’s important to build up broad flexible skills since the most important opportunities and cause areas will likely be unpredictably different in the future. However, buried in the 2017 Annual Report where a new reader is unlikely to find it is a more recent discussion reaching the exact opposite conclusion, that one should focus exclusively on narrow career capital that can apply directly to the things that seem most important right now.

I agree this is a mistake, for which I apologise. We’ve been working on an update to our content on career capital this year, but haven’t been able to finish it due to the lack of writing capacity. I agree we should have flagged this at the top of the career capital article, and I’ve now added a note there. We’ll likely add it to our mistakes page too. Thank you for prompting us on this.

Other widely-linked parts of the website seem neglected or broken entirely; for example no matter what answers I put into the career quiz it tells me to become a policy-focused civil servant in the British government (having neglected to ask whether I’m British)

I agree there are some major problems with the career quiz. It was last reviewed in 2016 and no longer reflects our current views - we’ve therefore removed most links to it from the website (dramatically reducing traffic), and added a note on the page to the effect that it doesn’t reflect our views. We're considering whether to remove it altogether when we redesign our site next year. In the meantime, we recommend people use the general process for generating options listed here.

For what it's worth, the civil service only stays at the top if you select 'no' to working in the most competitive fields. We do think this can be a high-impact but less competitive option, but it would obviously be better to have more such options, and better-tailored ones. I agree that sending people of all nationalities to our UK civil service career review is confusing, though we do think many of the general points are relevant to working in government in other countries.

We built the tool to be a fun way of thinking about new options, and to act as a springboard for further research. We hoped that this would be evident from the format (only asking 6 questions). Unfortunately, we failed to anticipate how people would in fact use it.

Many of my friends report that reading 80,000 Hours’ site usually makes them feel demoralized, alienated, and hopeless.

We deeply regret this. Unfortunately, as noted, we also often hear the opposite reaction. I think it’s going to be difficult to be helpful for our whole potential audience. With the narrowing of our focus, we’ve been putting a lot of time into thinking about ways to make it clearer who will find our content most useful, and to avoid demoralising others. We’re sad that we haven’t yet succeeded in striking this balance, and are keen for more ideas on this front. We think there are far more important, high-impact jobs in the world than we could ever cover, and at root we want to convey a message of hope: that by thinking carefully about our career decisions, we really can help others and build a better future.

Comment by benjamin_todd on Towards Better EA Career Advice · 2018-11-23T21:49:18.124Z · score: 63 (25 votes) · EA · GW

Hi lexande,

Thank you for taking the time to post this, we’re keen for the feedback. We hate the idea that we’ve contributed to people feeling demotivated about their careers, particularly because we believe that most people living in rich countries have the power to do an immense amount of good. Saving a life is the kind of incredible feat that most people wouldn’t expect ever to be able to do. But if we donate under $10,000 over our lifetime to AMF, we can do the equivalent of that.

That said, we also want to highlight ways people might be able to achieve even more. This includes highlighting some extremely competitive but high-impact jobs, and we understand that this may be demotivating for many of our readers. We wish we knew how to do a better job of communicating our priorities without having this effect.

I think the core issue behind your comments might be that there are two visions for 80,000 Hours.

One vision is a broad ‘social impact career advice’ organisation that could be used by a significant fraction of graduates choosing their careers, helping a large number of people have more impact whether or not they’re a fit for our highest priority areas and roles.

Another vision is to focus on solving the most pressing skill bottlenecks in the world’s most pressing problems. Given our current view of global priorities, this likely involves working with a smaller number of people.

In the second vision, we would talk more about cutting-edge ideas in effective altruism, while in the first, we talk more about regular career advice - how to get a job, how to work out what you’re good at, and so on - and a wider range of jobs.

It seems like one thrust of your post is that we should focus more on the broader ‘social impact career advice’ vision.

We currently think the narrower ‘key skill bottleneck’ vision will have more impact. There’s a lot going into this decision, some of which is mentioned in our last annual review. One factor is that it seems easier to get and track a small number of plan changes in crucial areas than a much larger number of smaller shifts. One reason for this is that the problems we most prioritise seem most constrained by a small number of people filling key roles and providing certain types of expertise (discussed more here).

The narrower vision is also more neglected, since no-one else does it, while there is already lots of general careers advice out there. You say:

Most people starting careers suffer from extremely poor and incomplete information about the necessary and sufficient conditions for getting various jobs. This seems to me to be the most important source of inefficiency/market failure in the labor market and suboptimal (both altruistically and selfishly) career choices generally.

I think the biggest sources of altruistic inefficiency are not considering the importance of choosing the right problem area, not knowing the key bottlenecks within each area, being scope-blind about choice of intervention, and other gaps like these. The information that’s currently available about what it takes to get different jobs may not be great, but it’s already out there and can be provided by people outside of the effective altruism community. I don’t think 80,000 Hours should try to compete with normal careers advice when the core ideas in effective altruism haven’t been properly developed and written up, something that almost no-one else is going to do.

These two directions put us in a difficult position. Given our limited resources, if we go narrower, then we’ll make our site worse for the broader audience, and vice versa. We’ve received a lot of feedback in the opposite direction, where people who are more involved in effective altruism have said we weren’t able to help them, or people in a great position to enter our priority paths told us that the advice seemed too simplistic and they stopped reading. It’s already challenging even if we just have one audience, since each person needs different advice at different stages in their career and in different situations.

A particularly tough aspect of the situation is that I think a lot of our content is relevant to the broader audience (such as most articles in the career guide), but mentioning the narrower material (such as our list of priority paths) sometimes demoralises others.

Likewise, I expect that a broader range of people can enter our priority paths than you seem to suggest. For instance, you don’t need to be in the “top half of Oxford” / Cambridge / Ivy League to get a relevant job in government, which I think is often higher impact than earning to give, which is in turn higher impact than most ‘social impact’ jobs. But mentioning the narrower options often causes people to conclude that everything we list isn’t suitable.

Another issue is that we’ve been narrowing our focus over the last few years, but the site started out broader, and still has some legacies from that time (e.g. the career quiz). We’re steadily fixing these but there’s a long way to go. Likewise, we’d like to make it clearer who our target audience is, and we're currently working on a major redraft of the front page and career guide which will address this.

Unfortunately, in part due to being held up by the redraft, we haven’t yet managed to adequately convey to the community that our focus has narrowed. Hopefully this will also become clearer after we redraft the site.

Doing both visions well would require substantially more capacity than we currently have. In the meantime, we aim to finish the redraft as soon as possible to make our intended audience really clear to readers. We will also continue thinking through and testing new ways to try to communicate both that we think that almost all university graduates in wealthy countries can have an incredible impact, and also the importance of us each considering whether and how we could be doing even more good. If you have thoughts on how we can strike this balance, and in particular do so in a way which is supportive and encouraging, please let us know.

Comment by benjamin_todd on [Link] Introduction to Cause Prioritisation · 2018-11-23T20:53:01.599Z · score: 6 (5 votes) · EA · GW

Hey there, I'm a bit surprised you didn't mention some of the existing introductions to this topic including:

1. CEA's introduction to EA, which includes a section on choosing a cause:

https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/

And more in-depth articles within the handbook, such as:

https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/

2. The chapter on this topic in Doing Good Better

3. Some of GiveWell and Open Phil's relevant posts, such as:

https://blog.givewell.org/2012/05/02/strategic-cause-selection/

4. 80k's introduction and video as well as other relevant articles:

https://80000hours.org/career-guide/most-pressing-problems/

https://www.youtube.com/watch?v=1xsR0XBwyo4

https://80000hours.org/problem-profiles/

https://80000hours.org/career-guide/world-problems/

Comment by benjamin_todd on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-23T07:47:58.816Z · score: 4 (2 votes) · EA · GW

The other additional analysis which would be great is if you could identify the 20% of the respondents who seem most involved and dedicated, and then repeat the analysis by source for this sub-group. This would give us some sense of the quality as well as the scale of the reach of different sources.

Comment by benjamin_todd on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-23T07:45:32.636Z · score: 4 (2 votes) · EA · GW

That makes sense. I agree none of this is simple.

Comment by benjamin_todd on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-20T09:06:42.432Z · score: 6 (3 votes) · EA · GW

Also agree, though presumably some fraction is zero-sum.

Comment by benjamin_todd on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-20T02:19:56.961Z · score: 5 (3 votes) · EA · GW

That's a good point. We have worried about this in setting our own salary policy, but I forgot to mention it in the post. I've added to the original version. (Edit: Although we worried about this consideration, we didn't end up using it in setting our policy, and don't recommend that other orgs do. I've also removed the new sentence from the original version.)

Comment by benjamin_todd on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-20T01:14:36.690Z · score: 14 (7 votes) · EA · GW

Hey David and everyone else involved, thank you for the analysis! This is useful data for us.

A quick request for next year: it would be great to keep working on the categories to get fewer 'other' responses and reduce overlap.

Approximately 19% of the open comment responses mentioned Podcasts (typically the Sam Harris or Joe Rogan podcasts), 15% mentioned Books, 10% mentioned Articles (online or in a newspaper), 9% mentioned a university or school Course, 7% a Blog, and 4% a Talk. 5% and 6% referred to an EA org that had already been included in the fixed response options (e.g. GWWC) and to a Personal Contact respectively.

The Sam Harris and Joe Rogan podcasts were done by Will as part of the promotion campaign for DGB while he was working at CEA/80k so could arguably be coded as DGB/CEA/80k. Presumably some of the books / articles / talks are also other materials produced by the organisations or press coverage they sought out - does that seem right?

Likewise, maybe 'search' and 'facebook' should be removed as categories, because they're channels you use to find the other content listed. Presumably everyone who found out about EA through 'facebook' likely saw a post by a friend, so should be a personal referral, or saw a post by one of the orgs, so should be coded as an org.

I'm also surprised to see https://www.effectivealtruism.org/ isn't listed - do you know what happened there?

Comment by benjamin_todd on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-25T00:41:29.808Z · score: 3 (3 votes) · EA · GW

80,000 Hours would likely be supportive of another organisation specialising in global health or factory farming career advising. I'd prefer to divide up by problem areas, rather than long vs. short term. We plan to write more about this.

Comment by benjamin_todd on Why more effective altruists should use LinkedIn · 2018-10-24T01:41:21.416Z · score: 0 (0 votes) · EA · GW

Unfortunately they've removed the ability to search within groups like the 80k group. You can still, however, do a keyword search of profiles.

Comment by benjamin_todd on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-20T19:24:55.909Z · score: 1 (1 votes) · EA · GW

Yes, I apologise for that, we were talking at cross purposes.

Comment by benjamin_todd on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-20T05:31:03.474Z · score: 0 (0 votes) · EA · GW

Hey Alex,

I'm feeling a bit set upon here. I was talking about a different topic (why people hire slowly) and it seems like we got our wires crossed and are now debating the value of a marginal hire. Looking back, I see how my earlier comment led us off in a confusing direction.

If we're discussing the value of a marginal hire, I totally agree that the survey figures DON'T include the costs of hiring someone in the first place. That's why I brought up the ex-ante ex-post distinction in the first place.

This means that someone considering working at an EA org should use a lower figure for the estimate of their value-add (we agree). In particular, they should subtract the opportunity costs of the time spent hiring them. (This varies a lot with the situation, but one month of senior staff time seems like a reasonable ballpark to me.)

However, just to be clear, I don't think they should subtract the opportunity costs of senior staff time spent on their ongoing management, since I think the natural interpretation of the survey question includes these. (If a new hire could raise $1m of donations, but would take up management time that could have raised $800k otherwise and has a salary of $100k, it would be odd for the org to say that they'd need to be compensated with $1m if the hire disappeared. Rather, the answer should be $100k. I expect most orgs were aiming to include these costs, though of course they might not have made a good estimate. This is what I thought you were talking about when I said I think orgs partially take opportunity costs into account.)

Where does this leave the survey figures? If we very roughly estimate that a month of senior staff time at an org is worth 5x a month of junior staff time, then the one month spent hiring would reduce the value of a junior hire over three years by roughly 5/36 ≈ 14%.
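As a quick sanity check, here is a sketch of the back-of-the-envelope numbers above. The dollar amounts are just the hypothetical from my parenthetical example, and the 5x senior-to-junior ratio is the rough assumption stated in the previous paragraph, not a measured figure:

```python
# Hypothetical hire from the parenthetical example above.
donations_raised = 1_000_000          # what the new hire raises
management_opportunity_cost = 800_000 # what the management time would have raised otherwise
salary = 100_000

# Net value the org loses if the hire disappeared, after accounting for
# freed-up management time and saved salary.
net_value = donations_raised - management_opportunity_cost - salary
print(net_value)  # 100000, matching the "$100k" answer above

# Rough ex-ante discount for hiring costs: ~1 month of senior staff time,
# assumed to be worth 5x junior time, against a 36-month (3-year) tenure.
senior_to_junior_ratio = 5
hiring_cost_months = 1
tenure_months = 36
discount = senior_to_junior_ratio * hiring_cost_months / tenure_months
print(round(discount * 100))  # 14 (i.e. 5/36 ≈ 14%)
```

Changing any of the assumed inputs (say, two months of senior time, or a shorter expected tenure) scales the discount proportionally.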