Posts

How much does performance differ between people? 2021-03-25T22:56:32.660Z
Careers Questions Open Thread 2020-12-04T12:05:34.775Z
A new, cause-general career planning process 2020-12-03T11:35:38.121Z
What actually is the argument for effective altruism? 2020-09-26T20:32:10.504Z
Judgement as a key need in EA 2020-09-12T14:48:20.588Z
An argument for keeping open the option of earning to save 2020-08-31T15:09:42.865Z
More empirical data on 'value drift' 2020-08-29T11:44:42.855Z
Why I've come to think global priorities research is even more important than I thought 2020-08-15T13:34:36.423Z
New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z
80,000 Hours annual review released 2017-12-27T20:31:05.395Z
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z
If you want to disagree with effective altruism, you need to disagree one of these three claims 2016-09-25T15:01:28.753Z
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z
Why we need more meta 2015-09-26T22:40:43.933Z
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z
A new response to effective altruism 2015-09-12T04:25:43.242Z
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z
The career questions thread 2015-06-20T02:19:07.131Z
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z
Two interviews with Holden 2014-10-03T21:44:12.163Z
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z
Show me the harm 2013-08-06T04:00:52.000Z

Comments

Comment by Benjamin_Todd on Cash Transfers as a Simple First Argument · 2021-04-18T16:40:06.918Z · EA · GW

Hi there, I agree it's an interesting opening argument. I was wondering if you had seen this article before, which takes a similar approach: https://80000hours.org/career-guide/making-a-difference/

One concern about this approach is that I think it can seem obviously "low leverage" to the types of people focused on entrepreneurship, policy change and research (some of the people we most want to appeal to). It can also give them the impression that EA is mainly about 'high confidence' giving rather than 'high expected value' giving, which is already one of the most common misconceptions out there.

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T19:48:01.029Z · EA · GW

I tried to sum up the key messages in plain language in a Twitter thread, in case that helps clarify.

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T19:46:54.982Z · EA · GW

I think that's a good summary, but it's not only winner-takes-all effects that generate heavy-tailed outcomes.

You can get heavy-tailed outcomes if performance is the product of two normally distributed factors (e.g. intelligence x effort).

It can also arise from the other factors that Max lists in another comment (e.g. scalable outputs, complex production).

Luck can also produce heavy-tailed outcomes if it amplifies outcomes or is itself heavy-tailed.
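
For intuition, here's a minimal simulation sketch of the multiplicative point (my own illustration – the factor distributions are made-up assumptions, not numbers from this discussion). It compares performance modelled as the sum vs. the product of the same two normally distributed factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two positive 'performance factors', e.g. intelligence and effort.
# N(1, 0.5) clipped at zero is purely an illustrative choice.
f1 = np.clip(rng.normal(1.0, 0.5, n), 0, None)
f2 = np.clip(rng.normal(1.0, 0.5, n), 0, None)

additive = f1 + f2        # performance as a sum of factors
multiplicative = f1 * f2  # performance as a product of factors

def top_share(x, q=0.99):
    """Fraction of total output produced by the top 1% of performers."""
    cutoff = np.quantile(x, q)
    return x[x >= cutoff].sum() / x.sum()

print(f"top-1% share, additive:       {top_share(additive):.1%}")
print(f"top-1% share, multiplicative: {top_share(multiplicative):.1%}")
```

In runs like this, the product concentrates noticeably more of the total output in the top 1% than the sum does, even though the underlying factors are identical.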

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T15:30:55.123Z · EA · GW

This is cool.

One theoretical point in favour of complexity is that complex production often looks like an 'O-ring' process, which will create heavy-tailed outcomes.
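
For reference, the production function in Kremer's original O-ring paper has roughly this multiplicative form (my notation), where $q_i$ is the probability that task $i$ is completed successfully, $k$ is capital, $B$ is output per worker, and $n$ is the number of tasks:

$$y = k^{\alpha}\left(\prod_{i=1}^{n} q_i\right) nB$$

Because the $q_i$ multiply rather than add, small per-task differences in quality compound across tasks, which is what generates the heavy tail.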

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-28T19:40:29.451Z · EA · GW

On your main point, this was the kind of thing we were trying to make clearer, so it's disappointing that it hasn't come through.

Just on the particular VC example:

I'm skeptical that you can do a good job of predicting outcomes ex ante. After all, that's what VCs would want to do, and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.

Most VCs only pick from the top 1-5% of startups. E.g. YC's acceptance rate is 1%, and very few startups they reject make it to series A. More data on VC acceptance rates here: https://80000hours.org/2014/06/the-payoff-and-probability-of-obtaining-venture-capital/

So, while I think it's mostly luck once you get down to the top 1-5%, there are a lot of predictors before that.

Also see more on predictors of startup performance here: https://80000hours.org/2012/02/entrepreneurship-a-game-of-poker-not-roulette/

Comment by Benjamin_Todd on What Makes Outreach to Progressives Hard · 2021-03-16T12:08:34.210Z · EA · GW

Thank you for this summary!

One thought that struck me is that most of the objections seem most likely to come up in response to 'GiveWell-style EA'.

I expect the objections that would be raised to a longtermist-first EA would be pretty different, though with some overlap. I'd be interested in any thoughts on what they would be.

I also (speculatively) wonder if a longtermist-first EA might ultimately do better with this audience. You could do a presentation that starts with climate change and then points out that the lack of political representation for future generations is a much more general problem.

In addition, longtermist EAs favour hits-based giving, which makes it clear that policy change is among the best interventions while acknowledging that effects are very hard to measure. That seems more palatable than an approach highly focused on measuring narrow metrics.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-14T14:01:25.001Z · EA · GW

I agree - but my impression is that they consider track record when making the forward-looking estimates, and they also update their recommendations over time, in part drawing on track record. I think "doesn't consider track record" is a straw man, though there could be an interesting argument about whether more weight should be put on track record as opposed to other factors (e.g. intervention selection, cause selection, team quality).

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T16:58:45.242Z · EA · GW

Impact = money moved * average charity effectiveness. FP tracks money moved to their recommended charities, and this is their published research on the effectiveness of those charities and why they recommended them.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T12:40:28.458Z · EA · GW

We make the impact evaluation I note above available to donors (and our donors also do their own version of it). We also publish top-line results publicly in our annual reviews (e.g. number of impact-adjusted plan changes), but don't publish the case studies since they involve a ton of sensitive personal information.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T12:37:04.632Z · EA · GW

https://founderspledge.com/stories/2020-research-review-our-latest-findings-and-future-plans

Comment by Benjamin_Todd on Feedback from where? · 2021-03-11T21:51:14.767Z · EA · GW

Just a quick comment that I don't think the above is a good characterisation of how 80k assesses its impact. Describing our whole impact evaluation would take a while, but some key elements are:

  • We think impact is heavy-tailed, so we try to identify the most high-impact 'top plan changes'. We do case studies of what impact they had and how we helped. This often involves interviewing the person, and also people who can assess their work. (Last year these interviews were done by a third party to reduce desirability bias.) We then do a rough Fermi estimate of the impact.

  • We also track the number of a wider class of 'criteria-based plan changes', but then take a random sample and make Fermi estimates of impact so we can compare their value to the top plan changes.

If we had to choose a single metric, it would be something closer to impact-adjusted years of extra labour added to top causes, rather than the sheer number of plan changes.

We also look at other indicators like:

  • There have been other surveys of the highest-impact people who entered EA in recent years, evaluating what fraction came from 80k, which lets us estimate the percentage of the EA workforce that came via 80k.

  • We look at the EA survey results, which lets us track things like how many people are working at EA orgs and entered via 80k.

We use number of calls as a lead metric, not an impact metric. Technically it's the number of calls with people who made an application above a quality bar, rather than the raw number. We've checked and it seems to be a proxy for the number of impact-adjusted plan changes that result from advising.

This is not to deny that assessing our impact is extremely difficult, and ultimately involves a lot of judgement calls - we were explicit about that in the last review - but we've put a lot more work into it than the above implies - probably around 5-10% of team time in recent years.

I think similar comments could be made about several of the other examples, e.g. GWWC also tracks dollars donated each year to effective charities (now via the EA Funds) and total dollars pledged. They track the number of pledges as well, since that's a better proxy for the community building benefits.

Comment by Benjamin_Todd on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T23:20:47.675Z · EA · GW

Tldr: I agree the 'top of the funnel' (i.e. how many people are reached each year) seems not to be growing. This was at least in part due to a deliberate shift in strategy. I think the 'bottom' of the funnel (e.g. money and people focused on EA) is still growing. Eventually we'll need to get the top of the funnel growing again, and people are starting to focus on this more.

Around 2015, DGB and The Most Good You Can Do were launched, both with significant media attention aimed at reaching lots of people (e.g. two TED talks). 80k was also focused on reaching more people.

After that, the sense was that the greater bottleneck was taking all these newly interested people (and the money from Open Phil) and making sure that resulted in actually useful things happening, rather than reaching even more people.

(There was also some sense of wanting to shore up the intellectual foundations, and make sure EA is conveyed accurately, rather than as "earn to give for malaria nets", which seems vital for its long-term potential. There was also a shift towards niche outreach rather than mass media - since mass media seems better for raising donations to global health but less useful for something like reducing GCBRs, and although it's good at reaching lots of people, it wasn't as effective as the niche outreach.)

E.g. in 2018, 80k switched to focusing on our key ideas page and podcast, which are more about making sure already-interested people understand our ideas than about reaching new people; Will focused on research and niche outreach, and is now writing a book on longtermism. GWWC was scaled down, and Toby wrote a book about existential risk.

This wasn't obviously a mistake, since I think that if you track 'total money committed to EA' and 'total number of people willing to change career (or take other significant steps)', they're still growing reasonably (perhaps ~20% per year?), and these metrics are closer to what ultimately matters. (Unfortunately I don't have a good source for this claim and it relies on judgement calls, though Open Phil's resources have increased due to the increase in Dustin Moskovitz's net worth; several other donors have made a lot of money; the EA Forum is growing healthily; 80k is getting ~200 plan changes per year; the student groups keep recruiting people each year; etc.)

One problem is that if the top of the funnel isn't growing, then eventually we'll 'use up' the pool of interested people who might become more dedicated, so it'll turn into a bottleneck at some point.

And all else equal, more top of funnel growth would be better, so it's a shame we haven't done more.

My impression is that people are starting to focus more on growing the top of the funnel again. However, I still think the focus is more on niche outreach, so you'd need to track a metric more like 'total number of engaged people' to evaluate it.

Comment by Benjamin_Todd on Be Specific About Your Career · 2021-02-24T22:15:28.422Z · EA · GW

I agree it's important to do both, and people often neglect to understand the day-to-day reality of a job, and to zoom in on very concrete opportunities rather than thinking about broad paths.

You could see it as a form of near vs. far mode bias.

Comment by Benjamin_Todd on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-23T22:52:50.594Z · EA · GW

That's true, but we considered switching at several points later on (and the local groups did in fact switch).

Comment by Benjamin_Todd on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-21T22:18:45.405Z · EA · GW

This makes sense - seems like a good thing to think about.

One other point you didn't mention: picking another name reduces brand risks for EA, which means you can act a bit more independently. (E.g. once a local group ran an event that got picked up by national newspapers.)

You can also make your group name about something more specific. Another problem with the name 'effective altruism' is that it's broad and abstract, which makes it hard to explain.

For instance, if you want your group to be practical and aimed at students, picking something about careers / finding a job can be very appealing. (Though this wouldn't be a good choice if you wanted to be more niche and intellectual.) This was one of the reasons why we picked GWWC and 80k as names in the early days, rather than leading with effective altruism.

  1. The direct translations of Effective Altruism can sound a bit forced. The Dutch “Effectief Altruïsme” is a case in point. This can make you seem a bit “out of touch”. We checked with German and Spanish EAs and they confirm that the direct translations sound rather awkward in their native tongue (a Polish EA we talked to has said that the name actually sounds really positive in Polish, so this point may not be relevant in all languages).

Agree! I write a bit more about this topic here, and talk about how the first Chinese translation wasn't ideal.

Comment by Benjamin_Todd on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-13T20:16:30.451Z · EA · GW

My personal impression is that it's a bit easier to find operations staff than before, but still difficult.

We wrote this update at the top of the original article:

Note: Though we edited this article lightly in 2020, it was written in 2018 and is now somewhat out of date.

Due to this post and other efforts, people in the effective altruism community have become more interested in working in operations, and it has also come to seem easier to fill these roles with people not already active in the community. As a result, several recent hiring rounds for these roles were successful, there are now fewer open positions, and those that open up have become more competitive.

This means that the need for more people to pursue this path is somewhat less than when we wrote the post. However, many of the major points in this post still apply to some degree, there is still a need for more people working in operations management, and we expect this need to persist as the community grows. For these reasons, we still think this is a promising path to consider. We also think the information on how to enter operations roles and how to assess your fit for them will still be valuable to people pursuing this path.

Comment by Benjamin_Todd on Let's Fund Living review: 2020 update · 2021-02-13T20:13:22.300Z · EA · GW

Yes - I was thinking of the writing on economic growth research - that's still helping to advance a hits-based approach to helping the global poor.

Comment by Benjamin_Todd on Let's Fund Living review: 2020 update · 2021-02-12T20:30:32.369Z · EA · GW

Great to see an attempt to take a hits-based approach to climate change and global health that's also open to small donors.

Comment by Benjamin_Todd on Retention in EA - Part I: Survey Data · 2021-02-11T12:57:36.865Z · EA · GW

Thanks!

Comment by Benjamin_Todd on Retention in EA - Part I: Survey Data · 2021-02-10T09:51:58.568Z · EA · GW

Hey Ben, thank you for this!

I had a quick question. With this category:

Cultural fit/cause area disagreement/interpersonal conflict

Cause area disagreement seems fairly different from interpersonal conflict to me. Is there something I've missed about how you're thinking of the categories? How might it look if you broke this group out into sub-categories?

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T22:00:59.470Z · EA · GW

A small comment: I really like the term 'scope sensitive', but I worry that it's not easily understandable to people who aren't familiar with the 'scope neglect' bias, which isn't one of the more commonly known biases (e.g. when I search on Google, I get a very short Wikipedia article first, and a LessWrong article at number 3). I wonder if 'scale sensitive' might be more immediately understood by the typical person.

On Google Ngrams, 'scale sensitive' is about 10x more common.

I'm not sure which is better - e.g. 'scope-sensitive ethics' sounds nicer to me - but it's worth thinking about more if you want to turn it into a term.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T21:56:05.443Z · EA · GW

PS If you push ahead more, you might want to frame it as also a core ethical intuition in non-utilitarian moral theories, rather than presenting it mainly as a more acceptable, watered-down utilitarianism. I think one of the exciting things about scope sensitivity is that it's a moral principle that everyone should agree with, but also has potentially radical consequences for how we should act.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T21:53:53.614Z · EA · GW

Hi Richard,

That makes sense - it could be useful to define an ethical position that's separate from effective altruism (which I've been pushing to be defined as a practical and intellectual project rather than an ethical theory).

I'd be excited to see someone try to develop it, and would be happy to try to help if you do more in this area.

In the early days of EA, we actually toyed with a similar idea, called Positive Ethics - an analogy with positive psychology - which aimed to be the ethics of how to best benefit others, rather than more discussion of prohibitions.

I think my main concern is that I'm not sure there's enough space in public awareness between EA, global priorities research and consequentialism for another field. (E.g. I also think it would be better if EA were framed more in terms of 'let's be scope sensitive' rather than the other connotations you mention.) But it could be interesting to write more about the idea to see where you end up.

Comment by Benjamin_Todd on Important Between-Cause Considerations: things every EA should know about · 2021-01-29T21:10:51.563Z · EA · GW

Just a very quick answer, but I'd be keen to have content that lists, say, the 5-10 questions that seem to have the biggest effect on cause selection, since I think that would help people think through cause selection for themselves without having to think about every big issue in philosophy.

We tried to make a simple version of this a while back here: https://80000hours.org/problem-quiz/

This was another attempt: http://globalprioritiesproject.org/2015/09/flowhart/

OP's worldview investigations are also about these kinds of considerations (more info: https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/)

I think the main challenge is that it's really difficult to pinpoint what these considerations actually are (there's a lot of disagreement), and they differ a lot by person and by which worldviews we want to have in scope. We also lack easy-to-read write-ups of many of the concepts.

I'd be interested in having another go at the list though, and I think we have much better write-ups than e.g. pre-2017. I'd be very interested to see other people's take on what the list should be.

Comment by Benjamin_Todd on What actually is the argument for effective altruism? · 2021-01-26T15:20:15.458Z · EA · GW

Hmm, in that case I'd probably see it as a denial of identifiability.

I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.

Comment by Benjamin_Todd on Promoting EA to billionaires? · 2021-01-24T18:10:20.665Z · EA · GW

I'd also add Open Philanthropy to this, since their long-term goal is to advise other philanthropists besides Dustin and Cari (and they've already done some of this).

There are also several individuals who do this, such as Will MacAskill.

Comment by Benjamin_Todd on What actually is the argument for effective altruism? · 2021-01-24T14:12:22.064Z · EA · GW

Hey, I agree something like that might be worth adding.

The way I was trying to handle it is to define 'common good' in such a way that different contributions are comparable (e.g. if common good = welfare). However, it's possible I should add something like "there don't exist other values that typically outweigh differences in the common good thus defined".

For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-24T14:00:37.227Z · EA · GW

Hey Richard, I agree with this, and I like the framing.

I want to add, though, that these are basically the reasons why we created EA in the first place, rather than promoting 'utilitarian charity'. The idea was that people with many ethical views can agree that the scale of effects on people's lives matters, and so it's a point of convergence that many can get behind, while also getting at a key empirical fact that's not widely appreciated (differences in scope are larger than people think).

So, I'd say scope-sensitive ethics is a reinvention of EA. It's a regret of mine that we've not done a great job of communicating that so far. It's possible we need to try introducing the core idea in lots of ways to get it across, and this seems like a good one.

Comment by Benjamin_Todd on Ranking animal foods based on suffering and GHG emissions · 2021-01-22T13:46:22.380Z · EA · GW

That makes sense. The point I'm trying to make, though, is that the choice of how to do the conversion from CO2/kcal to hours/kcal is probably the main thing driving the results. I'd prefer to make that clearer to users, and get them to make their own assessment.

Instead, the WPM ends up with an implicit conversion rate, which could be way different from what the person would say if asked. Given this, it seems like the results can't be trusted.

(I expect a WPM would be fine in domains where there are multiple difficult-to-compare criteria and we're not sure which criteria are most important – as in many daily decisions – but in this case, it could easily be that either CO2 or suffering should totally dominate your ranking, and it just depends on your worldview.)

Comment by Benjamin_Todd on Ranking animal foods based on suffering and GHG emissions · 2021-01-21T14:09:01.555Z · EA · GW

Cool idea!

I'm not sure I understand how it works, but isn't one of the most important parameters how someone would want to trade off 1 tonne of CO2 against 1 hour of suffering on a factory farm? I.e. I could imagine that ratio varying by orders of magnitude, which could make either the suffering or the carbon effects dominate.

It seems like your current approach is to normalize both scales and then add them. This will be implicitly making some tradeoff between the two units, but that tradeoff is hidden from the user, which seems like a problem if it's going to be one of the main things driving the results.

Moreover (apologies if I've misunderstood), as far as I can see the tradeoff is effectively made by setting whichever animal is worst to 100 on each dimension. This doesn't seem likely to give the right results to me.

For instance, perhaps I think:

Beef = 10 CO2, chicken = 1 CO2
Beef = 1 unit of suffering, chicken = 100 units of suffering

In your process, I would normalize both scales so that the worst is '100 points', meaning I'd need to increase beef to 100 and chicken to 10 on the CO2 scale.

If I weight each at 50%, I end up with overall harm scores of:

Beef = 100 + 1 = 101
Chicken = 10 + 100 = 110

However, suppose my view is that 1 tonne of CO2 doesn't result in much animal suffering, so I think 1 unit of suffering = 100 CO2.

Then, my overall harm scores would be:

Beef = 10/100 + 1 = 1.1
Chicken = 1/100 + 100 = 100.01

So the picture is totally different.

(If instead I had a human-centric view that didn't put much weight on reducing animal suffering, the picture would be reversed.)

I could try to fix the results for myself by changing the relative weighting, but given that I'm not given any units, it's hard for me to know I'm doing this correctly.
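
A minimal sketch of the contrast in code (my own illustration using the toy numbers above - I haven't seen the tool's actual implementation):

```python
# Toy numbers from the example above (arbitrary units).
co2       = {"beef": 10, "chicken": 1}    # CO2 per kcal
suffering = {"beef": 1,  "chicken": 100}  # units of suffering per kcal

# Normalize-and-add (roughly what I understand the tool to do):
# rescale each dimension so the worst animal scores 100, then sum.
max_co2, max_suf = max(co2.values()), max(suffering.values())
normalized = {a: 100 * co2[a] / max_co2 + 100 * suffering[a] / max_suf
              for a in co2}
print(normalized)  # {'beef': 101.0, 'chicken': 110.0} -> similar harm

# Explicit conversion rate: suppose 1 unit of suffering = 100 CO2.
co2_per_suffering_unit = 100
converted = {a: co2[a] / co2_per_suffering_unit + suffering[a] for a in co2}
print(converted)  # {'beef': 1.1, 'chicken': 100.01} -> chicken ~90x worse
```

The ranking barely moves under normalize-and-add, but is totally different once an explicit conversion rate is applied - which is the sense in which the hidden tradeoff drives the results.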

Comment by Benjamin_Todd on Lessons from my time in Effective Altruism · 2021-01-18T13:50:48.032Z · EA · GW

What Michael says is closer to the message we're trying to get across, which I might summarise as:

  • Don't immediately rule out an area just because you're not currently interested in it, because you can develop new interests and become motivated if other conditions are present.
  • Personal fit is really important.
  • When predicting your fit in an area, lots of factors are relevant (including interest & motivation in the path).
  • It's hard to predict fit - be prepared to try several areas and refine your hypotheses over time.

We no longer mention 'don't follow your passion' prominently in our intro materials.

I think our pre-2015 materials didn't emphasise fit enough.

The message is a bit complicated, but hopefully we're doing better today. I'm also planning to make personal fit more prominent on the key ideas page, and to give more practical advice on how to assess it, for further emphasis.

Comment by Benjamin_Todd on Everyday Longtermism · 2021-01-06T12:42:56.492Z · EA · GW

Agree - I think an interesting challenge is "when does this become better than donating 10% to the top marginal charity?"

Comment by Benjamin_Todd on [deleted post] 2021-01-03T15:35:30.797Z

I'm sympathetic to the idea of trying to make spread of impact the key idea. I think the problem in practice is that "do thousands of times more good" is too abstract to be sticky and easily understood, so it gets simplified to something more concrete.

Comment by Benjamin_Todd on [deleted post] 2021-01-03T15:30:43.485Z

Unfortunately I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.

You're right that at some point diminishing returns to more research must kick in and you should take action rather than do more research, but I think that point is well beyond "don't do something obviously bad", and more like "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".
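
A quick sketch of that percentile claim (the sigma value is my assumption purely for illustration - the comment doesn't give parameters):

```python
import numpy as np
from scipy.stats import norm

# Quantiles of a lognormal impact distribution with log-sd sigma (mu = 0).
sigma = 1.5
q = lambda p: np.exp(sigma * norm.ppf(p))

print(f"99th vs 90th percentile: {q(0.99) / q(0.90):.1f}x")  # ~4.8x
print(f"80th vs 50th percentile: {q(0.80) / q(0.50):.1f}x")  # ~3.5x
```

With a heavy enough tail, the multiplier from the 90th to the 99th percentile exceeds the one from the 50th to the 80th, even though the latter spans three times as many percentile points.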

Comment by Benjamin_Todd on A new, cause-general career planning process · 2020-12-24T12:19:52.880Z · EA · GW

Makes sense! We've neglected those categories in the last few years - would be great to make the advice there a bunch more specific at some point.

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-23T22:54:11.347Z · EA · GW

Hi Brad,

Just a very quick comment: if you'd like to get involved in politics/policy, the standard route is to try to network your way directly into a job as a staffer, on a political campaign, in the exec branch or at a think tank - though this often takes a few years (and is easier if you're in DC), so in the meantime people normally focus on building up relevant credentials and experience.

In the second category, grad school is seen as a useful step, especially if you want to be on the technocrat side rather than the party politics side of things.

Note that an MPP or a Masters in another relevant subject (e.g. Economics) is enough for most positions, and that only takes 1-2 years rather than 3-6. (PhDs are only needed if you want to be a technical expert or researcher.) It could at least be worth applying to see if you can get into a top ~5 MPP programme, or having that as a goal to potentially work towards.

A little more info here and in the links: https://80000hours.org/key-ideas/#government-and-policy https://80000hours.org/topic/careers/government/

Comment by Benjamin_Todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-14T22:08:05.379Z · EA · GW

Overall I think I'd prefer to think about "how good are various opportunities as investments in the longtermist community?", as well as "how good are various opportunities at making progress towards other proxies-for-good that we've identified?". Activities can score well on either, both, or neither of these, rather than being classed as one type or the other.

That seems like a good way of putting it, and I think I was mainly thinking of it this way (e.g. I was imagining that an opportunity could further all three categories), though I didn't make that clear (e.g. I should call them 'goals' rather than 'categories').

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-12T13:52:46.702Z · EA · GW

I'd agree with the above. I also wanted to check you've seen our generic advice here – it's a pretty rough article, so many people haven't seen it: https://80000hours.org/articles/advice-for-undergraduates/

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-12T13:27:43.998Z · EA · GW

Hi Jeremy,

Glad to hear things have gone well!

I'd say it's pretty common for people to switch from management consulting into work at EA orgs. Some recent examples: we hired Habiba Islam; GPI hired Sven Herrmann and Will Jefferson; and Joan Gass became the Managing Director of CEA a year ago.

As you can see, the most common route is normally to work in management or operations, but it doesn't need to be restricted to that.

If you want to pursue the EA orgs path, then as well as applying to jobs on the job board, follow our standard advice here (e.g. meet people, get more involved in the community).

Just bear in mind that there aren't many positions per year, so even if you're a good fit, it might take some time to find something.

For this reason, it's probably best to pursue a couple of other good longer-term paths at the same time. Another common option for someone with your background would be to do something in policy, or you could try to work in development.

With this strand in particular:

helping others think about their own giving and the financial side of maximizing donations / minimizing taxes

There is a need for this, and there's a bit of a philanthropy advisory community building up in London around Founders Pledge, Veddis and Longview Philanthropy. I'm not sure there's yet something like that in the States you could get involved in. You might be able to start your own thing, especially after working elsewhere in EA or philanthropy for 1-2 years. (Example plan: work at a foundation in SF -> meet rich tech people -> start freelance consulting for them / maybe joining up with another community member.)

Either way, I'd definitely encourage you to think hard about which impactful longer-term paths might be most promising, and what those would imply about the best next steps. You already have a lot of general career capital, and big corporate middle management experience is not that relevant to working at smaller non-profits, so I doubt continuing in the corporate sector will be the optimal path, unless you find something really outstanding.

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-12T12:49:58.962Z · EA · GW

I'd agree with what Michelle says, though I also wanted to add some quick thoughts about:

What's a good rule of thumb for letting go of your Plan A?

One simple way to think about it is that ultimately you have a list of options, and your job is to find and pick the best one.

Your Plan A is your current best guess option. You should change it once you find an option that's better.

So, then the question becomes: have you gained new information that's sufficient to change your ranking of options? Or have you found a new option that's better than your current best guess?

That can be a difficult question. It's pretty common to make a lot of applications in an area like this and not get anywhere, so it might only be a small negative update about your long-term chances (especially if you consider Denise's comment below). So it could be reasonable to continue, though perhaps changing your approach – we'd normally encourage people to pursue more than one form of next step (i.e. apply to a wider range of common next steps in political careers, and then see which approach is working best).

Another good exercise could be to draw up a list of alternative longer-term paths, and see if any seem better (in terms of potential long-term impact, career capital, personal fit and satisfaction).

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-12T12:35:55.271Z · EA · GW

I'd agree. We have this old blog post based on 4 interviews with insiders in the UK:

https://80000hours.org/2016/01/10-steps-to-a-job-in-politics/

And the first point is:

Net-work to get-work. Go to as many political and think tank events as you can. Talk to people. Ask advice. Make friends.

You might also consider options like working in the civil service or think tanks, which can lead to party politics later. Don't bet everything on the 'work for an MP' path, even though it is a common route.

Comment by Benjamin_Todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-11T22:28:49.874Z · EA · GW

I meant to define patient longtermism in terms of when you think the hinges are.

This will usually correspond to where you think the balance of object-level spending vs. investing/meta should be, but the two can come apart (e.g. uncertainty arguments could favour investing even if you think hinginess is going down).

I don't think it should be defined in terms of not having a pure rate of time preference, since the urgent longtermists don't have a pure rate of time preference either.

But overall all these definitions are pretty up in the air. It would be great if someone took a more rigorous look.

Comment by Benjamin_Todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-11T20:13:23.973Z · EA · GW

Hi Owen,

Thanks for writing this up! I agree it's really important to clarify that a lot of 'spending' is also investing. I think I should have been clearer about this in the podcast, and I worry that if patient longtermism becomes more popular without this being appreciated, it might be negative.

When I think about this myself, I divide ways to use resources into three categories:

  1. Object-level spending with the aim of impact.
  2. Meta spending that increases the resources and knowledge of aligned people
  3. Investments in financial assets or career capital.

These terms are not ideal, because "meta" sounds like I only mean spending on activities that are explicitly aimed at increasing resources (e.g. EA community building or GPR), when many things that look like object-level work are also 'meta' in this sense – for instance, publishing research papers on a new topic might look like directly trying to solve the problem, but often also helps to get more researchers working on the area.

When I was saying 0.5% to 4%, I was only talking about the object-level component. I completely agree that a patient longtermist would likely 'spend' more on the meta category.

PS One smaller thing is that it sounds like I might be a bit more skeptical than you of the typical movement building benefits of general object-level work. I agree they exist, but I think they are typically smaller than those of explicit meta work, unless someone is pretty strategic (e.g. the paper Concrete Problems in AI Safety), or in especially good cases. This is partly due to a prior of 'directly focusing on X leads to more of X'. So, I could imagine a purely patient longtermist portfolio still being pretty different (though I agree much less different than it first looks).

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-07T11:38:42.583Z · EA · GW

Hi there,

I'd say make networking your main focus.

The main reason is that I think it's just a better strategy in general for getting good jobs (e.g. because lots of the best positions are never advertised, or even created until the right person is found).

I think it's then especially important for smaller and newer organisations, which predominate in EA. This is because small/new organisations are less likely to have lots of standardised roles (the kinds that are easiest to run broad recruitment rounds for), and are less likely to have had the resources to set up a big open recruitment round (which tend to have lower returns per hour than recruiting via referrals).

Another factor is that many organisations in EA want to hire people who really care about EA, and the easiest way to do this is to hire from the community, rather than trying to figure it out in an interview.

Also, just double-checking you've seen this - it's a bit out of date but I still think it's useful: https://80000hours.org/career-guide/how-to-get-a-job/

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-06T12:03:33.855Z · EA · GW

Hi Jia,

There are a lot of options! Could you clarify which problem areas you want to work on, and which longer-term career paths you're most interested in?

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-06T12:02:46.535Z · EA · GW

Hi Will,

James is asking a good question below, but I'm going to dive into a hot take :)

If you're about to start university, I'm wondering if you might be narrowing down too early. My normal advice for someone entering college and figuring out their career would be something like:

  1. Draw up a long list of potential longer-term options.
  2. See if you can 'try out' all of these paths while there, and right after.

You can consider all the following ways to try out potential paths, which also give you useful career capital:

  1. Doing 1-2 internships.
  2. Doing a research project as part of your studies or during the summer.
  3. Going to lots of talks from people in different areas.
  4. Getting involved in relevant student societies (e.g. the student newspaper if you're interested in media)
  5. Doing side projects & self-study in free time (e.g. building a website, learning to program)
  6. Near the end, you can apply to jobs in several categories as well as graduate school, and see where you get the best offers.
  7. And even after college, you can probably then try something and switch again if it's not working.

So, going in, you don't need to have very definite plans. Besides being able to explore several paths within earning to give, I'd also encourage you to consider exploring some outside. As a starting point, some broad categories we often cover are: government and policy options, working at social impact organisations in your top cause areas (not just EA orgs), and graduate study (potentially leading into working at research organisations or as a researcher). Try to generate at least a couple of ideas within each of these.

Which subject should you study? A big factor should be personal fit – one consideration there would be whether you'll be able to get good grades in a moderate amount of time (since you can use the time saved to do the steps above and also to socialise - and many people meet their lifetime friends and partner at university). Besides that, you could consider which subject will (i) be most helpful for the longer-term options you're interested in and (ii) most keep your options open. If in doubt, applied quantitative subjects (e.g. economics and statistics) often do well on this analysis.

There are a bunch more rough thoughts here.

Comment by Benjamin_Todd on Careers Questions Open Thread · 2020-12-06T11:43:11.784Z · EA · GW

Hi Matt,

This is a common concern, though I think it's helpful to zoom out a bit – what employers most care about is that you can prove to them that you can solve the problems they want solved.

Insofar as that relates to your past experience (which is only one factor among many they'll look at), my impression[1] is that what matters is whether you can tell a good story about (i) how your past experience is relevant to their job and (ii) what steps led you to wanting to work for them.

This is partly an exercise in communication. If your CV doesn't naturally lead to the job, you might want to spend more time talking with friends / advisors about how to connect your past experience to what they're looking for.

It depends even more on whether you had good reasons for changing, and whether you've built relevant career capital despite it.

I can't evaluate the latter from here, so I might throw the question back to you: do you think you've changed too often, or was each decision good?

I'm sympathetic to the idea that early in your career, a rule of thumb for exploration like "win, stick; lose, shift" makes sense (i.e. if a career path is going ahead of expectations, stick with it; otherwise shift), and that can lead to lots of shifting early on if you get unlucky. However, you also need to balance that with staying long enough to learn skills and get achievements, which increase your career capital.


  1. How to successfully apply to jobs isn't one of my areas of expertise, though I have experience as an employer and in marketing, and have read about it some.

Comment by Benjamin_Todd on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T19:19:02.247Z · EA · GW

One quick addition is that I see Progress Studies as innovation in how to do innovation, so it's a double market failure :)

Comment by Benjamin_Todd on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T12:37:48.229Z · EA · GW

Hi Jason,

I think your blog and work is great, and I'm keen to see what comes out of Progress Studies.

I wanted to ask a question, and also to comment on your response to another question - I think this perception has been inaccurate since about 2017:

My perception of EA is that a lot of it is focused on saving lives and relieving suffering.

More figures here.

The following is more accurate:

I don't see as much focus on general economic growth and scientific and technological progress.

(Though even then, Open Philanthropy has allocated $100m+ to scientific research, which would make it a significant fraction of the portfolio. They've also funded several areas of US policy research aimed at growth.)

However, the reason for less emphasis on economic growth is that the community members who are not focused on global health are mostly focused on longtermism, and have argued that growth is not the top priority from that perspective. I'm going to try to give a (rather direct) summary of why, and would be interested in your response.

Those focused on longtermism have argued that influencing the trajectory of civilization is far higher value than speeding up progress (e.g. one example of that argument here).

Indeed, if you're concerned about existential risk from technology, it becomes unclear whether faster progress in the short term is even positive at all – though my guess is that it is.

In addition, longtermists have also argued that long-term trajectory-shaping efforts – which include reducing existential risk but are not limited to that – tend to be far more neglected than efforts to speed up economic growth.

This is partly because I think there are stronger theoretical reasons to expect them to be market failures, but also from empirical observation: e.g. the field of AI safety and work on reducing catastrophic biorisks both receive well under $100m of funding per year, and issues around existential risk receive little attention in policy. In contrast, the world spends over $1 trillion per year on R&D, and boosting economic growth is perhaps the main priority of governments worldwide.

I'd argue that the expected value of marginal work on an issue is proportional to its importance and neglectedness, and so these factors would suggest work on trajectory changes could be several orders of magnitude more effective.
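
To make the rough logic concrete, here's a toy version of that heuristic (assuming logarithmic returns, so the marginal value of a dollar scales with importance divided by current spending; the spending figures are the rough ones above, and the equal importance weights are placeholders I've made up purely for illustration):

```python
# name: (importance weight, rough current annual spending in $)
causes = {
    "trajectory-shaping work (e.g. x-risk)": (1.0, 100e6),
    "boosting economic growth (global R&D)": (1.0, 1e12),
}

# Under log returns, marginal value per dollar ~ importance / current spending.
for name, (importance, spending) in causes.items():
    print(f"{name}: marginal value per $ ~ {importance / spending:.0e}")
# Equal importance weights leave a ~10,000x gap from neglectedness alone.
```

This obviously leans entirely on the importance weights and the spending estimates, but it shows why the neglectedness gap alone can span several orders of magnitude.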

I agree Progress Studies itself is far more neglected than general work to boost economic growth, I expect that work on Progress Studies is very high-impact by ordinary standards, and I'd be happy if some more EAs worked on it, but I'd still expect marginal resources going towards research in topics like existential risk or longtermist global priorities research to be far more effective per dollar / per person.

I've never seen a proponent of boosting economic growth or Progress Studies clearly give their response to these points (though I have several of my own ideas). We tried discussing it with Tyler Cowen, but my impression of that interview was that he basically conceded that existential risk is the greater priority, defending economic growth mainly because it's something the average person is better able / more likely to contribute to.

So my question would be: why should a longtermist EA work on boosting economic growth?

Comment by Benjamin_Todd on A new, cause-general career planning process · 2020-12-04T11:57:35.316Z · EA · GW

I'm pretty tempted to break it up into standalone sections in the next version.

I agree the tools are worth doing at some point (and maybe breaking up into multiple tools). I guess you're also aware of our 'make a decision' tool that's in GuidedTrack?

I think I might be a bit more skeptical about tools though. They take a lot longer to make and edit, and some fraction of our core audience finds them a bit lame (though some love them). Personally, I'd prefer a Google Doc, which I can easily customise, where I can see everything on one page, and which I can easily share for feedback. And it seems like the youth might agree :p