Posts

Careers Questions Open Thread 2020-12-04T12:05:34.775Z
A new, cause-general career planning process 2020-12-03T11:35:38.121Z
What actually is the argument for effective altruism? 2020-09-26T20:32:10.504Z
Judgement as a key need in EA 2020-09-12T14:48:20.588Z
An argument for keeping open the option of earning to save 2020-08-31T15:09:42.865Z
More empirical data on 'value drift' 2020-08-29T11:44:42.855Z
Why I've come to think global priorities research is even more important than I thought 2020-08-15T13:34:36.423Z
New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z
80,000 Hours annual review released 2017-12-27T20:31:05.395Z
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z
If you want to disagree with effective altruism, you need to disagree one of these three claims 2016-09-25T15:01:28.753Z
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z
Why we need more meta 2015-09-26T22:40:43.933Z
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z
A new response to effective altruism 2015-09-12T04:25:43.242Z
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z
The career questions thread 2015-06-20T02:19:07.131Z
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z
Two interviews with Holden 2014-10-03T21:44:12.163Z
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z
Show me the harm 2013-08-06T04:00:52.000Z

Comments

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2021-01-26T15:20:15.458Z · EA · GW

Hmm in that case, I'd probably see it as a denial of identifiability.

I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.

Comment by benjamin_todd on Promoting EA to billionaires? · 2021-01-24T18:10:20.665Z · EA · GW

I'd also add Open Philanthropy to this, since their long-term goal is to advise other philanthropists besides Dustin and Cari (and they've already done some of this).

There are also several individuals who do this, such as Will MacAskill.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2021-01-24T14:12:22.064Z · EA · GW

Hey, I agree something like that might be worth adding.

The way I was trying to handle it is to define 'common good' in such a way that different contributions are comparable (e.g. if common good = welfare). However, it's possible I should add something like "there don't exist other values that typically outweigh differences in the common good thus defined".

For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.

Comment by benjamin_todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-24T14:00:37.227Z · EA · GW

Hey Richard, I agree with this, and I like the framing.

I want to add, though, that these are basically the reasons why we created EA in the first place, rather than promoting 'utilitarian charity'. The idea was that people with many different ethical views can agree that the scale of effects on people's lives matters, and so it's a point of convergence that many can get behind, while also getting at a key empirical fact that's not widely appreciated (differences in scope are larger than people think).

So, I'd say scope sensitive ethics is a reinvention of EA. It's a regret of mine that we've not done a great job of communicating that so far. It's possible we need to try introducing the core idea in lots of ways to get it across, and this seems like a good one.

Comment by benjamin_todd on Ranking animal foods based on suffering and GHG emissions · 2021-01-22T13:46:22.380Z · EA · GW

That makes sense. The point I'm trying to make, though, is that the choice of how to do the conversion from CO2/kcal to hours/kcal is probably the most important bit that drives the results. I'd prefer to make that clearer to users, and get them to make their own assessment.

Instead, the WPM ends up embedding an implicit conversion rate, which could be way different from what the person would say if asked. Given this, it seems like the results can't be trusted.

(I expect a WPM would be fine in domains where there are multiple difficult-to-compare criteria and we're not sure which criteria are most important – as in many daily decisions – but in this case, it could easily be that either CO2 or suffering should totally dominate your ranking, and it just depends on your worldview.)

Comment by benjamin_todd on Ranking animal foods based on suffering and GHG emissions · 2021-01-21T14:09:01.555Z · EA · GW

Cool idea!

I'm not sure I understand how it works, but isn't one of the most important parameters how someone would want to trade 1 tonne of CO2 for 1 h of suffering on a factory farm? I.e. I could imagine that ratio varying by orders of magnitude, which could make either the suffering or the carbon effects dominate.

It seems like your current approach is to normalize both scales and then add them. This will be implicitly making some tradeoff between the two units, but that tradeoff is hidden from the user, which seems like a problem if it's going to be one of the main things driving the results.

Moreover – apologies if I've misunderstood – as far as I can see, the way the tradeoff is made is effectively that whichever animal is worst is set to 100 on each dimension. This doesn't seem likely to give the right results to me.

For instance, perhaps I think:

  • Beef = 10 CO2, chicken = 1 CO2
  • Beef = 1 unit of suffering, chicken = 100 units of suffering

In your process, I would normalize both scales so the worst is '100 points', so I'd need to increase beef to 100 and chicken to 10 on the CO2 scale.

If I weight each at 50%, I end up with overall harm scores of:

  • Beef = 100 + 1 = 101
  • Chicken = 10 + 100 = 110

However, suppose my view is that 1 tonne of CO2 doesn't result in much animal suffering, so I think 1 unit of suffering = 100 CO2.

Then, my overall harm scores would be:

  • Beef = 10/100 + 1 = 1.1
  • Chicken = 1/100 + 100 = 100.01

So the picture is totally different.

(If instead I had a human-centric view that didn't put much weight on reducing animal suffering, the picture would be reversed.)

I could try to fix the results for myself by changing the relative weighting, but given that I'm not given any units, it's hard for me to know I'm doing this correctly.
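To make the hidden tradeoff concrete, here's a minimal sketch in Python (using the made-up numbers from my example above, not real figures) of how normalise-then-weight fixes an exchange rate behind the scenes:

```python
# Toy numbers from the example above: two foods scored on two dimensions.
foods = {
    "beef": {"co2": 10, "suffering": 1},
    "chicken": {"co2": 1, "suffering": 100},
}

def normalised_scores(foods, w_co2=0.5, w_suf=0.5):
    max_co2 = max(f["co2"] for f in foods.values())
    max_suf = max(f["suffering"] for f in foods.values())
    # Scaling each dimension so the worst food = 100 implicitly asserts that
    # max_co2 tonnes of CO2 is exactly as bad as max_suf units of suffering.
    return {
        name: w_co2 * 100 * f["co2"] / max_co2
              + w_suf * 100 * f["suffering"] / max_suf
        for name, f in foods.items()
    }

print(normalised_scores(foods))  # {'beef': 50.5, 'chicken': 55.0}
# With equal weights, the implied exchange rate is
# 1 unit of suffering == max_co2 / max_suf == 0.1 tonnes of CO2,
# a rate the user never explicitly chose.
```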

Comment by benjamin_todd on Lessons from my time in Effective Altruism · 2021-01-18T13:50:48.032Z · EA · GW

What Michael says is closer to the message we're trying to get across, which I might summarise as:

  • Don't immediately rule out an area just because you're not currently interested in it, because you can develop new interests and become motivated if other conditions are present.
  • Personal fit is really important.
  • When predicting your fit in an area, lots of factors are relevant (including interest & motivation in the path).
  • It's hard to predict fit – be prepared to try several areas and refine your hypotheses over time.

We no longer mention 'don't follow your passion' prominently in our intro materials.

I think our pre-2015 materials didn't emphasise fit enough.

The message is a bit complicated, but hopefully we're doing better today. I'm also planning to make personal fit more prominent on the key ideas page, and to give more practical advice on how to assess it.

Comment by benjamin_todd on Everyday Longtermism · 2021-01-06T12:42:56.492Z · EA · GW

Agree - I think an interesting challenge is "when does this become better than donating 10% to the top marginal charity?"

Comment by benjamin_todd on What’s the low resolution version of effective altruism? · 2021-01-03T15:35:30.797Z · EA · GW

I'm sympathetic to the idea of trying to make spread of impact the key idea. I think the problem in practice is that "do thousands of times more good" is too abstract to be sticky and easily understood, so it gets simplified into something more concrete.

Comment by benjamin_todd on What’s the low resolution version of effective altruism? · 2021-01-03T15:30:43.485Z · EA · GW

Unfortunately, I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.

You're right that at some point diminishing returns to more research must kick in and you should take action rather than do more research, but I think that point is well beyond "don't do something obviously bad", and more like "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".
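As a rough illustration of the lognormal point, here's a minimal sketch (sigma = 2 is an arbitrary assumption for illustration, not an estimate of the real distribution):

```python
# Quantiles of a lognormal impact distribution (illustrative only).
from scipy.stats import lognorm

dist = lognorm(s=2)  # s is the sigma of the underlying normal; 2 is assumed
q50, q80, q90, q99 = dist.ppf([0.50, 0.80, 0.90, 0.99])

print(round(q80 - q50, 1))  # ~4.4:  gain from the 50th to the 80th percentile
print(round(q99 - q90, 1))  # ~91.9: gain from the 90th to the 99th percentile
# The 90th -> 99th jump is roughly 20x the size of the 50th -> 80th jump.
```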

Comment by benjamin_todd on A new, cause-general career planning process · 2020-12-24T12:19:52.880Z · EA · GW

Makes sense! We've neglected those categories in the last few years - would be great to make the advice there a bunch more specific at some point.

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-23T22:54:11.347Z · EA · GW

Hi Brad,

Just a very quick comment: if you'd like to get involved in politics/policy, the standard route is to try to network your way directly into a job as a staffer, on a political campaign, in the executive branch, or at a think tank – though this often takes a few years (and is easier if you're in DC), so in the meantime people normally focus on building up relevant credentials and experience.

In that second category, grad school is seen as a useful step, especially if you want to be more on the technocrat side rather than the party politics side of things.

Note that an MPP or Masters in another relevant subject (e.g. Economics) is enough for most positions, and that only takes 1-2 years, rather than 3-6. (PhDs are only needed if you want to be a technical expert or researcher.) It could be at least worth applying to see if you can get into a top ~5 MPP programme, or having that as a goal to potentially work towards.

A little more info here and in the links:

https://80000hours.org/key-ideas/#government-and-policy
https://80000hours.org/topic/careers/government/

Comment by benjamin_todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-14T22:08:05.379Z · EA · GW

Overall I think I'd prefer to think about "how good are various opportunities as investments in the longtermist community?", as well as "how good are various opportunities at making progress towards other proxies-for-good that we've identified?". Activities can score well on either, both, or neither of these, rather than being classed as one type or the other.

That seems like a good way of putting it, and I think I was mainly thinking of it this way (e.g. I was imagining that an opportunity could further all three categories), though I didn't make that clear (e.g. I should have called them 'goals' rather than 'categories').

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-12T13:52:46.702Z · EA · GW

I'd agree with the above. I also wanted to check you've seen our generic advice here – it's a pretty rough article, so many people haven't seen it: https://80000hours.org/articles/advice-for-undergraduates/

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-12T13:27:43.998Z · EA · GW

Hi Jeremy,

Glad to hear things have gone well!

I'd say it's pretty common for people to switch from management consulting into work at EA orgs. Some recent examples: we hired Habiba Islam; GPI hired Sven Herrmann and Will Jefferson; and Joan Gas became the Managing Director of CEA a year ago.

As you can see, the most common route is normally to work in management or operations, but it doesn't need to be restricted to that.

If you want to pursue the EA orgs path, then as well as applying to jobs on the job board, follow our standard advice here (e.g. meet people, get more involved in the community).

Just bear in mind that there aren't many positions per year, so even if you're a good fit, it might take some time to find something.

For this reason, it's probably best to pursue a couple of other good longer-term paths at the same time. Another common option for someone with your background would be to do something in policy, or you could try to work in development.

With this strand in particular:

helping others think about their own giving and the financial side of maximizing donations / minimizing taxes

There is a need for this, and there's a bit of a philanthropy advisory community building up in London around Founder's Pledge, Veddis and Longview Philanthropy. I'm not sure there's yet something like that in the States you could get involved in. You might be able to start your own thing, especially after working elsewhere in EA or philanthropy for 1-2 years. (Example plan: work at a foundation in SF -> meet rich tech people -> start freelance consulting for them / maybe joining up with another community member.)

Either way, I'd definitely encourage you to think hard about which impactful longer-term paths might be most promising, and what those would imply about the best next steps. You already have a lot of general career capital, and big corporate middle management experience is not that relevant to working at smaller non-profits, so I doubt continuing in the corporate sector will be the optimal path, unless you find something really outstanding.

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-12T12:49:58.962Z · EA · GW

I'd agree with what Michelle says, though I also wanted to add some quick thoughts about:

What's a good rule of thumb for letting go of your Plan A?

One simple way to think about it is that ultimately you have a list of options, and your job is to find and pick the best one.

Your Plan A is your current best guess option. You should change it once you find an option that's better.

So, then the question becomes: have you gained new information that's sufficient to change your ranking of options? Or have you found a new option that's better than your current best guess?

That can be a difficult question. It's pretty common to make a lot of applications in an area like this and not get anywhere, so it might only be a small negative update about your long-term chances (especially if you consider Denise's comment below). So it could be reasonable to continue, though perhaps changing your approach – we'd normally encourage people to pursue more than one form of next step (i.e. apply to a wider range of common next steps in political careers, and then see which approach is working best).

Another good exercise could be to draw up a list of alternative longer-term paths, and see if any seem better (in terms of potential long-term impact, career capital, personal fit and satisfaction).

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-12T12:35:55.271Z · EA · GW

I'd agree. We have this old blog post based on 4 interviews with insiders in the UK:

https://80000hours.org/2016/01/10-steps-to-a-job-in-politics/

And the first point is:

Net-work to get-work. Go to as many political and think tank events as you can. Talk to people. Ask advice. Make friends.

You might also consider options like working in the civil service or think tanks, which can lead to party politics later. Don't bet everything on the 'work for an MP' path, even though it is a common route.

Comment by benjamin_todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-11T22:28:49.874Z · EA · GW

I meant to define patient longtermism in terms of when you think the hinges are.

This will usually correspond to where you think the balance of object-level spending vs. investing/meta should be, but can come apart (e.g. uncertainty arguments could favour investing even if you think hinginess is going down).

I don't think it should be defined in terms of not having a pure rate of time preference, since the urgent longtermists don't have a pure rate of time preference either.

But overall, all these definitions are pretty up in the air. It would be great if someone wanted to take a more rigorous look.

Comment by benjamin_todd on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-11T20:13:23.973Z · EA · GW

Hi Owen,

Thanks for writing this up! I agree it's really important to clarify that a lot of 'spending' is also investing. I think I should have been clearer about this in the podcast, and I worry that if patient longtermism becomes more popular without this being appreciated, it might be negative.

When I think about it to myself, I divide ways to use resources into three categories:

  1. Object-level spending with the aim of impact.
  2. Meta spending that increases the resources and knowledge of aligned people
  3. Investments in financial assets or career capital.

These terms are not ideal because "meta" sounds like I only mean spending on activities that are explicitly aimed at increasing resources (e.g. EA community building or GPR), when many things that look like object-level work are also 'meta' in this sense – for instance, publishing research papers on a new topic might look like directly trying to solve the problem, but often also helps to get more researchers working on the area.

When I was saying 0.5% to 4%, I was only talking about the object-level component. I completely agree that a patient longtermist would likely 'spend' more on the meta category.

PS One smaller thing is that it sounds like I might be a bit more skeptical than you of the typical movement building benefits of general object-level work. I agree they exist, but I think they are typically smaller than those of explicit meta work, unless someone is pretty strategic (e.g. the paper Concrete Problems in AI Safety), or in especially good cases. This is partly due to a prior of 'directly focusing on X leads to more of X'. So, I could imagine a purely patient longtermist portfolio still being pretty different (though I agree much less different than it first looks).

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-07T11:38:42.583Z · EA · GW

Hi there,

I'd say make networking your main focus.

The main reason is I think it's just a better strategy in general for getting good jobs (e.g. because lots of the best positions are never advertised or even created until the right person is found).

I think it's then especially important for smaller and newer organisations, which is what predominates in EA. This is because small/new organisations are less likely to have lots of standardised roles (the kinds that are easiest to run broad recruitment rounds for), and are less likely to have had the resources to set up a big open recruitment round (which tend to have lower returns per hour than recruiting via referrals).

Another factor is that many organisations in EA want to hire people who really care about EA, and the easiest way to do this is to hire from the community, rather than try to figure it out in an interview.

Also just double-checking you've seen this, which is a bit out of date but I still think is useful: https://80000hours.org/career-guide/how-to-get-a-job/

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-06T12:03:33.855Z · EA · GW

Hi Jia,

There are a lot of options! Could you clarify which problem areas you want to work on, and which longer-term career paths you're most interested in?

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-06T12:02:46.535Z · EA · GW

Hi Will,

James is asking a good question below, but I'm going to dive into a hot take :)

If you're about to start university, I'm wondering if you might be narrowing down too early. My normal advice for someone entering college who's figuring out their career would be something like:

  1. Draw up a long list of potential longer-term options.
  2. See if you can 'try out' all of these paths while there, and right after.

You can consider all the following ways to try out potential paths, which also give you useful career capital:

  1. Doing 1-2 internships.
  2. Doing a research project as part of your studies or during the summer.
  3. Going to lots of talks from people in different areas.
  4. Getting involved in relevant student societies (e.g. student newspaper for the media)
  5. Doing side projects & self-study in free time (e.g. building a website, learning to program)
  6. Near the end, you can apply to jobs in several categories as well as graduate school, and see where you get the best offers.
  7. And even after college, you can probably then try something and switch again if it's not working.

So, going in, you don't need to have very definite plans. Besides being able to explore several paths within earning to give, I'd also encourage you to consider exploring some outside. As a starting point, some broad categories we often cover are: government and policy options, working at social impact organisations in your top cause areas (not just EA orgs), and graduate study (potentially leading into working at research organisations or as a researcher). Try to generate at least a couple of ideas within each of these.

Which subject should you study? A big factor should be personal fit – one consideration there is whether you'll be able to get good grades in a moderate amount of time (since you can use the time saved to do the steps above and to socialise – many people meet their lifelong friends and partners at university). Besides that, you could consider which subject will (i) be most helpful for the longer-term options you're interested in and (ii) best keep your options open. If in doubt, applied quantitative subjects (e.g. economics and statistics) often do well on this analysis.

There's a bunch more rough thoughts here.

Comment by benjamin_todd on Careers Questions Open Thread · 2020-12-06T11:43:11.784Z · EA · GW

Hi Matt,

This is a common concern, though I think it's helpful to zoom out a bit – what employers most care about is that you can prove to them that you can solve the problems they want solved.

Insofar as that relates to your past experience (which is only one factor among many they'll look at), my impression[1] is that what matters is whether you can tell a good story about (i) how your past experience is relevant to their job and (ii) what steps have led you to wanting to work for them.

This is partly an exercise in communication. If your CV doesn't naturally lead to the job, you might want to spend more time talking with friends / advisors about how to connect your past experience to what they're looking for.

It depends even more on whether you had good reasons for changing, and whether you've built relevant career capital despite it.

I can't evaluate the latter from here, so I might throw the question back to you: do you think you've changed too often, or was each decision good?

I'm sympathetic to the idea that early in your career, a rule of thumb for exploration like "win, stick; lose, shift" makes sense (i.e. if a career path is going ahead of expectations, stick with it; otherwise shift), and that can lead to lots of shifting early on if you get unlucky. However, you also need to balance that with staying long enough to learn skills and get achievements, which increase your career capital.


  1. How to successfully apply to jobs isn't one of my areas of expertise, though I have experience as an employer and in marketing, and have read about it some. ↩︎

Comment by benjamin_todd on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T19:19:02.247Z · EA · GW

One quick addition is that I see Progress Studies as innovation into how to do innovation, so it's a double market failure :)

Comment by benjamin_todd on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T12:37:48.229Z · EA · GW

Hi Jason,

I think your blog and work is great, and I'm keen to see what comes out of Progress Studies.

I wanted to ask a question, and also to comment on your response to another question – I think the following has been incorrect since about 2017:

My perception of EA is that a lot of it is focused on saving lives and relieving suffering.

More figures here.

The following is more accurate:

I don't see as much focus on general economic growth and scientific and technological progress.

(Though even then, Open Philanthropy has allocated $100m+ to scientific research, which would make it a significant fraction of the portfolio. They've also funded several areas of US policy research aimed at growth.)

However, the reason for less emphasis on economic growth is that the community members who are not focused on global health are mostly focused on longtermism, and have argued it's not the top priority from that perspective. I'm going to try to give a (rather direct) summary of why, and would be interested in your response.

Those focused on longtermism have argued that influencing the trajectory of civilization is far higher value than speeding up progress (e.g. one example of that argument here.)

Indeed, if you're concerned about existential risk from technology, it becomes unclear if faster progress in the short-term is even positive at all – though my guess is that it is.

In addition, longtermists have also argued that long-term trajectory-shaping efforts – which include reducing existential risk but are not limited to that – tend to be far more neglected than efforts to speed up economic growth.

This is partly because I think there are stronger theoretical reasons to expect them to be market failures, but also from empirical observation: e.g. the fields of AI safety and catastrophic biorisk reduction both receive well under $100m of funding per year, and issues around existential risk receive little attention in policy. In contrast, the world spends over $1 trillion per year on R&D, and boosting economic growth is perhaps the main priority of governments worldwide.

I'd argue that the expected value of marginal work on an issue is proportional to its importance and neglectedness, and so these factors would suggest work on trajectory changes could be several orders of magnitude more effective.
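As a back-of-the-envelope sketch using the rough figures above (both numbers are loose bounds for illustration, not estimates):

```python
# If marginal expected value scales with importance x neglectedness, and
# neglectedness scales inversely with the resources already going in, then
# holding importance fixed, the resource ratio gives a rough multiplier.
existential_risk_funding = 100e6  # well under $100m/yr (AI safety + biorisk)
growth_focused_spending = 1e12    # over $1 trillion/yr on R&D

print(growth_focused_spending / existential_risk_funding)
# 10000.0, i.e. ~4 orders of magnitude from neglectedness alone
```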

I agree Progress Studies itself is far more neglected than general work to boost economic growth, I expect that work on Progress Studies is very high-impact by ordinary standards, and I'd be happy if some more EAs worked on it – but I'd still expect marginal resources put towards research on topics like existential risk or longtermist global priorities research to be far more effective per dollar / per person.

I've never seen a proponent of boosting economic growth or Progress Studies clearly give their response to these points (though I have several of my own ideas). We tried discussing it with Tyler Cowen, but my impression of that interview was that he basically conceded that existential risk is the greater priority, defending economic growth mainly because it's something the average person is better able / more likely to contribute to.

So my question would be: why should a longtermist EA work on boosting economic growth?

Comment by benjamin_todd on A new, cause-general career planning process · 2020-12-04T11:57:35.316Z · EA · GW

I'm pretty tempted to break it up into standalone sections in the next version.

I agree the tools are worth doing at some point (and maybe breaking up into multiple tools). I guess you're also aware of our 'make a decision' tool that's in GuidedTrack?

I think I might be a bit more skeptical about tools though. They take a lot longer to make & edit, and some fraction of our core audience finds them a bit lame (though some love them). Personally, I'd prefer a google doc which I can easily customise, where I can see everything on one page, and easily share for feedback. And it seems like the youth might agree :p

Comment by benjamin_todd on A new, cause-general career planning process · 2020-12-03T17:34:45.701Z · EA · GW

Hi Akash,

Thank you for the thoughtful comments!

I agree that being too long and overwhelming is perhaps the main issue with it currently. Just checking you saw this paragraph, which might reassure you a bit:

Later, we hope to release a ‘just the key messages’ version that aims to quickly communicate the key concepts, without as much detail on why or how to apply them. We realise the current article is very long – it’s not aimed at new readers but rather at people who might want to spend days or more making a career plan. Longer-term, I could imagine it becoming a book with chapters for each stage above, which contain advice, real-life examples and exercises. (Added: we'll also consider making a 'tool' version like we had in the 2017 career guide.)

Our top priority was 'just to get everything written down'. After we've had more feedback to check the stages / advice / structure is at least not obviously wrong, the next priority will be making it more digestible, engaging and easier to use. This may take some time, though, since I think we need to give the key ideas cover sheet and problem profiles some more attention next.

Comment by benjamin_todd on A new, cause-general career planning process · 2020-12-03T15:01:03.338Z · EA · GW

I completely agree. Adding more examples, both lots of quick ones as well as longer case studies for each section, is perhaps our top priority for further work (with making it shorter / less overwhelming as the other contender).

Comment by benjamin_todd on richard_ngo's Shortform · 2020-12-03T13:29:27.482Z · EA · GW

Agree. I've definitely heard the other point though – it's a common concern with 80k among donors (e.g. maybe 'Concrete Problems in AI Safety' does far more to get people into the field than an explicit movement building org ever would). Not sure where to find a write-up!

Comment by benjamin_todd on A new, cause-general career planning process · 2020-12-03T13:26:42.428Z · EA · GW

Hi Denise,

Compared to the old planning process (https://80000hours.org/career-planning-tool/), the structure is similar, though this clearly separates out longer-term paths from next career moves. Otherwise, the main difference is that this has far more detailed, specific guidance.

More broadly, there are lots of clarifications to our general advice, or explanations with different emphasis. Some of the bigger newer bits include:

  • The idea of '3 strategic focuses' (opportunism, exploring, betting on a longer-term path).
  • 4 frameworks for coming up with ideas for longer-term paths inc. clarifying your strengths and helping a community, with more specific guidance within each (though a lot more could be done here).
  • Being clearer about how just 'getting good at something useful or influential' can be a good longer-term strategy vs. attempting to solve problems directly (one of the main critiques of our advice in the past). More specific advice on how to get good career capital.
  • New advice on how to compare different global problems in practice, as well as how to do your own investigation. Also a different framing of problem selection in terms of the world portfolio.
  • Clarifying that you should both 'work backwards' and 'work forwards' in determining your next step.
  • Actively encouraging people to check with their gut, rather than just go with their system 2 analysis (though not to go with their gut either).
  • Hopefully emphasising personal fit and fit with your personal life and other moral values a bit more.

More minor:

  • Being generally clearer about how the different parts fit together.
  • More specific guidance on how to go about the process of career planning.
  • Generally trying to frame the aim as 'make a good plan' rather than 'achieve certain outcomes'.
  • Using a modified version of the 'career framework' at different points.

Comment by benjamin_todd on richard_ngo's Shortform · 2020-12-03T11:30:45.598Z · EA · GW

I'm not sure. Unfortunately there's a lot of things like this that aren't yet written up. There might be some discussion of the movement building value of direct work in our podcast with Phil Trammell.

Comment by benjamin_todd on richard_ngo's Shortform · 2020-12-02T23:04:52.118Z · EA · GW

I think that's a good point, though I've heard it discussed a fair amount. One way of thinking about it is that 'direct work' also has movement building benefits. This makes the ideal fraction of direct work in the portfolio higher than it first seems.

Comment by benjamin_todd on Introducing High Impact Athletes · 2020-11-30T23:31:52.860Z · EA · GW

Hi Marcus,

I don't have any experience with athletes, though I'd be surprised if they were unusually self-centred compared to other rich people.

Donating a % of winnings above a threshold might be better if income volatility is the worry. That's the approach Founder's Pledge and REG both use, which are also very relevant examples. (Note that FP started out with IIRC 2% as their default but now they don't have a specific percentage and try to suggest the idea of donating much more initially.) I could imagine a pitch like "if you win X big competition, how about giving 30% of that?"

We do know that the EA pitch has worked best on finance, quanty and techy people so far, and it might be hard to extend.

One other thing I'd say is that when we've done outreach for GWWC, we've always let interested people come to us, rather than going out and pitching to people. I expect if I tried to pitch giving 10% to a randomly selected friend I wouldn't get far. Instead we'd do something like host a talk about charity, or have a media article, or get introductions to people – so we were always working with a group who had preselected themselves into being pitched.

Though, I think David Goldberg has had a lot of success with a more proactive approach at FP among tech entrepreneurs, so it's possible – though I think even there he'd mostly screen people for interest in charity, or get warm introductions.

Comment by benjamin_todd on Introducing High Impact Athletes · 2020-11-30T16:15:05.761Z · EA · GW

I think if you focus on climate change and pandemics, it can actually seem really mainstream (especially now!).

Just don't mention AI :)

I think it would be really cool if you added a section on 'catastrophic risks' and used the recommended charities from Founder's Pledge – they have examples in pandemic prevention and climate risks – at least as an experiment.

Comment by benjamin_todd on Introducing High Impact Athletes · 2020-11-30T16:12:30.916Z · EA · GW

This is a huge discussion, so sorry for the very quick comment. Very happy about the idea of the project in general!

I'm pretty unsure that pledges around 1% are a good idea, especially among people who are already wealthy. In the US, people donate 2% of their income on average (and more altruistic people presumably start higher), and so getting someone to pledge 1% could easily reduce how much they give in total. (Since after they take the pledge, they might feel they've done their bit, and reduce informal donations.)

I think it's important to set the default to be significantly ahead of where people are already likely to be at, so at least 5%. (This approach is also less neglected in the charity sector: people are used to being asked to make small commitments. What's new about EA is that we're really serious about giving, and this is a big part of how we appeal to people. And so I think you can raise more money with the big giving approach – e.g. GWWC has raised a lot more than TLYCS.)

Among a wealthy group, I'd make 10% the default, and then clarify that people can give less as an alternative. (Added: I also wouldn't want to anchor people on the 3.7% average figure – better to have some case studies of people giving 10% and make that the anchor.)

Based on my experience of getting people to take the GWWC 10% pledge – and through 80k I've helped to convince 300+ people to do that – you'll raise a lot more money by starting by making a bigger ask and then reducing if they don't want to do it.

I'd also suggest having a 'stretch' option that's well above 10%, to help expand the notion of what seems possible. This is the role played by GWWC's Further Pledge – even though not many people take it, it's still useful to have because it makes the 10% pledge seem comparatively easy (this is a classic sales technique).

Many wealthy people have already heard of Bill Gates' giving pledge, which is 50%, so I don't think much higher figures even sound that off putting to people these days.

That said, if you don't have any initial members who have made the higher commitments, you might not be able to add it at the start.

Relatedly, I wouldn't call 10% 'saintly' because, as you say, you don't think it involves any sacrifice at all among this group, and therefore it's not especially saintly.

In sum, I'd go for a schema more like GWWC's (which has one of the best track records for raising money via this kind of approach):

  • 10% is the default
  • Something like 5% for one year 'try out giving'
  • Some kind of higher stretch option (maybe 30%, 50% or all above a cap)

(Added: or copying Founder's Pledge more could also work - more detail below.)

Comment by benjamin_todd on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T17:41:01.506Z · EA · GW

I think it's a good question, but it's pretty complex, so it would take me a while to elaborate, sorry!

Comment by benjamin_todd on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T17:39:29.743Z · EA · GW

Hey Brian,

We're somewhat more keen to see additional resources on AI safety compared to GCBRs, but the difference seems fairly narrow, so we're keen to see people take unusually good opportunities to help reduce GCBRs (or to work on it if they have better personal fit). More here: https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/

Comment by benjamin_todd on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T14:29:33.104Z · EA · GW

Hey Brian,

Just a very quick answer from me to your first question.

At 80k we rate climate change ahead of global health since it seems more pressing from a longtermist perspective (e.g. Toby Ord thinks it's a significant existential risk factor).

So, I would say that longtermists should donate to climate change over global health from an impact perspective, if choosing equally good charities from each cause (though I think it would be even better to donate to GCBRs or AI safety).

One might think that GiveWell is better at selecting charities than FP (and they've certainly done more research), but I think the edge on charity selection is unlikely to be big enough to offset the difference in cause area.

Another difference is that GiveWell focuses on evidence-backed interventions, whereas FP takes more of a hits-based approach, but that seems like another advantage of the FP picks to me.

Finally, I'm focusing more on direct impact above. There could be other reasons to donate to global health (e.g. for advocacy reasons - since lots of great people have entered EA via global health), though I'm pretty unsure those factors would tell in favour of global health going forward (e.g. it seems plausible to me that EA should make climate change our standard 'mainstream' cause rather than global health).

PS Hauke's post is comparing GiveWell recommendations to climate change on a neartermist perspective, so doesn't answer your question.

Comment by benjamin_todd on Oxford college choice from EA perspective? · 2020-11-25T13:39:51.896Z · EA · GW

I don't think there's really an EA reason to pick a certain college. Just pick based on the normal considerations (e.g. where you'll most enjoy living; where you think you fit with the culture / will have the most fun; quality of tutorials, housing & funding; general reputation; academics https://en.wikipedia.org/wiki/Norrington_Table).

I did physics and philosophy and went to Balliol and was happy about that.

The main reason was that Balliol was clearly the phys phil college at that time, with ~5 people studying it each year, out of a total of ~15 in the university. It also had David Wallace, who was a great phys phil tutor (though he's now left). I'd guess it's still the biggest phys phil college, but I haven't checked. It's also well-known for PPE.

I think this was a decent reason to choose it – I appreciated having other phys phils to work with, and they tend to be an interesting bunch (and maybe some of the most naturally EA-minded people out there!). I had lots of great tutors too.

I thought at the time that Balliol also does well on other factors: central location, OK housing (though not as nice as the wealthiest ones); culture a bit more lefty; decent academically; good reputation etc.

If I hadn't gone to Balliol, I might have gone for Wadham for the social life, or one of the prestigious ones with beautiful grounds (e.g. Merton, New, Magdalen, St John's). New seems to have a good combination of features.

It's true these ones are harder to get into than the others, but in phys phil they used to have a good 'pool' system: if you don't get into your first-choice college, they'll assign you to another one so long as you're above the bar in general.

Comment by benjamin_todd on Some thoughts on EA outreach to high schoolers · 2020-10-28T22:27:27.974Z · EA · GW

Yeah I totally agree there are useful things to say, though my impression is these kinds of changes are smaller and this kind of advice is more out there already (except the last one).

I think the hope for more radical changes would be giving people more time to mull over the worldview, and maybe introducing people to a general 'prioritizy' mindset, which can sometimes pay off a lot (e.g. thinking about what you really want to get out of college and making sure you do).

(On the specifics, I think maths & physics probably trumps economics at A-level, if someone has the option to do both. At undergrad it's more unclear, but you can go from maths and physics into an econ, compsci or a bio PhD but not vice versa.)

Comment by benjamin_todd on Is there a positive impact company ranking for job searches? · 2020-10-16T11:02:03.984Z · EA · GW

Hi there,

If you're looking for a wider range of job listings, you might find this list of social impact job boards useful.

Comment by benjamin_todd on Plan for Impact Certificate MVP · 2020-10-02T12:26:01.271Z · EA · GW

I'm keen to see more experiments with impact certificates. Do you have funders interested in using it?

Comment by benjamin_todd on Against neglectedness · 2020-09-29T14:45:48.023Z · EA · GW

This will mainly need to wait for a separate article or podcast, since it's a pretty complicated topic.

However, my quick impression is that the issues Caspar raises are mentioned in the problem framework article.

I also agree that their effect is probably to narrow the difference between AI safety and climate change; however, I don't think they flip the ordering, and our 'all considered' view of the difference between the two was already narrower than a naive application of the INT framework implies – for the reasons mentioned here – so I don't think it really alters our bottom lines (in part because we were already aware of these issues). I'm sorry, though, that we're not clearer that our 'all considered' views are different from 'naive INT'.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T16:09:20.512Z · EA · GW

That's an interesting point. I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a 'mistake' in normal language. I also wondered whether another premise should be something very roughly like 'maximising: it's better to achieve more rather than less of my goal (if the costs are the same)'. I could see that contrasting EA with some kind of alternative approach could be another good option.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T12:51:27.421Z · EA · GW

I like the idea of thinking about it quantitatively like this.

I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread.

The importance of EA is proportional to the product of the degrees to which the three premises hold.
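Written out (this gloss and the symbols are mine, not from the original post):

$$\text{Importance of EA} \;\propto\; S \times I \times D$$

where $S$ is the spread in impact between actions, $I$ is how identifiable the best actions are, and $D$ is the degree to which the best actions aren't already being taken by default.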

Comment by benjamin_todd on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-28T11:47:17.999Z · EA · GW

Hi Paolo, I apologise this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the 'neartermist' school of EA rather than the longtermist one (which is very happy to work on difficult-to-predict or quantify interventions). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political & economic system compared to something like AI alignment. I'm curious if you agree.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:41:13.191Z · EA · GW

Hi Jamie,

I think it's best to think about the importance of EA as a matter of degree. I briefly mention this in the post:

Moreover, we can say that it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises hold. For instance, the greater the degree of spread, the more you’re giving up by not searching (and same for the other two premises).

I agree that if there were only, say, 2x differences in the impact of actions, EA could still be very worthwhile. But it wouldn't be as important as in a world where there are 100x differences. I talk about this a little more in the podcast.

I think ideally I'd reframe the whole argument to be about how important EA is rather than whether it's important or not, but the phrasing gets tricky.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:38:54.032Z · EA · GW

Hi David, just a very quick reply: I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense – it's just that everyone would already be doing EA, so we wouldn't need a new movement to do it, and people wouldn't increase their impact by learning about EA. I'm unsure about how best to handle this in the argument.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:35:36.685Z · EA · GW

Hi Greg,

I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in thin later.

(Maybe I should have clarified that my aim wasn't to provide a new popular introduction, but rather to pin down what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)

I also agree that many objections are about EA in practice rather than the 'thin' core ideas, that it can be annoying to retreat back to thin EA, and that it's often better to start by responding to the objections to the thick version. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with "I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because..."), so I think it's worth making some efforts in that direction.

Thanks for the other thoughts!

Comment by benjamin_todd on Factors other than ITN? · 2020-09-28T11:28:13.867Z · EA · GW

Given a set of values, I see there as being multiple layers of heuristics, which are all useful to consider and make comparisons based on:

  1. Yardsticks (e.g. x-risk, QALYs)
  2. Causes (e.g. AI alignment)
  3. Interventions (e.g. research into the deployment problem)
  4. Specific jobs/orgs (e.g. working at FHI)

Comparisons at all levels are ultimately about finding proxies for expected value relative to your values.

The cause-level abstraction seems to be especially useful for career planning (and grantmaking), since it helps you get career capital that builds up in a useful area. Intervention selection usually seems too brittle. Yardsticks are too broad. This post is pretty old but tries to give some more detail: https://80000hours.org/2013/12/why-pick-a-cause/