Is effective altruism growing? An update on the stock of funding vs. people 2021-07-29T11:47:26.747Z
[Link] 80,000 Hours Nov 2020 annual review 2021-05-15T16:28:20.975Z
How much does performance differ between people? 2021-03-25T22:56:32.660Z
Careers Questions Open Thread 2020-12-04T12:05:34.775Z
A new, cause-general career planning process 2020-12-03T11:35:38.121Z
What actually is the argument for effective altruism? 2020-09-26T20:32:10.504Z
Judgement as a key need in EA 2020-09-12T14:48:20.588Z
An argument for keeping open the option of earning to save 2020-08-31T15:09:42.865Z
More empirical data on 'value drift' 2020-08-29T11:44:42.855Z
Why I've come to think global priorities research is even more important than I thought 2020-08-15T13:34:36.423Z
New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z
80,000 Hours annual review released 2017-12-27T20:31:05.395Z
The case for reducing existential risk 2017-10-01T08:44:59.879Z
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z
Can one person make a difference? 2017-04-04T00:57:48.629Z
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z
If you want to disagree with effective altruism, you need to disagree with one of these three claims 2016-09-25T15:01:28.753Z
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z
Why we need more meta 2015-09-26T22:40:43.933Z
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z
A new response to effective altruism 2015-09-12T04:25:43.242Z
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z
The career questions thread 2015-06-20T02:19:07.131Z
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z
Two interviews with Holden 2014-10-03T21:44:12.163Z
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z
Show me the harm 2013-08-06T04:00:52.000Z


Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T14:54:12.929Z · EA · GW

That toy model is similar to Phil's, so I'd start by reading his stuff. IIRC with log utility the interest rate factors out. With other functions, it can go either way.

However, if your model is more like impact = log(all time longtermist spending before the hinge of history), which also has some truth to it, then I think higher interest rates will generally make you want to give later, since they mean you get more total resources (so long as you can spend it quickly enough as you get close to the hinge).

I think the discount rate for the things you talk about is probably under 1% per year, so doesn't have a huge effect either way. (Whereas if you think EA capital is going to double again in the next 10 years, then that would double the ideal percentage to distribute.)
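
To make the second model concrete, here's a minimal sketch (all numbers hypothetical, not a claim about actual EA capital) of why higher interest rates favour giving later when impact = log(total spending before the hinge):

```python
import math

def impact_if_give_later(capital: float, rate: float, years_to_hinge: int) -> float:
    """Toy model: impact = log(all-time longtermist spending before the
    hinge of history); waiting compounds the capital first."""
    return math.log(capital * (1 + rate) ** years_to_hinge)

# Hypothetical numbers: capital normalised to 1, 20 years until the hinge.
low_rate = impact_if_give_later(1.0, 0.02, 20)
high_rate = impact_if_give_later(1.0, 0.07, 20)

# A higher interest rate means more total resources at the hinge,
# so in this toy model the return to waiting is higher.
print(high_rate > low_rate)  # True
```

This assumes you can deploy the compounded capital quickly enough near the hinge; with log utility over the flow of spending instead, the rate can factor out, as noted above.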

Comment by Benjamin_Todd on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T13:29:36.282Z · EA · GW

It's a very difficult question. 3% was just the median. IIRC the upper quartile was more like 7%, and some went for 10%.

The people who gave higher figures usually either (i) had short AI timelines, as you suggest, or (ii) believed there will be lots of future EA donors, so current donors should give more now and hope future donors can fill in for them.

For the counterargument, I'd suggest our podcast with Phil Trammell and Will on whether we're at the hinge of history. Skepticism about the importance of AI safety and short AI timelines could also be an important part of the case (e.g. see our podcast with Ben Garfinkel).

One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!

Comment by Benjamin_Todd on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-04T13:36:18.930Z · EA · GW

Thanks for the write-up!

I'd also mention 'fit with audience' as an even bigger factor.

Sam's audience are people who are into big technical and intellectual topics like philosophy, physics, and consciousness, as well as their impact on society. They're also open to considering weird or unpopular ideas. And the demographics seem pretty similar to EA's. So it's hard to imagine a following with a better potential fit.

Comment by Benjamin_Todd on Some thoughts on EA outreach to high schoolers · 2021-06-16T12:44:38.577Z · EA · GW

Ultimately I care about impact, but the engagement measures in the EA survey seem like the best proxy we have within that dataset.

(E.g. there is also donation data but I don't think it's very useful for assessing the potential impact of people who are too young to have donated much yet.)

A better analysis of this question should also look at things like people who made valuable career changes vs. age, which seems more closely related to impact.

Comment by Benjamin_Todd on Some thoughts on EA outreach to high schoolers · 2021-06-16T12:42:35.454Z · EA · GW

I'm going to leave it to David Moss or Eli to answer questions about the data, since they've been doing the analysis.

Comment by Benjamin_Todd on Seeking explanations of comparative rankings in 80k priorities list · 2021-06-15T20:39:22.060Z · EA · GW

Hey OmariZi,

Partly the ranking is based on an overall judgement call. We list some of the main inputs into it here.

That said, I think for the 'ratings in a nutshell' section, you need to look at the more quantitative version.

Here's the summary for AI:

Scale: We think work on positively shaping AI has the potential for a very large positive impact, because the risks AI poses are so serious. We estimate that the risk of a severe, even existential catastrophe caused by machine intelligence within the next 100 years is something like 10%.

Neglectedness: The problem of potential damage from AI is somewhat neglected, though it is getting more attention with time. Funding seems to be on the order of $100 million per year. This includes work on both technical and policy approaches to shaping the long-run influence of AI by dedicated organisations and teams.

Solvability: Making progress on positively shaping the development of artificial intelligence seems moderately tractable, though we’re highly uncertain. We expect that doubling the effort on this issue would reduce the most serious risks by around 1%.

Here's the summary for factory farming:

Scale: We think work to reduce the suffering of present and future nonhuman animals has the potential for a large positive impact. We estimate that ending factory farming would increase the expected value of the future by between 0.01% and 0.1%.

Neglectedness: This issue is moderately neglected. Current spending is between $10 million and $100 million per year.

Solvability: Making progress on reducing the suffering of present and future nonhuman animals seems moderately tractable. There are some plausible ways to make progress, though these likely require technological and expert support.

You can see that we rate them similarly for neglectedness and solvability, but think the scale of AI alignment is 100-1000x larger. This is mainly due to the potential of AI to contribute to existential risk, or to other very long-term effects.

Comment by Benjamin_Todd on Some thoughts on EA outreach to high schoolers · 2021-06-15T20:29:56.277Z · EA · GW

Eli Rose helpfully looked into the data more carefully, and found a mistake in what I said above. It looks like people who got involved in EA at age ~18 are substantially more engaged than those who got involved at 40. People who got involved at 15-17 are also more engaged than those who got involved at 40. So, this is an update in favour of outreach to young people.

Comment by Benjamin_Todd on Should 80,000 hours run a tiktok account? · 2021-06-12T19:01:40.069Z · EA · GW

Yeah, I think YouTube is higher priority. (And then we can cross-post short video & podcast clips & quotes to Instagram as well.)

Comment by Benjamin_Todd on My current impressions on career choice for longtermists · 2021-06-07T19:20:10.876Z · EA · GW

Hi Michael,

Just some very quick reactions from 80k:

  • I think Holden’s framework is useful and I’m really glad he wrote the post.

  • I agree with Holden about the value of seeking out several different sources of advice using multiple frameworks and I hope 80k’s readers spend time engaging with his aptitude-based framing. I haven’t had a chance to think about exactly how to prioritise it relative to specific pieces of our content.

  • It’s a little hard to say to what extent differences between our advice and Holden’s are concrete disagreements v. different emphases. From our perspective, it’s definitely possible that we have some underlying differences of opinion (e.g. I think all else equal Holden puts more weight on personal fit) but, overall, I agree with the vast majority of what Holden says about what types of talent seem most useful to develop. Holden might have his own take on the extent to which we disagree.

  • The approach we take in the new planning process overlaps a bit more with Holden’s approach than some of our past content does. For example, we encourage people to think about which broad “role” is the best fit for them in the long-term, where that could be something like “communicator”, as well as something narrower like “journalist”, depending on what level of abstraction you find most useful.

  • I think one weakness with 80k’s advice right now is that our “five categories” are too high-level and often get overshadowed by the priority paths. Aptitudes are a different framework from our five categories conceptually, but seem to overlap a fair amount in practice (e.g. government & policy = political & bureaucratic aptitude). However, I like that Holden’s list is more specific (and he has lots of practical advice on how to assess your fit), and I could see us adapting some of this content and integrating it into our advice.

Comment by Benjamin_Todd on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T13:05:32.666Z · EA · GW

Cool to see this thread!

Just a very quick comment on this:

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.

I don't think anyone is proposing this. The debate I'm interested in is about which priorities are most pressing at the margin (i.e. create the most value per unit of resources).

The main claim isn't that speeding up tech progress is bad,* just that it's not the top priority at the margin vs. reducing x-risk or speeding up moral progress.**

One big reason for this is that lots of institutions are already very focused on increasing economic productivity / discovering new tech (e.g. ~2% of GDP is spent on R&D), whereas almost no-one is focused on reducing x-risk.

If the amount of resources going into reducing x-risk grows, then its effectiveness will drop in relative terms.

In Toby's book, he roughly suggests that spending 0.1% of GDP on reducing x-risk is a reasonable target to aim for (about what is spent on ice cream). But that would be ~1000x more resources than today.

*Though I also think speeding up tech progress is more likely to be bad than reducing x-risk is, my best guess is that it's net good.

**This assumes resources can be equally well spent on each. If someone has amazing fit with progress studies, that could make them 10-100x more effective in that area, which could outweigh the average difference in pressingness.

Comment by Benjamin_Todd on A new, cause-general career planning process · 2021-05-26T16:38:12.016Z · EA · GW

Thanks, updated.

Comment by Benjamin_Todd on A new, cause-general career planning process · 2021-05-25T15:19:54.279Z · EA · GW

Update: We've turned the article into a weekly course and added a 3-page summary.

Hopefully this is one step to making it more digestible.

Comment by Benjamin_Todd on EA Survey 2020: Demographics · 2021-05-21T11:36:51.290Z · EA · GW

If the sampling rate of highly engaged EAs has gone down from 40% to 35%, but the number of them in the sample stayed the same, that would imply ~14% growth.
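
A quick check of that arithmetic (assuming, as a simplification, an unchanged sample count):

```python
# If 40% of highly engaged EAs were sampled in 2019 and 35% in 2020,
# an unchanged sample count implies the population grew by 40/35 - 1.
implied_growth = 0.40 / 0.35 - 1
print(f"{implied_growth:.0%}")  # 14%
```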

You then say:

The total number of highly engaged EAs in our sample this year was similar/slightly higher than 2019

So the total growth should be 14% + growth in highly engaged EAs.

Could you give me the exact figure?

Comment by Benjamin_Todd on EA Survey 2020: Demographics · 2021-05-20T19:13:58.134Z · EA · GW

Last year, we estimated the completion rate by surveying various groups (e.g. everyone who works at 80k) about who among them took the survey.

This showed that among highly engaged EAs, the response rate was ~40%, which let David make these estimates.

If we repeated that process this year, we could make a new estimate of the total number of EAs, which would give us an estimate of the growth / shrinkage since 2019. This would be a noisy estimate, but one of the better methods I'm aware of, so I'd be excited to see this happen.

Comment by Benjamin_Todd on Some global catastrophic risk estimates · 2021-05-15T16:53:38.942Z · EA · GW

That's really helpful, thank you!

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-05-09T13:39:58.606Z · EA · GW

And now I've created a more accurate summary here:

Comment by Benjamin_Todd on Some global catastrophic risk estimates · 2021-05-04T18:38:08.317Z · EA · GW

"Yes you can only see the community one on open questions."

Ah thanks for clarifying (that's a shame!).

"I'd recommend against drawing the conclusion you did from the second paragraph"

Maybe we could add another question like "what's the chance it's caused by something that's not one of the others listed?"

Or maybe there's a better way at getting at the issue?

Comment by Benjamin_Todd on Some global catastrophic risk estimates · 2021-05-04T15:28:23.026Z · EA · GW

Thanks so much for this, it's a great resource!

Could you clarify a little the difference between the 'community' and the 'metaculus' forecasts? Is it correct that if I look at the live forecasts, I'll see the community one (e.g. the community thinks 24% chance of a catastrophe atm)?

Is it also possible to calculate the chance of a catastrophe from an unknown risk from this? My understanding is the total risk is forecasted at ~14% by the Metaculus group. If we add up the individual risks, we also get to ~14%. This suggests the Metaculus group thinks there's not much room for a catastrophe from an unknown source. Is that right?

Comment by Benjamin_Todd on 2020 Annual Review from the Happier Lives Institute · 2021-05-03T12:42:19.925Z · EA · GW

It's cool to see a new strand of priorities research being developed! (I also appreciated the clarity and readability of the update.)

Comment by Benjamin_Todd on What posts do you want someone to write? · 2021-04-29T20:15:48.338Z · EA · GW

Here's an example of something in the genre:

Though ideally it would contain a bunch more detail about the specific decisions they faced, what rules of thumb they used, how they'd ended up in a position to do this kind of thing etc. More critical analysis of their impact vs. the counterfactual would also be good.

Comment by Benjamin_Todd on What posts do you want someone to write? · 2021-04-27T14:22:56.418Z · EA · GW

Investigations into promising new cause areas:

For instance, take one of the issues listed here.

Then interview 2-3 people in the area about (i) what the best interventions are (ii) who's currently working on it. Write up a summary, and add any of your own thoughts on how promising more work on the area seems.

You could use Open Phil's shallow cause reports as a starting template:

Comment by Benjamin_Todd on What posts do you want someone to write? · 2021-04-23T13:34:40.520Z · EA · GW

In-depth stories of people who had a lot of impact, and the rules of thumb they used / how they navigated key decision points, with the intention of drawing lessons from them.

E.g. Interview Holden or Bostrom about each key moment in their career, challenges & decisions they faced, and how they navigated them.

They wouldn't need to be within EA. It would also be great to have more examples of people like Norman Borlaug, Viktor Zhdanov and Petrov, but ideally focusing on (i) new examples (ii) people who were deliberately trying to have a big impact, and then also with (iii) more interrogation of the strategies they used and how things might have gone differently.

You could write it up as a case study, podcast interview, or journalist-style story.

It would be like Open Phil's history of philanthropy project, but focused on individual actors.

Comment by Benjamin_Todd on Cash Transfers as a Simple First Argument · 2021-04-18T16:40:06.918Z · EA · GW

Hi there, I agree it's an interesting opening argument. I was wondering if you had seen this article before, which takes a similar approach:

One concern about this approach is that I think it can seem obviously "low leverage" to the types of people focused on entrepreneurship, policy change, and research (some of the people we most want to appeal to), and can give them the impression that EA is mainly about 'high confidence' giving rather than 'high expected value' giving, which is already one of the most common misconceptions out there.

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T19:48:01.029Z · EA · GW

I tried to sum up the key messages in plain language in a Twitter thread, in case that helps clarify.

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T19:46:54.982Z · EA · GW

I think that's a good summary, but it's not only winner-takes-all effects that generate heavy-tailed outcomes.

You can get heavy-tailed outcomes if performance is the product of two normally distributed factors (e.g. intelligence x effort).

It can also arise from the other factors that Max lists in another comment (e.g. scalable outputs, complex production).

Luck can also produce heavy-tailed outcomes if it amplifies outcomes or is itself heavy-tailed.
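
A quick simulation (using standard normals for simplicity, not a claim about real performance data) illustrates the product point: multiplying two independent normal factors gives a distribution with much heavier tails than a single normal:

```python
import random

def excess_kurtosis(xs):
    # Excess kurtosis: 0 for a normal distribution; higher means heavier tails.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

random.seed(0)
n = 200_000
single = [random.gauss(0, 1) for _ in range(n)]
product = [random.gauss(0, 1) * random.gauss(0, 1) for _ in range(n)]

# In theory the product of two independent standard normals has excess
# kurtosis 6, versus 0 for a single normal.
print(round(excess_kurtosis(single), 1), round(excess_kurtosis(product), 1))
```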

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-30T15:30:55.123Z · EA · GW

This is cool.

One theoretical point in favour of complexity is that complex production often looks like an 'O-ring' process (output is the product of the quality of many individual tasks), which will create heavy-tailed outcomes.

Comment by Benjamin_Todd on How much does performance differ between people? · 2021-03-28T19:40:29.451Z · EA · GW

On your main point, this was the kind of thing we were trying to make clearer, so it's disappointing that hasn't come through.

Just on the particular VC example:

I'm suspicious you can do a good job of predicting ex ante outcomes. After all, that's what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.

Most VCs only pick from the top 1-5% of startups. E.g. YC's acceptance rate is 1%, and very few startups they reject make it to series A. More data on VC acceptance rates here:

So, while it's mostly luck once you get down to the top 1-5%, I think there are a lot of predictors before that.

Also see more on predictors of startup performance here:

Comment by Benjamin_Todd on What Makes Outreach to Progressives Hard · 2021-03-16T12:08:34.210Z · EA · GW

Thank you for this summary!

One thought that struck me is that most of the objections seem most likely to come up in response to 'GiveWell style EA'.

I expect the objections that would be raised to a longtermist-first EA would be pretty different, though with some overlap. I'd be interested in any thoughts on what they would be.

I also (speculatively) wonder if a longtermist-first EA might ultimately do better with this audience. You could do a presentation that starts with climate change, and then points out that the lack of political representation for future generations is a much more general problem.

In addition, longtermist EAs favour hits-based giving, and that makes it clear that policy change is among the best interventions, while acknowledging its effects are very hard to measure. This seems more palatable than an approach highly focused on measuring narrow metrics.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-14T14:01:25.001Z · EA · GW

I agree - but my impression is that they consider track record when making the forward-looking estimates, and they also update their recommendations over time, in part drawing on track record. I think "doesn't consider track record" is a straw man, though there could be an interesting argument about whether more weight should be put on track record as opposed to other factors (e.g. intervention selection, cause selection, team quality).

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T16:58:45.242Z · EA · GW

Impact = money moved * average charity effectiveness. FP tracks money moved to their recommended charities, and this is their published research on the effectiveness of those charities and why they recommended them.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T12:40:28.458Z · EA · GW

We make the impact evaluation I note above available to donors (and our donors also do their own version of it). We also publish top-line results publicly in our annual reviews (e.g. number of impact-adjusted plan changes), but don't publish the case studies, since they involve a ton of sensitive personal information.

Comment by Benjamin_Todd on Feedback from where? · 2021-03-12T12:37:04.632Z · EA · GW

Comment by Benjamin_Todd on Feedback from where? · 2021-03-11T21:51:14.767Z · EA · GW

Just a quick comment that I don't think the above is a good characterisation of how 80k assesses its impact. Describing our whole impact evaluation would take a while, but some key elements are:

  • We think impact is heavy-tailed, so we try to identify the most high-impact 'top plan changes'. We do case studies of what impact they had and how we helped. This often involves interviewing the person, and also people who can assess their work. (Last year these interviews were done by a third party to reduce desirability bias.) We then do a rough Fermi estimate of the impact.

  • We also track the number of a wider class of 'criteria-based plan changes', but then take a random sample and make Fermi estimates of impact so we can compare their value to the top plan changes.

If we had to choose a single metric, it would be something closer to impact-adjusted years of extra labour added to top causes, rather than the sheer number of plan changes.

We also look at other indicators like:

  • There have been other surveys of the highest-impact people who entered EA in recent years, evaluating what fraction came from 80k, which lets us estimate the percentage of the EA workforce that came via 80k.

  • We look at the EA survey results, which let us track things like how many people are working at EA orgs and entered via 80k.

We use number of calls as a lead metric, not an impact metric. Technically it's the number of calls with people who made an application above a quality bar, rather than the raw number. We've checked and it seems to be a proxy for the number of impact-adjusted plan changes that result from advising.

This is not to deny that assessing our impact is extremely difficult and ultimately involves a lot of judgement calls (we were explicit about that in the last review), but we've put a lot more work into it than the above implies: probably around 5-10% of team time in recent years.

I think similar comments could be made about several of the other examples, e.g. GWWC also tracks dollars donated each year to effective charities (now via EA Funds) and total dollars pledged. They track the number of pledges as well, since that's a better proxy for the community-building benefits.

Comment by Benjamin_Todd on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T23:20:47.675Z · EA · GW

Tldr: I agree the 'top of the funnel' seems to not be growing (i.e. how many people are reached each year). This was at least in part due to a deliberate shift in strategy. I think the 'bottom' of the funnel (e.g. money and people focused on EA) is still growing. Eventually we'll need to get the top of the funnel growing again, and people are starting to focus on this more.

Around 2015, DGB and The Most Good You Can Do were launched, both with significant media attention aimed at reaching lots of people (e.g. two TED talks). 80k was also focused on reaching more people.

After that, the sense was that the greater bottleneck was taking all these newly interested people (and the money from Open Phil), and making sure that results in actually useful things happening, rather than reaching even more people.

(There was also some sense of wanting to shore up the intellectual foundations, and to make sure EA is conveyed accurately rather than as "earn to give for malaria nets", which seems vital for its long-term potential. There was also a shift towards niche outreach rather than mass media, since mass media seems better for raising donations to global health but less useful for something like reducing GCBRs; and although mass media is good at reaching lots of people, it wasn't as effective as the niche outreach.)

E.g. in 2018, 80k switched to focusing on our key ideas page and podcast, which are more about making sure already interested people understand our ideas than reaching new people; Will focused on research and niche outreach, and is now writing a book on longtermism. GWWC was scaled down, and Toby wrote a book about existential risk.

This wasn't obviously a mistake since I think that if you track 'total money committed to EA' and 'total number of people willing to change career (or take other significant steps)', it's still growing reasonably (perhaps ~20% per year?), and these metrics are closer to what ultimately matter. (Unfortunately I don't have a good source for this claim and it relies on judgement calls, though Open Phil's resources have increased due to the increase in Dustin Moskovitz's net worth; several other donors have made a lot of money; the EA Forum is growing healthily; 80k is getting ~200 plan changes per year; the student groups keep recruiting people each year etc.)

One problem is that if the top of the funnel isn't growing, then eventually we'll 'use up' the pool of interested people who might become more dedicated, so it'll turn into a bottleneck at some point.

And all else equal, more top of funnel growth would be better, so it's a shame we haven't done more.

My impression is that people are starting to focus more on growing the top of the funnel again. However, I still think the focus is more on niche outreach, so you'd need to track a metric more like 'total number of engaged people' to evaluate it.

Comment by Benjamin_Todd on Be Specific About Your Career · 2021-02-24T22:15:28.422Z · EA · GW

I agree it's important to do both: people often neglect to understand the day-to-day reality of a role, and to zoom in on very concrete opportunities rather than only thinking about broad paths.

You could see it as a form of near vs. far mode bias.

Comment by Benjamin_Todd on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-23T22:52:50.594Z · EA · GW

That's true, but we considered switching at several points later on (and the local groups did in fact switch).

Comment by Benjamin_Todd on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-21T22:18:45.405Z · EA · GW

This makes sense - seems like a good thing to think about.

One other point you didn't mention is that by picking another name, it reduces brand risks for EA, and that means you can act a bit more independently. (E.g. once a local group ran an event that got picked up by national newspapers.)

You can also make your group name about something more specific. Another problem with 'effective altruism' as a name is that it's broad and abstract, which makes it hard to explain.

For instance, if you want your group to be practical and aimed at students, picking something about careers / finding a job can be very appealing. (Though this wouldn't be a good choice if you wanted to be more niche and intellectual.) This was one of the reasons why we picked GWWC and 80k as names in the early days, rather than leading with effective altruism.

  1. The direct translations of Effective Altruism can sound a bit forced. The Dutch “Effectief Altruïsme” is a case in point. This can make you seem a bit “out of touch”. We checked with German and Spanish EAs and they confirm that the direct translations sound rather awkward in their native tongue (a Polish EA we talked to said that the name actually sounds really positive in Polish, so this point may not be relevant in all languages).

Agree! I write a bit more about this topic here, and talk about how the first Chinese translation wasn't ideal.

Comment by Benjamin_Todd on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-13T20:16:30.451Z · EA · GW

My personal impression is that it's a bit easier to find operations staff than before, but still difficult.

We wrote this update at the top of the original article:

Note: Though we edited this article lightly in 2020, it was written in 2018 and is now somewhat out of date.

Due to this post and other efforts, people in the effective altruist community have become more interested in working in operations, and it has also come to seem easier to fill these roles with people not already active in the community. As a result, several recent hiring rounds for these roles were successful, and there are now fewer open positions, and when positions open up they’ve become more competitive.

This means that the need for more people to pursue this path is somewhat less than when we wrote the post. However, many of the major points in this post still apply to some degree, there is still a need for more people working in operations management, and we expect this need to persist as the community grows. For these reasons, we still think this is a promising path to consider. We also think the information on how to enter operations roles and how to assess your fit for them will still be valuable to people pursuing this path.

Comment by Benjamin_Todd on Let's Fund Living review: 2020 update · 2021-02-13T20:13:22.300Z · EA · GW

Yes, I was thinking of the writing on economic growth research; that's still helping to advance a hits-based approach to helping the global poor.

Comment by Benjamin_Todd on Let's Fund Living review: 2020 update · 2021-02-12T20:30:32.369Z · EA · GW

Great to see an attempt to take a hits-based approach to climate change and global health that's also open to small donors.

Comment by Benjamin_Todd on Retention in EA - Part I: Survey Data · 2021-02-11T12:57:36.865Z · EA · GW


Comment by Benjamin_Todd on Retention in EA - Part I: Survey Data · 2021-02-10T09:51:58.568Z · EA · GW

Hey Ben, thank you for this!

I had a quick question. With this category:

Cultural fit/cause area disagreement/interpersonal conflict

Cause area disagreement seems fairly different from interpersonal conflict to me. Is there something I've missed about how you're thinking of the categories? How might it look if you broke this group out into sub-categories?

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T22:00:59.470Z · EA · GW

A small comment: I really like the term 'scope sensitive', but I worry that it's not easily understood by people who aren't familiar with the 'scope neglect' bias, which isn't one of the more commonly known biases (e.g. when I search on Google, the first result is a very short Wikipedia article, and the third is a LessWrong article). I wonder if 'scale sensitive' might be more immediately understood by the typical person.

On Google Ngram, 'scale sensitive' is about 10x more common.

I'm not sure which is better ('scope sensitive ethics' sounds nicer to me), but it's worth thinking about more if you want to turn this into a term.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T21:56:05.443Z · EA · GW

PS If you push ahead more, you might want to frame it as a core ethical intuition in non-utilitarian moral theories as well, rather than presenting it mainly as a more acceptable, watered-down utilitarianism. I think one of the exciting things about scope sensitivity is that it's a moral principle that everyone should agree with, but one that also has potentially radical consequences for how we should act.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-02-02T21:53:53.614Z · EA · GW

Hi Richard,

That makes sense - it could be useful to define an ethical position that's separate from effective altruism (which I've been pushing to define as a practical and intellectual project rather than an ethical theory).

I'd be excited to see someone try to develop it, and would be happy to try to help if you do more in this area.

In the early days of EA, we actually toyed with a similar idea, called 'positive ethics' - by analogy with positive psychology - which aimed to be the ethics of how best to benefit others, rather than yet more discussion of prohibitions.

I think my main concern is that I'm not sure there's enough space in public awareness between EA, global priorities research, and consequentialism for another field. (E.g. I also think it would be better if EA were framed more in terms of 'let's be scope sensitive' rather than the other connotations you mention.) But it could be interesting to write more about the idea and see where you end up.

Comment by Benjamin_Todd on Important Between-Cause Considerations: things every EA should know about · 2021-01-29T21:10:51.563Z · EA · GW

Just a very quick answer, but I'd be keen to have content that lists, say, the 5-10 questions that have the biggest effect on cause selection. I think that would help people think through cause selection for themselves without having to think about every big issue in philosophy.

We tried to make a simple version of this a while back here:

This was another attempt:

OP's worldview investigations are also about these kinds of considerations (more info:

I think the main challenge is that it's really difficult to pinpoint what these considerations actually are (there's a lot of disagreement), and they differ a lot depending on the person and on which worldviews we want to have in scope. We also lack easy-to-read write-ups of many of the concepts.

I'd be interested in having another go at the list, though, and I think we have much better write-ups now than pre-2017. I'd be very interested to see other people's takes on what the list should be.

Comment by Benjamin_Todd on What actually is the argument for effective altruism? · 2021-01-26T15:20:15.458Z · EA · GW

Hmm, in that case I'd probably see it as a denial of identifiability.

I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.

Comment by Benjamin_Todd on Promoting EA to billionaires? · 2021-01-24T18:10:20.665Z · EA · GW

I'd also add Open Philanthropy to this, since their long-term goal is to advise other philanthropists besides Dustin and Cari (and they've already done some of this).

There are also several individuals who do this, such as Will MacAskill.

Comment by Benjamin_Todd on What actually is the argument for effective altruism? · 2021-01-24T14:12:22.064Z · EA · GW

Hey, I agree something like that might be worth adding.

The way I was trying to handle it is to define the 'common good' in such a way that different contributions are comparable (e.g. if common good = welfare). However, it's possible I should add something like: "there don't exist other values that typically outweigh differences in the common good, thus defined".

For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.

Comment by Benjamin_Todd on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-24T14:00:37.227Z · EA · GW

Hey Richard, I agree with this, and I like the framing.

I want to add, though, that these are basically the reasons why we created EA in the first place, rather than promoting 'utilitarian charity'. The idea was that people with many different ethical views can agree that the scale of effects on people's lives matters, so it's a point of convergence that many can get behind, while also getting at a key empirical fact that's not widely appreciated (differences in scope are larger than people think).

So, I'd say scope-sensitive ethics is a reinvention of EA. It's a regret of mine that we haven't done a great job of communicating that so far. It's possible we need to try introducing the core idea in lots of ways to get it across, and this seems like a good one.