Posts

What actually is the argument for effective altruism? 2020-09-26T20:32:10.504Z · score: 70 (32 votes)
Judgement as a key need in EA 2020-09-12T14:48:20.588Z · score: 30 (11 votes)
An argument for keeping open the option of earning to save 2020-08-31T15:09:42.865Z · score: 31 (18 votes)
More empirical data on 'value drift' 2020-08-29T11:44:42.855Z · score: 101 (40 votes)
Why I've come to think global priorities research is even more important than I thought 2020-08-15T13:34:36.423Z · score: 61 (29 votes)
New data suggests the ‘leaders’’ priorities represent the core of the community 2020-05-11T13:07:43.056Z · score: 99 (50 votes)
What will 80,000 Hours provide (and not provide) within the effective altruism community? 2020-04-17T18:36:00.673Z · score: 142 (69 votes)
Why not to rush to translate effective altruism into other languages 2018-03-05T02:17:20.153Z · score: 66 (66 votes)
New recommended career path for effective altruists: China specialists 2018-03-01T21:18:46.124Z · score: 17 (17 votes)
80,000 Hours annual review released 2017-12-27T20:31:05.395Z · score: 10 (10 votes)
How can we best coordinate as a community? 2017-07-07T04:45:55.619Z · score: 11 (10 votes)
Why donate to 80,000 Hours 2016-12-24T17:04:38.089Z · score: 18 (20 votes)
If you want to disagree with effective altruism, you need to disagree one of these three claims 2016-09-25T15:01:28.753Z · score: 31 (24 votes)
Is the community short of software engineers after all? 2016-09-23T11:53:59.453Z · score: 13 (15 votes)
6 common mistakes in the effective altruism community 2016-06-03T16:51:33.922Z · score: 12 (14 votes)
Why more effective altruists should use LinkedIn 2016-06-03T16:32:24.717Z · score: 13 (13 votes)
Is legacy fundraising actually higher leverage? 2015-12-16T00:22:46.723Z · score: 4 (14 votes)
We care about WALYs not QALYs 2015-11-13T19:21:42.309Z · score: 14 (16 votes)
Why we need more meta 2015-09-26T22:40:43.933Z · score: 22 (34 votes)
Thread for discussing critical review of Doing Good Better in the London Review of Books 2015-09-21T02:27:47.835Z · score: 10 (9 votes)
A new response to effective altruism 2015-09-12T04:25:43.242Z · score: 3 (3 votes)
Random idea: crowdsourcing lobbyists 2015-07-02T01:16:05.861Z · score: 6 (6 votes)
The career questions thread 2015-06-20T02:19:07.131Z · score: 13 (13 votes)
Why long-run focused effective altruism is more common sense 2014-11-21T00:12:34.020Z · score: 19 (19 votes)
Two interviews with Holden 2014-10-03T21:44:12.163Z · score: 7 (7 votes)
We're looking for stories of EA career decisions 2014-09-30T18:20:28.169Z · score: 5 (5 votes)
An epistemology for effective altruism? 2014-09-21T21:46:04.430Z · score: 12 (7 votes)
Case study: designing a new organisation that might be more effective than GiveWell's top recommendation 2013-09-16T04:00:36.000Z · score: 0 (0 votes)
Show me the harm 2013-08-06T04:00:52.000Z · score: 7 (5 votes)

Comments

Comment by benjamin_todd on Against neglectedness · 2020-09-29T14:45:48.023Z · score: 13 (4 votes) · EA · GW

This will mainly need to wait for a separate article or podcast, since it's a pretty complicated topic.

However, my quick impression is that the issues Caspar raises are already mentioned in the problem framework article.

I also agree that their effect is probably to narrow the difference between AI safety and climate change. However, I don't think they flip the ordering, and our 'all considered' view of the difference between the two was already narrower than a naive application of the INT framework implies – for the reasons mentioned here – so I don't think it really alters our bottom lines (in part because we were already aware of these issues). I'm sorry, though, that we're not clearer that our 'all considered' views differ from 'naive INT'.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T16:09:20.512Z · score: 7 (4 votes) · EA · GW

That's an interesting point. I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a 'mistake' in normal language. I also wondered whether another premise should be something very roughly like 'maximising: it's better to achieve more rather than less of my goal (if the costs are the same)'. I could also see contrasting it with some kind of alternative approach being another good option.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T12:51:27.421Z · score: 5 (3 votes) · EA · GW

I like the idea of thinking about it quantitatively like this.

I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread.

The importance of EA is proportional to the product of the degrees to which the three premises hold.
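As a rough sketch of that claim (the d's below are just the informal 'degrees to which each premise holds', not precisely defined quantities):

$$\text{Importance of EA} \;\propto\; d_{\text{spread}} \times d_{\text{identifiability}} \times d_{3}$$

where $d_3$ is the degree to which the third premise holds.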

Comment by benjamin_todd on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-28T11:47:17.999Z · score: 13 (6 votes) · EA · GW

Hi Paolo, I apologise this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the 'near termist' school of EA than to the longtermist one (which is very happy to work on interventions that are difficult to predict or quantify). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political & economic system compared to something like AI alignment. I'm curious if you agree.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:41:13.191Z · score: 2 (1 votes) · EA · GW

Hi Jamie,

I think it's best to think about the importance of EA as a matter of degree. I briefly mention this in the post:

Moreover, we can say that it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises hold. For instance, the greater the degree of spread, the more you’re giving up by not searching (and same for the other two premises).

I agree that if there were only, say, 2x differences in the impact of actions, EA could still be very worthwhile. But it wouldn't be as important as in a world where there are 100x differences. I talk about this a little more in the podcast.

I think ideally I'd reframe the whole argument to be about how important EA is rather than whether it's important or not, but the phrasing gets tricky.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:38:54.032Z · score: 2 (1 votes) · EA · GW

Hi David, just a very quick reply: I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense, it's just that everyone would already be doing EA, so we wouldn't need a new movement to do it, and people wouldn't increase their impact by learning about EA. I'm unsure about how best to handle this in the argument.

Comment by benjamin_todd on What actually is the argument for effective altruism? · 2020-09-28T11:35:36.685Z · score: 4 (2 votes) · EA · GW

Hi Greg,

I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in thin later.

(I should have maybe better clarified that my aim wasn't to provide a new popular introduction, but rather to better clarify what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)

I also agree that many objections are about EA in practice rather than the 'thin' core ideas, and that it can be annoying to retreat back to thin EA, and that it's often better to start by responding to the objections to thick. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with "I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because..."), so I think it's worth making some efforts in that direction.

Thanks for the other thoughts!

Comment by benjamin_todd on Factors other than ITN? · 2020-09-28T11:28:13.867Z · score: 13 (4 votes) · EA · GW

Given a set of values, I see there as being multiple layers of heuristics, which are all useful to consider and make comparisons based on:

  1. Yardsticks (e.g. x-risk, QALYs)
  2. Causes (e.g. AI alignment)
  3. Interventions (e.g. research into the deployment problem)
  4. Specific jobs/orgs (e.g. working at FHI)

Comparisons at all levels are all ultimately about finding proxies for expected value relative to your values.

The cause level abstraction seems to be especially useful for career planning (and grantmaking) since it helps you get career capital that builds up in a useful area. Intervention selection usually seems too brittle. Yardsticks are too broad. This post is pretty old but tries to give some more detail: https://80000hours.org/2013/12/why-pick-a-cause/

Comment by benjamin_todd on Factors other than ITN? · 2020-09-26T16:52:47.933Z · score: 43 (18 votes) · EA · GW

INT = good per dollar by definition (when used in the quantitative rather than heuristic way), so in that sense it's exhaustive, though in practice, people often miss some factors that are not as naturally captured by the framework:

  • People often assess 'importance' just based on one yardstick (e.g. QALYs) when there are effects on other relevant metrics (e.g. x-risk reduction, economic growth) (and the choice of which yardsticks to focus on in the first place is where a lot of the action is).
  • Value of information - can be included in either I or N, but often missed.
  • Coordination considerations e.g. portfolio approach, comparative advantage, trade with people with other values.
  • Cross-cutting epistemic considerations such as regression to the mean & epistemic humility & how to deal with unmeasured factors – partially covered in my recent podcast. People often only report their 'unadjusted' estimates and don't account for these.
  • Movement building effects e.g. promoting one cause might bring people into others.
  • Funging e.g. if you solve one problem, it might free up resources to work on another.

I find it useful to try to make a 'direct' estimate using INT, and then to have a separate 'all considered' estimate that aims to take account of all the above.
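Spelling out why INT equals good per dollar in the quantitative version – a sketch of the standard factorisation, where the intermediate terms cancel (labels follow the usual framing of the framework):

$$\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Importance}} \;\times\; \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}} \;\times\; \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{Neglectedness}} \;=\; \frac{\text{good done}}{\text{extra \$}}$$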

The application of INT also gets more complicated depending on whether you're interested in the resources spent in a certain year or over all time. Likewise, there are issues like complementarities between different forms of resources (e.g. funding vs. labour) that can mean the analysis is different for different resources.

You could also have another category of timing considerations, such as these. Toby's soon, sharp, sudden framework helps to capture some of the timing factors as well, or you could think of them as guides to what's most important.

As an alternative, I think it's also useful to think of cost-effectiveness analysis of specific interventions as a separate framework that provides a different perspective.

INT is also only about how pressing causes are in general. In practice if you're making a real decision, you also need to consider your personal fit, career capital, the quality of the specific opportunity etc. as well as other moral considerations besides good done.

Comment by benjamin_todd on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-26T16:37:40.234Z · score: 6 (3 votes) · EA · GW

Agree. We should also probably expect it to happen: the income distribution is very heavy tailed, and it becomes easier to donate the more money you have, so we should probably expect the largest couple of donors to account for most of the money.

Otoh, the total US non-profit sector is something like $300 billion per year, and I think billionaire philanthropy is under $30bn, so that would suggest 10% from billionaires as a base rate. (Though a lot of this is to fund local services, churches etc., where we might expect a broader base.)

Comment by benjamin_todd on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-23T11:02:59.280Z · score: 11 (7 votes) · EA · GW

That's the bit I'm most unsure about.

I think the longtermist EA amount is around $30m per year - and I have reasonable data on that.

I then guessed that there's another $50m of near termist donations (based on typical ratios of near termist to longtermist donors). Note this needs to include all effective animal advocacy and Founders Pledge. However, it might be that most of this category overlaps with the GiveWell donations, so I might have been overoptimistic. Still, I would guess that it's ~$20m, making for $50m+ in total within other.

I've updated it to $50m rather than $80m.

Comment by benjamin_todd on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T21:52:17.550Z · score: 34 (16 votes) · EA · GW

I think total EA funding is something like (per year):

  • $250m Open Phil / Good Ventures
  • $80m from GiveWell (excluding Open Phil)
  • $50m other

So that's 66% Open Phil. Note that Open Phil seems to be 90%+ Dustin and Cari.
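(The arithmetic behind that share, using the rough figures above: $250 / (250 + 80 + 50) = 250/380 \approx 66\%$.)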

Sorry I don't have sources for these figures - they're my personal rough estimates. (Though Open Phil's grants are almost all published.)

One other thing to note is that funding is only one component of EA – we also have the value of the labour of community members and our ideas. Even if there was no funding at all, we could still accomplish a bunch (by going to work in existing institutions like govt., academia, non-profits, or working as volunteers).

Quick note on the figures for 80k: about 66% of our funding comes from Open Phil in recent years. You can't divide the size of the grant by our annual budget, because we're also building up reserves with the scale of the org. You instead need to divide the grant by our total income.

Comment by benjamin_todd on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-21T13:23:43.834Z · score: 4 (2 votes) · EA · GW

These seem like interesting points, but overall I'm left thinking there is still a significant chance of setting off a long chain that wouldn't have happened otherwise. (And even a lowish probability of a long chain means the bulk of the damages fall on other people rather than yourself.)

I think the argument applies to California too. Suppose that 20% have already been infected, and 0.5% are infected currently, and R = 1.

Then in 6 months, an extra 0.5% × 6 × 4 = 12% will have been infected, so 32% will have had it in total. That won't be enough to create herd immunity & prevent a long chain.

An extra infection now would in expectation cause a chain of 6 × 4 × 1 = 24 infections, and if a vaccine then came and the disease were stamped out, then those 24 people wouldn't have had the disease otherwise.
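A minimal sketch of this back-of-the-envelope model in code (Python; the parameter values are the rough guesses from above, not real data):

```python
# Rough sketch of the reasoning above, assuming a ~1-week generation time
# and R staying around 1 for the whole period (all rough assumptions).

already_infected = 0.20   # fraction of population previously infected
prevalence = 0.005        # fraction currently infected (per ~1-week generation)
R = 1.0                   # effective reproduction number
weeks = 6 * 4             # ~6 months of weekly generations

# New infections over the period, as a fraction of the population
new_infections = prevalence * R * weeks                   # 0.005 * 24 = 0.12
total_ever_infected = already_infected + new_infections   # ~0.32

# Expected downstream infections from one extra infection now:
# with R = 1, each generation passes it on to ~1 person, for ~24 generations
chain_length = R * weeks                                  # ~24 people

print(f"Extra fraction infected over 6 months: {new_infections:.0%}")
print(f"Total ever infected after 6 months: {total_ever_infected:.0%}")
print(f"Expected chain from one extra infection: ~{chain_length:.0f} people")
```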

What seems to matter is that we're in a "slow burn" scenario: R ~ 1, we're a decently long way from the end, and we're not sure we're going to reach herd immunity as the end game.

PS My figure for London was a rough ballpark from memory - your figures are better. (Though like I say I don't think the argument is very sensitive to whether 10% or 30% have already had it.)

Comment by benjamin_todd on Some thoughts on EA outreach to high schoolers · 2020-09-17T12:07:20.961Z · score: 21 (12 votes) · EA · GW

My take is that it's not the most effective mindset either. Personally I try to focus on giving people information to help them make better decisions by their own lights, rather than adopting the standard marketing mindset.

Comment by benjamin_todd on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-17T12:02:39.697Z · score: 2 (1 votes) · EA · GW

Thinking a bit more, I'm not sure this argument works, though I might have misunderstood.

In London, 5-10% have been infected. Prevalence currently is ~1 in 2,000, R = 1, and let's assume transmission time is 1 week. That means that in 6 months' time, about another 1.5% of people will have been infected (30/2000).

If I get infected now, then there will be an extra chain of infections 30 people long.

I don't see how the overall prevalence levels block the chain I cause. If in 6 months, another 1.5% of people have been infected, that's not enough to meaningfully change R.

If 5% of people were infected now, and R = 1, then we'd be saturated and reach herd immunity in a matter of weeks, which would cut off the chain. But instead, the prevalence is sufficiently low that it seems like it is possible for each individual to cause a long chain.

Comment by benjamin_todd on Some thoughts on EA outreach to high schoolers · 2020-09-15T11:36:30.080Z · score: 6 (4 votes) · EA · GW

I'm worried about this, though it seems hard to deliver detailed info that explains and backs up our positions via short videos. One hope is that once we feel our core advice is all written up, we can turn to short videos as an alternative entry point.

Comment by benjamin_todd on Some thoughts on EA outreach to high schoolers · 2020-09-15T11:34:17.693Z · score: 26 (11 votes) · EA · GW

Just a quick comment to say this sample doesn't seem representative to me:

Anecdotally, a large number of the most dedicated and promising longtermist EAs I know heard about EA in high school (at a workshop I ran for a small group of newish longtermist EAs, if I remember correctly about ⅔ raised their hands when asked if they’d heard about EA before age 18)

In the EA survey, for people who said they were 5/5 engaged, the median age at which they first heard about EA was 22 (mean 24). So a majority of the most engaged EAs became involved later than high school.

This also matches my anecdotal experience, where university age seems more common than high school.

When we plotted average engagement against age first involved, the peak was at 20. People who first got involved at age 18 were less involved on average, and had a similar average level of engagement to people who first got involved at age 40. It's hard to know what to draw from this (younger and older people probably get less engaged because the community is less well set up for them), but I think it means we don't have clear evidence that it's better to reach people younger.

Comment by benjamin_todd on Some thoughts on EA outreach to high schoolers · 2020-09-15T11:32:39.051Z · score: 20 (7 votes) · EA · GW

Hey Buck,

Upvoted - I think I agree that people have become too negative about it, and I'd be interested in seeing another org in this space, though I think I still prefer top university outreach on average, so I'm probably still a bit more negative than you.

A. One reason is you didn't put much emphasis on what I see as some of the more significant downsides. One is what Peter Hurford says:

One of the hardest parts of high school outreach I think will be getting people to continue to be engaged over their college career (assuming they go to college), which is four years of substantial chance of value drift before any direct impact happens. Whereas recruiting from college, the distance is much less.

With a college student, you can talk to them about decisions like moving to an EA hub, working in an EA org etc., which are steps that tend to get people onto great long-term paths. With a high schooler, you'd need to hope they get involved in an EA group while at university, which suggests we'd ideally make the university groups good first, and even then I think they would have a higher chance of drifting away.

B. Another is that I think EA advice to high schoolers is less useful than what we have to say about later decisions. The common sense advice of things like "go to a prestigious university", "quantitative subjects keep options open more", and "do internships / interesting projects / build CV material" already seems pretty good to me and is widely known.

In contrast, when someone is working out where to donate or which cause to work on, we think some options are over 100x better according to their values in a way that's not widely known. This isn't to say many people don't make mistakes in their choice of university / major or how to spend their time at college, but I still think the delta between having EA advice and not having it is smaller.

Our crux might be this:

I think the EA and rationality communities have lots of tools that help people become overall better at thinking, and potentially vastly increase their lifetime impact.

My take is more that this advice is useful, but it's not radically better (for most people) than other sources out there (plus the other downsides you mention). E.g. a bunch of value comes from ideas like 'actually optimize for your goals', but you can get this from other smart self-help advice, following top Silicon Valley people, reading Dalio's Principles, etc. That's just one example, and to repeat, I still think it's better to have these ideas than not, just that I don't think the delta is as big.

On the other hand, if people had 4 years longer to think about which cause to focus on, and to learn a lot about that topic, that seems pretty useful.

C. The age data I mention in the other comment.

D. I think the lack of track record is still a negative. E.g. we have a lot of examples of great university groups bringing in great people, so I feel confident that starting another good university group will be useful. Doing more high school outreach is great as an experiment, but overall I'm less confident it'll work, and it's also harder to measure. (Though I haven't seen data from SPARC, which could change my mind.)

Comment by benjamin_todd on How can good generalist judgment be differentiated from skill at forecasting? · 2020-09-13T18:46:14.977Z · score: 6 (3 votes) · EA · GW

I think it would be clearer to put many of these under different categories than to lump everything under judgement. In my post I also cover the following, and try to sketch how they're different:

  • Intelligence
  • Decision-making
  • Strategy

I should have maybe mentioned creativity as another category.

I also contrast 'using judgement' with alternatives like statistical analysis; applying best practice; quantitative models etc., though you might draw on these in making your judgement.

Comment by benjamin_todd on Judgement as a key need in EA · 2020-09-13T18:38:04.978Z · score: 8 (4 votes) · EA · GW

I think I disagree, though that's just my impression. As one piece of evidence, the article I most drew on is by Open Phil and also treats them as very related: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts

Comment by benjamin_todd on Judgement as a key need in EA · 2020-09-13T18:33:31.146Z · score: 4 (2 votes) · EA · GW

Hi Alex,

In the survey, good judgement was defined as "weighing complex information and reaching calibrated conclusions", which is the same rough definition I was using in my post.

I'm not sure how many people absorbed this definition and used their own definition instead. From talking to people, my impression is that most use 'judgement' in a narrower sense than the dictionary definitions, but maybe still broader than my definition.

It's maybe also worth saying that my impression that judgement is highly valued isn't just based on the survey - I highlighted that because it's especially easy to communicate. I also have the impression that people often talk about how it might be improved, how to assess it, as a trait to look for in hiring etc., and it seems to come up more in EA than in most other areas (with certain types of investing maybe the exception).

Comment by benjamin_todd on Judgement as a key need in EA · 2020-09-12T20:02:10.173Z · score: 4 (2 votes) · EA · GW

It's maybe also worth noting that my definition of 'judgement' is also pretty narrow, and narrower than the standard usage. I'm working on a separate piece about 'good thinking' more broadly.

Comment by benjamin_todd on Judgement as a key need in EA · 2020-09-12T19:01:15.957Z · score: 6 (3 votes) · EA · GW

Hi Misha,

I do agree there's a worry about how much calibration training or forecasting in one area will transfer to other areas. My best guess is there is some transfer, but there's not as much evidence about it as I'd like.

I also think of forecasting as more like a subfactor of good judgement, so I'm not claiming there will be a transfer of cognitive skills – rather I'm claiming that if you practice a specific skill (forecasting), you will get better at that skill.

I'd also suggest looking directly at the evidence on whether forecasting can be improved and seeing what you think of it: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts

Comment by benjamin_todd on Judgement as a key need in EA · 2020-09-12T18:57:44.518Z · score: 7 (4 votes) · EA · GW

Hi Khorton, that's true. In the post I say:

Forecasting isn’t exactly the same as good judgement, but seems very closely related – it at least requires ‘weighing up complex information and coming to calibrated conclusions’, though it might require other abilities too. On the other side, I take good judgement to include ‘picking the right questions’, which forecasting doesn’t cover.

So I think they're pretty close.

The other point is that, yes - I think we have some reasonable evidence that calibration and forecasting can be improved (via the things mentioned in the post), but I'm less confident in other ways to improve judgement. I've made some edits to the post to make this clearer.

One other way of improving judgement in general that I do mention, though, is to spend time talking to other people who have good judgement.

Comment by benjamin_todd on Prabhat Soni's Shortform · 2020-09-11T21:59:21.424Z · score: 8 (2 votes) · EA · GW

This is a big topic, but I think these critiques mainly fail to address the core ideas of EA (that we should seek the very best ways of helping), and instead criticise related ideas like utilitarianism or international aid. On the philosophy end of things, more here: https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff

Comment by benjamin_todd on An argument for keeping open the option of earning to save · 2020-09-10T19:17:39.536Z · score: 2 (1 votes) · EA · GW

e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists

Could you say more about what you might have in mind here?

Comment by benjamin_todd on An argument for keeping open the option of earning to save · 2020-09-10T19:16:56.968Z · score: 3 (2 votes) · EA · GW

That's fair - I was aiming to write it in a crisp way to make it easier to engage with, but I agree I could have given the argument crisply with a better introduction.

Comment by benjamin_todd on Thoughts on patient philanthropy · 2020-09-10T19:14:57.527Z · score: 4 (2 votes) · EA · GW

Thank you for the certainty equivalent calculations, that was interesting.

Comment by benjamin_todd on An argument for keeping open the option of earning to save · 2020-09-09T21:28:13.185Z · score: 5 (3 votes) · EA · GW

Yes, I agree that's an important consideration. Doing direct work also causes movement building, creating a bunch of extra value. (Some even argue that most movement building comes from direct work rather than explicit movement building efforts.) It doesn't seem like earning to save will be as good on this front, though I think that building up a big pot of money can also get people interested (though maybe for dubious reasons!).

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-09-03T20:09:59.193Z · score: 4 (2 votes) · EA · GW

Thank you, that's helpful!

Do you mean 21 percentage points, so if the overall mean is 23%, then the most engaged are only 2%? Or does it mean 21% lower, in which case it's 18%?

I'm not aware of a good reference class where we have data - I'd be keen to see more research into that.

It might be worth saying that doing something like taking the GWWC pledge is still a high level of engagement & commitment on the scale of things, and I would guess significantly higher than the typical young person affiliating with a youth movement for a while.

(The mean & median age in EA are also ~28 right now, so while still on the youthful side, it's mainly not students or teenagers.)

Comment by benjamin_todd on What is the financial size of the Effective Altruism movement? · 2020-09-03T11:24:21.355Z · score: 8 (2 votes) · EA · GW

Open Phil donates about $250m per year (they publish almost all their grants).

About $80m per year is given on the basis of GiveWell's recommendations (from their annual reports).

There's a bunch more but I think that accounts for the majority.

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-09-03T11:06:08.587Z · score: 2 (1 votes) · EA · GW

Hi Matt,

It's cool you did that, though I wouldn't recommend simply combining all the samples, since they're for really different groups at very different levels of engagement (which leads to predictably very different drop out rates).

A quick improvement would be to split into a highly engaged and a broader group.

The highly engaged meta-analysis could include: Joey's 50% donors; CEA weekend away highly engaged subset; 80k top plan changes; CEA early employees.

The broader meta-analysis could be based on: GWWC estimate; EA survey estimate; Joey 10% donors; CEA weekend away entire sample.

I'd be keen to see the results of this!

Comment by benjamin_todd on An argument for keeping open the option of earning to save · 2020-09-03T10:54:36.930Z · score: 2 (1 votes) · EA · GW

It's a good point that there could also be good cultural effects from encouraging people to save more, as well as the negatives I mention.

Comment by benjamin_todd on An argument for keeping open the option of earning to save · 2020-09-02T22:24:36.278Z · score: 2 (1 votes) · EA · GW

Hi Larks, I think that's a nice way of framing the issue, and you might be right. I think Howie's reply to Owen is also relevant.

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-09-02T22:18:11.018Z · score: 4 (2 votes) · EA · GW

Hey Ozzie, that makes sense. I think the last EA survey did some things pretty similar to this, inc. asking about value adds & issues, and something similar to the NPS score, as well as why people don't recommend it.

Comment by benjamin_todd on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T21:02:44.744Z · score: 4 (2 votes) · EA · GW

Ah that makes sense, thank you!

Comment by benjamin_todd on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T16:25:01.699Z · score: 8 (5 votes) · EA · GW

If R = about 1, then each infection in expectation infects another person.

If the disease lasts two weeks, then each infection results in another 31 infections for every year that R stays around 1.

Given that it seems like R is going to stay around 1 for at least the next ~6 months, I think the 0.5% expected deaths seems a bit low.

If someone ~30 who's healthy gets infected, they have maybe a 1 in 2000 chance of dying, but then will also in expectation perhaps infect a chain of ~15 people. The mortality among the broader population is more like 1.5%, so that's 0.22 expected deaths from the chain, about 40 times your estimate.
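Making that arithmetic explicit (using the rough figures above, and the 0.5% figure from the original estimate): $15 \times 0.015 \approx 0.22$ expected deaths, and $0.22 / 0.005 \approx 44$, i.e. roughly 40 times.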

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-08-31T15:13:34.382Z · score: 5 (3 votes) · EA · GW

Hi Michael, I made some quick edits to help reduce this impression.

I also want to clarify that out of the 6 methods given, only 1 is about people working at EA organisations.

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-08-29T15:11:41.843Z · score: 2 (1 votes) · EA · GW

I was doing a very hacky calculation - I'll change to 30 years and mention your comment.

Comment by benjamin_todd on More empirical data on 'value drift' · 2020-08-29T15:10:51.258Z · score: 2 (1 votes) · EA · GW

Ah sorry I meant to link to Peter Hurford's analysis - I'll add it now.

Comment by benjamin_todd on Insomnia with an EA lens: Bigger than malaria? · 2020-08-29T12:20:44.291Z · score: 2 (1 votes) · EA · GW

I think this is a good start! https://effectivealtruismcoaching.com/blog/2019/10/24/lu1xjfsg8i9rzkatmnqgh2r9ykb0r1

Comment by benjamin_todd on The case of the missing cause prioritisation research · 2020-08-17T18:42:28.571Z · score: 21 (11 votes) · EA · GW

Hey Sam, just a very quick comment that the post you link to wasn't meant to imply we intend to do less prioritisation research than before.

The 50/30/20 split we mention there was for how we intend to split delivery efforts across different target audiences, rather than on research vs. delivery. And also note that this means ~50% of effort is going into non-priority paths, which will include new potential priorities & career paths (such as the lists we posted recently).

As Rob notes in another comment, we still intend to spend ~10% of team time on research, similar to the past, and more total time because the team is larger. This would include looking into whether we should add new priority paths or problem areas.

Comment by benjamin_todd on Why I've come to think global priorities research is even more important than I thought · 2020-08-16T17:16:02.842Z · score: 5 (3 votes) · EA · GW

Thanks, it's great you're planning to contribute! I've also let GPI know about your feedback.

Comment by benjamin_todd on Why I've come to think global priorities research is even more important than I thought · 2020-08-16T12:02:34.493Z · score: 11 (8 votes) · EA · GW

I think there are donors in the community who will fund this work if we can find people to run these centres (e.g. similar people to those who funded GPI).

I think we can find more people able to run more centres over 10 years. My evidence for this is mainly that we have managed to find people in the past (e.g. the people who work at GPI) and I expect that to continue. I also think GPI is making progress finding people through their seminars and fellowships. Many of these people are junior now, but in 10 years some will be senior enough to found new centres.

Comment by benjamin_todd on Cost-Effectiveness of Air Purifiers against Pollution · 2020-07-30T17:02:54.296Z · score: 4 (2 votes) · EA · GW

Thanks! I'd be keen to hear what reduction in levels you can measure in your home, with and without the filter, over a couple of weeks. I worry that the studies will be overly optimistic.

I also worry that sleeping with the windows shut is a bad idea due to this: https://www.lesswrong.com/posts/pPZ27eZdBXtGuLqZC/what-is-up-with-carbon-dioxide-and-cognition-an-offer

Comment by benjamin_todd on The academic contribution to AI safety seems large · 2020-07-30T16:40:39.139Z · score: 23 (12 votes) · EA · GW

Hi there, thanks for the post - useful figures!

I agree with the central point, though I want to point out this issue applies to most of the problem areas we focus on. This means it would only cause you to down-rate AI safety relative to other issues if you think the 'spillover' from other work is greater for AI safety than for other issues.

This effect should be bigger for causes that appear very small, so it probably does cause AI safety to look less neglected relative to, say, climate change, but maybe not relative to global priorities research. And in general, these effects mean that super neglected causes are not as good as they first seem.

That said, it's useful to try to directly estimate the indirect resources for different issues in order to check this, so I'm glad to have these specific estimates.

There is some more discussion of this general issue in our problem framework article:

Often resources are unintentionally dedicated to solving a problem by groups that may be self-interested, or working on an adjacent problem. We refer to this as ‘indirect effort’, in contrast with the ‘direct effort’ of groups consciously focused on the problem. These indirect efforts can be substantial. For example, not much money is spent on research to prevent the causes of ageing directly, but many parts of biomedical research are contributing by answering related questions or developing better methods. While this work may not be well targeted on reducing ageing specifically, much more is spent on biomedical research in general than anti-ageing research specifically. Most of the progress on preventing ageing is probably due to these indirect efforts.

Indirect efforts are hard to measure, and even harder to adjust for how useful they are for solving the problem at hand.

For this reason we usually score only ‘direct effort’ on a problem. Won’t this be a problem, because we will be undercounting the total effort? No, because we will adjust for this in the next factor: Solvability. Problems where most of the effective effort is occurring indirectly will not be solved as quickly by a large increase in ‘direct effort’.

One could also use a directness-weighted measure of effort. So long as it was applied consistently in evaluating both Neglectedness and Solvability, it should lead to roughly the same answer.

Another challenge is how to take account of the fact that some problems might receive much more future effort than others. We don’t have a general way to solve this, except (i) it’s a reason not to give extremely low neglectedness scores to any area, and (ii) one can try to consider the future direction of resources rather than only resources today.

Comment by benjamin_todd on [updated] Global development interventions are generally more effective than Climate change interventions · 2020-07-16T11:42:10.330Z · score: 12 (4 votes) · EA · GW

Just an extra thought for those following up on this analysis:

I was wondering if this analysis stacks the deck against global health.

The basic idea is that social cost of carbon (SCC) estimates aim to include all the costs of CO2 – these are discounted, but many of the damages come ~100 years in the future.

On the other hand, the analyses of global health mainly try to quantify the immediate effects on health and income. They don't include the idea that greater health and income now can lead to compounding economic benefits in the future.

Another way of seeing this is that the SCC estimates include 'medium-term effects', whereas the global health ones might not.

Or another way of seeing it is that once we're willing to include long-term benefits in the equation, we're actually in the longtermist regime, and should focus mainly on existential risks.

In future attempts to compare climate change to global health, I think it would be useful to distinguish different worldviews used to make the assessment, which might be something like:

  • Near termist
  • Longtermist
  • Conventional economic CBA

Comment by benjamin_todd on Collection of good 2012-2017 EA forum posts · 2020-07-11T15:53:26.205Z · score: 23 (11 votes) · EA · GW

I agree the old posts get neglected, thanks for putting this together.

I'd also nominate more of Greg's old posts: https://forum.effectivealtruism.org/users/gregory_lewis

Such as this one: https://forum.effectivealtruism.org/posts/tPtY46ucbnMfNJjFE/expected-value-estimates-you-can-take-somewhat-literally

Comment by benjamin_todd on EA Survey 2019 Series: How many people are there in the EA community? · 2020-06-26T13:08:40.055Z · score: 11 (5 votes) · EA · GW

Thank you for writing this up!

Comment by benjamin_todd on MathiasKirkBonde's Shortform · 2020-06-12T21:24:48.347Z · score: 10 (6 votes) · EA · GW

Some thoughts here on how quick it is to learn: https://80000hours.org/articles/china-careers/#learn-chinese-in-china

In there, I guess that 6-18 months of full-time study in the country is enough to get to conversational fluency.

I've seen other estimates that it takes a couple of thousand hours to get fluent e.g. here: https://linguapath.com/how-many-hours-learn-language/

My guess is that it's more efficient to study full time while living in the country. I think living there increases motivation, means you learn what you actually need, means you learn a bunch 'passively', lets you practice conversation a lot (which is better than most book learning), and teaches you more of the culture. So, I'd guess someone would make more progress living there for a year compared to doing an hour a day for ~4 years, and enjoy it more.

That said, if you use the hour well, you could learn a lot of vocab and grammar. You could then get a private tutor to practice conversation, or you could go to China (or Taiwan) later, building on that base.