What are the key ongoing debates in EA?

post by richard_ngo · 2020-03-08T16:12:34.683Z · score: 68 (36 votes) · EA · GW · 1 comment

This is a question post.


Specifically, I'm interested in cases where people who are heavily involved in effective altruism both disagree about a question, and also currently put non-negligible effort into debating the issue.

One example would be the recent EA forum post Growth and the case against randomista development.

Anecdotal or non-public examples welcome.

Answers

answer by Ardenlk · 2020-03-09T09:32:58.783Z · score: 70 (32 votes) · EA(p) · GW(p)

I'm excited to read any list you come up with at the end of this!

Some I thought of:

  • How likely is it that we're living at the most influential time in history?
  • What is the total x-risk this century?
  • Are we saving/investing enough for the future?
  • How much less of an x-risk is AI if there is no "fast takeoff"? If the paper-clip scenario is super unlikely? And how unlikely are those things? [Can sum up the question as: how much should we be updating on the risk from AI due to some people updating away from Bostrom-style scenarios?]
  • How important are s-risks? Should we place more emphasis on reducing suffering than on creating happiness?
  • Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?
  • Should EA stay as a "big tent" or split up into different movements?
  • How much should EA be trying to grow?
  • Does EA pay enough attention to climate change?
comment by richard_ngo · 2020-03-15T15:08:51.407Z · score: 45 (13 votes) · EA(p) · GW(p)

Thanks for the list! As a follow-up, I'll try to list places online where such debates have occurred for each entry:

1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1 [EA · GW]

2. Toby Ord has estimates in The Precipice. I assume most discussion occurs on specific risks.

3. Lots of discussion on this; summary here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary [EA · GW] . Also more recently https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history [EA · GW]

4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like [LW · GW]

5. Most stuff on https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (although I don't find either compelling). Also a lot of Simon Knutsson's stuff, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

6a. https://forum.effectivealtruism.org/posts/LxmJJobC6DEneYSWB/effects-of-anti-aging-research-on-the-long-term-future [EA · GW] , https://forum.effectivealtruism.org/posts/jYMdWskbrTWFXG6dH/a-general-framework-for-evaluating-aging-research-part-1 [EA · GW]

6b. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals [EA · GW] , https://forum.effectivealtruism.org/posts/ndvcrHfvay7sKjJGn/human-and-animal-interventions-the-long-term-view [EA · GW]

6c. https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1 [EA · GW]

7. Nothing particularly comes to mind, although I assume there's stuff out there.

8. https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

9. E.g. here, which also links to more discussions: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for [EA · GW]

comment by Louis_Dixon (bdixon) · 2020-03-30T11:37:42.442Z · score: 1 (1 votes) · EA(p) · GW(p)

Re: 9 - I wrote this [EA · GW] back in April 2019. There have been more recent comments from Will in his AMA [EA(p) · GW(p)], and Toby in this EA Global talk (link with timestamp).

answer by Khorton · 2020-03-08T17:38:54.007Z · score: 53 (39 votes) · EA(p) · GW(p)

Should EA be large and welcoming or small and weird? Related: How important is it for EAs to follow regular social norms? How important is diversity and inclusion in the EA community?

To what extent should EA get involved in politics or push for political change?

comment by John_Maxwell (John_Maxwell_IV) · 2020-03-12T04:24:44.943Z · score: 24 (9 votes) · EA(p) · GW(p)

I just want to note that in principle, large & weird or small & welcoming movements are both possible. 60s counterculture was a large & weird movement. Quakers are a small & welcoming movement. (If you want to be small & welcoming, I guess it helps to not advertise yourself very much.)

I think you are right that there's a debate around whether EA should be sanitized for a mass audience (by not betting on pandemics or whatever). But e.g. this post [EA · GW] mentions that caution around growth could be good because growth is hard to reverse, but I don't see weirdness advocacy.

comment by Evan_Gaensbauer · 2020-03-18T07:02:49.195Z · score: 4 (3 votes) · EA(p) · GW(p)

Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I'm also not sure questions of sanitization should be reduced to a binary of doing weird things openly or not doing them at all. That framing ignores the possibility that something can be changed to be less 'weird', as has been done with AI alignment or, to a lesser extent, wild animal welfare. Someone could figure out how to make it so that betting on pandemics (or whatever) can be done without it becoming a liability for the reputation of effective altruism.

comment by vaidehi_agarwalla · 2020-03-09T09:56:29.615Z · score: 6 (5 votes) · EA(p) · GW(p)

Expanding on those points:

  • Should EA be small and elite (i.e. to influence important/powerful actors) or broad and welcoming?
  • How many people should earn to give, and how effective is this on the margin? (Maybe not a huge debate, but a lot of uncertainty.)
  • How much, if at all, should we grow EA in non-Western countries? (I think there's a fair deal of ignorance on this topic overall.)

Related to D&I: How important is academic diversity in EA? And what blindspots does the EA movement have as a result?

I don't think all of these have always been publicly discussed, but there is definitely a lack of consensus and differing views.

comment by willbradshaw · 2020-03-09T12:34:34.801Z · score: 2 (2 votes) · EA(p) · GW(p)

What does "academic diversity" mean? I could imagine a few possible interpretations.

comment by vaidehi_agarwalla · 2020-03-09T23:30:52.799Z · score: 4 (3 votes) · EA(p) · GW(p)

Getting people from non-STEM backgrounds, specifically non-econ social sciences and humanities.

comment by technicalities · 2020-03-09T15:00:47.738Z · score: 1 (1 votes) · EA(p) · GW(p)

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

comment by willbradshaw · 2020-03-09T19:00:22.632Z · score: 3 (2 votes) · EA(p) · GW(p)

I think there's quite a large diversity in what people in EA did in undergrad / grad school. There's plenty of medics and a small but nontrivial number of biologists around, for example.

What they wish they'd done at university, or what they're studying now, might be another matter.

comment by Khorton · 2020-03-08T23:47:03.508Z · score: 4 (2 votes) · EA(p) · GW(p)

Along the same lines of community health and movement growth: in what situations should individuals censor their views, or expect to be censored by someone else (e.g. a Forum moderator or Facebook group admin)?

answer by Linch · 2020-03-10T08:18:24.159Z · score: 31 (16 votes) · EA(p) · GW(p)

Among long-termist EAs, I think there's a lot of healthy disagreement about the value-loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on complexity of value and assume that the value-loading may be very complicated and/or include things like justice, honor, nature, etc.?

My impression is that the Oxford crowd (like Will MacAskill and the FHI people) are most gung ho about the total view and the simplicity needed to say pleasure good, suffering bad. It helps that past thinkers with this normative position have a solid track record.

I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."

My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like "if you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident about it being the right course of action for humanity?"

Of the three views, I get the impression that the "Oxford view" gets presented the most for various reasons, including that they are the best at PR, especially in English speaking countries.

In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years [EA · GW]) on Earth to think through things, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.

I broadly agree with this stance, though I suspect the reflection is going to be mostly used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism rather than to discover (or select) some majorly different normative theory.

comment by Matthew_Barnett · 2020-03-13T08:10:11.015Z · score: 3 (2 votes) · EA(p) · GW(p)

"I suspect the reflection is going to be mostly used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism rather than to discover (or select) some majorly different normative theory."

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

answer by MichaelStJules · 2020-03-08T18:59:19.695Z · score: 24 (13 votes) · EA(p) · GW(p)

Normative ethics, especially population ethics, as well as the case for longtermism (which is somewhere between normative and applied ethics, I guess). Even the Global Priorities Institute has research defending asymmetries and against longtermism. Also, hedonism vs preference satisfaction or other values, and the complexity of value.

Consciousness and philosophy of mind, for example on functionalism/computationalism and higher-order theories. This could have important implications for nonhuman animals and artificial sentience. I'm not sure how much debate there is these days, though.

comment by willbradshaw · 2020-03-09T19:11:16.511Z · score: 4 (3 votes) · EA(p) · GW(p)

You mention you're not sure how much debate there is around consciousness these days. Surprisingly I'd say the same is increasingly true of normative ethics.

There's still a lot of disagreement about value systems, but most people seem to have stopped having that particular argument, at least as regards total vs negative utilitarianism (which I'd say was the biggest such debate going on a few years ago).

answer by algekalipso · 2020-03-11T01:51:46.782Z · score: 21 (12 votes) · EA(p) · GW(p)

Whether avoiding *extreme suffering* such as cluster headaches, migraines, kidney stones, CRPS, etc. is an important, tractable, and neglected cause. I personally think that due to the long-tails of pleasure and pain [EA · GW], and how cheap the interventions would be, focusing our efforts on e.g. enabling cluster headaches sufferers to access DMT [EA · GW] would prevent *astronomical amounts of suffering* at extremely low costs.

The key bottleneck here might be people's ignorance of just *how bad* these kinds of suffering are. I recommend reading the "long-tails of pleasure and pain" article linked above to get a sense of why this is a reasonable interpretation of the situation.

answer by Stefan_Schubert · 2020-03-09T10:25:48.477Z · score: 21 (14 votes) · EA(p) · GW(p)

Whether we're living at the most influential time in history [EA · GW], and associated issues (such as the probability of an existential catastrophe this century).

answer by RomeoStevens · 2020-03-09T00:57:26.632Z · score: 13 (14 votes) · EA(p) · GW(p)

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

comment by OllieBase · 2020-03-11T10:11:58.929Z · score: 8 (7 votes) · EA(p) · GW(p)

Could you spell out what this means? I'd guess that most people (myself included) aren't familiar with ossification and organizational ontologies.

comment by willbradshaw · 2020-03-11T20:08:18.723Z · score: 6 (4 votes) · EA(p) · GW(p)

I suspect this may be evidence in itself that this is not currently a key ongoing debate in EA.

comment by RomeoStevens · 2020-03-12T06:47:32.245Z · score: 7 (3 votes) · EA(p) · GW(p)

Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but afaict it satisfies the criterion that significant effort has gone into debating this.

answer by technicalities · 2020-03-08T17:00:16.259Z · score: 12 (6 votes) · EA(p) · GW(p)

I've had a few arguments about the 'worm wars', whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects ('double effect' in particular), and not just for the usual PR or future-credibility reasons.

comment by Linch · 2020-03-08T21:53:43.145Z · score: 4 (3 votes) · EA(p) · GW(p)

What's the new evidence? I haven't been keeping up with the worm wars since 2017. Is there more conclusive data or studies since?

comment by alexrjl · 2020-03-12T15:49:28.486Z · score: 10 (6 votes) · EA(p) · GW(p)

I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-study in September 2019 from Taylor-Robinson et al.) has continued to point towards there being almost no effect of deworming on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because this is the same picture as in 2016/17. My WASH piece had almost no response, which might suggest that people just aren't too bothered by worms any more, though it could equally be something unrelated, like style.

I think there's a reasonable case to be made that discussion and interest around worms is dropping though, as people for whom the "low probability of a big success" reasoning is convincing seem likely to either be long-termists, or to have updated towards growth-based interventions.

comment by technicalities · 2020-03-09T13:28:40.601Z · score: 1 (1 votes) · EA(p) · GW(p)

Not sure. 2017 fits the beginning of the discussion though.

comment by Linch · 2020-03-12T06:29:01.244Z · score: 2 (1 votes) · EA(p) · GW(p)

I thought most of the fights around the worm wars were in 2015 [1]? I really haven't been following.

[1] https://chrisblattman.com/2015/07/24/the-10-things-i-learned-in-the-trenches-of-the-worm-wars/

answer by eFish · 2020-04-02T15:53:00.646Z · score: 10 (4 votes) · EA(p) · GW(p)

One such debate is how (un)important doing "AI safety" now is. See, for example, Lukas Gloor's Altruists Should Prioritize Artificial Intelligence (Gloor is at the Center on Long-Term Risk, previously known as the Foundational Research Institute) and Magnus Vinding's point-by-point critique of Gloor's essay, Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

answer by Jamie_Harris · 2020-03-15T19:08:42.445Z · score: 10 (6 votes) · EA(p) · GW(p)

"Assuming longtermism, are "broad" or "narrow" approaches to improving the value of the long-term future more promising?"

This is mostly just a broadening of one of Arden's suggestions: "Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?" Not sure how widely debated this still is, but examples include 1, 2 [EA · GW], and 3 [EA · GW].

Partly relatedly, I find Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy" a really helpful resource for keeping track of the most important evidence and arguments on important questions, and I've wondered whether a comparable resource would be helpful for the effective altruism community more widely.

answer by Nathan Young · 2020-05-28T10:37:14.645Z · score: 4 (1 votes) · EA(p) · GW(p)

I think the answers here would be better if they were split up into points. That way we could vote on each separately and the best would come to the top.

answer by Milan_Griffes · 2020-03-09T05:01:41.974Z · score: 4 (17 votes) · EA(p) · GW(p)

Whether or not psychedelics are an EA cause area.

Psychedelics posts on the Forum in 2019:

comment by Khorton · 2020-03-09T08:50:07.033Z · score: 23 (18 votes) · EA(p) · GW(p)

(I don't think this is considered a debate by most people - my read is that less than 5% of people involved with EA consider psychedelics a plausible EA cause area, possibly less than 1%)

comment by John_Maxwell (John_Maxwell_IV) · 2020-03-12T04:11:36.726Z · score: 8 (3 votes) · EA(p) · GW(p)

"View X is a rare/unusual view, and therefore it's not a debate." That seems a little... condescending or something?

How are we ever supposed to learn anything new if we don't debate rare/unusual views?

comment by willbradshaw · 2020-03-12T10:10:46.042Z · score: 18 (9 votes) · EA(p) · GW(p)

I simultaneously have some sympathy for this view and think that people responding to this question by pushing their pet cause areas aren't engaging well with the question as I understand it.

For example, I think that anti-ageing research is probably significantly underrated by EAs in general and would happily push for it in a question like "what cause areas are underrated by EAs", but would not (and have not) reference it here as a "key ongoing debate in EA", because I recognise that many people who aren't already convinced wouldn't consider it such.

So one criterion I might use would be whether disputants on both sides would consider the debate to be key.

I also agree with point (2) of Khorton's response to this.

comment by willbradshaw · 2020-03-12T12:53:32.645Z · score: 12 (8 votes) · EA(p) · GW(p)

Thinking about this more, I suspect a lot of people would agree that some more general statement, like "What important cause areas is EA missing out on?" is a key ongoing debate, while being sceptical about most specific claimants to that status (because if most people weren't sceptical, EA wouldn't be missing out on that cause area).

comment by Khorton · 2020-03-12T09:00:14.751Z · score: 8 (5 votes) · EA(p) · GW(p)

I think this is two different things:

  1. yes I was being a bit condescending, sorry
  2. I wasn't trying to say what should be a debate; I was trying to lend accuracy to the discussion of what is a key debate in the EA community.
comment by John_Maxwell (John_Maxwell_IV) · 2020-03-13T06:51:22.514Z · score: 6 (3 votes) · EA(p) · GW(p)

Apology accepted, thanks. I agree on point 2.

comment by willbradshaw · 2020-03-09T12:36:01.033Z · score: 5 (4 votes) · EA(p) · GW(p)

I definitely don't think it would generally be considered a key debate.

comment by Milan_Griffes · 2020-03-09T18:10:29.223Z · score: 2 (1 votes) · EA(p) · GW(p)

I think it's closely related to key theoretical debates, e.g. Romeo's answer and Khorton's answer on this thread.

comment by Milan_Griffes · 2020-03-09T18:12:00.465Z · score: 2 (1 votes) · EA(p) · GW(p)

fwiw my read on that is ~15-35%, but we run in different circles

comment by Buck · 2020-03-10T04:13:21.733Z · score: 14 (6 votes) · EA(p) · GW(p)

I'm interested in betting about whether 20% of EAs think psychedelics are a plausible top EA cause area. E.g. we could sample 20 EAs from some group and ask them. Perhaps we could ask random attendees from last year's EAG, or we could do a poll in EA Hangout.

comment by Linch · 2020-03-10T08:03:15.703Z · score: 19 (4 votes) · EA(p) · GW(p)

We may need to operationalize "top EA cause area" more precisely, but I would concur with Buck and also bet at money odds that <20% of a reasonable random sample of EAs would answer a question like "in 2025, will psychedelics normalization be a top 5 priority for EAs?" in the affirmative.

comment by Milan_Griffes · 2020-03-10T23:22:41.551Z · score: -2 (6 votes) · EA(p) · GW(p)

Happy to make a bet here – let's figure out an operationalization that would satisfy all parties!

fwiw, 21.5% of 2019 EA survey [EA · GW] respondents thought Mental Health should be a "top or near top priority" and 58.5% thought it should receive "at least significant resources".

I'm sure we can quibble about how the "Mental Health" category should map to "Psychedelics", though it seems clear that psychedelics are one of the most promising developments in mental health in the last few decades (breakthrough therapy designation from the FDA and all that).

If we assume half of the above considered psychedelics to be in the mental health bucket, then 10.75% of 2019 respondents thought psychedelics should be "top or near top priority" and 29.25% thought that psychedelics should receive "at least significant" EA resources. (And so I'd win the bet under that operationalization, though I suppose we'd also have to quibble over how "receive at least significant resources" maps to "plausible top EA cause area"...)

comment by Larks · 2020-03-12T02:01:28.545Z · score: 23 (8 votes) · EA(p) · GW(p)

"I'm sure we can quibble about how the 'Mental Health' category should map to 'Psychedelics', though it seems clear that psychedelics are one of the most promising developments in mental health in the last few decades (breakthrough therapy designation from the FDA and all that). If we assume half of the above considered psychedelics to be in the mental health bucket ..."

This does not seem like a quibble to me at all. It seems 'clear' to you but this is by no means the case for most people. I would happily bet that well under half of those people were thinking psychedelics when they said mental health.

comment by Milan_Griffes · 2020-03-12T05:15:30.940Z · score: 2 (1 votes) · EA(p) · GW(p)

Fair enough.

It seems clear to me because most mental health professionals I've encountered in the last ~2 years agree that psychedelics are the most innovative thing coming into mainstream Western mental health since SSRIs came online in the 1990s.

There's an obvious sampling bias here, but I've seen this from many people who are personally skeptical or uncertain about psychedelics and still agree that the early trials are extremely promising, not just from enthusiasts.

You can also see it in the media coverage – there's a lot of positive press about the psychedelic renaissance and some voices of caution too, but basically no negative press. (And the voices of caution are mostly saying "this is a very powerful thing that needs to be managed carefully.")

comment by Milan_Griffes · 2020-03-12T05:27:41.481Z · score: 2 (1 votes) · EA(p) · GW(p)

Even if we assume that only 25% of Mental Health supporters were thinking of psychedelics, that's still ~15% of survey respondents saying that psychedelics should receive "at least significant" EA resources.

0.585 × 0.25 ≈ 0.15 [edited to correct double-counting]

comment by Khorton · 2020-03-12T09:01:50.322Z · score: 12 (7 votes) · EA(p) · GW(p)

Honestly I would assume less; I voted for Mental Health thinking of Strong Minds.

comment by alexrjl · 2020-03-12T12:24:44.459Z · score: 3 (2 votes) · EA(p) · GW(p)

Ditto to both parts of this

comment by riceissa · 2020-03-12T07:34:49.267Z · score: 8 (5 votes) · EA(p) · GW(p)

I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double counting some people.

Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph).

So your actual calculation would just be 0.585 × 0.25, which is about 15%.
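The correction above can be sketched as a quick check. The survey figures are the ones quoted in this thread, and the 25% psychedelics share is Milan's illustrative assumption, not a survey result:

```python
# Figures quoted in the thread (2019 EA Survey, "Mental Health" row).
top_or_near_top = 0.215        # "top or near top priority"
at_least_significant = 0.585   # already includes everyone at "significant" or above

# Naive reading: summing the two buckets double-counts, because
# "at least significant" already contains "top or near top".
double_counted = top_or_near_top + at_least_significant  # 0.80 -- inflated

# Illustrative assumption from the thread: 25% of "Mental Health"
# supporters were thinking specifically of psychedelics.
psychedelics_share = 0.25
corrected = at_least_significant * psychedelics_share

print(f"{corrected:.3f}")  # 0.146, i.e. about 15% of respondents
```

This just restates the arithmetic in the comments; the real uncertainty is in the 25% assumption, not the multiplication.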

comment by Milan_Griffes · 2020-03-12T23:03:12.213Z · score: 4 (2 votes) · EA(p) · GW(p)

Good point, thanks. I've edited my comment to correct the double-counting.

comment by Liam_Donovan · 2020-03-11T09:32:03.042Z · score: 2 (2 votes) · EA(p) · GW(p)

I'd like to take Buck's side of the bet as well if you're willing to bet more

answer by lucy.ea8 · 2020-03-08T18:36:07.893Z · score: -7 (21 votes) · EA(p) · GW(p)

My fundamental disagreement with the EA community is on the importance of basic education (high-school equivalent in the USA).

1 comment

Comments sorted by top scores.

comment by willbradshaw · 2020-03-09T19:07:25.045Z · score: 17 (9 votes) · EA(p) · GW(p)

I think quite a few people here are interpreting this question to be one of either

"What is the issue about which I personally disagree with what I perceive to be EA orthodoxy?"

or

"What seemingly-EA-relevant issues am I personally most confused/uncertain about?"

Either of which could be a good question to answer, but not necessarily here (though the second one seems like a better substitution than the first).