Important Between-Cause Considerations: things every EA should know about
post by jackmalde
Choosing a preferred cause area is arguably one of the most important decisions an EA will make. Not only are there plausibly astronomical differences in value between non-EA and EA cause areas, but this is also the case between different EA cause areas. It therefore seems important to make it easy for EAs to make a fully-informed decision on preferred cause area.
In this post I claim that, to make the best choice on preferred cause area, EAs should have at least a high-level understanding of various ‘Important Between-Cause Considerations’ (IBCs). An IBC is an idea that a significant proportion of the EA community takes seriously, and that is important to understand (at least at a high-level) in order to aid in the act of prioritising between the potentially highest value cause areas, which I classify as: extinction risk, non-extinction risk longtermist, near-term animal-focused, near-term human-focused, global priorities research and movement building. I provide illustrations of the concept of an IBC, as well as a list of potential IBCs.
Furthermore, I think that the EA community needs to do more to ensure that EAs can easily become acquainted with IBCs, by producing a greater quantity of educational content that could appeal to a wider range of people. This could include short(ish) videos, online courses, or simplified write-ups. An EA movement where most EAs have at least a high-level understanding of all known IBCs should be a movement where people are more aligned to the highest value cause areas (whatever these might be), and ultimately a movement that does more good.
Note: I am fairly confident in the claim that it would be good for the EA community to do more to enable EAs to better understand important ideas, and that a greater variety of educational content would help with this. My stronger claims are more speculative, but I hold them to be true until convinced otherwise.
Acknowledgement: Many thanks to Michael Aird for some helpful comments on a first draft of this post.
Illustrations of the idea
Here are two fictional stories:
Arjun is a university student studying Economics and wants to improve health in the low-income world. He has been convinced by Peter Singer’s shallow pond thought experiment and is struck by how one can drastically improve the lives of those in different parts of the world at little personal cost. On the other hand, he has never been convinced of the importance of longtermist cause areas. In short, Arjun holds a person-affecting view of population ethics which makes him relatively unconcerned about the prospect of human extinction. One day, Arjun comes across a blog post on the EA Forum which summarises the core arguments of a paper called “The Case for Strong Longtermism” by Greaves and MacAskill. He’s heard of the paper but, not being an academic, has never quite felt up for reading it. A blog post however seems far more accessible to him. On reading the post, Arjun is struck by the claim that longtermism is broader than just reducing extinction risk. He is surprised to learn that there may be tractable ways to improve average future well-being, conditional on humanity not going prematurely extinct, for example by improving institutions. Whilst Arjun doesn’t feel the need to ensure people exist in the future, he thinks it an admirable goal to improve the wellbeing of those who will live anyway. Over the next month, Arjun reads everything on longtermism he can get his hands on and, whilst this doesn’t convince him of the validity of longtermism, it convinces him that it is at least plausible. Ultimately, because the stakes seem so high, Arjun decides to switch from working on global health to researching potentially tractable longtermist interventions that may be desirable even under a person-affecting view, with a focus on institutional and political economics.
Maryam wants to spend her career finding the best ways to improve human mental health. She has suffered from depression before and knows how bad it can be. There also seem to be some tractable ways to make progress on the problem. One day, Maryam’s friend Lisa tells her that she just has to read Animal Liberation by Peter Singer, as it changed Lisa’s life. Maryam googles the book and reads how influential it has been, so on Sunday morning Maryam buys a copy for her kindle and gets reading. Five hours later Maryam has devoured the book and is feeling weird. She can’t find the fault in Singer’s philosophical argument. Discriminating on the basis of species seems no different to discriminating on the basis of sex or race. What ultimately matters is that animals can feel pleasure and pain. Maryam asks herself why she is so concerned when a human is killed, but not when a pig or a cow is. Over the next month Maryam proceeds to read everything she can on the topic and learns of the horrors of factory farming. Ultimately Maryam decides she wants to change her focus. Mental health in humans is really important, but ending factory farming, a more neglected cause, seems to her to be even more so.
NOTE: I don’t necessarily hold the views of Maryam or Arjun; these stories are simply illustrative.
Important Between-Cause Considerations
Maryam and Arjun have something in common: they each encountered an important idea (or ideas) which led them to change their view of the most important cause area. I call these ideas ‘Important Between-Cause Considerations’ (IBCs). More formally, an IBC is an idea that a significant proportion of the EA community takes seriously, and that is important to understand (at least at a high level) in order to aid in the act of prioritising between the potentially highest value cause areas, which I classify as: extinction risk, non-extinction risk longtermist, near-term animal-focused, near-term human-focused, global priorities research and movement building.
In Maryam’s case the IBC was the concept of speciesism and later an awareness of factory farming, which changed her focus from near-term human-focused to near-term animal-focused. In Arjun’s case it was the realisation of the potential robustness of longtermism to differences in population axiology, which ultimately changed his focus from near-term human-focused to global priorities research.
My core claim is that we want to make it far easier for EAs to develop at least a high-level understanding of all known IBCs. This is because a movement in which people are more aware of these key ideas, and are therefore able to make fully-informed decisions on preferred cause areas, should be a movement in which people are more aligned to the highest impact causes, whatever these might be. I’m not saying that Arjun and Maryam definitely reoriented in an objectively better way when they came across the information they did, but I think that, on average, this should be the case. When people are exposed to more information and credible arguments, they should, on average, make better decisions.
Because certain cause areas are plausibly far better than others, a movement in which EAs understand IBCs and potentially reorient their focus on this basis, may do far more good than it would have otherwise. Indeed I chose to classify cause areas in the way I have because I think this classification allows for there to be potentially astronomical differences in value between the cause areas. There probably won’t be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare). As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out).
Here’s a question for you to ponder: how do you know you aren’t a Maryam or an Arjun? In other words, is it possible that there’s some idea that you haven’t come across or fully understood but that, if you did, would cause you to want to change your preferred cause area? Unless you’ve spent quite a bit of time investigating all the EA literature, you probably can’t be sure there isn’t such an idea out there. I’m certainly not saying this will apply to all EAs, but I think it will apply to a significant number, and I actually think it applies to myself, which is why I currently don’t have a strongly-held preferred cause area.
Potential list of IBCs
In my opinion, most EAs should have at least a high-level understanding of the following IBCs (which are listed below in no particular order). The idea is that, for each of these, I could tell a story like Maryam’s or Arjun’s, which involves someone becoming aware of the idea, and then changing their preferred cause area.
In reality, I suspect there are valid reasons for certain people not to engage with some of the IBCs. For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (i.e. by understanding the overall probability of extinction risk this century). Ideally there would be some sort of flowchart to help people avoid engaging with ideas that have no chance of swaying their preferred cause area.
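To make the flowchart idea a little more concrete, here is a minimal sketch of how such a routing tool might work. The questions, topic strings, and routing logic here are all illustrative assumptions on my part, not a real prioritisation procedure:

```python
# Illustrative sketch only: the questions and routings below are hypothetical,
# meant to show the shape of a "skip IBCs that can't sway you" flowchart.
def ibcs_worth_engaging_with(person_affecting: bool, open_to_animal_causes: bool):
    """Route a reader to the IBCs that could still sway their cause choice."""
    topics = []
    if person_affecting:
        # A confident person-affecting view rules out reducing extinction risk
        # for its own sake, so skip estimates of extinction probability and
        # point instead to longtermist work that is robust to that view.
        topics.append("longtermist interventions robust to person-affecting views")
    else:
        topics.append("overall probability of extinction this century")
        topics.append("objections to each population axiology")
    if open_to_animal_causes:
        topics.append("animal sentience and the scale of factory farming")
    return topics

# Example: someone like Arjun (person-affecting, currently human-focused)
print(ibcs_worth_engaging_with(person_affecting=True, open_to_animal_causes=False))
```

A real version would of course need many more branches, and would need care so that users understand each question before answering it.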
I am sure that there are many IBCs that I haven’t included that should be here, and some IBCs that are included that shouldn’t be. I would appreciate any comments people may have. I also include some links to relevant texts (mostly off the top of my head - a later exercise could do this more thoroughly).
- The different population axiologies (total utilitarianism, person-affecting etc.) (Greaves 2017)
- The key objections that can be levelled to each axiology (repugnant conclusion, non-identity problem etc.) (Greaves 2017)
- The general thrust of the impossibility theorems (Greaves 2017)
- The implications of choice of population axiology for preferred cause area (various)
- The concept of speciesism (Singer 1975)
- The arguments for and against the sentience of non-human animals (Muehlhauser 2017)
- The scale of suffering on factory farms and potentially promising interventions to reduce that suffering or end factory farming (various)
- Wild animal suffering and the leading ideas on how to reduce it
- The concept of moral circle expansion to anything sentient, and the possibility of artificial sentience
- The distinction between simple cluelessness, complex cluelessness, and not being clueless (Greaves 2016)
- The possible implications of cluelessness for cause prioritisation
- The leading suggestions for how to act under complex cluelessness (various)
Arguments for and against long-termism
(Some of these are quite dense)
Are we living at the most influential period in history?
- The argument as presented by Will MacAskill that we probably aren’t living at the most influential time (MacAskill 2020)
- The counterarguments against MacAskill’s position
- The implications of the answer to this question for what we should do (MacAskill 2020)
Investing for the future
Are we living in a simulation?
Global health and development
- The extent of global inequality and the concept of the diminishing marginal utility of resources, motivating giving to the global poor (various)
- The shallow pond / drowning child thought experiment (Singer 1972)
- The leading randomista intervention types and GiveWell-type charity evaluation
- Arguments for and against prioritising randomista interventions over boosting economic growth (Hillebrandt & Halstead, 2020)
- Possible unintended side effects of working on global health and development (meat-eater problem, climate change) (various)
- The Tyler Cowen-type argument for why maximising economic growth might be the most important thing to do
- The possible drawbacks of boosting economic growth (animal welfare, climate change etc.) (various)
- The arguments for a suffering-focused ethics (Gloor, 2019)
- The implications of such a view, including focusing on s-risks
- Organisations such as the Global Priorities Institute are carrying out important research that could be full of IBCs. I think it’s important that this research is made easily digestible for those that may not want to read the original research
Some potential objections
Below are some possible objections to my argument that I have thought up. I leave out some possible objections that I feel I have already (tried to) tackle in the above text. I certainly may not be covering all possible objections in this post so am looking forward to people’s comments.
Objection #1: It’s not really the best use of time for people
“Think of the opportunity cost of people reading up on all of this. Is it really worth the time?”
I have a few things to say here. First of all, I don’t want becoming acquainted with the IBCs to be a very time-consuming endeavour, and I don’t think it has to be. The way to make it easy for people is to produce more educational content that is easy to digest. Not everyone wants to read academic papers. I would love to see the EA community produce a wider variety of content such as videos, simplified write-ups, or online courses, and I’d actually be quite interested in playing a part in this myself. I plan to make a video on one of my proposed IBCs as a personal project (to assuage concerns of doing harm I don’t plan to refer to EA in it).
Secondly, even if it is a non-negligible time commitment, I think it’s probably worth it for the reasons I outlined earlier. Cause area is arguably the most important decision an EA will make, and the differences in value between cause areas are potentially astronomical. It makes sense to me to spend a decent amount of time becoming acquainted with any ideas that can prove pivotal in deciding on a cause area. Even if one doesn’t want to change career, becoming convinced of the importance of a certain cause area can lead to one changing where they donate and the way they discuss ideas with other EAs, so I think it’s worth it for almost anyone.
Objection #2: People know these things
“Most people do consider important considerations before deciding on their preferred cause area and career and already know about these topics.”
I am fairly confident that many IBCs are not well-understood by a large number of EAs (myself included). I recently carried out a poll on the EA Polls Facebook group asking about awareness of the concept of ‘complex cluelessness’ and what people think are the implications of this for work in global health and development. The most common response was “I’m not aware of the concept”.
That’s just one example of course, but my general impression from interacting with others in the EA community is that EAs have not engaged with all the IBCs enough. Further polls could shed more light on what IBCs people are well aware of, and what IBCs people aren’t so well aware of.
Objection #3: Greater knowledge of these things won’t change minds
“Even if people don’t know about some of these things, I doubt greater knowledge of them would actually change minds. These ideas probably won’t have the effect you’re claiming they will.”
Maybe there aren’t many Arjuns or Maryams out there who would change their mind when confronted with these IBCs, perhaps because they are already aligned to the best cause area (for their underlying views), or because I’m just off the mark about the potential power of many of these ideas.
This is possible, but I’m more optimistic, in part due to my personal experience. On a few occasions I have come across ideas that have caused me to seriously rethink my preferred cause area. Since learning about EA I have gone from global health to ending factory farming to being quite unsure as I started to take long-termism more seriously. EAs are generally open-minded and very rational, so I’m hopeful a significant number can theoretically change their preferred cause area.
However, even if greater knowledge doesn’t change minds, I still think there is a strong case for a greater focus on educating people on these topics. I think this could improve the quality of discussion in the community and aid the search for the ultimate truth.
Objection #4: Some people need to know all this, but not everyone
“We probably do need some people to be aware of all of this so that they can make a fully-informed decision of which cause area is most important. These people should be those who are quite influential in the EA movement, and they probably know all this anyway. As for the rest of us average Joes, can’t we just defer to these people?”
In my opinion, it’s not as simple as that. It isn’t really clear to me how one can defer on the question of which cause area to prioritise. I guess if one were to try to defer in this way, one would probably go long-termist, as this is what most of the most prominent EAs seem to align to. In practice however, I don’t think people want to defer on cause area and, if they’re not going to defer, then we should ensure that they are well-informed when making their own decision.
Objection #5: This process never ends and we have to make a decision
“Well, IBCs are going to keep popping up as foundational research continues. I could end up wanting to change my cause area an arbitrarily large number of times and I think I should just make a decision on the cause area”.
Fair enough: at some point many of us should make a decision on which cause area to focus on, accepting that there may be IBCs yet to be uncovered that could change our views, because we can’t just wait around forever. However, this is no excuse not to engage with the IBCs we are currently aware of. That seems to be the least we should do.
After engaging with the IBCs we are currently aware of, there are two broad decisions one can make. Firstly, one could have a preferred cause area and feel fairly confident that further IBCs won’t change their mind. In this case one can just pursue their preferred cause area. Secondly, one could feel that it is quite possible that further IBCs could come along and change their preferred cause area. In that case one may want to remain cause-neutral by pursuing paths such as global priorities research, earning-to-give/save, or EA movement building (I realise I’ve actually defined two of these as cause areas themselves, but at the same time they seem very robustly good, making it fairly safe to pursue them even when quite uncertain about how best to do good in the world). Either way, I think it is important for EAs to engage with all IBCs that we are currently aware of.
If what I have said is true, then there is a central body of knowledge that most EAs should be aware of and understand, at least to a certain degree. It is also the case that many EAs currently don’t have a good understanding of much of this body of knowledge.
In light of this these are my proposed next steps:
- Please comment on this post and either:
- Tear all of this to shreds, in which case steps 2-5 can be ignored
- Shower me with praise and suggest some additions/removals from my list of IBCs. Then I would proceed with step 2
- I might try to gauge to what extent the EA community is aware of the IBCs, perhaps through a survey asking people about their awareness of specific concepts and maybe even including some questions to test knowledge
- Do a stock take of all the resources that are currently available to learn about the IBCs
- Identify where further content might be useful to inform a wider range of people of the IBCs, and determine what type of content this should be
- Potentially collaborate with others to produce this content and disseminate to the EA community (I am very aware of the danger of doing harm at this stage and would mitigate this risk or may not engage in this stage myself if necessary)
Comments
comment by Benjamin_Todd · 2021-01-29
Just a very quick answer, but I'd be keen to have content that lists e.g. the most important 5-10 questions that seem to have the biggest effect on cause selection, since I think that would help people think through cause selection for themselves, but without having to think about every big issue in philosophy.
We tried to make a simple version of this a while back here: https://80000hours.org/problem-quiz/
This was another attempt: http://globalprioritiesproject.org/2015/09/flowhart/
OP's worldview investigations are also about these kinds of considerations (more info: https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/)
I think the main challenge is that it's really difficult to pinpoint what these considerations actually are (there's a lot of disagreement), and they differ a lot by person and which worldviews we want to have in scope. We also lack easy-to-read write-ups of many of the concepts.
I'd be interested in having another go at the list though, and I think we have much better write-ups than e.g. pre-2017. I'd be very interested to see other people's take on what the list should be.
↑ comment by jackmalde · 2021-01-29
A flowchart like that seems like a good idea to me. I did briefly mention the possibility of some sort of flowchart in my post, although on reflection I actually think it could be even better than I first gave it credit for. It can align people to the cause areas that fit their underlying views, whilst also countering potential misunderstandings that prevent people doing more good than they otherwise would. For example, in reality many people with person-affecting views might reject longtermism off the bat, but that global priorities project flowchart would guide those people, if they still care about people in the far future (like Arjun in my original post), to longtermist areas that may be robust to certain person-affecting views. I like that!
I would endorse a refresh of that flowchart, but think it would be helpful to have underlying content that people can look at if they want to understand the logic of the flowchart or make better decisions as they go along the flowchart. "Can we permanently improve society" is a pretty darned tough question for someone to answer, so it would be good to have some helpful easy-to-digest content for those who may want to read up on that a bit before actually giving an answer. It's quite a different question to "Are future people just as valuable as present people" which seems more subjective, although even for this question there could be some useful content to link to.
I may have a bit more of a think about a list of questions. Thanks for raising it!
↑ comment by MichaelA · 2021-01-30
(I believe an EA wiki type thing is currently in the works, so these sorts of efforts could perhaps be combined with that somehow.)
comment by MHarris · 2021-01-29
My main reaction (rather banal): I think we shouldn't use an acronym like IBC! If this is something we think people should think about early in their time as an effective altruist, let's stick to more obvious phrases like "how to prioritise causes".
↑ comment by MichaelA · 2021-01-30
Personally, I think the term "important between-cause considerations" seems fairly clear in what it means, and seems to fill a useful role. I think an expanded version like "important considerations when trying to prioritise between causes" also seems fine, but if the concept is mentioned often the shorter version would be handy.
And I'd say we should avoid abbreviating it to IBC except in posts (like this one) that mention the term often and use the expanded form first - but in those posts, abbreviating it seems fine.
I think "how to prioritise causes" is a bit different. Specifically, I think that that'd include not just considerations about the causes themselves (IBCs), but also "methodological points" about how to approach the question of how to prioritise, such as:
- "focus on the interventions within each cause that seem especially good"
- "consider importance, tractability, and neglectedness"
- "try sometimes taking a portfolio or multiplayer-thinking perspective"
- "consider important between-cause considerations"
(That said, I think that that those "methodological points" are also important, and now realise that Jack's "calls to action" might be similarly important in relation to those as in relation to IBCs.)
comment by MichaelA · 2021-01-29
Why I'm not sure it'd be worthwhile for all EAs to gain a high-level understanding of (basically) all IBCs
(Note: I'm not saying I think it's unlikely to be worthwhile, just that I'm not sure. And as noted in another comment, I do agree with the broad thrust of this post.)
I basically endorse a tentative version of Objection #1; I think more people understanding more IBCs is valuable, for the reasons you note, but it's just not clear how often it's valuable enough to warrant the time required (even if we find ways to reduce the time required). I think there are two key reasons why that's unclear to me:
- I don't think causes differ astronomically in the expected impact a reasonable EA should assign them after (let's say) a thousand hours of learning and thinking about IBCs, using good resources
- (Note: By "do causes differ astronomically in impact", I mean something like "does the best intervention in one cause area differ astronomically in impact from the best intervention in another area", or a similar statement but with the average impact of "positive outliers" in each cause, or something)
- I do think a superintelligent being with predictive powers far beyond our own would probably see the leading EA cause areas as differing astronomically in impact or expected impact
- But we're very uncertain about many key questions, and will remain very uncertain (though less so) after a thousand hours of learning and thinking. And that dampens the differences in expected impact
- And that in turn dampens the value of further efforts to work out which cause one should prioritise
- It also pushes in favour of plucking low-hanging fruit in multiple areas and in favour of playing to one's comparative advantage rather than just to what's highest priority on the margin
- See also the comments on this post
- See also Doing good together — how to coordinate effectively, and avoid single-player thinking
- I expect the EA community will do more good if many EAs accept a bit more uncertainty than they might naturally be inclined to accept regarding their own impact, in order to just do a really good job of something
- This applies primarily to the sort of EAs who would naturally be inclined to worry a lot about cause prioritisation. I think most of the general public, and some EAs, should think a lot more than they naturally would about whether they're prioritising the right things for their own impact.
- This also might apply especially to people who already have substantial career capital in one cause area
- (But note that I'm saying "dampens" and "pushes in favour", not "eliminates" or "decisively proves one should")
- I think different interventions within a cause area (or at least within the best cause area) differ in expected impact by a similar amount to how much causes differ (and could differ astronomically in "true expected impact", evaluated by some being that has far less uncertainty than we do)
- So I disagree with what I think you mean by your claim that "There probably won’t be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)"
- One thing that makes this clearly true is that, within every cause area, there are some interventions which have a negative expected impact, and others which have the best expected impact (as far as we can tell)
- So the difference within each cause area spans the range from a negative value to the best value within the cause area
- And at least within the best cause area, that's probably a larger difference than the difference between cause areas (since I'd guess that each cause area's best interventions are probably at least somewhat positive in expectation, or not as negative as something that backfires in a very important domain)
- It's harder to say how large the differences in expected impact between the currently leading candidate interventions within each cause area are
- But I'd guess that each cause area will contain some interventions that people new to the area would consider, but that will have approximately zero or negative value
- E.g., by being and appearing naive and thus causing reputational harms or other downside risks
(Again, I feel I should highlight that I do agree with the general thrust of this post.)
↑ comment by MichaelA · 2021-01-29
Ironically, having said this, I also think I disagree with you in sort-of the opposite direction on two specific points (though I think this is quite superficial and minor).
As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out).
I agree with the basic idea that it's probably best to start off thinking mostly about things like IBCs, and then on average gradually increase how much one focuses on prioritising and acting within a cause area. But it doesn't seem ideal to me to see this as a totally one-directional progression from one stage to a very distinct stage.
I think even to begin with, it might often be good to already be spending some time on prioritising and acting within a cause area.
And more so, I think that, even once one has mostly settled on one cause area, it could occasionally be good to spend a little time thinking about IBCs again. E.g., let's say a person decides to focus on longtermism, and ends up in a role where they build great skills and networks related to lobbying. But these skills and networks are also useful in relation to lobbying for other issues as well, and the person is asked if they could take on a potentially very impactful role using the same skills and networks to reduce animal suffering. (Maybe there's some specific reason why they'd be unusually well-positioned to do that.) I think it'd probably then be worthwhile for that person to again think a bit about cause prioritisation.
I don't think they should focus on the question "Is there a consideration I missed earlier that means near-term animal welfare is a more important cause than longtermism?" I think it should be more like "Do/Should I think that near-term animal welfare is close enough to as important a cause as longtermism that I should take this role, given considerations of comparative advantage, uncertainty, and the community taking a portfolio approach?"
(But I think this is just a superficial disagreement, as I expect you'd actually agree with what I've said, and that you might even have put in the sentence I'm disagreeing with partly to placate my own earlier comments :D)
For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (i.e. by understanding the overall probability of x-risk this century).
I'm guessing you mean "overall probability of extinction risk", rather than overall probability of x-risk as a whole? I say this because other types of existential risk - especially unrecoverable dystopias - could still be high priorities from some person-affecting perspectives.
If that's what you mean, then I think I basically agree with the point you're making. But it's still possible for someone with a person-affecting view to prioritise reducing extinction risk (not just other existential risks), primarily because extinction would harm the people alive at the time of the extinction event. So it still might be worth that person at least spending a little bit of time checking whether the overall probability of extinction seems high enough for them to prioritise it on those grounds. (Personally, I'd guess extinction risk wouldn't be a top priority on purely person-affecting grounds, but would still be decently important. I haven't thought about it much, though.)
↑ comment by MichaelA ·
2021-01-29T05:43:05.344Z
It also seems useful to imagine what we want the EA movement to become in (say) 10 years time, and to consider who this post is talking about when it says "every EA".
For example, maybe we want EA to become more like a network than a community - connecting a vast array of people from different areas to important ideas and relevant people, but with only a small portion of these people making "EA" itself a big part of their lives or identities. This might look like a lot of people mostly doing what they're already doing, but occasionally using EA ideas to guide or reorient themselves. That might be a more natural way for EA to have a substantial influence on huge numbers of people, including very "busy and mainstream" people like senior policymakers, than for all those people to actually "become EAs". This seems like it might be a very positive vision (I'm not sure it's what we should aim for, but it might be), but it's probably incompatible with all of these people knowing about most IBCs.
Or, relatedly, imagine the EA movement grows to contain 100,000 people. Imagine 20,000 are working on things like AI safety research and nuclear security policy, in places like MIRI, the US government, and the Carnegie Foundation; 20,000 are working on animal welfare in a similar range of orgs; 20,000 on global health in a similar range of orgs; etc. It doesn't seem at all obvious to me that the world will be better in 50 years if all of those people spent the time required to gain a high-level understanding of most/all IBCs, rather than spending some of that time learning more about whatever specific problem they were working on. E.g., I imagine a person who's already leaning towards a career that will culminate in advising a future US president on nuclear policy might be better off just learning even more minutia relevant to that, and trusting that other people will do great work in other cause areas.
To be fair, you're just talking about what should be the case now. I think prioritisation is more important, relative to just getting work done, the smaller EA is. But I think this might help give a sense of why I'm not sure how often learning more about IBCs would be worthwhile.
↑ comment by jackmalde ·
2021-01-29T20:45:50.819Z
So I disagree with what I think you mean by your claim that "There probably won’t be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)"
For the record, on reflection, I actually don't think this claim is important for my general argument, and I agree with you that it might not be true.
What really matters is if there are astronomical differences in (expected) value between the best interventions in each cause area.
In other words, in theory it shouldn't matter if the top-tier shorttermist interventions are astronomically better than mid-tier shorttermist interventions; it just matters how the top-tier shorttermist interventions compare to the top-tier longtermist interventions.
↑ comment by MichaelA ·
2021-01-30T02:19:02.558Z
I think this claim does matter in that it affects the opportunity costs of thinking about IBCs. (Though I agree that it doesn't by itself make or break the case for thinking about IBCs.)
If the differences in expected impact (after further thought) between the superficially-plausibly-best interventions within the best cause area are similar to the differences in expected impact (after further thought) between cause areas, that makes it much less obvious that all/most EAs should have a high-level understanding of all/most cause areas. (Note that I said "much less obvious", not "definitely false".)
It's still plausible that every EA should first learn about almost all IBCs, and then learn about almost all important within-cause considerations for the cause area they now prioritise. But it also seems plausible that they should cut off the between-cause prioritisation earlier in order to roll with their best-guess-at-that-point, and from then on just focus on doing great within that cause area, and trust that other community members will also be doing great in other cause areas. (This would be a sort of portfolio, multiplayer-thinking approach, as noted in one of my other comments.)
↑ comment by jackmalde ·
2021-01-29T20:32:41.492Z
Thanks for these thoughts and links Michael, and I’m glad you agree with the broad thrust of the post! You’ve given me a lot to think about and I’m finding my view on this is evolving.
I don't think causes differ astronomically in the expected impact a reasonable EA should assign them after (let's say) a thousand hours of learning and thinking about IBCs, using good resources
Thanks for this framing which is helpful. Reading through the comments and some of your links, I actually think that the specific claim I need to provide more of an argument for is this one:
There are astronomical differences in the expected value of different cause areas and people can uncover this through greater scrutiny of existing arguments and information.
I tentatively still hold this view, although I'm starting to think that it may not hold as broadly as I originally thought, and that I got the cause area classification wrong. For example, it might not hold between near-term animal-focused and near-term human-focused areas. In other words, perhaps it just isn't really possible, given the information that is currently available, to come to a somewhat confident conclusion that one of these areas is much better than the other in expected value terms. I have also realised that Maryam, in my hypothetical example, didn't actually conclude that near-term animal areas were better than near-term human areas in terms of expected value. Instead, she just concluded that near-term animal areas (which are mainstream EA) were better than a specific near-term human area (mental health - which is EA but not mainstream). So I'm now starting to question whether the way I classified cause areas was helpful.
Having said all that I would like to return to longtermist areas. I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than shorttermist areas in expected value terms. Greaves and MacAskill’s The Case for Strong Longtermism basically argues this. Ord’s The Precipice basically argues this, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
Does this then mean it's very important to educate people about ideas that help people prioritise between shorttermist areas and longtermist areas? Well, I think if we adopt some epistemic humility and accept that it's probably worth educating people about ideas where prominent EAs have claimed astronomical expected value without much disagreement from other prominent EAs, then the answer is yes. The idea here is that, because these people hold these ideas without much prominent disagreement, there is a good chance the ideas are correct, and so, on average, people being aware of them should make them reorient in ways that allow them to do more good. This actually makes some sense to me, although I realise this argument is grounded in epistemic humility and deferring, which is quite different to my original argument. It's not pure deferring of course, as people can still come across the ideas and reject them; I'm just saying it's important that they come across and understand the ideas in the first place.
So to sum up, I think my general idea might still work, but I need to rework my cause areas. A better classification might be: non-EA cause areas, shorttermist EA cause areas, longtermist EA cause areas. These are the cause areas between which I still think there should be astronomical differences in expected value that can plausibly be uncovered by people. In light of this different cause classification I suspect some of my IBCs will drop - specifically those that help in deciding between interventions between which there is reasonable disagreement amongst prominent EAs (e.g. Ord and MacAskill disagree on how influential the present is, so that can probably be dropped).
Given this it might be that my list of IBCs reduce to:
- Those that help people orient from non-EA to EA, so that they can make switches like Maryam did
- Those that help people potentially orient from shorttermism to longtermism, so that they can make switches like Arjun did
I feel that I may have rambled a lot here and I don't know if what I have said makes sense. I'd be interested to hear your thoughts on all of this.
↑ comment by MichaelA ·
2021-01-30T02:10:19.891Z
I think you make a bunch of interesting points. I continue to agree with the general thrust of what you propose, though I disagree on some parts.
I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than shorttermist areas in expected value terms. Greaves and MacAskill’s The Case for Strong Longtermism basically argues this. Ord’s The Precipice basically argues this, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
I haven't read that key paper from Greaves & MacAskill. I probably should. But some complexities that seem worth noting are that:
- The longtermist interventions we think are best usually have a nontrivial chance of being net harmful
- It seems plausible to me that, if "longtermism is correct", then longtermist interventions that are actually net harmful will tend to be more harmful than neartermist interventions that are actually net harmful
- This is basically because the "backfires" would be more connected to key domains, orgs, decisions, etc.
- Which has various consequences, such as the chance of creating confusion on key questions, causing longtermism-motivated people to move towards career paths that are less good than those they would've gone towards, burning bridges (e.g., with non-Western governments), or creating reputational risks (seeming naive, annoying, etc.)
- See also Cotton-Barratt's statements about "Safeguarding against naive utilitarianism"
- Even longtermist interventions that are typically very positive in expectation could be very negative in expectation if done in terrible ways by very ill-suited people
- Neartermist interventions will also have some longtermist implications, and I'd guess that usually there's a nontrivial chance that they have extremely good longtermist implications
- E.g., the interventions probably have a nontrivial chance of meaningfully increasing or decreasing economic growth, technological progress, or moral circle expansion, which in turn is plausibly very good or very bad from a longtermist perspective
- Related to the above point: In some cases, people might actually do very similar tasks whether they prioritise one cause area or another (specific) cause area
- E.g., I think work towards moral circle expansion is plausibly a top priority from a longtermist perspective, and working on factory farming is plausibly a top priority way to advance that goal (though I think both claims are unlikely to be true). And I think that moral circle expansion and factory farming are also plausibly top priorities from a neartermist perspective.
- Greaves, MacAskill, and Ord might be partly presenting a line of argument that they give substantial credence to, without constantly caveating it for epistemic humility in the way that they might if actually making a decision
A question I think is useful is "Let's say we have a random person of the kind who might be inclined towards EA. Let's say we could assign them to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a longtermist perspective, or to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a neartermist animal welfare perspective. We have no other info about this person, which intervention they'd work on, or how they'd approach it. On your all-things-considered moral and empirical views - not just your independent impressions - is it >1,000,000 times as good to assign this person to the randomly chosen longtermist intervention rather than the randomly chosen neartermist animal welfare intervention?"
I'm 95+% confident they'd say "No" (at least if I made salient to them the above points and ensured they understood what I meant by the question).
(I think expected impact differing by factors of 10 or 100s is a lot more plausible. And I think larger differences in expected impact are more plausible once we fill in more details about a specific situation, like a person's personal fit and what specific intervention they're considering. But learning about IBCs doesn't inform us on those details.)
I'd be interested in your thoughts on this take (including whether you think I'm just sort-of talking past your point, or that I really really should just read Greaves and MacAskill's paper!).
↑ comment by jackmalde ·
2021-01-30T10:58:10.223Z
Thanks for this.
Greaves and MacAskill don't cover concerns about potential downsides of longtermist interventions in their paper. I think they implicitly make a few assumptions, such as that someone pursuing the interventions they mention would actually do them thoughtfully and carefully. I do agree that one can probably go into, say, DeepMind without really knowing their stuff and end up doing astronomical harm.
Overall I think your general point is fair. When it comes to allocating a specific person to a cause area, the difference in expected value across cause areas probably isn't as large as I originally thought, for example due to considerations such as personal fit. Generally I think your comments have updated me away from my original claim that everyone should know all IBCs, but I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Slight aside about the Greaves and MacAskill paper - I personally found it a very useful paper that helped me understand the longtermism claim in a slightly more formal way than say an 80K blog post. It's quite an accessible paper. I also found the (somewhat limited) discussion about the potential robustness of longtermism to different views very interesting. I'm sure Greaves and MacAskill will be strengthening that argument in the future. So overall I would recommend giving it a read!
↑ comment by MichaelA ·
2021-01-30T12:20:08.399Z
I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Yeah, I'm definitely on the same page on those points!
So overall I would recommend giving it a read!
Ok, this has made it more likely that I'll make time for reading the paper in the coming weeks. Thanks :)
comment by Bella_Forristal ·
2021-01-29T09:34:52.203Z
Thanks for this post; really interesting and seems like it could be really important.
"I think this classification allows for there to be potentially astronomical differences in value between the cause areas. There probably won’t be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)."
While I share this intuition (i.e. I can think of some informal reasons why I think this would be correct), I'm not sure it's an obvious enough claim to not need argumentation. Also, I agree with Michael that the importance of IBCs depends in part on this claim.
For that reason, I'd be really interested to see you make explicit your reasoning for saying this, if you can.
↑ comment by jackmalde ·
2021-01-29T20:49:40.468Z
Hey Bella, I'm glad to hear you found it interesting!
I agree that that claim needs more argumentation, and I have replied to one of Michael's comments with my thoughts. On thinking about all of this a bit more my view has actually evolved somewhat (as explained in my comments to Michael).
comment by MichaelA ·
2021-01-29T07:16:24.667Z
A few random thoughts and links
- Something that might be worth considering as part of the next steps is attempting to evaluate the impact of any educational materials or efforts that are produced
- In terms of metrics like whether people form accurate understandings, whether they retain them later, whether they report applying them in actual decisions, and perhaps how this tends to affect people's later cause priorities, careers, donations, etc.
- On "Objection #4: Some people need to know all this, but not everyone", readers may find posts tagged Epistemic humility [? · GW] interesting.
- This post seems to dovetail somewhat with Michelle Hutchinson's recent suggestion that "Supporting teaching of effective altruism at universities" might be an important gap in the EA community.
- My post (for Convergence Analysis) on Crucial questions for longtermists could be seen as doing something like the equivalent of collecting IBCs, organising them into categories and hierarchies, and collecting existing resources for each of them - but in that case, for prioritising within longtermism, rather than between causes. It could be interesting to consider how a similar methodology and output type to the one used for that post might (or might not) be useful for potential further work to identify and improve understanding of IBCs.
- Incidentally, in that post, I wrote "One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption." It's cool to see that your post proposes something sort-of similar!
- The post Clarifying some key hypotheses in AI alignment also does something sort-of similar, and I really like the approach that was taken there. So it could also be interesting to consider how that approach might be used for an IBC-related project.
comment by MichaelA ·
2021-01-29T04:58:04.835Z
Thanks for this post! I think it's useful and clearly written.
I'll split my thoughts into a few comments. (Some will partially repeat stuff we discussed in relation to your earlier draft.)
My main thoughts on this post's key ideas:
- I think the concept of an IBC is a useful one
- It of course overlaps somewhat with the concept of a crucial consideration (and also the only-used-by-me concept of a crucial question). But I think IBCs are a subset of crucial considerations that are worth having a specific term for.
- I think you've identified a good initial set of candidate IBCs
- I agree that "to make the best choice on preferred cause area, [all] EAs should have at least a high-level understanding of various ‘Important Between-Cause Considerations’ (IBCs)" (emphasis added)
- But (as you acknowledge) making the best choice on preferred cause areas isn't our only or ultimate goal; we also have to at some point make decisions and take actions within a cause area. Given that, I'm not sure I agree it's worth every EA spending the time required to have even a high-level understanding of all IBCs
- This is even if we make it easier to gain such an understanding
- And this is just because of cases in which a person's position on one IBC indicates that a few specific IBCs are very unlikely to change their views (like in your example about population ethics and the level of extinction risk)
- Basically, I hold a tentative version of objection 1
- I'll expand on my reasoning for this in another comment
- But I think it's plausible that it's worth every EA spending the time required to have a high-level understanding of all IBCs. And I'm confident that it'd at least be worth increasing the portion of EAs doing that or something close to that (at least if we assume we find ways to reduce how much time that requires).
- And I also very much endorse the idea that it'd be valuable to make gaining a high-level understanding of IBCs easier. And I like your ideas for that.
- I think the primary goal is to make it less time-consuming. But it'd also be good to make it less effortful and more pleasant (including for people who aren't nerdy philosopher/econ/math types), and to make it so that people's understandings are more accurate, nuanced, and durable (i.e., to make misconceptions and forgetting less likely, and later valid applications of the ideas more likely).
- For people who want to follow up on this (including but not limited to you), I think (some) posts tagged EA Education, (some) posts tagged EA Messaging, and the EA Virtual Programs are worth checking out
(Footnote: I don't mean this as an insult. I'm definitely nerdy myself, and am at least sort-of a philosopher/econ/math type.)
comment by BrianTan ·
2021-01-30T05:57:38.916Z
Thanks for this post! I found it valuable and I agree it's important for people to learn and think about IBCs for their cause prioritization, but I'm just unsure how much time one should spend doing so. I also agree that more educational and easy-to-consume content on the concepts above would be helpful. I would particularly like to see more digestible EA video or audio content on population ethics, cluelessness, and s-risks, in that order, as those are the ones on your list that are still hard for me to grasp or explain.
I'm someone who is still unsure on which causes to prioritize for myself and for advocating to others. My current strategy is to just use worldview diversification, and try to contribute to the main causes you mentioned through my community building. I also want to learn more about these different causes and a few of the IBCs you put that I'm not yet familiar with.
I'll note that I think this list and article are missing the important IBC of worldview diversification. I think a lot of people, especially community builders maybe, end up just hedging their bets and trying to contribute to or advocate for multiple causes.
Also, I'm a bit surprised you put movement building as separate from the other causes. Even those of us who do movement building (like me) still think a lot about which EA causes to do more advocacy/movement building in than others. I don't think movement building is really a cause, but more of an intervention to improve one of the other causes/worldviews.
comment by evelynciara ·
2021-03-04T21:50:45.702Z
I really appreciate this post!
Speaking for myself, objections (2) and (3) don't hold for me - I'm not familiar with all of the IBCs, and I've changed my mind on the most pressing problems multiple times in the past year. I also like the idea of making a flowchart for this, perhaps one that the community can edit so it can be kept up to date with the latest knowledge about EA causes without a single person or org being the weakest link.
One suggestion: I think equity and human rights are potential IBCs that a lot of people care about, so I think it's worth exploring their implications for cause selection.
comment by rootpi ·
2021-02-06T11:42:25.159Z
I have considered myself 'EA-adjacent' for the past few years - very sympathetic and somewhat knowledgeable, although not fully invested for various reasons. However, I think I was already broadly aware of all the IBCs you listed. So perhaps I am more invested than I thought! But my preferred interpretation is that most people who are sufficiently interested in EA, and also sufficiently open to considering various causes, will have already found most of these or will do so relatively quickly on their own. Obviously I could be wrong, and if you do a survey we will find out whether at least the first half is true.
[Note that I fully agree with you that it is very beneficial for people to be aware of these!]
↑ comment by jackmalde ·
2021-02-06T13:22:58.773Z
most people who are sufficiently interested in EA and also sufficiently open to considering various causes will have already found most of these or will do so relatively quickly on their own.
This is definitely a possibility and I certainly don't have enough evidence to claim that this isn't the case. A survey to determine existing knowledge could be interesting, and I may get in touch with Rethink Priorities to see if they think such a survey might be useful.
Having said that, I have thought about all this a bit more since originally posting and I generally agree that doing more to spread EA to those who don't know about it is more important, on the current margin, than doing more to spread important ideas within the existing EA community. I expect this balance to flip at some point, but that may not be for a while.
comment by BrianTan ·
2021-01-30T06:01:18.782Z
Also FWIW, I think that as long as you're able to get feedback or help from a few knowledgeable EAs in crafting the content before you release it, there's not a lot of harm you could do in helping people learn about these IBCs. And I assume you would be able to get this feedback. So I think you might be a bit more worried about doing harm than you should be!