post by SebastianSchmidt
During an in-depth conversation on positive impact, I learned that I have unnuanced and overconfident tendencies about what positive impact means. In particular, I had baked naive EA thinking into the definition of impact without adding the appropriate level of nuance to the mixture. In the following, I'll share an embarrassingly unfiltered account of my thinking process, as I worry the same tendencies may be true for others as well.
As an EA I think, talk, and care a lot about having positive impact. But what is positive impact really, and how do I assess it when the rubber hits the road?
Over the past couple of weeks, I've been diving into another round of career planning. This has been associated with a lot of distress. In particular, I've been concerned with having a lot of positive impact. Being an exceptionally impactful person. If I'm honest, my concerns and goals have been revolving around the ambition of being one of the most impactful people in the world. But what does that even mean?
For the past five years, I've felt that I've become increasingly knowledgeable - almost an expert - on impact as I've been thinking about my career plan, organizing local groups, and processing EA content. Frankly, in most conversations with "non-EAs" I've felt a sense of superiority in my understanding of impact. Tacitly assuming that my non-EA friends and colleagues think they know what impact is without really knowing it.
(The following is an embarrassingly truthful account of a conversation I had with one of my friends. I realize that it's centered around people, which may be foreign to some. However, I expect that doing the same for organizations or actions may be equally valuable.)
During a conversation with one of the most thoughtful people I know (who happens not to identify as an EA), I was forced to scrutinize this superiority by interrogating how well I understand impact. He asked me, who are some of the most impactful people in the world? I treated this as a brainstorming session and readily listed the first people who came to mind:
- Will MacAskill
- Kellie Liket (founder of Effective Giving NL where I interned a while back)
- Brian Johnson (founder of optimize.me and an extraordinary teacher)
- Toby Ord
- Joey Savoie (Charity Entrepreneurship)
- Peter Singer
- Nelson Mandela
- Stanislav Petrov
He followed up by asking, what about someone like Bill Gates?
I responded, well, I don't know how important his focus areas actually are. He seems to be very focused on areas that aren't that neglected, and it may be that they aren't that important for the long-term future.
Puzzled by this list and my hesitancy around someone like Bill Gates, he asked me, how do you define impact?
I answered that I believe that impact boils down to positively altering consciousness. That is, positively influencing sentient beings.
But how do you actually operationalize that - how would you compare the impact of Will MacAskill to that of Gandhi, he asked.
At this point, I was feeling some discomfort. Part of me just wanted to change the topic. I'm tired and exhausted, I thought to myself. I slowly realized that my understanding of this topic might not be perfect, which was painful.
However, I started to flesh out my answer to his question.
Will started Giving What We Can, which has raised ~$1.5 billion in pledged donations. However, I believe it was actually Toby Ord's idea, and obviously, Will didn't do it alone - many good people were working on it.
Will also started 80,000 Hours, which has established some key priority paths, guided millions of readers, and coached hundreds of people from top universities. The team at 80,000 Hours has estimated that they enabled a lot of significant plan changes. However, I don't fully understand what this means, or how certain we are that those changes were good. Also, as far as I know, they mainly do one-off sessions, so I wonder what the lasting effects are and what people would have done counterfactually if they had pursued things based more on personal fit and passion outside of existing cause areas.
Will also gave a TED talk with more than one million views, and co-founded the EA movement, which now has more than 100 local groups worldwide. Finally, he also has conversations with major philanthropists and has helped guide their donations. Overall, that's very impactful.
As for Gandhi, he helped a massive nation to independence and became a symbol of peace and non-violence, but I can't add much more than that. (Clearly, I didn't have a particularly elaborate understanding of this).
My friend then added, what about the fact that Gandhi led a revolution with peace and non-violence? What about that he went on to become a symbol of peace and directly inspired other world leaders like Nelson Mandela?
At this point, I fell silent, slowly coming to terms with what had just unfolded and seeing the cognitive dissonance. Had I been under the illusion of understanding "impact" - a concept so central to my achievements, aspirations, and identity? With discomfort and embarrassment, I acknowledged that impact is probably a lot more complicated than what I had been thinking and that it'd be wise of me to be more humble about assessing the impact associated with specific people, organizations, and cause areas. At this point, we went a bit meta on the list of people and concluded the following:
- I can only list the people that I know something about. Right now, it's heavily biased towards EA folks (five out of the eight people I listed above are EAs) because that's the community that I'm part of.
- My impact assessment will often be based on whether it resonates with me, which is related to how many times I've seen a person or an organization presented as being impactful. As of now, EA labels, language, and reasoning resonate deeply with me. That isn't inherently problematic because there are a lot of solid principles and values embedded in that. However, it becomes actively harmful for my thinking if I immediately discount the impact of others just because they don't have EA labels on them. It seemed as if I had baked EA thinking into the definition of impact without adding the appropriate level of nuance to the mixture.
- I may be overly focused on the magnitude of the impact rather than the robustness of the sign of it.
- Impact attribution is much more difficult than I had somehow managed to convince myself of. Should Peter Singer be credited with a big chunk of Will's impact because he inspired him? How much of EA's impact can be attributed to Dustin Moskovitz and the wealth he has brought EA via Open Phil? What about the art dealer who provided the early funding to CEA and 80,000 Hours (sadly, I don't remember his name, but he was honored at EAG London 2018)?
EAs have made some solid contributions by asking unique questions, spreading important memes, raising donations, and changing thousands of people's careers based on impartial altruistic values and truth-seeking principles. However, I've had (and probably still have) unnuanced and overconfident tendencies about the very things that I care about, and I worry that may be true for others as well. This probing conversation on a fundamental concept was very useful for me, and perhaps such conversations can be useful for others as well.
If you're up for it, try to sit down with someone who can facilitate your thinking and ask yourself fundamental questions such as: Who do you find to be extraordinarily impactful and why? What organizations are impactful and why?
Comments sorted by top scores.
comment by Denis Drescher (Telofy) ·
2020-11-22T21:41:44.019Z
Sorry if you’re well aware of these, but points 3 and 4 sound like the following topics may be interesting for you: For 3, cluelessness and the recent 80k interview with Hilary Greaves that touches on the topic. For 4, Shapley values or cooperative game theory in general. You can find more discussions of it on the EA Forum (e.g., by Nuno), and I also have a post on it, but it’s a couple of years old, so I don’t know anymore if it’s worth your time to read. ^.^'
↑ comment by SebastianSchmidt ·
2020-11-24T20:06:39.992Z
Thanks for the resources!
3. I'm aware of the cluelessness but I don't think that we as a community act as if we're clueless. At least, we prioritize a relatively narrow set of paths (e.g. 80K's priority paths) compared to all of the possible paths out there.
4. Very interesting. Clearly, others have thought much more about the complexity of this issue than I have. Nuno's post was rather insightful, and the examples used made Shapley values seem more intuitive to me than counterfactual impact. However, the discussions in the comments made it less obvious that Shapley values are what we should use going forward. Do you use Shapley values when thinking about your personal impact contribution?
↑ comment by Denis Drescher (Telofy) ·
2020-11-24T21:48:47.510Z
Re 3: Yes and no. ^.^ I’m currently working on something whose robustness I have only very weak evidence for. I made a note to think about it, interview some people, and maybe write a post to ask for further input, but then I started working on it before I did any of these things. It’s like an optimal stopping problem. I’ll need to remedy that before my sunk cost starts to bias me too much… I suppose I’m not the only one in this situation. But then again, I have friends who’ve thought for many years mostly just about the robustness of various approaches to their problem.
Hilary Greaves doesn’t seem to be so sure that robustness gets us very far, but the example she gives is unlike the situations that I usually find myself in.
Arden Koehler: Do you think that’s an appropriate reaction to these cluelessness worries or does that seem like a misguided reaction?
Hilary Greaves: Yeah, I don’t know. It’s definitely an interesting reaction. I mean, it feels like this is going to be another case where the discussion is going to go something like, “Well, I’ve got one intervention that might be really, really, really good, but there’s an awful lot of uncertainty about it. It might just not work out at all. I’ve got another thing that’s more robustly good, and now how do we trade off the maybe smaller probability or very speculative possibility of a really good thing against a more robustly good thing that’s a bit more modest?”
Hilary Greaves: And then this feels like a conversation we’ve had many times over; is what we’re doing just something structurally, like expected utility theory, where it just depends on the numbers, or is there some more principled reason for discarding the extremely speculative things?
Arden Koehler: And you don’t think cluelessness adds anything to that conversation or pushes in favor of the less speculative thing?
Hilary Greaves: I think it might do. So again, it’s really unclear how to model cluelessness, and it’s plausible different models of it would say really different things about this kind of issue. So it feels to me just like a case where I would need to do a lot more thinking and modeling, and I wouldn’t be able to predict in advance how it’s all going to pan out. But I do think it’s a bit tempting to say too quickly, “Oh yeah, obviously cluelessness is going to favor more robust things.” I find it very non-obvious. Plausible, but very non-obvious.
She has thought about this a lot more than I have, so my objection probably doesn’t make sense, but the situation I find myself in is usually different from the one she describes in two ways: (1) There is no one really good but not robust intervention but rather everything is super murky (even whether the interventions have positive EV), and I can usually think of a dozen ways any particular intervention can backfire; and (2) this backfiring doesn’t mean that we have no impact but that we have enormous negative impact. In the midst of this murkiness, the very few interventions that seem much less murky than others – like priorities research or encouraging moral cooperation – stand out quite noticeably.
Re 4: I’ve so far only seen Shapley values as a way of attributing impact, something that seems relevant for impact certificates, thanking the right people, and noticing some relevant differences between situations, but by and large only for niche applications and none that are relevant for me at the moment. Nuno might disagree with that.
I usually ask myself not what impact I would have by doing something but which of my available actions will determine the world history with the maximal value. So I don’t break this down to my person at all. Doing so seems to me like a lot of wasted overhead. (And I don’t currently understand how to apply Shapley values to infinite sets of cooperators, and I don’t quite know who I am given that there are many people who are like me to various degrees.) But maybe using Shapley values or some other, similar algorithm would just make that reasoning a lot more principled and reliable. It’s quite possible.
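To make the attribution difference concrete, here is a minimal sketch of how Shapley values differ from naive counterfactual attribution. The two-player "funder"/"founder" setup and the payoff function are purely hypothetical, just for illustration:

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    """Average each player's marginal contribution over all join orders."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(p)
            after = coalition_value(frozenset(coalition))
            shapley[p] += (after - before) / len(orders)
    return shapley

# Hypothetical example: a funder and a founder each achieve nothing alone,
# but together they enable a project worth 100 units of impact.
def v(coalition):
    return 100.0 if coalition == frozenset({"funder", "founder"}) else 0.0

print(shapley_values(["funder", "founder"], v))
# {'funder': 50.0, 'founder': 50.0}
```

Naive counterfactual reasoning would credit each party with the full 100 (the project vanishes without either of them), double-counting the impact; Shapley values split it so the credits sum to the coalition's total value.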
comment by NunoSempere ·
2020-11-21T22:42:47.111Z
The Giving Pledge, which Bill Gates and Warren Buffett started, allegedly has pledges worth $1.2 trillion.
↑ comment by SebastianSchmidt ·
2020-11-24T20:07:48.887Z
Yes, that's why I find my initial reasoning unnuanced. Upon slightly further reflection, it seems obvious that his wealth and dedication have enabled a lot of good initiatives and I frequently see initiatives crediting Gates.
However, how would you break that impact down further? Some quick approaches that I can think of:
i) Simply use the heuristic of money pledged ~ impact ("Wow, that's a lot of money. That has to be super impactful.")
ii) Look into the causes supported in order to adjust the donation figure by the impact of the actual causes supported.
iii) Go with the heuristic that if we have repeatedly heard someone (or something) being credited for their impact, then they must be very impactful.
comment by jserv ·
2020-11-25T21:11:45.682Z
I would suggest that the EA definition of "impact" has been developed to address a certain set of problems with measurable outcomes, making it useful but incredibly narrow.
My personal belief is that there is a lot of scope for other forms of impact that are mostly or entirely distinct from Effective Altruism. I know it's been written that effective altruists love systemic change, and indeed many people affiliated with EA pursue such change, but it's not the only (or, in my opinion, even the primary) mechanism by which systemic change can/will occur.
Gandhi and Mandela were political actors who achieved far-reaching impact by dint of their positioning within a particular institution or system, and then radically opposing it on principle. Many of their actions fall very far outside the framework of most EA organisations for reasons I don't think I need to go into very much. A brief overview of either of their biographies with the question "would an EA philosophy have advised this decision?" illustrates this point.
Coming at it from the other direction, I see EA as a philosophy dedicated to applying certain rationalist ideas and approaches to specific moral and ethical problems. As an institution, though, what is the structure of CEA and its offshoots? How does this affect the questions it attempts to address, and what are the assumptions of these questions?
80k provides the easiest example of what I mean; it's very clearly aimed at university graduates, overwhelmingly from wealthy countries, with enough material, cultural, and intellectual resources to achieve some measure of change through an impactful career. This is excellent, but a rather specific way of achieving impact, and it operates with a number of prerequisites.
There is nothing inherently wrong with EA's existence, and I fully support the basic idea of rationally figuring out how to do the most good if you are coming from a particular starting position. I also think that EA currently cannot begin to address the kind of questions that motivate actors to become extraordinarily impactful in other ways, like the standout non-EAs on your list.
To be clear: one form of impact doesn't preclude pursuing another. If I were to give advice it would be to pursue impact on multiple levels, 'EA' and not, quantifiable and not.