I'm maximizing good, not my contribution to good

post by oh54321 · 2022-05-23T23:26:40.398Z · EA · GW · 7 comments

This is probably not a new or original idea, but it seems worth explicitly pointing out.

A lot of times at EA events I've heard something along the lines of "as an EA, I'm trying to maximize the amount of good I accomplish."

But this isn't quite what we should (in theory, assuming I don't care about my non-EA goals, which I totally do) be doing. Instead, as an EA I'm trying to maximize how awesome the world over time will be, not just the "awesomeness" that can be attributed to me. 

More concretely, doing things like research, entrepreneurship, etc. are all shiny things that demonstrate that I've done lots of good. On the other hand, for AI safety field building, spreading EA in areas where EA isn't really present, etc., the impact can't always be traced back directly to me, but these also seem incredibly important and useful. I'm surprised at how many more safety researchers than field builders there are, and I'm guessing one reason is that it's difficult to pinpoint how much impact you've personally made as a field builder.

Most roles would probably also look slightly different if people focused less on their personal contribution to the world. For example, it might make sense for a researcher to hand off a cool project idea to someone with free capacity who they know will be better at the project.

 I think it's worth the effort to consciously try to feel tons of happiness from seeing altruism being accomplished, and not just from accomplishing altruism. 


Comments sorted by top scores.

comment by david_reinstein · 2022-05-24T01:01:49.699Z · EA(p) · GW(p)

But ultimately, I think you are basically saying you are trying to maximise your counterfactual impact, no? Not the impact that can be traced to you, but the impact that you have, through all channels.

Replies from: Sophia, Bluefalcon, oh54321
comment by Sophia · 2022-05-24T11:00:27.201Z · EA(p) · GW(p)

In the context of this post, I read "my contribution to good" to mean "good done that is clearly attributed to me" rather than "my counterfactual impact".

 Though I'd also usually think "my contribution"  is my "counterfactual impact", I still think this reframing ("I am maximizing how much good I do" to "I am maximizing how good the world is") might be instrumentally very useful for feeling good about more indirect ways of having a counterfactual impact.

comment by Vilfredo's Ghost (Bluefalcon) · 2022-05-24T14:12:46.170Z · EA(p) · GW(p)

Yeah; it seems obvious to me that "the good I accomplish" includes my contribution to enabling others to do good. I'm open to seeing evidence, but I suspect the reason field building, movement building, etc. aren't done as much as OP would like has nothing to do with this kind of confusion. In fact, I think it's questionable how much you can do at the meta level if your direct work doesn't measure up. People show up when they see cool stuff being done, not so much when they hear you talk about the cool stuff that someone else should do. Sputnik did a great deal more for science and engineering education than running a bunch of commercials about the importance of science would have.

Replies from: david_reinstein
comment by david_reinstein · 2022-05-24T16:50:34.373Z · EA(p) · GW(p)

> not so much when they hear you talk about the cool stuff that someone else should do.

That feels a bit unfair and non-steelmanny to me. There are other ways of motivating and helping others than just saying "wouldn't it be great if someone solved the alignment problem?"

Such as:

  • Encouraging people who are working on the problem
  • Providing inputs and support to others working on important problems
  • Offering career advice
  • Helping communicate and explain the work that is being done, in turn helping people coordinate
comment by oh54321 · 2022-05-24T13:08:17.028Z · EA(p) · GW(p)

Yeah, this makes sense. That being said, I'm guessing that while some people are in theory trying to maximize the "good" they accomplish, in practice it's easy to forget about options whose impact isn't easily traceable. My point was also that it's worth explicitly putting in effort to look for these kinds of options.

By options, I mean something like giving a research project to a more capable person. I'm guessing some people wouldn't realize that this is a thing they can do.

comment by Joseph Lemien (jlemien) · 2022-05-24T06:46:03.931Z · EA(p) · GW(p)

I like that you shared this thought. It parallels my understanding of servant leadership, and the humility of it appeals to me. I do think that all of us (myself included) can easily get caught up in the ego of accomplishing things, of wanting others to respect us, of wanting to be admired. But I agree with you: if our goal is to make the world a better place, it doesn't really matter whether I make it a better place or whether I spend my time allowing/enabling/supporting someone else to make it a better place.


Instead of "I'm trying to maximize the amount of good I accomplish" I'd love to see more people adopt the mindset of "I'm trying to maximize the amount of good accomplished."

comment by Michael_Wiebe · 2022-05-24T02:26:10.862Z · EA(p) · GW(p)

This is also captured by the 'hits-based' framing, where many people will try and fail, but some will have huge successes.