How can we best coordinate as a community?

post by Benjamin_Todd · 2017-07-07T04:45:55.619Z · score: 11 (10 votes) · EA · GW · Legacy · 10 comments

I recently gave a talk at EAG:Boston on this topic.

The video is now up.

Below is the blurb, and a rough script (it'll be easier to understand if you watch the video with boosted speed).

All these ideas are pretty speculative, and we're working on some more in-depth articles on this topic, so I'd be keen to get feedback.

* * *

A common objection to effective altruism is that it encourages an overly “individual” way of thinking, which reduces our impact. Ben will argue that, at least in a sense, that’s true.

When you’re part of a community, the question of which careers, charities and actions are highest-impact changes. An overly narrow, individual analysis, which doesn’t take account of how the rest of the community will respond to your actions, can lead you to have less impact than you otherwise could. Ben will suggest some better options and rules of thumb for working together.

* * *

Introduction - why work together?


Why the EA community?


How can we work together better?

1) How to choose between our options





2) The new options that become available

* * *
Comments sorted by top scores.

comment by JanBrauner · 2017-07-13T18:20:44.842Z · score: 6 (6 votes) · EA(p) · GW(p)

With regards to "following your comparative advantage":

Key statement: while "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, and maybe has some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but retraining would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective area of expertise. (Let's assume they could be motivated to do so.)

However, from a personal perspective, let's look at Ann's situation. In reality, of course, there will rarely be a Ben to mirror Ann who is also considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y."

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm."

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.
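The dilemma can be made concrete with a toy payoff model (all numbers invented for illustration, not taken from the talk or the comment): suppose each person produces 1.0 units of work per year in the cause they are trained for and 0.5 after retraining, and each weights their preferred cause 1000x the other.

```python
# Toy payoff model of Ann and Ben's retraining dilemma.
# All numbers are hypothetical: output is 1.0 in the cause you are
# trained for, 0.5 after retraining; your preferred cause is weighted
# 1000x the other. Ann is trained for X but prefers Y; Ben mirrors her.
RETRAIN_PENALTY = 0.5

def output(trained_for, works_on):
    """Yearly output: full if working in your area of expertise, reduced otherwise."""
    return 1.0 if trained_for == works_on else RETRAIN_PENALTY

def ann_utility(ann_works_on, ben_works_on):
    """Ann's valuation of the pair's combined output (she values Y at 1000x X)."""
    weights = {"X": 1.0, "Y": 1000.0}
    return (weights[ann_works_on] * output("X", ann_works_on)
            + weights[ben_works_on] * output("Y", ben_works_on))

for ann in ("X", "Y"):
    for ben in ("Y", "X"):
        print(f"Ann works on {ann}, Ben works on {ben}: "
              f"Ann's utility = {ann_utility(ann, ben)}")

# Switching to Y is dominant for Ann, whatever Ben does...
assert ann_utility("Y", "Y") > ann_utility("X", "Y")   # 1500.0 > 1001.0
assert ann_utility("Y", "X") > ann_utility("X", "X")   # 500.5 > 1.5
# ...yet mutual switching is worse for her than mutual staying
# (and, by symmetry, worse for Ben too): a one-shot prisoner's dilemma.
assert ann_utility("X", "Y") > ann_utility("Y", "X")   # 1001.0 > 500.5
```

Under these made-up numbers, defecting (retraining) is each person's dominant strategy, yet both defecting leaves both worse off by their own lights than both staying, which is exactly the structure described above.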

I see one consideration in favour of Ann continuing to work towards cause X: if Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y, until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it was best for her to continue contributing to X. However, I think this consideration is fairly uncertain, and I would not give it much weight in my decision process.

So it seems that

  • it clearly makes sense (for CEA, 80,000 Hours, ...) to promote such a norm
  • it makes much less sense for an individual to follow the norm, especially if that individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency.

All in all, the situation seems pretty weird. And there does not seem to be a consensus amongst EAs on how to deal with this. A real world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research, others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

PPS: I am new to this forum and need 5 karma to be able to post threads, so feel free to upvote.

comment by Benjamin_Todd · 2017-07-14T01:03:01.393Z · score: 1 (1 votes) · EA(p) · GW(p)

Hi there,

I think you're basically right: people should care about comparative advantage to the degree that the community is responsive to their choices and they're value-aligned with typical people in the community. If no one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.

I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit and comparative advantage, putting more or less emphasis on comparative advantage relative to personal fit depending on certain conditions.

comment by MichaelPlant · 2017-07-08T21:02:57.901Z · score: 1 (1 votes) · EA(p) · GW(p)

Very much enjoyed this. Good to see the thinking developing.

My only comment is on simple replaceability. I think you're right to say this is too simple in an EA context, where someone's choice could cause a cascade, or the work wouldn't have got done anyway.

Do you think simple replaceability doesn't apply outside the EA world? For example, person X wants to be a doctor because they think they'll do good. If they take a place at med school, should we expect that this 'frees up' the person who doesn't get the place to go and do something else instead? My assumption is that the borderline medical candidate is probably not that committed to doing the most good anyway.

To push the point in a familiar case: assume I'm offered a place at an investment bank and was going to E2G, but I decide to do something more impactful, like work at an EA org. It's unlikely that the person who gets my job and salary instead would be donating to good causes.

If you think replaceability is sometimes true and other times not, it would be really helpful to specify when. My guess is that motivation and ability to be an EA play the biggest role.

comment by Benjamin_Todd · 2017-07-09T03:42:20.756Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Michael,

I'm writing a much more detailed piece on replaceability.

But in short, simple replaceability could still be wrong in that the doctor wouldn't be fully replaced. In general, a greater supply of doctors should mean that more doctors get hired, even if the increase is less than one-for-one.
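A minimal sketch of this partial-replacement point (the "crowding factor" below is a made-up parameter for illustration, not an estimate from the post or from 80,000 Hours):

```python
# Hypothetical illustration of partial replaceability (numbers invented).
# If taking a med-school place adds one trained doctor to the supply,
# the number actually hired may rise by some fraction between 0
# (simple replaceability: you fully displace someone) and 1
# (no displacement at all).
def extra_doctors_hired(extra_supply, crowding_factor):
    """crowding_factor = share of the added supply that merely displaces
    someone else; 1.0 recovers 'simple replaceability', 0.0 no displacement."""
    return extra_supply * (1.0 - crowding_factor)

print(extra_doctors_hired(1.0, 0.0))   # no displacement: one full extra doctor
print(extra_doctors_hired(1.0, 1.0))   # simple replaceability: zero extra
print(extra_doctors_hired(1.0, 0.7))   # partial: roughly 0.3 extra doctors
```

The point in the reply above is that for most real labour markets the crowding factor is plausibly strictly between 0 and 1, so neither extreme gives the right answer.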

But yes you're right that if the person you'd replace isn't value-aligned with you, then the displacement effects seem much less significant, and can probably often be ignored.

> If you think replaceability is sometimes true and other times not, it would be really helpful to specify that. My guess is motivation and ability to be an EA play the big role.

We did state this in our most recent piece of writing about it, from 2015. It's pretty complex to specify the exact conditions under which it does and doesn't matter, and I'm still working on that.

comment by Denise_Melchin · 2017-07-25T13:31:47.193Z · score: 0 (0 votes) · EA(p) · GW(p)

That's a great talk, thank you for it. This is why I've started to be wary of people being encouraged to figure out what "their cause area" is.

Apart from the fact that they're likely to change their mind within a few years anyway, it's more valuable for the world for them to focus on what they're good at even if it's not in their preferred cause area. Cooperation between cause areas is important.

(Also, "figuring out what the best cause areas are" might be something that should also be done by people whose comparative advantage it is.)

comment by SoerenMind · 2017-07-25T11:27:33.850Z · score: 0 (0 votes) · EA(p) · GW(p)

Great talk!

Given the value that various blogs and other online discussions have provided to the EA community, I'm a bit surprised by the relative absence of 'advancing the state of community knowledge by writing etc.' in 80k's advice. In fact, I've found that the advice to build lots of career capital and fill every gap with an internship has discouraged me from such activities in the past.

comment by Nekoinentr · 2017-07-07T23:36:47.719Z · score: 0 (0 votes) · EA(p) · GW(p)

> One reason for this is that, because there are donors with money on the sidelines, if the organisations were able to find someone with a good level of fit, they could fundraise enough money to pay for their salaries.

Can you (very roughly) quantify to what extent this is the case for EA organisations? (I imagine they will vary as to how donor-rich vs. potential-hire-rich they are, so some idea of the spread would be helpful.)

comment by Benjamin_Todd · 2017-07-08T06:25:48.165Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey, the most relevant data we have on this is here:

We hope to do a more detailed survey this summer.

In terms of quantifying the amount of money available, we also have some survey results that I'm hoping to publish. But a key recent development is that now the Open Philanthropy Project is funding charities in the community.