"Big tent" effective altruism is very important (particularly right now)

post by Luke Freeman (lukefreeman) · 2022-05-20T03:39:10.749Z · EA · GW · 79 comments

[Note: "Big tent"[1] refers to a group that encourages "a broad spectrum of views among its members". This is not a post arguing for "fast growth" of highly-engaged EAs (HEAs), but rather a recommendation that, as we inevitably get more exposure, we try to represent and cultivate our diversity while ensuring we present EA as a question.]


This August, when Will MacAskill launches What We Owe the Future, we will see a spike of interest [EA · GW] in longtermism and effective altruism more broadly. People will form their first impressions – and these will be hard to shake.

After hearing of these ideas for the first time, they will be wondering things like:

If we're lucky, they'll investigate these questions. The answers they get matter (and so does their experience finding those answers).

I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or a movement of people who appear united by a shared commitment to using reason and evidence to do the most good we can.

In the former case, I expect us to become a much smaller group – one whose focus is easier to coordinate, but also one that's more easily dismissed. People might see us as a bunch of nerds[2] who have read too many philosophy papers[3] and who are out of touch with the real world.

In the latter case, I'd expect us to become a much bigger group. I'll admit that it's also a group that's harder to organise (people are coming at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I'd bet on the latter option.

I don't believe we can – nor should – simply tinker on the margins forever nor try to act as a "shadowy cabal". As we grow, we will start pushing for bigger and more significant changes, and people will notice. We've already seen this with the increased media coverage of things like political campaigns[4] and prominent people that are seen to be EA-adjacent[5].

A lot of these first impressions we won't be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our "shop fronts" (e.g. prominent organisations, local and university groups, conferences etc.).

I recently had a pretty disheartening exchange with a new GWWC member who'd started to help run a local group and felt "discouraged and embarrassed" at an EAGx conference. They left feeling like they weren't earning enough to be "earning to give" and that they didn't belong in the community if they weren't doing direct work (or didn't have an immediate plan to drop everything and change). They said this "poisoned" their interest in EA.

Experiences like this aren't always easy to prevent, but it's worth trying.

At Giving What We Can, we are aware that we are one of these "shop fronts". So we're currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples:

These are just small ways to make effective altruism more accessible and appealing to a wider audience.

Even if we were just trying to reach a small number of highly-skilled individuals, we don't want to make life difficult for them by having effective altruism (or longtermism) seem too weird to their family or friends (people are less likely to take actions when they don't feel supported by their immediate community). Even better, we want people's interest in these ideas and actions they take to spur more positive actions by those in their lives.

I believe we need the kind of effective altruism where:

Many paths to effective altruism. Many positive actions taken.

For this to work, I think we need to: 

I'm not saying that "anything goes", that we should drop our standards, or that we shouldn't be bold enough to make strong and unintuitive claims. I think we must continue to be truth-seeking to develop a shared understanding of the world and what we should do to improve it. But I think we need to keep our minds open to the fact that we're going to be wrong about a lot of things, that new people will bring helpful new perspectives, and that we want the type of effective altruism that attracts many people who have a variety of things to bring to the table.

  1. ^

    After reading several comments I think that I could have done better by defining "big tent" at the beginning so I added this definition and clarification after this was posted.

  2. ^

    I wear the nerd label proudly

  3. ^

    And love me some philosophy papers

  4. ^

    e.g. This, this, and many more over the past few weeks

  5. ^

    e.g. Despite it being a stretch: this

  6. ^

    We are aware that this is often fungible with larger donors but we think that’s okay for reasons we will get into in future posts. We also expect that the type of donor who’s interested in fungibility is a great person to get more involved in direct work so we are working to ensure that these deeper concepts are still presented to donors and have a path for people to go “down the rabbit hole”.

  7. ^

    As opposed to concerned – I've heard people share that their family or friends were worried about their involvement after looking into it.

  8. ^

    Even beyond our typical recommendations. I’ve been thinking about “everyday altruism” having a presence within our local EA group (e.g. giving blood together, volunteering to teach ethics in schools, helping people get to voting booths etc) – not skewing too much to this way, but having some presence could be good. As we’ve seen with Carrick’s campaign, doing some legible good within your community is something that outsiders will look for and will judge you on. Plus, some of these things could be worth doing anyway given how low-cost they might be (low confidence).

79 comments

Comments sorted by top scores.

comment by Thomas Kwa (tkwa) · 2022-05-20T12:01:38.711Z · EA(p) · GW(p)

There's value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights [LW · GW]-- it's easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don't address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.

This is separate from any concern about whether it's better for EA to be a large or small movement.

Be extra vigilant to ensure that effective altruism remains a "big tent".

Edit: big tent actually means "encompassing a broad spectrum of views", not "big movement". I now think this section has some relevance to the OP but does not centrally address the above point.

As I understand it, this means spending more resources on people who are "less elite" and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to "big tent" efforts, it would mean less field-building at top-20 universities (Cambridge AGISF), less highly scalable top-funnel (80,000 Hours), and fewer workshops for people who are committed to career changes and get huge speedups from workshops.

One could still make a neglectedness case for big-tent efforts, but the cost-benefit calculation definitely can't be summed up in one line.

Celebrate all the good actions[6] [EA(p) · GW(p)] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions (e.g. donating blood), from both an honesty/transparency perspective and a consequentialist perspective. From a consequentialist perspective, we should probably celebrate actions that create a lot of expected impact in order to encourage people to take those actions. So the relevant question is whether donating blood makes one closer to having a very high-impact career. I think the answer is often no: it often doesn't give practice in careful, scope-sensitive thinking, or bring high-impact actions into one's action space.

From a transparency perspective, celebration disproportionate to the good done also feels kind of fake. In the extreme, we're basically distorting our impressions of people's actions to get people to join a movement. I'm not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren't very good and don't deserve celebration, and (b) some actions are good but only because they're on the path to an impactful career.

Communicate our ideas both in high fidelity while remaining brief and to the point (be careful of the memes we spread).

Communication is hard. There's a tradeoff between fidelity, brevity, scale, and speed (time spent writing/editing/talking to distill 1 idea):

  • Long one-on-ones get very high fidelity, low brevity, low scale, and high speed
  • 80k podcasts are high fidelity, low brevity, high scale, and low speed
  • A tabling pitch is low fidelity, high brevity, moderate scale, and moderate speed
  • A short, polished EA forum post is moderate fidelity, high brevity, high scale, and very low speed. If you're not a gifted writer it takes multiple editing cycles to create a really high-quality post. Usually this includes copy-editing, sending the Google Doc draft to friends, having discussions in the comments, maybe adding visuals.

If we max out fidelity and brevity, we have to have lower scale and/or speed. I think this is okay if we're doing targeted communication, but it doesn't play well with the big-tent approach, where we also need high scale. One could say we should just get closer to the Pareto frontier, but I think everyone is already trying to do this.

Avoid coming across as dogmatic, elitist, or out-of-touch.

I don't strongly disagree with this-- it's bad to put off people unnecessarily-- but I think it can easily be taken too far. 

I'm worried that people will avoid looking dogmatic by adding unwarranted uncertainty about what actions are best, and in particular being unwilling to reject popular ideas. I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty. (This is related to the post "PR is corrosive; "reputation" is not [LW · GW].)

When someone asks whether volunteering in an animal shelter is high-impact, we should give well-reasoned arguments that there are probably higher-value things to do under almost every scope-sensitive moral view (perhaps starting from first principles if they're new), not avoid looking dogmatic by telling them something largely false like "Some people might find higher impact at an animal shelter because they have comparative advantage / are much more motivated, and there could also be unknown unknowns that place really high value on the work at animal shelters".

It's impossible to spend 1% of our resources on every idea with as much true merit as volunteering at animal shelters because there are more than 100 such ideas, so we only would because of bias towards popular things. But when we require a well-reasoned case using the ITN framework to allocate 1% of our effort to a problem, and therefore refuse to spend 1% of our effort on animal shelters, plastic bag bans, or the NYC homelessness problem, we will come off as dogmatic to some people. OP addresses the need to protect our epistemics at the end, but I think doesn't stress this enough.

There are also many crucial EA things that sound or are elitist.

  • More resources are focused on top universities than community colleges (because talent is concentrated there and this ultimately helps the most sentient beings).
  • Over 80% of EA funding is from billionaires.
  • People are flown across the world to retreats (because this is often the most efficient way to network or learn, and we think their time can do more good than spending the money on anything else).
  • We are looking for people who produce 1000x the impact of others (because they have more multipliers [EA · GW] available).

We shouldn't be exclusionary for no reason when talking to new people. But based on my experience community-building at two universities and at ~10 retreats/EAGs, much of the reason EA looks elitist is not that we're exclusionary for no reason; it's that EAs do important things that look elitist.

Maybe the most elitist-sounding practices should even be slightly reduced for PR reasons. But going further to reduce the appearance of elitism would hamstring EA by taking away some of the most valuable direct and meta interventions. 

Replies from: Rob Mitchell, lukefreeman, tamgent, james.lucassen
comment by Rob Mitchell · 2022-05-20T18:27:59.099Z · EA(p) · GW(p)

Celebrate all the good actions [EA(p) · GW(p)] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

--

I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions

I think the following things can both be true:

  • The best actions are much higher impact than others and should be heavily encouraged.
  • Most people will come in on easier but lower-impact actions, and if there isn't an obvious, stepped progression to higher-impact actions (and support to facilitate this), many will fall out unnecessarily. Or they may be put off entirely if 'entry level' actions either aren't available or receive very low reward or status.

I didn't read the OP as saying that we should settle with lower impact actions if there's the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level - either helping them to reach higher impact over time if for whatever reason they're unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that's what's available. 

Some of this will involve shouting out and rewarding less impactful actions beyond their absolute value – not for its own sake, but because this may be the best way of helping this progression. I've definitely noticed the '0-100' thing and if I was younger and less experienced it might have bothered me more. 

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:58:11.712Z · EA(p) · GW(p)

Thanks Rob. I think you just made my point better than me! 😀

comment by Luke Freeman (lukefreeman) · 2022-05-21T04:45:01.229Z · EA(p) · GW(p)

Thanks for your response. I tend to actually agree with a lot (but not all) of these points, so I totally own that some of this just needs clarification that wouldn't be the case if I were clearer in my original post.

this means spending more resources on people who are "less elite" and less committed to EA

There’s a difference between actively recruiting from “less elite” sources and being careful about your shopfronts so that they don’t put off would-be effective altruists and create enemies of could-be allies. I’m pointing much more to the latter than the former (though I do think there’s value in the former too).

 

I'm not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren't very good and don't deserve celebration, and (b) some actions are good but only because they're on the path to an impactful career.

I’m mostly saying we shouldn’t shun people for taking a suboptimal action. But also: we should be careful about how confident we are about what is suboptimal or not, and use positive reinforcement for good actions instead of guilting people for not reaching a particular standard. We should recognise that we’re all on a journey and the destination isn’t always that clear anyway (Rob Wiblin thought it might not be a good idea for SBF to earn to give, and I think that encouraging him to become a grantmaker at Open Philanthropy probably would have been a worse outcome).

Side note: There’s something pretty off-putting about treating the actions of altruistic people as purely a means to getting them into a particular predestined career. I think we lose good people when we treat them this way. We can seem like slimy salespeople.

 

Communication is hard. There's a tradeoff between fidelity, brevity, scale, and speed

Again this is where you have different focuses in different places. Our shopfronts (e.g. effectivealtruism.org, fellowships, virtual programs, introductory presentations, personal interactions with community members and group leaders etc) start brief and concise with a clear path to dig deeper.

 

big-tent approach where we also need high scale

I think this is a central confusion with my post and I own I must not have communicated this well: big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.

 

someone asks whether volunteering in an animal shelter is "EA", we should give well-reasoned arguments that there are probably higher-value things to do under almost every scope-sensitive moral view (perhaps starting from first principles if they're new), not avoid looking dogmatic by telling them something largely false like "Some people might find higher impact at an animal shelter because they have comparative advantage / are much more motivated, and there could also be unknown unknowns that place really high value on the work at animal shelters".

I agree! The former is a great response, the latter is not. I’d also say something along the lines of “you can have multiple goals and that’s fine”, and that if the warm fuzzies are important and motivating for you, then that’s great. I wouldn’t encourage someone to say it’s “EA” if it isn’t.

 

We should probably not come off as exclusionary when talking to new people.

Great! That’s one of my main points.

 

Taken to the extreme, avoiding the appearance of elitism would hamstring EA by taking away some of the most valuable direct and meta interventions.

I agree! I think we should just be judicious about it and bear in mind both (a) how the perception of elitism can hurt us; and (b) how we miss out on great people because of unnecessary elitism, which results in us achieving a lot less.

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2022-05-21T19:15:04.638Z · EA(p) · GW(p)

big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.

Thanks, this clears up a lot for me.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-22T00:49:54.161Z · EA(p) · GW(p)

Great! I definitely should have defined that up front!

comment by tamgent · 2022-05-20T23:31:08.244Z · EA(p) · GW(p)

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'? 

I got this impression from what I understood your main point to be, something like: 

There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.

I think there are several assumptions in both of these points that I want to unpack (and disagree with).

On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn't realise were important because we were overconfident in some problems/solutions, then that's quite bad. Conversely, in the world where we are right, yes maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I'll get to in next para). You could argue that talent correlates across all skillsets and approaches, and maybe there's some truth to that, but I think there's lots of places where the tails come apart [LW · GW], and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that EA top cause areas as listed on 80k are right about the problems that are 'most' important and the 'best' approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here - is that the case? In my view, these superlatives and collapsing of dimensions requires a lot of certainty about some baseline assumptions.

On the question of whether resource diversion from talented people to less 'talented' people is lower expected value: I think this depends on lots of things (sidestepping the question of talent definition which above para addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I'd say no, if you fund a non-top university group then you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than top because maybe they know fewer people there etc. then I'd say that resource is substitutable. The question of substitutability matters to identify if it is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides to not take the easy decision but bear short term cost and invest in getting to know the new non-top uni - it is possible that the ROI is higher because of returns to early-stage scaling being higher, and new value of information. We could also imagine a different causality: if grantmaking itself was less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis, and others to top unis, and we'd be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.

There were some points you made that I do agree with you on. In particular: celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs in the axes of desirable communication qualities. Another thing I noticed and like is your care for epistemic quality and rigour and wanting to protect those cultural aspects. It's not obvious to me why they would need to be sacrificed to have a bigger tent – but maybe we have different ideas of what a bigger tent looks like.

(Also, I did a quick reversal test of the actions in the OP in my head, as mentioned in the applause lights post you linked to, and the vast majority do not stand up as applause lights in my opinion, in that I'd bet you'd find the opposite point of view being genuinely argued for around this forum or LW somewhere.)

Replies from: lukefreeman, Linch, tkwa
comment by Luke Freeman (lukefreeman) · 2022-05-21T07:51:51.694Z · EA(p) · GW(p)

(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive, I think the comment would have held up better without it)

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2022-05-21T18:51:06.840Z · EA(p) · GW(p)

Thanks, I made an edit to weaken the wording.

I mostly wanted to point out a few characteristics of applause lights that I thought matched:

  • the proposed actions are easier to cheer for on a superficial level
  • arguing for the opposite is difficult, even if it might be correct: "Avoid coming across as dogmatic, elitist, or out-of-touch." inverts to "be okay with coming across as dogmatic, elitist, or out-of-touch"
  • when you try to put them into practice, the easy changes you can make don't address fundamental difficulties, and making sweeping changes has high cost

Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn't mean to make.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-22T00:42:40.342Z · EA(p) · GW(p)

Thanks Thomas! I definitely agree that when you get into the details of some of these they’re certainly not easy and that the framing of some of them could be seen as applause lights.

comment by Linch · 2022-05-21T15:06:51.189Z · EA(p) · GW(p)

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful

I think this is unhelpfully conflating at least three pretty different concepts. 

  • Whether impact can be collapsed to a single dimension when doing moral calculus.
  • Whether morality is objective
  • Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
Replies from: tamgent
comment by tamgent · 2022-05-21T20:47:28.947Z · EA(p) · GW(p)

Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.

Replies from: Linch
comment by Linch · 2022-05-21T21:07:07.406Z · EA(p) · GW(p)

I guess my personal read here is that I don't think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption. 

Replies from: tamgent
comment by tamgent · 2022-05-21T21:31:00.991Z · EA(p) · GW(p)

Yeah, I just couldn't understand his comment until I realised that he'd misunderstood the OP as saying it should be a big movement, rather than a movement with diverse views that doesn't deter great people who have different views. So I was looking for an explanation and that's what my brain came up with. 

Replies from: Linch
comment by Linch · 2022-05-23T21:38:43.983Z · EA(p) · GW(p)

Thank you, that makes sense!

comment by Thomas Kwa (tkwa) · 2022-05-21T18:34:10.955Z · EA(p) · GW(p)

First off, note that my comment was based on a misunderstanding of "big tent" as "big movement", not "broad spectrum of views".

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'? 

As Linch pointed out, there are three different questions here (and there's a 4th important one):

  1. Whether impact can be collapsed to a single dimension when doing moral calculus.
  2. Whether morality is objective
  3. Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
  4. Whether we can identify groups of people to invest in, given the uncertainty we have

Under my moral views, (1) is basically true. I think (2) is false: morality is not objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.

There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.

I would state my main point as something like "Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they're good, given that they have large costs". I do believe that there's a tail of talented+dedicated people who will make much more impact than others, but I don't think the second half follows, just that any reallocation of resources requires weighing costs and benefits.

Here are some things I think we agree on:

  • Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
  • Before deciding that top community-builders should work at a synagogue, we should make sure it's the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
  • We can identify certain groups of people who will pretty robustly have higher expected impact (again where "expected" takes into account our uncertainty over what paths are best): people with higher engagement (able to make career changes), higher intelligence+conscientiousness.
  • Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it's unclear where to put the marginal resource.
Replies from: Sophia, tamgent
comment by Sophia · 2022-05-21T19:06:26.523Z · EA(p) · GW(p)

It is plausible to me that there are some low opportunity cost actions that might make it way more likely that certain people will work on guesses that are plausible candidates for our top (or close to the top) guesses in the next 50 years who, otherwise, wouldn't engage with effective altruism.[1]

For example, how existing community organizers manage certain conversations can make a really big difference to some people's lasting impressions of effective altruism. 

Consider a person who comes to a group who is sceptical of the top causes we propose but uses the ITN framework [? · GW] to make a case for another cause that they believe is more promising by EA lights. 

There are many ways to respond to this person. One is to make it clear that you think that this person just hasn't thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as an opportunity to really reinforce the EA mindset/thinking tools that this person is demonstrating (which is awesome!) and then give some pushback where pushback is due.[2] 

There are also some higher opportunity cost actions to achieve this inclusivity, including the ones you discussed (but this doesn't seem what Luke was advocating for, see his reply [EA(p) · GW(p)] [3]).

  1. ^

    This seems to get the benefit, if done successfully, of not only their work but also of having another person who might be able to communicate the core idea of effective altruism with high fidelity [EA · GW] to many others they meet over their entire career, within a sphere of people we might not otherwise reach.

  2. ^

    Ideally, pushback is just on one part at a time. The shotgun method rarely leads to a constructive conversation, and it's hard to resolve all cruxes in a single conversation. The goal might be just to find one to resolve for now (maybe even a smaller one than the one the conversation started out with), and hopefully they'll enjoy the conversation enough to come back to another event to resolve a second.

     I think it's also worth acknowledging 1) that we have spent a decade steelmanning our views, and that sometimes it takes a lot of time to build up new ideas (see butterfly ideas [EA · GW]), which won't get built up if no one makes that investment but also 2) people have spent 10 years thinking hard about the "how to help others as much as possible" question so it is definitely worth some investment to get an understanding of why these people think there is a case for these existing causes.

  3. ^

    maybe this whole comment should be a reply to Luke's reply [EA(p) · GW(p)] but moving this comment is a tad annoying so hopefully it is forgivable to leave it here 🌞.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-22T00:47:25.032Z · EA(p) · GW(p)

Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and would attract more people with a scout mindset which I think would be very good).

Replies from: Sophia
comment by Sophia · 2022-05-22T05:35:53.261Z · EA(p) · GW(p)

🌞

comment by tamgent · 2022-05-21T21:25:26.072Z · EA(p) · GW(p)

Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it'd help with interpreting it.

So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.

I imagine then our central disagreement lies more in what it looks like once you collapse all that uncertainty on your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That's my best guess at our disagreement - that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing thing at such an aggregate level, but maybe I just don't do enough macro analysis, or I'm just not that maximising.

BTW on your areas where you think we agree: I strongly disagree with commitment to EA as a sign of how likely someone is to make impact. Probably it does better than base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn't be deterred from using EA as one of their inputs in helping them make an impact, depending on whether you take a big tent approach. I'm personally quite cautious not to confuse 'EA' with 'having impact' (not saying you did this, I'm just pretty wary about it and thus sensitive), and do worry about people selecting for 'EA alignment' - it really turns me off EA because it's a strong sign of groupthink and bad epistemic culture.

comment by james.lucassen · 2022-05-20T21:04:43.251Z · EA(p) · GW(p)

I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.

This is a great sentence, I will be stealing it :)

However, I think "having good legible epistemics" being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.

I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I've found anecdotally is that a sort of "friendly transparency" works pretty well for this - just be up front about what you believe and why, don't try to hide ideas that might scare people off, be open about the optics on things, ways you're worried they might come across badly, and why those bad impressions are misleading, etc.

comment by GraceAdams · 2022-05-20T03:57:37.547Z · EA(p) · GW(p)

Thanks for this post, Luke! 

This touches on many of my personal fears about the community in the moment. 

I sincerely hope that anyone who comes across our community with the desire and intent to participate in the project of effective altruism feels that they are welcome and celebrated, whether that looks like volunteering an hour each month, donating whatever they feel they can afford, or doing direct work.

To lose people who have diverse worldviews, abilities and backgrounds would be a shame, and could potentially limit the impact of the community. I'd like to see an increasingly diverse effective altruism community, all bound by seeking to do as much good as we can.

comment by lincolnq · 2022-05-21T00:58:14.541Z · EA(p) · GW(p)

The call to action here resonates -- feels really important and true to me, and I was just thinking yesterday about the same problem.

The way I would frame it is this:

The core of EA, what drives all of us together, is not the conclusions (focus on long term! AI!) -- it's the thought process and principles. Although EA's conclusions are exciting and headline-worthy, pushing them without pushing the process feels to me like it risks hollowing out an important core and turning EA into (more of) a cult, rather than a discipline.

Edit to add re. "celebrate the process" -- A bunch of people have critiqued you for pushing "celebrate all the good actions" since it risks diluting the power of our conclusions, but I think if we frame it as "celebrate and demonstrate the EA process" then that aligns with the point I'm trying to make, and I think works.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:44:32.449Z · EA(p) · GW(p)

Thanks! I really like your framing of both these 😀 

comment by MeganNelson · 2022-05-20T13:42:23.719Z · EA(p) · GW(p)

Thank you for this post! I'm a loud-and-proud advocate of the "big tent". It's partly selfish, because I don't have the markers that would make me EA Elite (like multiple Oxbridge degrees or a gazillion dollars). 

What I do have is a persistent desire to steadily hack away at the tremendous amount of suffering in the world, and a solid set of interpersonal skills. So I show up and I make my donations and I do my level best to encourage/uplift/motivate the other folks who might feel the way that I do. If the tent weren't big, I wouldn't be here, and I think that would be a loss. 

Your new GWWC member's EAGx experience is exactly what I'm out here trying to prevent. Here is someone who was interested/engaged enough to go to a conference, and - we've lost them. What a waste! Just a little more care could have helped that person come away willing to continue to engage with EA - or at least not have a negative view of it.

There are lots of folks out there who are working hard on "narrow tower" EA. Hooray for them - they are driving the forward motion of the movement and achieving amazing things. But in my view, we also need the "big tent" folks to make sure the movement stays accessible.

After all, “How can I do the most good, with the resources available to me?” is a question more - certainly not fewer! - people should be encouraged to ask.

comment by T3t · 2022-05-20T07:23:42.490Z · EA(p) · GW(p)

We can either become a movement of people who seem dedicated to a particular set of conclusions about the world, or we can become a movement of people united by a shared commitment to using reason and evidence to do the most good we can.

The former is a much smaller group, easier to coordinate our focus, but it's also a group that's more easily dismissed. People might see us as a bunch of nerds[1] [EA(p) · GW(p)] who have read too many philosophy papers[2] [EA(p) · GW(p)] and who are out of touch with the real world.

The latter is a much bigger group.

 

I'm aware that this is not exactly the central thrust of the piece, but I'd be interested if you could expand on why we might expect the former to be a smaller group than the latter.

 

I agree that a "commitment to using reason and evidence to do the most good we can" is a much better target to aim for than "dedicated to a particular set of conclusions about the world".  However, my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first.  I think this was true for mechanistic reasons related to how humans work rather than being accidents of history, and think that recent technological advances may even have exaggerated the effects.

Replies from: Maxdalton, lukefreeman, Sophia, Guy Raveh
comment by MaxDalton (Maxdalton) · 2022-05-20T09:04:50.928Z · EA(p) · GW(p)

+1 to this.

In fact, I think that it's harder to get a very big (or very fast-growing) set of people to do the "reason and evidence" thing well.  I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.

I am very keen for EA to be about the "reason and evidence" thing, rather than about specific answers.  But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.

Replies from: lukefreeman, Guy Raveh
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:17:30.516Z · EA(p) · GW(p)

I think that it's harder to get a very big (or very fast-growing) set of people to do the "reason and evidence" thing well.  I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.

I agree with this. I think it's even harder to build a community that reasons well together when we come across dogmatically (and we risk cultivating an echo chamber).

Note: I do want to applaud a lot of recent work that the CEA core team is doing to avoid this - the updates to effectivealtruism.org, for example, have helped!

I am very keen for EA to be about the "reason and evidence" thing, rather than about specific answers.  But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.

A couple of things here:

Firstly, 30%/year is pretty damn fast by most standards!

Secondly, I agree that being thoughtful is essential (that's a key part of my central claim!).

Thirdly, some of the rate of growth is within "our" control (e.g. CEA can control how much it invests in certain community building activities). However, a lot of it isn't. People are noticing as we ramp up activities labelled EA or even loosely associated with EA.

For example, to avoid growing faster than 30%/year, should someone tell Will and the team promoting WWOTF to pull back on the promotion? Should someone tell SBF not to support more candidates or scale up the FTX Future Fund? Should we not promote EA to new donors/GWWC members? Should GiveWell stop scaling up?

If anything associated with EA grows, it'll trickle through to more people discovering it.

I think we need to expect that it's not entirely within our control and to act thoughtfully in light of this.

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-05-23T08:36:32.591Z · EA(p) · GW(p)

Agree that echo chamber/dogmatism is also a major barrier to epistemics!

"30% seems high by normal standards" - yep, I guess so. But I'm excited about things like GWWC trying to grow much faster than 30%, and I think that's possible.

Agree it's not fully within our control, and that we might not yet be hitting 30%. I think that if we're hitting >35% annual growth, I would begin to favour cutting back on certain sorts of outreach efforts or doing things like increasing the bar for EAG. I wouldn't want GW/GWWC to slow down, but I would want you to begin to point fewer people to EA (at least temporarily, so that we can manage the growth). [Off the cuff take, maybe I'd change my mind on further reflection.]

comment by Guy Raveh · 2022-05-20T10:34:31.618Z · EA(p) · GW(p)

grow cautiously (maybe around 30%/year)

Are there estimates about current or previous growth rates?

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-05-20T10:38:50.584Z · EA(p) · GW(p)

There are some, e.g. here. [EA · GW]

comment by Luke Freeman (lukefreeman) · 2022-05-21T04:59:13.997Z · EA(p) · GW(p)

my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first.  I think this was true for mechanistic reasons related to how humans work rather than being accidents of history, and think that recent technological advances may even have exaggerated the effects.

I think that works for many groups, and many subfields/related causes, but not for "effective altruism". 

To unpack this a bit, I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement built around a "commitment to using reason and evidence to do the most good we can".

However, when we are selling a "commitment to using reason and evidence to do the most good we can" and instead present people with a very narrow set of conclusions, I think we do neither of these things well. Instead, we put people off and undermine our value.

I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.

People are hearing about EA. These people could become allies or members of the community and/or our causes. However, if we present ourselves too narrowly we might not just lose them, but they might become adversaries.

I've seen this already: people soured on EA because it seemed too narrow and too overconfident, became increasingly adversarial, and that hurt our overall goals of improving the world.

Replies from: T3t
comment by T3t · 2022-05-21T05:22:18.009Z · EA(p) · GW(p)

I think that works for many groups, and many subfields/related causes, but not for "effective altruism". 

To unpack this a bit, I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement that is "commitment to using reason and evidence to do the most good we can".

I agree!  That's why I'm surprised by the initial claim in the article, which seems to be saying that we're more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us.  It seems like the two should be flipped?

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:37:42.723Z · EA(p) · GW(p)

Sorry if the remainder of the comment didn't communicate this clearly enough:

I think the "bait and switch" of EA  (sell the "EA is a question" but seem to deliver "EA is these specific conclusions") is self-limiting for our total impact. This is self-limiting because:

  • It limits the size of our community (put off people who see it as a bait and switch)
  • It limits the quality of the community (groupthink, echo chambers, overfishing small ponds, etc.)
  • We lose allies
  • We create enemies
  • Impact is a product of: size (community + allies) * quality (community + allies) - actions of enemies actively working against us.
  • If we decrease the size and quality of our community and allies while increasing the size and ferocity of the people working against us, then we limit our impact.

Does that help clarify?

comment by Sophia · 2022-05-21T22:03:48.692Z · EA(p) · GW(p)

A core part of the differing intuitions might be because we're thinking about two different timescales.

 It seems intuitively right to me that the "dedicated to a particular set of conclusions about the world" version of effective altruism will grow faster in the short term. I think this might be because conclusions require less nuanced communication and, being more concrete, suggest immediate actions that can get people on board faster.

I also have the intuition that a "commitment to using reason and evidence to do the most good we can" (I'd maybe add, "with some proportion of our resources") has the potential to have a larger backing in the long-term. 

I have done a terrible "paint" job (literally used paint) in purple on one of the diagrams in this post [? · GW] to illustrate what I mean:

There are movement building strategies that end us up on the grey line, which gives us faster growth in the short term (so a bigger tent for a while), but doesn't change our saturation point (we're still at saturation point 1). 

I think that a "broad spectrum of ideas" might mean our end saturation point is higher even if this might require slower growth in the near term. I've illustrated this as the purple line, which ends up bigger in the end, at saturation point 2, even if growth is slower in the short term. In this sense, we will be a smaller tent for a while, but we have the potential to end up as a bigger tent in some terminal equilibrium.

Replies from: Sophia
comment by Sophia · 2022-05-21T22:16:40.971Z · EA(p) · GW(p)

An example of a "movement" that had a vaguer, bigger picture idea that got so big it was too commonplace to be a movement might be "the scientific method"? 

comment by Guy Raveh · 2022-05-20T10:33:40.805Z · EA(p) · GW(p)

I think "large groups that reason together on how to achieve some shared values" is something that's so common, that we ignore it. Examples can be democratic countries, cities, communities.

Not that this means reasoning about being effective can attract as large a group. But one can hope.

comment by Nathan Young (nathan) · 2022-05-20T09:31:57.440Z · EA(p) · GW(p)

I both relatively strongly agree and strongly disagree with this post. Apologies that my points contradict one another:

Agreement:

  • Yes, community vibes feel weird right now. And I think in the run up to WWOTF they will only get weirder
  • Yes, we should be gracious to people who do small things. For me, being an EA is about being more effective or more altruistic with even $10 a month.  

Disagreement:

  • I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one. I still think there should be actual research on this, but so far, much of the impact (SBF, Moskovitz funding GiveWell charities, direct work) has been from very engaged people. I find it compelling that we want similar levels of engagement in future. Do low-engagement people become high-engagement? I don't know. I don't emotionally enjoy this conclusion, but I can't say it's wrong, even though it clashes with the bullet point I made above.
    • GWWC is clearly a mass movement kind of organisation. I guess they should say, you might want to check out effective altruism, but it's not necessary.
  • I don't think that EA is for everyone. Again this clashes with what I said above, but I think that it can be harder for people who leave a community after some time than for those who are rejected at the door. If my above point is correct, then there should be some way to signal that EA is for people who want to really engage and that it may not be for everyone.

Synthesis

  • I suggest a wider movement being created around effective giving, perhaps reaching religious groups. This seems like the real "mass movement" etc
  • I would like research on whether being smaller and more highly engaged is better
  • Be welcoming to new people, gracious to people whatever they are doing, but signal that EAGs are mainly for those who are engaged. Anyone can come to events and feel welcome, but there is a desire for more engagement and that may not fit everyone.

I'm worried this will be controversial and I think I could have worded it better, but I think it's better to say something clear and maybe wrong than vague. I may make edits and explain why.

Replies from: lukefreeman, Michael_Wiebe, casebash, Guy Raveh
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:30:46.422Z · EA(p) · GW(p)

Thanks Nathan. I definitely see the tensions here. Hopefully these clarifications will help :)

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one.

My central claim isn't about the size of the community, it's about the diversity of EA that we present to the world (and represent within EA) and staying true to the core question not a particular set of conclusions. 

It depends on what you mean by "focus" too. The community will always be some degree of concentric circles of engagement. The total size and relative distribution of engagement will vary depending on what we focus on. My central claim is that the total impact of the community will be higher if the community remains a "big tent" that sticks to the core question of EA. The mechanism is that we create more engagement within each level of engagement, with more allies and fewer adversaries.

 

Do low-engagement people become high-engagement?

I've never seen someone become highly engaged instantly. I've only seen engagement as something that increases incrementally (sometimes fast, sometimes slow; sometimes it hits a point and tapers off, and sadly sometimes high engagement turns to high anti-engagement).

 

I don't think that EA is for everyone. Again this clashes with what i said above, but I think that it can be harder for people who leave a community after some time than those who are rejected at the door. If my above point is correct, then there should be some way to signal to people that EA is for people who want to really engage and that it may not be for everyone

Depends on what you mean by EA. In my conception (and the conception I advocate for) everyone is an effective altruist to some extent sometimes and nobody is entirely an effective altruist ever. Effective altruism is a way of thinking not an identity. Some people are part of the "EA community" while some people eschew the label and community yet have much higher impact than most people within the "EA community" because they've interrogated big world problems and taken significant positive actions.

comment by Michael_Wiebe · 2022-05-20T20:26:17.495Z · EA(p) · GW(p)

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one.

Why not both? Have a big tent with less-engaged people, and a core of more-engaged people.

Also, a lot of people donating small amounts can add up to big amounts.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-21T05:41:28.539Z · EA(p) · GW(p)

Agree on both points. I think the concentric circles model still holds well. "Big tent" still applies at each level of engagement though. The best critics in the core will be those who still feel comfortable in the core while disagreeing with lots of people. I highly value people who are at a similar level of engagement but hold very different views to me as they make the best critics.

comment by Chris Leong (casebash) · 2022-05-20T10:12:45.436Z · EA(p) · GW(p)

What is WWOTF?

I reckon it's better if we focus on being a smaller highly engaged community rather than a really big one

Agreed, though it makes sense for Giving What We Can to become a mass movement. I think it'd be good for some people involved in GWWC to join EA, but there's no need to push it too hard. More like let people know about EA and if it resonates with people they'll come over.

but signal that EAGs are mainly for those who are engaged

Maybe, I think there's scope for people to become more engaged over time.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-05-20T10:19:48.390Z · EA(p) · GW(p)

What is WWOTF?

"What We Owe the Future", Will MacAskill's new book.

comment by Guy Raveh · 2022-05-20T10:17:33.997Z · EA(p) · GW(p)

I think there are two ways to frame an expansion of the group of people who are engaged with EA through more than donations.

The first, which sits well with your disagreements: we're doing extremely important things which we got into by careful reasoning about our values and impact. More people may cause value drift or dilute the more impactful efforts to make way on the most important problems.

But I think a second one is much more plausible: we're almost surely wrong about some important things. We have biases that stem from who the typical EAs are, where they live, or just the very noisy path that EA has taken so far. While our current work is important, it's also crucial that our ideas are exposed to, and processed by, more people. What's "value drift" in one person's eyes might really be an important correction in another's. What's "dilution" may actually prove to mean a host of new useful perspectives and ideas (among other less useful ones).

comment by MaxDalton (Maxdalton) · 2022-05-20T09:20:37.278Z · EA(p) · GW(p)

Thanks for writing this up Luke! I think you're pointing to some important issues. I also think you and the GWWC team are doing excellent work - I'm really excited to see more people introduced to effective giving!

[Edit to add: Despite my comment below, I still am taking in the datapoints and perspectives that Luke is sharing, and I agree with many of his recommendations. I don't want to go into all of the sub-debates below because I'm focused on other priorities right now (including working on some of the issues Luke raises!).]

However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

Here are some things that I think you're pointing to:

  1. "Particular set of conclusions" vs. "commitment to using evidence and reasoning"
  2. Size of the community, which we could in turn split into
    1. Rate of growth of the community
    2. Eventual size of the community
  3. How welcoming we should be/how diverse
    1. [I think you could split this up further.]
  4. In what circumstances, and to what degree, there should be  encouragement/pressure to take certain actions, versus just presenting people with options.
  5. How much we should focus on clearly communicating EA to people who aren't yet heavily involved.

This matters because you're sometimes then conflating these dimensions in ways that seem wrong to me (e.g. you say that it's easier to get big with the "evidence and reasoning" framing, but I think the opposite). 

Replies from: Jess_Whittlestone, lukefreeman, MichaelPlant
comment by Jess_Whittlestone · 2022-05-21T11:40:36.972Z · EA(p) · GW(p)

I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).

 I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently,  especially where they are reasonable and constructive like this one.

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community). I haven't spent a lot of time on this forum recently so I'm wondering if other people think the norms around up/downvoting are different to my interpretation, and in particular whether Max you meant to use it differently?

[EDIT: I checked the norms on up/downvoting [EA · GW], which say to downvote if either "There’s an error", or "The comment or post didn’t add to the conversation, and maybe actually distracted." I personally think this post added something useful to the conversation about the scope and focus of EA, and it seems harsh to downvote it because it conflated a few different dimensions - and that's why Max's comment seemed a bit harsh/dismissive to me]

Replies from: aarongertler, Maxdalton
comment by Aaron Gertler (aarongertler) · 2022-05-25T00:23:29.822Z · EA(p) · GW(p)

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.

So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well.  Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).

In the end, I think it's down to individual users to take their best guess at whether saying "I downvoted" or "I upvoted" would be helpful in a given case. And I'm still not sure whether having more such comments would be a net positive — probably depends on circumstance.

***

Max having a senior position in the community is also a complicated thing. On the one hand, there's a risk that anything he says will be taken very seriously and lead to reactions he wouldn't want. On the other hand, it seems good for leaders to share their honest opinions on public platforms (rather than doing everything via DM or deliberately softening their views).

There are still ways to write better or worse comments, but I thought Max's was reasonable given the balancing act he's trying to do (and the massive support Luke's post had gotten already — I'd feel differently if Max had been joining a pile-on or something).

Replies from: Guy Raveh
comment by Guy Raveh · 2022-05-25T11:49:28.268Z · EA(p) · GW(p)

I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).

The problem is the actual reason:

I think you're pointing to some important issues... However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly - it's better not to write at all." Which I think most of us would not endorse.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2022-05-26T11:34:21.159Z · EA(p) · GW(p)

I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". *

I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote.

If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.

 

*This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship [EA(p) · GW(p)] comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-05-26T15:24:17.460Z · EA(p) · GW(p)

I don't share your view about what a downvote means. However, regardless of what I think, it doesn't actually have any fixed meaning beyond that which people assign to it - so it'd be interesting to have some stats on how people on the forum interpret it.

But that's where the users' identities and relationship comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.

Most(?) readers won't know who either of them is, not to mention their relationship.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2022-05-28T00:37:03.443Z · EA(p) · GW(p)

I don't share your view about what a downvote means.

What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis?

It'd be interesting to have some stats on how people on the forum interpret it.

Why not create a poll? I would, but I'm not sure exactly which question you'd want asked.

Most(?) readers won't know who either of them is, not to mention their relationship.

Which brings up another question — to what extent should a comment be written for an author vs. the audience? 

Max's comment seemed very directed at Luke — it was mostly about the style of Luke's writing and his way of drawing conclusions. Other comments feel more audience-directed. 

Replies from: Linch, Guy Raveh
comment by Linch · 2022-06-07T02:43:01.878Z · EA(p) · GW(p)

Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers be worse at thinking about this problem than if they didn't read this post/comment, assuming that the time spent reading this post/comment is free."

I basically never strong downvote posts unless it's obvious spam or otherwise an extremely bad offender in the "worsens thinking" direction. 

comment by Guy Raveh · 2022-06-04T21:14:50.262Z · EA(p) · GW(p)

It's been over a week so I guess I should answer even if I don't have time for a longer reply.

What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis?

I think so, but I'm not very confident.

to what extent should a comment be written for an author vs. the audience?

I don't think private conversations can exist on a public platform. If it's not a DM, there's always an audience, and in most contexts, I'd expect much of a comment's impact to come from its effects on that audience.

Why not create a poll?

The polls in that specific group look like they have a very small and probably unrepresentative sample size. Though I don't think we'll be able to get a much larger one on such a question, I guess.

comment by MaxDalton (Maxdalton) · 2022-05-23T07:34:17.052Z · EA(p) · GW(p)

Nice to see you on the Forum again! 

Thanks for sharing that perspective - that makes sense. Possibly I was holding this to too high a standard - I think that I held it to a higher standard partly because Luke is also an organization/community leader, and probably I shouldn't have taken that into account. Still, overall my best guess is that this post distracted from the conversation, rather than adding to it (though others clearly disagree). Roughly, I think that the data points/perspectives were important but not particularly novel, and that the conflation of different questions could lead to people coming away more confused, or to making inaccurate inferences. But I agree that this is a pretty high standard, and maybe I should just comment in circumstances like this.

I also think I should have been more careful re seeming to discourage suggestions about EA. I wanted to signal "this particular set of suggestions seems muddled" not "suggestions are bad", but I definitely see how my post above could make people feel more hesitant to share suggestions, and that seems like a mistake on my part. To be clear: I would love feedback and suggestions! [EA · GW]

comment by Luke Freeman (lukefreeman) · 2022-05-21T04:53:09.033Z · EA(p) · GW(p)

Thanks Max. I agree that there is a lot of ground covered here that isn't broken up into different dimensions and that it could have been better if broken up as such. I disagree that this entirely undermines the core proposition, which is: (a) whether we like it or not we are getting more attention; (b) it's particularly important to think carefully about our "shop fronts" with that increased attention; and therefore (c) staying true to "EA as a question" instead of a particular set of conclusions is going to ultimately serve our goals better (this might be our biggest disagreement?).

I'd be very interested to hear you unpack why you think the opposite of "easier to get big with the 'evidence and reasoning' framing". This seems to be a pretty important crux.

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-05-23T07:48:28.267Z · EA(p) · GW(p)

Ah, I think I was actually a bit confused what the core proposition was, because of the different dimensions.

Here's what I think of your claims:

a) 100% agree, this is a very important consideration.

b) Agree that this is important. I think it's also very important to make sure that our shop fronts are accurate, and that we don't importantly distort the real work that we're doing (I expect you agree with this?).

c) I agree with this! Or at least, that's what I'm focused on and want more of. (And I'm also excited about people doing more cause-specific or community building to complement that/reach different audiences.)

So maybe I agree with your core thesis!

How easy is it to get big with evidence and reasoning?

I want to distinguish a few different worlds:

  1. We just do cause specific community building, or action-specific community building.
  2. We do community building focused on "EA as a question" with several different causes. Our epistemics are decent but not amazing.
  3. We do community building focused on "EA as a question" with several different causes. We are aiming for the epistemics of core members to be world class (like probably better than the average on this Forum, around the level that I see at some core EA organizations).

I'm most excited about option 3. I think that the thing we're trying to do is really hard and it would be easy for us to cause harm if we don't think carefully enough.

And then I think that we're kind of just about at the level I'd like to see for 3. As we grow, I naturally expect regression to the mean, because we're adding new people who have had less exposure to this type of thinking and may be less inclined to it. And also because I think that groups tend to reason less well as they get older and bigger. So I think that you want to be really careful about growth, and you can't grow that quickly with this approach.

I wonder if you mean something a bit more like 2? I'm not excited about that, but I agree that we could grow it much more quickly.

I'm personally not doing 1, but I'm excited about others trying it. I think that, at least for some causes, if you're doing 1 you can drop the epistemics/deep understanding requirements, and just have a lot of people coordinate around actions. E.g. I think that you could build a community of people who are earning to give for charities, and deferring to GiveWell and OpenPhilanthropy and GWWC  about where they give. I think that this thing could grow at >200%/year. (This is the thing that I'm most excited about GWWC being.) Similarly, I think you could make a movement focused on ending global poverty based on evidence and reasoning that grows pretty quickly - e.g. around lobbying governments to spend more on aid, and spend aid money more effectively. (I think that this approach basically doesn't work for pre-paradigmatic fields like AI safety, wild animal welfare, etc. though.)

Replies from: lukefreeman, lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-24T04:30:26.733Z · EA(p) · GW(p)

Had a bit of time to digest overnight and wanted to clarify this a bit further.

I'm very supportive of #3, including "epistemics of core members to be world class". But I fear that trying to achieve #3 too narrowly (in demographics, worldviews, engagement levels, etc.) might ultimately undermine our goals (putting more people off, leaving the core group without as much support, narrowing our worldviews in a way that hurts our epistemics, and failing to create enough allies to get the things we want done).

I think that nurturing the experience through each level of engagement from outsider to audience through to contributor and core while remaining a "big tent" (worldview and action diverse) will ultimately serve us better than focusing too much on just developing a world class core (I think remaining a "big tent" is a necessary precondition because the world class core won't exist without diversity of ideas/approaches and the support network needed for this core to succeed).

Happy to chat more about this.

comment by Luke Freeman (lukefreeman) · 2022-05-23T10:17:13.542Z · EA(p) · GW(p)

Thanks for clarifying! Not much to add now right this moment other than to say that I appreciate you going into detail about this.

comment by MichaelPlant · 2022-05-20T11:33:09.124Z · EA(p) · GW(p)

Hello Max,

In turn, I strongly downvoted your post.

Luke raised, you say, some "important issues". However, you didn't engage with the substance of those issues. Instead, you complained that he hadn't adequately separated them even though, for my money, they are substantially related. I wouldn't have minded that if you'd then gone on to offer your thoughts on how EA should operate on each of the dimensions you listed, but you did not.

Given this, your comment struck me as unacceptably dismissive, particularly given you are the CEO of CEA. The message it conveys is something like "I will only listen to your concerns if you present them exactly in the format I want" which, again for my money, is not a good message to send.

Replies from: Maxdalton, nananana.nananana.heyhey.anon
comment by MaxDalton (Maxdalton) · 2022-05-20T11:46:44.746Z · EA(p) · GW(p)

I'm sorry that it came off as dismissive. I'll edit to make clearer that I appreciate and value the datapoints and perspectives. I am keen to get feedback and suggestions in any form [EA · GW]. I take the datapoints and perspectives that Luke shared seriously, and I've discussed lots of these things with him before. Sounds like you might want to share your perspective too? I'll send you a DM.

I viewed the splitting out of different threads as a substantive contribution to the debate, but I'm sorry you didn't see it that way. :) I agree that it would have been better if I'd given my take on all of the dimensions, but I didn't really want to get into all of those threads right now.

comment by nananana.nananana.heyhey.anon · 2022-05-21T11:08:39.123Z · EA(p) · GW(p)

Would you have this same reaction if you saw Luke and Max or GWWC/CEA as equals and peers? Maybe so! It seems like you see this as the head of CEA talking down to the OP. Max and Luke seem to know each other though; I read Max's comment as a quick flag between equals that there's a disagreement here, but writing it on the forum instead of an email means the rest of us get to participate a bit more in the conversation too.

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2022-05-23T07:49:14.391Z · EA(p) · GW(p)

FWIW, I do think that I reacted to this a bit differently because it's Luke (who I've worked with, and who I view as a peer). I think I would have been more positive/had lower standards for a random community member.

Replies from: lukefreeman
comment by Patrick Gruban (gruban) · 2022-05-20T05:04:52.836Z · EA(p) · GW(p)

Thank you for this post, I was thinking along similar lines and am grateful that you wrote this down. I would like to see growth in the number of people who make decisions around career, donations and volunteering based on the central EA question, regardless of whether they call themselves EA. More than a billion people live in high-income countries alone, and I find it conceivable that 1-10% would be open to making changes in their lives depending on the action they can take.

But for EA to accommodate 10-100 million people, I assume we would need different shopfronts in addition to the backend capabilities (having enough charities that can handle vast amounts of donations, having pipelines for charity entrepreneurship that can help these charities grow, consulting capacity to help existing organizations switch to effectiveness metrics, etc.).

If we look at the movement from the perspective of scaling to these numbers, I assume we will see a relatively short-term saturation in longtermist cause areas. Currently we don't seem to be funding-constrained in that area, and I don't see a world where millions working on these problems will be better than thousands. So from this perspective I would like us to take a longer view and build the capacity now for a big EA movement that will be less effective on the margin, while advocating for the most effective choices now in parallel.

comment by Jamie_Harris · 2022-06-04T16:00:48.434Z · EA(p) · GW(p)

I initially found myself nodding along with this post, but I then realised I didn't really understand what point you were trying to make. Here are some things I think you argue for:

  • theoretically, EA could be either big tent or small tent
  • to the extent there is a meaningful distinction, it seems better in general for EA to aim to be big tent
  • Now is a particularly important time to aim for EA to be big tent
  • Here are some things that we could do help make EA more big tent.

Am I right in thinking these are the core arguments?

A more important concern of mine with this post is that I don't really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I'm not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you're advocating for?

comment by Kevin Lacker · 2022-05-20T22:52:03.805Z · EA(p) · GW(p)

I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.

comment by RedStateBlueState · 2022-05-20T04:09:23.671Z · EA(p) · GW(p)

I think we can use the EA/Rationality divide to form a home for the philosophy-oriented people in Rationality that doesn't dominate EA culture. Rationality used to totally dominate EA, something that I think has become less true over time, even if it's still pretty prevalent at current levels. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for creating a thriving community.

comment by Jonny Spicer · 2022-05-20T15:15:31.417Z · EA(p) · GW(p)

Thanks for writing this Luke! Much like others have said, there are some sections in this that really resonate with me and others I'm not so sure about. In particular I would offer a different framing on this point:

Celebrate all the good actions[6] [EA(p) · GW(p)] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

Rather than celebrating actions that have altruistic intent but questionable efficacy, I think we could instead be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and there's nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they might not be obviously maximally impactful. Impact is a marathon, not a sprint, and it's important that people who are looking to have a large impact make sustainable choices, including keeping their morale high. For example, take people working on causes like AI safety, where it's difficult to see tangible impact. If donating blood gives you the boost you need to keep feeling good about yourself and what you are doing with your life, and therefore prevents you from becoming disillusioned with your choices and contributing less to AI safety, then I think that makes it very worth doing. However, I think that is more an act of self-care than something that ought to be celebrated in the community (although perhaps acts of self-care ought to be more celebrated in the community).

I also think that a lot of average day-to-day charity (and perhaps other kinds of altruism) is primarily motivated by guilt, which I don't think is particularly helpful for donors and I'd be surprised if it proved to be sustainable for charities either. I think effective altruism does a great job of reframing this: when I donate to GiveWell MIF, instead of doing it to assuage a sense of guilt, I do it because it lets me feel good about myself, knowing that I am actually making a tangible difference in the world with my actions. These are the same warm fuzzy feelings as from before, and I think perhaps that's the framing I would prefer here: humans are warm-fuzzy-feeling-optimisers, and EA could do a better job at empowering people to feel those feelings when they make maximally impactful choices, rather than just ones where their impact is immediately obvious or provides some social kudos.

Replies from: casebash, Kevin Lacker
comment by Chris Leong (casebash) · 2022-05-21T04:23:08.886Z · EA(p) · GW(p)

"I think we could be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and there's nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they might not be obviously maximally impactful. Impact is a marathon, not a sprint, and it's important that people who are looking to have a large impact make sustainable choices, including keeping their morale high."

Strongly agreed.

comment by Kevin Lacker · 2022-05-20T23:12:22.412Z · EA(p) · GW(p)

I think you may be underestimating the value of giving blood. It seems like according to the analysis here:

https://forum.effectivealtruism.org/posts/jqCCM3NvrtCYK3uaB/blood-donation-generally-not-that-effective-on-the-margin [EA · GW]

A blood donation is still worth about 1/200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.

Replies from: tkwa, Jonny Spicer
comment by Thomas Kwa (tkwa) · 2022-05-21T08:20:18.159Z · EA(p) · GW(p)

Most of the value of giving blood is in fuzzies. You can buy a QALY from AMF for around $100, so that's $0.50, less than 0.1x US minimum wage if blood donation takes an hour.
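
The arithmetic here can be checked with a quick back-of-the-envelope calculation. All figures are the rough ones used in this thread (~1/200 of a QALY per donation, ~$100 per QALY via AMF, an hour per donation, and a US federal minimum wage of about $7.25/hour), and every one of them is a contestable assumption:

```python
# Rough figures from the thread (all assumptions, not settled numbers)
qaly_per_donation = 1 / 200        # QALYs per blood donation (linked analysis)
dollars_per_qaly_amf = 100.0       # approximate cost to "buy" a QALY via AMF
hours_per_donation = 1.0           # time cost of one donation
us_min_wage = 7.25                 # US federal minimum wage, $/hour

# AMF-equivalent dollar value of one donation
donation_value_usd = qaly_per_donation * dollars_per_qaly_amf
print(donation_value_usd)          # 0.5, i.e. fifty cents

# Value per hour as a fraction of minimum wage
fraction_of_min_wage = donation_value_usd / (hours_per_donation * us_min_wage)
print(round(fraction_of_min_wage, 3))  # 0.069, i.e. less than 0.1x minimum wage
```

On these numbers the "less than 0.1x US minimum wage" claim goes through, though it is sensitive to the $/QALY figure chosen.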

If someone doesn't believe the valuation of a QALY it still feels wrong to encourage them to give blood for non-fuzzies reasons. I would encourage them to maximize their utility function, and I don't know what action does that without more context-- it might be thinking more about EA, donating to wildlife conservation, or doing any number of things with an altruistic theme.

comment by Jonny Spicer · 2022-05-21T07:06:44.951Z · EA(p) · GW(p)

Thanks for pointing that out, I didn't realise how effective blood donation was. I think my original point still stands, if "donating blood" is substituted with a different proxy for something that is sub-maximally effective but feels good though.

Replies from: lukefreeman, lukefreeman, lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-05-21T08:30:23.403Z · EA(p) · GW(p)

Also, almost everything anyone does is sub-maximally effective. We simply do not know what maximally effective is. We do think it’s worth trying to figure out our best guesses using the best tools available but we can never know with 100% certainty.

comment by Luke Freeman (lukefreeman) · 2022-05-21T08:27:51.068Z · EA(p) · GW(p)

Yeah, I actually called this point out in general in my #8 footnote (“Plus some of these things could (low confidence) make a decent case for considering how low cost they might be.”). I’ve been at EA events or in social contexts with EAs when someone has asserted with great confidence that things like voting and giving blood are pointless. This hasn’t been well received by onlookers (for good reason IMHO) and I think it does more harm than good.

comment by BrianTan · 2022-05-20T09:30:21.684Z · EA(p) · GW(p)

Thanks for this post! Just pointing out that the links in footnotes 3 and 4 all seem to be broken

Edit: They were working, just had to do a captcha

Replies from: Guy Raveh
comment by Guy Raveh · 2022-05-20T10:29:33.237Z · EA(p) · GW(p)

They currently work for me.

comment by Guy Raveh · 2022-05-20T10:28:13.414Z · EA(p) · GW(p)

Thanks for the post. I agree with most of it.

I think on the one hand, someone participating by donations only may still have a huge impact, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.

On the other hand, I think it's important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.

Lastly, I don't think Elon Musk is someone we should associate ourselves with, since about yesterday.