The Importance of Truth-Oriented Discussions in EA

post by Freethinkers In EA · 2019-03-13T23:36:47.199Z · score: 40 (36 votes) · EA · GW · 39 comments

Contents

  Banning topics does not improve inclusivity (responds to link)
  Why Open Debate is Important For Truth (responds to link)
  While Both Matter, Truth is More Important than Inclusivity (continues responding to link)
  Alienation is More Complex than it Might Appear (responds to link)
  Experiences need to be Understood in Context (responds to link)
  Demographics Aren’t a Strong Indicator of Inclusiveness (responds to link)
  The Limits of the Narrative of Oppression (responds to link)
  Why Truth is of Paramount Importance for EA (responds to link)
  Suggestions

This document is a collaborative effort. It responds to the document Making Discussions in EA Groups Inclusive [EA · GW] (henceforth referred to as Making Discussions Inclusive). We appreciate the time and care that went into making that document. It not only brought attention to issues that many EAs might not have previously been aware of, but it addressed these issues with a greater level of nuance than they are usually discussed with. At the same time, we also feel that there are several important issues that were not addressed. We will address the original document section by section and elaborate on why we believe that limiting discussion is a risky path to go down and why it won’t ultimately deliver on what we are told it will.

Content Note: This document discusses issues which were listed in Making Discussions Inclusive as potentially alienating.

Banning topics does not improve inclusivity

We agree that discussing certain topics will often have impacts in terms of how comfortable or welcome certain people feel in a group and that these trade-offs should be considered. However, we believe that limiting debate tends to improve inclusion less than you might think, for a number of reasons:

Firstly, there is always the option to not participate in a discussion if you believe that engaging would be emotionally draining or a waste of time. We don’t want to dismiss how frustrating it can be to see people being wrong without it being sufficiently challenged, but we also believe that people are generally capable of overcoming these challenges and learning to adopt a broader perspective from which they can see that it usually isn’t actually very important if someone is wrong on the internet. However, people are less likely to develop this resilience when the community they are part of creates too much of an expectation of comfort. This is important since comfort in a particular context is a result of both environmental factors and the individual’s resilience.

Secondly, there are competing access needs. Effective altruism provides a community where people can have discussions that they cannot have elsewhere. Limiting the discussion may make some people feel more welcome, but we also risk losing the most independent thinkers. And these are vital to discovering truth as detailed in the next section.

Thirdly, groups are not uniform. Measures to make a minority more welcome may make minorities within that minority feel less welcome. Attempts to limit criticisms of Islam may marginalise ex-Muslims or women from Islamic countries. Preventing people from making arguments against abortion may make pro-choice women more comfortable, but at the cost of making pro-life women feel unwelcome (conservatives are a minority within EA). Limiting discussion of cultural reasons for poverty may be alienating to members of that minority who seek to reform their culture.

Lastly, we believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with. This is especially the case with censorship as rules are often extended or interpreted more broadly over time. Even though certain rules may seem quite mild and reasonable by themselves, their mere existence creates a reasonable fear that those with certain viewpoints will eventually be completely pushed out.

This is not to claim that all discussions should necessarily occur in all contexts, just that we should be very wary of limiting them. In particular, the example given of someone arguing that women would be better if they were controlled by men does not seem to be at all typical of the kinds of discussions that usually occur in EA and hence of the kinds of discussions that likely motivated Making Discussions Inclusive or that would be limited were the advice in that document to be followed.

Why Open Debate is Important For Truth

We also note that people do actually come into EA groups and challenge the basic ideas of EA (Three Biases that Made Me Believe in AI Risk [EA · GW], Charity vs. Revolution [EA · GW]). This is a good thing as it forces us to further refine our ideas. Now, of course, we have to ensure that such discussions don’t drive out other discussion by sucking up all the time, but we ought to engage with sincere criticism, even if it is tiring. An attitude where we expect others to automatically agree with us is unlikely to be the most effective for persuading people over the long term.

While Both Matter, Truth is More Important than Inclusivity

We accept that having people feel comfortable is valuable in and of itself. However, we don’t consider this to be the primary purpose of effective altruism. Most charity is about the giver and allowing them to feel like they are making a significant difference even when they are not. In contrast, EA is about being effective and that necessarily involves having a true understanding of things as they are. Indeed, many standard EA views, such as the importance of AI safety or that wild animal suffering matters, seem outrageous to most people at first. It would seem arrogant to suppose that there aren’t any issues in which we are in a similar position; i.e. causes that sound outrageous, but are honestly incredibly important to pursue (see crucial considerations and Cause X). For this reason we should be wary about ruling things out preemptively.

Alienation is More Complex than it Might Appear

We agree that humans are often irrational and that power structures/dynamics have some effect on the way that discussions play out. However, we think the reductive approach taken in Making Discussions Inclusive considerably overstates their impact, for the following reasons:

Firstly, if the goal is to rebalance conversations in order to make them more objective, we need to specifically consider conversational power, instead of power in general. Advocates of social justice have recently been unusually successful in limiting speech compared to other ideologies (see RIP culture war thread for a recent example). We might therefore be tempted to conclude that they have a disproportionate amount of conversational power and that any attempt at rebalancing would involve reducing the voice of these advocates (this should not be interpreted as support of rebalancing in the first place).

Secondly, the power of the speaker heavily depends on the specific circumstances. Even though rich people tend to have more power and social status than the poor, due to a desire to favour the underdog, particular audiences may be heavily biased towards the latter to the point of being completely dismissive of the former. Similarly, it is plausible that a man questioning both a man and a woman equally aggressively would be more likely to be seen as a bully in the case of the woman, because that would fit more in line with society’s preconceptions.

Lastly, the received view of power relations is significantly outdated. Even though historically men have been granted more authority than women, the influence of feminism and social justice means that in many circumstances this has been mitigated or even reversed. For example, Gornall and Strebulaev (2019) found that when evaluators were blinded to the race and sex of applicants, it became apparent that by default they had been biased against white men. We acknowledge that there are other circumstances where society is still biased towards men, but we caution against turning this into a blanket assumption, even though it may have been more appropriate in the past. Taking this further, there is a negative selection effect: the more a group is disempowered and could benefit from having its views given more consideration, the less likely it is to have the power to make this happen.

So the power relations which the authors of Making Discussions Inclusive want to correct are much less clearly defined than they might appear at first glance. But even if we were to accept their premises, limiting debate still wouldn’t necessarily be a good choice. Why not?

Firstly, some people choosing not to participate tends to be less harmful to the quality of discussion than censorship as the latter prevents exposure to a set of viewpoints completely, while the former only reduces its prominence. Even when there is a cost to participating, someone who considers the topic important enough can choose to bear it and one strong advocate by themselves is often sufficient to change people’s minds (especially within the Effective Altruism community where steel-manning tends to be admired).

Secondly, the concerns here are mostly around people choosing not to participate because of the effort required or because the discussion makes them uncomfortable. This is generally less worrying than people declining to participate because of long-term reputational risk, since it is much easier to bear short-term costs in order to make an important point than long-term ones.

Thirdly, even if limiting particular discussions would clearly be good, once we’ve decided to limit discussions at all, we’ve opened the door to endless discussion and debate about what is or is not unwelcoming (see Moderator’s Dilemma). And ironically, these kinds of discussions tend to be highly partisan, political and emotional. In fact, we could go so far as to say that they tend to make people on both sides feel more unwelcome: one side feels like it is being pushed out, while the other side feels that their perspectives aren’t being taken seriously (and the visibility of the debate keeps the issue at the front of their minds).

This is a difficult topic to broach as some people may find this discussion alienating in itself, but selection effects are at the core of this discussion and the evaluation of its impact. It is important to remember that these comments relate to group averages and not to individuals. Just because someone ends up leaving EA because of discomfort with certain discussions doesn’t mean that they are described by all or even any of the qualities listed below.

One of the strongest effects is that the people who leave as a result of certain ideas being discussed are much less likely to be committed EAs. The mechanism here is simple: the more committed you are to a cause, the more you are willing to endure for it. We agree with CEA that committed EAs are several times more valuable than those who are vaguely aligned, so we should optimise the movement for attracting committed members.

Secondly, while we all have topics on which our emotions get the better of us, those who leave are likely to be overcome to a greater degree and on a wider variety of topics. This means that they will be less likely to be able to contribute productively by providing reasoned analysis. But further than this, they are more likely to contribute negatively by being dismissive, producing biased analysis or engaging in personal attacks.

Thirdly, the people who leave are likely to be more ideological. There is generally an association between being more radical and being more ideological, even though there are also people who are radical without being ideological. People who are more ideological are less able to update in the face of new evidence and are also less likely to be able to provide the kind of reasoned analysis that would cause other EAs to update more towards their views.

Lastly, we note that some people feel that EA is unfriendly to those on the right, while others feel that it is unfriendly to those who support social justice. Often in these kinds of circumstances the fairest resolution is one which neither side is completely happy with. That is, we should expect some people to feel that EA is unwelcoming to them even under the optimal solution.

Experiences need to be Understood in Context

We acknowledge that people subject to a social disadvantage will tend to be much more knowledgeable about how it plays out than the average person has a reason to be. We are also aware that it can be incredibly hard for others to understand an experience from a mere verbal description, without ever having experienced it themselves. At the same time, we worry that people often fail to be objective about issues that directly concern them and can often have difficulty putting them in perspective.

We also worry that it is very easy for the experiences of a vocal minority to be presented as the experiences of a group as a whole as those who tend to have a more positive experience are less likely to have a reason to talk about it than those who have a negative experience. These effects are amplified by the incentives of journalists to focus on controversy and of activists to focus on driving change, both of whom are selective in whose voices are presented.

While historically the response to valid claims of structural inequality has often been denial, we believe that this has resulted in an over-correction where almost anything can be, and is, argued to be oppression. Examples include practising yoga, eating sushi or sitting with your legs open too wide. Even though we believe that these are only minority views within social justice, we draw the lesson that any claim of unfairness needs to be carefully analysed before deciding on whether to act.

We acknowledge that it is doubly frustrating when you are not only treated unfairly, but you also have to engage in substantial effort in order to convince others that you were treated unfairly. Unfortunately, this is a problem that is generally not very easy to solve. One response would be to automatically accept any claim of unfairness no matter how absurd it sounds to the listener. The problems with this solution are so obvious that they need not be stated. It is possible to put in an extra special effort to try to understand the perspective of another person when their experiences are substantially different from your own, but there will still be circumstances when, despite your best effort, you will still disagree with them. In these cases, there cannot be an expectation that a particular claim will automatically be accepted without having to argue for it.

Indeed, we don’t believe that this is a one-way street. These kinds of highly politicised, highly polarised discussions tend to be unpleasant for everyone involved and there are bad actors on all sides. Due to the very nature of the discussion, people on both sides are regularly attacked or have their best attempts at honest engagement cursorily dismissed.

Demographics Aren’t a Strong Indicator of Inclusiveness

We don’t believe that demographics are a very good indicator of how well members of a particular group are treated. Scott Alexander provides many examples of groups that have large numbers of women despite not being “the poster child for feminism”, including Catholics, Islamists and Trump supporters. In another post, he points out the difficulty of using such explanations for race:

For the record, here is a small sample of other communities where black people are strongly underrepresented:
Runners (3%). Bikers (6%). Furries (2%). Wall Street senior management (2%). Occupy Wall Street protesters (unknown but low, one source says 1.6% but likely an underestimate). BDSM (unknown but low) Tea Party members (1%). American Buddhists (~2%). Bird watchers (4%). Environmentalists (various but universally low). Wikipedia contributors (unknown but low). Atheists (2%). Vegetarian activists (maybe 1-5%). Yoga enthusiasts (unknown but low). College baseball players (5%). Swimmers (2%). Fanfiction readers (2%). Unitarian Universalists (1%).
Can you see what all of these groups have in common?
No. No you can’t. If there’s some hidden factor uniting Wall Street senior management and furries, it is way beyond any of our pay grades.

The authors of Making Discussions Inclusive theorise that alienating discussions are the reason why women were less likely than men to return to meetings of EA London, despite being equally likely to attend in the first place. We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the EA survey [EA · GW]. We note that there are several possible other theories. Perhaps women are more likely to have been socialised to perceive utilitarian calculations as “cold” (i.e. EA itself being the main cause of alienation, rather than any of the topics suggested in Making Discussions Inclusive). Perhaps women are less likely to be interested in the discussions that occur because they focus more on male interests (or at least what are predominantly male interests within our particular society). Perhaps EAs come off as socially awkward and this is more of a turn-off for women than men (women tend to have a greater interest in people and men a greater interest in things). The claim is not that any of these theories are necessarily correct, just that it would be premature to assume that the main cause of the gender gap is the kinds of alienating conversations discussed in Making Discussions Inclusive. So if we were to limit discussions in EA we could worsen our epistemics without actually increasing our diversity.

Further, we disagree with the focus on mere numbers, as opposed to emphasising recruiting those from these demographics who will make great effective altruists. Just as with impact, there is a power law in terms of influence. One modern-day Martin Luther King can ensure that EA takes black perspectives into account better than dozens of ordinary EAs. And any intervention that sacrifices the intellectual culture that makes EA unique is likely to turn away the best individuals of all demographics, including those which are under-represented. For this reason, interventions to increase representation can easily backfire when representation is measured in terms of how much consideration certain viewpoints receive, rather than the mere number of people from a particular demographic.

The Limits of the Narrative of Oppression

The authors of Making Discussions Inclusive discuss the argument that women might not be “oppressed” on average in society as an example of an alienating discussion. Indeed, some people might find the following discussion alienating, but we believe that engaging with the object-level issue is the only way to adequately respond to these assertions.

Firstly, this is an incredibly controversial statement that is only held by a minority of people. It wouldn’t just be rejected by most conservatives, but by many moderates and liberals as well. We note that the statement wasn’t just that women face certain disadvantages, or that women face more disadvantages than men, but that the level of disadvantage is such that it could fairly be labelled “oppression”. This further assumes that men, rather than bad luck (as in the case of pregnancy affecting women’s careers), are the main cause of these disadvantages, and that such “oppression” is ongoing, rather than inequalities being the result of time-lag effects.

Secondly, we note that this isn’t the kind of statement that is easy to evaluate. It requires attempting to weigh hundreds of different advantages and disadvantages against each other. We agree with the authors of Making Discussions Inclusive that members of a group have a strong awareness of the disadvantages they face and a much weaker ability to understand those faced by others. If we apply that here, it would seem to warrant caution in jumping too quickly to the conclusion that one group is advantaged over another.

Thirdly, we note that insisting that no-one challenge the received feminist position on this subject could very well be considered alienating as well. It is worthwhile considering the example of Atheism Plus, an attempt to insist that atheists also accept the principles of social justice. This was incredibly damaging and destructive to the atheist movement due to the infighting that it led to, and was perhaps partly responsible for the movement’s decline.

Fourthly, while we acknowledge that the denial of oppression has been used to justify mistreatment, so has the assertion of oppression. Nazis saw themselves as being oppressed by Jews, white supremacists as oppressed by political elites, terrorists as oppressed by the government. Any group can construct a narrative of oppression and these should not be accepted uncritically given how appealing such false narratives can be.

Fifthly, while we sympathise with many of the concerns expressed, we don’t believe that they are unique to one side. People regardless of ideology face the difficult choice of engaging in an unpleasant discussion or allowing views that they believe to be wrong and harmful to go unchallenged. People on all sides are frustrated by people who stick within the norms of politeness, but seem to be exceptionally stubborn nonetheless. People on all sides are frustrated by debates that don’t leave anyone closer to truth. This is alienating for everyone, not just for particular groups.

Why Truth is of Paramount Importance for EA

EA is a nascent field; we should expect our understanding of many things to change dramatically over time, in potentially unpredictable ways. This makes banning or discouraging topics harmful, even when they seem irrelevant, because we don’t know which ones could come to be important.

Fortunately, there are some examples that make this clear. Making Discussions Inclusive provides a list of things that we should not discuss (or at least should be very wary of discussing). We will argue that there are actually very good reasons for EAs to discuss these topics. Even in cases where it would not be reasonable to dispute the statement as given, we suggest that people may often be accused of rejecting these statements when they actually believe something much more innocent. Here are the examples:

“Whether poor people are poor due to having a lower IQ”

We doubt that many people believe that all poor people are stupid or even that poor people are generally stupid. We think that most people recognise that many have grown up in a difficult environment or been denied opportunities. On the other hand, it is important to discuss the relation between intelligence and poverty, in order to discuss the ways in which our society fails people of low intelligence. As more of the simpler jobs become automated, there are legitimate concerns that there won’t be anything for many people to do, which may leave many people in a dire position.

“Whether it is or has been right or necessary that women have less influence over the intellectual debate and less economic and political power”

We doubt that many EAs honestly believe that women are intrinsically deserving of less influence or economic power. On the other hand, it is quite reasonable to believe: a) personal choices such as whether to take a high paying job with long hours or whether to be a stay home parent affect a person’s influence or earnings; b) it is perfectly valid for an individual to make this choice, even though this may result in one group having more power than another; and c) even though these choices are subject to social influences which we can seek to reduce, they are still predominantly free choices that ought to be respected.

"Justifying that the group should be inclusive for them"

This argument sounds like it could very easily be used in an attempt to circumvent debate about whether a particular measure to increase inclusiveness is justified or not.

“Whether people in the developing world are poor because of character flaws”

At first glance this might seem an innocuous restriction. However, there are many valuable research projects into the causes of and solutions to poverty that would be undermined by this:

Even if these discussions don’t fall afoul of the proposed measures, these kinds of considerations are less likely to be raised if they are right on the edge of the Overton Window.

Suggestions

This document is about EA spaces in general and not just about the Diversity and Inclusion Group. While it is up to the members of each group to decide on its policies, we make the following recommendations:

Any comments by this account on the post are personal opinion, rather than some kind of group consensus.

39 comments

Comments sorted by top scores.

comment by Aidan O'Gara · 2019-03-14T02:51:57.396Z · score: 56 (25 votes) · EA · GW

My 2 cents: Nobody's going to solve the question of social justice here, the path forward is to agree on whatever common ground is possible, and make sure that disagreements are (a) clearly defined, avoiding big vague words, (b) narrow enough to have a thorough discussion, and (c) relevant to EA. Otherwise, it's too easy to disagree on the overall "thumbs up or down to social justice" question, and not notice that you in fact do agree on most of the important operational questions of what EA should do.

So "When introducing EA to newcomers, we generally shouldn't discuss income and IQ, because it's unnecessary and could make people feel unwelcome at first" would be a good claim to disagree on, because it's important to EA, and because the disagreement is narrow enough to actually sort out.

Other examples of narrow and EA-relevant claims that therefore could be useful to discuss: "EA orgs should actively encourage minority applicants to apply to positions"; "On the EA Forum, no claim or topic should be forbidden for diversity reasons, as long as it's relevant to EA"; or "In public discussions, EAs should make minority voices welcome, but not single out members of minority groups and explicitly ask for their opinions/experiences, because this puts them in a potentially stressful situation."

On the other hand, I think this conversation has lots of claims that are (a) too vague to be true or false, (b) too broad to be effectively discussed, or (c) not relevant to EA goals. Questions like this would include "Are women oppressed?", "Is truth more important than inclusivity?", or "Is EA exclusionary?" It's not obvious what it would really mean for these to be true or false, you're unlikely to change anyone's mind in a reasonable amount of time, and their significance to EA is unclear.

My guess is that we all probably agree a lot on specific operationalized questions relevant to EA, and disagree much more when we abstract to overarching social justice debates. If we stick to specific, EA-relevant questions, there's probably a lot more common ground here than there seems to be.

comment by Freethinkers In EA · 2019-03-14T15:52:30.346Z · score: 16 (7 votes) · EA · GW

I'm in favour of operationalization and avoiding politics (and I suspect the other collaborators would be as well). However, I suspect that those coming from a social justice perspective would feel that limiting the discussion in this way would be unfair to them. The kinds of arguments they might make for minority applicants deserving AA for EA roles, for example, would most likely be based upon a claim of massive, ongoing, systematic disadvantage and exclusion both in society itself and EA. There are other arguments that they could make, but they'd still probably feel that we had excluded a pillar of their main argument for many claims by fiat. For this reason, addressing these issues seems unavoidable.

comment by Jon_Behar · 2019-03-14T14:46:06.591Z · score: 29 (15 votes) · EA · GW
The authors of Making Discussions Inclusive theorise that alienating discussions are the reason why women were less likely than men to return to meetings of EA London, despite being equally likely to attend in the first place. We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the EA survey [EA · GW]. We note that there are several possible other theories… The claim is not that any of these theories are necessarily correct, just that it would be premature to assume that the main cause of the gender gap is the kinds of alienating conversations discussed in Making Discussions Inclusive.

EA London recently published a 2018 Impact Report with a whole appendix on diversity issues, which discusses this issue directly and strongly suggests alienating conversations/behavior are a very real issue. Key excerpts (emphasis added):

I’ve included an appendix on our Holiday/EA Unconference to capture some of the negative feedback we received. To my memory, this is the event we received the worst feedback on and the feedback is often related to diversity and inclusion, which seems like the most prominent theme in the negative feedback we both received and sought throughout the year.
From 31/8 to 3/9, EA London held a Holiday/EA Unconference at the EA Hotel for 26 guests (plus 3 organisers). Some guests had also attended one/both of the retreats we held at the hotel in the previous week (Life Review Weekend and Careers Week).
Guests were asked to complete a feedback form on the final evening which included the question “Was there anything that made you feel uncomfortable or unwelcome during the event? (no need to write an answer if no)”. Of the 17 people who completed the form, 6 answered this question, and the 4 answers that mention gender follow:
· “On the Friday evening as people began to arrive I felt that as the members of the group changed, the vibe changed and I felt that it was too 'male Oxbridge graduate' (which I seem to find harder to connect with).”
· “It was quite a lot more male than the careers week, which was very noticeable and led to different topics to be discussed. I felt having a more diverse EA crowd in the careers week was more welcoming and relaxed (even as a corduroy wearing straight white man!) and it would be worth considering how to attract diverse representation.” (Compare to this person’s response to the question “Anything else you want to say?” on the last day of Careers Week: “I think this was one of the most diverse EA events I have been to in terms of having a lot of people with different backgrounds, ideologies (and more women!). I enjoyed this and it made the event feel welcoming.”)
· “Yes, I believe that insufficient time was taken on the code of conduct, and in a situation where someone made a somewhat sexist comment I wasn't really sure what to do, because I don't think we discussed what was/wasn't acceptable conversation and what to do in a situation where someone felt uncomfortable.”
· “We might've needed peaceful and open icebreakers (shy people in mind!!) or a welcome session with some notes on mental health and inclusiveness to create a more welcoming atmosphere. Many arrived without doing proper hellos, and the first night people cliqued up without much mingling, and this was very different from the previous 2 retreats. Also, some women and queer men felt excluded by the way that the suddenly majority-male crowd naturally behaved (pushed aside, talked over, not said hello to etc.)”

This narrative is also consistent with the EA Survey data on welcomingness, which found women rated EA as less welcoming than men [EA · GW] to a statistically significant degree. (The high rating for welcomingness across the whole EA Survey seems much less relevant, as those results will by definition largely reflect the beliefs of demographics with the highest representation.)

A few other notes:

· I’d guess the vast majority of behavior that’s perceived as unwelcoming wasn’t intended as unwelcoming.

· I doubt women are the only underrepresented EA demographic that feels unwelcome. For example, I have a strong prior that conservatives wouldn’t feel very welcome interacting with the EA community (center left + left outnumbers center right + right by ~17x [EA · GW]) and that this is problematic.

· Tip of the hat to EA London for writing up their experiences so others can learn from them. I wish the next group running an EA retreat had access to a consolidated resource with synthesized lessons from other groups’ experience, and practical examples of how (not) to promote an inclusive, truth-seeking culture; to the best of my knowledge this doesn’t exist.

· I wholeheartedly agree with @Aidan O’Gara’s call to operationalize discussions of this nature as much as possible. Simply distinguishing between “issues that are relevant to EA” and “good issues to discuss at an intro to EA event” would go a long way toward helping people not talk past each other.

comment by xccf · 2019-03-14T22:51:58.090Z · score: 18 (11 votes) · EA · GW

Thanks for the info.

The way I'm reading these excerpts, only one refers to an alienating conversation of the sort discussed in Making Discussions Inclusive (the one about the "somewhat sexist" comment).
The other three seem like complaints about the "vibe", which feels like a separate issue. (Not saying there's nothing to do, just that Making Discussions Inclusive doesn't obviously offer suggestions.) Indeed, there could even be a tradeoff: Reading posts like Making Discussions Inclusive makes me less inclined to talk to women and queer men, because I think to myself "it'd be very easy for me to accidentally say something that would upset them... probably best to avoid opening my mouth at all, so I don't screw things up for the entire EA movement."

comment by Jon_Behar · 2019-03-16T15:24:27.790Z · score: 3 (3 votes) · EA · GW

The comment about how the gender imbalance “led to different topics to be discussed” might (or might not) reflect alienating conversations, but I agree with your general point that the survey quotes are more about the "vibe". I think the quotes suggest that simple things like running icebreakers and saying hi to people (whether or not they are women and/or queer) can be really valuable.

comment by toonalfrink · 2019-03-14T17:38:07.722Z · score: 12 (7 votes) · EA · GW

Appreciate the data!

comment by TruePath · 2019-03-14T17:38:00.233Z · score: 7 (3 votes) · EA · GW

The parent post already responded to a number of these points but let me give a detailed reply.

First, the evidence you cite doesn't actually contradict the point being made. Just because women rate EA as somewhat less welcoming doesn't mean that this is the reason they return at a lower rate. Indeed, the alternate hypothesis, that the same factors which make women less likely to be attracted to EA in the first place also explain the lower return rate, seems quite plausible.

As for the quotes, we can ignore the people simply agreeing that something should be done to increase diversity and talk about the specific reactions. I'll defer the one about reporting a sexist remark till the end and focus on the complaints about the environment. These don't seem to be complaints alleging any particular animus or bad treatment of women or other underprivileged groups; they are merely people expressing a distaste for the kind of interactions they associate with largely male groups. However, other people do like that kind of interaction, so, like the question of what to serve for dinner or whether alcohol should be served, you can't please everyone. While it's true that in our society there is a correlation between male gender and a preference for a combative, interrupting, challenging style of interaction, there are plenty of women who also prefer this interaction style (and in my own experience at academic conferences gay men are just as likely as straight men to behave this way). Indeed, the argument that it's anti-woman to interact in a way that involves interrupting, when some women do prefer this style, is the very kind of harmful gender essentialism that we should be fighting against.

Of course, I think everyone agrees that we should do what we can to make EA more welcoming *when that doesn't impose a greater cost than benefit.* Ideally, there would be parts of EA that appeal to people with every kind of preferred interaction style, but there are costs in terms of community cohesion, resources, etc.

The parent was arguing, persuasively imo, that imposing many of the suggested reforms would impose substantial costs elsewhere, not that they might not improve diversity or offer benefits to some people. I don't see you making a persuasive case that the costs cited aren't very real or that the benefits outweigh them.

This finally brings us to the complaint about where to report a sexist comment. While I think no one disagrees that we should condemn sexist comments, creating an official reporting structure with disciplinary powers is just begging to get caught up in the moderator's dilemma and create strife and argument inside the community. Better to leave that to informal mechanisms.

comment by Khorton · 2019-03-14T18:08:09.979Z · score: 10 (10 votes) · EA · GW

I definitely prefer being in gender-balanced settings to being the only woman in a group of men, so I agree that's a preference. You seem to be suggesting that if it's a preference, it's not the cause of our homogeneity, but I think the preference to be near similar people is a good explanation for why EA isn't very diverse. (cf Thomas Schelling's work on informal segregation)
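The Schelling mechanism referenced here can be made concrete with a toy simulation (illustrative only; the parameters below, such as the 30% same-type threshold and the 1-D grid, are arbitrary choices for the sketch, not anything from Schelling's actual papers). The point it demonstrates is that even a mild preference to have *some* similar neighbors, far short of wanting a majority, is enough to produce a heavily sorted population over time.

```python
import random

def neighbor_same_fraction(cells, i, radius):
    """Fraction of occupied neighbors within `radius` sharing cell i's type."""
    t = cells[i]
    neigh = [cells[j]
             for j in range(max(0, i - radius), min(len(cells), i + radius + 1))
             if j != i and cells[j] is not None]
    if not neigh:
        return 1.0
    return sum(x == t for x in neigh) / len(neigh)

def mean_same_fraction(cells, radius):
    """Average same-type neighbor fraction over all occupied cells."""
    occupied = [i for i, c in enumerate(cells) if c is not None]
    return sum(neighbor_same_fraction(cells, i, radius) for i in occupied) / len(occupied)

def run_schelling(n=200, empty=20, threshold=0.3, radius=2, steps=50, seed=1):
    """1-D Schelling model: agents of types A and B are 'unhappy' if fewer
    than `threshold` of their neighbors share their type, and unhappy agents
    relocate to a random empty cell. Returns mean same-type neighbor
    fraction before and after the sweeps."""
    rng = random.Random(seed)
    cells = (['A', 'B'] * (n // 2))[:n - empty] + [None] * empty
    rng.shuffle(cells)
    before = mean_same_fraction(cells, radius)
    for _ in range(steps):
        for i in range(n):
            if cells[i] is not None and neighbor_same_fraction(cells, i, radius) < threshold:
                empties = [j for j, c in enumerate(cells) if c is None]
                j = rng.choice(empties)
                cells[j], cells[i] = cells[i], None  # agent moves from i to j
    after = mean_same_fraction(cells, radius)
    return before, after
```

Starting from a roughly 50/50 random mix, the final same-type neighbor fraction ends up noticeably higher than the initial one, even though no agent demands a same-type majority. This is Schelling's core result: strong macro-level homogeneity can emerge from weak individual preferences, which is why a mild "preference to be near similar people" can still explain a lack of diversity.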

comment by Jon_Behar · 2019-03-16T15:25:50.809Z · score: 4 (4 votes) · EA · GW

I intentionally avoided commenting on the OP’s broader claims as I’m squarely in the “Nobody's going to solve the question of social justice here” camp (per @Aidan O’Gara). I only meant to comment on the narrow issue of EA London’s gender-related attendance dynamics, to try and defuse speculation by pointing people to relevant data that’s available. In retrospect, I probably should have just commented on the thread about women being less likely to return to EA London meetups instead of this one, but here we are.

I think the quotes from the surveys offer important insights, and that it’d be bizarre to try to understand how EA London’s events are perceived without them. I didn’t claim they offer a definitive explanation (just one that’s more informed than pure intuition), and I certainly didn’t argue we should start restricting discussions on lots of important topics.

Actually, one of my biggest takeaways from the survey quotes is that there’s low-hanging fruit available, opportunities to make EA more inclusive and better at seeking truth at the same time. The cost/benefit profile of (for example) an icebreaker at a retreat is extremely attractive. It makes people feel more welcome, it builds the sort of trust that makes it easier to have conversations on controversial topics, and it makes those conversations better by inviting a broader range of perspectives. Even if you hate icebreakers (like I do), based on the survey data they seem like a really good idea for EA retreats and similar events.

comment by aarongertler · 2019-03-18T08:03:05.226Z · score: 19 (11 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

There were some things I liked about this post, but my comments here will mostly involve areas where I disagree with something. Still, criticism notwithstanding:

  • I appreciate the moves the post makes toward being considerate (the content note, the emphasis on not calling out individuals).
  • Two points from the post that I think are generally correct and somewhat underrated in debates around moderation policy: You can't please everyone, and power relations within particular spaces can look very different than power relations outside of those spaces. This also rang true (though I consider it a good thing for certain "groups" to be disempowered in public discussion spaces):
There is a negative selection effect in that the more that a group is disempowered and could benefit from having its views being given more consideration, the less likely it is to have to power to make this happen.
  • The claim that we should not have "limited discussions" is closing the barn door after the horse is already out. The EA Forum, like almost every other discussion space, has limits already. Even spaces that don't limit "worldly" topics may still have meta-limits on style/discourse norms (no personal attacks, serious posts only, etc.). Aside from (maybe?) 4Chan, it's hard to think of well-known discussion spaces that truly have no limits. For example, posts on the EA Forum:
    • Can't advocate the use of violence.
    • Are restricted in the types of criticism they can apply: "We should remove Cause X from EA because its followers tend to smell bad" wouldn't get moderator approval, even if no individually smelly people were named.

--

While I don't fully agree with every claim in Making Discussions Inclusive, I appreciated the way that its authors didn't call for an outright ban on any particular form of speech -- instead, they highlighted the ways that speech permissions may influence other elements of group discussion, and noted that groups are making trade-offs when they figure out how to handle speech.

This post also mostly did this, but occasionally slipped into more absolute statements that don't quite square with reality (though I assume one is meant to read the full post while keeping the word "usually" in mind, to insert in various places). An example:

We believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with.

This seems simplistic. The reality of "exclusion" depends on which beliefs are held, which beliefs are exposed, and the overall context of the conversation. I've seen conversations where someone shoehorned their "sincerely held beliefs" into a discussion to which they weren't relevant, in such an odious way that many people who were strained on various resources (including "time" and "patience") were effectively forced out. Perhaps banning the shoehorning user would have excluded them to a “greater degree”, but their actions excluded a lot of people, even if to a “lesser degree”. Which outcome would have been worse? It’s a complicated question.

I'd argue that keeping things civil and on-topic is frequently less exclusionary than allowing total free expression, especially as conversations grow, because some ideas/styles are repellent to almost everyone. If someone insists on leaving multi-page comments with Caps Lock on in every conversation within a Facebook group, I'd rather ask them to leave than ask the annoyed masses to grit their teeth and bear it.

This is an extreme example, of course, so I'll use a real-world example from another discussion space I frequent: Reddit.

On the main Magic: The Gathering subreddit, conversations about a recent tournament winner (a non-binary person) were frequently interrupted by people with strong opinions about the pronoun "they" being "confusing" or "weird" to use for a single person.

This is an intellectual position that may be worth discussing in other contexts, but in the context of these threads, it appeared hundreds of times and made it much more tedious to pick out actual Magic: The Gathering content. Within days, these users were being kicked out by moderators, and the forum became more readable as a result, to what I'd guess was the collective relief of a large majority of users.

--

The general point I'm trying to make:

"Something nearly everyone dislikes" is often going to be worth excluding even from the most popular, mainstream discussion venues.

In the context of EA, conversations that are genuinely about effective do-gooding should be protected, but I don't think several of your examples really fit that pattern:

  • Corruption in poor countries being caused by "character flaws" seems like a non sequitur.
    • When discussing ways to reduce corruption, we can talk about history, RCT results, and economic theory -- but why personal characteristics?
    • Even if it were the case that people in Country A were somehow more "flawed" than people in Country B, this only matters if it shows up in our data, and at that point, it’s just a set of facts about the world (e.g. “government officials in A are more likely to demand bribes than officials in B, and bribery demands are inversely correlated with transfer impact, which means we should prefer to fund transfers in B”). I don't see the point of discussing the venality of the A-lish compared to the B-nians separately from actual data.
  • I think honest advocates for cash-transfer RCTs could quite truthfully state that they aren't trying to study whether poor people are "lazy". Someone's choice not to work doesn't have to be the target of criticism, even if it influences the estimated benefit of a cash transfer to that person. It's also possible to conclude that poor people discount the future without attaching the "character flaw" label.
    • Frankly, labels like this tend to obscure discussion more than they help, by obscuring actual data and creating fake explanations ("poor people don't care as much about the future, which is bad" < "poor people don't care as much about the future, but this is moderated by factors A and B, and is economically rational if we factor in C, and here's a model for how we can encourage financial planning by people at different income levels").
    • The same problem applies to your discussion of female influence and power; whether or not a person's choices have led them to have less power seems immaterial to understanding which distributions of power tend to produce the best outcomes, and how particular policies might move us toward the best distributions.

To summarize the list of points above: In general, discussions of whether a state of the world is "right", or whether a person is "good" or "deserving", don't make for great EA content. While I wouldn't prohibit them, I think they are far more tempting than they are useful, and that we should almost always try to use "if A, then B" reasoning rather than "hooray, B!" reasoning.

Of course, "this reasoning style tends to be bad" doesn't mean "prohibit it entirely". But it makes the consequence of limiting speech topics seem a bit less damaging, compared to what we could gain by being more inclusive. (Again, I don’t actually think we should add more limits in any particular place, including the EA Forum. I’m just pointing out considerations that other EAs might want to make when they think about these topics.)

comment by Freethinkers In EA · 2019-03-18T14:46:50.084Z · score: 4 (2 votes) · EA · GW

"The claim that we should not have "limited discussions" is closing the barn door after the horse is already out." - Some discussions about what should or should not be allowed to be discussed are much more politicised than others and hence much more damaging. (In case it seems like I've contradicted myself here, my point is not that discussions about what should be allowed to be discussed should themselves be banned, merely that a serious movement towards banning particular ideas encourages more of these adversarial discussions, and hence the more positive and welcoming environment that is typically promised rarely materialises.)

"We believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with." - This only referred to the degree of exclusion that would be experienced by a single individual for one of those two options. Obviously, the number of people affected is important as well as you point out.

Perhaps it is possible to discuss those issues while dodging political landmines, but there's also the worry about people being less willing to share views too close to the edge of the Overton Window.

comment by aarongertler · 2019-03-14T02:32:47.931Z · score: 15 (10 votes) · EA · GW

In general, for posts like this that lay out an argument point by point, I'd strongly recommend adding section headers (highlight text in your editor and click the "T" button to create a header). This will give you a cool floating table of contents next to your post and make it easier to navigate.

(To see what this would look like, see this post [EA · GW] for one example.)

comment by Freethinkers In EA · 2019-03-14T15:59:44.812Z · score: 2 (2 votes) · EA · GW

Good suggestion

comment by alexrjl · 2019-03-14T07:03:25.761Z · score: 12 (12 votes) · EA · GW

"Even though historically men have been granted more authority than women, influence of feminism and social justice means that in many circumstances this has been mitigated or even reversed. For example, studies like Gornall and Strebulaev (2019) found that blinding evaluators to the race or sex of applicants showed that by default they were biased against white men."

That is an unreasonably strong conclusion to draw from the study you've cited, not least given that even in the abstract of that study the authors make it extremely clear that "[their] experimental design is unable to capture discrimination at later stages [than the initial email/expression of interest stage]". https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3301982

[Edited for tone]

comment by toonalfrink · 2019-03-14T17:50:10.186Z · score: 6 (4 votes) · EA · GW

Downvoted because I felt that the "though not linked to" and the hyperboles in your comment suggest that you're coming from a subtly adversarial mindset

(I'm telling you this because I like to see more people explain their downvotes. They carry great information value. No bad feels!)

comment by alexrjl · 2019-03-14T20:18:01.511Z · score: 6 (8 votes) · EA · GW

I looked for the study because I was surprised by the strength of the statement it was used to support. When I found the study, I was annoyed to find that actually, it doesn't come close to supporting a claim of the strength made. This annoyance prompted the tone of the original post, which you have characterised fairly and was a mistake. I've now edited this out, because I don't want it to distract from the claim I am making:

The study does not support the claim that is it is being used to support.

comment by Freethinkers In EA · 2019-03-15T01:23:24.804Z · score: 3 (4 votes) · EA · GW

I'll concede that the post could definitely be better than it is, and as the primary author I take responsibility for the post being somewhat light on references. However, part of this was that the post this is responding to received many comments by people with views similar to mine, so I updated towards this post being lower priority and decided to just publish what we had.

comment by aarongertler · 2019-03-18T08:04:57.545Z · score: 10 (10 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

Even when there is a cost to participating, someone who considers the topic important enough can choose to bear it.

This isn't always true, unless you use a circular definition of "important". As written, it implies that anyone who can't bear to participate must not consider the topic "important enough", which is empirically false. Our capacity to do any form of work (physical or mental) is never fully within our control. The way we react to certain stimuli (sights, sounds, ideas) is never fully within our control. If we decided to render all the text on the EA Forum at a 40-degree angle, we'd see our traffic drop, and the people who left wouldn't just be people who didn't think EA was sufficiently "important".

In a similar vein:

The more committed [you are] to a cause, the more you are willing to endure for it. We agree with CEA that committed EAs are several times more valuable than those who are vaguely aligned, so that we should [be] optimising the movement for attracting more committed members.

Again, this is too simplistic. If we could have 100 members who committed 40 hours/week or 1000 members who committed 35 hours/week, we might want to pursue the second option, even if we weren't "optimizing for attracting more committed members". (I don't speak for CEA here, but it seems to me like "optimize the amount of total high-fidelity and productive hours directed at EA work" is closer to what the movement wants, and even that is only partly correlated with "create the best world we can".)

You could also argue that "better" EAs tend to take ideas more seriously, that having a strong negative reaction to a dangerous idea is a sign of seriousness, and that we should therefore be trying very hard to accommodate people who have reportedly had very negative reactions to particular ideas within EA. This would also be too simplistic, but there's a kernel of truth there, just as there is in your statement about commitment.

Even if limiting particular discussions would clearly be good, once we’ve decided to limit discussions at all, we’ve opened the door to endless discussion and debate about what is or is not unwelcoming (see Moderator’s Dilemma). And ironically, these kinds of discussions tend to be highly partisan, political and emotional.

The door is already open. There are dozens of preexisting questions about which forms of discussion we should permit within EA, on specifically the EA Forum, within any given EA cause area, and so on. Should we limit fundraising posts? Posts about personal productivity? Posts that use obscene language? Posts written in a non-English language? Posts that give investing advice? Posts with graphic images of dying animals? I see "posts that discuss Idea X" as another set of examples in this very long list. They may be more popular to argue about, but that doesn't mean we should agree never to limit them just to reduce the incidence of arguments.

We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the EA survey. We note that there are several possible other theories.

I don't think the authors of the Making Discussions Inclusive post would disagree. I don't see any conclusion in that post that alienating discussions are the main factor in the EA gender gap; all I see is the claim, with some evidence from a poll, that alienating discussions are one factor, along with suggestions for reducing the impact of that particular factor.

It is worthwhile considering the example of Atheism Plus, an attempt to insist that atheists also accept the principles of social justice. This was incredibly damaging and destructive to the atheist movement due to the infighting that it led to and was perhaps partly responsible for the movement’s decline.

I don't have any background on Atheism Plus, but as a more general point: Did the atheism movement actually decline? While the r/atheism subreddit is now ranked #57 by subscriber count (as of 13 March 2019) rather than #38 (4 July 2015), the American atheist population seems to have been fairly flat since 1991, and British irreligion is at an all-time high. Are there particular incidents (organizations shutting down, public figures renouncing, etc.) that back up the "decline" narrative? (I would assume so, I'm just unfamiliar with this topic.)

comment by Khorton · 2019-03-14T08:18:18.414Z · score: 6 (5 votes) · EA · GW

"it usually isn’t actually very important if someone is wrong on the internet" It USUALLY isn't, but certain perspectives published in public spaces (eg Facebook groups with thousands of people) and left unchallenged are a PR risk.

comment by Freethinkers In EA · 2019-03-14T16:15:06.907Z · score: 13 (8 votes) · EA · GW

PR risk is another whole topic by itself and there are some tough questions here. One comment though: We need to be wary that acting to prevent PR damage can actually encourage more people to put pressure on you as they've seen that you are vulnerable.

comment by toonalfrink · 2019-03-14T18:04:48.025Z · score: 9 (3 votes) · EA · GW

I'm glad that someone mentions this. I have a strong alief that misrepresenting your opinions to be more palatable is a bad idea if you're right. It pulls you into a bad equilibrium.

If you preach the truth, you might lose the respect of those who are wrong, but you will gain the respect of those who are right, and those people are the ones you want in your community.

Having said that, you really do have to be right, and I feel like not even EAs are up to the herculean task of clearly seeing outside of their political intuitions. I for one have so far been wrong about many things that felt obvious to me.

I guess that's why we focus on meta truth instead. It seems that the rules for arriving at truth are much more easily described than the truth itself.

comment by Liam_Donovan · 2019-03-15T04:43:45.269Z · score: 4 (4 votes) · EA · GW

Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.

comment by Freethinkers In EA · 2019-03-15T19:21:51.821Z · score: 5 (2 votes) · EA · GW

It's not necessarily as intentional as that. Some people have certain political goals. They can pursue those goals co-operatively, by engaging people in civil discussion, or adversarially, by protesting and creating negative publicity. If the latter tends to be successful, a greater proportion of people will be drawn towards it. Is that clearer?

comment by Khorton · 2019-03-15T19:26:31.973Z · score: 3 (5 votes) · EA · GW

Not for me! I really don't understand what you mean.

comment by Gregory_Lewis · 2019-03-16T02:44:03.880Z · score: 17 (9 votes) · EA · GW

I think I get the idea:

Suppose (heaven forbid) a close relative has cancer, and there's a new therapy which fractionally improves survival. The NHS doesn't provide it on cost-effectiveness grounds. If you look around and see the NHS often provides treatment it previously ruled out if enough public sympathy can be aroused, you might be inclined to try the same. If instead you see it is pretty steadfast ("We base our allocation on ethical principles, and only change this when we find we've made a mistake in applying them"), you might not be - or at least you might change your strategy to show the decision the NHS has made for your relative is unjust rather than merely unpopular.

None of this requires you to be acting in bad faith or looking for ways of extorting the government - you're just trying to do everything you can for a loved one (the motivation of pharmaceutical companies that sponsor patient advocacy groups may be less unalloyed). Yet (ideally) the government wants to encourage protests that highlight a policy mistake, and discourage protests against decisions where it has done the right thing for its population but against the interests of a powerful/photogenic/popular constituency. 'Caving in' to the latter type pushes in the wrong direction.

(That said, back in EA-land, I think a lot things that are 'PR risks' for EA look bad because they are bad (e.g. in fact mistaken, morally abhorrent, etc.), and so although PR considerations aren't sufficient to want to discourage something, they can further augment concern.)

comment by Khorton · 2019-03-16T09:27:46.617Z · score: 1 (1 votes) · EA · GW

Thank you!

comment by xccf · 2019-03-14T01:26:40.184Z · score: 6 (7 votes) · EA · GW
We don’t want to dismiss how frustrating it can be to see people being wrong without it being sufficiently challenged, but we also believe that people are generally capable of overcoming these challenges and learning to adopt a broader perspective from where they can see that it usually isn’t actually very important if someone is wrong on the internet.

People typically have the choice of many different communities they could become a part of. So if one community seems consistently wrong about something in a frustrating way, it's not surprising if someone chooses to move on to a different community which lacks this problem. Yes, I could overcome my frustrations with Scientologists, learn to adopt a broader perspective, and join the Scientology community, but why bother?

Even though certain rules may seem quite mild and reasonable by themselves, their mere existence creates a reasonable fear that those with certain viewpoints will eventually be completely pushed out.

How does this version sound? "Even though certain heterodox beliefs may seem quite mild and reasonable by themselves, their mere existence creates a reasonable fear that those with certain extreme viewpoints will eventually come to dominate."

comment by toonalfrink · 2019-03-14T01:35:00.654Z · score: 6 (3 votes) · EA · GW

I wonder where this fear of extreme viewpoints comes from. It seems to be a crux.

I personally don't have an alief that there is a slippery slope here. It seems to me that there are some meta rules for discussion in place that will keep this from happening.

For example, it seems to me that EAs are very keen to change their minds, take criticism and data very seriously, bring up contrarian viewpoints, and practice epistemic humility, to name a few things. I would like to call this Epistemic Honor.

Do you think that our culture of epistemic honor is insufficient for preventing extreme viewpoints, to the point that we need drastic measures like banning topics? My impression is that it's more than enough, but please prove me wrong!

comment by Khorton · 2019-03-14T10:44:53.272Z · score: 30 (12 votes) · EA · GW

In the Christians in EA group, someone (who had AFAIK never posted before) posted a 60-page document. This document outlined his theory that the best EA cause was to support the propagation of Mormonism, because any civilization based on the equality of men and women was doomed to fail (he saw Islam as another viable civilization, but inferior to Mormonism).

He wanted me to debate him point by point in his argument. I was not willing to argue with him, because it was a waste of my time.

What do you think is the best way to handle this from an 'epistemic honour' approach?

comment by kbog · 2019-03-14T23:45:09.298Z · score: 4 (3 votes) · EA · GW

As with all ideas, the best way to handle it is to go over it if you have time or interest in doing so; if not, then say that you do not have time or interest in justifying your opinion. I don't see what the dilemma is. I don't think toonalfrink was saying "you should spend a lot of time debating everyone you disagree with," nor were they saying "you shouldn't have an opinion about something unless you spend a lot of time debating it;" those aren't implied by that conception of Epistemic Honor.

Growing the Mormon church is not an extreme viewpoint.

comment by xccf · 2019-03-14T01:58:28.192Z · score: 7 (6 votes) · EA · GW

My intent was to point out that you can make the slippery slope argument in either direction. I wasn't trying to claim it was more compelling in one direction or the other.

If you believe EA has Epistemic Honor, that argument works in both directions too: "Because EA has Epistemic Honor, any rules we make will be reasonable, and we won't push people out just for having an unfashionable viewpoint."

I do think slippery slope arguments have some merit, and group tendencies can be self-reinforcing. Birds of a feather flock together. Because Scientology has a kooky reputation, it will tend to attract more kooks. See also Schelling's model of segregation and this essay on evaporative cooling.

Perhaps it's valuable to brainstorm compromise positions which guard against slipping in either direction. (Example: "Discussion that could be alienating should be allowed in EA Facebook groups if and only if the person who starts the discussion is able to convince a moderator that the topic is important enough to outweigh the costs of alienation." That idea has flaws, but maybe you can think of a better one.)

comment by toonalfrink · 2019-03-14T02:30:07.880Z · score: 0 (3 votes) · EA · GW

any rules we make will be reasonable

Nah, it does apply to itself :)

and we won't push people out for having an unfashionable viewpoint

But you think pushing them out is the right thing to do, correct?

Let me just make sure I understand the gears of your model.

Do you think one person with an unfashionable viewpoint would inherently be a problem? Or will it only become a problem when this becomes a majority position? Or perhaps, is the boundary the point where this viewpoint starts to influence decisions?

Do you think any tendency exists for the consensus view to drift towards something reasonable and considerate, or do you think that it is mostly random, or perhaps there is some sort of moral decay that we have to actively fight with moderation?

Surely, well-kept gardens die by pacifism, and so you want to have some measures in place to keep the quality of discussion high, both in the inclusivity/consideration sense and in the truth sense. I just hope that this is possible without banning topics, for most of the reasons stated by the OP. Before we start banning topics, I would want to look for ways that are less intrusive.

Case in point: it seems like we're doing just fine right now. Maybe this isn't a coincidence (or maybe I'm overlooking some problems, or maybe it's because we already ignore some topics).

comment by Freethinkers In EA · 2019-03-14T16:30:43.327Z · score: 3 (3 votes) · EA · GW

So if one community seems consistently wrong about something in a frustrating way, it's not surprising if someone chooses to move on to a different community which lacks this problem.

Indeed, however people will generally accept a certain level of frustration if you are providing sufficient value. As an example, couples often start picking up on the minor annoyances after they fall out of love. Continuing the analogy, focusing on these issues is the obvious thing to do, but it often won't be what is needed to fix the relationship.

How does this version sound? "Even though certain heterodox beliefs may seem quite mild and reasonable by themselves, their mere existence creates a reasonable fear that those with certain extreme viewpoints will eventually come to dominate."

Yes, the argument does cut both ways, but it's worth noting that we made it in response to arguments very much like this one.

comment by aarongertler · 2019-03-18T08:07:08.377Z · score: 3 (5 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

While we all have topics on which our emotions get the better of us, those who leave are likely to be overcome to a greater degree and on a wider variety of topics. This means that they will be less likely to be able to contribute productively by providing reasoned analysis. But further than this, they are more likely to contribute negatively by being dismissive, producing biased analysis or engaging in personal attacks.

I don't really care how likely someone is to be "overcome" by their emotions during an EA discussion, aside from the way in which this makes them feel (I want people in EA, like people everywhere, to flourish).

Being "overcome" and being able to reason productively seem almost orthogonal in my experience; some of the most productive people I've met in EA (and some of the nicest!) tend to have unusually strong emotional reactions to certain topics. There are quite a few EA blogs that alternate between "this thing made me very angry/sad" and "here's an incredibly sophisticated argument for doing X". There's some validity to trying to increase the net percentage of conversation that isn't too emotionally inflected, but my preference would be to accommodate as many productive/devoted people as we can until it begins to trade off with discussion quality. I've seen no evidence that we're hitting this trade-off to an extent that demands we become less accommodating.

(And of course, biased analysis and personal attacks can be handled when they arise, without our needing to worry about being too inclusive of people who are "more likely" to contribute those things.)

The people who leave are likely to be more ideological. There is generally an association between being more radical and being more ideological, even though there are also people who are radical without being ideological. People who are more ideological are less able to update in the face of new evidence and are also less likely to be able to provide the kind of reasoned analysis that would cause other EAs to update more towards their views.

See the previous point. I don't mind having ideological people in EA if they share the community's core values. If their commitment to an ideology leads them to stop upholding those values, we can respond to that separately. If they can provide reasoned analysis on Subject A while remaining incorrigibly biased on Subject B, I'll gladly update on the former and ignore the latter. (Steven Pinker disagrees with many EAs quite sharply on X-risk, but most of his last book was great!)

comment by Freethinkers In EA · 2019-03-18T14:54:45.641Z · score: -2 (5 votes) · EA · GW

We can accommodate people who have low levels of emotional control (this is distinct from feeling strong emotions) and who are more ideological. However, while it makes the group more welcoming for those individuals, it makes it less welcoming for everyone else, so it's not so clear that this results in the group being more welcoming overall, as we were promised. In any case, it helps highlight how narrow the particular conception of inclusion put forward by Making Discussions Inclusive actually is.

comment by aarongertler · 2019-03-18T20:16:37.799Z · score: 11 (6 votes) · EA · GW

1. I'd really recommend finding a different phrase than "low levels of emotional control", which is both more insulting than seems ideal for conversations in an EA context and too vague to be a useful descriptor. (There are dozens of ways that "controlling one's emotions" might be important within EA, and almost no one is "high" or "low" for all of them.)

2. "Less welcoming for everyone else" is too broad. Accommodating people who prefer some topics not be brought up certainly makes EA less welcoming for some people: Competing access needs are real, and a lot of people aren't as comfortable with discussions where emotions aren't as controlled, or where topics are somewhat limited.

But having "high emotional control" (again, I'd prefer a different term) doesn't necessarily mean feeling unwelcome in discussions with people who are ideological or "less controlled" in some contexts.

One of the features I like most in a community is "people try to handle social interaction in a way that has the best average result for everyone".

I'd consider "we figure out true things" to be the most important factor we should optimize for, and our discussions should aim for "figuring stuff out". But that's not the only important result; another factor is "we all get along and treat each other well", because there's value in EA being a well-functioning community of people who are happy to be around each other. If having a topic consistently come up in conversation is draining and isolating to some members of the community, I think it's reasonable that we have a higher bar for that topic.

This doesn't mean abandoning global poverty because people think it seems colonialist; it might mean deciding that someone's Mormon manifesto [EA · GW] doesn't pass the bar for "deserves careful, point-by-point discussion". That isn't very inclusive to the manifesto's author, but it seems very likely to increase EA's overall inclusiveness.

comment by Freethinkers In EA · 2019-03-18T23:19:45.487Z · score: 3 (2 votes) · EA · GW

This is one of those circumstances where changing the phrase would likely mean avoiding the issue. I agree that we don't want people to be unfeeling automatons and that there are circumstances when expressing even "negative" emotions like anger can be positive. At the same time, the idea that different people have different levels of emotional control seems to be a very useful model, even if it doesn't perfectly describe reality (i.e. it is context-dependent). You've already noted that some behaviours put a burden on most people - having low levels of emotional control/being ideological falls inside this category.

I'll note one argument that you could have put forward: possibly low levels of emotional control are correlated with positive characteristics, such as creativity or the ability to be enthusiastic or authentic. So perhaps a filter on this quality would be net negative.

comment by Khorton · 2019-03-19T10:40:47.672Z · score: 6 (5 votes) · EA · GW

I'm not sure what you mean by 'low emotional control.' Are you talking about people who can't control their reactions, or who can but find it tiring, or who can but choose not to?

I'm very emotional, but if someone's rude to me in the context of a government negotiation, no one would be able to tell I even heard the insult (depending - in some contexts it's strategic to assert yourself and set boundaries).

If someone's rude to me in a social context, though, they're going to get an earful! I don't get paid to take your crap, so when someone insults me, either they're going to hear about it or I'm going to leave.

So... Is that a low level of emotional control, or a high level of emotional control? What exactly are you referring to?

comment by Freer-Than-Thou · 2019-03-20T03:42:15.836Z · score: -1 (5 votes) · EA · GW

In study after study, people are very bad at predicting the experiences of other groups. I think this shows the importance of listening to other people's experiences - there's clearly a systematic gap in the average ability of men to predict what kinds of barriers and issues women face, of heterosexuals to predict what others face, etc. Because of this, I think it is entirely appropriate, as a general heuristic, to discount what someone says about another group unless they are themselves a member of that group, and likewise to be cautious about what one says about a group one does not belong to. That may be simplistic, but it seems a good first rule of thumb.

I'm pretty sure I've been EA since before some newer EAs were born. I think it's nice that there's a label, but if EA gets colonized and warped, I have no problem abandoning the label and community. Not wanting to be associated with the movement is entirely different than not being committed to the cause. You can be EA without a label and without participating in debates.

I don't participate much online because I don't have the time for that and don't think it's a way I can do the most good. I don't know how many people are like me in that respect, but since most of them will probably be quiet (as I am 999 times out of 1,000), I thought it good to speak up to be represented. I am not interested in engaging in debate on these kinds of issues, as I frankly deem it a waste of time. This comment is probably also a waste of time, and I may come to regret it. I quickly disengage from things that are both irrelevant to my actual work and attention-sucking.

I don't appreciate the suggestion that if I were truly committed and a good EA, I would be busy responding to every possible thing that I disagree with. I think I'm a better EA for disengaging and keeping busy where I can have a bigger impact. I don't think I have some special secret amazingly-wonderful-most-productive-of-all thing I'm working on - I think almost all people have something better they could be doing with their time, so it isn't at all guaranteed anyone will be left to engage with these arguments.

I also think that having fewer unproductive conversations is conducive to having more productive conversations, both because time is a scarce resource and because of social capital depletion: if someone wastes a lot of my time, I am less inclined to listen to anything they might have to say in the future, even on different topics, because the base rate of their next argument being lousy and a net negative is higher.

I have framed this in terms of unproductive vs. productive conversations, but in part this is a credibility issue. See: failure to predict other groups' experiences. Why would I trust people to credibly speak about other groups? I would generally encourage people to talk more about topics they know more about, and my advice is no different in this setting: talk more about your own group, less about others. So how come the opposite pattern seems to be observed in practice? How come there seem to be some people who go after group X, group Y, group Z, none of which is their own group, and still claim to be unbiased? It's not a bad look because of the contents (though others have already expressed concerns about how the contents look, separately); it's a bad look because it looks irrational and biased. I think that people who want to be perceived as rational/rationalist have to put a lot of effort into not undermining that image, and casual conversation about difficult topics, without recognizing the hubris involved, really hits one right in the credibility.

P.S. I hate to say this as it may undermine my own social capital and how my arguments are perceived, but as a separate matter, if you can separate it out: please take my username as gentle tongue-in-cheek criticism of the username Freethinkers In EA. What kind of name is Freethinkers In EA? Does that name not sound offensive? I mean no offense, but if the Freethinkers In EA can pick such a handle, so can I. I think their arguments can be separated from their name, and in general the arguments are better than the name (though still not good). Nonetheless, I'm surprised no one has yet criticized the attempt to position themselves as the sole purveyors of truth, and I think it's important someone push back against that, as it is clearly incorrect. Their name is kind of illustrative of the issue in microcosm: shitting on those who disagree with them, extreme arrogance, and myopia. I do not want the lovely word "freethinker" to pick up these narrow-minded connotations. We are all freethinkers here.