Posts

John Bridge's Shortform 2022-09-18T14:01:24.593Z
England & Wales & Windfalls 2022-06-03T10:26:47.042Z
What should we actually do in response to moral uncertainty? 2022-05-30T09:48:14.234Z
The Windfall Clause has a remedies problem 2022-05-23T10:31:46.397Z
New Sequence - Towards a worldwide, watertight Windfall Clause 2022-04-07T15:02:05.518Z
What is Intersectionality Theory? What does it mean for EA? 2022-03-27T20:41:23.205Z

Comments

Comment by John Bridge on John Bridge's Shortform · 2022-09-21T15:14:18.635Z · EA · GW

NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.

Comment by John Bridge on John Bridge's Shortform · 2022-09-21T15:13:03.503Z · EA · GW

Focusing more on data governance:

GovAI now has a full-time researcher working on compute governance. Chinchilla's Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.

What's going on here? I can see two possible answers:

  • Folks have suggested that compute is relatively easy to govern (eg). Someone might have looked into this and decided data is just too hard to control, and we're better off putting our time into compute.
  • Someone might already be working on this and I just haven't heard of it.

If anyone has an answer to this I'd love to know!

Comment by John Bridge on John Bridge's Shortform · 2022-09-19T18:54:57.032Z · EA · GW

No Plans for Misaligned AI:

This talk by Jade Leung got me thinking - I've never seen a plan for what we do if AGI turns out to be misaligned.

The default assumption seems to be something like "well, there's no point planning for that, because we'll all be powerless and screwed". This seems mistaken to me. It's not clear that we'll be so powerless that we have absolutely no ability to encourage a trajectory change, particularly in a slow takeoff scenario. Given that most people weigh alleviating suffering more heavily than promoting pleasure, this is especially valuable work in expectation, as it might help us change outcomes from a 'very, very bad' world to a 'slightly negative' one. This also seems pretty tractable - I'd expect ~10 hours of thinking about this could help us come up with a very barebones playbook.

Why isn't this being done? I think there are a few reasons:

  • Like suffering-focused ethics, it's depressing.
  • It seems particularly speculative - most of the 'humanity becomes disempowered by AGI' scenarios look pretty sci-fi. So serious academics don't want to consider it.
  • People assume, mistakenly IMO, that we're just totally screwed if AI is misaligned.

Comment by John Bridge on John Bridge's Shortform · 2022-09-18T14:37:55.679Z · EA · GW

Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:

  • Changes to hard law are difficult to reverse - legislatures rarely consider issues more than a couple of times every ten years, and the judiciary takes even longer.
  • At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances. 
  • Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
  • This is worsened by the fact that we don’t know what ideal longtermist governance looks like. In a world of transformative AI, it’s hard to tell if the rule of law will mean very much at all. If sovereign states aren’t powerful enough to act as leviathans, it’s hard to see why influential actors wouldn’t just revert to power politics.

Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.

I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.

Comment by John Bridge on John Bridge's Shortform · 2022-09-18T14:01:24.679Z · EA · GW

The Cannonball Problem:

Doing longtermist AI policy work feels a little like aiming heavy artillery with a blindfold on. We can’t see our target, we’ve no idea how hard to push the barrel in any one direction, we don't know how long the fuse is, we can’t stop the cannonball once it’s in motion, and we could do some serious damage if we get things wrong.

Comment by John Bridge on What if AI development goes well? · 2022-09-05T19:59:43.213Z · EA · GW

Taking each of your points in turn:

  1. Okay. Thanks for clarifying that for me - I think we agree more than I expected, because I'm pretty in favour of their institutional design work.
  2. I think you're right that we have a disagreement w/r/t scope and implications, but it's not clear to me to what extent this is also just a difference in 'vibe' which might dissolve if we discussed specific implications. In any case, I'll take a look at that paper.

Comment by John Bridge on What if AI development goes well? · 2022-09-04T12:15:54.030Z · EA · GW

I have a couple thoughts on this.

First - if you're talking about nearer-term questions, like 'What's the right governance structure for a contemporary AI developer to ensure its board acts in the common interest?' or 'How can we help workers reskill after being displaced from the transport industry?', then I agree that doesn't seem too strange. However, I don't see how this would differ from the work that folks at places like LPP and GovAI are doing already.

Second - if you're talking about longer-term ideal governance questions, I reckon even relatively mundane topics are likely to seem pretty weird when studied in a longtermist context, because the bottom line for researchers will be how contemporary governance affects future generations.

To use your example of the future of work, an important question in that topic might be whether and when we should attribute legal personhood to digital labourers, with the bottom line concerning the effect of any such policy on the moral expansiveness of future societies. The very act of supposing that digital workers as smart as humans will one day exist is relatively weird, let alone considering their legal status, still less discussing the potential ethics of a digital civilisation.

This is of course a single, cherry-picked example, but I think that most papers justifying specific positive visions of the future will need to consider the impact of these intermediate positive worlds on the longterm future, which will appear weird and uncomfortably utopian. Meanwhile, I suspect that work with a negative focus ('How can we prevent an arms race with China?') or a more limited scope ('How can we use data protection regulations to prevent bad actors from accessing sensitive datasets?') doesn't require this sort of abstract speculation, suggesting that research into ideal AI governance carries reputational hazards that other forms of safety/governance work do not. I'm particularly concerned that this will open up AI governance to more hit-pieces of this variety, turning off potential collaborators whose first interaction with longtermism is a bad-faith critique.

Comment by John Bridge on What if AI development goes well? · 2022-08-31T23:05:02.225Z · EA · GW

Thanks for this Rory, I'm excited to see what else you have to say on this topic.

One thing I think this post is missing is a more detailed response to the 'ideal governance as weird' criticism. You write that 'weird ideal governance theories may well be ineffective', but I would suggest that almost all fleshed-out theories of ideal AI governance will be inescapably weird, because most plausible post-transformative AI worlds are deeply unfamiliar by nature.

A good intuition pump for this is to consider how weird modern Western society would seem to people from 1,000 years ago. We currently live in secular market-based democratic states run by a multiracial, multigender coalition of individuals whose primary form of communication is the instantaneous exchange of text via glowing, beeping machines. If you went back in time and tried to explain this world to an inhabitant of a mediaeval European theocratic monarchy, even to a member of the educated elite, they would be utterly baffled. How could society maintain order if the head of state was not blue-blooded and divinely ordained? How could peasants (particularly female ones) even learn to read and write, let alone effectively perform intellectual jobs? How could a society so dependent on usury avoid punishment by God in the form of floods, plagues or famines?

Even on the most conservative assumptions about AI capabilities, we can expect advanced AI to transform society at least as much as it has changed in the last 1,000 years. At a minimum, it promises to eliminate most productive employment, significantly extend our lifetimes, allow us to intricately surveil each and every member of society, and to drastically increase the material resources available to each person. A world with these four changes alone seems radically different from, and unfamiliar to, our own, meaning any theory about its governance is going to seem weird. Throw in ideas like digital people and space colonisation and you're jumping right off the weirdness deep end.

Of course, weirdness isn't per se a reason not to go ahead with investigation into this topic, but I think the Wildeford post you cited is on the right track when it comes to weirdness points. AI Safety and Governance already struggles for respectability, so if you're advocating for more EA resources to be dedicated to the area I think you need to give a more thorough justification for why it won't discredit the field.

Comment by John Bridge on Critiques of EA that I want to read · 2022-06-20T13:09:16.900Z · EA · GW

Also strong upvote. I think nearly 100% of the leftist critiques of EA I've seen are pretty crappy, but I also think it's relatively fertile ground. 

For example, I suspect (with low confidence) that there is a community blindspot when it comes to the impact of racial dynamics on the tractability of different interventions, particularly in animal rights and global health.[1] I expect that this is driven by a combination of wanting to avoid controversy, a focus on easily quantifiable issues, the fact that few members of the community have a sociology or anthropology background, and (rightly) recognising that every issue can't just be boiled down to racism.

  1. ^

    See, e.g., my comment here.

Comment by John Bridge on Snakebites kill 100,000 people every year, here's what you should know · 2022-06-12T17:07:56.140Z · EA · GW

I'm a bit late to the party on this one, but I'd be interested to find out how differential treatment of indigenous groups in countries where snakebites are most prevalent impacts the tractability of any interventions. I don't have any strong opinions about how significant this issue is, but I would tentatively suggest that a basket of 'ethnic inequality issues' should be considered a third 'prong' in the analysis of why snakebites kill and maim so many people, and could substantially impact our cost-effectiveness estimates.

Explanation:

The WHO report linked by OP notes that, in many communities, over 3/4 of snakebite victims choose traditional medicine or spiritual healers instead of hospital treatment. I don't think this is a result of either of the two big issues that the OP identifies - it doesn't seem to stem from difficulty with diagnosis or cost of treatment, so much as being a thorny problem resulting from structural ethnic inequalities in developing countries.

I'm most familiar with the healthcare context of Amazonian nations, where deeply embedded beliefs around traditional medicine and general suspicion of mestizo-run governments can make it more difficult to administer healthcare to indigenous rainforest communities, low indigenous voter turnout reduces the incentives of elected officials to do anything about poor health outcomes, and discriminatory attitudes towards indigenous people can make health crises appear less salient to decisionmakers. Given that indigenous groups in developing countries almost universally receive worse healthcare treatment, and given that much indigenous land is in regions with high vulnerability to snake envenoming,[1] I wouldn't be surprised if this issue generalised outside of Amazonia.

Depending on the size of the effect here, this could considerably impact assessments of tractability. For example, if developing country governments won't pay for the interventions, it might be difficult to fund long-term antivenom distribution networks. Alternatively, if indigenous groups don't trust radio communications, communicating health interventions could be particularly difficult. Also, given the fact that 'indigenous' is a poorly-defined term which refers to a host of totally unrelated peoples, it might be difficult to generalise or scale community interventions.

 

  1. ^

    Study here (which I've not read).

Comment by John Bridge on [deleted post] 2022-06-03T11:47:46.488Z

Nothing to add, I just want to comment that this is a wonderful initiative. Thanks for setting this up!

Comment by John Bridge on Announcing a contest: EA Criticism and Red Teaming · 2022-06-03T10:53:44.353Z · EA · GW

I'm currently writing a sequence exploring the legal viability of the Windfall Clause in key jurisdictions for AI development. It isn't strictly a red-team or a fact-checking exercise, but one of my aims in writing the sequence is to critically evaluate the Clause as a piece of longtermist policy.

If I wanted to participate, would this sort of thing be eligible? And should I submit the sequence as a whole or just the most critical posts?

Comment by John Bridge on How should people spend money to be more productive? · 2022-05-31T14:15:10.524Z · EA · GW

UK/European folks - if you're looking for a second monitor, I recommend you buy one of these. They usually have a discount code, which makes them some of the best value on the market.

The only thing to keep in mind is that they eat up your battery pretty fast, which may not be ideal if you plan to use them for long stretches away from a plug socket.

Comment by John Bridge on A retroactive grant for creating the HPMoR audiobook (Eneasz Brodski)? · 2022-05-04T19:38:40.520Z · EA · GW

I have no strong opinions on whether this is a good or a bad idea, all things considered. But:

  • I feel uneasy about retrofunding as an idea.
  • Retrofunding feels more like 'so-called philanthropists giving money to their pals' than 'high-impact EA philanthropy'.
  • Retrofunding also feels particularly bad for optics.

If you have an argument for why I should feel differently, I'd appreciate it if you explained the argument rather than downvoting.

Comment by John Bridge on New Sequence - Towards a worldwide, watertight Windfall Clause · 2022-04-08T09:03:50.925Z · EA · GW

Hi Will,

(1) Is a really good point. I will definitely consider this. A few thoughts right now:

Encouragingly, the position here in England for contractual damages comes from Hadley v Baxendale, where damages are given for all losses that 'were in the contemplation of both parties' at the time they contracted. Given that the very nature of the agreement is that the Developer has to pay out a colossal sum if they reach a certain % of GDP/market cap, I'd assume that the Developer's future profits would be included here. That said, specific performance seems like a more satisfactory approach, because I'm sure the court would end up discounting the total sum.

FWIW, HvB is an old case (1850s I believe), so I expect this same point will apply across most of the former Commonwealth.

(2) I hadn't considered this, but I will look into it. My initial thoughts are that (a) the shareholders are unlikely to have assets worth >0.1% of GDP, so this would be a clear example of bad faith; and (b) this could be a mark in favour of the stock options method, because it might allow the Counterparty to bring an unfair prejudice petition or other derivative action against the Developer.

Thanks for highlighting both.

Comment by John Bridge on Where is the Social Justice in EA? · 2022-04-05T12:36:32.670Z · EA · GW

Strong upvote - this is a really great post and helped me understand the source of many disagreements between myself and my more social justice-oriented friends.

Comment by John Bridge on Companies with the most EAs and those with the biggest potential for new Workplace Groups · 2022-04-01T14:36:51.980Z · EA · GW

Sorry for the late comment, but I believe there are 40+ engaged EAs in the UK Civil Service, which is mostly based around Westminster. 

Did you leave them off because you are specifically looking at corporations?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-30T10:17:13.742Z · EA · GW

So, your comment here:

’It doesn't predict that being a member of two "oppressed" classes can result in an intersectional "privilege".’

is referring to the advantage that Western Asian women receive on the dating scene. My point is that this is compatible with intersectionality theory, because although the general structure of the power relationships between men/women, majority/minority ethnic groups, and white people/Asians disadvantages Western Asian women, none of these relationships are 100% downside.

So, the idea is that on balance the relationship is oppressive, rather than that the relationship is just 100% beneficial/harmful for either side.

Is that more clear?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-29T17:22:10.750Z · EA · GW

I also don't think the prior should be 'people of all ethnicities feel the exact same set of charitable obligations' - that seems like a similarly strong claim. 

Still, in the absence of any good data to back up my claim or yours, I think it's appropriate to be very uncertain about any hypothesis we might have about why people do or don't give.

Thanks for improving my thinking on this.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-29T16:45:29.038Z · EA · GW

I think you might have misunderstood the scope of this post. I want to emphasise that I endorse none of the following claims:

[a white person can never] understand or model the discrimination or pain faced by a (for example) queer, poor, Black and Muslim individual.

it’s pointless for a male to study female psychology because a male will never understand what it’s like to be female and should instead have no voice in the conversation

diversity and “equity” and “justice” [are] innate, disseminated values, rather than potential or circumstantial instrumental ones for prior lauded ends.

[EA] is somehow inherently in the wrong or discriminatory or evil for ending up mostly male, white, secular, tech based etc.

if your goal is to maximize diversity then you need intersectionality

you have to shame people for making assumptions or holding beliefs about those who are part of other [social] categories

If we remove these claims and just consider whether intersectionality would be a useful tool (of many different possible tools) for helping EAs think through difficult ideas, would this change your position at all?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-29T15:25:29.835Z · EA · GW

That said, your comment has shifted me towards your perspective that intersectionality is unlikely to be useful for EAs, and it's better to start a new language game. I think the word comes with enough baggage that it is hard to use as a neutral tool for analysing issues, and is liable to be misunderstood. 

Thanks for helping to improve my thinking here.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-29T15:23:18.117Z · EA · GW

it only predicts it when you are a member of a "privileged" class and an "oppressed" class*. It doesn't predict that being a member of two "oppressed" classes can result in an intersectional "privilege".

 

I think perhaps we mean different things when we use the words 'privilege' and 'oppression'. Under intersectionality theory, Group X is privileged in respect of Group Y if they are the beneficiaries of the power relationship, all things considered. Similarly, Group Y is oppressed if they are generally disadvantaged by that relationship. That doesn't mean that Group X benefits 100%, or that Group Y always suffers. 

To unpack that a bit, you might imagine the general structure of a male-female relationship in the early 1900s: broadly speaking, a woman born in 1900 would be disadvantaged compared to a man born in the same year. She was treated as subservient to her husband, she would be excluded from positions of power where men were not, and she was (in many countries) denied the vote.

Men were the overall beneficiaries of this arrangement, essentially having a lifelong live-in servant and childcarer. However, women also benefitted from this relationship in some ways - a 1900s woman would never have been expected to go to war, and once her children had grown up she would not have been expected to work a job. Nonetheless, it is fair to say that the power relationship between men and women in the early 20th century was an unequal one. This is the sense in which intersectionality theory would describe women as oppressed and men as privileged.

This logic then extends to intersectional disadvantages, meaning that the model doesn't break even if you get an intersectional 'privilege'[1]. Going back to Western Asian women, it seems to be true that Asian folks are treated as more 'feminine' than White folks. The feminisation of Western Asians might therefore benefit Asian women (who are 'hyperfeminised', and so get even more of the benefits which accrue to women) and disadvantage Asian men (who are emasculated, and so get fewer of the benefits which accrue to men).
 

  1. ^

    Given the definition of privilege I've just set out, 'privilege' is probably the wrong word for what's going on here, but you get my point.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T10:29:10.314Z · EA · GW

Thanks for your message David. I think this probably depends on your definition of 'discrimination' - in SJ language, discrimination is typically something that happens at a systemic, rather than an individual level. That is to say, a set of policies that systematically disadvantage a particular group can still be discrimination if they reflect a prevailing system in which that group is disadvantaged. This can be true even if there is no bad intent on the part of individuals.

I think this broader definition is not always helpful, particularly because (a) it often fuels controversy to describe such policies as discriminatory, and (b) it is not always clear that any given policy is actually a part of a system of discrimination. That's why I've tried to use 'disadvantage' in most of the post, which is a less loaded term. Nonetheless, I feel it's the most appropriate term to use in the DeGraffenreid context, as the originators of Intersectionality Theory describe the case in terms of discrimination.

Is there a way you feel I could make this more clear?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T10:22:15.831Z · EA · GW

Thanks for your message Charles. First, to respond to:

By the above, do you mean that focusing on one cause area neglects the other? If so, that observation doesn't seem like a contribution. 

Otherwise, if you meant something else, that seems like "original research" as these speculations and tradeoffs pulls on a vast range of topics. I'm skeptical that this theory helps thinking here.

I don't mean that focusing on one cause area always neglects the other. Rather, I mean that some issues EAs care about could be at the intersection of two types of disadvantage, and that it is helpful to conceptualise these issues as intersectional. In the example above, the point is that just 'working to help future people' and 'working to help animals' may do little or nothing to help 'future animals', and that an explicitly intersectional approach can help us to spot this error.

In addition to the noise/chatter/attention demands of having to examine them, some of them have a large amount of issues/baggage/subtext. 

I think this is a really great point, and by far one of the strongest considerations against using intersectionality or other ideas taken from SJ-aligned academia. Still, I've discussed this in a bit more detail in my comment to Jackson above, which I hope addresses this concern.

However, this presentation of "intersectionality" just seems list out basic considerations (and a subset at that). It doesn't seem to contribute a way to resolve them, or even how to get started. 

I don't necessarily disagree with you in saying that it doesn't contribute a way to resolve issues. However, I think that misses the point. Concepts like 'deadweight loss' or 'moral foundations' don't contribute solutions, but they provide conceptual clarity which helps us understand and explain what is going on in economic or moral psychology. I think intersectionality can be useful on this front, without providing solutions as such.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T10:09:10.161Z · EA · GW

Hi Chris, I've responded to this somewhat in my response to Jackson above.

FWIW, I'm trying to avoid focusing on race questions here, because I think they're pretty charged and racial equality isn't an EA cause area in any case. Still, I think it's worth responding to your comment that:

... the model doesn't deal very well with the fact that Asians earn more on average than people who are white or an Asian woman may be better off in some ways than an Asian man (such as dating).

I actually think the model deals very well with this, as intersectionality would predict that being Asian + a minority + male would present a separate set of issues to being Asian + a minority + female. So the fact that Asian women do better in the dating scene is actually a pretty good example of intersectionality creating novel outcomes that might not be easily predicted by just stacking up disadvantages.

If, for some reason, discrimination against minority Asian women suddenly became an EA cause area, then people wanting to tackle the issue might do better trying to specifically address 'issues encountered by minority Asian women' rather than just generally 'reduce discrimination against women/minorities/Asians'.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T09:52:13.473Z · EA · GW

I totally agree with you on this Erich; in my opinion, intersectionality is a useful tool to describe the phenomena of overlapping disadvantage, but a problem isn't more important or more effective just because it's an intersectional one.

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T09:50:55.794Z · EA · GW

Hi Tyner, thanks for your message,

I don't have any studies I can point to on this, no, but the idea that privileged white men find it easier to take a universalising, impartial approach to doing good seems intuitively plausible. Admittedly, most of the data I have to support that argument are from private conversations, along with a general lack of demographic diversity in EA.

I'm open to the idea that I could be wrong here - can I ask you to explain in a little more detail why you feel that the PoC case isn't unique?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T09:36:09.064Z · EA · GW

Hi evelynciara, thanks for sharing these. Can I ask - are you raising these as an indication that you support my thesis, or just to add to the discussion?

Comment by John Bridge on What is Intersectionality Theory? What does it mean for EA? · 2022-03-28T09:34:56.055Z · EA · GW

Thanks for your message Jackson. A few thoughts:

  • First thing is that, if intersectionality seems vague or poorly defined, then that's likely a fault of my writing rather than the idea. To clarify - "intersectionality" is the idea that different individuals encounter overlapping types of disadvantage, and that these disadvantages combine in ways that cannot be easily explained by looking at either kind of disadvantage in isolation. This means that finding solutions to issues at the intersection of several axes of disadvantage often requires explicitly considering how those disadvantages interact.
  • Its (potential) relevance to EA comes from the fact that a lot of EA cause areas deal with multiple axes of disadvantage in tandem; the use of 'intersectionality' could help to bring conceptual clarity to these discussions. To return to the animal welfare example, evelynciara has highlighted that there have been several recent posts about non-human animals being neglected on several axes simultaneously. Given the multiple disadvantages faced by future animals, intersectionality predicts that we will need to come up with novel solutions to protect them. Just trying to (a) protect the long-term future and (b) promote animal welfare is unlikely to achieve this goal. Guy Raveh highlights a similar example below in global health. I think the language of intersectionality is a neat way to explain what's going on here, and why we might need to bring a fresh approach to these issues.
  • I don't think animal welfare is the only cause area where intersectionality could bring conceptual clarity and improve our thinking. For example, engaging with how best to advance the welfare of digital people might benefit from an intersectional framing. It seems plausible that digital societies might end up with similar social ills - status games, inequality, 'poverty', etc - that we currently suffer. However, it's unlikely that standard EA development strategy (read: health interventions) would be at all useful in dealing with these issues. Again, that's because this is an intersectional issue, with multiple disadvantages (digital, poor) combining to create novel problems. If you agree with me that this seems obvious, then I think our disagreement has more to do with the use of the particular term 'intersectionality'. This brings me to my next point.
  • Even if intersectionality comes with intellectual baggage, I don't think we should shy away from using the term if it improves clarity. EAs already use terms that come with significant ideological baggage, because they're useful and help to express important ideas. The term 'nonhuman animals' is a good example here - EAs use it to indicate that the moral distinction between humans and other animals is illusory. But this term (and much of the language around veganism) is morally charged, indicating a set of beliefs that is perceived by many outside of EA as an indictment of meat-eaters. Alternatively, EAs on the forum often discuss political liberalism or cosmopolitanism, and many leading EAs explicitly identify as neoliberals. All three terms are highly politically charged, identifying a fuzzily defined set of policy stances that are controversial on both sides of the political spectrum. Nonetheless, in all of the cases I've just outlined, we use these terms because they're a helpful way of concisely explaining our ideas. I don't think intersectionality is different in any unique way from the terms I've just described. I now think that this comment is right, inasmuch as it's worth starting a new language game given the baggage that comes with the term.

I think this covers most of your comments, but please let me know if there's anything I can clarify. I expect our crux of disagreement is on how useful it is to introduce a politically charged term like intersectionality into EA discourse, and I'm happy to engage more on that topic.

Comment by John Bridge on $100 bounty for the best ideas to red team · 2022-03-19T11:03:49.979Z · EA · GW

Red-team - "Are longtermism and virtue ethics actually compatible?"

A convincing red-team wouldn't need a complex philosophical analysis, but rather a summary of divergences between the two theories and an exploration of five or six 'case studies' where consequentialist-type behaviour and thinking is clearly 'unvirtuous'.


Explanation - Given just how large and valuable the long-term future could be, it seems plausible that longtermists should depart from standard heuristics around virtue. For instance, a longtermist working in biosecurity who cares for a sick relative might have good consequentialist reasons to abandon their caring obligations if a sufficiently promising position came up at an influential overseas lobbying group. I don't think EAs have really accepted that there is a tension here; doing so seems important if we are to have open, honest conversations about what EA is, and what it should be.