Posts

Information hazards: a very simple typology 2020-07-13T16:54:17.640Z · score: 52 (23 votes)
Exploring the Streisand Effect 2020-07-06T07:00:00.000Z · score: 39 (21 votes)
Concern, and hope 2020-07-05T15:08:47.766Z · score: 114 (67 votes)
What coronavirus policy failures are you worried about? 2020-06-19T20:32:41.515Z · score: 19 (7 votes)
willbradshaw's Shortform 2020-02-28T18:19:32.458Z · score: 4 (1 votes)
Thoughts on The Weapon of Openness 2020-02-13T00:10:14.841Z · score: 28 (14 votes)
The Web of Prevention 2020-02-05T04:51:51.158Z · score: 19 (13 votes)
Concrete next steps for ageing-based welfare measures 2019-11-01T14:55:03.431Z · score: 37 (17 votes)
How worried should I be about a childless Disneyland? 2019-10-28T15:32:03.036Z · score: 24 (14 votes)
Assessing biomarkers of ageing as measures of cumulative animal welfare 2019-09-27T08:00:22.716Z · score: 74 (31 votes)

Comments

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T20:11:53.401Z · score: 6 (3 votes) · EA · GW

It's not particularly associated with the EA community – I think your impression there is correct. I'd say it's more generic nerd jargon than EA jargon. I actually don't think I hear it used especially often in EA.

I honestly don't remember the detailed connotations from Stranger in a Strange Land, but since I'm neither a Martian nor a member of a weird New-Agey Martian cult I don't consider this a huge disadvantage.

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T20:09:12.279Z · score: 4 (2 votes) · EA · GW

I had a detailed comment here, but then I realised I seldom use the word "grok" anyway so I don't have much cause to be nitpicking other people's substitutions. :-P

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T15:47:01.198Z · score: 2 (3 votes) · EA · GW

(I notice that the use of ~ to mean approximately is also a kind of jargon.)

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T15:34:59.499Z · score: 3 (2 votes) · EA · GW

And the way it's used in tech is almost totally lacking the mystical angle from Stranger in a Strange Land anyway.

Also, Stranger in a Strange Land is a profoundly weird and idiosyncratic book, and there's not really any reason to evoke it in most EA contexts.

(That said I do think "deeply understand" doesn't quite do the job.)

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T15:29:02.936Z · score: 4 (2 votes) · EA · GW

"competes with"?

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T21:04:13.164Z · score: 2 (1 votes) · EA · GW

It probably makes more sense in context, but the context is an entire book of Christian apologetics (sequel to a book on early 20th century philosophy called "Heretics") so I doubt you have time for that right now.

Honestly, I wish people said something like this more often.

I guess what I really meant was "regardless of how convincing it is to people other than me". By definition if I found something convincing it would change my mind, but in the hypothetical example it's more of a difference in values rather than facts.

This feels like it's teetering at the brink of a big moral-uncertainty rabbit hole, and I haven't read that book yet, so I propose leaving this here for now. ☺

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T20:45:18.830Z · score: 4 (2 votes) · EA · GW

Upvoted. Thank you for raising your concerns in an honest but constructive / curious manner!

FWIW I think I've been the closest to what one might call the "weird radical view" of wild-animal welfare in this discussion, and I am very much not a negative utilitarian. I really hope we can make the future of nature a happy one.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T20:18:24.863Z · score: 2 (1 votes) · EA · GW

Ah, I deleted the second half of my comment but you must have already been writing your response to it. It's a bad habit of mine – my apologies for muddling the dialogue here.

I can only conceive of being convinced that any of my deeply held beliefs are wrong through appeal to an even more deeply held belief [...].

I think this is maybe the locus of our disagreement about how to think about statements like "regardless of how convincing the research is".

To me it seems important to be able to conceive of being convinced of something even if you can't currently think of any plausible way that would happen. Otherwise you're not really imagining the scenario as stated, which leads to various issues. This is mostly just Cromwell's rule as applied to philosophical/moral beliefs:

leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.
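
To make the quoted rule concrete, here is a minimal Bayes'-rule sketch (the numbers are my own illustrative assumptions, not from the quote): a prior of exactly zero can never be updated, while even a one-in-a-million prior can be moved a long way by strong evidence.

    # Cromwell's rule, illustrated with Bayes' theorem (toy numbers).

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        """P(H | E) from P(H), P(E | H), and P(E | not H)."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1 - prior)
        return numerator / denominator if denominator > 0 else 0.0

    # Astronauts return with cheese samples: evidence roughly a million
    # times likelier if the moon really is made of green cheese.
    print(posterior(0.0, 0.999, 1e-6))   # 0.0   -- a zero prior never moves
    print(posterior(1e-6, 0.999, 1e-6))  # ~0.5  -- a tiny prior moves a lot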

If I'm honest I don't understand the Chesterton quote, so I'm not sure we'll make much progress there right now.

In general I think it's a mistake to put value on a species or an ecosystem instead of the beings within that species or ecosystem; but humanity is a more plausible exception to this than most.

Comment by willbradshaw on AaronBoddy's Shortform · 2020-10-26T19:59:18.626Z · score: 8 (3 votes) · EA · GW

I really like how you're using your shortform to ask these small, well-formed, interesting questions!

(I don't have anything useful to say here, I just wanted to give this my 👍.)

Is it possible to calculate the net utility (positive or negative) from bringing one suffering bee into existence?

I doubt it, but if so it would make a great unit of measurement.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T19:35:00.763Z · score: 7 (4 votes) · EA · GW

But, the research I've done and read has both made me a lot less sympathetic to a totalizing view of wild animals of this sort (e.g. I think many more wild animals than I previously thought live good lives), and less sympathetic to taking such a radical action.

This is very interesting to me.

Is there an accessible summary anywhere of the research underlying this shift in viewpoint?

Would you say this is a general shift in opinion in the WAW field as a whole?

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T19:23:42.975Z · score: 6 (3 votes) · EA · GW

I guess I worry it will be convincing to people with a different ethical framework to me, and I won't be able to articulate an equally convincing objection?

A world in which consequentialists are able to convince a lot of people to accept the destruction of nature for welfare reasons is a pretty surprising world, given how much people like (and depend on) nature. In that hypothetical future, they must have come up with something really quite convincing.

That, or someone's been and gone and unilaterally done something drastic, but we all agree that's probably a bad idea.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T18:26:36.288Z · score: 4 (2 votes) · EA · GW

He said he worked at "another of the 6". (Emphasis mine)

i.e. he co-founded 2 (UF and WAI) and worked at another 1, out of 6 total.

(I don't know what the 6th is)

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:34:19.367Z · score: 6 (3 votes) · EA · GW

You have 5/6 there already, so we're only missing one.

Comment by willbradshaw on When you shouldn't use EA jargon and how to avoid it · 2020-10-26T17:02:31.533Z · score: 13 (8 votes) · EA · GW

I usually find these "lists of jargon + replacements" quite bad (i.e. they include many terms where the distinction between the term used and the suggested replacement is important and useful, and/or misunderstand the terms they're suggesting replacements for, and/or include many things that virtually nobody says except maybe as a joke).

But I think this one is pretty good and mostly agree with it.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T13:57:15.187Z · score: 5 (3 votes) · EA · GW

This comment should clearly have more karma than mine.

Comment by willbradshaw on AaronBoddy's Shortform · 2020-10-26T09:16:59.653Z · score: 3 (2 votes) · EA · GW

Yeah, I'm not currently that excited about Bezos as a philanthropist, but the near-term impact of Amazon in the countries it operates in has been hugely positive, especially for low-income people.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T08:26:04.715Z · score: 4 (4 votes) · EA · GW

I don't think the first one actually tells us much, because I don't think many of the wild-animal-welfare EAs I know were significantly influenced by the original result.

I'd never heard of it until I saw the talk debunking it.

Comment by willbradshaw on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T08:24:46.371Z · score: 48 (19 votes) · EA · GW

[Views my own, not my (former) employer's. I no longer work in the wild-animal-welfare sector and do not speak for them.]

Firstly, most of the wild-animal-welfare EAs I've worked with closely are not negative utilitarians. Most of them care deeply about wild animals living good lives, so I expect them to be quite motivated to find ways to improve WAW without removing WA populations, especially given how controversial that would be, and how huge the side-effects are.

That said...

I have this nagging feeling that they're going to do rigorous research for a few decades, then conclude that the majority of animals on Earth would be better off dead. At this point they'll presumably recommend that we start purging the world of animal life, and to me that sounds like a bad thing, regardless of how convincing the research is.

"Regardless of how convincing the research is" sets off big alarm bells for me. What if it's actually true that the majority of animals on Earth would be better off not existing? This seems pretty likely to be the case for factory farms, for example, so I'm not sure why you're so sure it's wrong for wild animals, many of whom live lives at least that bad.

I'm also not sure what the alternative to researching WAW would be. Just ignore the (plausibly very large) problem? How is that different from ignoring, say, the suffering of animals in factory farms, or of people in chronic pain?

Comment by willbradshaw on EA's abstract moral epistemology · 2020-10-25T12:01:47.563Z · score: 2 (1 votes) · EA · GW

If something is broadly convincing – that is, convincing to altruistic donors with a range of different values and priorities – that is a pretty good sign that it is, in fact, solid. In the case of animal welfare, if a lot of non-EA donors have shifted their funding towards priorities that were originally pushed mainly by EAs, that seems like good evidence that shifting towards those priorities is good for animal welfare across a wide range of value systems, and hence (under moral uncertainty) more likely to be in fact a good thing.

There are certainly ways this could not be true, but I do think the above is the most likely / default case, and that the ways it could not be true are more complex stories requiring additional evidence. You need some mechanism by which EA funders influenced non-EA funders to change their priorities in a way that went against their values, or alternatively some mechanism by which EA funding "deprived [activists] of significant funding [etc]" despite the pre-existing non-EA funders still being around. And you need to provide evidence for that mechanism operating in this case, as opposed to (IMO the much more likely case of) people just being sad that other people think that their preferred approach is less good for animals.

Comment by willbradshaw on EA's abstract moral epistemology · 2020-10-22T13:44:42.891Z · score: 7 (4 votes) · EA · GW

Thanks for this perspective.

I'm arguing with the OP rather than you here, but this seems...straightforwardly good? Like, if a lot of other donors are switching to things more in line with EA priorities, that suggests that EA priorities (in this domain) are broadly convincing, which seems like it makes it much harder to argue that "EA was having a damaging influence on animal advocacy".

Comment by willbradshaw on EA's abstract moral epistemology · 2020-10-22T07:53:21.365Z · score: 3 (2 votes) · EA · GW

As some of the activists were talking, they got on to the topic of how charitable giving on EA’s principles had either deprived them of significant funding, or, through the threat of the loss of funding, pushed them to pursue programs at variance with their missions.

The only way I can see this being true is if EAs convinced existing funding sources to switch their funding priorities along EA principles, or to (for some reason) move out of the field even though the new funding has priorities that differ from theirs. Has that happened? Otherwise, what happened to the funding that was already there?

Comment by willbradshaw on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T17:18:21.659Z · score: 4 (2 votes) · EA · GW

Okay, sure, at the margin I agree it's tricky. Both for reputational reasons, and the broad-tent/community-cohesion concerns I mention above.

Comment by willbradshaw on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T15:25:28.038Z · score: 17 (7 votes) · EA · GW

I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection[...]

This seems right to me, and I upvoted to support (something like) this statement. I think there's a great deal of danger in both directions here.

(Not just for reputational reasons. I also think that there are lots of SJ-aligned – but very sincere – EAs who are feeling pretty alienated from anti-CC EAs right now, and it would be very bad to lose them.)

It seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation.

The epistemic standards seem totally core to EA to me. If we relax much at all on those I think the expected future value of EA falls quite dramatically. The question to me is whether we can relax/alter our discourse norms without compromising those standards.

Unfortunately, it seems that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

I sympathise with this, but I think if we don't have public posts like this one, the outcome is more-or-less decided in advance. If everyone who thinks something is bad remains silent for fear of reputational harm, the discourse in the movement will be completely dominated by those who disagree with them, while those who would agree with them become alienated and discouraged. This will in turn determine who engages with the movement, and how it evolves in relation to that idea in the future.

If that outcome (in this case, broad adoption of the kinds of norms that give rise to cancel culture within EA) is unacceptable, some degree of public opposition is necessary.

Comment by willbradshaw on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T17:14:53.575Z · score: 4 (2 votes) · EA · GW

Ah, then my comment was based on a misunderstanding. Apologies.

Comment by willbradshaw on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T07:49:58.041Z · score: 28 (18 votes) · EA · GW

It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situation, one organizer wrote about the Peter Singer talk their group hosted. [I’m waiting to see if I can give a fuller quote, but their summary was about how the Q&A session got conflicted enough that the group was known as “the group that invited Peter Singer” for two years and basically overpowered any other impression students had of what the EA group was about.]

Just for context, if anyone is unaware, Peter Singer is extremely controversial in Germany, much (/even) more so than in the English-speaking world. There was a talk by him in Cologne a few years ago, and everyone was a bit surprised it didn't get shouted down by student activists.

So I can definitely see this happening, and sympathise with the desire for it not to happen again, even though I still think the Hanson decision was ill-made.

Comment by willbradshaw on Can my self-worth compare to my instrumental value? · 2020-10-12T17:16:07.503Z · score: 10 (5 votes) · EA · GW

On this point...there are a few arguments made in other comments here that I don't find very persuasive, but am avoiding arguing against for fear of seeming disagreeable or causing distress to people with fragile self-worth. What are people's thoughts about norms around arguing in these kinds of situations – or even raising the question in the first place?

EDIT: From my side, if there's an argument that I'm making that someone thinks is shaky, I'd rather they told me so – privately or publicly, as they prefer.

Comment by willbradshaw on Can my self-worth compare to my instrumental value? · 2020-10-11T17:36:52.716Z · score: 7 (4 votes) · EA · GW

This feels...not wrong, exactly, but also not what I was driving at with this comment. At least, I think I probably disagree with your conception of morality.

Comment by willbradshaw on Can my self-worth compare to my instrumental value? · 2020-10-11T15:36:53.698Z · score: 19 (13 votes) · EA · GW

Something I didn't say in my big comment above: I'm really happy the people in this thread are approaching this with the goal of "still staying intellectually honest with" ourselves. I think there's a lot of seductive but misleading thinking in this space, and that there's a strong urge to latch onto the first framing we find that makes us feel better in the face of these issues. I'm happy to see people approach this problem in the same truth-first mindset they apply to doing good in the world.

Comment by willbradshaw on Can my self-worth compare to my instrumental value? · 2020-10-11T15:36:35.026Z · score: 9 (3 votes) · EA · GW

I think all the different framings you suggest are at least partly true.

I think this is one of the fundamental challenges of EA, and is going to take a lot of different people thinking hard about it to really come to grips with as a community. I think it will always be a challenge – EA is fundamentally about (altruistic) ambition, and ambition is always going to be in some degree of tension with the need for comfort, even if it simultaneously provides a great deal of meaning.

As you say, I'm not sure EA will ever be as comforting as religion – it's optimising for very different things. But over time I hope we will generate community structures and wisdom literature to help manage this tension, care for each other, and create the emotional (as well as intellectual) conditions we need to survive and flourish.

Comment by willbradshaw on Can my self-worth compare to my instrumental value? · 2020-10-11T10:15:28.275Z · score: 53 (27 votes) · EA · GW

This definitely resonates with me, and is something I've been thinking about a lot lately, as I wrestle with my feelings around recreational activities and free time. I'm not sure if what follows is exactly an answer to your question, but here's where I'm at in thinking about this problem.

I think one thing it's very important to keep in mind is that, in utilitarianism (or any kind of welfarist consequentialism), your subjective wellbeing is of fundamental intrinsic value. Your happiness is deeply good, and your suffering is deeply bad, regardless of whatever other consequences your actions have in the world. That means that however much good you do in the world, it is better if you are happy as you do it.

Now, the problem, as your post makes clear, is that everyone else's subjective wellbeing is also profoundly valuable, in a way that is commensurate with your wellbeing and can be traded off against it. And, since your actions can affect the wellbeing of many other people, that indirect value can outweigh the direct value of your own wellbeing. This is the fundamental demandingness of consequentialist morality that so many people struggle with. Still, I find it helpful to remember that the same reasoning that makes other people so valuable also makes me valuable, in a deep and fundamental and moral way.

Turning to instrumental value, I have two things to say. The first is about instrumental value in general, and the second is about the specific instrumental value of self-kindness.

The first thing I want to say is that almost everything I value I value instrumentally, and that fact does not make the value of those things less real, or less important. I care a great deal about freedom and civil liberties and democracy, and would pay high costs to protect those things, even though I only value them instrumentally, as ways to create more happiness and less suffering. I hate racism and speciesism and sickness and ageing, not because they are intrinsically bad in themselves, but because they are the source of so much suffering and foregone happiness. For some reason, we tend to view other things' instrumental value as deeply important, and our own instrumental value as a kind of half-real consolation prize. I think this is a tragic error.

Secondly, with regard to our own instrumental value, most people tend to significantly underestimate just how instrumentally valuable their mental health is. In my experience, when people think and talk about the instrumental value of their own wellbeing, they seem to have in mind some kind of relaxation reserve that it's important to keep full in order to avoid burnout. I think something like this is probably true, but I also think that there's much deeper and broader instrumental value in being kind to yourself.

My ideas here aren't fully developed, but I think there's something toxic about too much self-abnegation that whittles away at one's self-esteem and courage and enthusiasm and instinctive kindness toward others. At least for me, self-denial and guilt push me towards a timid and satisficing mindset, where I do what is required to not feel bad about myself and don't envision or reach out for higher achievements. It also makes me less instinctively kind to others, which has a lot of compounding bad effects on my impact, and also makes it harder for me to see and embrace new and different opportunities for doing good.

I'm still thinking through this shift in how I think about the instrumental value of my own wellbeing, but I think it has some pretty important consequences. Compared to the reserve-of-wellbeing model, it seems to militate in favour of being more generous to myself with my free time, less focused on self-optimisation insofar as that feels burdensome, and more focused on self-motivation through rewards rather than threats of self-punishment. How exactly this kind of thinking cashes out into lifestyle choices probably varies a lot from person to person; my main goal here is to illustrate how one's conception of one's instrumental value should be broader and deeper than just "if I don't relax sometimes I'll burn out".

In summary:

  • The same thing that makes it important to work for the wellbeing of others also makes you deeply and intrinsically valuable – to me, to others here, and hopefully also to yourself.
  • The instrumental value of your wellbeing is also deeply important, not merely some kind of second prize. Think about how you think about other things that you value a lot instrumentally, and compare how you think about your own instrumental value: are they the same?
  • The variety and scale of the effects of your wellbeing on your impact are probably greater than you think: your wellbeing isn't just instrumentally valuable, it's very very instrumentally valuable, in all kinds of hard-to-quantify ways.
  • Even if, at some point in the future, your wellbeing no longer has much instrumental value, you will still be just as intrinsically valuable as you are now: which is to say, very. The thing that makes you value the other sentient beings whose wellbeing you strive for will still apply to you: as long as you exist, you are important.

Comment by willbradshaw on How are the EA Funds default allocations chosen? · 2020-08-11T19:26:43.008Z · score: 7 (5 votes) · EA · GW

Nice, thanks Peter.

I have now changed the title of the post to be that which you suggested. I wanted it to be a question, but I couldn’t find a way to add a picture to a test in a question. Im new to this forum. Sorry.

Seems like a good reason. :-)

Comment by willbradshaw on The emerging school of patient longtermism · 2020-08-10T13:40:51.612Z · score: 9 (7 votes) · EA · GW

Totally frivolous question: why chairs?

Comment by willbradshaw on How are the EA Funds default allocations chosen? · 2020-08-10T13:15:04.427Z · score: 13 (7 votes) · EA · GW

I have a few responses to this:

  1. This should probably be a question, not a post.
  2. The question in the title is completely different from the question in the post. A better title would be something like "How are the EA Funds default allocations chosen?"

Actually answering the question, I don't think there's any reason to assume that whichever EA Funds staff member selected the default allocation thinks that this is the theoretically optimal way to allocate resources – in fact I think that's very unlikely. So the real question is what higher-level method, if any, was being used to select that allocation.

I don't think anyone except a member of the appropriate team at CEA can answer that. Have you asked them?

Comment by willbradshaw on Will Three Gorges Dam Collapse And Kill Millions? · 2020-07-27T14:27:30.523Z · score: 4 (2 votes) · EA · GW

FYI, the final paragraph is duplicated.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-07-21T14:12:03.631Z · score: 8 (2 votes) · EA · GW

Interesting. I'm used to two hyphens for an en dash and three for an em dash.

Comment by willbradshaw on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T18:16:27.581Z · score: 9 (3 votes) · EA · GW

I personally would lean towards the "most AMAs" approach of having most dialogue be with the AMA-respondent. It's not quite "questions after a talk", since question-askers have much more capacity to respond and have a conversation, but I feel like it's more in that direction than, say, a random EA social. Maybe something like the vibe of a post-talk mingling session?

I think this is probably more important early in a comment tree than later. Directly trying to answer someone else's question seems odd/out-of-place to me, whereas chiming in 4 levels down seems less so. I think this mirrors how the "post-talk mingling" would work: if I was talking to a speaker at such an event, and I asked them a question, someone else answering before them would be odd/annoying – "sorry, I wasn't talking to you". Whereas someone else chiming in after a little back-and-forth would be much more natural.

Of course, you can have multiple parallel comment threads here, which alters things quite a bit. But that's the kind of vibe that feels natural to me, and Pablo's comment above suggests I'm not alone in this.

Comment by willbradshaw on Concern, and hope · 2020-07-16T21:34:36.059Z · score: 9 (3 votes) · EA · GW

Thanks, Abraham. It's really valuable to get these perspectives, and it's helpful to get people discussing these issues under their real names where they feel they can. I agree that there is a lot of overlap between the impulses that lead people into EA and those that lead many people into SJ.

I'm too tired right now to respond to this in the depth and spirit it deserves – I'll try and do so tomorrow – so just wanted to flag that this is a positive and valuable contribution to the discussion. I hope any responses to it in the meantime are made in the same spirit.

Comment by willbradshaw on Information hazards: a very simple typology · 2020-07-16T14:48:14.213Z · score: 4 (2 votes) · EA · GW

A mixture of conversations and shared Google Docs. Nothing publicly citable as far as I know.

Comment by willbradshaw on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T08:39:12.046Z · score: 6 (4 votes) · EA · GW

Thanks Vaidehi, great comment. If those numbers are right then the drops in both absolute and relative poverty in both South Asia and Indonesia seem pretty amazing.

Comment by willbradshaw on Max_Daniel's Shortform · 2020-07-14T13:12:16.640Z · score: 5 (3 votes) · EA · GW

Do you think it matters who's right?

I think it matters quite a lot when it comes to assessing where to go from here: in particular, how cautious and conservative to be, and how favourable towards untested radical change.

If things have gotten way better and are likely to continue to get way better in the foreseeable future, then we should probably broadly stick with what we're doing – some tinkering around the edges to fix obvious abuses, but no root-and-branch restructuring unless something goes obviously and profoundly wrong.

Whereas if things are failing to get better, or are actively getting worse, then it might be worth taking big risks in order to get out of the hole.

I've often had conversations with people to my left where they seem way too willing to smash stuff in the process of getting to deep systemic change, which is potentially sensible if you think we're in a very bad place and getting worse but madness if you think we're in an extremely unusually good place and getting better.

Comment by willbradshaw on Max_Daniel's Shortform · 2020-07-14T13:06:09.605Z · score: 4 (2 votes) · EA · GW

I have relatively little exposure to Hickel, save for reading his Guardian piece and a small part of the dialogue that followed from it, but I don't get the impression he's coming from a position of putting more weight on Sanctity/purity or Authority/respect; in general I'd guess that few people in left-wing social-science academia are big on those sorts of moral foundations, except indirectly via moral/cultural relativism.

Taking Haidt's moral foundations theory as read for the moment, I'd guess that the Fairness foundation is doing a lot of the work in this disagreement. In general, leftists and liberals seem to differ a lot in what they consider culpable harm, and Fairness/exploitation seems like a big part of that.

Comment by willbradshaw on Concern, and hope · 2020-07-13T07:50:57.393Z · score: 8 (5 votes) · EA · GW

So far the comments here have overwhelmingly been (various forms of) litigating the controversy I discuss in the OP. I think this is basically fine – disagreements have all been civil – but insofar as there is still interest I'd be keen to hear people's thoughts on a more meta level: what sorts of things could we do to help increase understanding and goodwill in the community over this issue?

Comment by willbradshaw on Concern, and hope · 2020-07-07T19:03:47.050Z · score: 30 (11 votes) · EA · GW

I'm still pretty sceptical that the post in question was deliberately made with conscious intention to cause harm. In any case, I know of at least a couple of other EAs who have good-faith worries in that direction, so at worst it's exacerbating a problem that was already there, not creating a new one.

(Also worth noting that at this point we're probably Streisanding this dispute into irrelevance anyway.)

Comment by willbradshaw on Concern, and hope · 2020-07-07T08:00:57.244Z · score: 4 (2 votes) · EA · GW

(I have now cut the link.)

Comment by willbradshaw on Concern, and hope · 2020-07-07T07:53:22.271Z · score: 25 (15 votes) · EA · GW

This comment does a good job of summarising the "classical liberal" position on this conflict, but makes no effort to imagine or engage with the views of more moderate pro-SJ EAs (of whom there are plenty), who might object strongly to cultural-revolution comparisons or be wary of SSC given the current controversy.

As I already said in response to Buck's comment:

I agree that post was very bad (I left a long comment explaining part of why I strong-downvoted it). But I think there's a version of that post, phrased more moderately and trying harder to be charitable to its opponents, that would get a lot more sympathy from the left of EA. (I expect I would still disagree with it quite strongly.)

As you say, there aren't many right-wing EAs. The key conflict I'm worried about is between centre/centre-left/libertarian-leaning EAs and left-wing/SJ-sympathetic EAs[1]. So suggesting I need to find a right-wing piece to make the comparison is missing the point.

(This comment also quotes an old version of my post, which has since been changed on the basis of feedback. I'm a bit confused about that, since some of the changes were made more than a day ago – I tried logging out and the updated version is still the one I see. Can you update your quote?)


  1. I also don't want conservative-leaning EAs to be driven from the movement, but that isn't the central thing I'm worried about here. ↩︎

Comment by willbradshaw on 3 suggestions about jargon in EA · 2020-07-06T12:06:48.698Z · score: 11 (6 votes) · EA · GW

One aspect of how "information hazard" tends to be conceptualised that is fairly new[1], apart from the term itself, is the idea that one might wish to be secretive out of impartial concern for humankind, rather than for selfish or tribal reasons[2].

This especially applies in academia, where the culture and mythology are strongly pro-openness. Academics are frequently secretive, but typically in a selfish way that is seen as going against their shared ideals[3]. The idea that a researcher might be altruistically secretive about some aspect of the truth of nature is pretty foreign, and to me is a big part of what makes the "infohazard" concept distinctive.


  1. Not 100% unprecedentedly new, or anything, but rare in modern Western discourse pre-Bostrom. ↩︎

  2. I think a lot of people would view those selfish/tribal reasons as reasonable/defensible, but still different from e.g. worrying that such-and-such scientific discovery might damage humanity-at-large's future. ↩︎

  3. Brian Nosek talks about this a lot – academics mostly want to be more open but view being so as against their own best interests. ↩︎

Comment by willbradshaw on Concern, and hope · 2020-07-05T19:04:13.659Z · score: 19 (8 votes) · EA · GW

"Culminating" might be the wrong word, I agree the triggering event was fairly independent.

But I do think people's reactions to the SSC kerfuffle were coloured by their beliefs about the previous controversy (and Scott's political beliefs), and that it contributed to the general feeling I'm trying to describe here.

Comment by willbradshaw on Concern, and hope · 2020-07-05T18:56:01.480Z · score: 26 (11 votes) · EA · GW

I agree that post was very bad (I left a long comment explaining part of why I strong-downvoted it). But I think there's a version of that post, phrased more moderately and trying harder to be charitable to its opponents, that would get a lot more sympathy from the left of EA. (I expect I would still disagree with it quite strongly.)

I think there's a reasonable policy one could advocate, something like "don't link to heavily-downvoted posts you disagree with, because doing so undermines the filtering function of the karma system". I'm not sure I agree with that in all cases; in this case, it would have been hard for me to write this post without referencing that one, I think the things I say here need saying, and I ran this post by several people I respect before publishing it.

I could probably be persuaded to change that part given some more voices/arguments in opposition, here or in private.

(It's also worth noting that I expect there are a number of people here who think comparisons of the current situation to the Cultural Revolution are quite bad, see e.g. here.)

Comment by willbradshaw on Exploring the Streisand Effect · 2020-07-05T15:48:38.465Z · score: 4 (2 votes) · EA · GW

I don't see the connection to counterproductive secrecy.

Comment by willbradshaw on Exploring the Streisand Effect · 2020-07-03T17:05:10.759Z · score: 5 (3 votes) · EA · GW

A third category of things that are distinct from the classic Streisand effect, but similar enough that it is often worth discussing them together, is counterproductive secrecy. That is, cases where, instead of causing information spread by attempting to change the actions of others, you cause it by being ostentatiously secretive yourself.

One thing that would be very useful to me is a good name for this effect, as distinct from the Streisand effect. Like I said in the piece, they're clearly related, but I think distinct enough to merit separate terms, and having a good name would help clarify the conceptual space.

Anyone know any good cases of secrecy (as opposed to censorship) spectacularly backfiring?