Posts

Information hazards: a very simple typology 2020-07-13T16:54:17.640Z · score: 44 (18 votes)
Exploring the Streisand Effect 2020-07-06T07:00:00.000Z · score: 34 (20 votes)
Concern, and hope 2020-07-05T15:08:47.766Z · score: 109 (64 votes)
What coronavirus policy failures are you worried about? 2020-06-19T20:32:41.515Z · score: 18 (6 votes)
willbradshaw's Shortform 2020-02-28T18:19:32.458Z · score: 4 (1 votes)
Thoughts on The Weapon of Openness 2020-02-13T00:10:14.841Z · score: 28 (14 votes)
The Web of Prevention 2020-02-05T04:51:51.158Z · score: 19 (13 votes)
Concrete next steps for ageing-based welfare measures 2019-11-01T14:55:03.431Z · score: 36 (16 votes)
How worried should I be about a childless Disneyland? 2019-10-28T15:32:03.036Z · score: 24 (14 votes)
Assessing biomarkers of ageing as measures of cumulative animal welfare 2019-09-27T08:00:22.716Z · score: 74 (31 votes)

Comments

Comment by willbradshaw on Information hazards: a very simple typology · 2020-07-16T14:48:14.213Z · score: 2 (1 votes) · EA · GW

A mixture of conversations and shared Google Docs. Nothing publicly citable as far as I know.

Comment by willbradshaw on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T08:39:12.046Z · score: 5 (3 votes) · EA · GW

Thanks Vaidehi, great comment. If those numbers are right then the drops in both absolute and relative poverty in both South Asia and Indonesia seem pretty amazing.

Comment by willbradshaw on Max_Daniel's Shortform · 2020-07-14T13:12:16.640Z · score: 5 (3 votes) · EA · GW

Do you think it matters who's right?

I think it matters quite a lot when it comes to assessing where to go from here: in particular, how cautious and conservative to be, and how favourable towards untested radical change.

If things have gotten way better and are likely to continue to get way better in the foreseeable future, then we should probably broadly stick with what we're doing – some tinkering around the edges to fix obvious abuses, but no root-and-branch restructuring unless something goes obviously and profoundly wrong.

Whereas if things are failing to get better, or are actively getting worse, then it might be worth taking big risks in order to get out of the hole.

I've often had conversations with people to my left where they seem way too willing to smash stuff in the process of getting to deep systemic change, which is potentially sensible if you think we're in a very bad place and getting worse, but madness if you think we're in an extremely unusually good place and getting better.

Comment by willbradshaw on Max_Daniel's Shortform · 2020-07-14T13:06:09.605Z · score: 4 (2 votes) · EA · GW

I have relatively little exposure to Hickel, save for reading his Guardian piece and a small part of the dialogue that followed from it, but I don't get the impression he's coming from a position of putting more weight on Sanctity/purity or Authority/respect; in general I'd guess that few people in left-wing social-science academia are big on those sorts of moral foundations, except indirectly via moral/cultural relativism.

Taking Haidt's moral foundations theory as read for the moment, I'd guess that the Fairness foundation is doing a lot of the work in this disagreement. In general, leftists and liberals seem to differ a lot in what they consider culpable harm, and Fairness/exploitation seems like a big part of that.

Comment by willbradshaw on Concern, and hope · 2020-07-13T07:50:57.393Z · score: 7 (4 votes) · EA · GW

So far the comments here have overwhelmingly been (various forms of) litigating the controversy I discuss in the OP. I think this is basically fine – disagreements have all been civil – but insofar as there is still interest I'd be keen to hear people's thoughts on a more meta level: what sorts of things could we do to help increase understanding and goodwill in the community over this issue?

Comment by willbradshaw on Concern, and hope · 2020-07-07T19:03:47.050Z · score: 30 (11 votes) · EA · GW

I'm still pretty sceptical that the post in question was written with a conscious intention to cause harm. In any case, I know of at least a couple of other EAs who have good-faith worries in that direction, so at worst it's exacerbating a problem that was already there, not creating a new one.

(Also worth noting that at this point we're probably Streisanding this dispute into irrelevance anyway.)

Comment by willbradshaw on Concern, and hope · 2020-07-07T08:00:57.244Z · score: 4 (2 votes) · EA · GW

(I have now cut the link.)

Comment by willbradshaw on Concern, and hope · 2020-07-07T07:53:22.271Z · score: 24 (13 votes) · EA · GW

This comment does a good job of summarising the "classical liberal" position on this conflict, but makes no effort to imagine or engage with the views of more moderate pro-SJ EAs (of whom there are plenty), who might object strongly to cultural-revolution comparisons or be wary of SSC given the current controversy.

As I already said in response to Buck's comment:

I agree that post was very bad (I left a long comment explaining part of why I strong-downvoted it). But I think a version of that post that was phrased more moderately and tried harder to be charitable to its opponents would get a lot more sympathy from the left of EA. (I expect I would still disagree with it quite strongly.)

As you say, there aren't many right-wing EAs. The key conflict I'm worried about is between centre/centre-left/libertarian-leaning EAs and left-wing/SJ-sympathetic EAs[1]. So suggesting I need to find a right-wing piece to make the comparison is missing the point.

(This comment also quotes an old version of my post, which has since been changed on the basis of feedback. I'm a bit confused about that, since some of the changes were made more than a day ago – I tried logging out and the updated version is still the one I see. Can you update your quote?)


  1. I also don't want conservative-leaning EAs to be driven from the movement, but that isn't the central thing I'm worried about here. ↩︎

Comment by willbradshaw on 3 suggestions about jargon in EA · 2020-07-06T12:06:48.698Z · score: 11 (6 votes) · EA · GW

One fairly new[1] aspect of how "information hazard" tends to be conceptualised, apart from the term itself, is the idea that one might wish to be secretive out of impartial concern for humankind, rather than for selfish or tribal reasons[2].

This especially applies in academia, where the culture and mythology are strongly pro-openness. Academics are frequently secretive, but typically in a selfish way that is seen as going against their shared ideals[3]. The idea that a researcher might be altruistically secretive about some aspect of the truth of nature is pretty foreign, and to me is a big part of what makes the "infohazard" concept distinctive.


  1. Not 100% unprecedentedly new, or anything, but rare in modern Western discourse pre-Bostrom. ↩︎

  2. I think a lot of people would view those selfish/tribal reasons as reasonable/defensible, but still different from e.g. worrying that such-and-such scientific discovery might damage humanity-at-large's future. ↩︎

  3. Brian Nosek talks about this a lot – academics mostly want to be more open but view being so as against their own best interests. ↩︎

Comment by willbradshaw on Concern, and hope · 2020-07-05T19:04:13.659Z · score: 19 (8 votes) · EA · GW

"Culminating" might be the wrong word, I agree the triggering event was fairly independent.

But I do think people's reactions to the SSC kerfuffle were coloured by their beliefs about the previous controversy (and Scott's political beliefs), and that it contributed to the general feeling I'm trying to describe here.

Comment by willbradshaw on Concern, and hope · 2020-07-05T18:56:01.480Z · score: 26 (11 votes) · EA · GW

I agree that post was very bad (I left a long comment explaining part of why I strong-downvoted it). But I think a version of that post that was phrased more moderately and tried harder to be charitable to its opponents would get a lot more sympathy from the left of EA. (I expect I would still disagree with it quite strongly.)

I think there's a reasonable policy one could advocate, something like "don't link to heavily-downvoted posts you disagree with, because doing so undermines the filtering function of the karma system". I'm not sure I agree with that in all cases; in this case, it would have been hard for me to write this post without referencing that one, I think the things I say here need saying, and I ran this post by several people I respect before publishing it.

I could probably be persuaded to change that part given some more voices/arguments in opposition, here or in private.

(It's also worth noting that I expect there are a number of people here who think comparisons of the current situation to the Cultural Revolution are quite bad, see e.g. here.)

Comment by willbradshaw on Exploring the Streisand Effect · 2020-07-05T15:48:38.465Z · score: 4 (2 votes) · EA · GW

I don't see the connection to counterproductive secrecy.

Comment by willbradshaw on Exploring the Streisand Effect · 2020-07-03T17:05:10.759Z · score: 5 (3 votes) · EA · GW

A third category of things that are distinct from the classic Streisand effect, but similar enough that it is often worth discussing them together, is counterproductive secrecy. That is, cases where, instead of causing information spread by attempting to change the actions of others, you cause it by being ostentatiously secretive yourself.

One thing that would be very useful to me is a good name for this effect, as distinct from the Streisand effect. Like I said in the piece, they're clearly related, but I think distinct enough to merit separate terms, and having a good name would help clarify the conceptual space.

Anyone know any good cases of secrecy (as opposed to censorship) spectacularly backfiring?

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-07-01T12:23:15.740Z · score: 2 (1 votes) · EA · GW

I agree this is a good idea. Not sure about regular comments, but it would be great if shortform posts had a "Promote to full post" button.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-30T12:56:09.435Z · score: 6 (3 votes) · EA · GW

I think sub-fora are a somewhat contentious issue, the counter-argument being that it's good to have the Forum be a clearing-house of EA ideas without too much splintering.

I agree the tag interface could be more discoverable. If you go to https://forum.effectivealtruism.org/tags/all you can see a list of all tags and how many posts each one has, but there doesn't seem to be much functionality beyond a featureless alphabetical list (e.g. it would be cool to allow them to be sorted by number of posts, and for the tags page to be discoverable from the homepage).

Once you get to a specific tag, though, it seems to already have the functionality you're looking for, including different sort orders: https://forum.effectivealtruism.org/tag/investing

Comment by willbradshaw on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T16:02:12.050Z · score: 6 (4 votes) · EA · GW

Thanks Arden. I agree this is probably the best case for why WAW is a longtermist cause.

Comment by willbradshaw on Slate Star Codex, EA, and self-reflection · 2020-06-29T15:36:20.347Z · score: 19 (8 votes) · EA · GW

I actually think it's true that the OP hasn't advocated for censoring anyone. They haven't said that SA or SSC should be suppressed, and if they think it's a good thing that SA has willingly chosen to delete it, well, I'd be lying if I said there weren't internet contributors I think we'd be better off without, even if I would strongly oppose attempts to silence them.

It's important to be able to say things are bad without saying they should be censored: that's basically the core of free-speech liberalism. "I don't think this should be censored, but I think it's bad, and I think it's worrying you don't think it's bad" is on its face a reasonable position, and it's important that it's one people can say.

I downvoted the post for several reasons, but I don't think pro-censorship is one of them. I might be wrong about this. But the horns effect is real and powerful, and we should all be wary of it.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-29T12:26:04.789Z · score: 5 (3 votes) · EA · GW

I think it's already quite common for commenters on posts without these to request them; is there something in the UI you'd like to change to encourage this?

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-29T12:19:39.929Z · score: 7 (2 votes) · EA · GW

I also don't know what the best solution is, or if the best solution is a codebase change (as opposed to just a norm that you should avoid silently downvoting things if you can, unless feedback you agree with is already there).

But I agree this is a problem: downvoting silently achieves the function of allowing the forum to sort and filter content, but fails the function of allowing users to learn and get better.

Comment by willbradshaw on Slate Star Codex, EA, and self-reflection · 2020-06-28T15:18:35.684Z · score: 54 (20 votes) · EA · GW

(As a meta-level point, everyone, downvoting someone for asking for clarification on why you're downvoting someone is not a good look.)

Hi Michelle. I'm sorry you're getting downvotes for this comment. There are several reasons I strong-downvoted this post, but for the sake of "brevity" I'll focus on one: I think that the OP's presentation of the current SSC/NYT controversy – and especially of the community's response to that controversy – is profoundly biased and misleading.


The NYT plans to use Scott Alexander's real name in an article about him, against his express wishes. They have routinely granted anonymous or pseudonymous status to other people in the past, including the subjects of articles, but refused this in Alexander's case. Alexander gives several reasons why this will be very damaging for him, but they plan to do it anyway.

I think that pretty clearly fits the definition of "doxing", and even if it doesn't it's still clearly bad. The post is scathing towards these concerns, scare-quoting "doxing" wherever it can and giving no indication that it thinks the Times's actions are in any way problematic.

In his takedown post, Scott made it very clear that people should be polite and civil when complaining about this:

There is no comments section for this post. The appropriate comments section is the feedback page of the New York Times. You may also want to email the New York Times technology editor Pui-Wing Tam at pui-wing.tam@nytimes.com, contact her on Twitter at @puiwingtam, or phone the New York Times at 844-NYTNEWS.

(please be polite – I don’t know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn’t. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.)

The response has overwhelmingly followed these instructions. People have cancelled their subscriptions, written letters, organised a petition, and generally complained to the people responsible. These are all totally appropriate things to do when you are upset about something! The petition is polite and conciliatory; so are most of the letters I've seen. Some of the public figures I've seen respond on Twitter have used strong wording ("disgraceful", "shame on you") but nothing that seems in any way out of place in a public discourse on a controversial decision.

The OP's characterisation of this? "Attack[ing] a woman of color on [Alexander's] word". Their evidence? Five tweets from random Twitter users I've never heard of, none of whom have more than a tiny number of followers. They provide no evidence of anyone prominent in EA (a high-karma Forum user, say, or a well-known public figure) doing anything that looks like harassment or ad hominem attacks on Ms Tam.

I hope it's obvious why this is bad practice: if the threshold for condemning the conduct of a group is "a few random people did something bad in support of the same position", you will never have to change your mind on anything. Somehow, I doubt the OP had much sympathy for people who were more interested in condemning the riots in Minneapolis than supporting the peaceful protesters; yet here they use a closely analogous tactic. If they want to persuade me the EA community has acted badly, they should cite bad conduct from the EA community; they do not.

The implicit claim that one shouldn't publicly criticise Pui-Wing Tam because she is a woman of colour is also profoundly problematic. Pui-Wing Tam is the technology editor of the NYT, the most powerful newspaper in the world. She is a powerful and influential person, and a public figure; more importantly, she is the powerful and influential public figure directly responsible for the thing all these people are mad about. Complaining to her about it, on Twitter and elsewhere, is entirely appropriate. Obviously personal harassment is unacceptable; if you give me a link to that kind of behaviour, I will condemn it, wherever it comes from. But implying that you can't publicly complain about the conduct of a powerful person if that person is a member of a favoured group is incredibly dangerous.


That's my position on how the OP has presented the current controversy. I think the way they have misrepresented those who disagree with them on this is sufficient by itself for a strong downvote. I also disagree with their characterisation of Scott Alexander and the SSC project, but as I said, I don't want this comment to be any longer than it already is. :-)

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-28T13:44:11.481Z · score: 8 (5 votes) · EA · GW

I do actually quite like the UX mockups for the photo idea, which I think would have the positive effects already described (friendlier impression, easier to track comments). Here are two reasons I'm less keen:

  • People discriminate a lot based on how people look. My impression of someone on Facebook is coloured pretty strongly by their choice of profile picture, for example. I'd predict that attaching images to posts and comments would cause people to give relatively more weight to people who (a) look like them along various dimensions, and (b) have access to good, professional photographs of themselves.
  • There are a lot of users of the Forum who post anonymously or under a pseudonym. If the Forum had ubiquitous images, these users would have to either (a) use no image, (b) use a cartoon/non-human image (as is/was common in Slate Star Codex comment threads, for example), or (c) use a fake photo à la thispersondoesnotexist.com. Apart from the third option, which is ethically somewhat dubious, I think this would be significantly harmful to other users' impression of these users, especially if they are in dispute with named users with real photos, in a way I don't think we want.

Both of these effects are arguably present even in the current, text-only medium, but I think to a far lesser extent. I'm not claiming these effects would necessarily outweigh the benefits, but I think they're real and important, and on balance would currently cause me to lean against images.

(Separately, I'm pretty strongly opposed to gamification, which has a big effect on my behaviour in a way I virtually always think is bad for me. I think it's quite unlikely that the Forum will implement badges/achievements/anything of this sort except karma, but if they did I'd be quite mad. And I think it's quite important that karma is given by users in response to the content you add to the site, not the developers for jumping through hoops.)

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-28T13:22:33.204Z · score: 9 (4 votes) · EA · GW

I also didn't vote on it but I do kind of hate this idea. I definitely don't want anyone else editing my comments "for clarity"; if they want to clarify something, they can leave a reply comment and ask if that's actually what I meant.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-28T13:19:21.995Z · score: 9 (2 votes) · EA · GW

I think in the case of regular comments there's a desire not to let people edit the record too much; if you say something you no longer endorse, the intended action is that you retract it (which applies strikethrough but leaves the comment standing).

Of course, there are some issues with this setup:

  • One can edit one's comments freely, so it's easy enough to remove unwanted content anyway (as we see here, and in the occasional comment consisting entirely of a struckthrough ".").
  • If the original comment is yours and no-one has responded to it, there's no conversation to protect, so I'm not sure blocking deletion makes much sense.
  • Since shortform is implemented as one big comment thread, it's impossible to delete shortform posts except by asking a mod to do it (I've run into this one myself). So one has less power over one's own shortform feed than one's major posts, which seems backwards to me given the intended purpose of shortform.
Comment by willbradshaw on Problem areas beyond 80,000 Hours' current priorities · 2020-06-26T14:00:24.754Z · score: 8 (4 votes) · EA · GW

Yeah, this is why I said "medium-term" rather than "near-term". I agree that calling wild-animal welfare "neartermist" is confusing and perhaps misleading, but I think probably less so than calling it "longtermist", given how the latter term is generally used in EA.

I'm optimistic about wild-animal welfare work achieving a lot of good over the next century or two. I don't expect it to have a major positive impact on the long-run future, except perhaps indirectly via values-spreading.

Comment by willbradshaw on Problem areas beyond 80,000 Hours' current priorities · 2020-06-25T19:45:56.180Z · score: 21 (5 votes) · EA · GW

Nice list, thanks for compiling it!

It would be great to hear your thoughts about putting wild-animal welfare in the "Other longtermist issues" section. I know quite a few people who are sceptical about the value of wild-animal welfare for the long-term future. (I think the medium-term case for it is pretty solid.)

Comment by willbradshaw on Problem areas beyond 80,000 Hours' current priorities · 2020-06-25T19:42:05.942Z · score: 3 (2 votes) · EA · GW

The link at the end of the biomedical science section ("You might also be able to use training in biomedical research to work on other promising areas discussed above, like biosecurity or anti-aging research. Read more.") is also broken, at least for me.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-25T14:00:51.633Z · score: 2 (1 votes) · EA · GW

Probably this should go on LessWrong rather than here, but: it would be great if the Markdown editor could handle basic image formatting, rather than stripping out all the HTML so that all my images revert to maximum width.
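For instance, markup like the following (a hypothetical snippet – the exact attributes don't matter, since any sizing seems to get dropped):

```html
<!-- Hypothetical example: the width attribute is stripped on save,
     so the image renders at full column width -->
<img src="my-figure.png" width="400" alt="My figure" />
```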

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-25T13:58:55.310Z · score: 4 (2 votes) · EA · GW

The reference to LaTeX here isn't very clear to me. Does Elementor provide an alternative equation-rendering system? Or did you mean something else?

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-25T13:57:49.735Z · score: 7 (3 votes) · EA · GW

I agree, I think the Forum has enough very-high-karma content now that randomising it here as well would be a good idea.

Comment by willbradshaw on How should we run the EA Forum Prize? · 2020-06-25T13:55:59.167Z · score: 5 (3 votes) · EA · GW

I agree with this and also expect a "best first post" comment to probably be net harmful.

If there were to be some sort of "best new user" prize, I think it should probably be awarded less frequently, say every 6 or 12 months, for someone whose first post was within that window. I still think this would probably not be the best use of the available funding, but it seems to fit the spirit of the "best first post" idea while avoiding some of the worst side-effects.

Comment by willbradshaw on EA Forum feature suggestion thread · 2020-06-24T21:23:38.803Z · score: 6 (2 votes) · EA · GW

Strongly agree with this, have been very frustrated in the past with how the Forum (via LessWrong) coerces my header usage.

It looks bad in the sidebar too.

Comment by willbradshaw on How should we run the EA Forum Prize? · 2020-06-24T20:07:18.889Z · score: 7 (4 votes) · EA · GW

General comments

In general, I am in favour of having a forum prize of some kind. I personally find it motivating and I think it is a useful way for people to gain credibility within the EA movement. I think it also helps lend the forum an air of seriousness, which is sometimes good and sometimes bad but does probably raise the quality of the content.

From the "giving individuals credibility" perspective, having a few large prizes is good (a $750 prize is large enough to put on a CV for many people, a $50 one is clearly not). From the "motivating good content" perspective, having a moderately larger number of moderately smaller prizes would probably be better.

Perhaps I'm wrong about this, but I've sometimes felt like comment prizes are used to make up the difference on post prizes ("your post was very good but not top-3 this month, so we found a decent comment you made on that post and gave the prize to that instead!"). If so, I think that is good evidence that we want more, smaller prizes.

Comments on specific proposals

Themed prizes (e.g. setting aside one prize for “the best post on topic X”)

This seems like something you might want to do less frequently (annually?) in addition to more regular general prizes.

Giving more prizes, even if they end up being smaller on average

On the whole I think this would be a good idea (see above).

Selecting at least one judge to represent each major EA cause area

I don't feel like any major cause area has been getting shafted here; I can readily recall winners on global health, animal welfare, the long-term future, and meta topics. In the absence of a specific problem, this doesn't seem like a good idea; I feel like it would not provide much benefit and might encourage tribalism/factionalism.

The above comments assume topic-general prizes like the current system; I think themed prizes probably would want some specialist judges involved.

Including a community vote (not just upvotes, but a separate voting process). This would likely supplement judges' votes, not replace them.

A lot of award ceremonies have some kind of "people's choice" award. If you do this I again think it should probably be less frequent than monthly; perhaps quarterly or annually.

Having a special “first post” and/or “first comment” prize for people who make a really good first contribution from an account

Meh. This sounds cute but I don't actually think it would be very valuable, and could actually be harmful on net. I think it's good to set a high quality bar and incentivise people to work up to it, rather than agonising over making their first post perfect even more than they probably already do.

Having separate prizes for orgs/professional researchers and people who contribute to the Forum on more of an “amateur” basis

So I suppose the argument for this is that some people basically get paid to write on the Forum, and it's not surprising that those people (coughSauliuscough) win lots of prizes, which crowds out good content from amateurs. I'm not sure to what extent this is true, and if true, I'm unconvinced that it's bad.

Firstly, a lot of that content is just very good and I want there to be strong incentives to get that stuff on the forum. Secondly, a lot of prize-winning posts are from people who work for EA Orgs but are writing in their private capacity; indeed, you'd expect people who are especially committed to thinking about EA topics to both be more likely to win Forum prizes and be more likely to work for an EA Org. This seems like a good thing we want more of, no less so than contributions from non-Org-employees, so I don't think they should all be put in a special category and forced to compete with each other.

There's a similar argument which is something like "separate out content people write for work vs for pleasure". I think this is more defensible but still probably wrong. If prizes incentivise orgs to post more of their work on the EA Forum, this seems like a good thing that benefits everybody. But I wouldn't be extremely surprised to be argued out of this.

Having more flexible prize amounts (e.g. maybe one post should win all the money in some months if it’s especially good, or maybe money should be distributed according to vote ratios rather than just first/second/third place)

This sounds bad to me, but for fairly vague reasons. I feel like it gives the judges too many degrees of freedom, and that it's probably good practice to have prize amounts be fairly predictable. But I'm not sure about this.

Having judges who are somewhat removed from the community (or finding some other way to reduce the extent to which the Prize may reflect the biases of the community or of central orgs within the community)

This sounds like a great way to remove most of the value of the prize in return for highly dubious gains.

If I look at the winners of the EA Forum Prize, I expect to see exemplars of great EA content. The best people to judge great EA content are EAs, i.e. community members. We could bring in external experts from adjacent fields (global health, AI, etc.), but why?

  • Firstly, those people are likely to be far more limited in the range of posts they can judge, which basically forces us into the "subject-specific prizes" model above; while I think those could be good to have occasionally I wouldn't want them to completely take over the prize.
  • Secondly, those people are almost certain to be less well-aligned with the distinctive values of the EA community, which makes their opinions on which posts should get prizes much less valuable in this particular context.
  • Thirdly, if we do want domain experts to judge a domain-specific EA prize I'm pretty sure we can find them within the EA community, rather than bending over backwards to make them external.
  • And finally, all those fields already have their own fora, their own prizes, and their own ways of sharing and evaluating information. It's fine – indeed, very good – for the particular nexus known as EA to have its own systems and prizes too. It only becomes a problem if we think EA Forum prizes are the sole arbiters of truth and quality.

Getting rid of the Prize entirely without replacing it (one survey respondent, who has written many excellent posts and won at least one Prize, believes it to be “distracting and unnecessarily divisive”)

Has there been any evidence of the prize being divisive, i.e. actually causing conflict? Perhaps there is, but I'm not aware of it. And calling it distracting is confusing to me; is the claim that it incentivises people to write content based on a Keynesian beauty contest, rather than what they actually think it would be best for them to write?

Anyway, I don't have much sympathy with that claim as stated, or for abolishing the prize entirely on that basis, but there might be an alternative interpretation that I'd be more sympathetic to.

Comment by willbradshaw on How should we run the EA Forum Prize? · 2020-06-24T19:37:57.759Z · score: 4 (2 votes) · EA · GW

I am fairly confused by the strong (!) downvote on this comment.

Comment by willbradshaw on Antibiotic resistance and meat: why we should be careful in assigning blame · 2020-06-21T17:26:07.079Z · score: 1 (1 votes) · EA · GW

Cool, thanks.

One comment: while all your caveats about simplified reasoning and so on are well-made and still apply, I would generally be surprised if you could replace a number like this in your analysis with another number three times the size, without affecting anything else, such that you could make the substitution and leave the wording unchanged.

That is to say, if a contribution of 7.5% was "very significant and worth pursuing", I'd expect a contribution of 23% to be extremely significant, and worth making a high (or near-top) priority.

Of course, that's the result for 10%, and 10% is just a made-up number. But I think the general point stands.

Comment by willbradshaw on Antibiotic resistance and meat: why we should be careful in assigning blame · 2020-06-21T10:33:32.593Z · score: 5 (3 votes) · EA · GW

This feels a bit petty, since I don't really disagree with any of your conclusions, but there are some mistakes in the mathematics here.

Let's assume a fraction $f_A$ of all antibiotics used are used in animals, and a fraction $f_H = 1 - f_A$ are used in humans. (In your example, $f_A = 0.75$.)

Let's also assume that antibiotic use in animals is $e$ times as effective at causing a resistance burden in humans per unit antibiotics used. (In your main example, $e = 0.1$.)

Then the total resistance burden in humans is proportional to $f_H + e \cdot f_A$, which in algebraic terms is $(1 - f_A) + e \cdot f_A$.

The fraction of the total burden caused by animal use is then $e \cdot f_A / (f_H + e \cdot f_A)$. If $f_A = 0.75$ and $e = 0.1$, this is $0.075 / 0.325 \approx 23\%$. So, quite a bit more than 7.5%.

If $e = 0.01$ (use in animals is 1% as efficient at causing a resistance burden in humans), this drops to roughly 3%. If $e = 0.5$, it rises to 60%.

So the fraction of the human burden caused by animal use could be quite high even if the per-unit efficiency is quite low.
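
(A minimal sketch of this arithmetic in Python, for anyone who wants to play with the numbers – the function and variable names are mine, not from the original analysis:)

```python
def animal_share_of_burden(f_animal: float, efficiency: float) -> float:
    """Fraction of the total human resistance burden caused by animal use.

    f_animal: fraction of all antibiotics used in animals (f_A above)
    efficiency: resistance burden caused per unit of animal use, relative
        to the same unit used directly in humans (e above)
    """
    f_human = 1 - f_animal
    animal_burden = efficiency * f_animal
    return animal_burden / (f_human + animal_burden)

for e in (0.01, 0.1, 0.5):
    print(f"e = {e}: animal share = {animal_share_of_burden(0.75, e):.1%}")
# e = 0.01: animal share = 2.9%
# e = 0.1: animal share = 23.1%
# e = 0.5: animal share = 60.0%
```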

Comment by willbradshaw on Against opposing SJ activism/cancellations · 2020-06-20T09:20:59.737Z · score: 32 (14 votes) · EA · GW

This post seems doomed to low karma regardless of its quality. You'll get downvotes from people who support aggressive SJ activism, people who think it's very bad and we should fight it, and people who think talking about this at all in public is unwise.

Not that that low karma necessarily means it shouldn't have been written. I fall somewhere between the second group and the third, but I didn't downvote this. I don't fully agree with the argument laid out here (if I did, I think I'd probably think the post shouldn't have been published), but I'm moderately glad the post exists.

Comment by willbradshaw on Against opposing SJ activism/cancellations · 2020-06-20T09:08:21.555Z · score: 1 (1 votes) · EA · GW

I don't think the above reply is supposed to be pasted twice?

Comment by willbradshaw on Antibiotic resistance and meat: why we should be careful in assigning blame · 2020-06-18T19:29:03.182Z · score: 4 (3 votes) · EA · GW

Yeah, I agree that biosafety concerns leading to consolidation, and thus reducing animal welfare, is more of a concern in countries that are on the threshold of industrialising farming. Though I'd guess it would usually be a fairly minor effect compared to the general rising demand for meat as wealth increases, (a) that might not always be the case (China had a catastrophic pig pandemic recently, so I bet safety incentives there are very strong right now), and (b) given how ethically disastrous factory farms are, a small effect could be enough for the thing that caused it to be net bad. (I also haven't read the article.)

As far as people in poorer countries getting cheaper meat, I agree it becomes more complex, but I'm still pretty confident that fewer factory farms is robustly net-good. I don't think meat is sufficiently important to a healthy diet that giving people more of it in exchange for torturing vast numbers of animals is a good trade-off anywhere, even instrumentally, and I'd also guess that if meat gets more expensive there are other dietary luxuries people can transition to on the margin that are only slightly less pleasant.

That's just concerning the direct ethical effects, though. I can't speak to strategic considerations.

Comment by willbradshaw on Antibiotic resistance and meat: why we should be careful in assigning blame · 2020-06-18T15:12:57.776Z · score: 9 (4 votes) · EA · GW

If taxation of veterinary antimicrobials increases the price of meat (both because of the need to pay for antimicrobials and because of the reduced growth rate if less is used), that seems strongly positive to me. Higher prices mean less demand for meat, which means fewer animals in factory farms.

It's not obvious to me that improved biosafety on factory farms would entail making them bigger; they're already pretty enormous, and it's not clear to me how the costs of biosafety would scale with size (this is a weak opinion, I wouldn't be surprised to change it). But in any case, are smaller (but still very big) factory farms any worse for animals than even bigger factory farms? If I imagine doubling the size of a modern (enormous) chicken factory farm, that doesn't obviously seem like it makes the lives of the chickens any more torturous than they already are.

Comment by willbradshaw on Why might one value animals far less than humans? · 2020-06-08T11:07:03.123Z · score: 8 (3 votes) · EA · GW

(1) and (2) would be roughly my answers as well. There's also an instrumental factor (which I'm not sure is in the scope of the original question, but seems important) that human suffering and death has far larger knock-on effects on the future than that of non-human animals.

Regarding (3), is there reason to think joy and elation are only possible for humans? It seems likely to me that food, sex, caring for young, pair bonding etc. feel good for nonhuman animals, dogs seem like they're pretty happy a lot of the time, et cetera. Of course (1) and (2) apply to positive as well as negative experiences – humans are more able to pleasantly anticipate and fondly remember good experiences – but the phrasing here seemed to be making a stronger claim than that.

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-05T13:26:41.428Z · score: 4 (3 votes) · EA · GW

Upvoted for this line, which made me laugh:

I agree that 'you' in a general sense can, but unfortunately this doesn't mean that 'I' specifically can!

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T15:44:41.024Z · score: 8 (6 votes) · EA · GW

True, though the loss of trust seems to fall more on the authorities than the protesters to me.

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T14:51:38.755Z · score: 3 (2 votes) · EA · GW

Sure, I didn't think you were saying that the protests would be a panacea. My main point was less about probability/degree of success and more about counterfactual impact.

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T13:53:55.877Z · score: 11 (8 votes) · EA · GW

Thanks, this is exactly the kind of response I'd like to see.

I agree that the first point points in a pro-protesting direction. The second might also but I am uneasy about it (for a young person, most of the impact of their getting sick is infecting others, so the actual message is "I care enough about this to risk giving others a deadly disease", which is somewhat less attractive). I agree that the third could go either way.

(Notably, the third point makes rioting an even more terrible idea than I already thought it was.)

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T12:20:09.764Z · score: 12 (12 votes) · EA · GW

I think this kind of argument is broadly true (though the true magnitude of the effect is, as you say, very uncertain), but I think it's important to note that these kinds of arguments also apply to politicians, administrators, and the police.

If you could reasonably predict that incidents like this would lead to mass protests (which you could, because it's happened before), and that this could result in a severe increase in the number of pandemic deaths, then you have a duty (even more so than usual) to make sure this does not happen. As an administrator, you should put extra pressure on police to make sure these incidents don't take place. As a police officer, you should take extra care that it doesn't happen on your watch, and that your colleagues know what the kinds of consequences could be. As a politician, you should be making it clear that if something like this happens, heads will very quickly roll.

(Of course, all these people should have been doing this anyway, because incidents like this are a profound moral stain on American institutions. You could reasonably argue that if these groups were receptive to these kinds of arguments, they wouldn't be causing these incidents in the first place. But even given the status quo, everyone in law enforcement and administration should have been taking extra super special care right now.)

Comment by willbradshaw on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T12:09:47.944Z · score: 31 (20 votes) · EA · GW

I suspect that a lot of protesters would be very angry we're even raising these kinds of issues, but...

If we're being consequentialist about this, then the impact of the protests is not the difference between fixing these injustices, and the status quo continuing forever. It's the difference between a chance of fixing these injustices now, and a chance of fixing them next time a protest-worthy incident comes around.

Sadly, opportunities for these kinds of protests seem to come around fairly regularly in the US. So I expect these protests are probably only reducing future injustices by a few years in expectation. Add to that the decent chance that the protests don't achieve very much[1], and it might be even less.

Normally, of course, it would be well worth it. But if it's true that mass protests during a pandemic will cause many thousands of deaths, then the above reasoning becomes pretty important.

Regardless of where a consequentialist analysis would come down, it is a tragedy that people feel they need to choose between missing an opportunity to fix a horrible system of state violence, and not spreading a dangerous pandemic.

This is certainly true.


  1. In particular, it's important to ask how a pandemic affects the chances of success. If it decreases them (say, because people are unusually unsympathetic to people seen as irresponsibly crowding together) then the expected value of these protests (relative to waiting) falls. If it increases them (say, because politicians and public authorities are unusually keen to resolve the crisis and get people off the streets) then that would be a counterargument to my claims here. ↩︎

Comment by willbradshaw on It's OK To Also Donate To Non-EA Causes · 2020-06-03T22:56:23.646Z · score: 6 (4 votes) · EA · GW

Given the framing of discretionary donations, how broad you're willing to go with your spending is entirely up to you. Broader means (sometimes much) more impact but less of...whatever hard-to-exactly-define thing it is that motivates people to donate to specific causes rather than for general impact. I imagine different people will set their thresholds for that trade-off in different places. My main point is that it would be good to explicitly consider how one might broaden the remit, not that there is necessarily a right or wrong place to put the boundary.

On the object level, there is a reading of your comment here that I do disagree with quite strongly, but it doesn't seem terribly valuable to me to argue about it here.

Comment by willbradshaw on It's OK To Also Donate To Non-EA Causes · 2020-06-03T10:29:06.523Z · score: 20 (13 votes) · EA · GW

As a matter of pragmatic trade-offs and community health, I broadly agree with this. However, I do also think it's important to point out that you[1] don't have to throw out all your EA principles when making "emotional" donating decisions. If it's necessary for your happiness to donate to cause area $X$, you can still try to make your donation to $X$ as effective as possible, within your time constraints.

I suspect that the best way to do this is often to think about how narrow the cause area you're drawn to actually is. Would you feel bad if you donated to anything other than exactly $X$, narrowly defined? This is an important question, since if $X$ is the national cause du jour it's likely to be getting a lot of attention and funding, and even small extensions of $X$ beyond what's in the news every day are likely to open up big opportunities to have more impact. The more you can comfortably extend the remit for your donation, the more impact you're likely to have[2].

This has come up in both of the recent questions on the Forum about racial injustice, and not only in comments by me. If your goal is to tackle racism or discrimination broadly, there's no particular reason to limit your concern to recent high-profile cases in the US. I'd predict that dollars going towards, say, helping largely-forgotten Rohingya refugees would be far more cost-effective than contributing even more money to a cause that's currently all over the global news. Even better would be to find a group that's been the victims of horrific attacks that no-one in the West has heard of.

Of course, none of this is to say you have to do that. We're assuming ex hypothesi that this is "discretionary" donating that doesn't count towards your GWWC pledge or whatever, and if the only way for you to not feel guilty is to donate to combating something very specific, like reducing police brutality against racial minorities in the USA, then you should (within this framing) do that. (Though even there, there's a lot of value in thinking about how to do that as effectively as possible, and I'm glad some people have been doing that.)

Overall, for this kind of discretionary/personal-wellbeing donating, I think an algorithm like the following would probably be a good idea:

  1. Consider the cause area you feel like you need to contribute to. Think about a few ways you might extend it (e.g. in space, in time, in mechanism, in species). Would you feel okay with making those extensions? If so, do so, and repeat until your remit is as wide as you can make it without feeling you're betraying the cause (or whatever other feelings are spurring these donations).
  2. Within that remit, think/read/ask about how you could make your donation as effectively as possible, within whatever time and emotional limits apply.
  3. Make your donation in accordance with the findings from (2).

  1. In all cases, I'm using "you" in the general sense, not specifically to address orthonormal. ↩︎

  2. Trivially, the value of the highest-impact opportunity will monotonically increase as the breadth of the remit expands; at full generality, you're just back to EA again, but the principle applies to partial extensions as well. ↩︎

Comment by willbradshaw on What are some good charities to donate to regarding systemic racial injustice? · 2020-06-03T10:17:07.497Z · score: 17 (10 votes) · EA · GW

Just to point out, at the time of writing, that this question is now at 41 karma, which is pretty good. So whoever was downvoting it at the beginning appears to have been outvoted. :-)

As I said in my other comment, I think this is a good question, well-phrased and thoughtful, and I'd be happy to see more like it on the Forum. Thank you for contributing here.

Comment by willbradshaw on What are some good charities to donate to regarding systemic racial injustice? · 2020-06-02T18:00:04.320Z · score: 3 (2 votes) · EA · GW

Good answer. Helping refugees of ethnic cleansing is a good way to go here, I think.