Posts

Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats 2022-04-29T19:30:22.800Z
Why do you find the Repugnant Conclusion repugnant? 2021-12-17T10:00:44.822Z
Biosecurity needs engineers and materials scientists 2021-12-16T11:37:30.799Z
Hamming Questions for Project Planning 2021-12-06T11:27:41.674Z
Should EA Global London 2021 have been expanded? 2021-11-09T08:28:09.818Z
Who are your role models? 2021-11-05T12:41:28.168Z
What are your favourite ways to buy time? 2021-11-02T10:02:52.241Z
Final Version Perfected: An Underused Execution Algorithm 2020-12-03T11:03:45.662Z
Information hazards: a very simple typology 2020-07-13T16:54:17.640Z
Exploring the Streisand Effect 2020-07-06T07:00:00.000Z
Concern, and hope 2020-07-05T15:08:47.766Z
What coronavirus policy failures are you worried about? 2020-06-19T20:32:41.515Z
willbradshaw's Shortform 2020-02-28T18:19:32.458Z
Thoughts on The Weapon of Openness 2020-02-13T00:10:14.841Z
The Web of Prevention 2020-02-05T04:51:51.158Z
Concrete next steps for ageing-based welfare measures 2019-11-01T14:55:03.431Z
How worried should I be about a childless Disneyland? 2019-10-28T15:32:03.036Z
Assessing biomarkers of ageing as measures of cumulative animal welfare 2019-09-27T08:00:22.716Z

Comments

Comment by Will Bradshaw (willbradshaw) on Who's hiring? (May-September 2022) · 2022-08-30T16:38:56.149Z · EA · GW

The Nucleic Acid Observatory is looking for a Research Scientist to co-lead research and development of wastewater monitoring laboratory methods, as part of our general goal of achieving reliable early warning of catastrophic biological threats.

We're looking for someone with deep wet-lab expertise, who's excited about applying that expertise to a key problem in biosecurity. Experience in virology, microbiology or wastewater work is ideal but not required. High-performing hires will have the opportunity to grow into a leadership role within the NAO, helping to build a healthy and productive team and determine the long-term direction of the NAO project.

For more information, visit the MIT website here or reach out to me on the Forum.

Comment by Will Bradshaw (willbradshaw) on Some potential lessons from Carrick’s Congressional bid · 2022-05-19T16:18:46.524Z · EA · GW

It's very much not obvious to me that EAs should prefer progressive Democratic candidates in general, or Salinas in particular.

Speaking personally, I am generally not excited about Democratic progressives gaining more power in the party relative to centrists, and I'm pretty confident I'm not alone here in that.[1]

I also think it's false to claim that Salinas's platform as linked gives much reason to think she will be a force for good on global poverty, animal welfare, or meaningful voting reform. (I'd obviously change my mind on this if there are other Salinas quotes that pertain more directly to these issues.)

There are also various parts of her platform that make me think there's a decent chance that her time in office will turn out to be bad for the world by my lights (not just relative to Carrick). I obviously don't expect everyone here to agree with me on that, and I'm certainly not confident about it, but I also don't want broad claims that progressives are better by EA values to stand uncontested, because I personally don't think that's true.

  1. ^

    To be clear, I think this is very contestable within an EA framework, and am not trying to claim that my political preferences here are necessarily implied by EA.

Comment by Will Bradshaw (willbradshaw) on Bad Omens in Current Community Building · 2022-05-16T00:33:07.685Z · EA · GW

I keep going back and forth on this.

My first reaction was "this is just basic best practice for any people-/relationship-focused role, obviously community builders should have CRMs".

Then I realised none of the leaders of the student group I was most active in had CRMs (to my knowledge) and I would have been maybe a bit creeped out if they had, which updated me in the other direction.

Then I thought about it more and realised that group was very far in the direction of "friends with a common interest hang out", and that for student groups that were less like that I'm still basically pro CRMs. This feels obviously true for "advocacy" groups (anything explicitly religious or political, but also e.g. environmentalist groups, sustainability groups, help-your-local-community groups, anything do-goody). But I think I'd be in favour of even relatively neutral groups (e.g. student science club, student orchestras, etc) doing this.

Given how hard it is to keep any student group alive across multiple generations of leadership, not having a CRM is starting to seem very foolhardy to me.

Comment by Will Bradshaw (willbradshaw) on Against “longtermist” as an identity · 2022-05-14T15:19:27.549Z · EA · GW

I feel like many (most?) of the "-ist"-like descriptors that apply to me are dependent on empirical/refutable claims. For example, I'm also an atheist -- but that view would potentially be quite easy to disprove with the right evidence.

Indeed, I think it's just very common for people who hold a certain empirical view to be called an X-ist. Maybe that's somewhat bad for their epistemics, but I think this piece goes too far in suggesting that I have to think something is "for-sure-correct" before "-ist" descriptors apply.

Separately, I think identifying as a rationalist and effective altruist is good for my epistemics. Part of being a good EA is having careful epistemics, updating based on evidence, being impartially compassionate, etc. Part of being a rationalist is to be aware of and willing to correct for my cognitive biases. When someone challenges me on one of these points, my professed identity gives me a strong incentive to behave well that I wouldn't otherwise have. (To be fair, I'm less sure this applies to "longtermist", which I think has much less pro-epistemic baggage than EA.)

Comment by Will Bradshaw (willbradshaw) on Bad Omens in Current Community Building · 2022-05-13T19:59:19.834Z · EA · GW

Occasionally, apparent coldness to immediate suffering: I've only seen this a bit, but even one example could be enough to put someone off for good.

I would really like to ban the term "rounding error".

Comment by Will Bradshaw (willbradshaw) on Notes From a Pledger · 2022-05-02T11:23:48.403Z · EA · GW
  1. Strong messaging to the effect of "we need talent" gives the impression that there are enough jobs that if you are reasonably skilled, you can get a job.
  2. Strong messaging to the effect of "we need founders", or "just apply for funding" gives the impression that you will get funding.

I'm a bit confused, because this doesn't seem to match the scenario described in the OP that you quoted. My summary of that scenario would be:

  1. An EA org paid the OP to work for them as a contractor;
  2. The org then invited them to apply for an open position for a similar role;
  3. They didn't get the position (presumably because the org found another candidate they thought would be better?).

I have a lot of sympathy for the OP in this scenario, and expect it was a very painful and disheartening experience. I definitely cringe a bit when I read it. But it doesn't seem to me like anyone did anything wrong here -- it just seems like the kind of unfortunate-but-unavoidable situation that comes up all the time in the workplace. But you're saying this is "harmful" and that orgs need to "do a lot better", which suggests that you disagree?

Comment by Will Bradshaw (willbradshaw) on Notes From a Pledger · 2022-05-01T13:22:58.951Z · EA · GW

I really hope that orgs can do a lot better on this, because I think this and similar things are pretty harmful.

Can you elaborate on what part of this you think is harmful, and what would be better?

Comment by Will Bradshaw (willbradshaw) on Against immortality? · 2022-04-29T15:54:06.428Z · EA · GW

Thanks, Owen! I do feel quite conflicted about my feelings here, appreciate your engagement. :)

I do claim that it's good to have articulations of things like this even if the case is reasonably well known

Yeah, I agree with this -- ultimately it's on those of us more on the pro-immortality side to make the case more strongly, and having solid articulations of both sides is valuable. Also flagging that this...

Would I eventually like to move to a post-death world? Probably, but I'm not certain. For one thing I think quite likely the concept of "death" will not carve reality at its joins so cleanly in the future.

...seems roughly right to me.

Comment by Will Bradshaw (willbradshaw) on Increasing Demandingness in EA · 2022-04-29T14:07:06.340Z · EA · GW

(This has not applied evenly. People who were already planning to make EA central to their career are generally experiencing EA as less demanding: pay in EA organizations has gone up, there is less stress around fundraising, and there is less of a focus on frugality or other forms of personal sacrifice. In some cases these changes mean that if someone does decide to shift their career it is less of a sacrifice than it would've been, though that does depend on how the field you enter is funded.)

Thanks, I found this discussion of the ways in which EA is now more vs. less demanding quite clarifying. I appreciate the point that for some people EA is much less demanding than it used to be, while for others it's much more so.

Comment by Will Bradshaw (willbradshaw) on Against immortality? · 2022-04-29T14:05:43.330Z · EA · GW

I felt quite frustrated by this post, because the preponderance of EA discourse is already quite sceptical of anti-ageing interventions (as you can tell by the fact that no major EA funder is putting significant resources into it). I would in fact claim that the amount of time and ink spent within EA in discussing reasons not to support anti-ageing interventions significantly exceeds that spent on the pro side.

So this post is repeating well-covered arguments, and strengthening the perception that "EAs don't do longevity", while claiming to be promoting an under-represented point of view.

Comment by Will Bradshaw (willbradshaw) on Introducing Canopy Retreats · 2022-04-25T14:36:37.022Z · EA · GW

This seems very promising -- thank you for taking this on for the community!

I would personally find it helpful if you could elaborate a bit on what you mean by a "retreat" here, and give a few examples of types of events that would or wouldn't fall within your remit. In my experience there are several quite different types of event that are sometimes called "retreats", so it would be helpful to get more information on which of these you are excited about running, and why.

Comment by Will Bradshaw (willbradshaw) on How should people spend money to be more productive? · 2022-04-14T19:23:09.331Z · EA · GW

The suggestions in the comments on this post might be a helpful starting point, though they generally don't dig into the evidential basis for each suggestion.

Comment by Will Bradshaw (willbradshaw) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T23:04:27.240Z · EA · GW

I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don't think the obligations are any weaker than they were - we should just have a slightly lower cost effectiveness bar for funding things than before.

To me, the most important issue that this (and other comments here) raises is that, as a community, we don't yet have a good model of how an altruist who (rationally/altruistically) places a very high value on their time should actually act. Or, for that matter, how they shouldn't.

Comment by willbradshaw on [deleted post] 2022-04-13T11:17:25.048Z

It manifests as paying CEO-grade salaries for entry level positions in charities.

I'm pretty concerned about the cultural effects of all the money pouring into EA (especially with regard to grifters/vultures), but this is a point I've seen made a couple of times that I think is (currently) misguided.

Most nonprofit workers are absurdly underpaid, in ways that pretty clearly bottleneck the effectiveness of those organisations. That doesn't make for a very good reference class.

While they pay better than other nonprofits, most EA nonprofits still pay significantly below market rate. Their employees are still generally taking a financial hit to work for them, albeit a smaller one than they would working for a traditional nonprofit (or going to grad school).

When lots of prominent EA nonprofits start paying at or above market rates for talent, then I think we should consider worrying about harmfully inflated salaries. Until then, I'm quite happy for EA orgs to reduce the financial cost of doing the most good with one's career.

Comment by Will Bradshaw (willbradshaw) on Announcing Alvea—An EA COVID Vaccine Project · 2022-02-25T01:08:35.548Z · EA · GW

At present, it is basically impossible to advance any drug to market without extensive animal testing – certainly in the US, and I think everywhere else as well. The same applies to many other classes of biomedical intervention. A norm of EAs not doing animal testing basically blocks them from biomedical science and biotechnology; among other things, this would largely prevent them from making progress across large swathes of technical biosecurity.

This seems bad – the moral cost of failing to avert biocatastrophe, in my view, hugely outweighs the moral costs of animal testing. At the same time, speaking as a biologist who has spent a lot of time around (and on occasion conducting) animal testing, I do think that mainstream scientific culture around animal testing is deeply problematic, leading to large amounts of unnecessary suffering and a cavalier disregard for the welfare of sentient beings (not to mention a lot of pretty blatantly motivated argumentation). I don't want EAs to fall into that mindset, and the reactions to this comment (and their karma totals) somewhat concern me.

I wouldn't support a norm of EAs not doing animal testing. But I think I would support a norm of EAs approaching animal testing with much more solemnity, transparency, gratitude and regret than is normal in the life sciences. We need to remember at all times that we are dealing with living, feeling beings, who didn't & couldn't consent to be treated as we treat them, and who should be cared for and remembered. And we need to make sure we utilise animal testing as little as we can get away with, and make what testing we do use as painless as possible.

Finally, while I don't know everyone on the Alvea team personally, those I do know have a strong track record of deeply believing in, and living out, EA values around impartial concern for all sentient beings. I expect that if I had detailed knowledge of their animal testing decisions, I would believe they were necessary and the right thing to do. As an early test case on EAs in animal testing, I think it would be worth the Alvea team responding to this and developing a transparent policy around animal testing – but as a way to set a good example, not because I think there is reason to be suspicious of their decisions or motives.

Comment by Will Bradshaw (willbradshaw) on Hiring EA Developers · 2022-02-18T10:59:29.604Z · EA · GW

Yeah, if we're talking a shortish paragraph given to candidates who ask for it, I think that's fine. I think I'd be fine with that as a norm.

Comment by Will Bradshaw (willbradshaw) on Hiring EA Developers · 2022-02-15T16:05:34.140Z · EA · GW

As someone who's been reading & thinking about hiring a lot lately, I agree with quite a few things here. For example, I agree that providing a clear understanding of when to apply, emphasising mentorship & culture, tackling the remote-work issue head on, and being as straight with candidates as possible about the role, organisation, and application process, all seem good.

However, there are also things that feel like mistakes, some of them quite significant. (I'm open to being persuaded I'm wrong about most of these, especially since this could represent a significant improvement in my group's hiring practices.)  

As a community, I think it would be better if we’d give applicants some kind of feedback.

As I've written elsewhere recently, giving thorough feedback to candidates is very expensive, and frequently isn't the best use of current staff's time. I'm in favour of at least giving some feedback, but I'm currently weakly opposed to any strong cultural expectation of in-depth feedback.

I recommend asking for ONLY a CV, without even filling in a form. I think that would be attractive, and good for us as a community, since it will give the developers more time to do useful things, including applying to more EA orgs.

I agree that it's often a good idea to skip the cover letter, which mainly tests rhetorical ability rather than ability to do the job. I also agree that application forms should be kept reasonably short. But only asking for a CV doesn't seem great to me, for two reasons. Firstly, CVs are typically very hard to blind well for unbiased application review. Secondly, there's a variety of other critical info I need about candidates, and a Google Form or similar is typically the most efficient way (for both us and the candidates) to get that information[1].

Insofar as a longer application form allows for less reliance on interviewing, I think adding more stuff to the application form will often be a good trade-off.

My current-best recommendation (from a startup that is really good at hiring) is to ask the candidate what they prefer.

This currently seems like quite a bad idea to me, for two important reasons.

Firstly, some assessment methods are straight-up better at predicting job performance than others, and we have evidence of which ones those are – we shouldn't let candidates choose worse ones. EA orgs don't put a lot of emphasis on take-home tasks because we enjoy them; we do it because they are better than the alternative for the thing we are trying to do.[2]

Secondly, comparability of assessment is critical to comparing candidates fairly. Letting different candidates choose different assessment methods seems like it would make that impossible.

I am a strong proponent of paying applicants for their time, and otherwise being respectful of their time and needs. But I think we should use the assessment methods we have evidence for, and take a lot of care to make sure we assess every candidate as identically as possible, not allow applicants to pick and choose.

  1. ^

    I could potentially be persuaded that a two-step process, with an initial very short application form followed by a longer one for credible applicants, could be a decent workaround. 

  2. ^

    (And because a lot of us are copying OpenPhil.)

Comment by Will Bradshaw (willbradshaw) on If you're after a job in the US: the H-1B Lottery is in six weeks · 2022-02-13T19:28:27.349Z · EA · GW

Furthermore, the process by which a nonprofit is designated a "research nonprofit" is kind of arcane (for example, it's independent of how the IRS classifies the org in their 501(c)(3) designation). If the org you're applying to hasn't successfully sponsored cap-exempt H-1Bs in the past, expect additional delays while their lawyers argue with USCIS about it.

Comment by Will Bradshaw (willbradshaw) on Is Combatting Ageism The Most Potentially Impactful Form of Social Activism? · 2022-02-11T08:58:16.340Z · EA · GW

All the violent-state-oppression stuff applies to every law equally; I don't think it gives a special reason to focus on laws affecting teenagers. Are you trying to convince us to become anarchists, or that the treatment of teenagers in rich countries is particularly unjust even within a statist framework? If the latter, I think it would be more productive to focus on the evidence you think shows that people's beliefs around teenagers' capacity are false; just linking your book doesn't do much.

Even accepting the broader framing and the claim that teenagers are treated unjustly, you don't make any case that this is competitive with other top causes. Even if we're focusing on societal justice within rich countries (which I don't think is good cause prioritisation), would-be immigrants seem more oppressed than teenagers to me.

(FWIW, I'm basically opposed to compulsory schooling after age 12, think teenagers should be given many more rights than they are, and tentatively think that parents should have substantially less power to override their children's wishes on various things.)

Comment by Will Bradshaw (willbradshaw) on Apply for Professional Coaching · 2022-02-09T18:58:12.379Z · EA · GW

How is this different from just applying to those coaches directly?

Comment by Will Bradshaw (willbradshaw) on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-08T12:15:39.365Z · EA · GW

That's...a lot of karma.

Comment by Will Bradshaw (willbradshaw) on willbradshaw's Shortform · 2022-02-07T10:35:47.835Z · EA · GW

From the Rockefeller Foundation's history page, 1923:

Frederick T. Gates, credited with urging John D. Rockefeller Sr. to launch the Foundation says to his fellow trustees in his last meeting as member of the Board, “When you die and come to approach the judgment of Almighty God, what do you think He will demand of you? Do you for an instant presume to believe that He will inquire into your petty failures or your trivial virtues? No! He will ask just one question: ‘What did you do as a trustee of the Rockefeller Foundation?’”

I think this might be the earliest example I've yet come across of someone Doing The Thing.

Comment by Will Bradshaw (willbradshaw) on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-04T20:46:35.805Z · EA · GW

I commented separately before I noticed the meta edit to this comment. I was going to write another comment admonishing people downvoting Caleb's reply here, but instead I'll just say that I strongly agree with Zach's take here, and that I have pretty bad feelings right now toward the people who downvoted Caleb's comment (post-unendorsement) and reply.

This is speculating about hidden motives in a way I feel uncomfortable about, but I have a bad feeling there might be some political downvoting here, where people are downvoting the comments because they want people outside the community who read this post to see that those comments had negative karma. I hope I'm wrong about that; but it matches a pattern I've seen on other Forum posts connecting to external partisan politics. If I'm not wrong about it, I very strongly condemn it.

Comment by Will Bradshaw (willbradshaw) on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-04T20:36:08.584Z · EA · GW

I think this comment has significantly more negative karma than it did when I last saw it, by which point it was already unendorsed.

I think downvoting a comment once it's been unendorsed is very bad form, and creates bad incentives that work directly against what the unendorse feature is supposed to achieve. If I'm right that people have been doing this, I think they should stop, and preferably undo their votes.

(If people have been downvoting a comment after it was already unendorsed because they wanted it to get hidden-by-default, I think that's even worse.)

I'd endorse a feature where unendorsing a comment prevented further karma changes, or reverted karma to 0, or something. Probably there are important wrinkles here. But I'm in favour of the general class of thing I'm waving at.

Comment by Will Bradshaw (willbradshaw) on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-04T20:29:38.843Z · EA · GW

Yeah, my reading of your comment was in some ways the opposite of Habryka's original take, since I was reading it as primarily directed at people who might support Carrick in weird/antisocial ways, rather than people who might dissent from supporting him.

Comment by Will Bradshaw (willbradshaw) on Which EA orgs provide feedback on test tasks? · 2022-01-31T07:40:35.196Z · EA · GW

I applied to OpenPhil in early 2019, but was rejected after multiple trial tasks. I asked for feedback and got a short feedback statement in response (1-2 short paragraphs). While I was a little frustrated at the minimal feedback, the work trials were very well-compensated, and I didn't ultimately feel hard done by for not getting detailed feedback.

In the end, if the trial task is paid at a reasonable rate, I don't think it is or should be a requirement for organisations to offer detailed feedback to candidates. "We buy your time for a trial task" seems like a fair deal to me, and in-depth feedback should be considered a nice extra. If costly feedback obligations caused orgs to make changes to their hiring process (e.g. trialling fewer candidates, putting less emphasis on trial tasks, or paying candidates less for their time) I expect I would usually think those changes weren't worth it.

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2022-01-29T09:58:47.899Z · EA · GW

While I appreciate that we're all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.

Especially given the claim that "EA needs to make such structural adjustments in order to stay on the right side of history".

Comment by Will Bradshaw (willbradshaw) on Biosecurity needs engineers and materials scientists · 2022-01-29T09:48:10.178Z · EA · GW

I feel conflicted about this comment. On the one hand, I feel like I can see and appreciate the sequence of events that led you to put it here, and sympathise with its content. On the other hand, it's off-topic: this isn't what this post is about, and I'd prefer discussion of software developers in biosecurity to happen somewhere else. Maybe make a Question post?

Comment by Will Bradshaw (willbradshaw) on [Updated] EA conferences in 2022: save the dates · 2022-01-28T08:56:21.384Z · EA · GW

I think calendar quarters (e.g. Q1 = Jan/Feb/Mar) are fairly widely used and understood?

In any case, the EAG organisers seem to need some notation to indicate that they're hoping to hold an event during a rough period (e.g. summer) but don't have a specific date (or even month) yet. If seasons are no good, we need some alternative.

Comment by Will Bradshaw (willbradshaw) on [Updated] EA conferences in 2022: save the dates · 2022-01-28T07:19:19.056Z · EA · GW

Good spot.

If we want to avoid seasons but also be vague, quarters (e.g. "Q2 2022") could work?

Comment by Will Bradshaw (willbradshaw) on Ideas for avoiding optimizing the wrong things day-to-day? · 2022-01-27T08:41:00.317Z · EA · GW

I've been experimenting with different ways to do this for a number of years now, since I'm temperamentally quite susceptible to tunnel-vision/completionism of the kind you describe. Assorted thoughts below.

GTD is a great way to manage your to-do list and make sure you don't miss stuff, and I definitely recommend implementing it or something like it. But I find it a bit lacklustre for actually deciding which tasks to execute on a given day. The same is true for quite a few task-management systems.

One system I've found effective for breaking through ugh fields and focusing on what's important is Final Version Perfected. Eisenhower matrices (or at least the concepts underlying them) can also be useful. Timeboxing can help too – I recommend Complice's pomodoro timer for this, though if you overuse that it can lose its force somewhat.
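
(For readers who haven't come across it, here's a minimal sketch of FVP's core selection pass, assuming Forster's standard rules as I understand them. The example tasks and the `prefer` callback are purely illustrative, and this simplified version re-scans the whole list after each completed task, whereas real FVP only re-scans the tail of the list.)

```python
def fvp_select(tasks, prefer):
    """Return tasks in the order FVP would have you work them.

    tasks:  task descriptions, in the order they were written down.
    prefer: callback taking (candidate, benchmark) and returning True if you
            would rather do `candidate` before `benchmark` right now.
    """
    remaining = list(tasks)
    work_order = []
    while remaining:
        # Scan down the list, "dotting" each task you would rather do than
        # the most recently dotted one. The first task is always dotted.
        dotted = [remaining[0]]
        for task in remaining[1:]:
            if prefer(task, dotted[-1]):
                dotted.append(task)
        # Work the most recently dotted task first, then repeat the scan.
        chosen = dotted[-1]
        work_order.append(chosen)
        remaining.remove(chosen)
    return work_order


# Illustrative interactive use: answer y/n to "Do X before Y?" prompts.
order = fvp_select(
    ["clear inbox", "write grant report", "book flights"],
    lambda a, b: input(f"Do '{a}' before '{b}'? [y/N] ").strip().lower() == "y",
)
print(order)
```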

If you can, having regular check-ins with another human to describe how you're planning to spend your time can be quite effective – it's harder to rationalise time-wasting activities to another person than to yourself.

Finally, an important prerequisite for doing what's important is having some idea of what's important in the first place. For that, some kind of Weekly Review system is extremely valuable – mine is currently a mashup of GTD's system, Complice's built-in reviews, and various personal modifications that have built up over the years.

Comment by Will Bradshaw (willbradshaw) on Get In The Van · 2022-01-22T09:22:05.582Z · EA · GW

I agree with others that this post is excellent.

Comment by Will Bradshaw (willbradshaw) on Launching the EA Groups Resource Centre · 2022-01-19T15:38:59.502Z · EA · GW

Wow, I'm really impressed at how much CEA has been getting done over the past quarter or so.

Comment by Will Bradshaw (willbradshaw) on Phil Torres on Against Longtermism · 2022-01-18T13:09:05.843Z · EA · GW

Reflecting on the above, I think I sound more confident about my take here than I actually am. I do lean in the direction I describe here, but I can see why some reasonable people would disagree with me that what we've seen from Torres is sufficient to push him out of the "actively engaging with critical arguments is good" category and into "this is a bad actor we should just avoid associating with" territory.

But I do think that in cases like this, where there's a credible (if not ironclad) case that someone is a bad actor, it's especially important that you provide opportunities for pushback in the form of counter-critical reading, debate partners, et cetera.

Comment by Will Bradshaw (willbradshaw) on Phil Torres on Against Longtermism · 2022-01-17T19:56:29.375Z · EA · GW

I am, in general, in favour of inviting critical-of-EA speakers, even those whose critiques I think are unfair or ill-founded (and I agree that "of course we're in favour of critiques as long as they're true" is not a viable policy). I think if you gave me a list of prominent EA critics I'd be in favour of most of them being invited as speakers.

But Phil Torres has repeatedly crossed lines that I think should not be crossed. It is not okay to accuse EAs of being white supremacists without credible evidence, or to repeatedly and knowingly misrepresent your opponents in order to gain rhetorical advantage. I can't speak directly to his personal behaviour towards members of this community, but it's my impression that there are some quite problematic patterns there, too.

The world contains bad actors. We should be careful about labelling those who disagree with us as such, mindful of how hard it is to account for our biases when doing so. But when we do have strong evidence that someone is such, ignoring it isn't virtuous; it's irresponsible.

(And yes, I appreciate that very similar arguments to mine could be – and are – made in contexts where I would find their use abhorrent. I think this is just an unfortunate feature of the way the world is.)

Regarding particular arguments Phil has made, I think the bar for "writing someone off" as no-longer worthy of being platformed should be extremely high.

Sounds right to me. Also sounds right that, in the modern world, repeatedly accusing a movement of being tainted with white supremacy using extremely flimsy evidence, in a manner that clearly seems to be about wielding a useful rhetorical weapon rather than anything connected to truth-seeking, is above that bar.

Comment by Will Bradshaw (willbradshaw) on Bryan Caplan on EA groups · 2022-01-17T14:25:04.416Z · EA · GW

Yeah, I think you've encapsulated the two key ways people think about karma, and the difference between them. There was some discussion on LessWrong about this here.

I think probably the ideal would be for everyone to vote purely based on their reaction to the post and not at all in response to its current total. That's probably not feasible – the information about the total is there and people will react to it – but I do think that complaining that a post has the wrong total karma (which is my reading of the top-level comment) is pushing the community towards total-based voting in a way I think isn't great.

Comment by Will Bradshaw (willbradshaw) on My positive experience taking the antidepressant Wellbutrin / Bupropion, & why maybe you should try it too · 2022-01-14T10:48:44.387Z · EA · GW

I took bupropion from about February to November 2021.

After a pretty rough transition I found it to be a quite effective antidepressant, but it gave me very bad insomnia which I needed to take sleeping pills to overcome (this kind of sucked as it meant I had to be very careful to take all my meds at the right time of day, and couldn't increase the dose). I'm also quite confident that it made me more anxious.

That said, I would still definitely recommend it over SSRIs or mirtazapine, both of which have very common and serious side effects that I think are worse than bupropion's for most people.

I'm considering trying bupropion again when I move to the US in 2022, since there is a shorter-half-life version available there that is not available in the UK.

Comment by Will Bradshaw (willbradshaw) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-13T20:34:40.348Z · EA · GW

It wasn't obvious to me, and apparently also not to others, that your statements about "pandemics" were not meant to apply to pandemics in general.

In general, when you realise you have been communicating unclearly, it's a bad idea to blame the people you confused.

Comment by Will Bradshaw (willbradshaw) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-13T13:55:41.374Z · EA · GW

I don't think accusations of off-topic-ness at this point are very helpful.

You have been making strong claims about "pandemics" in general, which others have responded to by pointing out examples of pandemics that don't fit your claims. If by "pandemics" you meant "civilisation-ending pandemics" only, I think it was on you to make that clear.

Comment by Will Bradshaw (willbradshaw) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-13T13:52:08.595Z · EA · GW

The AIDS epidemic is widely considered a pandemic (pandemics are a subset of epidemics). And one of the deadliest pandemics of the 20th century, at that.

In the 19th century, cholera, a faecal-oral pathogen, caused several pandemics, killing very many people. It doesn't do that any more thanks to sanitation in rich countries, but it's certainly not impossible for non-respiratory pathogens to achieve rapid global spread.

Everyone agrees with you that respiratory viruses are the biggest concern, and you've provided some good resources in this thread that I appreciate. But I do think you are being undernuanced and overconfident here.

Comment by Will Bradshaw (willbradshaw) on Phil Torres on Against Longtermism · 2022-01-13T09:53:48.847Z · EA · GW

I would hope that an event inviting someone with as controversial a record as Phil Torres would at least recommend some readings responding to his claims.

More generally, I think Torres's recent engagement with longtermism (particularly around spurious claims of white supremacy, but not only that) crosses the line from valuable criticism into toxic personal attack, and I'm sad to see EA groups inviting him as a speaker.

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2022-01-11T09:06:01.681Z · EA · GW

I don't see where anonea2021 has made that claim. Did you mean to write "property" instead of "state" in this paragraph? (genuine question) Either way, I'm having trouble following what you want to say with this paragraph.

Yes, it seems like there's some crossed wires here.

I claimed that ancaps are "clearly trying to formulate a way for a capitalist society to exist without a state". The intended implicature was that since anarchy = the absence of a state (according to common understanding, the dictionary definition, and etymology) it was therefore proper to call them anarchists.

anonea2021 responded with "From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies." I was confused about this, since it didn't seem like a direct response to my claims. I wasn't sure whether to read it as (a) a claim that unjust hierarchies = a state (which seemed like a bad definition of "state"), or (b) a claim that anarchism wasn't actually about the absence of a state but instead about abolishing unjust hierarchies in general (which seemed like a bad, question-begging definition of "anarchism", given that ~everyone wants to minimise unjust hierarchies).

I tried to respond to the superposition of these two interpretations, which probably led to my phrasing being more confusing than it needed to be. 

I can confirm that this is indeed the view of every other lineage of anarchists that I'm aware of. The anarchist's goal is to minimize unjust hierarchies. And given that private property (esp. of the means of production) is seen as one of the main causes of unjust hierarchies in today's world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus "anarcho-"capitalism.

As before, this begs the question. Everyone wants to minimise unjust hierarchies, so that's not a useful description of anarchism. People who disagree about which hierarchies are unjust, what interventions are effective for reducing them, and what the costs of those interventions are, will end up advocating for radically different systems of government. Some of those will end up advocating for a society without a state, and it's useful to refer to that subset of positions as "anarchist" even if they are very different from each other.

Anarcho-capitalism is really quite different from other forms of capitalist social organisation, and its distinctive feature is the absence of a coercive state. "Anarcho-capitalism" is thus a completely appropriate name for it – indeed, it's hard to see what other name would fit better. Also, it's what they call themselves, and we should heavily lean towards using people's own self-labels.

It's fine to just say "anarcho-capitalism is radically different from other forms of anarchism, and anarchists on the left will typically deeply disagree with its tenets". That much is clear. Putting scare-quotes around "anarcho" is bad for the discourse in multiple ways.

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2022-01-02T11:24:23.190Z · EA · GW

Firstly, I wasn't responding to the OP, I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn't go away even in a world with more diverse funding. You brought up "diversify funding" as a solution to that problem, and I responded that it is helpful but insufficient. I didn't say anything critical of the OP's proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don't understand your accusation of unreasonableness here at all.

Secondly, "have more diversity in funders" is not remotely a concrete proposal. It's a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is "as concrete as [you] can imagine" then we are operating under different definitions of "concrete".

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2021-12-31T18:34:05.244Z · EA · GW

I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.

In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.

Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!


You claim that EA needs to...

diversify funding sources by breaking up big funding bodies 

Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in this instance?

[diversify funding sources] by reducing each orgs’ reliance on EA funding and tech billionaire funding

What sorts of funding sources do you think EA orgs should be seeking, other than EA orgs and individual philanthropists (noting that EA-adjacent academic researchers already have access to the government research funding apparatus)?

produce academically credible work

Speaking as a researcher who has spent a lot of time in academia, I think how much I care about work being "academically credible" depends a lot on the field. In many cases, I think post-publication review in places like the Forum is more robust and useful than pre-publication academic review.

Many academic fields (especially in the humanities) seem to have quite bad epistemic and political cultures, and even those that don't often have very particular ideas of what sorts of problems & approaches are suitable for peer-reviewed articles (e.g. requiring that work be "interesting" or "novel" in particular ways). And the current peer-review system is well-known to be painfully inadequate in many ways.

I don't want to overstate this – I think there are many cases where the academic publication route is a good option, for many reasons. But I've read a lot of pretty bad academic papers in my time, sometimes in prestigious journals, and it's not all that rare for a Forum report to significantly exceed the quality of the academic literature. I don't think academic credibility per se is something we should be aiming for for epistemic reasons. But perhaps you had other benefits in mind?

set up whistle-blower protection

Can you elaborate on what sorts of concrete systems you think would be useful here? Whistle-blower protection is usually intra-organisational – is this what you have in mind here, or are you imagining something more pan-community?

actively fund critical work

This sounds great, but I think is probably quite hard to implement in practice in a way that seems appealing. A lot depends on the details. Can you elaborate on what sorts of concrete proposals you would endorse here?

For example, do you think OpenPhil should deliberately fund "red-team" work they disagree with, solely for the sake of community epistemics? If so, how should they go about doing that?

allow for bottom-up control over how funding is distributed

I think having ways to aggregate small-donor preferences regarding EA grantees is valuable. I don't think it should replace large philanthropic donors with concentrated expertise. But I think I'd have a better opinion if I had a better idea of what you were advocating.

diversify academic fields represented in EA

This isn't something you can just change by fiat. You could modify the core messages of EA to deliberately appeal to a wider variety of backgrounds, but that seems like it has a lot of important downsides. Again, I think I would need a better idea of what exactly you have in mind as interventions to really evaluate this.

make the leaders' forum and funding decisions transparent

These seem like two different cases. I'm generally pro public reporting of grants, but I don't really know what you have in mind for the leaders' forum (or other similar meetings).

stop glorifying individual thought-leaders

I'm guessing for more detail on this we should refer to the section on intelligence from your earlier post? I'm torn between sympathy and scepticism here, and don't feel like I have much to add, so let's move on to...

stop classifying everything as info hazards

OK, but how do you handle actual serious information hazards?

I'm on record in various places (e.g. here) saying that I think secrecy has lots of really serious downsides, and I still think these downsides are frequently underrated by many EAs. I certainly think that there is substantial progress still to be made in improving how we think about and deal with these problems. But that doesn't make the core problem go away – sometimes information really is hazardous, in a fairly direct (though rarely straightforward) way.

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2021-12-31T16:56:10.571Z · EA · GW

I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others' analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.

That said, I don't think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you're biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.

Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you're basically in the same position you were with only one funder. If they're too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.

Comment by willbradshaw on [deleted post] 2021-12-30T13:06:24.577Z

I hadn't realised that your comment on LessWrong was your first public comment on the incident for 3 years. That is an update for me.

But also, I do find it quite strange to say nothing about the incident for years, then come back with a very long and personal (and to me, bitter-seeming) comment, deep in the middle of a lengthy and mostly-unrelated conversation about a completely different organisation.

Commenting on this post after it got nominated for review is, I agree, completely reasonable and expected. That said, your review isn't exactly very reflective – it reads more as just another chance to rehash the same grievance in great detail. I'd expect a review of a post that generated so much in-depth discussion and argument to mention and incorporate some of that discussion and argument; yours gives the impression that the post was simply ignored, a lone voice in the wilderness. If 72 comments represents deafening silence, I don't know what noise would look like.

[Edited to soften language.]

Comment by willbradshaw on [deleted post] 2021-12-30T12:48:52.164Z

I believe GiveWell has corrupted itself

Is it so hard to believe reasonable people can disagree with you, for reasons other than corruption or conspiracy?

What is your credence that you're wrong about this?

Comment by willbradshaw on [deleted post] 2021-12-30T12:43:19.971Z

Do you believe that the following representation of the incident is unfair?

Yes, at present I do.

I haven't yet seen evidence to support the strong claims you are making about Julia Wise's knowledge and intentions at various stages in this process. If your depiction of events is true (i.e. Wise both knowingly concealed the leak from you after realising what had happened, and explicitly lied about it somewhere), that seems very bad, but I haven't seen evidence for that. Her own explanation of what happened seems quite plausible to me.

(Conversely, we do have evidence that MacAskill read your draft, and realised it was confidential, but didn't tell you he'd seen it. That does seem bad to me, but much less bad than the leak itself – and Will has apologised for it pretty thoroughly.) 

Your initial response to Julia's apology seemed quite reasonable, so I was surprised to see you revert so strongly in your LessWrong comment a few months back. What new evidence did you get that hardened your views here so much?

And that since "the actual consequences were so minor and that the alternative hypothesis (that it was just a mistake) is so plausible" this doesn't really matter?

It matters – it was a serious error and breach of Wise's duty of confidentiality, and she has acknowledged it as such (it is now listed on CEA's mistakes page). But I do think it is important to point out that, other than having your expectation of confidentiality breached per se, nothing bad happened to you. 

One reason I think this is important is because it makes the strong "conspiracy" interpretation of these events much less plausible. You present these events as though the intent of these actions was to in some way undermine or discredit your criticisms (you've used the word "sabotage") in order to protect MacAskill's reputation. But nobody did this, and it's not clear to me what they plausibly could have done – so what's the motive?

What sharing the draft with MacAskill did enable was a prepared response – but that's normal in EA and generally considered good practice when posting public criticism. Said norm is likely a big part of the reason this screw-up happened.

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2021-12-29T21:46:04.619Z · EA · GW

I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be "responsible for ensuring harmful and wrong ideas are not widely circulated" through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.

A couple of commenters here have edged closer to this strong view than I'm comfortable with, and I'm happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.

That said, I do agree that "consistently making bad arguments should eventually lead to the withdrawal of funding", and that this problem is hard (see my other reply to Guy below).

Comment by Will Bradshaw (willbradshaw) on Democratising Risk - or how EA deals with critics · 2021-12-29T21:37:23.512Z · EA · GW

The argument is too vague to counter: how do you disprove claims about unspecified problems with unspecified tools in unspecified contexts?

Halstead gives one alternative (thresholding) and names some specific problems with it. A productive response that considered this inadequate might have named some others.

I'd like to add that as someone whose social circle includes both EAs and non-EAs, I have never witnessed reactions as defensive and fragile as those made by some EAs in response to criticism of orthodox EA views. This kind of behaviour simply isn't normal.

The original post here is substantially upvoted, as are most posts criticising EA in these general terms. There are comments both supportive and critical of the piece that have received substantial upvotes. The fact that your comments here are being downvoted says more about your approach to commenting than about EAs' receptiveness to criticism.