Posts

How important is it for new hires to be EA-aligned? 2020-10-02T20:47:08.731Z · score: 12 (10 votes)
BERI seeking new collaborators 2020-05-06T23:11:18.462Z · score: 17 (9 votes)

Comments

Comment by sawyer on Analysis: Which animal products cause the most animal suffering and deaths in the United States? · 2020-10-22T21:14:56.483Z · score: 2 (2 votes)

Totally, that makes sense. Unfortunately, I think this is beyond my expertise at this point, but I appreciate you offering! For anyone else watching, here's the idea: I'm envisioning something like the Excel tool created for this SSC adversarial collaboration, where a user inputs their subjective "human life-year equivalents" for various animals, connects those to your data and analysis, and outputs...something. Maybe new charts, but more realistically something more basic.
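To make the idea concrete, here's a minimal sketch (in Python rather than Excel) of the kind of calculation I have in mind. Every number below is a made-up placeholder, not a figure from your analysis; the species list and weights are purely illustrative.

```python
# Sketch of a "moral weight" adjustment tool. All figures are
# placeholders for illustration, not data from the original analysis.

# Days of animal life required per serving of each product (placeholder).
days_of_life_per_serving = {
    "beef": 0.5,
    "chicken": 2.0,
    "farmed salmon": 3.0,
}

# User-supplied subjective weights: how much one animal's experience
# counts relative to a human's ("human life-year equivalents", placeholder).
moral_weight = {
    "beef": 0.1,
    "chicken": 0.02,
    "farmed salmon": 0.005,
}

def weighted_suffering(product: str) -> float:
    """Suffering per serving, scaled by the user's moral weight."""
    return days_of_life_per_serving[product] * moral_weight[product]

# Re-rank products by morally weighted suffering.
for product in sorted(days_of_life_per_serving, key=weighted_suffering, reverse=True):
    print(f"{product}: {weighted_suffering(product):.4f} weighted days per serving")
```

A user could plug in their own weights and immediately see how the rankings shift, which is the "something more basic" I have in mind.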

Comment by sawyer on Analysis: Which animal products cause the most animal suffering and deaths in the United States? · 2020-10-21T19:06:06.266Z · score: 6 (5 votes)

Very cool analysis! I wonder how these orderings would change if the suffering levels were adjusted by brain complexity / capacity for suffering. In other words, most people think that killing or hurting a cow is worse than killing or hurting a salmon, but your analysis seems to treat them as equivalent. I realize that this would introduce a lot more uncertainty into the calculations, and that there's potential for significant moral disagreement on this topic, but has there been any effort to do something like this with your data?

Comment by sawyer on List of EA-related organisations · 2020-10-21T17:20:41.128Z · score: 5 (3 votes)

This is a great list, and I really appreciate your "what this is" / "what this is not" introduction! I shared some thoughts elsewhere that are somewhat specific to BERI but also relate to the more general question of what it means to be "EA-related" or "EA-aligned": https://forum.effectivealtruism.org/posts/fXcN6kBpzfqjowc5D/does-the-berkeley-existential-risk-initiative-self-identify?commentId=zxCh3eDYpJNzn2Djj

Comment by sawyer on Getting money out of politics and into charity · 2020-10-06T18:46:20.774Z · score: 2 (2 votes)

I would use this. At least, I would use it in the early stages to help build momentum, and I would also share it with my politically-minded friends.

Other commenters have mentioned something similar, and I agree: your framing and marketing will be really important. Maybe talk to a sociologist or psychologist about why people donate money to political campaigns? It would probably help if your website catered to those same motivations.

Comment by sawyer on Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization? · 2020-09-15T20:23:02.198Z · score: 24 (13 votes)

I'm the Executive Director of BERI. Thanks for the question, and sorry no one's answered it until now!

BERI doesn't explicitly identify as an EA-aligned organization, but I wouldn't actively disagree with someone who called BERI an EA org. Some ways in which we are related to the EA movement:

  • Our mission is to improve human civilization’s long-term prospects for survival and flourishing. That sounds pretty EA-aligned to me.
  • We shared an office with CEA for a year.
  • BERI job openings have been posted on the 80k job board, and we had a table at an EAG SF job fair in 2019.
  • Multiple past and current BERI employees (including myself) personally identify as EA-aligned.

Despite these links, I haven't felt the need to state our alignment explicitly (e.g., on our website). I have some worries that this sort of question can contribute to an in-group/out-group mentality. BERI works with people who want to reduce existential risk or otherwise improve human civilization’s long-term prospects for survival and flourishing; we don't require that our collaborators identify as EA. I worry that if we explicitly branded ourselves as EA, potential collaborators who didn't personally identify as EA-aligned would feel less inclined to reach out.

Put another way: BERI has historically been an "EA friendly" organization, and will likely continue to be so. But "aligned" has some implications that I don't think are necessary or helpful for us as an organization.