Posts

EA Group Organizer Career Paths Outside of EA 2020-07-14T23:44:10.799Z · score: 33 (16 votes)
Are there robustly good and disputable leadership practices? 2020-03-19T01:46:38.484Z · score: 14 (7 votes)
Harsanyi's simple “proof” of utilitarianism 2020-02-20T15:27:33.621Z · score: 48 (41 votes)
Quote from Strangers Drowning 2019-12-23T03:49:51.205Z · score: 43 (14 votes)
Peaceful protester/armed police pictures 2019-12-22T20:59:29.991Z · score: 21 (11 votes)
How frequently do ACE and Open Phil agree about animal charities? 2019-12-17T23:56:09.987Z · score: 62 (28 votes)
Summary of Core Feedback Collected by CEA in Spring/Summer 2019 2019-11-07T16:26:55.458Z · score: 106 (46 votes)
EA Art: Neural Style Transfer Portraits 2019-10-03T01:37:30.703Z · score: 40 (23 votes)
Is pain just a signal to enlist altruists? 2019-10-01T21:25:44.392Z · score: 65 (28 votes)
Ways Frugality Increases Productivity 2019-06-25T21:06:19.014Z · score: 70 (44 votes)
What is the Impact of Beyond Meat? 2019-05-03T23:31:40.123Z · score: 25 (10 votes)
Identifying Talent without Credentialing In EA 2019-03-11T22:33:28.070Z · score: 42 (19 votes)
Deliberate Performance in People Management 2017-11-25T14:41:00.477Z · score: 28 (28 votes)
An Argument for Why the Future May Be Good 2017-07-19T22:03:17.393Z · score: 28 (30 votes)
Vote Pairing is a Cost-Effective Political Intervention 2017-02-26T13:54:21.430Z · score: 14 (15 votes)
Living on minimum wage to maximize donations: Ben's expenses in 2016 2017-01-29T16:07:28.405Z · score: 28 (21 votes)
Voter Registration As an EA Group Meetup Activity 2016-09-16T15:28:46.898Z · score: 4 (6 votes)
You are a Lottery Ticket 2015-05-10T22:41:51.353Z · score: 10 (10 votes)
Earning to Give: Programming Language Choice 2015-04-05T15:45:49.192Z · score: 3 (3 votes)
Problems and Solutions in Infinite Ethics 2015-01-01T20:47:41.918Z · score: 6 (9 votes)
Meetup : Madison, Wisconsin 2014-10-29T18:03:47.983Z · score: 0 (0 votes)

Comments

Comment by ben_west on What actually is the argument for effective altruism? · 2020-09-30T00:25:41.120Z · score: 2 (1 votes) · EA · GW

Thanks Ben, even though I’ve been involved for a long time, I still found this helpful.

Nitpick: was the acronym intentionally chosen to spell “SIN”? Even though that makes me laugh, it seems a little cutesy.

Comment by ben_west on How have you become more (or less) engaged with EA in the last year? · 2020-09-29T01:52:54.660Z · score: 6 (3 votes) · EA · GW

I can relate to the difficulties of living in a city with few EAs, though I did eventually end up organizing a group that was reasonably successful. I'm curious whether you have participated in any online events (e.g. the icebreakers) and whether those filled some of that void for you.

Comment by ben_west on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-25T22:37:03.748Z · score: 23 (7 votes) · EA · GW

I'm excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:

  1. It's correct to try to do the most good, but people who call themselves "EAs" define "good" incorrectly. For example, EAs might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
  2. It's correct to try to do the most good, but people who call themselves "EAs" are just empirically wrong about how to do this. For example, EAs focus too much on short-term benefits and discount long-term value.
  3. It's incorrect to try to do the most good. (I'm not sure what the alternative you are proposing in your essay is here.)

If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)

Comment by ben_west on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:13:16.516Z · score: 31 (10 votes) · EA · GW

It might be more relevant to consider the output: 500,000 views (or ~80,000 hours of watch time). Given that the median video gets 89 views, it might be hard for other creators to match that output, even if they could produce more videos.

Comment by ben_west on Are there robustly good and disputable leadership practices? · 2020-09-16T20:26:16.733Z · score: 2 (1 votes) · EA · GW

This is excellent, thanks!

Comment by ben_west on How have you become more (or less) engaged with EA in the last year? · 2020-09-15T19:53:17.382Z · score: 6 (3 votes) · EA · GW

My involvement hasn't changed too much – I continue to work at an EA organization, which keeps my level of involvement pretty consistent.

My social circle has become less EA over the past year, which is a combination of people I knew moving away and my failing to stay in touch with the rest during quarantine.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:58:01.485Z · score: 4 (2 votes) · EA · GW

It's honestly mostly "things I currently think are cool" which is probably not the best way to grow a channel but oh well. My most popular content is analysis of TikTok itself and cosmetics analysis/recommendations.

I'm @benthamite on the app. Would love to connect if you join!

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:53:14.962Z · score: 5 (3 votes) · EA · GW

Agreed! I think they are a good example of transitioning from a medium mostly serving older generations to a different medium that serves younger people.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:51:37.097Z · score: 3 (2 votes) · EA · GW

I somewhat agree with this but think it's worth pointing out that a lot of "our positions" are not very complicated or controversial, it's just that most people don't think about the topic. E.g. we just did a video celebrating the extinction of smallpox, and I don't expect that to cause many problems.

Some 80K ideas like this might be the value of doing cheap tests or making ABZ plans. Or even "maybe do a little bit of thinking before deciding on your career." I'd be interested to talk to you all about this if/when you think videos would be beneficial.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T02:56:35.135Z · score: 39 (20 votes) · EA · GW

EA seems reliant on nerdy millennial technology, namely long plaintext social media posts.

I'm interested in communicating in Gen Z ways, which I think roughly means "short amateur videos". I've had moderate success on TikTok (35,000 followers as of this writing), and I would encourage more people to try it out.

There's a nice self-selection where your content is only displayed to 16-year-olds who spend their free time watching math videos (or whatever niche you target), which I expect to be one of the best easily-available audiences of young people.

Comment by ben_west on More empirical data on 'value drift' · 2020-09-11T20:27:03.623Z · score: 2 (1 votes) · EA · GW

> In 2019, only about half of the respondents reported a 5/5 or a 4/5 level of engagement with EA (someone working at an EA organisation would be at ‘5’). So, we should also expect it to be an overestimate of the drop out rate among the more engaged.
>
> In 2020 we will be able to apply the same method among a subset of more engaged respondents

 

My understanding is that David/Rethink has a reasonably accurate model of this, i.e. they can predict how someone would respond to the engagement questions on the basis of how they answered other questions.

It might be interesting to try doing this to get data from prior years.

Comment by ben_west on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T00:50:09.182Z · score: 14 (9 votes) · EA · GW

Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent both with caring a lot about the truth and with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth, they aren't obviously wrong to do so. So signaling the former would be nice.

Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".

Comment by ben_west on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T22:39:00.357Z · score: 11 (7 votes) · EA · GW

My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.

Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)

Comment by ben_west on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-31T16:56:09.911Z · score: 5 (4 votes) · EA · GW

> I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.

I'd be curious how many people you think are not willing to "tolerate real intellectual diversity". I'm not sure if you are saying  

  • "Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost," or  
  • "Anyone who is upset by intellectual diversity isn't someone we want to attract anyway, so losing them isn't a real cost."

(Presumably you are saying something between these two points, but I'm not sure where.)

Comment by ben_west on Informational Lobbying: Theory and Effectiveness · 2020-08-29T00:06:58.151Z · score: 2 (1 votes) · EA · GW

I worked on influencing healthcare policy during both the Obama and Trump presidencies, which I think is about as big a swing as you can get on the executive side. My experience was that there was moderate leeway on the executive side. For example, legislation would require a certain amount of money to be distributed amongst healthcare providers who had "high-quality" care, but "high-quality" shifted from "scores better than their peers" to "reports any amount of quality data to the government." (The latter standard effectively meant that everyone was "high-quality", so the program was approximately useless.) However, the government has a ton of inertia and executives have limited resources, so things often continued on as they were before, even if executives really wanted things to change.

I can think of a couple of ways in which executive branch lobbying can be "sticky":

  1. Something which I didn't appreciate until working on this is that it's often quite hard for the government to actually do the thing it is trying to do. Officials often simply haven't thought through how some policy would affect a stakeholder, or what would happen in some unusual circumstance, simply because there are so many different things to consider. Many of my suggestions were things like "this section contradicts this other section if some circumstance occurs, so you should fix that", and I expect those to stick relatively well because they're pretty uncontroversial.
  2. As I alluded to above, most government employees are nonpolitical staffers who mostly just do their job the way their predecessor trained them. I'm sure you've heard stories about government departments using computer systems from the 1970s or whatever, and a similar thing can happen at the process level. Even if the executive branch has the ability to change the interpretation of some term, they often won't, just because changing is hard.

This is just from my personal experience, and I'm not sure how it would compare to working with other branches of government (or even other executive-branch agencies).

Comment by ben_west on EA Group Organizer Career Paths Outside of EA · 2020-08-28T17:06:21.226Z · score: 3 (2 votes) · EA · GW

Thanks! I added this at the end.

Comment by ben_west on We're (surprisingly) more positive about tackling bio risks: outcomes of a survey · 2020-08-26T20:25:26.288Z · score: 4 (2 votes) · EA · GW

This is pretty surprising to me. Thanks for doing this investigation and sharing the results!

Comment by ben_west on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-20T16:40:01.706Z · score: 4 (2 votes) · EA · GW

Greaves' cluelessness paper was published in 2016. My impression is that the broad argument has existed for 100+ years, but the formulation of cluelessness arising from flow-through effects outweighing direct effects (combined with EAs tending to care quite a bit about flow-through effects) was a relatively novel and major reformulation (though probably still below your bar).

Comment by ben_west on Against the Social Discount Rate (Cowen & Parfit) - Weak refutations · 2020-08-12T18:03:19.618Z · score: 2 (1 votes) · EA · GW

Thanks for writing this up! Minor point:

> I agree that by some moral views, it is not right that a voluntary provider of gifts should be given any privileges, but as Cowen and Parfit admits, this is not the case in a pure utilitarian view. 

Maybe I'm getting confused by the double negatives, but isn't this backwards? A pure utilitarian would argue that no one has any special privileges, right?
 

Apart from that minor point though, I would be interested in refutations to the objection. 

Comment by ben_west on Social Movement Lessons from the US Prisoners' Rights Movement · 2020-08-06T21:23:39.129Z · score: 3 (2 votes) · EA · GW

Thanks!

> I'm hoping that at some point, I'll be able to do a bit more of a roundup / analysis post, where I look at some of the key themes and leanings from across several of our case studies. There might be more scope for making these sorts of claims or estimates in a post like that, though it still might not be worth the time. I'd be interested in your thoughts on that!

Yes, I personally would be interested and would be happy to give my opinions about which of these would be most useful. But (obviously) the priorities of EAA leaders who can put your advice into practice are probably more important.

> I'm afraid I can't really help here. I did write "Is the US Supreme Court a Driver of Social Change or Driven by it? A Literature Review"

Thanks! I hadn't seen that literature review before and it seems interesting. Added it to my reading list.

Comment by ben_west on Social Movement Lessons from the US Prisoners' Rights Movement · 2020-08-04T23:33:50.353Z · score: 3 (2 votes) · EA · GW

Thanks for writing this up! Regarding this:

> Key findings of this report include that incremental successes of modest improvements in welfare may distract advocates’ attention from more fundamental political and systemic issues

I would phrase this differently: I think you are saying that activists focused on the welfare of prisoners over the absolute number of prisoners, and that perhaps this was a mistake. This has clear analogues in the farmed animal welfare movement (e.g. promoting meat replacements versus promoting cage-free systems), but I would not say that either of these is more "fundamental" than the other.

I also thought that this point:

> Litigation may have unintended negative consequences, such as entrenching and legitimizing institutions that advocates are opposed to

was interesting and seems very relevant to animal welfare. But it sounds like maybe you don't think the evidence there is strong? I would be curious if you could give some kind of statement about how confident you are that this "legitimizing" consequence happened and/or how likely it is to happen in farmed animal welfare.

Lastly, you mention this:

> the federal government, including the Supreme Court, seems to have become more hostile to various forms of public interest law during the 1980s

How receptive the legal system is to these challenges is clearly a crucial consideration for how effective they are, so I would be interested in thoughts/resources about the current legal climate.

Comment by ben_west on What posts do you want someone to write? · 2020-07-29T21:55:20.041Z · score: 2 (1 votes) · EA · GW

Thanks! Yeah I should have posted that both of these have now been published, so if anyone else reading this has a request for posts that they haven't stated publicly, consider doing so!

Comment by ben_west on How should we run the EA Forum Prize? · 2020-07-29T04:08:24.603Z · score: 4 (2 votes) · EA · GW

Could you say more about the other sources you are thinking of as better locations? Are they aggregators or are you just thinking about the long tail of e.g. blogs hosted by individual organizations or academic journals?

Crossposting to the Forum is one obvious way that externally hosted content could get here, though it's inelegant.

Comment by ben_west on Best resources for introducing longtermism and AI risk? · 2020-07-22T21:13:45.052Z · score: 5 (3 votes) · EA · GW

Note: this comes from a comment thread that has some more discussion in it for those interested.

Comment by ben_west on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T19:33:16.177Z · score: 23 (9 votes) · EA · GW

What would you recommend as the best introduction to concerns (or lack thereof) about risks from AI?

If you have time and multiple recommendations, I would be interested in a taxonomy. (E.g. this is the best blog post for non-technical readers, this is the best book-length introduction for CS undergrads.)

Comment by ben_west on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-22T18:23:40.350Z · score: 6 (3 votes) · EA · GW

That's a fair point. I guess the thought experiment could be something like: the department is responsible for some set of people's welfare 50 years from now. We have to either convince that department to have a lower discount rate 50 years from now, or adopt some measures such that the people born 50 years from now will have double the consumption (relative to the counterfactual)?

If that's right, the discount rate thing still seems easier. It seems hard to double consumption over a 50 year period, though definitely easier than doubling it immediately.

Comment by ben_west on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-16T23:10:07.899Z · score: 2 (1 votes) · EA · GW

I just looked at the one paper. I'm not sure if other sources disagree.

Even if it was 2, though, I still feel like it would be cheaper?

It's hard for me to think about changing individuals, but if I think of governments: there is some government department which is responsible for the welfare of some population. We have two options:

  1. Convince them to update their models to move their discount rate from 2% to 1%, or
  2. Convince them to adopt some policy which doubles the consumption of everyone within that population

Surely the first one is easier? If only because it's at least in principle possible – even if the US government would magically do whatever I said, I don't know if I could suggest a policy change that would double consumption.

Comment by ben_west on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-04T18:49:49.437Z · score: 10 (5 votes) · EA · GW

Thanks for posting these! I had never read a few of them, and I especially liked "desperation".

Comment by ben_west on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-03T22:50:53.836Z · score: 5 (2 votes) · EA · GW

Thanks for writing this up! It sparked some good discussion here at CEA.

This paper claims that CRRA (γ) is around 1. According to the table in the article (p. 13 in your summary), if γ = 1.1, then we are indifferent between decreasing the discount rate from 2% to 1.87% and doubling consumption.

I admit that I have a very poor model of how hard it is to decrease someone's discount rate, but it seems massively easier to decrease people's discount rates by 13 basis points than to double consumption. Are my intuitions really off about that?

If I'm right, it seems like this model pretty clearly argues for spreading longtermism over accelerating economic growth.
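
For readers who want the symbols pinned down, here is a minimal sketch of the standard objects behind this comparison (assumed background definitions only, not the paper's exact model, which also adds growth and existential risk on top of them):

```latex
% Minimal sketch (assumed background, not the paper's full model):
% discounted utilitarian welfare with discount rate \delta,
% and CRRA (isoelastic) utility with curvature parameter \gamma.
W = \int_0^\infty e^{-\delta t}\, u(c_t)\, dt,
\qquad
u(c) =
\begin{cases}
  \dfrac{c^{1-\gamma}}{1-\gamma}, & \gamma \neq 1, \\[4pt]
  \ln c, & \gamma = 1.
\end{cases}
```

Lowering δ raises the weight on far-future utility directly, while doubling consumption only works through u(c); with γ near 1, utility is roughly logarithmic, so a doubling adds only about ln 2 of utility per period, which is part of why a seemingly small change in the discount rate can trade off against a large change in consumption.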

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-27T15:57:13.139Z · score: 6 (5 votes) · EA · GW

Thanks for clarifying! I understand the intuition behind calling this "neglectedness", but it pushes in the opposite direction of how EAs usually use the term. I might suggest choosing a different term for this, as it confused me (and, I think, others).

To clarify what I mean by "the opposite direction": the original motivation behind caring about "neglectedness" was that it's a heuristic for whether low-hanging fruit in the field exists. If no one has looked into something, then it's more likely that there is low-hanging fruit, so we should probably prefer domains that are less established. (All other things being equal.)

The fact that many people have looked into climate change but we still have not "flattened the emissions curve" indicates that there is not much low-hanging fruit remaining. So an argument that climate change is "neglected" in the sense you are using the term is actually an argument that it is not neglected in the usual sense of the term. Hence the confusion from me and others.

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-26T20:02:31.483Z · score: 5 (4 votes) · EA · GW

> The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will

> Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell - Uri

I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-26T19:55:13.879Z · score: 9 (6 votes) · EA · GW

Thanks for sharing! This does seem like an area many people are interested in, so I'm glad to have more discussion.

I would suggest considering the opposite argument regarding neglectedness. If I had to steelman this, I would say something like: a small number of people (perhaps even a single PhD student) do solid research about existential risks from climate change -> existential risks research becomes an accepted part of mainstream climate change work -> because "mainstream climate change work" has so many resources, that small initial bit of research has been leveraged into a much larger amount.

(Note: I'm not sure how reasonable this argument is – I personally don't find it that compelling. But it seems more compelling to me than arguing that climate change isn't neglected, or that we should ignore neglectedness concerns.)

Comment by ben_west on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-20T22:20:29.953Z · score: 3 (5 votes) · EA · GW

This is really interesting! It seems like there's also compelling evidence for more than 2:

> While there is no direct evidence that any of the 25 [18] species of Hawaiian land birds that have become extinct since the documented arrival of Culex quinquefasciatus in 1826 [19] were even susceptible to malaria and there is limited anecdotal information suggesting they were affected by birdpox [19], the observation that several remaining species only persist either on islands where there are no mosquitoes or at altitudes above those at which mosquitoes can breed and that these same species are highly susceptible to avian malaria and birdpox [18,19] is certainly very strong circumstantial evidence...

> The formerly abundant endemic rats Rattus macleari and Rattus nativitas disappeared from Christmas Island in the Indian Ocean (10°29′ S 105°38′ E) around the turn of the twentieth century. Their disappearance was apparently abrupt, and shortly before the final collapse sick individuals were seen crawling along footpaths [22]. At that time, trypanosomiasis transmitted by fleas from introduced black rats R. rattus was suggested as the causative agent. Recently, Wyatt et al. [22] managed to isolate trypanosome DNA from both R. rattus and R. macleari specimens collected during the period of decline, whereas no trypanosome DNA was present in R. nativitas specimens collected before the arrival of black rats. While this is good circumstantial evidence, direct evidence that trypanosomes caused the mortality is limited

Comment by ben_west on 162 benefits of coronavirus · 2020-05-14T16:54:45.996Z · score: 3 (2 votes) · EA · GW

Yeah, even if it just leads to acceptance that higher education is about signaling, that seems like a step in the right direction to me. It at least lays the groundwork for future innovators who can optimize for signaling as opposed to "education."

Comment by ben_west on 162 benefits of coronavirus · 2020-05-13T18:41:55.921Z · score: 3 (2 votes) · EA · GW

> Re-assessment of education & educational institutions

I'm curious to see what happens here. I know a lot of people who are saying "I'm paying $50,000 a year to watch the same lecture I could have watched on YouTube for free?" Of course, that was also true before quarantine, but somehow quarantine has made it more salient.

I'm not sure whether this salience will last and cause a switch towards nontraditional learning.

Comment by ben_west on 162 benefits of coronavirus · 2020-05-13T18:38:28.629Z · score: 2 (1 votes) · EA · GW

Thanks for this thorough list! Regarding:

> Change of government/leader in some countries: if they did not handle pandemic well

Do you have a sense for how well correlated public opinion and government performance are? At least in the US, my impression is that Trump's approval ratings got a slight bump but are now back to normal levels, and public opinion mostly tracks party allegiance rather than any government policy.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-13T16:57:22.483Z · score: 4 (3 votes) · EA · GW

> I wonder if one could find more credible signals of things like "caring for your employers", ideally in statistical form. Money invested in worker safety might be one such metric.

That seems reasonable. Another possibility is looking at benefits, which have grown rapidly (though there are also many confounders here).

Something which I can't easily measure but seems more robust is the fraction of "iterated games". E.g. I would expect enterprise salespeople to be less malevolent than B2C ones (at least towards their customers), because successful enterprise sales relies on building relationships over years or decades. Similarly managers are often recruited and paid well because they have a loyal team who will go with them, and so screwing over that team is not in their self-interest.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-05T19:59:50.121Z · score: 2 (1 votes) · EA · GW

A minor copyediting suggestion (adding the words in bold):

> Factor 1—characterized by cruelty, grandiosity, manipulativeness, and a lack of guilt—arguably represents the core personality traits of psychopathy. However, scoring highly on factor 2—characterized by impulsivity, reactive anger, and lack of realistic goals—is less problematic from our perspective. In fact, humans scoring high on factor 1 but low on factor 2 are probably more dangerous than humans scoring high on both factors (more on this below).

It's not a big deal, but it took me a minute to understand why you were saying it's both less problematic and more dangerous.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-05T19:15:09.428Z · score: 14 (8 votes) · EA · GW

Thanks for this interesting article. Regarding malevolence among business leaders: my impression is that corporations have rewarded malevolence less over time.

E.g. in the early 1900s you had Frederick Taylor (arguably the most influential manager of the 20th century) describing his employees like this:

> one of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox than any other type.

Modern executives would never say this about their staff, and no doubt this is partly because what's said in the boardroom is different from what's said in public, but there is a serious sense in which credibly signaling prosocial behaviors towards your employees is useful. E.g. 80 years later you have Paul O'Neill, in almost exactly the same industry as Taylor, putting worker safety as his key metric, because he felt that people would work harder if they felt taken care of by the company.

My guess is that corporations which rely on highly skilled workers benefit more from prosocial executives, and that it's hard to pretend to be prosocial over a decades-long career, though certainly not impossible. So possibly one hard-to-fake measure of malevolence is whether you repeatedly succeed in a corporation where success requires prosociality.

Comment by ben_west on Racial Demographics at Longtermist Organizations · 2020-05-05T00:35:18.918Z · score: 24 (11 votes) · EA · GW

Thanks for writing this. Another reference point: YC founders are ~16% black or Hispanic.

(I'm not sure if this is the best reference class, I was just curious in the comparison because the population of people who start YC companies seems somewhat similar to the population who join longtermist organizations.)

Comment by ben_west on What posts do you want someone to write? · 2020-04-30T17:34:53.740Z · score: 2 (1 votes) · EA · GW

You rock, thanks so much!

Comment by ben_west on The Case for Impact Purchase | Part 1 · 2020-04-22T22:48:53.100Z · score: 3 (2 votes) · EA · GW

You can also borrow against the future prize or impact purchase, e.g. as Goldman Sachs allows you to do (in some limited cases). This moves the risk onto diversified private investors.

Comment by ben_west on Why I'm Not Vegan · 2020-04-22T19:22:30.210Z · score: 9 (5 votes) · EA · GW

I have an intuition that this is more of the source of disagreement between you and vegans (as opposed to having different moral weights). My guess is that one could literally prevent three chicken-years for less than $500/year?[1] And also that some vegans' personal happiness is more affected by not eating chickens than by donating $500.

If that's true, then the reason vegans are vegan instead of donating is that they view it as "morality" as opposed to "axiology".

This accords with my intuition: having someone tell me they care about nonhuman animals while eating a chicken sandwich rubs me the wrong way in a manner that having someone tell me they care about the developing world while wearing $100 shoes does not.


  1. As one heuristic: Beyond Meat is $4.59 for 9 ounces. So it would cost $424 to replace all 52.9 pounds Peter says the average American eats in a year. ↩︎

Comment by ben_west on Why I'm Not Vegan · 2020-04-22T19:01:59.772Z · score: 12 (5 votes) · EA · GW

Do the weights really affect the argument? I think Jeff is saying that being omnivorous results in ~6 additional animals alive at any given point. If an animal's existence on a farm is as bad as one human in the developing world is good (a pretty non-speciesist weighting), then it's $600 to go vegan.

$600 is admittedly much more than $0.43, but my guess is that Jeff still would rather donate the $600.

Comment by ben_west on COVID-19 response as XRisk intervention · 2020-04-17T23:28:27.466Z · score: 18 (6 votes) · EA · GW

I generally agree with your response, but wanted to point out one example of establishing credibility: Scott Aaronson says:

> It does cause me to update in the direction of AI-risk being a serious concern. For the Bay Area rationalists have now publicly sounded the alarm about a looming crisis for the human race, well before it was socially acceptable to take that crisis too seriously (and when taking it seriously would have made a big difference), and then been 100% vindicated by events. Where previously they were 0 for 0 in predictions of that kind, they’re now 1 for 1.
> ...
> [After Adam Scholl invites him to a workshop]: Thanks for asking! Absolutely, I’d be interested to attend an AI-risk workshop sometime. Partly just to learn about the field, partly to find out whether there’s anything that someone with my skillset could contribute.

(Note: part of what impressed Scott here was being early to raise the alarm, and that boat has already sailed, so it could be that future COVID-19 work won't do much to impress people like him.)

Comment by ben_west on willbradshaw's Shortform · 2020-04-07T23:28:45.765Z · score: 2 (1 votes) · EA · GW

This is a really interesting point. An additional consideration is that global leaders tend to be older, and hence more at risk (cf. Boris Johnson). You could imagine that their deaths are especially destabilizing.

If the longtermist argument for preventing pandemics is that they trigger destabilization which leads to, say, nuclear war, the age impacts could be an important factor.

Comment by ben_west on What posts do you want someone to write? · 2020-04-01T20:42:10.345Z · score: 2 (1 votes) · EA · GW

Awesome!

I personally would suggest a format of:
1. A one-paragraph summary that any educated layperson can easily understand
2. A one-page summary that a layperson with college-level math skills can understand
3. 2-5 pages of detail that someone with college-level math and Econ 101 skills can understand

This is just a suggestion though, I don't have a lot of confidence that it's correct.

Comment by ben_west on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-30T23:27:13.335Z · score: 7 (4 votes) · EA · GW

Probably more informal than you want, but here's a Facebook thread debating AI safety involving some of the biggest names in AI.

Comment by ben_west on The Precipice is out today in US & Canada! Audiobook now available · 2020-03-26T16:57:58.540Z · score: 4 (3 votes) · EA · GW

See also Toby's AMA.

Comment by ben_west on What posts do you want someone to write? · 2020-03-24T16:54:08.791Z · score: 10 (7 votes) · EA · GW

Defining "management constraints" better.

Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).

It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.