EA Group Organizer Career Paths Outside of EA 2020-07-14T23:44:10.799Z
Are there robustly good and disputable leadership practices? 2020-03-19T01:46:38.484Z
Harsanyi's simple “proof” of utilitarianism 2020-02-20T15:27:33.621Z
Quote from Strangers Drowning 2019-12-23T03:49:51.205Z
Peaceful protester/armed police pictures 2019-12-22T20:59:29.991Z
How frequently do ACE and Open Phil agree about animal charities? 2019-12-17T23:56:09.987Z
Summary of Core Feedback Collected by CEA in Spring/Summer 2019 2019-11-07T16:26:55.458Z
EA Art: Neural Style Transfer Portraits 2019-10-03T01:37:30.703Z
Is pain just a signal to enlist altruists? 2019-10-01T21:25:44.392Z
Ways Frugality Increases Productivity 2019-06-25T21:06:19.014Z
What is the Impact of Beyond Meat? 2019-05-03T23:31:40.123Z
Identifying Talent without Credentialing In EA 2019-03-11T22:33:28.070Z
Deliberate Performance in People Management 2017-11-25T14:41:00.477Z
An Argument for Why the Future May Be Good 2017-07-19T22:03:17.393Z
Vote Pairing is a Cost-Effective Political Intervention 2017-02-26T13:54:21.430Z
Living on minimum wage to maximize donations: Ben's expenses in 2016 2017-01-29T16:07:28.405Z
Voter Registration As an EA Group Meetup Activity 2016-09-16T15:28:46.898Z
You are a Lottery Ticket 2015-05-10T22:41:51.353Z
Earning to Give: Programming Language Choice 2015-04-05T15:45:49.192Z
Problems and Solutions in Infinite Ethics 2015-01-01T20:47:41.918Z
Meetup : Madison, Wisconsin 2014-10-29T18:03:47.983Z


Comment by ben_west on Are there robustly good and disputable leadership practices? · 2021-01-28T00:55:49.334Z · EA · GW

That's fair. My understanding though is that management training doesn't seem very useful in general, implying that either the things they are teaching aren't very useful or people aren't very good at filtering to find the parts that are useful to them.

Comment by ben_west on (Autistic) visionaries are not natural-born leaders · 2021-01-26T20:51:17.598Z · EA · GW

indicating that I'm not making such a claim about people I discuss in the post, but rather my impression that they exhibited a host of traits typically associated with autism/asperger's.

FWIW I don't interpret title words being in parentheses as indicating it's the author's impression. I interpreted your title as meaning something like "I think probably all visionaries are not natural-born leaders, but I'm more confident that autistic ones are not."

Comment by ben_west on (Autistic) visionaries are not natural-born leaders · 2021-01-26T20:38:20.555Z · EA · GW

Thanks for writing this. I feel like it's written with an implication of something like "you can be bad at management but eventually learn", but I think another theory is something more like "you can win the lottery without being good at math".

E.g. a common explanation for the success of the PayPal mafia is that they became rich when everyone else in tech became poor, and were therefore able to purchase stakes in a bunch of companies and then just join the most successful or otherwise get an "unfair" advantage. This seems roughly true of Musk, as I understand it.

Another interpretation is something like "executive people management either doesn't matter, or matters in a way substantially different from how people usually think it should matter." Successful executives have a wide range of approaches (including, as you point out, some which seem intuitively terrible), and one interpretation of this is that your approach actually doesn't matter very much. I've remarked before that there seemed to be surprisingly few robustly good management practices.

I'm curious whether you have opinions about which of these interpretations are correct, or if there's something else you take away from these stories?

Comment by ben_west on Ranking animal foods based on suffering and GHG emissions · 2021-01-26T05:35:49.080Z · EA · GW

Do you have suggestions for a new domain name? :)

Comment by ben_west on EAG survey data analysis · 2021-01-25T16:21:34.409Z · EA · GW

On behalf of CEA, I'd like to extend a huge thank you to the SEADS team. The correlation between satisfaction, LTR (likelihood to recommend), and other variables (or lack thereof) is something that's featured in numerous discussions here at CEA, and I would encourage all EA event organizers to consider it. Their demographic analysis has informed our diversity work (e.g. before this analysis, we suspected there would be more of a correlation between gender/ethnicity and connections).

Also, while not mentioned in this document, the primary metric that the EA Forum uses was changed because of their work.

And of course, I greatly appreciate them not just doing this analysis, but also taking the time to clean it up and present publicly!

Comment by ben_west on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-24T23:08:48.790Z · EA · GW

That makes sense; your interpretation does seem reasonable, so perhaps a rephrase would be helpful.

Comment by ben_west on Ranking animal foods based on suffering and GHG emissions · 2021-01-21T02:28:26.630Z · EA · GW

This is awesome! I like the model, and the UI is intuitive and clean. Two requests/suggestions:

  1. Could you say "eggs from caged hens" or something instead of just "caged hen"? And similarly "chicken meat" instead of "broiler"? Or something like that – I think many people aren't familiar with those more technical terms.
  2. Would you be able to get a simpler domain name? I'd like to direct people to this, and I think the current name will be hard to remember.
Comment by ben_west on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-21T00:43:10.871Z · EA · GW

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

As a toy example, say that f is some bounded sigmoid function, and my utility function is to maximize f(number of good outcomes); it's always going to be the case that f(x+1) > f(x), so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging. (Correct me if this is wrong though.)
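To make the toy example concrete, here is a minimal sketch (my own illustrative f and numbers, not anything from the original thread): a bounded sigmoid is strictly increasing, so more good outcomes is always strictly better, but a mugger's astronomically large promise can gain at most the bound.

```python
import math

def f(x):
    """A bounded sigmoid: strictly increasing in x, but capped at 1."""
    return 1 / (1 + math.exp(-x / 1000))

# Scope sensitivity: one more good outcome is always strictly better.
print(f(1001) > f(1000))  # True

# But not Pascal's-muggable: even an astronomical promise is worth at
# most the bound (1), so a tiny probability of a huge payoff contributes
# almost nothing in expectation.
mugger_ev = 1e-20 * (f(1e30) - f(0))  # at most 1e-20 * 0.5
print(mugger_ev < 1e-19)  # True
```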

Comment by ben_west on Things CEA is not doing · 2021-01-20T01:48:21.606Z · EA · GW

(These are personal comments, I'm not sure to what extent they are endorsed by others at CEA.)

Thanks for writing this up Ozzie! For what it's worth, I'm not sure that you and Max disagree too much, though I don't want to speak for him.

Here's my attempt at a crux: suppose CEA takes on some new thing, and as a result Max manages me less well, making my work worse, but does that new thing better (or at all) because Max is spending time on it.

My view is that the marginal value of a Max hour is inverse U-shaped for both of these, and the maxima are fairly far out. (E.g. Max meeting with his directs once every two weeks would be substantially worse than meeting once a week.) As CEA develops, the maximum marginal value of his management hour will shift left while the curve for new projects remains constant, and at some point it will be more valuable for him to think about a new thing than speak with me about an old thing.

Please enjoy my attached paint image illustrating this:
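A toy numerical version of this picture (all curves and numbers invented purely for illustration; nothing here reflects actual CEA time allocations): model the marginal value of the h-th weekly hour on an activity as an inverse-U curve peaking at some number of hours, and check when the next management hour stops beating the first new-project hour as the management peak shifts left.

```python
import math

def mv(h, peak):
    """Toy inverse-U marginal value of the h-th weekly hour on an
    activity whose marginal value peaks at `peak` hours/week."""
    return math.exp(-((h - peak) ** 2) / 50.0)

current_mgmt_hours = 10  # hypothetical current management time

# Early on: management's marginal value peaks far out (15h/week),
# so the 11th management hour beats the 1st new-project hour.
print(mv(current_mgmt_hours + 1, peak=15) > mv(1, peak=8))  # True

# As the org matures, the management peak shifts left (to 3h/week):
# now the 11th management hour loses to the 1st new-project hour.
print(mv(current_mgmt_hours + 1, peak=3) > mv(1, peak=8))   # False
```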

I can think of two objections:

1. Management: Max is currently spending too much time managing me. Processes are well-developed and don't need his oversight (or I'm too stubborn to listen to him anyway or something) so there's no point in him spending so much time. (I.e. my "CEA in the future" picture is actually how CEA looks today for management.)
2. New projects: there is super low hanging fruit, and even doing a half-assed version of some new project would be way more valuable than making our existing projects better. (I.e. my "CEA in the future" picture is actually how CEA looks today for new projects.)

I'm curious if either of those seem right/useful to you?

Comment by ben_west on Training Bottlenecks in EA (professional skills) · 2021-01-20T01:07:56.848Z · EA · GW

Thanks for writing this up Michelle! I would be excited for you to write more things like this in the future. Regarding this:

The more similar to mine someone’s situation is, the more likely they’ll be able to recommend resources tailored to me

A common observation[1] is that firms retain older employees but rarely hire them. One explanation for this is that organization-specific knowledge (what acronyms mean, how you make a project plan, etc.) is valuable, but general-purpose skills aren't as valuable, so there's no point in recruiting someone who has 30 years of experience from your competitor. (Or, alternatively: too few people actually learn valuable general-purpose skills for this to show up in the data.)

This roughly seems correct to me, anecdotally.

To the extent that this is accurate in EA, it might imply that EA-specific communication norms or other EA-specific things are the most valuable to train.

An additional hobbyhorse of mine is that certification might be more valuable than training. Having a mentor who can teach you things is nice, but it might actually be more valuable for these skilled and trusted mentors to evaluate people's existing abilities and then credibly certify them.

  1. See e.g. Are older workers overpaid? A literature review: "Theories emphasizing specific human capital are able to explain why firms employ older workers but hardly ever hire them." ↩︎

Comment by ben_west on Why EA meta, and the top 3 charity ideas in the space · 2021-01-20T00:40:54.766Z · EA · GW

Thanks for sharing this! All three of these seem valuable.

A couple questions about the EA training one:

  1. You give the examples of operations skills, communication skills, and burnout prevention. These all seem valuable but not differentially valuable to EA. Are you thinking that this would be training for EA-specific things like cause prioritization or that they would do non-EA-specific things but in an EA way? If the latter, could you elaborate why an EA-specific training organization like this would be better than people just going to Toastmasters or one of the other million existing professional development firms?
  2. Sometimes when people say that they wish there were more EA's with some certain skill, I think they actually mean that they wish there were more EA's who had credibly demonstrated that skill. When I think of EA-specific training (e.g. cause prioritization) I have a hard time imagining a 3 week course[1] which substantially improves someone's skills, but it seems a little more plausible to me that people could work on a month-long "capstone project" which is evaluated by some person whose endorsement of their work would be meaningful. (And so the benefit someone would get from attending is a certification to put on their resume, rather than some new skill they have learned.) Have you considered "EA certification" as opposed to training?

  1. I think there are weeks long courses like "learn how to comply with this regulation" which are helpful, but those already exist outside EA. ↩︎

Comment by ben_west on Incompatibility of moral realism and time discounting · 2020-12-17T23:05:48.716Z · EA · GW

Thanks for posting this!

You might be interested in this from On the Overwhelming Importance of Shaping the Far Future:

The Separated Worlds: There are only two planets with life. These planets are outside of each other’s light cones. On each planet, people live good lives. Relative to each of these planets’ reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created. If we treat space and time asymmetrically, we would have to claim that, relative to the reference frame of the planets, this outcome was not as good as it is relative to the reference frame of the comet. But this is very hard to believe. The value of this possible world should not be relative to any reference frame.

Also it's worth pointing out that "regular claims about the world (like 'Elsa is taller than Anna')" are also not "real" in the sense you are using the term. I'm not super familiar with the subject, but I wouldn't be surprised if many moral realists are okay describing moral claims as "only" as real as claims about length.

Comment by ben_west on Careers Questions Open Thread · 2020-12-16T19:29:27.138Z · EA · GW

My experience with bioinformatics is almost exclusively on the industry side, and more the informatics than the bio. With that caveat, a few thoughts:

should I prioritize developing skills that will make me more employable and E2G (e.g. develop and apply sexy, ad hoc methods to rich-person illnesses in a more mainstream bioinformatics-y role)

My experience is that the highest earning positions are not "sexy" (in the way I think you are using the term). I recall one conference I attended in which the speaker was describing some advanced predictive algorithm, and a doctor in the back raised their hand and said "this is all nice but I can't even generate a list of my diabetic patients so could you start with that please?"

This might also address your question "how easy is it to, say, break into industry data science for anthropology graduates with experience in computational stats methods development?" – I think it depends very much on what you mean by "data science". A lot of the most successful bioinformatics companies' products are quite mundane by academic standards: alerting clinicians to well-known drug-drug interactions, identifying patients based on well validated reference ranges for lab tests, etc. My impression is that getting a position at one of these places is approximately similar to getting any other programming job. If you are looking for something more academic though, the requirements are different.

focus more on greater blights afflicting larger numbers of human and non-human animals (say, to understand differential responses to tropical diseases, or maybe variation in the human aging process, or pivot to food science and work on cultured meat or something, as well as work on more interpretable methods)

A problem I suspect you will run into is that methods development requires (often quite large) data sets. I get the sense from your brief bio that you aren't interested in doing any wet lab work, meaning that if you were to work on, say, cultured meat, you would need a data set from some collaborator.

If I were you, I might try to resolve this first. I know GFI has an academic network you can join and you could message people there about the existence of data sets.

Also, you might be interested in OpenPhil's early career GCBR funding. Even if you don't need funding, they might be able to connect you with useful collaborators.

Comment by ben_west on What do you think about the "Initiative to Accelerate Charitable Giving", a new US legislative proposal? Yay or nay? · 2020-12-15T23:18:19.918Z · EA · GW

I find it hard to come up with an argument supportive of this proposal, but as one clarification: the proposal is that donors could choose to create a DAF with no time limit, but where the donor receives only capital gains tax benefits at the time of donation, and income tax benefits at the time of disbursement. Many large donors get most of their income through capital gains, so maybe aren't too bothered by this, and small donors might receive some benefit by being able to save up their donations for several years and then receive income tax benefits all at once when they disburse. (This would be helpful if they normally don't donate enough per year to get over the standard deduction but would be able to get over it after saving up donations for several years.)

My guess is that this is mostly harmful for people with low six-figure incomes who want to donate a substantial portion of their incomes and wait > 15 years.
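A toy illustration of the small-donor bunching effect (purely hypothetical round numbers, not actual tax figures, and ignoring other itemized deductions): a donor who gives a bit below the standard deduction each year gets no itemization benefit annually, but saving up several years in a DAF and disbursing at once clears the threshold.

```python
STANDARD_DEDUCTION = 13_000  # hypothetical round number
annual_gift = 8_000

# Donating every year: itemized giving never beats the standard deduction.
yearly_excess = max(annual_gift - STANDARD_DEDUCTION, 0)
print(yearly_excess)  # 0

# Saving up 3 years in a DAF and disbursing at once: the bunched gift
# exceeds the standard deduction, so some of it is actually deductible.
bunched_excess = max(3 * annual_gift - STANDARD_DEDUCTION, 0)
print(bunched_excess)  # 11000
```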

Comment by ben_west on Guerrilla Foundation Response to EA Forum Discussion · 2020-12-15T23:01:48.175Z · EA · GW

Thanks for continuing to engage! I have been looking forward to seeing your response article, and this was interesting to read.

I suspect that many readers of this Forum would agree with most of your points, particularly the first one. Ironically, it sometimes feels like the two most common criticisms of EA are that it focuses too much on measurable data (e.g. critiquing randomista-related areas of EA) and that it focuses too little on measurable data (e.g. critiquing AI safety). This seems like a sign that we could better explain ourselves.

One area of genuine difference might be regarding impact investing: plenty of EA's believe you should invest instead of donating now, but impact investing seems relatively rare (OpenPhil's investment in Impossible Foods being one prominent counter example). I'm curious if you have read Founders Pledge's report on impact investing? In particular: you mentioned divestment from publicly traded companies, which FP considers an especially difficult way to have an impact (Principle 4, pages 17-27). I would be curious to hear if you disagree with any of their claims, or the examples they analyzed like Acumen Fund.

Comment by ben_west on 80k hrs #88 - Response to criticism · 2020-12-11T23:56:31.818Z · EA · GW

Thanks for posting this! I thought it was interesting, and I would support more people writing up responses to 80 K podcasts.

Minor: you have a typo in your link to

Comment by ben_west on Introducing Animal Advocacy Africa · 2020-12-10T23:33:54.250Z · EA · GW

This is awesome! I'm looking forward to hearing more about your progress.

Comment by ben_west on AMA: Jason Crawford, The Roots of Progress · 2020-12-07T20:24:26.571Z · EA · GW

What were your goals for the Progress Studies for Young Scholars program? In particular: is there work that you are hoping (perhaps a small subset of) participants can do immediately, or were you hoping instead to lay some sort of foundation which might pay off years/decades down the line?

Comment by ben_west on An experiment to evaluate the value of one researcher's work · 2020-12-01T17:57:27.964Z · EA · GW

This is a cool idea. Thanks Nuno for doing this evaluation, and thanks Ozzie for being willing to participate!

Comment by ben_west on Progress Open Thread: November 2020 · 2020-11-17T17:23:35.946Z · EA · GW

The world's first lab-grown meat restaurant opened in Israel:

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-11-11T20:45:46.898Z · EA · GW

Thanks Ben. I like this answer, but I feel like every time I have seen people attempt to implement it they still end up facing a trade-off.  

Consider moving someone from role r1 to role r2. I think you are saying that the person you choose for r2 should be the person you expect to be best at it, which will often be people who aren't particularly good at r1.

This seems fine, except that r2 might be more desirable than r1. So now a) the people who are good at r1 feel upset that someone who was objectively performing worse than them got a more desirable position, and b) they respond by trying to learn/demonstrate r2-related skills rather than the r1 stuff they are good at.

You might say something like "we should try to make the r1 people happy with r1 so r2 isn't more desirable" which I agree is good, but is really hard to do successfully.

An alternative solution is to include proficiency in r1 as part of the criteria for who gets position r2. This addresses (a) and (b) but results in r2 staff being less r2-skilled.

I'm curious if you disagree with this being a trade-off?

Comment by ben_west on Progress Open Thread: November 2020 · 2020-11-06T22:13:14.016Z · EA · GW

Amartya Sen won the same prize 

No pressure.


Just kidding, congratulations!

Comment by ben_west on [Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand · 2020-11-05T22:36:01.740Z · EA · GW

Thanks Michael! This is really interesting. Decreasing demand by a few percent is a pretty big deal.

My intuition is that the number of articles published isn't exactly the right thing to regress on; you probably instead want something like "article views". Did the authors discuss this? I guess if all the articles are published in equally-viewed sources, looking at just the raw article count would be fine.
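A quick simulation of why the distinction could matter (entirely made-up numbers, not based on the paper's data): if demand really responds to views, and per-article reach varies widely, then raw article counts are a noisier proxy and track demand less tightly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
articles = rng.poisson(5, n)        # articles published per period
reach = rng.lognormal(0.0, 1.0, n)  # per-article views vary widely
views = articles * reach

# Suppose true demand responds to views, not article counts:
demand = 100 - 0.1 * views + rng.normal(0.0, 1.0, n)

r_views = abs(np.corrcoef(views, demand)[0, 1])
r_articles = abs(np.corrcoef(articles, demand)[0, 1])
print(r_views > r_articles)  # views correlate with demand more strongly
```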

Comment by ben_west on Thoughts on whether we're living at the most influential time in history · 2020-11-05T21:46:34.375Z · EA · GW

I found this rephrasing helpful, thanks Richard.

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-11-04T17:47:00.587Z · EA · GW

I'm curious about your approach to management: there are two broad schools of thought, one of which says that you should promote the best performers, and the other which says that management is a different skill, and therefore you should promote the people who you think will be best at management. (Some organizations have a "dual ladder" system as an attempted hybrid between these.)

Startups often face this problem more acutely than most, because the skills which made someone very successful in a 5 person company are quite different than the ones which make them successful in a 500 person company, so someone's previous job performance is not the greatest predictor of their future success.

I'm curious what your thoughts are on this. For most of my career I have been in the "management is a different skill" camp, but over the past couple of years I have moved towards the other camp.

(I'm not sure if this question is too broad. If it is, some specific some questions are: 1. To what extent does someone's ability to do a specific technical job predict their ability to manage others doing that job? 2. Does the implicit incentive structure of promoting people who are the best managers rather than the best at their jobs warp people's efforts so much that it outweighs the benefits of having better managers?)

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-30T20:42:24.313Z · EA · GW

Thanks! Beyond Meat and SpaceX are great examples.

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T20:50:36.940Z · EA · GW

That makes sense, thanks!

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T20:50:04.627Z · EA · GW

stakeholders start to be willing to pay for the solution

Under some ethical theories, the vast majority of stakeholders (nonhuman animals, future persons) are unable to pay in any meaningful sense. Are you more positive about nonprofit entrepreneurship for organizations that serve these stakeholders?

Comment by ben_west on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T20:07:49.866Z · EA · GW

To the extent that markets are efficient, that narrow slice is the only slice available (since the ways of creating value for which you can easily be paid have already been exploited).

(This is one reason why I personally am usually more excited about nonprofit startups: the low hanging fruit is usually picked in the for-profit world, but there's a lot more remaining in the nonprofit space.) 

Comment by ben_west on Progress Open Thread: October // Student Summit 2020 · 2020-10-23T22:56:31.300Z · EA · GW

Congrats! And Rings is a cool idea. I hope you write up a Forum post about the results! 

Comment by ben_west on What actually is the argument for effective altruism? · 2020-09-30T00:25:41.120Z · EA · GW

Thanks Ben, even though I’ve been involved for a long time, I still found this helpful.

Nitpick: was the acronym intentionally chosen to spell “SIN”? Even though that makes me laugh, it seems a little cutesy.

Comment by ben_west on How have you become more (or less) engaged with EA in the last year? · 2020-09-29T01:52:54.660Z · EA · GW

I can relate to the difficulties of living in a city with few EA's, though I did eventually end up organizing a group that was reasonably successful. I'm curious if you have participated in any online events (e.g. the icebreakers) and whether those filled some of the void you have?

Comment by ben_west on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-25T22:37:03.748Z · EA · GW

I'm excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:

  1. It's correct to try to do the most good, but people who call themselves "EA's" define "good" incorrectly. For example, EA's might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
  2. It's correct to try to do the most good, but people who call themselves "EA's" are just empirically wrong about how to do this. For example, EA's focus too much on short-term benefits and discount long-term value.
  3. It's incorrect to try to do the most good. (I'm not sure what the alternative you are proposing in your essay is here.)

If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)

Comment by ben_west on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:13:16.516Z · EA · GW

It might be more relevant to consider the output: 500,000 views (or ~80,000 hours of watch time).  Given that the median video gets 89 views, it might be hard for other creators to match the output, even if they could produce more videos per se. 

Comment by ben_west on Are there robustly good and disputable leadership practices? · 2020-09-16T20:26:16.733Z · EA · GW

This is excellent, thanks!

Comment by ben_west on How have you become more (or less) engaged with EA in the last year? · 2020-09-15T19:53:17.382Z · EA · GW

My involvement hasn't changed too much – I continue to work at an EA organization, which keeps my level of involvement pretty consistent.

My social circle has become less EA over the past year, which is a combination of people who I knew moving away and me failing to stay in touch with the remainder during quarantine.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:58:01.485Z · EA · GW

It's honestly mostly "things I currently think are cool" which is probably not the best way to grow a channel but oh well. My most popular content is analysis of TikTok itself and cosmetics analysis/recommendations.

I'm @benthamite on the app. Would love to connect if you join!

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:53:14.962Z · EA · GW

Agreed! I think they are a good example of transitioning from a medium mostly serving older generations to a different medium that serves younger people.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T18:51:37.097Z · EA · GW

I somewhat agree with this but think it's worth pointing out that a lot of "our positions" are not very complicated or controversial, it's just that most people don't think about the topic. E.g. we just did a video celebrating the extinction of smallpox, and I don't expect that to cause many problems.

Some 80K things like this might be the value of doing cheap tests or A/B/Z plans. Or even "maybe do a little bit of thinking before deciding on your career." I'd be interested to talk to you all about this if/when you think videos would be beneficial.

Comment by ben_west on Some thoughts on EA outreach to high schoolers · 2020-09-15T02:56:35.135Z · EA · GW

EA seems reliant on nerdy millennial technology, namely long plaintext social media posts.

I'm interested in communicating in Gen Z ways, which I think roughly means "short amateur videos". I've had moderate success on TikTok (35,000 followers as of this writing), and I would encourage more people to try it out.

There's a nice self-selection where your content is only displayed to 16-year-olds who spend their free time watching math videos (or whatever niche you target), which I expect to be one of the best easily-available audiences of young people.

Comment by ben_west on More empirical data on 'value drift' · 2020-09-11T20:27:03.623Z · EA · GW

In 2019, only about half of the respondents reported a 5/5 or a 4/5 level of engagement with EA (someone working at an EA organisation would be at ‘5’). So, we should also expect it to be an overestimate of the drop out rate among the more engaged.

In 2020 we will be able to apply the same method among a subset of more engaged respondents


My understanding is that David/Rethink has a reasonably accurate model of this, i.e. they can predict how someone would respond to the engagement questions on the basis of how they answered other questions.

It might be interesting to try doing this to get data from prior years.

Comment by ben_west on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T00:50:09.182Z · EA · GW

Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren't obviously wrong to do so. So signaling the former would be nice.

Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".

Comment by ben_west on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T22:39:00.357Z · EA · GW

My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.

Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)

Comment by ben_west on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-31T16:56:09.911Z · EA · GW

I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.  

I'd be curious how many people you think are not willing to "tolerate real intellectual diversity". I'm not sure if you are saying  

  • "Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost," or  
  • "Anyone who is upset by intellectual diversity isn't someone we want to attract anyway, so losing them isn't a real cost."

(Presumably you are saying something between these two points, but I'm not sure where.)

Comment by ben_west on Informational Lobbying: Theory and Effectiveness · 2020-08-29T00:06:58.151Z · EA · GW

I worked on influencing healthcare policy during both the Obama and Trump presidencies, which I think is about as big of a swing as you can get on the executive side. My experience was that there was moderate leeway on the executive side. For example, legislation would require a certain amount of money to be distributed amongst healthcare providers who had "high quality" care, but "high-quality" shifted from "scores better than their peers" to "reports any amount of quality data to the government." (The latter standard effectively meaning that everyone was "high-quality", so the program was approximately useless.) However, the government has a ton of inertia and executives have limited resources, so things often continued on as they were before, even if executives really wanted things to change.

I can think of a couple of ways in which executive branch lobbying can be "sticky":

  1. Something which I didn't appreciate until working on this is that it's often quite hard for the government to actually do the thing it is trying to do. Officials often haven't thought through how some policy would affect a stakeholder, or what would happen in some unusual circumstance, simply because there are so many different things to consider. Many of my suggestions were things like "this section contradicts this other section if some circumstance occurs, so you should fix that," and I expect those to stick relatively well because they're pretty uncontroversial.
  2. As I alluded to above, most government employees are nonpolitical staffers who mostly just do their job the way their predecessor trained them. I'm sure you've heard stories about government departments using computer systems from the 1970s or whatever, and a similar thing can happen at the process level. Even if the executive branch has the ability to change the interpretation of some term, they often won't, just because changing is hard.

This is just from my personal experience, and I'm not sure how it would compare to working with other branches of government (or even other executive-branch agencies).

Comment by ben_west on EA Group Organizer Career Paths Outside of EA · 2020-08-28T17:06:21.226Z · EA · GW

Thanks! I added this at the end.

Comment by ben_west on We're (surprisingly) more positive about tackling bio risks: outcomes of a survey · 2020-08-26T20:25:26.288Z · EA · GW

This is pretty surprising to me. Thanks for doing this investigation and sharing the results!

Comment by ben_west on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-20T16:40:01.706Z · EA · GW

Greaves' cluelessness paper was published in 2016. My impression is that the broad argument has existed for 100+ years, but the formulation of cluelessness arising from flow-through effects outweighing direct effects (combined with EAs tending to care quite a bit about flow-through effects) was a relatively novel and major reformulation (though probably still below your bar).

Comment by ben_west on Against the Social Discount Rate (Cowen & Parfit) - Weak refutations · 2020-08-12T18:03:19.618Z · EA · GW

Thanks for writing this up! Minor point:

> I agree that by some moral views, it is not right that a voluntary provider of gifts should be given any privileges, but as Cowen and Parfit admits, this is not the case in a pure utilitarian view. 

Maybe I'm getting confused by the double negatives, but isn't this backwards? A pure utilitarian would argue that no one has any special privileges, right?

Apart from that minor point though, I would be interested in refutations to the objection. 

Comment by ben_west on Social Movement Lessons from the US Prisoners' Rights Movement · 2020-08-06T21:23:39.129Z · EA · GW

> I'm hoping that at some point, I'll be able to do a bit more of a roundup / analysis post, where I look at some of the key themes and leanings from across several of our case studies. There might be more scope for making these sorts of claims or estimates in a post like that, though it still might not be worth the time. I'd be interested in your thoughts on that!

Yes, I personally would be interested and would be happy to give my opinions about which of these would be most useful. But (obviously) the priorities of EAA leaders who can put your advice into practice are probably more important.

> I'm afraid I can't really help here. I did write "Is the US Supreme Court a Driver of Social Change or Driven by it? A Literature Review"

Thanks! I hadn't seen that literature review before and it seems interesting. Added it to my reading list.