Posts

[Job Ad] Help us make this Forum better 2021-03-25T23:23:26.801Z
Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance 2021-03-12T03:51:24.215Z
Retention in EA - Part II: Possible Projects 2021-02-05T19:09:31.361Z
Retention in EA - Part I: Survey Data 2021-02-05T19:09:18.450Z
Retention in EA - Part III: Retention Comparisons 2021-02-05T19:02:05.324Z
EA Group Organizer Career Paths Outside of EA 2020-07-14T23:44:10.799Z
Are there robustly good and disputable leadership practices? 2020-03-19T01:46:38.484Z
Harsanyi's simple “proof” of utilitarianism 2020-02-20T15:27:33.621Z
Quote from Strangers Drowning 2019-12-23T03:49:51.205Z
Peaceful protester/armed police pictures 2019-12-22T20:59:29.991Z
How frequently do ACE and Open Phil agree about animal charities? 2019-12-17T23:56:09.987Z
Summary of Core Feedback Collected by CEA in Spring/Summer 2019 2019-11-07T16:26:55.458Z
EA Art: Neural Style Transfer Portraits 2019-10-03T01:37:30.703Z
Is pain just a signal to enlist altruists? 2019-10-01T21:25:44.392Z
Ways Frugality Increases Productivity 2019-06-25T21:06:19.014Z
What is the Impact of Beyond Meat? 2019-05-03T23:31:40.123Z
Identifying Talent without Credentialing In EA 2019-03-11T22:33:28.070Z
Deliberate Performance in People Management 2017-11-25T14:41:00.477Z
An Argument for Why the Future May Be Good 2017-07-19T22:03:17.393Z
Vote Pairing is a Cost-Effective Political Intervention 2017-02-26T13:54:21.430Z
Ben's expenses in 2016 2017-01-29T16:07:28.405Z
Voter Registration As an EA Group Meetup Activity 2016-09-16T15:28:46.898Z
You are a Lottery Ticket 2015-05-10T22:41:51.353Z
Earning to Give: Programming Language Choice 2015-04-05T15:45:49.192Z
Problems and Solutions in Infinite Ethics 2015-01-01T20:47:41.918Z
Meetup : Madison, Wisconsin 2014-10-29T18:03:47.983Z

Comments

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-22T02:42:47.651Z · EA · GW

Thanks for correcting my mistaken impression Jakub! I've updated my comment to link to yours.

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-19T19:07:27.504Z · EA · GW

I guess I don't know OP's goals, but yeah, if their goal is to publicly shame ACE, then publicly shaming ACE is a good way to accomplish that goal.

My point was that a) sending a quick email to someone about concerns you have with their work often has a very high benefit-to-cost ratio, and b) despite this, I still regularly talk to people who have concerns about some organization but have not sent them an email.

I think those claims are relatively uncontroversial, but I can say more if you disagree.

Comment by Ben_West on How much does performance differ between people? · 2021-04-17T03:59:42.112Z · EA · GW

Basic statistics question: the GMA predictors research seems to mostly be using the Pearson correlation coefficient, which I understand to measure linear correlation between variables.

But a linear correlation would imply that billionaires have an IQ of 10,000 or something which is clearly implausible. Are these correlations actually measuring something which could plausibly be linearly related (e.g. Z score for both IQ and income)?

I read through a few of the papers cited and didn't see any mention of this. I expect this to be especially significant at the tails, which is what you are looking at here.
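To make my question concrete (hypothetical numbers, not from the papers): the Pearson coefficient measures linear association on whatever scale you feed it, so a modest correlation between IQ Z scores and log-income is perfectly compatible with extremely non-linear behavior in raw dollars:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r = 0.3  # hypothetical IQ / log-income correlation

iq_z = rng.standard_normal(n)  # IQ expressed as a Z score
log_income = r * iq_z + np.sqrt(1 - r**2) * rng.standard_normal(n)
income = 50_000 * np.exp(log_income)  # raw income is then log-normal

# The association is linear on the log scale...
r_on_log = np.corrcoef(iq_z, log_income)[0, 1]
# ...but on the dollar scale the relationship is exponential, so the tails
# are extreme without ever implying an IQ of 10,000.
print(round(r_on_log, 2))
```

So my question is essentially whether the cited coefficients are computed on something like the log/Z scale above, where a linear model is at least plausible.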

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-16T19:42:41.011Z · EA · GW

Yep, definitely don't want people to swing too far in the opposite direction. Just commenting that "talk to people about your concerns with them" is a surprisingly underutilized approach, in my experience.

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-16T19:39:45.720Z · EA · GW

Thanks! I had interpreted "We are yet to see how successful the leadership transition turns out" as a pretty strong statement, but I agree that the review doesn't specify how the different factors they list are weighted and your interpretation could be correct. I hope someone from ACE can clarify.

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-16T15:49:21.364Z · EA · GW

I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.

At the risk of stating the obvious: emailing organizations (anonymously, if you want) is a pretty good way of raising concerns with them.

I've emailed a number of EA organizations (including ACE) with question/concerns, and generally find they are responsive.

And I've been on the receiving side of such emails as well, and usually am appreciative: I often didn't even consider that there could be some confusion or misinterpretation of what I said, and am grateful to people who point it out.

Comment by Ben_West on Concerns with ACE's Recent Behavior · 2021-04-16T14:49:28.118Z · EA · GW

Edit: Jakub says that ACE's evaluation was based on the Facebook comments, not the leadership transition. The below is kept for historical purposes. Also, I should have noted in this post my appreciation for Anima's transparency – it wouldn't have been possible for me to post something like this about most organizations, because they would state that their CEO stepped down "to spend more time with her family" or something similar.

Nevertheless, given the overall positive assessment, it's strange that Anima was awarded a "weak" rating in this category, and I think it's likely that Anima is being heavily punished for the public comments made by staff members.

Last year, Anima fired their CEO. The public statement said:

However, no matter how much we value her merits, there are issues in regards to everyday behaviour towards employees that we as an organization cannot accept. In Anima International we have to be a team that strongly supports each other.

I think ACE's rating about poor leadership and culture was based on that rather than Facebook comments made by staff members.

Comment by Ben_West on Meta-EA Needs Models · 2021-04-06T00:50:49.895Z · EA · GW

Good point

Comment by Ben_West on Meta-EA Needs Models · 2021-04-05T23:47:19.715Z · EA · GW

Thanks for sharing this! 

Feels like all the top people in EA would have gotten into EA anyway?

Possibly you don't endorse this statement and were just using it as an intro, but I think your interlocutor's response (1) is understated: I can't think of any products which don't benefit from having a marketing department. If EA doesn't benefit from marketing (broadly defined), it would be an exceptionally unusual product.

I imagine taking my best guess at the "current plan of meta-EA" and giving it to Paul Graham and him not funding my startup because the plan isn't specific/concrete enough to even check if it's good and this vagueness is a sign that the key assumptions that need to be true for the plan to even work haven't been identified.

For what it's worth, CEA's plans seem more concrete than mine were when I interviewed at YC. CLR's thoughts on creating disruptive research teams are another thing which comes to mind as having key assumptions which could be falsified.

Comment by Ben_West on A ranked list of all EA-relevant (audio)books I've read · 2021-04-05T23:11:44.059Z · EA · GW

I'm glad I could help. Feel free to quote me in the annual Rethink Priorities impact analysis.

Comment by Ben_West on Resources on the expected value of founding a for-profit start-up? · 2021-04-05T22:59:51.311Z · EA · GW

Some things not mentioned above:

  1. https://www.nber.org/system/files/working_papers/w9109/w9109.pdf
  2. Baum, Joel A. C., and Brian S. Silverman. "Picking winners or building them? Alliance, intellectual, and human capital as selection criteria in venture financing and performance of biotechnology startups." Journal of Business Venturing 19.3 (2004): 411-436.
    http://www.library.auckland.ac.nz/subject-guides/bus/docs/PickingWinners2004.pdf
  3. Agrawal, A., D. Kapur, and J. McHale. "How do spatial and social proximity influence knowledge flows? Evidence from patent data." Journal of Urban Economics 64 (2008).
  4. Zacharakis, Andrew L., and G. Dale Meyer. "The potential of actuarial decision models: can they improve the venture capital investment decision?" Journal of Business Venturing 15.4 (2000): 323-346.
    http://www.sciencedirect.com/science/article/pii/S0883902698000160
  5. Keely, Robert H. Determinants of new venture success before 1982 and after: a preliminary look at two eras. Instituto de Estudios Superiores de la Empresa, Universidad de Navarra, 1989.
    http://www.iese.edu/research/pdfs/DI-0173-E.pdf
  6. Chrisman, James J., Alan Bauerschmidt, and Charles W. Hofer. "The determinants of new venture performance: An extended model." Entrepreneurship Theory and Practice 23 (1998): 5-30.
    http://misweb.cbi.msstate.edu/~COBI/faculty/users/jchrisman/files/autoweb/mgt8123/MGT8123(Chrismanetal.,ETP,1998).pdf
  7. Levine, Ross, and Yona Rubinstein. "Smart and Illicit: Who Becomes an Entrepreneur and Do They Earn More?" The Quarterly Journal of Economics 132.2 (May 2017): 963-1018.
    https://economics.uchicago.edu/workshops/Rubinstein%20Yona%20Smart%20and%20Illicit.pdf
  8. Baum, J. Robert, Edwin A. Locke, and Ken G. Smith. "A Multidimensional Model of Venture Growth." Academy of Management Journal 44 (2001): 292-303.
    http://www.taranomco.com/wp-content/uploads/2013/11/247.pdf
  9. Manso, Gustavo. "Experimentation and the Returns to Entrepreneurship."
    https://www.gsb.stanford.edu/sites/default/files/documents/Gustavo.pdf

Note that many of these are trying to test some model of venture success, and only calculate things related to EV as a subcomponent of that project. So it might not always be easy to answer the question you're actually trying to answer here.

Also, it's surprisingly hard to define "startup", and some of the variance in these estimates comes from using different reference classes.

Comment by Ben_West on EA Debate Championship & Lecture Series · 2021-04-05T22:46:17.537Z · EA · GW

This is a really great idea. Thanks for organizing this and writing up the results. A couple questions:

Overall, we feel we managed to achieve deep and positive engagement with tournament participants, but haven’t yet cracked the question of how to properly engage (more shallowly) with the broader debating community.

I'm curious about the "deep" engagement aspect. You mentioned that there was the Facebook group – have people continued to engage in other ways? E.g. attending meetups or reading this forum.

high probability that within a decade or so we can expect members of this audience to be in global positions of influence makes the community a great outreach target

Do you know of anything which has measured this? I'm imagining something similar to how 80 K analyzed predictors of becoming an MP. It seems plausible to me that top debaters are disproportionately likely to gain positions of influence, but I'm not well calibrated on how big of an effect this is.

Comment by Ben_West on A ranked list of all EA-relevant (audio)books I've read · 2021-04-02T20:10:53.058Z · EA · GW

I read Command and Control on your recommendation and it inspired my most popular EA TikTok to date (150,000 views).

Comment by Ben_West on How much does performance differ between people? · 2021-03-31T21:35:06.359Z · EA · GW

Data from the IAP indicates that they can identify the top few percent of successful inventions with pretty good accuracy. (Where "success" is a binary variable – not sure how they perform if you measure financial returns.)

Comment by Ben_West on How much does performance differ between people? · 2021-03-31T21:30:45.561Z · EA · GW

(Although I wonder what evidence indicates they can reliably tell the top 5% from those below, rather than they just think they can).


The Canadian Inventors Assistance Program provides inventors with a rating of how good their invention is, for a nominal fee. A large fraction of the people who get a bad rating try to make a company anyway, so we can judge the accuracy of the program's evaluations.

55% of the inventions which they give the highest rating to achieve commercial success, compared to 0% for the lowest rating. 

https://www.researchgate.net/publication/227611370_Profitable_Advice_The_Value_of_Information_Provided_by_Canadas_Inventors_Assistance_Program 

Comment by Ben_West on [Job Ad] Help us make this Forum better · 2021-03-29T17:17:10.622Z · EA · GW

Thanks for the suggestion! I've updated the title

Comment by Ben_West on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-24T19:32:30.874Z · EA · GW

I think maybe I was confused about what you are saying. You said:

 I think this applies to growth in local groups particularly well... While I've no doubt that many of the groups that have been founded by people who joined since 2015*, I suspect that even if we cut those people out of the data, we'd still see an increase in the number of local groups over that time frame- so we can't infer that EA is continuing to grow based on increase in local group numbers.

But then also:

Fwiw, this seems like more direct evidence of growth in EA since 2015 than any of the other metrics

In my mind, A being evidence of B means that you can (at least partially) infer B from A. But I'm guessing you mean "infer" to be something like "prove", and I agree the evidence isn't that strong.

Comment by Ben_West on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-18T22:33:24.335Z · EA · GW

The CEA events team uses a combination (or variant) of these frameworks to design a "canvas" for each event. I personally use both here and at prior jobs. I'm a strong advocate for knowing your users in general, but am less opinionated about those specific frameworks.

Some thoughts on each:

  • Personas can end up becoming an end in themselves. You can get sidetracked into creating these elaborate back stories and forget that you're really just trying to figure out if the widget should be blue or green. It can also be tempting to use multiple personas as a justification for complexity (e.g. because you create one feature per persona instead of one feature for everyone). But there are definitely circumstances in which you are serving a genuinely diverse audience, and it makes sense to segment the audience. I also like them as a way of encouraging focus on the most important things – you can ask how (persona) would want you to prioritize. I usually see them more as an adjunct to some other process. E.g. if you are mapping out your marketing channels, you might create one set of channels per persona.
  • Jobs to be done generally seem more helpful to me because the framework keeps your focus on what the user thinks is most important. Sometimes talking about the user's pains makes more sense, e.g. I hire an accountant to reduce audit risk rather than for the "job" of filling out tax forms. I tend to think about pains when first developing a product (because if you aren't solving some huge pain point, probably no one will use your product) but about jobs later in the product lifecycle (because people are already using your product, so it's worth thinking about minor improvements).

Especially for early-stage products I like the thought experiment of "hair on fire customers."

Comment by Ben_West on Open and Welcome Thread: March 2021 · 2021-03-12T21:44:03.751Z · EA · GW

Welcome Tristan!

Comment by Ben_West on Open and Welcome Thread: March 2021 · 2021-03-12T21:43:43.570Z · EA · GW

Welcome Schuyler!

Comment by Ben_West on Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance · 2021-03-12T18:24:18.507Z · EA · GW

Oh interesting, thanks for sharing. These are compelling counterexamples

Comment by Ben_West on Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance · 2021-03-12T04:15:29.259Z · EA · GW

Could you elaborate why this violates Pareto? I'm used to that assumption being phrased in terms of sure things, but even if you make it stochastic it still seems fine to say "if A stochastically dominates B for each person, then A > B".

And for what it's worth, this is not one of my major intuitions behind utilitarianism. Cluelessness already implies that I need to consider a butterfly flapping its wings before deciding whether to donate to AMF; stating that the butterfly could be outside my light cone doesn't seem qualitatively different.

(Possibly it is a key intuition that Harsanyi had, not sure. Also I do agree that considering consequences unaffected by my actions is a counterintuitive thing for any decision theory to do, moral or otherwise.)

Comment by Ben_West on Don't Be Bycatch · 2021-03-11T23:20:50.978Z · EA · GW

Ask, don't tell.

This is really good advice, at least for a subset of people.

Whether someone is problem-oriented ("the shortage of widgets in EA") versus solution-oriented ("find a way to use my widget-making skills") often gives me a strong signal of how likely they are to be successful.

I'd add on that sharing the answers people give to your questions (e.g. on this forum) is helpful. The set of things EA needs is vast, and there's no reason for us to all start from scratch in figuring out how to help.

Comment by Ben_West on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-11T20:16:43.708Z · EA · GW

University group members are mostly undergraduates, meaning they are younger than ~22. This implies that they would have been younger than 18 in 2017, and there was almost no one like that on the 2017 survey. And they would have been under 16 in 2015, although I don't think we have data going back that far. I can think of one or two people who might have gotten involved as 15-year-olds in 2015, but it seems quite rare. Is there something I'm missing?

Comment by Ben_West on Progress Open Thread: March 2021 · 2021-03-08T23:17:06.933Z · EA · GW

Congratulations!

Comment by Ben_West on [deleted post] 2021-03-02T04:53:37.832Z

I think this is a good thought! @lukefreeman has been doing rooms weekly on Monday. The sessions are listed on the online EA calendar.

Comment by Ben_West on Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance · 2021-02-28T16:19:14.243Z · EA · GW

Thanks! I think the "stochastic dominance + background uncertainty" decision criterion makes two claims about muggings:

  1. If the mugging is not too Pascalian, it stochastically dominates "safe" options, which is a pretty strong argument for accepting it (and probably agrees with what an expected value calculation would dictate)
  2. If it is too Pascalian, neither it nor the safe option stochastically dominates, giving a principled reason for rejecting it

The hope is that your example would fall under case (2), but of course this depends on a bunch of particular assumptions about the background uncertainty.
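As a toy illustration of the criterion itself (my own hypothetical lotteries, not examples from the paper): option A first-order stochastically dominates option B when A's CDF is everywhere at or below B's, i.e. A is never more likely to leave you below any threshold. A sketch of the check:

```python
def fosd(a, b):
    """True if lottery `a` first-order stochastically dominates lottery `b`.
    Lotteries are dicts mapping outcome -> probability; dominance means
    a's CDF is everywhere <= b's CDF."""
    points = sorted(set(a) | set(b))
    cdf = lambda lottery, x: sum(p for v, p in lottery.items() if v <= x)
    return all(cdf(a, x) <= cdf(b, x) for x in points)

safe = {1.0: 1.0}               # +1 with certainty
gamble = {0.0: 0.5, 10.0: 0.5}  # coin flip for +10: higher EV, but riskier

# Taken in isolation, neither option dominates the other, even though the
# gamble has five times the expected value.
print(fosd(gamble, safe), fosd(safe, gamble))  # prints: False False
```

The interesting part of the paper is that adding sufficiently heavy-tailed background uncertainty to both sides can break this stalemate in favor of the higher-EV option, which is what distinguishes cases (1) and (2) above.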

Comment by Ben_West on A ranked list of all EA-relevant (audio)books I've read · 2021-02-26T01:03:54.093Z · EA · GW

Thanks for sharing this list Michael, I added some of these to my queue.

Comment by Ben_West on AMA: Lewis Bollard, Open Philanthropy · 2021-02-25T20:31:57.411Z · EA · GW

You recently said:

Israel has a really strong animal advocacy movement, has a strong vegan movement, but also a really strong animal welfare reform movement. And I actually think it’s a great example of how these two don’t need to be diametrically opposed. Israel has very high rates of veganism, but also some of the most progressive animal welfare laws.

Do you have a sense for why the animal advocacy movement is so strong in Israel? You mentioned that they have limited amounts of land for farming, but this seems more like an explanation for why the government is supporting alternative proteins than why there is a vegan/animal welfare movement.

I've heard claims that it was driven by this YouTube video, but I'm not sure how accurate those claims are.

Comment by Ben_West on AMA: Lewis Bollard, Open Philanthropy · 2021-02-25T20:13:11.839Z · EA · GW

You are quoted in the 80,000 hours article about management consulting as saying:

I think that if you are looking to work in management with a non-profit, you can learn some really useful skills. Analytical skills are certainly brought to bear... 

I don’t really believe that in that time I gained a lot of useful skills. I think I mainly gained a lot of information about very particular business sectors, which would be useful if I wanted to go and work in those business sectors. Otherwise, I’m not sure it is completely generalisable.

In your most recent podcast interview you stated:

I think we have seen greater need for managers, makes sense. As groups are growing, are professionalising, there is more of a need for people who have management expertise and who are learning management and thinking hard about it.

Your quote in the 80 K article seems pretty lukewarm on management consulting, and I'm wondering if this talent gap in animal advocacy has made you more positive?

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-22T23:45:28.723Z · EA · GW

Thanks! I will check it out.

Comment by Ben_West on Proving too much: A response to the EA forum · 2021-02-18T22:04:38.067Z · EA · GW

Thanks for posting a follow-up. My understanding of your claim is something like:

It's true that there is a nonzero probability of infinitely good or bad things happening over any timescale, making expected value calculations equally meaningless for short-term and long-term decisions. However, it's fine to just ignore those infinities in the short term but incorrect to ignore them in the long term. Therefore, short-term thinking is okay but long-term thinking is not.

Is that accurate? If so, could you elaborate on why you see this distinction?

I see no particular reason to think Pasadena games are more likely one thousand years from now than they are today (and indeed even using the phrase "more likely today" seems to sink the approach of avoiding probability).

Comment by Ben_West on Deference for Bayesians · 2021-02-16T04:05:45.787Z · EA · GW

Thanks for writing this. I liked the examples and I thought this point, while obvious in retrospect, wasn't originally clear in my mind:

Many subject matter experts are not experts on epistemology - on whether Bayesianism is true. So, this approach does not obviously violate epistemic modesty.

Comment by Ben_West on Open thread: Get/give feedback on career plans · 2021-02-15T21:06:40.872Z · EA · GW

You might be interested in the lottery of fascinations.

For what it's worth, I was barely able to do CS homework, but worked pretty successfully as a programmer for about a decade and still occasionally code for fun. Some (all?) universities have a remarkable ability to make the most interesting subjects monotonous, and I would be cautious in going from "I don't want to read a CS textbook" to "CS is not for me". (If you are directly reading AI safety research, and you have enough of the background to understand it yet still are bored by it, that seems like a stronger signal to me.)

Comment by Ben_West on Open thread: Get/give feedback on career plans · 2021-02-15T20:56:59.899Z · EA · GW

EA Global Reconnect will probably have a slack/discord. That might be a convenient place to try out chat format

Comment by Ben_West on Progress Open Thread: February 2021 · 2021-02-11T18:48:35.272Z · EA · GW

EA Giving Tuesday directed $400,000 of Facebook matching funds to EA charities. Congratulations and thank you to Avi, Megan, Rebecca, Gina, William, Marisa, Angelina, and Nix for organizing the initiative, and to everyone who donated $1.6 million on Giving Tuesday 2020!

Comment by Ben_West on Retention in EA - Part III: Retention Comparisons · 2021-02-11T18:41:46.121Z · EA · GW

Interesting, thanks! Something which probably isn’t obvious without reading the methods (pages 125-127) is that study participants were recruited through church mailing lists and Facebook groups. So the interpretation of that statistic is “of the people who answer surveys from their church, 92% report at least moderate engagement”. 

“Moderate engagement” is defined as an average of a bunch of questions, but roughly it means someone who attends church at least once per month.

I think that definition of “moderate engagement” is a bit higher than “willing to answer surveys from my church” (as evidenced by the people who answered the survey but did not report moderate engagement), but it’s not a ton higher, so I’m hesitant to read too much into the percentage who report moderate engagement.

I felt like “high engagement” was enough above “willing to answer a survey” that some value could be gotten from the statistic, but even there I’m hesitant to conclude too much, and wouldn’t blame someone who discounted the entire result because of the research method (or interpreted the result in a pretty different way from me).

If we want to compare it to Ben’s EA estimates: I guess one analog would be to look at people who attended that weekend away but also answered the EA survey five years later. I’m not sure if such a data set exists.

Comment by Ben_West on Retention in EA - Part III: Retention Comparisons · 2021-02-10T22:24:52.558Z · EA · GW

Thanks Kieren! I was interpreting it to be the expectation of a geometric distribution (i.e. mean length assuming a constant annual probability of leaving), which I think is the correct way to interpret that number? Let me know if that’s wrong though!

The assumption that length is geometrically distributed might not be warranted, I'm not sure.
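For concreteness (my illustrative numbers, not from the post): under a constant annual dropout probability p, tenure is geometric with mean 1/p, which is the interpretation I'm using above. A quick simulation sketch:

```python
import random

def mean_tenure(p, trials=200_000, seed=0):
    """Average tenure in years when each year a member independently
    leaves with probability p (i.e. tenure is geometrically distributed)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        years = 1
        while rng.random() > p:  # stays another year with probability 1 - p
            years += 1
        total += years
    return total / trials

# With a 20% annual dropout rate, mean tenure should come out near 1/0.2 = 5.
print(round(mean_tenure(0.2), 1))
```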

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-10T22:24:11.231Z · EA · GW

Hey Ben, one person mentioned cause area disagreements, four mentioned interpersonal conflict, and five mentioned cultural fit.

One person mentioned to me that there is almost always some sort of interpersonal conflict involved in driving people out of EA, even if other factors are also important.

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-10T22:23:49.482Z · EA · GW

Good question! I, and I think most of the people I talked to, would not consider that leaving the movement. I would look to whether the career decision was motivated by EA considerations, rather than whether the employer officially considers itself "EA".

That being said, I do think some people who left EA might have left because of this misunderstanding: they were not a good fit for some small number of "EA careers" (e.g. 80 K priority paths), and therefore assumed there wasn't a place in EA for them, even though that small list of careers is not a definitive list of what it means to be in EA. 80 K has tried to clarify this (e.g. here), which I think is helpful, but there is probably still more to be done.

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-10T22:23:26.343Z · EA · GW

Thanks David – an earlier draft of this post had a table cross-referencing which factors had been listed in which previous work, including EA Survey data, but it got too confusing since every post used its own categorization scheme. I decided to just publish my synthesis without trying to clean that up since I don’t want the perfect to be the enemy of the good, and I appreciate you doing some of that cross-referencing in this comment!

Overall, it seemed like different sources more or less agreed about the most common retention risks, which is encouraging and seems consistent with your analysis in this comment.

And I do see that I linked to the 2018 EA survey but not the 2019 one; I’ve added that as a link now, thanks!

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-08T23:25:46.935Z · EA · GW

It stands for Representation, Equity, and Inclusion. It's an alternative to the more common Diversity, Equity, and Inclusion, which some people prefer because it's often more accurate to describe an organization's goals as trying to be representative of some population than it is to say they want "diversity" per se. I've edited the post to clarify this as well.

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-08T23:25:29.562Z · EA · GW

Thanks! I remember finding that post helpful when it came out. I've added it to the list above

Comment by Ben_West on Retention in EA - Part III: Retention Comparisons · 2021-02-08T23:24:30.975Z · EA · GW

Thanks Alex! You are correct. I accidentally put the annual dropout rates there instead of five-year dropout rates.

The implied five-year rate is 53%-77%, approximately in line with Ben’s estimates for GWWC members. I’ve updated the text accordingly.

Comment by Ben_West on Retention in EA - Part I: Survey Data · 2021-02-05T19:10:38.503Z · EA · GW

As a meta-comment: I’m trying to share more earlier-stage thinking publicly, both to solicit feedback to make my own thinking better and to help others’ investigations. There are obvious downsides to this (e.g. people interpreting my draft thoughts as an authoritative statement from CEA). 

If anyone has feedback about the structure of these posts or the process of me sharing earlier-stage ideas, I would be interested to hear them.

Comment by Ben_West on Progress Open Thread: February 2021 · 2021-02-03T19:31:02.581Z · EA · GW

About a year ago, I discussed with some of my colleagues how EA is very tied to long-form text, while the world seems to be moving towards short-form amateur video. I decided to create a TikTok account, and recently passed 50,000 followers.

Also I made a video about moral cluelessness that was unpopular even by the standards of my EA videos, but I thought was good, so I'm calling it a win.

Comment by Ben_West on Open and Welcome Thread: February 2021 · 2021-02-02T21:11:26.817Z · EA · GW

That's awesome, congratulations!

Comment by Ben_West on Are there robustly good and disputable leadership practices? · 2021-01-28T00:55:49.334Z · EA · GW

That's fair. My understanding though is that management training doesn't seem very useful in general, implying that either the things they are teaching aren't very useful or people aren't very good at filtering to find the parts that are useful to them.

Comment by Ben_West on (Autistic) visionaries are not natural-born leaders · 2021-01-26T20:51:17.598Z · EA · GW

indicating that I'm not making such a claim about people I discuss in the post, but rather my impression that they exhibited a host of traits typically associated with autism/asperger's.

FWIW I don't interpret title words being in parentheses as indicating it's the author's impression. I interpreted your title as meaning something like "I think probably all visionaries are not natural-born leaders, but I'm more confident that autistic ones are not."

Comment by Ben_West on (Autistic) visionaries are not natural-born leaders · 2021-01-26T20:38:20.555Z · EA · GW

Thanks for writing this. I feel like it's written with an implication of something like "you can be bad at management but eventually learn", but I think another theory is something more like "you can win the lottery without being good at math".

E.g. a common explanation for the success of the PayPal mafia is that they became rich when everyone else in tech became poor, and were therefore able to purchase stakes in a bunch of companies and then just join the most successful or otherwise get an "unfair" advantage. This seems roughly true of Musk, as I understand it.

Another interpretation is something like "executive people management either doesn't matter, or matters in a way substantially different from how people usually think it should matter." Successful executives have a wide range of approaches (including, as you point out, some which seem intuitively terrible), and one interpretation of this is that your approach actually doesn't matter very much. I've remarked before that there seemed to be surprisingly few robustly good management practices.

I'm curious whether you have opinions about which of these interpretations are correct, or if there's something else you take away from these stories?