That's fair. My understanding though is that management training doesn't seem very useful in general, implying that either the things they are teaching aren't very useful or people aren't very good at filtering to find the parts that are useful to them.
indicating that I'm not making such a claim about people I discuss in the post, but rather my impression that they exhibited a host of traits typically associated with autism/asperger's.
FWIW I don't interpret title words being in parentheses as indicating it's the author's impression. I interpreted your title as meaning something like "I think probably all visionaries are not natural-born leaders, but I'm more confident that autistic ones are not."
Thanks for writing this. I feel like it's written with an implication of something like "you can be bad at management but eventually learn", but I think another theory is something more like "you can win the lottery without being good at math".
E.g. a common explanation for the success of the PayPal mafia is that they became rich when everyone else in tech became poor, and were therefore able to purchase stakes in a bunch of companies and then just join the most successful or otherwise get an "unfair" advantage. This seems roughly true of Musk, as I understand it.
Another interpretation is something like "executive people management either doesn't matter, or matters in a way substantially different from how people usually think it should matter." Successful executives have a wide range of approaches (including, as you point out, some which seem intuitively terrible), and one interpretation of this is that your approach actually doesn't matter very much. I've remarked before that there seemed to be surprisingly few robustly good management practices.
I'm curious whether you have opinions about which of these interpretations are correct, or if there's something else you take away from these stories?
On behalf of CEA, I'd like to extend a huge thank you to the SEADS team. The correlation between satisfaction, LTR (likelihood to recommend), and other variables (or lack thereof) is something that's featured in numerous discussions here at CEA, and I would encourage all EA event organizers to consider it. Their demographic analysis has informed our diversity work (e.g. before this analysis, we suspected there would be more of a correlation between gender/ethnicity and connections).
Also, while not mentioned in this document, the primary metric that the EA Forum uses was changed because of their work.
And of course, I greatly appreciate them not just doing this analysis, but also taking the time to clean it up and present publicly!
This is awesome! I like the model, and the UI is intuitive and clean. Two requests/suggestions:
Could you say "eggs from caged hens" or something instead of just "caged hen"? And similarly "chicken meat" instead of "broiler"? Or something like that – I think many people aren't familiar with those more technical terms.
Would you be able to get a simpler domain name? I'd like to direct people to this, and I think the current name will be hard to remember.
I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.
As a toy example, say that S(x) is some bounded sigmoid function, and my utility function is to maximize E[S(x)]; it's always going to be the case that E[S(x1)]≥E[S(x2)]⇔x1≥x2 so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging. (Correct me if this is wrong though.)
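To make the toy example concrete, here's a minimal numeric sketch (the sigmoid, probabilities, and payoff sizes are all invented for illustration): the utility function is strictly increasing, so more of the underlying good is always better, but because it is bounded, the most a mugger can offer is capped by `p * (1 - S(x))`.

```python
import math

def S(x):
    """A bounded, strictly increasing sigmoid utility: S(x) is in (0, 1)."""
    return 1 / (1 + math.exp(-x))

# Scope sensitivity: more of the underlying good is always better.
assert S(100) > S(10) > S(1)

# Pascal's mugging: a 1-in-a-trillion chance of an astronomically large
# payoff, versus a sure small cost (here, x drops from 1 to 0).
x = 1.0
p = 1e-12
huge = 1e9
gain_from_mugger = p * (S(huge) - S(x))  # capped by p * (1 - S(x))
loss_from_paying = S(x) - S(x - 1)

print(gain_from_mugger < loss_from_paying)  # True: decline the mugger
```

Because the utility gain from any offer, however large, is bounded by `p`, a sufficiently small `p` makes the mugger's offer worse than any fixed cost.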
(These are personal comments, I'm not sure to what extent they are endorsed by others at CEA.)
Thanks for writing this up Ozzie! For what it's worth, I'm not sure that you and Max disagree too much, though I don't want to speak for him.
Here's my attempt at a crux: suppose CEA takes on some new thing, and as a result Max manages me less well (making my work worse) but does that new thing better (or at all) because he's spending time on it.
My view is that the marginal value of a Max hour is inverse U-shaped for both of these, and the maxima are fairly far out. (E.g. Max meeting with his directs once every two weeks would be substantially worse than meeting once a week.) As CEA develops, the maximum marginal value of his management hour will shift left while the curve for new projects remains constant, and at some point it will be more valuable for him to think about a new thing than speak with me about an old thing.
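The same trade-off can be sketched as a toy model (the curve shapes, peaks, and scales here are all invented, purely for illustration): each activity's marginal hour has an inverse-U value curve, and as the management curve's peak shifts left, the marginal hour eventually becomes more valuable spent on the new project.

```python
import math

def marginal_value(h, peak, scale=1.0):
    # Toy inverse-U: the h-th hour is most valuable near `peak`,
    # less valuable before (setup costs) and after (diminishing returns).
    return scale * math.exp(-((h - peak) ** 2) / 8)

# Today: management's curve peaks far to the right of new projects'.
# Later ("CEA in the future"): the management peak has shifted left.
for label, mgmt_peak in [("today", 6.0), ("future", 2.0)]:
    h = 5.0  # the hour currently going to management (e.g. weekly 1:1s)
    mgmt = marginal_value(h, peak=mgmt_peak)
    new_project = marginal_value(h, peak=5.0, scale=0.8)
    better = "management" if mgmt > new_project else "new project"
    print(label, "-> spend the marginal hour on:", better)
```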
Please enjoy my attached paint image illustrating this:
I can think of two objections:
1. Management: Max is currently spending too much time managing me. Processes are well-developed and don't need his oversight (or I'm too stubborn to listen to him anyway or something), so there's no point in him spending so much time. (I.e. my "CEA in the future" picture is actually how CEA looks today for management.)
2. New projects: there is super low-hanging fruit, and even doing a half-assed version of some new project would be way more valuable than making our existing projects better. (I.e. my "CEA in the future" picture is actually how CEA looks today for new projects.)
I'm curious if either of those seem right/useful to you?
Thanks for writing this up Michelle! I would be excited for you to write more things like this in the future. Regarding this:
The more similar to mine someone’s situation is, the more likely they’ll be able to recommend resources tailored to me
A common observation is that firms retain older employees but rarely hire them. One explanation for this is that organization-specific knowledge (what acronyms mean, how you make a project plan, etc.) is valuable, but general-purpose skills aren't as valuable, so there's no point in recruiting someone who has 30 years of experience from your competitor. (Or, alternatively: too few people actually learn valuable general-purpose skills for this to show up in the data.)
This seems roughly correct to me, anecdotally.
To the extent that this is accurate in EA, it might imply that EA-specific communication norms or other EA-specific things are the most valuable to train.
An additional hobbyhorse of mine is that certification might be more valuable than training. Having a mentor who can teach you things is nice, but it might actually be more valuable for these skilled and trusted mentors to evaluate people's existing abilities and then credibly certify them.
Thanks for sharing this! All three of these seem valuable.
A couple questions about the EA training one:
You give the examples of operations skills, communication skills, and burnout prevention. These all seem valuable but not differentially valuable to EA. Are you thinking that this would be training for EA-specific things like cause prioritization or that they would do non-EA-specific things but in an EA way? If the latter, could you elaborate why an EA-specific training organization like this would be better than people just going to Toastmasters or one of the other million existing professional development firms?
Sometimes when people say that they wish there were more EA's with some certain skill, I think they actually mean that they wish there were more EA's who had credibly demonstrated that skill. When I think of EA-specific training (e.g. cause prioritization) I have a hard time imagining a 3 week course which substantially improves someone's skills, but it seems a little more plausible to me that people could work on a month-long "capstone project" which is evaluated by some person whose endorsement of their work would be meaningful. (And so the benefit someone would get from attending is a certification to put on their resume, rather than some new skill they have learned.) Have you considered "EA certification" as opposed to training?
I think there are weeks-long courses like "learn how to comply with this regulation" which are helpful, but those already exist outside EA.
The Separated Worlds: There are only two planets with life. These planets are outside of each other's light cones. On each planet, people live good lives. Relative to each of these planets' reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created.
If we treat space and time asymmetrically, we would have to claim that, relative to the reference frame of the planets, this outcome was not as good as it is relative to the reference frame of the comet. But this is very hard to believe. The value of this possible world should not be relative to any reference frame.
Also it's worth pointing out that "regular claims about the world (like 'Elsa is taller than Anna')" are also not "real" in the sense you are using the term. I'm not super familiar with the subject, but I wouldn't be surprised if many moral realists are okay describing moral claims as "only" as real as claims about length.
My experience with bioinformatics is almost exclusively on the industry side, and more the informatics than the bio. With that caveat, a few thoughts:
should I prioritize developing skills that will make me more employable and E2G (e.g. develop and apply sexy, ad hoc methods to rich-person illnesses in a more mainstream bioinformatics-y role)
My experience is that the highest earning positions are not "sexy" (in the way I think you are using the term). I recall one conference I attended in which the speaker was describing some advanced predictive algorithm, and a doctor in the back raised their hand and said "this is all nice but I can't even generate a list of my diabetic patients so could you start with that please?"
This might also address your question "how easy is it to, say, break into industry data science for anthropology graduates with experience in computational stats methods development?" – I think it depends very much on what you mean by "data science". A lot of the most successful bioinformatics companies' products are quite mundane by academic standards: alerting clinicians to well-known drug-drug interactions, identifying patients based on well validated reference ranges for lab tests, etc. My impression is that getting a position at one of these places is approximately similar to getting any other programming job. If you are looking for something more academic though, the requirements are different.
focus more on greater blights afflicting larger numbers of human and non-human animals (say, to understand differential responses to tropical diseases, or maybe variation in the human aging process, or pivot to food science and work on cultured meat or something, as well as work on more interpretable methods)
A problem I suspect you will run into is that methods development requires (often quite large) data sets. I get the sense from your brief bio that you aren't interested in doing any wet lab work, meaning that if you were to work on, say, cultured meat, you would need a data set from some collaborator.
If I were you, I might try to resolve this first. I know GFI has an academic network you can join and you could message people there about the existence of data sets.
I find it hard to come up with an argument supportive of this proposal, but as one clarification: the proposal is that donors could choose to create a DAF with no time limit, but where the donor receives only capital gains tax benefits at the time of donation, and income tax benefits at the time of disbursement. Many large donors get most of their income through capital gains, so maybe aren't too bothered by this, and small donors might receive some benefit by being able to save up their donations for several years and then receive income tax benefits all at once when they disburse. (This would be helpful if they normally don't donate enough per year to get over the standard deduction but would be able to get over it after saving up donations for several years.)
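As a toy illustration of that bunching benefit for small donors (all dollar amounts and the tax rate here are hypothetical, not actual tax figures): a donor whose annual gift never clears the standard deduction gets no income-tax benefit from giving yearly, but disbursing several years of saved-up donations at once can clear it.

```python
# Hypothetical numbers, purely illustrative: a donor gives $8k/year,
# the standard deduction is $14k, other itemizable deductions are $0,
# and the marginal income tax rate is 24%.
ANNUAL_GIFT = 8_000
STD_DEDUCTION = 14_000
RATE = 0.24
YEARS = 3

# Donating every year: $8k never clears the standard deduction,
# so the gifts yield no extra income-tax benefit.
yearly_benefit = YEARS * RATE * max(0, ANNUAL_GIFT - STD_DEDUCTION)

# Saving up and disbursing once: $24k itemized in a single year.
bunched_benefit = RATE * max(0, YEARS * ANNUAL_GIFT - STD_DEDUCTION)

print(yearly_benefit, bunched_benefit)  # bunching recovers some deduction
```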
My guess is that this is mostly harmful for people with low six-figure incomes who want to donate a substantial portion of their incomes and wait > 15 years.
Thanks for continuing to engage! I have been looking forward to seeing your response article, and this was interesting to read.
I suspect that many readers of this Forum would agree with most of your points, particularly the first one. Ironically, it sometimes feels like the two most common criticisms of EA are that it focuses too much on measurable data (e.g. critiquing randomista-related areas of EA) and that it focuses too little on measurable data (e.g. critiquing AI safety). This seems like a sign that we could better explain ourselves.
One area of genuine difference might be regarding impact investing: plenty of EA's believe you should invest instead of donating now, but impact investing seems relatively rare (OpenPhil's investment in Impossible Foods being one prominent counterexample). I'm curious if you have read Founders Pledge's report on impact investing? In particular: you mentioned divestment from publicly traded companies, which FP considers an especially difficult way to have an impact (Principle 4, pages 17-27). I would be curious to hear if you disagree with any of their claims, or the examples they analyzed like Acumen Fund.
What were your goals for the Progress Studies for Young Scholars program? In particular: is there work that you are hoping (perhaps a small subset of) participants can do immediately, or were you hoping instead to lay some sort of foundation which might pay off years/decades down the line?
Thanks Ben. I like this answer, but I feel like every time I have seen people attempt to implement it they still end up facing a trade-off.
Consider moving someone from role r1 to role r2. I think you are saying that the person you choose for r2 should be the person you expect to be best at it, which will often be people who aren't particularly good at r1.
This seems fine, except that r2 might be more desirable than r1. So now a) the people who are good at r1 feel upset that someone who was objectively performing worse than them got a more desirable position, and b) they respond by trying to learn/demonstrate r2-related skills rather than the r1 stuff they are good at.
You might say something like "we should try to make the r1 people happy with r1 so r2 isn't more desirable" which I agree is good, but is really hard to do successfully.
An alternative solution is to include proficiency in r1 as part of the criteria for who gets position r2. This addresses (a) and (b) but results in r2 staff being less r2-skilled.
I'm curious if you disagree with this being a trade-off?
Thanks Michael! This is really interesting. Decreasing demand by a few percent is a pretty big deal.
My intuition is that the number of articles published isn't exactly the right thing to regress on; instead you probably want something like "article views". Did the authors discuss this? I guess if all the articles are published in equally-viewed sources, looking at just the raw article count would be fine.
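Here's a toy simulation (all numbers invented) of the concern: if demand is actually driven by views, and outlets vary a lot in reach, then raw article counts are a noisier regressor than views.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
articles = rng.poisson(10, n)                # articles published per period
views_per_article = rng.lognormal(0, 1, n)   # outlets vary a lot in reach
views = articles * views_per_article
demand_change = -0.1 * views + rng.normal(0, 0.5, n)  # views drive demand

def r2(x, y):
    # Fit y = a*x + b by least squares and return R^2.
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1 - resid.var() / y.var()

print(r2(views, demand_change) > r2(articles, demand_change))  # True
```

If every article got the same number of views, the two regressors would be proportional and it wouldn't matter which one you used.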
I'm curious about your approach to management: there are two broad schools of thought, one of which says that you should promote the best performers, and the other which says that management is a different skill, and therefore you should promote the people who you think will be best at management. (Some organizations have a "dual ladder" system as an attempted hybrid between these.)
Startups often face this problem more acutely than most, because the skills which made someone very successful in a 5 person company are quite different than the ones which make them successful in a 500 person company, so someone's previous job performance is not the greatest predictor of their future success.
I'm curious what your thoughts are on this. For most of my career I have been in the "management is a different skill" camp, but over the past couple of years I have moved towards the other camp.
(I'm not sure if this question is too broad. If it is, some specific questions are: 1. To what extent does someone's ability to do a specific technical job predict their ability to manage others doing that job? 2. Does the implicit incentive structure of promoting people who are the best managers rather than the best at their jobs warp people's efforts so much that it outweighs the benefits of having better managers?)
stakeholders start to be willing to pay for the solution
Under some ethical theories, the vast majority of stakeholders (nonhuman animals, future persons) are unable to pay in any meaningful sense. Are you more positive about nonprofit entrepreneurship for organizations that serve these stakeholders?
To the extent that markets are efficient, that narrow slice is the only slice available (since the ways of creating value for which you can easily be paid have already been exploited).
(This is one reason why I personally am usually more excited about nonprofit startups: the low hanging fruit is usually picked in the for-profit world, but there's a lot more remaining in the nonprofit space.)
I can relate to the difficulties of living in a city with few EA's, though I did eventually end up organizing a group that was reasonably successful. I'm curious if you have participated in any online events (e.g. the icebreakers) and whether those filled some of that void for you?
I'm excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:
1. It's correct to try to do the most good, but people who call themselves "EA's" define "good" incorrectly. For example, EA's might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
2. It's correct to try to do the most good, but people who call themselves "EA's" are just empirically wrong about how to do this. For example, EA's focus too much on short-term benefits and discount long-term value.
3. It's incorrect to try to do the most good. (I'm not sure what the alternative you are proposing in your essay is here.)
If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)
It might be more relevant to consider the output: 500,000 views (or ~80,000 hours of watch time). Given that the median video gets 89 views, it might be hard for other creators to match the output, even if they could produce more videos per se.
It's honestly mostly "things I currently think are cool" which is probably not the best way to grow a channel but oh well. My most popular content is analysis of TikTok itself and cosmetics analysis/recommendations.
I'm @benthamite on the app. Would love to connect if you join!
I somewhat agree with this but think it's worth pointing out that a lot of "our positions" are not very complicated or controversial, it's just that most people don't think about the topic. E.g. we just did a video celebrating the extinction of smallpox, and I don't expect that to cause many problems.
Some 80K ideas that might suit this format: the value of doing cheap tests, or A/B/Z plans. Or even "maybe do a little bit of thinking before deciding on your career." I'd be interested to talk to you all about this if/when you think videos would be beneficial.
EA seems reliant on nerdy millennial technology, namely long plaintext social media posts.
I'm interested in communicating in Gen Z ways, which I think roughly means "short amateur videos". I've had moderate success on TikTok (35,000 followers as of this writing), and I would encourage more people to try it out.
There's a nice self-selection where your content is only displayed to 16-year-olds who spend their free time watching math videos (or whatever niche you target), which I expect to be one of the best easily-available audiences of young people.
In 2019, only about half of the respondents reported a 5/5 or a 4/5 level of engagement with EA (someone working at an EA organisation would be at ‘5’). So, we should also expect it to be an overestimate of the drop out rate among the more engaged.
In 2020 we will be able to apply the same method among a subset of more engaged respondents
My understanding is that David/Rethink has a reasonably accurate model of this, i.e. they can predict how someone would respond to the engagement questions on the basis of how they answered other questions.
It might be interesting to try doing this to get data from prior years.
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren't obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".
My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.
Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)
I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
I'd be curious how many people you think are not willing to "tolerate real intellectual diversity". I'm not sure if you are saying
"Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost," or
"Anyone who is upset by intellectual diversity isn't someone we want to attract anyway, so losing them isn't a real cost."
(Presumably you are saying something between these two points, but I'm not sure where.)
I worked on influencing healthcare policy during both the Obama and Trump presidencies, which I think is about as big of a swing as you can get on the executive side. My experience was that there was moderate leeway on the executive side. For example, legislation would require a certain amount of money to be distributed amongst healthcare providers who had "high quality" care, but "high-quality" shifted from "scores better than their peers" to "reports any amount of quality data to the government." (The latter standard effectively meaning that everyone was "high-quality", so the program was approximately useless.) However, the government has a ton of inertia and executives have limited resources, so things often continued on as they were before, even if executives really wanted things to change.
I can think of a couple of ways in which executive branch lobbying can be "sticky":
Something which I didn't appreciate until working on this is that it's often quite hard for the government to actually do the thing it is trying to do. Officials often haven't thought through how some policy would affect a stakeholder, or what would happen in some unusual circumstance, simply because there are so many different things to consider. Many of my suggestions were things like "this section contradicts this other section if some circumstance occurs, so you should fix that," and I expect those to stick relatively well because they're pretty uncontroversial.
As I alluded to above, most government employees are nonpolitical staffers who mostly just do their job the way their predecessor trained them. I'm sure you've heard stories about government departments using computer systems from the 1970s or whatever, and a similar thing can happen at the process level. Even if the executive branch has the ability to change the interpretation of some term, they often won't, just because changing is hard.
This is just from my personal experience, and I'm not sure how it would compare to working with other branches of government (or even other executive-branch agencies).
Greaves' cluelessness paper was published in 2016. My impression is that the broad argument has existed for 100+ years, but the formulation of cluelessness arising from flow-through effects outweighing direct effects (combined with EA's tending to care quite a bit about flow-through effects) was a relatively novel and major reformulation (though probably still below your bar).
I'm hoping that at some point, I'll be able to do a bit more of a roundup / analysis post, where I look at some of the key themes and leanings from across several of our case studies. There might be more scope for making these sorts of claims or estimates in a post like that, though it still might not be worth the time. I'd be interested in your thoughts on that!
Yes, I personally would be interested and would be happy to give my opinions about which of these would be most useful. But (obviously) the priorities of EAA leaders who can put your advice into practice are probably more important.