Posts

When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z · score: 34 (22 votes)

Comments

Comment by max_daniel on [Link] "Revisiting the Insights model" (Median Group) · 2019-07-16T08:24:44.978Z · score: 14 (6 votes) · EA · GW

Thanks for linkposting this as I might not have seen it otherwise. FWIW, my own intuition is that work like this is among the most marginally valuable things we can do. Here, "work like this" roughly means something like "build a legible model with implications for something that's clearly an important parameter when thinking about the long-term future and, crucially, have some way to empirically ground that model". However, I haven't yet looked at this model in detail and so cannot currently comment on its specific predictions.

Comment by max_daniel on A case for strategy research: what it is and why we need more of it · 2019-07-11T16:35:06.472Z · score: 4 (3 votes) · EA · GW

Thank you for your response, David! One quick observation:

I think the idea cluster of existential risk reduction was formed through something I'd call "research". I think, in a certain way, we need more work of this type.

I agree that the current idea cluster of existential risk reduction was formed through research. However, it seems that one key difference between our views is: you seem to be optimistic that future research of this type (though different in some ways, as you say later) would uncover similarly useful insights, while I tend to think that the space of crucial considerations we can reliably identify with this type of research has been almost exhausted. (NB I think there are many more crucial considerations "out there", it's just that I'm skeptical we can find them.)

If this is right, then it seems we actually make different predictions about the future, and you could prove me wrong by delivering valuable strategy research outputs within the next few years.

Comment by max_daniel on Critique of Superintelligence Part 1 · 2019-07-08T07:32:48.058Z · score: 7 (3 votes) · EA · GW

[Sorry for picking out a somewhat random point unrelated to the main conversation. This just struck me because I feel like it's similar to a divergence in intuitions I often notice between myself and other EAs and particularly people from the 'rationalist' community. So I'm curious if there is something here it would be valuable for me to better understand.]

To give a silly human example, I'll name Tim Ferriss, who has used the skills of "learning to learn", "ignoring 'unwritten rules' that other people tend to follow", and "closely observing the experience of other skilled humans" to learn many languages, become an extremely successful investor, write a book that sold millions of copies before he was well-known, and so on. His IQ may not be higher now than when he began, but his end results look like the end results of someone who became much more "intelligent".
Tim has done his best to break down "human-improving ability" into a small number of rules. I'd be unsurprised to see someone use those rules to improve their own performance in almost any field, from technical research to professional networking.

Here is an alternative hypothesis, a bit exaggerated for clarity:

  • There is a large number of people who try to be successful in various ways.
  • While trying to be successful, people tend to confabulate explicit stories for what they're doing and why it might work, for example "ignoring 'unwritten rules' that other people tend to follow".
  • These confabulations are largely unrelated to the actual causes of success, or at least don't refer to them in a way nearly as specific as they seem to do. (E.g., perhaps a cause could be 'practicing something in an environment with frequent and accurate feedback', while a confabulation would talk about quite specific and tangential features of how this practice was happening.)
  • Most people actually don't end up having large successes, but a few do. We might be pulled to think that their confabulations about what they were doing are insightful or worth emulating, but in fact it's all a mix of survivorship bias and of people doing better because of certain innate traits (IQ, conscientiousness, perhaps excitement-seeking, ...) that don't appear in the confabulations.

Do you think we have evidence that this alternative hypothesis is false?

Comment by max_daniel on Effective Altruism is an Ideology, not (just) a Question · 2019-07-06T17:25:55.522Z · score: 2 (2 votes) · EA · GW

I already did this. - I was implicitly labelling this "double upvote" and was trying to say something like "I wish I could upvote this post even more strongly than with a 'strong upvote'". But thanks for letting me know, and sorry that this wasn't clear. :)

Comment by max_daniel on X-risks of SETI and METI? · 2019-07-04T13:19:42.671Z · score: 1 (1 votes) · EA · GW

I agree that information we received from aliens would likely spread widely. So in this sense I agree it would clearly be a potential info hazard.

It seems unclear to me whether the effect of such information spreading would be net good or net bad. If you see reasons why it would probably be net bad, I'd be curious to learn about them.

Comment by max_daniel on X-risks of SETI and METI? · 2019-07-04T06:53:59.594Z · score: 5 (2 votes) · EA · GW

([This is not a serious recommendation and something I might well change my mind about if I thought about it for one more hour:] Yes, though my tentative view is that there are fairly strong, and probably decisive, irreversibility/option value reasons for holding off on actions like SETI until their risks and benefits are better understood. NB the case is more subtle for SETI than METI, but I think the structure is the same: once we know there are aliens there is no way back to our previous epistemic state, and it might be that knowing about aliens is an info hazard.)

Comment by max_daniel on A case for strategy research: what it is and why we need more of it · 2019-07-03T14:23:29.725Z · score: 1 (1 votes) · EA · GW
These are all about AI (except maybe the one about China). Is that because you believe the easy and valuable wins are only there, or because you're most aware of those?

My guess is that AI examples were most salient to me because AI has been the area I've thought about the most recently. I strongly suspect there are easy wins in other areas as well.

Comment by max_daniel on Information security careers for GCR reduction · 2019-07-02T10:49:30.733Z · score: 4 (3 votes) · EA · GW

That's helpful, thanks!

Do you have a sense of whether the required talent is relatively generic quantitative/technical talent that would e.g. predict success in fields like computer science, physics, or engineering, or something more specific? And also what the bar is?

Currently I'm not sure if what you're saying is closer to "if you struggled with maths in high school, this career is probably not for you" or "you need to be at a +4 std level of ability in these specific things" (my guess is something closer to the former).

No worries if that was beyond the depth of your investigation.

Comment by max_daniel on Announcing the launch of the Happier Lives Institute · 2019-06-29T13:08:47.429Z · score: 1 (1 votes) · EA · GW

Hi Michael, thank you for your thoughtful reply. This all makes a lot of sense to me.

FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad - because of the downsides you mention - for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically. (Though I appreciate that maybe you didn't write that post specifically for this Forum, and that maybe it just isn't worth the effort to do so.) Waiting and checking if other people flag similar concerns seems like a very sensible response to me.

One quick reply:

Holding one's views on population ethics or the badness of death fixed, if one has a different view of what value is, or how it should be measured (or how it should be aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.

I agree that I didn't make it intelligible why this would be confusing to me. I think my thought was roughly:

(i) Contingently, we can have an outsized impact on the expected size of the total future population (e.g. by reducing specific extinction risks).

(ii) If you endorse totalism in population ethics (or a sufficiently similar aggregative and non-person-affecting view), then whatever your theory of well-being, because of (i) you should think that we can have an outsized impact on total future well-being by affecting the expected size of the total future population.

Here, I take "outsized" to mean something like "plausibly larger than through any other type of intervention, and in particular larger than through any intervention that optimized for any measure of near-term well-being". Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would "screen off" questions about the theory of well-being, or how to measure well-being - that is, my guess is that reducing existential risk would be (contingently!) a convergent priority (at least on the axiological, even though not necessarily normative level) of all bundles of ethical views that include totalism, in particular irrespective of their theory of well-being. [Of course, taken literally this claim would probably be falsified by some freak theory of well-being or other ethical view optimized for making it false, I'm just gesturing at a suitably qualified version I might actually be willing to defend.]

However, I agree that there is nothing conceptually confusing about the assumption that a different theory of well-being would imply different career priorities. I also concede that my case isn't decisive - for example, one might disagree with the empirical premise (i), and I can also think of other at least plausible defeaters such as claims that improving near-term happiness correlates with improving long-term happiness (in fact, some past GiveWell blog posts on flow-through effects seem to endorse such a view).

Comment by max_daniel on Information security careers for GCR reduction · 2019-06-28T22:31:35.635Z · score: 12 (8 votes) · EA · GW

I've seen some people advise against this career path, and I remember a comment by Luke elsewhere that he's aware of some people having that view. Given this, I'm curious if there are any specific arguments against pursuing a career in information security that you've come across?

(It's not clear to me that there must be any. - E.g. perhaps all such advice was based on opaque intuitions, or was given for reasons not specific to information security such as "this other career seems even better".)

Comment by max_daniel on Information security careers for GCR reduction · 2019-06-28T22:27:17.412Z · score: 10 (6 votes) · EA · GW

Could you elaborate on why you "expect the training [for becoming an information security professional] to be very challenging"?

Based on the OP, I could see the answer being any combination of the following, and I'm curious if you have more specific views.

  • a) The training and work is technically challenging.
  • b) The training and work has idiosyncratic aspects that may be psychologically challenging, e.g. the requirement to handle confidential information over extended periods of time.
  • c) The training and work requires an unusually broad combination of talents, e.g. both technical aptitude and the ability to learn to manage large teams.
  • d) You don't know of any specific reasons why the training would be challenging, but infer that it must be for structural reasons such as few people pursuing that career despite lucrative pay.
Comment by max_daniel on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T13:03:08.008Z · score: 19 (13 votes) · EA · GW

[Disclaimer: I used to be the Executive Director of the Foundational Research Institute, and currently work at the Future of Humanity Institute, both of which you mention in your post. Views are my own.]

Thank you so much for writing this! I wish I could triple-upvote this post. It seems to fit very well with some thoughts and unarticulated frustrations I've had for a while. This doesn't mean I agree with everything in the OP, but I feel excited about conversations it might start. I might add some more specific comments over the next few days.

[FWIW, I'm coming roughly from a place of believing that (i) at least some of the central 'ideological tenets' of EA are conducive to the community causing good outcomes, and (ii) the overall ideological and social package of EA makes me more optimistic about the EA community causing good outcomes per member than any other major existing social and ideological package does. However, I think these are messy empirical questions we are ultimately clueless about. And I do share a sense that in at least some conversations within the community it's not being acknowledged that these are debatable questions, and that the community's trajectory is being and will be affected by these implicit "ideological" foundations. (Even though I probably wouldn't have chosen the term "ideology".)

I do think that an awareness of EA's implicit ideological tenets sometimes points to marginal improvements I'd like the community to make. This is particularly true for more broadly investigating potential long-termist cause areas, including ones that don't have to do with emerging technologies. I also suspect that qualitative methodologies from the social sciences and humanities are currently being underused, e.g. I'd be very excited to see thoroughly conducted interviews with AI researchers and certain government staff on several topics.

Of course, all of this reflects that I'm thinking about this in a sufficiently outcome-oriented ethical framework.

My perception also is that within the social networks most tightly coalescing around the major EA organizations in Oxford and the Bay Area it is more common for people to be aware of the contingent "ideological" foundations you point to than one would maybe expect based on published texts. As a random example, I know of one person working at GPI who described themselves as a dualist, and I've definitely seen discussions around "What if certain religious views are true?" - in fact, I've seen many more discussions of the latter kind than in other mostly secular contexts and communities I'm familiar with.]

Comment by max_daniel on A case for strategy research: what it is and why we need more of it · 2019-06-25T23:25:24.599Z · score: 8 (6 votes) · EA · GW

Thank you for this post. I'm usually wary of attempts to establish terminology unless there are clear demonstrations of its usefulness. However, in this case my impression is that public writing on related terms such as 'global priorities research' or 'macrostrategy' is sufficiently vague or ill-focused that I think this post might contribute to a valuable conversation. I'm not sure if the specific terms you're using here will catch on, but I'm happy to see the framework clearly spelled out.

A few reactions:

[Epistemic status: I've only thought about your post specifically for 1 minute, but about the broader issue of the marginal utility of different types of longtermist-relevant research for something between 1 and 1000 hours depending on how you count. Still, I don't think I have very crisp arguments or data to back up the following impressions. I think in the following I'm mostly simply stating my view rather than providing reasons to believe it.]

  • I strongly agree with the following (I also think it would be better if I shared more of my implicit models, though doing so would be way less useful than for some other researchers):
"We believe most researchers have some implicit models which, when written up, would not meet the standards for academic publication. However, sharing them will allow these models to be built upon and improved by the community. This will also make it easier for outsiders, such as donors and aspiring researchers, to understand the crucial considerations within the field."
  • Relative to my own intuitions, I feel like you underestimate the extent to which your "spine" ideally would be a back-and-forth between its different levels rather than (except for informing and improving research) a one-way street. Put differently, my own intuition is that things like "insights down the line might give rise to new questions higher up" and "trying out some tactics research might illuminate some strategic uncertainties" are very important - I would have mentioned them more prominently. In fact, I tend to be quite pessimistic about strategy research that is not in some way informed by tactics research (and informing research) - I think the room for useful 'tactics- and data-free strategy research' is very limited, and that EA has actually mostly exhausted it in the area of existential risk reduction. [I know that this last view of mine is particularly controversial among people I epistemically respect.]
  • I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you're planning to do strategy research going forward (or what methods you'd recommend to others).
  • An attempt to state my overall (weakly held) view in your terminology:
    • There are some easy wins in existential-risk-relevant informing research. Pursuing them often requires domain expertise in an existing field or entrepreneur-style initiative. I would like to see the EA community both reap these low-hanging fruit, and more highly value the required kinds of domain expertise and entrepreneurship in order to increase our structural ability to pursue such easy wins.
      • Examples of easy wins (I'm aware that some of them are already being pursued by people inside and outside of EA - that's great!): (1) gather more comprehensive data on AI inputs and outputs (e.g. it is symptomatic that both AI Impacts's page on Funding of AI Research and more well-resourced attempts such as the AI Index arguably are substantially incomplete), (2) create a list of AI capability milestones that can be used for forecasting, (3) a qualitative social science research project that interviews US government officials to get a sense of their understanding of AI, (4) an accessible summary of theories explaining the timing and location of the Industrial Revolution, (5) an accessible summary of what properties of AI seem most relevant for AI's impact on economic growth, (6) <lots of specific things about China>, ... - I think I could generate 10-100 such easy wins for which it's true that I'd pay significant amounts of money for an answer I could be confident in and that doesn't require me to do the research myself [maybe that bar is too low to be interesting].
    • While I agree that we face substantial strategic uncertainty, I think I'm significantly less optimistic about the marginal tractability of strategy research than you seem to be. (Exceptions about which I'm more optimistic: questions directly tied to tactics or implementation; and strategy research that is largely one of the above 'easy wins'.) Given the resources that have been invested into strategy research, e.g. at FHI, if the marginal value was high then I would expect to be able to point at more specific valuable outputs of strategy research from the last 5 years. For example, while I tend to be excited about work that, say, immediately helps Open Phil to determine their funding allocation, I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as "how to best allocate resources between reducing various existential risks" in the abstract. To be clear, I think there are some valuable insights that can be found by such research - for example, that anthropogenic existential risk is much higher than natural risk. However, my impression is that the remaining open questions are either very intractable or require intimate familiarity with a specific context (which could, for example, be provided by tactics research or access to information internal to some organization more broadly).
    • I feel like you overstate the point that "[s]trategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information". To me, this seems true only for some ways of interacting with your environment. In your example, a way of interacting with the environment that seems safe and like it has a high value of information would be to broadly understand how the government operates without making specific recommendations - e.g. by looking at relevant case studies, working in government, or interviewing government staff.
    • Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company's strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I'm therefore reluctant to call them "research".
Comment by max_daniel on Announcing the launch of the Happier Lives Institute · 2019-06-25T12:53:25.040Z · score: 23 (10 votes) · EA · GW

Congratulations on launching HLI. From my outside perspective, it looks like you have quite some momentum, and I'm glad to see more diverse approaches being pursued within EA. (Even though I don't anticipate supporting yours in particular.)

One thing I'm curious about is to what extent HLI's strategy or approach depends on views in population ethics (as opposed to other normative questions, including the theory of well-being), and to what extent you think the question of whether maximizing consequentialism would recommend supporting HLI hinges on population ethics.

I'm partly asking because I vaguely remember you having written elsewhere that regarding population ethics you think that (i) death is not bad in itself for any individual's well-being, and (ii) creating additional people is never good for the world. My impression is that (i) and (ii) have major implications for how to do 'cause prioritization', and for how to approach the question of "how to do the most good we can" more broadly. It thus would make sense to me that someone endorsing (i) and (ii) would think that, say, they need to research and provide their own career advice, as it would likely differ from the advice provided by 80K and from popular views in EA more generally. (Whereas, without such an explanation, I would be confused about why someone would start their own organization "[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.") More broadly, it would make sense to me that people endorsing (i) and (ii) embark on their own research programme and practical projects.

However, I'm struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post. It seems to me that everything you say is consistent with (i) and (ii), and that e.g. in your vision you almost suggest a view that is neutral about 'making happy people'. But on the face of it, 'increasing the expected number of [happy] individuals living in the future, for example by reducing the risk of human extinction' seems a reasonable candidate answer to your guiding question, i.e., “What are the most cost-effective ways to increase self-reported subjective well-being?”

Put differently, I'd expect that your post raises questions such as 'How is this different from what other EA orgs are doing?' or 'How will your career advice differ from 80K's?' for many people. I appreciate there are many other reasons why one might focus on, as you put it, "welfare-maximization in the nearer-term" - most notably empirical beliefs. For example, someone might think that the risk of human extinction this century was extremely small, or that reducing that risk was extremely intractable. And perhaps an organization such as HLI is more useful as a broad tent that unites 'near-term happiness maximizers' irrespective of their reasons for why they focus on the near term. You do mention some of the differences, but it doesn't seem to me that you provide sufficient reasons for why you're taking this different approach. Instead, you stress that you take value to exclusively consist of happiness (and suffering), how you operationalize happiness etc. - but unless I'm mistaken, these points, which belong to the theory of well-being, don't actually provide an answer to the question that to me seems a bit like the unacknowledged elephant in the room: 'So why are you not trying to reduce existential risk?' Indeed, if you were to ask me why I'm not doing roughly the same things as you with my EA resources, I'd to a first approximation say 'because we disagree about population ethics' rather than 'because we disagree about the theory of well-being' or 'I don't care as much about happiness as you do', and my guess is this is similar for many EAs in the 'longtermist mainstream'.

To be clear, this is just something I was genuinely surprised by, and am curious to understand. The launch post currently does seem slightly misleading to me, but not more so than I'd expect posts in this reference class to generally be, and not so much that I clearly wish you'd change anything. I do think some people in your target audience will be similarly confused, and so perhaps it would make sense for you to at least mention this issue and possibly link to a page with a more in-depth explanation for readers who are interested in the details.

In any case, all the best for HLI!

Comment by max_daniel on New Report Claiming Understatement of Existential Climate Risk · 2019-06-13T10:08:04.229Z · score: 2 (2 votes) · EA · GW

[Epistemic status: climate change is outside of my areas of competence, I'm mostly reporting what I've heard from others, often in low-bandwidth conversations. I think their views are stronger overall evidence than my own impressions based on having engaged on the order of 10 hours with climate change from an existential risk perspective.]

FWIW, before having engaged with the case made by that report, I'm skeptical whether climate change is a significant "direct" existential risk. (As opposed to something that hurts our ability to prevent or mitigate other risks, and might be very important for that reason.) This is mostly based on:

  • John Halstead's work on this question, which I found accessible and mostly convincing.
  • My loose impression that 1-5 other people whose reasoning I trust and who have engaged more deeply with that question have concluded that climate change is unlikely to be a "direct" extinction risk, and my not being aware of any other case for why climate change might otherwise be a particularly large existential risk (i.e. I haven't seen suggested mechanisms for how/why climate change might permanently reduce the value/quality of the future that seemed significantly more plausible to me than just-so stories one could tell about almost any future development). Unfortunately, I don't think there is a publicly accessible presentation of that reasoning (apart from Halstead's report mentioned above).

FWIW, I'd also guess that the number of EAs with deep expertise on climate change is smaller than optimal. However, I'm very uncertain about this, and I don't see particular reasons why I would have good intuitions about large-scale talent allocation questions (it's even quite plausible that I'm misinformed about the number of EAs that do have deep expertise on climate change).

Comment by max_daniel on New Report Claiming Understatement of Existential Climate Risk · 2019-06-13T10:00:34.715Z · score: 5 (2 votes) · EA · GW

(Meta: Curious why this post was downvoted at least once. Personally, I'm grateful for this pointer, as I consider it relevant and might not have become aware of the report otherwise. I don't view the linkpost as an endorsement of the content or epistemic stance exhibited by the report.)

Comment by max_daniel on EA is vetting-constrained · 2019-05-11T02:55:10.275Z · score: 11 (6 votes) · EA · GW
Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Do you have any thoughts on how to best do this, and on who is in a position to do this? For example, my own weakly held guess is that I could have substantially more impact in a "grantmaker/mentor for new projects" role than in my current role, but I have a poor sense of how I could go about getting more information on whether that guess is correct; and if it was correct, I wouldn't know if this means I should actively try to get into such a role or if the bottleneck is elsewhere (e.g. it could be that there are many people who have the skills to be a good grantmaker/mentor but that actors who hold other required resources such as funding or trust don't have the capacity to utilize more grantmakers/mentors). (My current guess is the second, which is why I'm not actively pursuing this.)

Comment by max_daniel on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-11T01:13:11.636Z · score: 8 (5 votes) · EA · GW

Unfortunately I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very very roughly I'm often not convinced by the use of quantitative models in research (e.g. the "Racing to the Precipice" paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED's impact which David Denkenberger presented in some recent EA Forum posts). OTOH I often wish that for organizational decisions or in direct feedback more quantitative statements were being made -- e.g. "this was one of the two most interesting papers I read this year" is much more informative than "I enjoyed reading your paper". Again, this is somewhat more subtle than I can easily convey: in particular, I'm definitely not saying that e.g. the ALLFED model or the "Racing to the Precipice" paper shouldn't have been made - it's more that I wish they would have been accompanied by a more careful qualitative analysis, and would have been used to find conceptual insights and test assumptions rather than as a direct argument for certain practical conclusions.

Comment by max_daniel on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-26T19:10:43.634Z · score: 9 (5 votes) · EA · GW

FWIW, as someone who was and is broadly sympathetic to the aims of the OP, my general impression agrees with "excessive quantification in some areas of the EA but not enough of it in other areas."

(I think the full picture has more nuance than I can easily convey, e.g. rather than 'more vs. less quantification' it often seems more important to me how quantitative estimates are being used - what role they play in the overall decision-making or discussion process.)

Comment by max_daniel on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-06T12:41:13.394Z · score: 2 (2 votes) · EA · GW
If we are talking about charity evaluations then reliability can be estimated directly so this is no longer a predictable error.

Hmm. This made me wonder whether the paper's results depend on the decision-maker being uncertain about which options have been estimated reliably vs. unreliably. It seems possible that the effect could disappear if the reliability of my estimates varies but I know that the variance of my value estimate for option 1 is v_1, the one for option 2 v_2, etc. (even if the v_i vary a lot). (I don't have time to check the paper or get clear on this, I'm afraid.)

Is this what you were trying to say here?

Comment by max_daniel on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-06T12:35:48.099Z · score: 7 (5 votes) · EA · GW
Kind of an odd assumption that dependence on luck varies from player to player.

Intuitively, it strikes me as appropriate for some realistic situations. For example, you might try to estimate the performance of people based on quite different kinds or magnitudes of inputs; e.g. one applicant might have a long relevant track record, for another one you might just have a brief work test. Or you might compare the impact of interventions that are backed by very different kinds of evidence - say, a RCT vs. a speculative, qualitative argument.

Maybe there is something I'm missing here about why the assumption is odd, or perhaps even why the examples I gave don't have the property required in the paper? (The latter would certainly be plausible as I read the paper a while ago, and even back then not very closely.)

Comment by max_daniel on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-04T22:48:22.012Z · score: 29 (12 votes) · EA · GW

I haven't had time yet to think about your specific claims, but I'm glad to see attention for this issue. Thank you for contributing to what I suspect is an important discussion!

You might be interested in the following paper which essentially shows that under an additional assumption the Optimizer's Curse not only makes us overestimate the value of the apparent top option but in fact can make us predictably choose the wrong option.

Denrell, J. and Liu, C., 2012. Top performers are not the most impressive when extreme performance indicates unreliability. Proceedings of the National Academy of Sciences, 109(24), pp.9331-9336.

The crucial assumption roughly is that the reliability of our assessments varies sufficiently much between options. Intuitively, I'm concerned that this might apply when EAs consider interventions across different cause areas: e.g., our uncertainty about the value of AI safety research is much larger than our uncertainty about the short-term benefits of unconditional cash transfers.

(See also the part on the Optimizer's Curse and endnote [6] on Denrell and Liu (2012) in this post by me, though I suspect it won't teach you anything new.)
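(As a concrete illustration of the Denrell and Liu point, here is a minimal Monte Carlo sketch with made-up numbers of my own, not anything taken from the paper. It assumes one reliably estimated option with a slightly higher true value and ten noisily estimated options with a lower true value; naively picking the option with the highest estimate then predictably selects one of the noisy, lower-value options.)

```python
# Toy simulation (my own illustration, not from Denrell & Liu 2012):
# one reliably estimated option with true value 1.0 (noise sd 0.1)
# vs. ten unreliably estimated options with true value 0.8 (noise sd 1.0).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

true_values = np.array([1.0] + [0.8] * 10)   # option 0 is genuinely best
noise_sd    = np.array([0.1] + [1.0] * 10)   # but the others are estimated noisily

# Noisy value estimates for each trial and option.
estimates = true_values + rng.normal(0.0, noise_sd, size=(n_trials, len(true_values)))

# Naive rule: pick the option with the highest estimate.
chosen = estimates.argmax(axis=1)

print("P(a noisy, lower-value option is chosen):", (chosen != 0).mean())
print("Mean true value of the chosen option:    ", true_values[chosen].mean())
print("Mean estimate of the chosen option:      ", estimates[np.arange(n_trials), chosen].mean())
```

With these made-up numbers the naive rule almost always picks one of the noisy options, so the true value it obtains is predictably lower than option 0's, while the estimate it was chosen on is predictably far too high - the combination of overestimation and predictably wrong choice described above.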

Comment by max_daniel on The career and the community · 2019-03-26T18:07:18.802Z · score: 11 (5 votes) · EA · GW

Thank you, this concrete analysis seems really useful to understand where the perception of skew toward EA organizations might be coming from.

Last year I talked to maybe 10 people over email, Skype, and at EA Global, both about what priority path to focus on, and then what to do within AI strategy. Based on my own experience last year, your "word of mouth is more skewed toward jobs at EA orgs than advice in 80K articles" conjecture feels true, though not overwhelmingly so. I also got advice from several people specifically on standard PhD programs, and 80K was helpful in connecting me with some of these people, for which I'm grateful. However, my impression (which might be wrong/distorted) was that especially people who themselves were 'in the core of the EA community' (e.g. working at an EA org themselves vs. a PhD student who's very into EA but living outside of an EA hub) favored me working at EA organizations. It's interesting that I recall few people saying this explicitly but have a pretty strong sense that this was their view implicitly, which maybe means that this impression is driven more by my guess about what is generally approved of within EA than by people's actual views. It could even be a case of pluralistic ignorance (in which case public discussions/posts like this would be particularly useful).

Anyway, here are a few other hypotheses of what might contribute to a skew toward 'EA jobs' that's stronger than what 80K literally recommends:

  • Number of people who meet the minimal bar for applying: Often, jobs recommended by 80K require specialized knowledge/skills, e.g. programming ability or speaking Chinese. By contrast, EA orgs seem to open a relatively large number of roles where roughly any smart undergraduate can apply.
  • Convenience: If you're the kind of person who naturally hears about, say, the Open Phil RA job posting, it's quite convenient to actually apply there. It costs time, but for many people 'just time' as opposed to creativity or learning how to navigate an unfamiliar field or community. For example, I'm a mathematician who was educated in Germany and considered doing a PhD in political science in the US. It felt like I had to find out a large number of small pieces of information someone familiar with the US education system or political science would know naturally. Also, the option just generally seemed more scary and unattractive because it was in 'unfamiliar terrain'. Relatedly, it was much easier for me to talk to senior staff at EA organizations than it was to talk to, say, a political science professor at a top US university. None of these felt like an impossible bar to overcome, but it definitely seemed to me that they skewed my overall strategy somewhat in favor of the 'familiar' EA space. I generally felt that, given that there's so much attention on career choice in EA, I had surprisingly little support and readily available knowledge after I had decided to broadly "go into AI strategy" (which I feel like my general familiarity with EA would have enabled me to figure out anyway, and was indeed my own best guess before I found out that many others agreed with this). NB as I said 80,000 Hours was definitely somewhat helpful even in this later stage, and it's not clear to me if you could feasibly have done more (e.g. clearly 80K cannot individually help everyone with my level of commitment and potential to figure out details of how to execute their career plan). [I also suspect that I find things like figuring out the practicalities of how to get into a PhD program unusually hard/annoying, but more like 90th than 99th percentile.] But maybe there's something we can collectively do to help correct this bias, e.g. the suggestion of nurturing strong profession-specific EA networks seems like it would help with enabling EAs to enter that profession as well (as can research by 80K e.g. your recent page on US AI policy). To the extent that telling most people to work on AI prevents the start of such networks this seems like a cost to be aware of.
  • Advice for 'EA jobs' is more unequivocal, see this comment.
Comment by max_daniel on The career and the community · 2019-03-26T00:31:03.358Z · score: 2 (2 votes) · EA · GW

It probably is, but I don't think this explanation is rationalizing. I.e. I don't think this founder effect would provide a good reason to think that this distribution of knowledge and opinions is conducive to reaching the community's goals.

Comment by max_daniel on The career and the community · 2019-03-24T20:36:40.366Z · score: 9 (3 votes) · EA · GW

Hmm, thanks for sharing your impression, I think talking about specific examples is often very useful to spot disagreements and help people learn from each other.

I've never lived in the US or otherwise participated in one of these communities, so I can't tell from first-hand experience. But my loose impression is that there have been substantial disagreements both synchronically and diachronically within those movements; for example, in social justice about trans* issues or sex work, and in conservatism about interventionist vs. isolationist foreign policy, to name but a few examples. Of course, EAs disagree substantially about, say, their favored cause area. But my impression at least is that disagreements within those other movements can be much more acrimonious (jtbc, I think it's mostly good that we don't have this in EA), and also that the difference in 'cultural vibe' I would get from attending, say, a Black Lives Matters grassroots group meeting vs. a meeting of the Hilary Clinton presidential campaign team is larger than the one between the local EA group in Harvard and the EA Leaders Forum. Do your impressions of these things differ, or were you thinking of other manifestations of conformity?

(Maybe that's comparing apples to oranges because a much larger proportion of EAs are from privileged backgrounds and in their 20s, and if one 'controlled' social justice and conservatism for these demographic factors they'd be closer to EA levels of conformity. OTOH maybe it's something about EA that contributes to causing this demographic narrowness.)

Also, we have an explanation for the conformity within social justice and conservatism that on some readings might rationalize this conformity - namely Haidt's moral foundations theory. To put it crudely, given that you're motivated by fairness and care but not authority etc. maybe it just is rational to hold the 'liberal' bundle of views. (I think that's true only to a limited but still significant extent, and also maybe that the story for why the mistakes reflected by the non-rational parts are so correlated is different from the one for EA in an interesting way.) By contrast, I'm not sure there is a similarly rationalizing explanation for why many EAs agree on both (i) there's a moral imperative for cost-effectiveness, and (ii) you should one-box in Newcomb's problem, and for why many know more about cognitive biases than about the leading theories for why the Industrial Revolution started in Europe rather than China.

Comment by max_daniel on The career and the community · 2019-03-24T20:14:20.382Z · score: 1 (1 votes) · EA · GW

Thank you, your comment made me realize both that I maybe wasn't quite aware what meaning and connotations 'community' has for native speakers, and maybe that I was implicitly comparing EA against groups that aren't a community in that sense. I guess it's also quite unclear to me if I think it's good for EA to be a community in this sense.

Comment by max_daniel on The career and the community · 2019-03-22T19:01:53.994Z · score: 20 (12 votes) · EA · GW

I don't have relevant data nor have I thought very systematically about this, but my intuition is to strongly agree with basically everything you say.

In particular, I feel that the claim "Having exposure to a diverse range of perspectives and experiences is generally valuable" squares fairly well with my own experience. There just are so many moving parts to how communities and organizations work - how to moderate meetings, how to give feedback, how much hierarchy and structure to have, etc. - that I think it's fairly hard to even be aware of the full space of options (and impossible to experiment with a non-negligible fraction of it). Having an influx of people with diverse experiences in that respect can massively multiply the amount of information available on these intangible things. This seems particularly valuable to EA to me because I feel that relative to the community's size there's an unusual amount of conformity on these things within EA, perhaps due to the tight social connections within the community and the outsized influence of certain 'cultural icons'.

Personally, I feel that I've learned a lot of the (both intellectual and interpersonal) skills that are most useful in my work right now outside of EA, and in fact that outside of EA's core focus (roughly, what are the practical implications of 'sufficiently consequentialist' ethics) I've learned surprisingly little in EA even after correcting for only having been in the community for a small fraction of my life.

(Perhaps more controversially, I think this also applies to the epistemic rather than the purely cultural or organizational domain: i.e. my claim roughly is that things like phrasing lots of statements in terms of probabilities, having discussions mostly in Google docs vs. in person, the kind of people one circulates drafts to, how often one is forced to face a situation where one has to explain one's thoughts to people one has never met before, and various small things like that affect the overall epistemic process in messy ways that are hard to track or anticipate other than by actually having experienced how several alternatives play out.)

Comment by max_daniel on The career and the community · 2019-03-22T18:38:39.709Z · score: 22 (11 votes) · EA · GW

Related: Julia Galef's post about 'Planners vs. Hayekians'. See in particular how she describes the Hayekians' conclusion, which sounds similar to (though stronger than) your recommendation:

Therefore, the optimal approach to improving the world is for each of us to pursue projects we find interesting or exciting. In the process, we should keep an eye out for ways those projects might yield opportunities to produce a lot of social value — but we shouldn’t aim directly at value-creation.

My impression is that I've been disagreeing for a while with many EAs (my sample is skewed toward people working full-time at EA orgs in Oxford and especially Berlin) about how large the 'Hayekian' benefits from excellence in 'conventional' careers are. That is, how many unanticipated benefits will becoming successful in some field X have? I think I've consistently been more optimistic about this than most people I've talked to, which is one of several reasons why I'm less excited about 'EA jobs' relative to other options than I think many EAs are.

(Apologies if you've linked to that in your post already, I didn't thoroughly check all links.)

Comment by max_daniel on Sharing my experience on the EA forum · 2019-03-20T17:05:02.582Z · score: 1 (1 votes) · EA · GW

What about feedback that's anonymous but public? This has some other downsides (e.g. misuse potential) but seems to avoid the first two problems you've pointed out.

Comment by max_daniel on Sharing my experience on the EA forum · 2019-03-19T15:15:14.100Z · score: 2 (2 votes) · EA · GW

My initial reaction is to really like the idea of being prompted to give anonymous feedback. I think there probably are also reasons against this, but maybe it's at least worth thinking about.

(One reason why I like this is that it would be helpful for authors and mitigate problems such as the one expressed by the OP. Another reason is that it might change the patterns of downvotes in ways that are beneficial. For example, I currently almost never downvote something that's not spam, but quite possibly it wouldn't be optimal if everyone used downvotes as narrowly [though I'm not sure and feel confused about the proper role of downvotes in general]. At the same time, I often feel like the threshold for explaining my disagreement in a non-anonymous comment would be too high. I anticipate that the opportunity to add anonymous feedback to a downvote would sometimes make me express useful concerns or disagreements I currently don't express.)

Comment by max_daniel on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-11T15:11:57.784Z · score: 5 (4 votes) · EA · GW

Thanks for sharing, I suspect this might be somewhat common. I've speculated about a related cause in another comment.

Comment by max_daniel on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-11T15:00:30.194Z · score: 13 (6 votes) · EA · GW
Of our top rated plan changes, only 25% involve people working at EA orgs

For what it's worth, given how few EA orgs there are in relation to the number of highly dedicated EAs and how large the world outside of EA is (e.g. in terms of institutions/orgs that work in important areas or are reasonably good at teaching important skills), 25% actually strikes me as a high figure. Even if this is right, there might be good reasons for the figure being that high: e.g. it's natural, and doesn't necessarily reflect any mistake, that 80K knows more about which careers at EA orgs are high-impact, can do a better job at finding people for them, etc. However, I would be surprised if the optimal proportion remained as high as the EA movement becomes more mature.

(I didn't read your comment as explicitly agreeing or disagreeing with anything in the above paragraph, just wanted to share my intuitive reaction.)

Thank you for your comments here, they've helped me understand 80K's current thinking on the issue raised by the OP.

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T11:39:32.660Z · score: 15 (7 votes) · EA · GW

FWIW, without having thought systematically about this, my intuition is to agree. I'd be particularly keen to see:

  • More explicit models for what trainable skills and experiences are useful for improving the long-term future, or will become so in the future (as new institutions such as CSET are being established).
  • More actionable advice on how to train these skills.

My gut feeling is that in many places we could do a better job at utilizing skills and experiences people can get pretty reliably in the for-profit world, academia, or from other established 'institutions'.

I'm aware this is happening to some extent already, e.g. GPI trying to interface with academia or 80K's guide on US policy. I think both are great!

NB this is different from the idea that there are many other career paths that would be high-impact to stay in indefinitely. I think this is also true, but at least if one has a narrow focus on the long-term future I feel less sure if there are 'easy wins' left here.

(An underlying disagreement here might be: Is this feasible, or are we just too bottlenecked by something like what Carrick Flynn has called 'disentanglement'? Very crudely, I tend to agree that we're bottlenecked by disentanglement but that there are still some improvements we can make along the above lines. A more substantive underlying question might be how important domain knowledge and domain-specific skills are for being able to do disentanglement, where my impression is that I place an unusually high value on them whereas other EAs are closer to 'the most important thing is to hang out with other EAs and absorb the epistemic norms, results, and culture'.)

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T21:08:59.773Z · score: 3 (2 votes) · EA · GW
I didn't think people consistently recommended EA orgs over other options

Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like "you're the first one [or one of very few among many] who doesn't clearly recommend me to choose <EA org> over <some other good option>". It's good to know that this isn't what always happens.

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T20:01:17.637Z · score: 30 (15 votes) · EA · GW

A speculative thought I just had on one possible reason for why some people are overly focussed on EA jobs relative to e.g. the other options you list here:

  • Identifying one's highest-impact career option is quite challenging, and there is no way to easily conclusively verify a candidate answer.
  • Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
  • At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org usually is among the top options for people who are a good fit.
  • However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether these alternatives are competitive (in terms of expected impact) with an 'EA job', or indeed good ideas at all. For example, someone who I believe many people consult for career advice discouraged me from 'training up as a cybersecurity expert' - an option I had brought up (and, according to my own impression, still consider attractive) - at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements e.g. about how much ML experience is essential/useful for AI safety and strategy).
  • As a result, people will often be faced with a distribution of views similar to: 'Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, one or two even think it's better for me specifically, but a significant minority thinks it's useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn't comment on it.' Perhaps if all of these opinions had been conveyed with maximal reasoning transparency and one was extremely careful about aggregating the opinions this wouldn't be a problem. But in practice I think this often means that 'apply to <EA org>' seems like the top option, at least in terms of psychological pull.
  • (Another contributing factor to the large number of applications to EA jobs, perhaps less so for how it affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs - at least that's my impression. As a result, they cannot offer reliable guidance people can use to decide if they're a good fit apart from applying.)
Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T19:43:48.766Z · score: 31 (11 votes) · EA · GW

In a nutshell, I'm worried that the people would not find the options you list exciting from their perspective, and instead would perceive not working in one of the 20 most competitive jobs at explicitly EA-motivated employers as some kind of personal shortcoming, hence the frustration.

I think the OP is evidence that this can happen, e.g. because the author reports that

this is the message I felt I was getting from the EA community:
“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”

Note that I agree with you that in fact "[t]here are lots of exciting things for new EAs" including the options you've listed. However, even given this considered belief of mine, I think I was overly focussed on 'EA jobs' in a way that negatively affected my well-being.

Even accounting for my guess that I'm unusually susceptible to such psychological effects (though not extremely so; my crude guess would be '80th to 99th percentile'), I'd expect some others to be similarly affected even if they agree - as I do - about the impact of less competitive options.

Perhaps with "the kind of thing described in the original post" you meant specifically refer to the issue 'people spend a lot of time applying for EA jobs'. Certainly a lot of the information in the OP and in one of my comments was about this. In that case I'd like to clarify that it's not the time cost itself that's the main cause of effects (i)-(iii) I described in the parent. In fact I somewhat regret to have contributed to the whole discussion perhaps being focused on time costs by providing more data exclusively about this. The core problem as I see it is how the OP, I, and I believe many others, think about and are psychologically affected by the current EA job market and the surrounding messaging. The objective market conditions (e.g. number of applicants for jobs) contribute to this, as do many aspects of messaging by EA orgs and EAs, as do things that have nothing to do with EA at all (e.g. people's degree of neuroticism and other personality traits). I don't have a strong view on which of these contributing factors is the best place to intervene.

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T11:22:52.227Z · score: 18 (16 votes) · EA · GW

I think there are at least two effects where the world loses impact: (i) People in less privileged positions not applying for EA jobs; sometimes one of these would actually have been the best candidate. (ii) More speculatively (in the sense that I can't point to a specific example, though my prior is this effect is very likely to be non-zero), people in less privileged positions might realize that it's not possible for them to apply for many of the roles they perceived to be described as highest-impact and this might reduce their EA motivation/dedication in general, and make them feel unwelcome in the community.

I emphatically agree that their taking another potentially impactful job is positive. In fact, as I said in another comment, I wish there were more attention on, and support for, identifying and promoting such jobs.

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T00:01:59.802Z · score: 51 (32 votes) · EA · GW

One thing that might be worth noting: I was only able to invest that many resources because of things like (i) having had an initial runway of more than $10,000 (a significant fraction of which I basically 'inherited' / was given to me for things like academic excellence that weren't very effortful for me), (ii) having a good enough relationship with my sufficiently well-off parents that moving back in with them was always a safe backup option, and (iii) having access to various other forms of social support (that came with real costs for several underemployed or otherwise struggling people in my network).

I do think current conditions mean that we 'lose' more people in less comfortable positions than we otherwise would.

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T22:41:55.322Z · score: 57 (26 votes) · EA · GW

Some related half-baked thoughts:

[Epistemic status: I appreciate that there are people who've thought about the EA talent landscape systematically and have access to more comprehensive information, e.g. perhaps some people at 80K or people doing recruiting for major EA orgs. I would therefore place significantly more weight on their impressions. I'm not one of these people. My thoughts are based on (i) having talked 10-100 hours with other EAs about related things over the last year, mostly in a non-focussed way, (ii) having worked full-time for 2 EA organizations (3 if one counts a 6-week internship), (iii) having hired 1-5 people for various projects at the Effective Altruism Foundation, (iv) having spent about 220h on changing my career last year, see another comment. I first heard of EA around October 2015, and have been involved in the community since April 2016. Most of that time I spent in Berlin, then over last summer and since October in Oxford.]

  • I echo the impression that several people I've talked to - including myself - were or are overly focussed on finding a job at a major EA org. This applies both to time spent and number of applications submitted, and to fuzzier notions such as how much status or success is associated with these roles. I'm less sure whether I disagree with these people about the actual impact of 'EA jobs' vs. the next best option, but it's at least plausible to me that (relative to my own impression) some of them overvalue the relative impact of 'EA jobs'. E.g. my own guess is that a machine learning graduate course is competitive with most 'EA jobs' one could do well without such an education. [I think this last belief of mine is somewhat unusual and at least some very thoughtful people in EA disagree with me about it.]
  • I think several people were in fact too optimistic about getting an 'EA job'. It's plausible they could have accessed information (e.g. do a Fermi estimate of how many people will apply for a role) that would have made them more pessimistic, but I'm not sure.
  • I know at least 2 people who unsuccessfully applied to a large number of 'EA jobs'. (I'm aware there are many more.) I feel confident that they have several highly impressive relevant skills, e.g. because I've seen some of their writing and/or their CVs. I'm aware I don't know the full distribution of their relevant skills, and that the people who made the hiring decisions are in a much better position to make them than I am. I'm still left with a subjective sense of "wow, these people are really impressive, and I find it surprising that they could not find a job". This contributes to (i) me feeling more pressure to perform well in, and more doubt about the counterfactual impact of, my current role, because I have a visceral sense that 'the next best candidate would have been about as good as I or better' / 'it would in some sense be tragic or unfair if I don't perform well' (these aren't endorsed beliefs, but they still affect me); (ii) me being more reluctant to introduce new people to the EA community because I don't want them to have frustrating experiences; (iii) me being worried that some of my friends and other community members will have frustrating experiences [which costs attention and life satisfaction, but also sometimes time, e.g. when talking with someone about their frustration - as an aside, I'd guess that the burden of emotional labor of the latter kind is disproportionately shouldered by relatively junior women in the community]. (None of these effects are very large. I don't want to make this sound more dramatic than it is, but overall I think there are non-negligible costs even for someone like me who got one of the competitive jobs.)
  • I agree that identifying and promoting impactful roles outside of EA orgs may be both helpful for the 'EA job market' and impactful independently. I really like that the 80K job board sometimes includes such roles. I wonder if there is a diffusion of responsibility problem where identifying such jobs is no-one's main goal and therefore doesn't get done even if it would be valuable. [I also appreciate that this is really hard and costs a lot of time, and what I perceive to be 80K's strategy on this, i.e. focussing on in-depth exploration of particularly valuable paths such as US AI policy, seems on the right track to me.]
  • I think communication around this is really hard in general, and particularly tricky for people like me and most EAs who are young and have little experience with similar situations. I also think there are some unavoidable trade-offs between causing frustration and increasing the expected quality of applicants for important roles. I applaud 80K for having listened to concerns around this in the past and having taken steps such as publishing a clarifying article on 'talent constraints'. I think as a community we can still do better, but I'm optimistic that the relevant actors will be able to do so and certain that they have good intentions. I've seen EA leaders have valuable and important conversations around this, but it's not quite clear to me if anyone in particular 'owns' optimizing the EA talent landscape at large, and so I again wonder if there is a diffusion of responsibility issue that prevents 'easy wins' such as better data/feedback collection from getting done (while also being open to the possibility that 'optimizing the EA talent landscape' is too broad or fuzzy for one person to focus on).

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T21:52:02.983Z · score: 94 (53 votes) · EA · GW

Adding some more data from my own experience last year.

Personally, I'm glad about some aspects of it and struggled with others, and there are some things I wish I had done differently, at least in hindsight. But here I just mean to quickly provide data I have collected anyway in a 'neutral' way, without implying anything about any particular application.

Total time I spent on 'career change' in 2018: at least 220h, of which at least about 101h were for specific applications. (The rest were things like: researching job and PhD opportunities; interviewing people about their jobs and PhD programs; asking people I've worked with for input and feedback; reflection before I decided in January to quit my previous job at the EA Foundation by April.) This includes neither the week I spent in San Francisco to attend EAG SF, during which I was able to do little other work, nor 250h of self-study that seems robustly useful but which I might not have done otherwise. (Nor does it include the 6 full weeks, plus about 20h afterwards, I spent doing an internship at an EA org, which overall I'm glad I did but might not have done otherwise.)

  • Open Phil Research Analyst - rejected after conversation notes test - 16h [edit: worth noting that they offered compensation for the time spent on the trial task]
  • OpenAI Fellows program - after more than 6 months got a rejection email encouraging me to apply again within the next 12 months - 5h [plus 175h studying machine learning including 46h on a project I tried to do specifically for that application - I count none of this as application cost because I think it was quite robustly useful]
  • BERI project manager application - rejected immediately (the email was ambiguous between a regular desk reject and them actually not hiring at all for that role for now) - 1h
  • Travelling to EAG SF from Germany to get advice on my career and find out about jobs - ~1 full week plus something between USD 1,000 and 5,000, which was between 10% and 50% of my liquid runway
  • CEA Summer Research Fellowship [NB this was a 6-week internship, not a full-time role] - got an offer and accepted - 4.5h
  • 2nd AI safety camp (October) [NB the core of this was a 1-2 week event organized by 'grassroots' efforts, and nothing that comes with funding above covering expenses] - got an offer and accepted - 1.2h
  • FHI Research Scholars Programme - got an offer and accepted [this is what I'm doing currently] - 30h
  • AI Impacts researcher - withdrew my application after the 1st interview because I accepted the FHI RSP offer - 44h [NB this was because I 'unilaterally' spent way more time to create a work sample than anyone had asked me to do, and in a quite inefficient way; I think one could have done an application in 1-5h if one had had a shovel-ready work sample. Again I'm excluding an additional 64h of teaching myself basic data processing and visualization skills with R because I think they are robustly useful.]

[I did manual time tracking so there might be some underestimation, with the error varying a lot between applications. A systematic error is that I never logged time spent in job interviews, but this is overall negligible.]
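
(As a sanity check, here's a minimal sketch of how the per-application figures above add up to the 'at least about 101h' total; it just re-tallies the list, nothing new:)

```python
# Per-application hours from the list above; the EAG SF trip is excluded
# because it was tracked in weeks rather than hours.
hours = {
    "Open Phil Research Analyst": 16,
    "OpenAI Fellows program": 5,
    "BERI project manager": 1,
    "CEA Summer Research Fellowship": 4.5,
    "2nd AI safety camp": 1.2,
    "FHI Research Scholars Programme": 30,
    "AI Impacts researcher": 44,
}
print(sum(hours.values()))  # 101.7, consistent with 'at least about 101h'
```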

(I feel slightly nervous about sharing this. But I think the chance that it contributes to identifying if there are valuable changes to make in the overall talent/job landscape and messaging is well worth the expected cost; and also that as someone with a fixed-term but full-time job at an EA org I'm well-positioned to take some risks.)

Comment by max_daniel on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T21:05:36.211Z · score: 36 (18 votes) · EA · GW

Thank you, and respect for having written this. I really appreciate it - particularly your openness about how mentally challenging this was for you, and the concrete data on time costs you provide.

Comment by max_daniel on Near-term focus, robustness, and flow-through effects · 2019-02-08T23:32:50.078Z · score: 5 (4 votes) · EA · GW

If I recall correctly, this paper by Tom Sittler also makes, among others, the point you paraphrased as "some reasonable base rate of x-risk means that the expected lifespan of human civilization conditional on solving a particular risk is still hundreds or thousands of years".
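
(For intuition, a toy version of that point - my own sketch, not taken from the paper: if the remaining, unsolved risks impose a constant annual probability p of existential catastrophe, the time until catastrophe is geometrically distributed with expectation 1/p years, so even modest base rates keep the expected lifespan in the hundreds or low thousands of years.)

```python
# Toy calculation (not from the paper): a constant annual existential risk p
# from the remaining risks implies an expected future lifespan of 1/p years.
for p in [0.01, 0.002, 0.001]:
    print(f"annual risk {p:.1%} -> expected future lifespan ~{1 / p:,.0f} years")
# Even a 0.1% annual background risk caps the expectation at ~1,000 years.
```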

Comment by max_daniel on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-23T19:10:54.295Z · score: 1 (1 votes) · EA · GW

Ok, thanks, I now say "Prove that a certain nonrandom, non-Bayesian ...".

Comment by max_daniel on Some cruxes on impactful alternatives to AI policy work · 2018-11-22T23:20:55.199Z · score: 10 (8 votes) · EA · GW

Thank you for posting this - I find it very interesting and useful to have discussions of this kind publicly available!

For now just one point, even though I don't think it matters much for the high-level disagreement (in particular, I probably still disagree with Ben's view on the impact of Google, Wikipedia etc.):

I don't think current IT has had much of an effect by standard metrics of labour productivity, for example.

The context makes me think that maybe by "current IT" you specifically mean things like Facebook or Twitter that became big in the last 10 years. In that case, for all I know, the quoted claim may well be correct. I'm less sure the claim holds if "current IT" includes e.g. the internet: I believe a prominent view in economics is that IT was a major cause of the resurgence in US productivity growth from the 1990s to the mid-2000s. For example:

  • David Romer's popular textbook Advanced Macroeconomics (4th ed., p. 32) says:
Until the mid-1990s, the rapid technological progress in computers and their introduction in many sectors of the economy appear to have had little impact on aggregate productivity. In part, this was simply because computers, although spreading rapidly, were still only a small fraction of the overall capital stock. And in part, it was because the adoption of the new technologies involved substantial adjustment costs. The growth-accounting studies find, however, that since the mid-1990s, computers and other forms of information technology have had a large impact on aggregate productivity.
  • Gordon (2014, p. 6), who in general argues against techno-optimists and predicts a growth slowdown, describes 1996-2004 as "the productivity revival associated with the invention of e-mail, the internet, the web, and e-commerce".

More broadly, the sense I got from the literature is that many people would be comfortable endorsing claims like (i) innovation has been and still is a major driver of productivity growth (say, responsible for >10% of productivity growth), and (ii) within the last 10 years a significant share of innovation (weighted by impact on productivity, say again >10% of the effect) has happened in IT. (Admittedly, the arguments behind such claims often seemed a bit handwavy to me and not as data-driven as I'd like.) So even if productivity growth has slowed down considerably and will remain low, IT would be responsible for a significant part of what little growth we have, and the absolute effect would be no more than about an order of magnitude smaller than typical effects of technology on productivity.
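
(To make the combination of (i) and (ii) explicit, here's a back-of-the-envelope sketch; the growth rate is an illustrative assumption of mine, and the 10% figures are just the loose lower bounds from above:)

```python
# Back-of-the-envelope combination of claims (i) and (ii).
productivity_growth = 0.015      # assumed ~1.5% annual labour productivity growth
innovation_share = 0.10          # claim (i): innovation drives >10% of that growth
it_share_of_innovation = 0.10    # claim (ii): IT accounts for >10% of innovation's effect

it_contribution = productivity_growth * innovation_share * it_share_of_innovation
print(f"IT contributes at least ~{it_contribution * 100:.3f} percentage points per year")
# i.e. >= ~0.015 percentage points of annual growth under these conservative bounds
```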

I think all of this is consistent with e.g. the views that IT has increased productivity less than past innovations such as the steam engine, or that most people overestimate the effect of IT. I'd also guess it's consistent with Cowen's and Thiel's views, but I haven't read the books by them that you mentioned.

(I said "a prominent view" because I don't have a good sense of whether it's a majority view. In particular, I wasn't able to find a relevant IGM Forum survey of economists. My overall impression is based on having engaged on the order of 10 hours with the relevant literature, albeit in an only moderately systematic way, and I don't have a background in economics. I think there's a good chance you're aware of the above points, and I'm partly writing this comments to see if you or someone else can spot a flaw in my current impression.)

Comment by max_daniel on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-22T18:57:26.699Z · score: 2 (2 votes) · EA · GW

On your first point: I agree that the paper just shows that, as you wrote, "if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option".

I also think that "just choose the option you (naively) expect to be best" is an example of a "nonrandom, non-Bayesian decision strategy". Now, the first sentence you quoted might reasonably be read to make the stronger claim that all nonrandom, non-Bayesian decision strategies have a certain property. However, the paper actually just shows that one of them does.
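
(To make this concrete, here's a minimal simulation of the effect the paper demonstrates for that strategy - my own sketch, not the paper's model: each option's value is estimated with unbiased noise, yet conditional on being selected, the winning estimate systematically exceeds the option's true value.)

```python
# Minimal simulation of the "optimizer's curse" for the naive strategy:
# pick the option with the highest noisy (but individually unbiased) estimate,
# then compare that estimate with the chosen option's true value.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_options = 100_000, 10

true_values = rng.normal(0, 1, size=(n_trials, n_options))
estimates = true_values + rng.normal(0, 1, size=(n_trials, n_options))  # unbiased noise

rows = np.arange(n_trials)
chosen = estimates.argmax(axis=1)
bias = (estimates[rows, chosen] - true_values[rows, chosen]).mean()
print(f"Average overestimate of the selected option: {bias:.2f}")  # clearly positive
```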

Is this what you were pointing to? If so, I'll edit the quoted sentence accordingly, but I first wanted to check if I understood you correctly.

In any case, thank you for your comment!

Comment by max_daniel on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-22T18:49:39.719Z · score: 2 (2 votes) · EA · GW

On your second point: I think you're right, and that's a great example. I've added a link to your comment to the post.

Comment by max_daniel on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-22T18:43:05.528Z · score: 2 (2 votes) · EA · GW

Hi Aaron, thank you for the suggestion. I agree that posting a more extensive summary would help readers decide if they should read the whole thing, and I will strongly consider doing so in case I ever plan to post similar things. For this specific post, I probably won't add a summary because my guess is that in this specific case the size of the beneficial effect doesn't justify the cost. (I do think extremely few people would use their time optimally by reading the post, mostly because it has no action-guiding conclusions and a low density of generally applicable insights.) I'm somewhat concerned that more people read this post than would be optimal just because there's some psychological pull toward reading whatever you clicked on, and that I could reduce the amount of time spent suboptimally by having a shorter summary here, with accessing the full text requiring an additional click. However, my hunch is that this harmful effect is sufficiently small. (Also, the cost to me would be unusually high because I have a large ugh field around this project and would really like to avoid spending any more time on it.) But do let me know if you think replacing this text with a summary is clearly warranted, and thank you again for the suggestion!

Comment by max_daniel on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-18T21:56:41.682Z · score: 3 (2 votes) · EA · GW
do you have a vague impression of when randomisation might be a big win purely by reducing costs of evaluation?

Not really, I'm afraid. I'd expect that, due to the risk of inadvertent negative impacts and the large improvements available from weeding out obviously suboptimal options, a pure lottery will rarely be a good idea. How much effort to expend beyond weeding out clearly suboptimal options seems to me to depend on contextual information specific to the use case. I'm not sure how much there is to be said in general beyond platitudes along the lines of "invest time into explicit evaluation until the marginal value of information has diminished sufficiently".
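
(As a toy illustration of that context-dependence - my own sketch, with arbitrary assumed numbers rather than anything from the review - the ranking of 'pure lottery', 'screen then randomize', and 'full evaluation' flips as the per-option cost of evaluation grows:)

```python
# Toy comparison of three funding strategies over options with heavy-tailed
# true values and noisy evaluations; all parameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_options = 50_000, 20
rows = np.arange(n_trials)

true = rng.lognormal(0, 1, size=(n_trials, n_options))            # heavy-tailed true values
noisy = true * rng.lognormal(0, 0.5, size=(n_trials, n_options))  # noisy evaluations

# Average true value obtained by each strategy (before subtracting evaluation costs):
lottery = true[rows, rng.integers(0, n_options, n_trials)].mean()          # pick at random

top_half = np.argsort(noisy, axis=1)[:, n_options // 2:]                   # drop noisy bottom half
screened = true[rows, top_half[rows, rng.integers(0, n_options // 2, n_trials)]].mean()

full = true[rows, noisy.argmax(axis=1)].mean()                             # pick best noisy estimate

for cost in [0.01, 0.1, 1.0]:  # assumed evaluation cost per option, in value units
    print(f"cost/option={cost:4.2f}  "
          f"lottery={lottery:.2f}  "
          f"screen+randomize={screened - cost / 2 * n_options:.2f}  "  # screening billed at half cost
          f"full eval={full - cost * n_options:.2f}")
```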