Posts

Notes on 'Atomic Obsession' (2009) 2019-10-26T00:30:21.491Z · score: 61 (23 votes)
Information security careers for GCR reduction 2019-06-20T23:56:58.275Z · score: 154 (63 votes)
Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood 2017-06-28T15:49:11.655Z · score: 18 (19 votes)
Meetup : GiveWell research event for Bay Area effective altruists! 2015-06-30T01:22:05.169Z · score: 0 (0 votes)

Comments

Comment by lukeprog on The EA Meta Fund is now the EA Infrastructure Fund · 2020-08-20T16:17:39.948Z · score: 21 (12 votes) · EA · GW

I like the new name; solid choice.

Comment by lukeprog on Informational Lobbying: Theory and Effectiveness · 2020-08-03T12:10:25.052Z · score: 5 (3 votes) · EA · GW

Thanks for this!

FWIW, I'd love to see a follow-up review on lobbying Executive Branch agencies. They're less powerful than Congress, but often easier to influence, and can sometimes be the most relevant target of lobbying if you're aiming for a very specific goal (one that is too "in the weeds" to be addressed directly in legislation). I found Godwin et al. (2012) helpful here, but I haven't read much else. Interestingly, Godwin et al. find that some of the conclusions from Baumgartner et al. (2009) about Congressional lobbying don't hold for agency lobbying.

Comment by lukeprog on Forecasting Newsletter: May 2020. · 2020-05-31T18:33:35.044Z · score: 26 (7 votes) · EA · GW

Thanks!

Some additional recent stuff I found interesting:

  • This summary of US and UK policies for communicating probability in intelligence reports.
  • Apparently Niall Ferguson’s consulting firm makes & checks some quantified forecasts every year: “So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months, either “true”, “false” or “not proven”. In recent years, we have also forced ourselves to attach probabilities to our predictions — not easy when so much lies in the realm of uncertainty rather than calculable risk. We have, in short, tried to be superforecasters.”
  • Review of some failed long-term space forecasts by Carl Shulman.
  • Some early promising results from DARPA SCORE.
  • Assessing Kurzweil predictions about 2019: the results
  • Bias, Information, Noise: The BIN Model of Forecasting is a pretty interesting result if it holds up. Another explanation by Mauboussin here. Supposedly this is what Kahneman's next book will be about; HBR preview here.
  • GJP2 is now recruiting forecasters.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-05-06T19:34:30.403Z · score: 9 (3 votes) · EA · GW

The headline looks broken in my browser. It looks like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The last explicit probabilistic prediction I made was probably a series of forecasts on my most recent internal Open Phil grant writeup, since it's part of our internal writeup template to prompt the grant investigator for explicit probabilistic forecasts about the grant. But it could've easily been elsewhere; I do somewhat often make probabilistic forecasts just in conversation, or in GDoc/Slack comments. For those I usually spend less time pinning down a totally precise formulation of the forecasting statement, since it's more about quickly indicating to others roughly what my views are than about establishing my calibration across a large number of precisely stated forecasts.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-05-01T17:52:49.915Z · score: 2 (1 votes) · EA · GW

Note that the headline ("Good Judgement Project: gjopen.com") is still confusing, since it seems to be saying GJP = GJO. The thing that ties the items under that headline is that they are all projects of GJI. Also, "Of the questions which have been added recently" is misleading since it seems to be about the previous paragraph (the superforecasters-only questions), but in fact all the links go to GJO.

Comment by lukeprog on Forecasting Newsletter: April 2020 · 2020-04-30T22:46:37.148Z · score: 14 (8 votes) · EA · GW

Nice to see a newsletter on this topic!

Clarification: The GJO coronavirus questions are not funded by Open Phil. The thing funded by Open Phil is this dashboard (linked from our blog post) put together by Good Judgment Inc. (GJI), which runs both GJO (where anyone can sign up and make forecasts) and their Superforecaster Analytics service (where only superforecasters can make forecasts). The dashboard Open Phil funded uses the Superforecaster Analytics service, not GJO. Also, I don't think Tetlock is involved in GJO (or GJI in general) much at all these days, but GJI is indeed the commercial spinoff from the Good Judgment Project (GJP) that Tetlock & Mellers led and which won the IARPA ACE forecasting competition and resulted in the research covered in Tetlock's book Superforecasting.

Comment by lukeprog on Insomnia with an EA lens: Bigger than malaria? · 2020-04-21T16:10:30.794Z · score: 9 (4 votes) · EA · GW

I wrote up some thoughts on CBT-I and the evidence base behind it here.

Comment by lukeprog on Information security careers for GCR reduction · 2019-12-31T22:44:04.006Z · score: 2 (1 votes) · EA · GW

Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts' objections to typical GCR concerns of EAs?

Comment by lukeprog on Rethink Priorities 2019 Impact and Strategy · 2019-12-03T02:44:45.181Z · score: 51 (22 votes) · EA · GW

FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn't vet any of it in depth. (And prior to 2019 I think I wasn't aware of Rethink.)

Comment by lukeprog on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T22:41:18.029Z · score: 27 (14 votes) · EA · GW

FWIW, it's not clear to me that AI alignment folks with different agendas have put less effort into (or have made less progress on) understanding the motivations for other agendas than is typical in other somewhat-analogous fields. Like, MIRI leadership and Paul have put >25 (and maybe >100, over the years?) hours into arguing about the merits of their differing agendas (in person, on the web, in GDocs comments), and my impression is that central participants to those conversations (e.g. Paul, Eliezer, Nate) can pass the others' ideological Turing tests reasonably well on a fair number of sub-questions and down 1-3 levels of "depth" (depending on the sub-question), and that might be more effort and better ITT performance than is typical for "research agenda motivation disagreements" in small niche fields that are comparable on some other dimensions.

Comment by lukeprog on Opinion: Estimating Invertebrate Sentience · 2019-11-07T17:06:38.592Z · score: 19 (12 votes) · EA · GW

Interesting stuff, thanks!

Comment by lukeprog on Notes on 'Atomic Obsession' (2009) · 2019-11-04T04:42:11.151Z · score: 3 (2 votes) · EA · GW

Thanks!

Comment by lukeprog on Information security careers for GCR reduction · 2019-10-15T22:48:13.619Z · score: 5 (3 votes) · EA · GW

Awesome, thanks for letting us know!

Comment by lukeprog on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-14T00:59:41.447Z · score: 6 (3 votes) · EA · GW

Thanks! I knew there was one major study I was missing from the 70s, and that I had emailed people about before, but I couldn't track it down when I was writing this post, and I'm pretty sure this is the one I was thinking of. Of course, this study suffers from several of the problems I list in the post.

Comment by lukeprog on What to know before talking with journalists about EA · 2019-09-05T03:53:19.766Z · score: 7 (4 votes) · EA · GW

The linked "full guide" seem to require sign-in?

Comment by lukeprog on AI Forecasting Question Database (Forecasting infrastructure, part 3) · 2019-09-04T02:58:31.374Z · score: 5 (4 votes) · EA · GW

Nice work!

Comment by lukeprog on Are we living at the most influential time in history? · 2019-09-04T02:48:39.940Z · score: 30 (13 votes) · EA · GW

Great post!

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

Minor point, but I think this is unclear. On AI see e.g. here. On synbio I'm less familiar, but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-15T20:53:44.307Z · score: 8 (3 votes) · EA · GW

On the difference between the role we've tried to hire for at Open Phil specifically and a typical Security Analyst or Security Officer role, a few things come to mind, though we also think we don't yet have a great sense of the range of security roles throughout the field. One possible difference is that many security roles focus on security systems for a single organization, whereas we've primarily looked for someone who could help both Open Phil and some of our grantees, each of which has potentially quite different needs. Another possible difference is that our GCR focus in AI and biosecurity leads us to some non-standard threat models, and it has been difficult thus far for us to find experienced security experts who readily adapt standard security thinking to a somewhat different set of threat models.

Re: industry roles that would be particularly good or bad preparation. My guess is that for the GCR-mitigating roles we discuss above (i.e. not just potential future roles at Open Phil), the roles that provide better preparation will tend to (a) expose one to many different types of challenges, and different aspects of those challenges, rather than being very narrowly scoped, (b) involve threat modeling of, and defense from, very capable and well-resourced attackers, and (c) require some development of novel solutions (not necessarily new crypto research; could also just be new configurations of interacting hardware/software systems and user behavior policies and training), among other things.

Comment by lukeprog on Invertebrate Welfare Cause Profile · 2019-07-15T01:37:28.591Z · score: 9 (6 votes) · EA · GW

On factors of potential relevance to moral weight, including some that could intuitively upweight many invertebrates, see also my Preliminary thoughts on moral weight.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:22:26.084Z · score: 1 (1 votes) · EA · GW

Presumably, though I know very little about that and don't know how much value would be added there by someone focused on worst case scenarios (over their replacement).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:20:56.755Z · score: 10 (4 votes) · EA · GW

This all sounds right to me, though I think some people have different views, and I'm hardly an expert. Speaking for myself at least, the things you point to are roughly why I wanted the "maybe" in front of "relevant roles in government." Though one added benefit of doing security in government is that, at least if you get a strong security clearance, you might learn classified helpful things about e.g. repelling state-originating APTs.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-03T03:31:01.714Z · score: 4 (3 votes) · EA · GW

Yeah, something closer to the former.

Comment by lukeprog on How likely is a nuclear exchange between the US and Russia? · 2019-07-02T01:14:56.852Z · score: 5 (3 votes) · EA · GW

Very minor: "GJP" should be "GJI." Good Judgment Project ended with the end of the IARPA ACE tournaments. The company that pays superforecasters from that project to continue making forecasts for clients is Good Judgment Inc.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:44.025Z · score: 8 (3 votes) · EA · GW

I think we meant a bit of (b) and (c) but especially (a).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:28.453Z · score: 14 (4 votes) · EA · GW

IIRC the main concern in the earlier conversations was about how many high-impact roles of this type there might really be in the next couple decades. Probably the number is smaller than (e.g.) the number of similarly high-impact "AI policy" roles, but (as our post says) we think the number of high-impact roles of this type will be substantial. And given how few GCR-focused people there are in general, and how few of them are likely a personal fit for this kind of career path anyway, it might well be that even if many of the people who are a good fit for this path pursue it, that would still not be enough to meet expected need in the next couple decades.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:10.918Z · score: 16 (9 votes) · EA · GW

The key roles we have in mind are a bit closer to what is sometimes called "security officer," i.e. someone who can think through (novel, GCR-focused) threat models, plausibly involving targeted state-based attacks, develop partly-custom system and software solutions that are a match to those threat models, think through and gather user feedback about tradeoffs between convenience and security of those solutions, develop and perhaps deliver appropriate training for those users, etc. Some of this might include things like "protect some unusual configuration of AWS services," but I imagine that might also be something that the security officer is able to outsource. We’ve tried working with a few security consultants, and it hasn’t met our needs so far.

Projects like "develop novel cryptographic methods" might also be useful in some cases — see my bullet points on research (rather than implementation) applications of security expertise in the context of AI — but they aren't the modal use-case we're thinking of.

But also, we haven't studied this potential career path to the level of depth that (e.g.) 80,000 Hours typically does when developing a career profile, so we have more uncertainty about many of the details here even than is typically represented in an 80,000 Hours career profile.

Comment by lukeprog on Who are the people that most publicly predicted we'd have AGI by now? Have they published any kind of retrospective, and updated their views? · 2019-06-29T18:35:17.032Z · score: 5 (4 votes) · EA · GW

You could dive into the specific examples in the spreadsheet linked here (the MIRI AI predictions dataset).

Comment by lukeprog on Invertebrate Sentience: A Useful Empirical Resource · 2019-06-10T02:20:30.954Z · score: 18 (10 votes) · EA · GW

Exciting — I look forward to the rest! Y'all might want to consider writing a target article (summarizing your findings) for Animal Sentience; I suspect Harnad would be interested.

Comment by lukeprog on Which scientific discovery was most ahead of its time? · 2019-05-16T14:34:13.589Z · score: 22 (10 votes) · EA · GW

Cases where the scientific knowledge was actually lost and then rediscovered much later provide especially strong evidence w.r.t. the discovery counterfactuals. E.g. Hero's eolipile or al-Kindi's development of relative frequency analysis for decoding messages. Probably there are far more cases of this than we realize, because the evidence that someone somewhere once had the knowledge and then lost it has itself been lost; e.g. we could easily have just never rediscovered the Antikythera mechanism.

Comment by lukeprog on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-07T05:23:25.081Z · score: 2 (2 votes) · EA · GW

I looked into this a bit. Unfortunately the quality of evidence in sleep medicine was underwhelming, e.g. on behavioral treatments.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-08T01:31:59.312Z · score: 29 (14 votes) · EA · GW

BTW my "reflections on the 2018 RA recruiting round" post is now up, here.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T04:22:00.496Z · score: 31 (11 votes) · EA · GW

I agree that if it's true that "many EAs feel that either they're working at a top EA org or they're not contributing much," then that is much worse than anything about application time cost and urgently needs to be fixed. I've never felt that way about EA org work vs. alternatives, so I may have just missed that this is a message many people are getting.

E.g. Scott's post also says:

Should also acknowledge the possibility that "talent-constrained” means the world needs more clean meat researchers, malaria vaccine scientists, and AI programmers, and not just generic high-qualification people applying to EA organizations. This wasn’t how I understood the term but it would make sense.

…and my reply is "Yes, talent-constrained also means those other things, and it's a big problem if that was unclear to a noticeable fraction of the community."

FWIW I suspect there's also something a bit more subtle going on than overly narrow misunderstandings of "talent-constrained," e.g. something like Max Daniel's hypothesis.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T21:49:21.997Z · score: 10 (4 votes) · EA · GW

Thanks, that all makes sense to me. Will think more about this. Also still curious to hear replies from others here.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:09:22.891Z · score: 11 (6 votes) · EA · GW

Thanks for +1ing the above comment. I'd be keen to hear your reply to this comment, too.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:07:41.472Z · score: 23 (11 votes) · EA · GW

Thanks for sharing. As someone who spends a lot of time trying to fill EA meta/longtermist talent gaps — e.g. by managing Open Phil RA recruiting, helping to match the strongest applicants we don't hire to other openings, and by working on field-building in AI strategy/policy (e.g. CSET) — hearing stories like yours is unnerving.

What changes to the landscape, or hiring processes, or whatever, do you think would've made the most difference in your case?

I'm also curious to hear your reaction to my comment elsewhere about available paths:

There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T22:09:24.154Z · score: 68 (31 votes) · EA · GW

Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn't clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it's good news that the EA community has grown such that these particular high-impact jobs can attract talent sufficient for them to be so competitive, it's unfortunate that this change wasn't clearer sooner, and posts like this one help with that, albeit not soon enough to help mitigate your own 1.5 years of suffering.

Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don't want to miss out on great people just because they don't have enough runway/etc. to interact with our process.

FWIW, I found some of your comments about "elite culture" surprising. For context: I grew up in rural Minnesota, then dropped out of counseling psychology undergrad at the University of Minnesota, then worked at a 6-person computer repair shop in Glendale, CA. Only in the past few years have I begun to somewhat regularly interact with many people from e.g. top schools and top tech companies. There are aspects of interacting with such "elites" that I've had to learn on the fly and to some degree am still not great at, but in my experience the culture in those circles is still pretty different from the culture at major EA-motivated employers, even though many of the staff at EA-motivated employers are now people who e.g. graduated from schools like Oxford or Harvard. For example, it's not my experience that people at major EA organizations are as effusively positive as many people in non-EA "elite" circles are. In fact, I would've described the culture at the EA organizations I interact with the most in sorta opposite terms, in that it's hard to get them excited about things. E.g. if you tell one of my Open Phil RA colleagues about a new study in Nature on some topic they care about, a pretty common reaction is to shrug and say "Yeah but who knows if it's true; most of the time we dig into a top-journal study, it completely falls apart." Or if you tell people at most EA orgs about a cool-sounding global health or poverty-reduction intervention, they'll probably say "Could be interesting, but very low chance it'll end up looking as cost-effective as AMF or even GiveDirectly upon further investigation, so: meh." Also, EA-motivated employers are generally not as "credentialist," in my experience, as most "elite" employers (perhaps except for tech companies).

Finally, re: "you never know for sure if it's not just perfect meritocracy correctly filtering [certain people out]." I can't speak to your case in particular, but at least w.r.t. Open Phil's RA recruiting efforts (which I've been managing since early 2018), I think I am sure it's not a perfect meritocracy. We think our application process probably has a high false negative rate (i.e. rejecting people who are actually strong fits, or would be with 3mo of training), and it's just very difficult to reduce the false negative rate without also greatly increasing the false positive rate. Just to make this more concrete: in our 2018 RA hiring round, if somebody scored really well on our stage-3 work test, we typically thought "Okay, decent chance this person is a good fit," but when somebody scored medium/low on it, we often threw up our hands and said "No clue if this person is a good fit or not, there are lots of reasons they could've scored poorly without actually being a poor fit, I guess we just don't get to know either way without us and them paying infeasibly huge time costs." (So why not just improve that aspect of our work tests? We're trying, e.g. by contracting several "work test testers," but it's harder than one might think, at least for such ill-defined "generalist" roles.)

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T23:48:23.798Z · score: 12 (6 votes) · EA · GW

Sounds plausible. E.g. I'm pro "train up as a cybersecurity expert" but I know others have advised against.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T23:47:27.261Z · score: 45 (22 votes) · EA · GW

Thanks for sharing your comment about personalized invitations, that's interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as "high chance you'll get the job if you apply," or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T21:23:02.381Z · score: 27 (12 votes) · EA · GW

Sorry to hear how much misery you've experienced. I'm curious to ask a follow-up question, but feel free to ignore if you aren't comfortable answering.

In particular, I'm wondering whether "make [EA] my career" feels ~identical (to you) to "work at a handful of explicitly EA-motivated employers." If it does, then maybe the messaging or energy or something in the EA community is pretty far from what I think it should be, which is more like what I said in another comment:

There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T21:22:26.249Z · score: 10 (4 votes) · EA · GW

Nearly all of Open Phil's RA hiring process is focused on assessing someone's immediate fit for the kind of work we do (via the remote work tests), not (other types of) fit with the team and mission.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:47:11.475Z · score: 9 (5 votes) · EA · GW

Per Buck's comment, I think identifying software engineering talent is a pretty different problem than identifying e.g. someone who is already a good fit for Open Phil generalist RA roles.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:46:38.318Z · score: 26 (15 votes) · EA · GW

Thanks for mentioning the thing about the conversation notes test. It was simply an oversight to not explicitly say "Please don't spend more than X hours on this work test," and I've now added such a sentence to our latest draft of those work test instructions. We had explicit time limits for our other two tests.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:46:16.664Z · score: 48 (22 votes) · EA · GW

Not sure I follow the part about how the kind of thing described in the original post makes you "more reluctant to introduce new people into the EA community." There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:43:46.039Z · score: 32 (17 votes) · EA · GW

Yeah, this is one reason Open Phil pays people for doing our remote work tests, so that people who don't happen to have runway/similar can still go through our process. Possibly more EA orgs should do this if they aren't already.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T16:36:25.793Z · score: 62 (30 votes) · EA · GW

Oof, 8 weeks of effort to get 0/20 positions is pretty brutal. It's easy to see how that would feel like your "Hey you!…" paragraph. And while I suspect you're a bit of an outlier in time spent and positions applied for, I also think you're pointing at something true about the current situation re: job openings at EA-motivated employers, as evidenced by how many upvotes this post has gotten, some of the comments on this page, and the data I've got as a result of managing Open Phil's 2018 recruitment round of Research Analysts, during which we had to say "no" to tons of applicants with quite impressive resumes.

I've been writing up some reflections on that recruiting round, which I hope to share soon. One of my takeaways is something like "The base of talent out there is strong, and Open Phil's current ability to deploy it is weak." In that way we might be an extreme opposite of Teach for America, and I suspect many other EA-motivated orgs are as well.

Anyway, I plan to say more on these topics when I share my "reflections" post, but in the meantime I just want to say I'm sorry that you spent so much time applying to EA orgs and got no offers. Also, setting the time investment aside, it's just emotionally difficult to get an "Unfortunately, we've decided…" email, let alone receive 20 of them in a row.

A couple other random notes for now:
- A colleague of mine has heard some EAs — perhaps motivated by considerations like those in this post — saying stuff like "maybe I shouldn't even try to apply because I don't want to waste orgs' time." In case future potential Open Phil applicants end up reading this comment, let it be known that we don't think it's a waste of our time to process applications. If we don't have the staff capacity to process all the applications we receive, we can always just drop a larger fraction of applicants at each stage. But if someone never applies, we have no opportunity at all to figure out how good a fit they might be. Also, what we're looking for is pretty unclear (especially to potential applicants), and so e.g. some of our recent hires are people who told us they probably wouldn't have bothered applying if we hadn't proactively encouraged them to apply. Of course, an applicant could be worried about whether applying is worth their time, and that's a different matter.
- I think it would've been good to mention that some of these organizations pay applicants for some/all of the time they spend on the application process. (Hopefully Open Phil isn't the only one?)

Comment by lukeprog on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:59:15.656Z · score: 20 (7 votes) · EA · GW

Roughly how much time per month/year does each of the fund managers currently expect to spend on investigating, discussing, and deciding on grant opportunities?

Comment by lukeprog on Long-Term Future Fund AMA · 2018-12-20T13:58:44.808Z · score: 20 (8 votes) · EA · GW

Roughly how much time per month/year does each of the fund managers currently expect to spend on investigating, discussing, and deciding on grant opportunities?

Comment by lukeprog on Insomnia: a promising cure · 2018-11-21T22:33:30.309Z · score: 3 (7 votes) · EA · GW

I disagree about the strength of evidence for CBT-I effectiveness.

Comment by lukeprog on Relieving extreme physical pain in humans – an opportunity for effective funding · 2018-10-16T04:51:37.330Z · score: 11 (7 votes) · EA · GW

Open Phil grant on chronic pain.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-23T21:01:43.745Z · score: 1 (1 votes) · EA · GW

Somebody asked the following question:

I understand that OP’s preference is to have promising candidates attend an in-person work trial rather than do an additional remote work test; would this preference still stand if the candidate in question has to obtain a US work visa sponsorship in order to attend the in-person trial?

Our reply is: Yes, that preference stands regardless of current work authorization status, though of course in some cases there won't be any way for us to help an applicant get US work authorization, depending on their situation.