Posts

Notes on 'Atomic Obsession' (2009) 2019-10-26T00:30:21.491Z · score: 60 (22 votes)
Information security careers for GCR reduction 2019-06-20T23:56:58.275Z · score: 147 (58 votes)
Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood 2017-06-28T15:49:11.655Z · score: 16 (18 votes)
Meetup : GiveWell research event for Bay Area effective altruists! 2015-06-30T01:22:05.169Z · score: 0 (0 votes)

Comments

Comment by lukeprog on Opinion: Estimating Invertebrate Sentience · 2019-11-07T17:06:38.592Z · score: 20 (10 votes) · EA · GW

Interesting stuff, thanks!

Comment by lukeprog on Notes on 'Atomic Obsession' (2009) · 2019-11-04T04:42:11.151Z · score: 3 (2 votes) · EA · GW

Thanks!

Comment by lukeprog on Information security careers for GCR reduction · 2019-10-15T22:48:13.619Z · score: 5 (3 votes) · EA · GW

Awesome, thanks for letting us know!

Comment by lukeprog on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-14T00:59:41.447Z · score: 6 (3 votes) · EA · GW

Thanks! I knew there was one major study I was missing from the 70s, and that I had emailed people about before, but I couldn't track it down when I was writing this post, and I'm pretty sure this is the one I was thinking of. Of course, this study suffers from several of the problems I list in the post.

Comment by lukeprog on What to know before talking with journalists about EA · 2019-09-05T03:53:19.766Z · score: 7 (4 votes) · EA · GW

The linked "full guide" seems to require sign-in?

Comment by lukeprog on AI Forecasting Question Database (Forecasting infrastructure, part 3) · 2019-09-04T02:58:31.374Z · score: 5 (4 votes) · EA · GW

Nice work!

Comment by lukeprog on Are we living at the most influential time in history? · 2019-09-04T02:48:39.940Z · score: 29 (12 votes) · EA · GW

Great post!

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

Minor point, but I think this is unclear. On AI see e.g. here. On synbio I'm less familiar, but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-15T20:53:44.307Z · score: 8 (3 votes) · EA · GW

On the difference between the role we've tried to hire for at Open Phil specifically and a typical Security Analyst or Security Officer role, a few things come to mind, though we also think we don't yet have a great sense of the range of security roles throughout the field. One possible difference is that many security roles focus on security systems for a single organization, whereas we've primarily looked for someone who could help both Open Phil and some of our grantees, each of whom may have quite different needs. Another possible difference is that our GCR focus in AI and biosecurity leads us to some non-standard threat models, and it has been difficult thus far for us to find experienced security experts who readily adapt standard security thinking to a somewhat different set of threat models.

Re: industry roles that would be particularly good or bad preparation. My guess is that for the GCR-mitigating roles we discuss above (i.e. not just potential future roles at Open Phil), the roles that offer better preparation will tend to (a) expose one to many different types of challenges, and different aspects of those challenges, rather than being very narrowly scoped, (b) involve threat modeling of, and defense from, very capable and well-resourced attackers, and (c) require some development of novel solutions (not necessarily new crypto research; it could also just be new configurations of interacting hardware/software systems and user behavior policies and training), among other things.

Comment by lukeprog on Invertebrate Welfare Cause Profile · 2019-07-15T01:37:28.591Z · score: 7 (5 votes) · EA · GW

On factors of potential relevance to moral weight, including some that could intuitively upweight many invertebrates, see also my Preliminary thoughts on moral weight.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:22:26.084Z · score: 1 (1 votes) · EA · GW

Presumably, though I know very little about that and don't know how much value would be added there by someone focused on worst case scenarios (over their replacement).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-05T20:20:56.755Z · score: 10 (4 votes) · EA · GW

This all sounds right to me, though I think some people have different views, and I'm hardly an expert. Speaking for myself at least, the things you point to are roughly why I wanted the "maybe" in front of "relevant roles in government." Though one added benefit of doing security in government is that, at least if you get a strong security clearance, you might learn helpful classified things about e.g. repelling state-originating APTs.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-03T03:31:01.714Z · score: 4 (3 votes) · EA · GW

Yeah, something closer to the former.

Comment by lukeprog on How likely is a nuclear exchange between the US and Russia? · 2019-07-02T01:14:56.852Z · score: 5 (3 votes) · EA · GW

Very minor: "GJP" should be "GJI." Good Judgment Project ended with the end of the IARPA ACE tournaments. The company that pays superforecasters from that project to continue making forecasts for clients is Good Judgment Inc.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:44.025Z · score: 8 (3 votes) · EA · GW

I think we meant a bit of (b) and (c) but especially (a).

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:28.453Z · score: 14 (4 votes) · EA · GW

IIRC the main concern in the earlier conversations was about how many high-impact roles of this type there might really be in the next couple decades. Probably the number is smaller than (e.g.) the number of similarly high-impact "AI policy" roles, but (as our post says) we think the number of high-impact roles of this type will be substantial. And given how few GCR-focused people there are in general, and how few of them are likely a personal fit for this kind of career path anyway, it might well be that even if many of the people who are a good fit for this path pursue it, that would still not be enough to meet expected need in the next couple decades.

Comment by lukeprog on Information security careers for GCR reduction · 2019-07-01T20:53:10.918Z · score: 14 (7 votes) · EA · GW

The key roles we have in mind are a bit closer to what is sometimes called "security officer," i.e. someone who can think through (novel, GCR-focused) threat models, plausibly involving targeted state-based attacks, develop partly-custom system and software solutions that are a match to those threat models, think through and gather user feedback about tradeoffs between convenience and security of those solutions, develop and perhaps deliver appropriate training for those users, etc. Some of this might include things like "protect some unusual configuration of AWS services," but I imagine that might also be something that the security officer is able to outsource. We’ve tried working with a few security consultants, and it hasn’t met our needs so far.

Projects like "develop novel cryptographic methods" might also be useful in some cases — see my bullet points on research (rather than implementation) applications of security expertise in the context of AI — but they aren't the modal use-case we're thinking of.

But also, we haven't studied this potential career path to the level of depth that (e.g.) 80,000 Hours typically does when developing a career profile, so we have more uncertainty about many of the details here even than is typically represented in an 80,000 Hours career profile.

Comment by lukeprog on Who are the people that most publicly predicted we'd have AGI by now? Have they published any kind of retrospective, and updated their views? · 2019-06-29T18:35:17.032Z · score: 5 (4 votes) · EA · GW

You could dive into the specific examples in the spreadsheet linked here (the MIRI AI predictions dataset).

Comment by lukeprog on Invertebrate Sentience: A Useful Empirical Resource · 2019-06-10T02:20:30.954Z · score: 18 (10 votes) · EA · GW

Exciting — I look forward to the rest! Y'all might want to consider writing a target article (summarizing your findings) for Animal Sentience; I suspect Harnad would be interested.

Comment by lukeprog on Which scientific discovery was most ahead of its time? · 2019-05-16T14:34:13.589Z · score: 21 (9 votes) · EA · GW

Cases where the scientific knowledge was actually lost and then rediscovered much later provide especially strong evidence w.r.t. the discovery counterfactuals. E.g. Hero's aeolipile or al-Kindi's development of relative frequency analysis for decoding messages. Probably there are far more cases of this than we realize, because the evidence that someone somewhere once had the knowledge and then lost it has itself been lost; e.g. we could easily have just never rediscovered the Antikythera mechanism.

Comment by lukeprog on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-07T05:23:25.081Z · score: 2 (2 votes) · EA · GW

I looked into this a bit. Unfortunately the quality of evidence in sleep medicine was underwhelming, e.g. on behavioral treatments.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-08T01:31:59.312Z · score: 29 (14 votes) · EA · GW

BTW my "reflections on the 2018 RA recruiting round" post is now up, here.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T04:22:00.496Z · score: 31 (11 votes) · EA · GW

I agree that if it's true that "many EAs feel that either they're working at a top EA org or they're not contributing much," then that is much worse than anything about application time cost and urgently needs to be fixed. I've never felt that way about EA org work vs. alternatives, so I may have just missed that this is a message many people are getting.

E.g. Scott's post also says:

Should also acknowledge the possibility that "talent-constrained” means the world needs more clean meat researchers, malaria vaccine scientists, and AI programmers, and not just generic high-qualification people applying to EA organizations. This wasn’t how I understood the term but it would make sense.

…and my reply is "Yes, talent-constrained also means those other things, and it's a big problem if that was unclear to a noticeable fraction of the community."

FWIW I suspect there's also something a bit more subtle going on than overly narrow misunderstandings of "talent-constrained," e.g. something like Max Daniel's hypothesis.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T21:49:21.997Z · score: 10 (4 votes) · EA · GW

Thanks, that all makes sense to me. Will think more about this. Also still curious to hear replies from others here.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:09:22.891Z · score: 10 (5 votes) · EA · GW

Thanks for +1ing the above comment. I'd be keen to hear your reply to this comment, too.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T19:07:41.472Z · score: 20 (9 votes) · EA · GW

Thanks for sharing. As someone who spends a lot of time trying to fill EA meta/longtermist talent gaps — e.g. by managing Open Phil RA recruiting, helping to match the strongest applicants we don't hire to other openings, and by working on field-building in AI strategy/policy (e.g. CSET) — hearing stories like yours is unnerving.

What changes to the landscape, or hiring processes, or whatever, do you think would've made the most difference in your case?

I'm also curious to hear your reaction to my comment elsewhere about available paths:

There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T22:09:24.154Z · score: 66 (30 votes) · EA · GW

Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn't clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it's good news that the EA community has grown such that these particular high-impact jobs can attract talent sufficient for them to be so competitive, it's unfortunate that this change wasn't clearer sooner, and posts like this one help with that, albeit not soon enough to help mitigate your own 1.5 years of suffering.

Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don't want to miss out on great people just because they don't have enough runway/etc. to interact with our process.

FWIW, I found some of your comments about "elite culture" surprising. For context: I grew up in rural Minnesota, then dropped out of counseling psychology undergrad at the University of Minnesota, then worked at a 6-person computer repair shop in Glendale, CA. Only in the past few years have I begun to somewhat regularly interact with many people from e.g. top schools and top tech companies. There are aspects of interacting with such "elites" that I've had to learn on the fly and to some degree am still not great at, but in my experience the culture in those circles is still pretty different from the culture at major EA-motivated employers, even though many of the staff at EA-motivated employers are now people who e.g. graduated from schools like Oxford or Harvard. For example, it's not my experience that people at major EA organizations are as effusively positive as many people in non-EA "elite" circles are. In fact, I would've described the culture at the EA organizations I interact with the most in sorta opposite terms, in that it's hard to get them excited about things. E.g. if you tell one of my Open Phil RA colleagues about a new study in Nature on some topic they care about, a pretty common reaction is to shrug and say "Yeah but who knows if it's true; most of the time we dig into a top-journal study, it completely falls apart." Or if you tell people at most EA orgs about a cool-sounding global health or poverty-reduction intervention, they'll probably say "Could be interesting, but very low chance it'll end up looking as cost-effective as AMF or even GiveDirectly upon further investigation, so: meh." Also, EA-motivated employers are generally not as "credentialist," in my experience, as most "elite" employers (perhaps except for tech companies).

Finally, re: "you never know for sure if it's not just perfect meritocracy correctly filtering [certain people out]." I can't speak to your case in particular, but at least w.r.t. Open Phil's RA recruiting efforts (which I've been managing since early 2018), I think I am sure it's not a perfect meritocracy. We think our application process probably has a high false negative rate (i.e. rejecting people who are actually strong fits, or would be with 3mo of training), and it's just very difficult to reduce the false negative rate without also greatly increasing the false positive rate. Just to make this more concrete: in our 2018 RA hiring round, if somebody scored really well on our stage-3 work test, we typically thought "Okay, decent chance this person is a good fit," but when somebody scored medium/low on it, we often threw up our hands and said "No clue if this person is a good fit or not, there are lots of reasons they could've scored poorly without actually being a poor fit, I guess we just don't get to know either way without us and them paying infeasibly huge time costs." (So why not just improve that aspect of our work tests? We're trying, e.g. by contracting several "work test testers," but it's harder than one might think, at least for such ill-defined "generalist" roles.)

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T23:48:23.798Z · score: 12 (6 votes) · EA · GW

Sounds plausible. E.g. I'm pro "train up as a cybersecurity expert" but I know others have advised against.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T23:47:27.261Z · score: 44 (21 votes) · EA · GW

Thanks for sharing your comment about personalized invitations, that's interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as "high chance you'll get the job if you apply," or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T21:23:02.381Z · score: 27 (12 votes) · EA · GW

Sorry to hear how much misery you've experienced. I'm curious to ask a follow-up question, but feel free to ignore if you aren't comfortable answering.

In particular, I'm wondering whether "make [EA] my career" feels ~identical (to you) to "work at a handful of explicitly EA-motivated employers." If it does, then maybe the messaging or energy or something in the EA community is pretty far from what I think it should be, which is more like what I said in another comment:

There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T21:22:26.249Z · score: 10 (4 votes) · EA · GW

Nearly all of Open Phil's RA hiring process is focused on assessing someone's immediate fit for the kind of work we do (via the remote work tests), not (other types of) fit with the team and mission.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:47:11.475Z · score: 8 (4 votes) · EA · GW

Per Buck's comment, I think identifying software engineering talent is a pretty different problem than identifying e.g. someone who is already a good fit for Open Phil generalist RA roles.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:46:38.318Z · score: 26 (15 votes) · EA · GW

Thanks for mentioning the thing about the conversation notes test. It was simply an oversight to not explicitly say "Please don't spend more than X hours on this work test," and I've now added such a sentence to our latest draft of those work test instructions. We had explicit time limits for our other two tests.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:46:16.664Z · score: 47 (21 votes) · EA · GW

Not sure I follow the part about how the kind of thing described in the original post makes you "more reluctant to introduce new people into the EA community." There are lots of exciting things for EAs to do besides "apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers," including "keep doing what you're doing and engage with EA as an exciting hobby" and "apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren't at one of a handful of explicitly EA-motivated orgs" and "do earn to give for a while while gaining skills and then maybe transition to more direct work later or maybe not," as well as other paths that are specific to particular priority causes, e.g. for AI strategy & policy I'd be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also the worst that happens is that you'll be in extremely high demand and highly paid).

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T17:43:46.039Z · score: 30 (16 votes) · EA · GW

Yeah, this is one reason Open Phil pays people for doing our remote work tests, so that people who don't happen to have runway/similar can still go through our process. Possibly more EA orgs should do this if they aren't already.

Comment by lukeprog on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T16:36:25.793Z · score: 60 (28 votes) · EA · GW

Oof, 8 weeks of effort to get 0/20 positions is pretty brutal. It's easy to see how that would feel like your "Hey you!…" paragraph. And while I suspect you're a bit of an outlier in time spent and positions applied for, I also think you're pointing at something true about the current situation re: job openings at EA-motivated employers, as evidenced by how many upvotes this post has gotten, some of the comments on this page, and the data I've got as a result of managing Open Phil's 2018 recruitment round of Research Analysts, during which we had to say "no" to tons of applicants with quite impressive resumes.

I've been writing up some reflections on that recruiting round, which I hope to share soon. One of my takeaways is something like "The base of talent out there is strong, and Open Phil's current ability to deploy it is weak." In that way we might be an extreme opposite of Teach for America, and I suspect many other EA-motivated orgs are as well.

Anyway, I plan to say more on these topics when I share my "reflections" post, but in the meantime I just want to say I'm sorry that you spent so much time applying to EA orgs and got no offers. Also, setting the time investment aside, it's also just emotionally difficult to get an "Unfortunately, we've decided…" email, let alone receive 20 of them in a row.

A couple other random notes for now:
- A colleague of mine has heard some EAs — perhaps motivated by considerations like those in this post — saying stuff like "maybe I shouldn't even try to apply because I don't want to waste orgs' time." In case future potential Open Phil applicants end up reading this comment, let it be known that we don't think it's a waste of our time to process applications. If we don't have the staff capacity to process all the applications we receive, we can always just drop a larger fraction of applicants at each stage. But if someone never applies, we have no opportunity at all to figure out how good a fit they might be. Also, what we're looking for is pretty unclear (especially to potential applicants), and so e.g. some of our recent hires are people who told us they probably wouldn't have bothered applying if we hadn't proactively encouraged them to apply. Of course, an applicant could be worried about whether applying is worth their time, and that's a different matter.
- I think it would've been good to mention that some of these organizations pay applicants for some/all of the time they spend on the application process. (Hopefully Open Phil isn't the only one?)

Comment by lukeprog on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:59:15.656Z · score: 20 (7 votes) · EA · GW

Roughly how much time per month/year does each of the fund managers currently expect to spend on investigating, discussing, and deciding on grant opportunities?

Comment by lukeprog on Long-Term Future Fund AMA · 2018-12-20T13:58:44.808Z · score: 20 (8 votes) · EA · GW

Roughly how much time per month/year does each of the fund managers currently expect to spend on investigating, discussing, and deciding on grant opportunities?

Comment by lukeprog on Insomnia: a promising cure · 2018-11-21T22:33:30.309Z · score: 3 (7 votes) · EA · GW

I disagree about the strength of evidence for CBT-I effectiveness.

Comment by lukeprog on Relieving extreme physical pain in humans – an opportunity for effective funding · 2018-10-16T04:51:37.330Z · score: 11 (7 votes) · EA · GW

Open Phil grant on chronic pain.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-23T21:01:43.745Z · score: 1 (1 votes) · EA · GW

Somebody asked the following question:

I understand that OP’s preference is to have promising candidates attend an in-person work trial rather than do an additional remote work test; would this preference still stand if the candidate in question has to obtain a US work visa sponsorship in order to attend the in-person trial?

Our reply is: Yes, that preference stands regardless of current work authorization status, though of course in some cases there won't be any way for us to help an applicant get US work authorization, depending on their situation.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-11T21:11:10.151Z · score: 3 (3 votes) · EA · GW

Responding to your second formulation of the question, the answer is "more the latter than the former." We intend to invest heavily in training and mentoring new hires, and we hope that research analysts will end up being long-term core contributors at Open Phil — as research analysts, as grant investigators, and as high-level managers, among other roles — or, in some cases, in important roles that require similar skills outside Open Phil.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-02T22:43:33.362Z · score: 0 (0 votes) · EA · GW

Unfortunately the likelihood is still pretty unclear to us at this point, and the available options vary a fair bit by applicant, depending on which country they're from, whether they recently graduated from undergrad or graduate school, and other factors.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-02T22:43:15.441Z · score: 1 (1 votes) · EA · GW

Quick replies to each:

1a. Our goals for 2018 are laid out in the post you linked to.

1b. The expectation is based mostly on the fact that we gave well over $100 million last year, and we're devoting similar time and effort to grantmaking in 2018.

2a. Open Phil is still a fairly new organization, and I don't think many know much about us yet. Probably we are best known in the effective altruism community, where we seem to have a strong reputation.

2b. Does it matter for our reputation, do you mean? I'm not sure. I'm not aware of us having received critiques about that.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-26T05:08:17.979Z · score: 0 (0 votes) · EA · GW

Completion of an honours or master's program provides us with a bit more evidence about an applicant's capabilities than an undergraduate degree does, but both are less informative to us than the applicant's performance on the various work samples that are part of our application process.

Because our roles are so "generalist," there are few domains that are especially relevant, though microeconomics and statistics are two unusually broadly relevant fields. In general, we find that those with a STEM background do especially well at the kind of work we do, but a STEM background is not required. A couple other things that are likely helpful for getting and excelling in an Open Phil research analyst role are calibration training and practice making Fermi estimates.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-21T02:46:22.505Z · score: 1 (1 votes) · EA · GW

Possibly. If a research analyst excels at the job after being hired, and seems likely to continue to excel even while remote, we'd certainly consider it.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-19T03:04:58.375Z · score: 6 (5 votes) · EA · GW

Here's a question I received via email, which I'll answer here so others can benefit from the answer.

QUESTION

Would you be able to give examples of how the research has been used for decision making around giving money/grants?

ANSWER

Sure, here are a few examples:

  • Our research on the history of philanthropy helped us decide to be more ambitious: "we are more interested in working on daunting problems over long periods of time after learning about some of philanthropy’s past contributions." This has increased the amount of attention and funding we've spent on daunting challenges that will likely require investment over many years to achieve the impacts we hope for.
  • Our shallow investigations into several different global catastrophic risks enabled us to choose initial priorities in that category. Most importantly, we launched grantmaking focus areas in "biosecurity and pandemic preparedness" and "potential risks from advanced artificial intelligence." Across those two focus areas we've since made more than $80 million in grants.
  • Our research on moral patienthood persuaded us to begin making grants related to fish welfare. Since then, we've made more than $6 million in fish welfare grants.
Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-15T01:13:08.633Z · score: 0 (0 votes) · EA · GW

Yes, you may submit a writing sample by sending it to jobs@openphilanthropy.org, as FirstName.LastName.Sample (e.g. John.Smith.Sample.doc or John.Smith.Sample.pdf). If you'd like to submit a letter of recommendation, please include it as a page of your résumé.

Please keep in mind that writing samples and letters of recommendation are entirely optional, so if you don't already have them handy, I don't recommend spending time pulling them together. Our application process puts much more weight on work test performance anyway.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-13T21:45:44.156Z · score: 0 (0 votes) · EA · GW

What kind of conformity are you asking about? Certainly, some degree of alignment with our mission and values is important to us, and so is talent and "fit" for the work. Our team members are encouraged to focus on optimizing for Open Phil's mission, even when it means pushing back on their manager.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-13T21:44:31.444Z · score: 3 (3 votes) · EA · GW

Because applicants are working through our application process at different speeds, we're still learning what portion of applicants are invited forward at each stage. We are also adjusting the process as we go along. As of today, our application process looks like this:

  1. Initial submission of application + résumé.
  2. 2-question timed test.
  3. A "conversation notes" work test. (compensated via honorarium)
  4. A brief call, to explain the rest of our process and answer the applicant's questions.
  5. An "internal grant write-up" work test. (compensated via honorarium)

To some degree we are still determining next steps after #5, and they depend somewhat on the applicant's availability and preferences, and that's the main thing we discuss with applicants at step #4, on an individual basis.

Comment by lukeprog on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-04-02T20:52:09.056Z · score: 4 (3 votes) · EA · GW

Yes, we recently removed a 2-question timed test from the job ad page. We wanted to lower the cost of submitting an initial application, and also we want to filter some applications before we request that applicants take the timed test. However, we are still using this timed test in an early part of our application process, and we still think it is diagnostic of who will be a good fit for our research analyst positions.

(I work for Open Phil.)