Posts

History of Philanthropy Literature Review: Pugwash Conferences on Science and World Affairs 2021-09-24T08:52:22.827Z
Call to Vigilance 2021-09-15T18:46:09.068Z
How to make the best of the most important century? 2021-09-14T21:05:57.096Z
AI Timelines: Where the Arguments, and the "Experts," Stand 2021-09-07T17:35:12.431Z
Forecasting transformative AI: the "biological anchors" method in a nutshell 2021-08-31T18:17:03.013Z
Are we "trending toward" transformative AI? (How would we know?) 2021-08-24T17:15:18.742Z
Forecasting transformative AI: what's the burden of proof? 2021-08-17T17:14:37.482Z
Transformative AI Timelines Part 1 of 4: What Kind of AI? 2021-08-10T21:38:46.178Z
This Can't Go On 2021-08-03T15:53:33.837Z
Digital People FAQ 2021-07-27T17:19:59.605Z
Digital People Would Be An Even Bigger Deal 2021-07-27T17:19:41.500Z
The Duplicator: Instant Cloning Would Make the World Economy Explode 2021-07-20T16:41:42.011Z
New blog: Cold Takes 2021-07-13T17:14:33.220Z
All Possible Views About Humanity's Future Are Wild 2021-07-13T16:57:28.414Z
My current impressions on career choice for longtermists 2021-06-04T17:07:29.979Z
History of Philanthropy Case Study: The Campaign for Marriage Equality 2018-09-20T08:58:12.938Z
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy 2018-03-26T16:32:28.313Z
Update on Cause Prioritization at Open Philanthropy 2018-01-26T16:40:08.648Z
History of Philanthropy Case Study: Clinton Health Access Initiative’s Role in Global Price Drops for Antiretroviral Drugs 2018-01-10T16:44:28.840Z
Some Thoughts on Public Discourse 2017-02-23T17:29:09.085Z
Radical Empathy 2017-02-16T12:41:39.017Z
History of Philanthropy Case Study: The Founding of the Center for Global Development 2016-06-15T15:48:22.913Z
History of Philanthropy Case Study: The Founding of the Center on Budget and Policy Priorities 2016-05-20T15:30:25.853Z
Some Background on Our Views Regarding Advanced Artificial Intelligence (Open Philanthropy) 2016-05-07T02:13:00.000Z
Hits-based Giving 2016-04-04T07:45:14.230Z
Why the Open Philanthropy Project isn't currently funding organizations focused on promoting effective altruism 2015-10-28T19:40:30.129Z
The Track Record of Policy-Oriented Philanthropy 2013-11-06T17:11:30.181Z
Geoengineering Research 2013-10-16T16:07:52.800Z
Excited Altruism 2013-08-20T07:00:00.000Z
Effective altruism 2013-08-14T04:00:40.000Z
Empowerment and Catastrophic Risk 2013-07-18T16:04:59.716Z
Objections and Concerns About Our New Direction 2012-06-22T15:59:28.903Z
Philanthropy's Success Stories 2012-03-01T16:53:11.722Z

Comments

Comment by Holden Karnofsky (HoldenKarnofsky) on This Can't Go On · 2021-08-27T18:13:17.498Z · EA · GW

I've just put up a post with more discussion of this point: https://www.cold-takes.com/more-on-multiple-world-size-economies-per-atom/

Comment by Holden Karnofsky (HoldenKarnofsky) on Forecasting transformative AI: what's the burden of proof? · 2021-08-24T17:33:34.710Z · EA · GW

Most of your post seems to be arguing that current economic trends don't suggest a coming growth explosion.

If current economic trends were all the information I had, I would think a growth explosion this century is <<50% likely (maybe 5-10%?). My main reason for a higher probability is AI-specific analysis (covered in future posts).

This post is arguing not "Current economic trends suggest a growth explosion is near" but rather "A growth explosion is plausible enough (and not strongly enough contraindicated by current economic trends) that we shouldn't too heavily discount separate estimates implying that transformative AI will be developed in the coming decades." I mostly don't see the arguments in the piece you linked as providing a strong counter to this claim, but if you highlight the ones you think provide the strongest counters, I can consider them more closely.

The one that seems initially like the best candidate for such an argument is "Many of our technologies cannot get orders of magnitude more efficient." But I'm not arguing that e.g. particular energy technologies will get orders of magnitude more efficient; I'm arguing we'll see enough acceleration to be able to quickly develop something as transformative as digital people. There may be an argument that this isn't possible due to key bottlenecks being near their efficiency limits, but I don't think the case in your piece is at that level of specificity.

Comment by Holden Karnofsky (HoldenKarnofsky) on Forecasting transformative AI: what's the burden of proof? · 2021-08-24T17:33:05.463Z · EA · GW

Thanks! This post is using experimental formatting so I can't fix this myself, but hopefully it will be fixed soon.

Comment by Holden Karnofsky (HoldenKarnofsky) on Forecasting transformative AI: what's the burden of proof? · 2021-08-24T17:32:30.982Z · EA · GW

Agreed. This is similar in spirit to the "My cause is most important" part.

Comment by Holden Karnofsky (HoldenKarnofsky) on Forecasting transformative AI: what's the burden of proof? · 2021-08-24T17:31:58.640Z · EA · GW

It seems to me like "transformative AI is coming this century" and "this century is the most important century" are very different claims which you tend to conflate in this sequence.

I agree they're different claims; I've tried not to conflate them. For example, in this section I give different probabilities for transformative AI and two different interpretations of "most important century."

This post contains a few cases where I think the situation is somewhat confusing, because there are "burden of proof" arguments that take the basic form, "If this type of AI is developed, that will make it likely that it's the most important century; there's a burden of proof on arguing that it's the most important century because ___." So that does lead to some cases where I am defending "most important century" within a post on AI timelines.

More generally, I think that claims which depend on the specifics of our long-term trajectory after transformative AI are much easier to dismiss as being speculative (especially given how much pushback claims about reaching TAI already receive for being speculative). So I'd much rather people focus on the claim that "AI will be really, really big" than "AI will be bigger than anything else which comes afterwards". But it seems like framing this sequence of posts as the "most important century" sequence pushes towards the latter.

I struggled a bit with this; you might find this page helpful, especially the final section, "Holistic intent of the 'most important century' phrase." I ultimately decided that relative to where most readers are by default, "most important century" is conveying a more accurate high-level message than something like "extraordinarily important century" - the latter simply does not get across the strength of the claim - even though it's true that "most important century" could end up being false while the overall spirit of the series (that this is a massively high-stakes situation) ends up being right.

I also think it's the case that the odds of "most important century" being literally true are still decently high (though substantially lower than "transformative AI this century"). A key intuition behind this claim is the idea that PASTA could radically speed things up, such that this century ends up containing as much eventfulness as we're used to from many centuries. (Some more along these lines in the section starting "To put this possibility in perspective, it's worth noting that the world seems to have 'sped up'" from the page linked above.)

Oh, also, depending on how you define "important", it may be the case that past centuries were more important because they contained the best opportunities to influence TAI - e.g. when the west became dominant, or during WW1 and WW2, or the cold war. Again, that's not very action-guiding, but it does make the "most important century" claim even more speculative.

I address this briefly in footnote 1 on the page linked above: "You could say that actions of past centuries also have had ripple effects that will influence this future. But I'd reply that the effects of these actions were highly chaotic and unpredictable, compared to the effects of actions closer-in-time to the point where the transition occurs."

Comment by Holden Karnofsky (HoldenKarnofsky) on This Can't Go On · 2021-08-09T19:14:53.976Z · EA · GW

Thanks for all the thoughts on this point! I don't think the comparison to currency is fair (the size of today's economy is a real quantity, not a nominal one), but I agree with William Kiely that the "several economies per atom" point is best understood as an intuition pump rather than an airtight argument. I'm going to put a little thought into whether there might be other ways of communicating how astronomically huge some of these numbers are, and how odd it would be to expect 2% annual growth to take us there and beyond.
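For readers who want the raw arithmetic behind that intuition pump, here is a minimal back-of-envelope sketch (my own illustration, not figures from the post; it assumes roughly 10^70 atoms in our galaxy, which is only an order-of-magnitude guess):

```python
import math

# Back-of-envelope sketch of the "economies per atom" intuition pump.
# Assumptions (not from the source): ~10^70 atoms in the galaxy, and a
# steady 2% annual growth rate of the world economy.
ATOMS_IN_GALAXY = 1e70   # rough order of magnitude only
ANNUAL_GROWTH = 0.02     # 2% per year

# Years until the economy is ~10^70 times its current size, i.e. roughly
# one of today's entire world economies per atom in the galaxy.
years = math.log(ATOMS_IN_GALAXY) / math.log(1 + ANNUAL_GROWTH)
print(f"{years:,.0f} years")   # ~8,100 years -- brief on galactic timescales
```

The point of the sketch is just that a few thousand years of "modest" 2% growth already implies numbers far beyond any plausible physical resource base, which is what makes "2% forever" hard to take literally.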

One thought: it is possible that there's some hypothetical virtual world (or other configuration of atoms) with astronomical value compared to today's economy. But if so, getting to that probably involves some sort of extreme control and understanding of our environments, such as what might be possible with digital people. And I'd expect the path to such a thing to look more like "At some point we figure out how to essentially escape physical constraints and design an optimal state [e.g., via digital people], causing a spike (not necessarily instantaneous, but quite quick) in the size of the economy" than like "We get from here to there at 2% growth per year."

Comment by Holden Karnofsky (HoldenKarnofsky) on Digital People Would Be An Even Bigger Deal · 2021-08-09T19:13:51.819Z · EA · GW

I think this depends on empirical questions about the returns to more compute for a single mind. If the mind is closely based on a human brain, it might be pretty hard to get much out of more compute, so duplication might have better returns. If the mind is not based on a human brain, it seems hard to say how this shakes out.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-08-09T19:13:21.983Z · EA · GW

I'm not sure I'm fully following, but I think the "almost exactly the same time" point is key (and I was getting at something similar with "However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy"). The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-08-06T04:09:12.886Z · EA · GW

Working on that!

Comment by Holden Karnofsky (HoldenKarnofsky) on Digital People FAQ · 2021-07-29T21:05:51.844Z · EA · GW

If we have advanced AI that is capable of constructing a digital human simulation, wouldn't it also by proxy be advanced enough to be conscious on its own, without the need for anything approximating human beings? I can imagine humans wanting to create copies of themselves for various purposes but isn't it much more likely for completely artificial silicon-first entities to take over the galaxy? Those entities wouldn't have the need for any human pleasures and could thus conquer the universe much more efficiently than any "digital humans" ever could.

It does seem likely to me that advanced AI would have the capabilities needed to spread through the galaxy on its own. Where digital people might come in is that - if advanced AI systems remain "aligned" / under human control - digital people may be important for steering the construction of a galaxy-wide civilization according to human-like (or descended-from-human-like) values. It may therefore be important for digital people to remain "in charge" and to do a lot of work on things like reflecting on values, negotiating with each other, designing and supervising AI systems, etc.

If we get to a point where "digital people" are possible, can we expect to be able to tweak the underlying circuitry to eliminate the concept of pain and suffering altogether, creating "humans" incapable of experiencing anything but joy, no matter what happens to them? It's really hard to imagine from a biological human perspective, but anything is possible in a digital world, and this wouldn't necessarily make these "humans" any less productive.

"Tweaking the underlying circuitry" wouldn't automatically be possible just as a consequence of being able to simulate human minds. But I'd guess the ability to do this sort of tweak would follow pretty quickly.

As a corollary, do we have a reason to believe that "digital humans" will want to experience anything other than 24/7 heroin-like euphoria in their "down time", rather than complex experiences like zero-g? Real-life humans cannot do that as our bodies quickly break down from heroin exposure, but digital ones won't have such arbitrary limitations.

I think a number of people (including myself) would hesitate to experience "24/7 heroin-like euphoria" and might opt for something else.

Comment by Holden Karnofsky (HoldenKarnofsky) on New blog: Cold Takes · 2021-07-29T21:03:14.202Z · EA · GW

Thanks, I agree it's not ideal, but haven't found a way to change the color of that button between light and dark mode.

Comment by Holden Karnofsky (HoldenKarnofsky) on New blog: Cold Takes · 2021-07-29T21:02:56.382Z · EA · GW

No need to follow any unusual commenting norms! The "cold" nature of the blog is due to my style and schedule, not a request for others.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-07-29T21:02:26.099Z · EA · GW

I'm not sure I follow this. I think if there were extraterrestrials who were going to stop us from spreading, we'd likely see signs of them (e.g., mining the stars for energy, setting up settlements), regardless of what speed they traveled while moving between stars.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-07-29T21:01:41.176Z · EA · GW

I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-07-29T21:01:21.534Z · EA · GW

I think it's wild if we're living in the century (or even the 100,000 years) that will produce a misaligned AI whose values come to fill the galaxy for billions of years. That would just be quite a remarkable, high-leverage (due to the opportunity to avoid misalignment, or at least have some impact on what the values end up being) time period to be living in.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-07-29T21:00:59.135Z · EA · GW

I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here."

One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wishful thinking, ego, etc. than by dispassionate consideration of the facts.

There's also an intuition along the lines of "If we're really in such a special position, I'd think it would be remarked upon more; I'm suspicious of claims that something really important is going on that isn't generally getting much attention."

I ultimately think we should bite these bullets (that we actually are in the kind of special position that wishful thinking might falsely conclude we're in, and that there actually is something very important going on that isn't getting commensurate attention). I think some people imagine they can avoid biting these bullets by e.g. asserting long timelines to transformative AI; this piece aims to argue that doesn't work.

Comment by Holden Karnofsky (HoldenKarnofsky) on All Possible Views About Humanity's Future Are Wild · 2021-07-29T21:00:18.485Z · EA · GW

Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.

I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.

Comment by Holden Karnofsky (HoldenKarnofsky) on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-07-29T20:59:32.688Z · EA · GW

(Response to both AppliedDivinityStudies and branperr)

My aim was to argue that a particular extreme sort of duplication technology would have extreme consequences, which is important because I think technologies that are "extreme" in the relevant way could be developed this century. I don't think the arguments in this piece point to any particular conclusions about biological cloning (which is not "instant"), natalism, etc., which have less extreme consequences.

Comment by Holden Karnofsky (HoldenKarnofsky) on Digital People Would Be An Even Bigger Deal · 2021-07-29T20:57:26.895Z · EA · GW

It seems very non-obvious to me whether we should think bad outcomes are more likely than good ones. You asked about arguments for why things might go well; a couple that occur to me are (a) as long as large numbers of digital people are committed to protecting human rights and other important values, it seems like there is a good chance they will broadly succeed (even if they don't manage to stop every case of abuse); (b) increased wealth and improved social science might cause human rights and other important values to be prioritized more highly, and might help people coordinate more effectively.

Comment by Holden Karnofsky (HoldenKarnofsky) on Digital People Would Be An Even Bigger Deal · 2021-07-29T20:51:48.650Z · EA · GW

I broadly agree with this. The point of my post was to convey intuitions for why "a world of [digital people] will be so different from modern nation states just as modern states are from chimps," not to claim that the long-run future will be just as described in Age of Em. I do think despite the likely radical unfamiliarity of such a world, there are properties we can say today it's pretty likely to have, such as the potential for lock-in and space colonization.

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:30:49.709Z · EA · GW

Thanks for the thoughtful comments, Linch.

Response on point 1: I didn't mean to send a message that one should amass the most impressive conventional credentials possible in general - only that for many of these aptitudes, conventional success is an important early sign of fit and potential.

I'm generally pretty skeptical by default of advanced degrees unless one has high confidence that one wants to be on a track where the degree is necessary (I briefly give reasons for this skepticism in the "political and bureaucratic aptitudes" section). This piece only mentions advanced degrees for the "academia," "conceptual and empirical research" and "political and bureaucratic" aptitudes. And for the latter two, these aren't particularly recommended, more mentioned as possibilities.

More generally, I didn't mean to advocate for "official credentials that anyone could recognize from the outside." These do seem crucial for some aptitudes (particularly academia and political/bureaucratic), but much less so for other aptitudes I listed. For org running/building/boosting, I emphasized markers of success that are "conventional" (i.e., they're not contrarian goals) but are also not maximally standardized or legible to people without context - e.g., raises, promotions, good relationships.

Response on point 2: this is interesting. I agree that when there is some highly neglected (often because "new") situation, it's possible to succeed with a lot less time invested. Crypto, COVID-19, and AI safety research of today all seem to fit that bill. This is a good point.

I'm less sure that this dynamic is going to be reliably correlated with the things that matter most by longtermist lights. When I picture "crunch time," I imagine that generalists whose main asset is their "willingness to drop everything" will have opportunities to have impact, but I also imagine that (a) their opportunities will be better insofar as they've developed the sorts of aptitudes listed in this piece; (b) a lot of opportunities to have impact will really rely on having built aptitudes and/or career capital over the long run. 

For example, I imagine there will be a lot of opportunities for (a) people who are high up in AI labs and government; (b) people who know how to run large projects/organizations; (c) people with large existing audiences and/or networks; (d) people who have spent many years working with large AI models; (e) people who have spent years developing rich conceptual and empirical understanding of major potential risk factors, and that these opportunities won't exist for generalists.

"Drop everything and work hard" doesn't particularly seem to me like the sort of thing one needs to get practice with (although it is the sort of thing one needs to be prepared for, i.e., one needs not to be too attached to their current job/community/etc.) So I guess overall I would think most people are getting "better prepared" by building the sorts of aptitudes described here than by "simulating crunch time early." That said, jumping into areas with unusually "short climbs to the top" (like the examples you gave) could be an excellent move because of the opportunity to build outsized career capital and take on outsized responsibilities early in one's career. And I'll reiterate my reservations about "advice," so wouldn't ask you to defer to me here!

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:27:06.602Z · EA · GW

I like this; I agree with most of what you say about this kind of work.

I've tried to mostly list aptitudes that one can try out early on, stick with if they're going well, and pretty reliably build careers (though not necessarily direct-work longtermist careers) around. I think the aptitude you're describing here might be more of a later-career/"secondary" aptitude that often develops as someone moves up along an "organization building/running/boosting" or "political/bureaucratic" track. But I agree it seems like a cluster of skills that can be intentionally developed to some degree and used in a lot of different contexts.

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:26:29.145Z · EA · GW

Thanks for the thoughtful comments!

On your first point: I chose to emphasize longtermism because:

  • It's what I've been thinking about the most (note that I am now professionally focused on longtermism, which doesn't mean I don't value other areas, but does mean that that's where my mental energy goes).
  • I think longtermism is probably the thorniest, most frustrating area for career choice, so I wanted to focus my efforts on helping people in that category think through their options.
  • I thought a lot of what I was saying might generalize further, but I wasn't sure and didn't want to claim that it would. And I would have found it harder to make a list of aptitudes for all of EA without having noticeable omissions.

With all of that said, I hear you on why this felt unwelcoming, and regret that. I'll add a link to this comment to the main post to help clarify.

On your second point, I did try to acknowledge the possibility of for-profit startups from a learning/skill-building point of view (the paragraph starting with "I do think that if you have any idea for an organization that you think could succeed ..."). That said, I agree this sort of entrepreneurship can also be useful for making money and having impact in other ways (as noted by MichaelA, below), not just for learning, and I should have been clearer about that.

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:23:43.927Z · EA · GW

I think a year of full-time work is likely enough to see the sort of "signs of life" I alluded to, but it could take much longer to fulfill one's potential. I'd generally expect a lot of people in this category to see steady progress over time on things like (a) how open-ended and poorly-scoped of a question they can tackle, which in turn affects how important a question they can tackle; (b) how efficiently and thoroughly they can reach a good answer; (c) how well they can communicate their insights; (d) whether they can hire and train other people to do conceptual and empirical research, and hence "scale" what they're doing (this won't apply to everyone).

There could also be an effect in the opposite direction, though - I expect some people in this category to have their best insights relatively early on, and to have more trouble innovating in a field as the field becomes better-developed.

Overall this track doesn't seem like the one most likely to offer a steady upward trajectory, though I think some people will experience that. (I'd guess that people focused on "answering questions" would probably have more of a steady upward trajectory than people focused on "asking new questions / having totally original ideas.")

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:22:44.859Z · EA · GW

This general idea seems pretty promising to me.

Comment by Holden Karnofsky (HoldenKarnofsky) on My current impressions on career choice for longtermists · 2021-06-23T03:19:46.217Z · EA · GW

I didn't mean to express a view one way or the other on particular current giving opportunities; I was instead looking for something a bit more general and timeless to say on this point, since especially in longtermism, giving opportunities can sometimes look very appealing at one moment and much less so at another (partly due to room-for-more-funding considerations). I think it's useful for you to have noted these points, though.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:41:28.176Z · EA · GW

This is still a common practice. The point of it isn't to evaluate employees by # of hours worked; the point is for their manager to have a good understanding of how time is being used, so they can make suggestions about what to go deeper on, what to skip, how to reprioritize tasks, etc.

Several employees simply opt out from this because they prefer not to do it. It's an optional practice for the benefit of employees rather than a required practice used for performance assessment.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:38:52.687Z · EA · GW

I'm referring to the possibility of supporting academics (e.g. philosophers) to propose and explore different approaches to moral uncertainty and their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Allocating_capital_to_buckets_and_causes , which may have different consequences for how much ought to be allocated to each bucket)

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:31:05.952Z · EA · GW

Keep in mind that Milan worked for GiveWell, not OP, and that he was giving his own impressions rather than speaking for either organization in that post.

That said:

*His "Flexible working schedule" point sounds pretty consistent with how things are here.

*We continue to encourage time tracking (but we don't require it and not everybody does it).

*We do try to explicitly encourage self-care.

Does that respond to what you had in mind?

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:15:26.525Z · EA · GW

GiveWell's CEA (cost-effectiveness analysis) was produced by multiple people over multiple years - we wouldn't expect a single person to generate the whole thing :)

I do think you should probably be able to imagine yourself engaging in a discussion over some particular parameter or aspect of GiveWell's CEA, and trying to improve that parameter or aspect to better capture what we care about (good accomplished per dollar). Quantitative aptitude is not a hard requirement for this position (there are some ways the role could evolve that would not require it), but it's a major plus.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:58:19.292Z · EA · GW

The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.

In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I expect that most entry-level Research Analysts will try their hand at both cause prioritization and grant investigation work, and we'll develop a picture of what they're best at that we can then use to assign them more of one or the other (or something else, such as the work listed at https://www.openphilanthropy.org/get-involved/jobs/analyst-specializing-potential-risks-advanced-artificial-intelligence) over time.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:54:33.185Z · EA · GW

We do formal performance reviews twice per year, and we ask managers to use their regular (~weekly) checkins with reports to sync up on performance such that nothing in these reviews should be surprising. There's no unified metric for an employee's output here; we set priorities for the organization, set assignments that serve these priorities, set case-by-case timelines and goals for the assignments (in collaboration with the people who will be working on them), and compare output to the goals we had set.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:41:35.104Z · EA · GW

All bios here: https://www.openphilanthropy.org/about/team

Grants Associates and Operations Associates are likely to report to Derek or Morgan. Research Analysts are likely to report to people who have been in similar roles for a while, such as Ajeya, Claire, Luke and Nick. None of this is set in stone though.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:37:07.286Z · EA · GW

A few things that come to mind:

  1. The work is challenging, and not everyone is able to perform at a high enough level to see the career progression they want.

  2. The culture tends toward direct communication. People are expected to be open with criticism, both of people they manage and of people who manage them. This can be uncomfortable for some people (though we try hard to create a supportive and constructive context).

  3. The work is often solitary, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It's possible that this will change for some roles in the future (e.g. it's possible that we'll want more large-group collaboration as our cause prioritization team grows), but we're not sure of that.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:36:35.312Z · EA · GW

We don't control the visa process and can't ensure that people will get sponsorship. We don't expect sponsorship requirements to be a major factor for us in deciding which applicants to move forward with.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:35:50.772Z · EA · GW

There will probably be similar roles in the future, though I can't guarantee that. To become a better candidate, one can accomplish objectively impressive things (especially if they're relevant to effective altruism); create public content that gives a sense for how they think (e.g., a blog); or get to know people in the effective altruism community to increase the odds that one gets a positive & meaningful referral.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:34:06.552Z · EA · GW

Most of the roles here involve a lot of independent work, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It’s possible that this will change for some roles in the future (e.g. it’s possible that we’ll want more large-group collaboration as our cause prioritization team grows), but we’re not sure of that. I think you should probably be prepared for a fair amount of work along the lines of what I've described here.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:29:56.686Z · EA · GW

They're different organizations and I don't know nearly as much about the GiveWell role. One big difference is the causes we work on.

If you're interested in both, I'd recommend applying to both, and if you are offered both roles, there will be lots of opportunities to learn more about each at that point in order to inform the decision.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:28:05.739Z · EA · GW

I answered a similar question here: http://effective-altruism.com/ea/1mf/hi_im_holden_karnofsky_ama_about_jobs_at_open/dpl

In general, people who have been in the Research Analyst role for a while will be the managers and primary mentors of new Research Analysts. There will be regular (~weekly) scheduled checkins as well as informal interaction as needed (e.g., over Slack).

There's no hard line between training and "just doing the work" - every assignment should have some direct value and some training value. We expect to lean pretty hard toward the training end of the spectrum for people's first few months, then gradually move along the spectrum to where assignments are more optimized for direct value.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:18:40.694Z · EA · GW

Yes, I mean statutory holidays like Thanksgiving.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:49:45.765Z · EA · GW

We're flexible. People don't clock in or out; we evaluate performance based on how much people get done on a timescale of months. We encourage people to work hard but also prioritize work-life balance. The right balance varies by the individual.

Most people here work more than one would in a traditional 9-5 job. (A common figure is 35-40 "focused" hours per week.) I think that reflects that they're passionate about their work rather than that they feel pressure from management to work a lot. We regularly check in with people about work-life balance and encourage them to work less if it seems this would be good for their happiness.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:43:02.578Z · EA · GW

We're in the process of reviewing our policies, but we're likely to settle on something like 25 paid days off (including sick days), 10 holiday days (with the option to work on holidays and use the paid time off elsewhere), several months of paid parental leave, and a flexible unpaid leave policy for people who want to take more time off. We are also flexible with respect to working from home.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:19:52.808Z · EA · GW

Perhaps other staff will chime in here, but my take: our pay is competitive and takes cost of living into account, and we are near public transportation, so I don't think the rents or commutes are a major issue. As a former NYC resident, I think the Bay Area is a great place to live (weather, food, etc.) and has a very strong effective altruist community. I don't see a lot of drawbacks to living here if you can make it work.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:16:11.082Z · EA · GW

Hm, I'm not sure why our form asks for more detail on undergrad relative to grad - we copied the form from GiveWell and may not have thought about it. It's possible this is because the form was being used in an earlier GiveWell search where few applicants had been to grad schools. I'll ask around about this.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:15:46.173Z · EA · GW

Broadly speaking, we're going to try to give people assignments that are relevant to our work and that we think include a lot of the core needed skills - things like evaluating a potential grant (or renewal) and writing up the case for or against. We'll evaluate these assignments, give substantial feedback, and iterate so that people improve. We'll also be providing resources for gaining background knowledge, such as "flex time," recommended reading lists and optional Q&A sessions. We've seen people improve a lot in the past and become core contributors, and think this basic approach is likely to lead to more of that.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:11:18.662Z · EA · GW

I would rate those about equally, though I'd add that GiveWell would prefer not to hire people whose main goal is to go to OP.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:59:20.032Z · EA · GW

We currently have a happy hour every 3 weeks and host group activities as well, including occasional parties and a multiple-day staff retreat this year. We want to make it easy for staff to socialize and be friends, without making it a requirement or an overly hard nudge (if people would rather stick to their work, that's fine by us).

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:56:23.481Z · EA · GW

We could certainly imagine ramping up grantmaking without a much better answer. As an institution we're often happy to go with a "hacky" approach that is suboptimal, but captures most of the value available under multiple different assumptions.

If someone at Open Phil has an idea for how to make useful progress on this kind of question in a reasonable amount of time, we'll very likely find that worthwhile and go forward. But there are lots of other things for Research Analysts to work on even if we don't put much more time into researching or reflecting on moral uncertainty.

Also note that we may pursue an improved understanding via grantmaking rather than via researching the question ourselves.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:52:37.039Z · EA · GW

All else equal, we consider applicants stronger when they have degrees in challenging fields from strong institutions. It’s not the only thing we’re looking at, even at that early stage. And the early stage is for filtering; ultimately, things like work trial assignments will be far more important to hiring decisions.

Comment by Holden Karnofsky (HoldenKarnofsky) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:51:25.990Z · EA · GW

This varies by the individual. We have some Research Analysts who are always working on a variety of things, and some who have become quite specialized. It varies largely by the interests/preferences of the employee.