Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy 2018-03-26T16:32:28.313Z
Some Thoughts on Public Discourse 2017-02-23T17:29:09.085Z
Radical Empathy 2017-02-16T12:41:39.017Z
Why the Open Philanthropy Project isn't currently funding organizations focused on promoting effective altruism 2015-10-28T19:40:30.129Z
Excited altruism 2013-08-20T07:00:00.000Z
Effective altruism 2013-08-14T04:00:40.000Z


Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:41:28.176Z · EA · GW

This is still a common practice. The point of it isn't to evaluate employees by # of hours worked; the point is for their manager to have a good understanding of how time is being used, so they can make suggestions about what to go deeper on, what to skip, how to reprioritize tasks, etc.

Several employees simply opt out of this because they prefer not to do it. It's an optional practice for the benefit of employees rather than a required practice used for performance assessment.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:38:52.687Z · EA · GW

I'm referring to the possibility of supporting academics (e.g. philosophers) to propose and explore different approaches to moral uncertainty and their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at , which may have different consequences for how much ought to be allocated to each bucket)

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:31:05.952Z · EA · GW

Keep in mind that Milan worked for GiveWell, not OP, and that he was giving his own impressions rather than speaking for either organization in that post.

That said:

* His "Flexible working schedule" point sounds pretty consistent with how things are here.

* We continue to encourage time tracking (but we don't require it and not everybody does it).

* We do try to explicitly encourage self-care.

Does that respond to what you had in mind?

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:15:26.525Z · EA · GW

GiveWell's CEA was produced by multiple people over multiple years - we wouldn't expect a single person to generate the whole thing :)

I do think you should probably be able to imagine yourself engaging in a discussion over some particular parameter or aspect of GiveWell's CEA, and trying to improve that parameter or aspect to better capture what we care about (good accomplished per dollar). Quantitative aptitude is not a hard requirement for this position (there are some ways the role could evolve that would not require it), but it's a major plus.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:58:19.292Z · EA · GW

The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.

In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I expect that most entry-level Research Analysts will try their hand at both cause prioritization and grant investigation work, and we'll develop a picture of what they're best at that we can then use to assign them more of one or the other (or something else, such as the work listed at ) over time.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:54:33.185Z · EA · GW

We do formal performance reviews twice per year, and we ask managers to use their regular (~weekly) checkins with reports to sync up on performance such that nothing in these reviews should be surprising. There's no unified metric for an employee's output here; we set priorities for the organization, set assignments that serve these priorities, set case-by-case timelines and goals for the assignments (in collaboration with the people who will be working on them), and compare output to the goals we had set.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:41:35.104Z · EA · GW

All bios here:

Grants Associates and Operations Associates are likely to report to Derek or Morgan. Research Analysts are likely to report to people who have been in similar roles for a while, such as Ajeya, Claire, Luke and Nick. None of this is set in stone though.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:37:07.286Z · EA · GW

A few things that come to mind:

  1. The work is challenging, and not everyone is able to perform at a high enough level to see the career progression they want.

  2. The culture tends toward direct communication. People are expected to be open with criticism, both of people they manage and of people who manage them. This can be uncomfortable for some people (though we try hard to create a supportive and constructive context).

  3. The work is often solitary, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It's possible that this will change for some roles in the future (e.g. it's possible that we'll want more large-group collaboration as our cause prioritization team grows), but we're not sure of that.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:36:35.312Z · EA · GW

We don't control the visa process and can't ensure that people will get sponsorship. We don't expect sponsorship requirements to be a major factor for us in deciding which applicants to move forward with.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:35:50.772Z · EA · GW

There will probably be similar roles in the future, though I can't guarantee that. To become a better candidate, one can accomplish objectively impressive things (especially if they're relevant to effective altruism); create public content that gives a sense for how they think (e.g., a blog); or get to know people in the effective altruism community to increase the odds that one gets a positive & meaningful referral.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:34:06.552Z · EA · GW

Most of the roles here involve a lot of independent work, consisting of reading/writing/analysis and one-on-one checkins rather than large-group collaboration. It’s possible that this will change for some roles in the future (e.g. it’s possible that we’ll want more large-group collaboration as our cause prioritization team grows), but we’re not sure of that. I think you should probably be prepared for a fair amount of work along the lines of what I've described here.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:29:56.686Z · EA · GW

They're different organizations and I don't know nearly as much about the GiveWell role. One big difference is the causes we work on.

If you're interested in both, I'd recommend applying to both, and if you are offered both roles, there will be lots of opportunities to learn more about each at that point in order to inform the decision.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:28:05.739Z · EA · GW

I answered a similar question here:

In general, people who have been in the Research Analyst role for a while will be the managers and primary mentors of new Research Analysts. There will be regular (~weekly) scheduled checkins as well as informal interaction as needed (e.g., over Slack).

There's no hard line between training and "just doing the work" - every assignment should have some direct value and some training value. We expect to lean pretty hard toward the training end of the spectrum for people's first few months, then gradually move along the spectrum to where assignments are more optimized for direct value.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:18:40.694Z · EA · GW

Yes, I mean statutory holidays like Thanksgiving.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:49:45.765Z · EA · GW

We're flexible. People don't clock in or out; we evaluate performance based on how much people get done on a timescale of months. We encourage people to work hard but also prioritize work-life balance. The right balance varies by the individual.

Most people here work more than one would in a traditional 9-5 job. (A common figure is 35-40 "focused" hours per week.) I think that reflects that they're passionate about their work rather than that they feel pressure from management to work a lot. We regularly check in with people about work-life balance and encourage them to work less if it seems this would be good for their happiness.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:43:02.578Z · EA · GW

We're in the process of reviewing our policies, but we're likely to settle on something like 25 paid days off (including sick days), 10 holiday days (with the option to work on holidays and use the paid time off elsewhere), several months of paid parental leave, and a flexible unpaid leave policy for people who want to take more time off. We are also flexible with respect to working from home.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:19:52.808Z · EA · GW

Perhaps other staff will chime in here, but my take: our pay is competitive and takes cost of living into account, and we are near public transportation, so I don't think the rents or commutes are a major issue. As a former NYC resident, I think the Bay Area is a great place to live (weather, food, etc.) and has a very strong effective altruist community. I don't see a lot of drawbacks to living here if you can make it work.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:16:11.082Z · EA · GW

Hm, I'm not sure why our form asks for more detail on undergrad relative to grad - we copied the form from GiveWell and may not have thought about it. It's possible this is because the form was being used in an earlier GiveWell search where few applicants had been to grad schools. I'll ask around about this.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:15:46.173Z · EA · GW

Broadly speaking, we're going to try to give people assignments that are relevant to our work and that we think include a lot of the core needed skills - things like evaluating a potential grant (or renewal) and writing up the case for or against. We'll evaluate these assignments, give substantial feedback, and iterate so that people improve. We'll also be providing resources for gaining background knowledge, such as "flex time," recommended reading lists and optional Q&A sessions. We've seen people improve a lot in the past and become core contributors, and think this basic approach is likely to lead to more of that.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:11:18.662Z · EA · GW

I would rate those about equally, though I'd add that GiveWell would prefer not to hire people whose main goal is to go to OP.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:59:20.032Z · EA · GW

We currently have a happy hour every 3 weeks and host group activities as well, including occasional parties and a multiple-day staff retreat this year. We want to make it easy for staff to socialize and be friends, without making it a requirement or an overly hard nudge (if people would rather stick to their work, that's fine by us).

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:56:23.481Z · EA · GW

We could certainly imagine ramping up grantmaking without a much better answer. As an institution we're often happy to go with a "hacky" approach that is suboptimal, but captures most of the value available under multiple different assumptions.

If someone at Open Phil has an idea for how to make useful progress on this kind of question in a reasonable amount of time, we'll very likely find that worthwhile and go forward. But there are lots of other things for Research Analysts to work on even if we don't put much more time into researching or reflecting on moral uncertainty.

Also note that we may pursue an improved understanding via grantmaking rather than via researching the question ourselves.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:52:37.039Z · EA · GW

All else equal, we consider applicants stronger when they have degrees in challenging fields from strong institutions. It’s not the only thing we’re looking at, even at that early stage. And the early stage is for filtering; ultimately, things like work trial assignments will be far more important to hiring decisions.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:51:25.990Z · EA · GW

This varies by the individual. Some Research Analysts are always working on a variety of things, while others have become quite specialized, largely according to their own interests and preferences.

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:50:16.634Z · EA · GW

We're certainly not using the same standards as academia! In general, we aim to base assignments on a combination of (1) how we judge what's most important to do (in terms of accomplishing as much good as possible) and (2) what the employees themselves are motivated and interested to work on (including their own judgments of how to do as much good as possible).

Comment by holdenkarnofsky on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T16:40:56.391Z · EA · GW

I'd recommend that recent grads looking to help with AI governance and policy apply for the Research Analyst position. With Research Analysts, we'll first focus on mentorship & training, then try to figure out where everyone can do the most good based on their interests and skills. Someone with a high aptitude for, and interest in, AI strategy would likely end up putting substantial time into that within a year or so (maybe less).

You can also check out roles at the Future of Humanity Institute.

Comment by holdenkarnofsky on Upcoming AMA with Luke Muehlhauser on consciousness and moral patienthood (June 28, starting 9am Pacific) · 2017-06-22T01:05:21.915Z · EA · GW

I super highly recommend reading this report. In full, including many of the appendices (and footnotes :) )

I thought it was really interesting, and helpful for thinking this question through and understanding the state of what evidence and arguments are out there (unfortunately there is much less to go on than I’d even expected, though).

I was the most proximate audience for the report, so discount my recommendation as much as feels appropriate with that in mind.

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-30T23:43:19.876Z · EA · GW

The principles were meant as descriptions, not prescriptions.

I'm quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: "I think that one of the best ways to learn is to share one's impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things." But because the risks are what they are, I've concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone's attitude toward low-stakes brainstorming from their public communications.

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-07T02:44:00.515Z · EA · GW

Michael, this post wasn't arguing that there are no benefits to public discourse; it's describing how my model has changed. I think the causal chain you describe is possible and has played out that way in some cases, but it seems to call for "sharing enough thinking to get potentially helpful people interested" rather than for "sharing thinking and addressing criticisms comprehensively (or anything close to it)."

The EA Forum counts for me as public discourse, and I see it as being useful in some ways, along the lines described in the post.

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-07T02:42:16.639Z · EA · GW

Hi John, thanks for the thoughts.

I agree with what you say about public discourse as an "advertisement" and "critical first step," and allude to this somewhat in the post. And we plan to continue a level of participation in public discourse that seems appropriate for that goal - which is distinct from the level of public discourse that would make it feasible for readers to understand the full thinking behind the many decisions we make.

I don't so much agree that there is a lot of low-hanging fruit to be had in terms of getting more potentially helpful criticism from the outside. We have published lists of questions and asked for help thinking about them (see this series from 2015 as well as this recent post; another recent example is the Worldview Diversification post, which ended with an explicit call for more ideas, somewhat along the lines you suggest). We do generally thank people for their input, make changes when warranted, and let people know when we've made changes (recent example from GiveWell).

And the issue isn't that we've gotten no input, or that all the input we've gotten has been low-quality. I've seen and had many discussions about our work with many very sharp people, including via phone and in-person research discussions. I've found these discussions helpful in the sense of focusing my thoughts on the most controversial premises, understanding where others are coming from, etc. But I've become fairly convinced - through these discussions and through simply reflecting on what kind of feedback I would be giving groups like GiveWell and Open Phil, if I still worked in finance and only engaged with their work occasionally - that it's unrealistic to expect many novel considerations to be raised by people without a great deal of context.

Even if there isn't low-hanging fruit, there might still be "high-hanging fruit." It's possible that if we put enough effort and creative thinking in, we could find a way to get a dramatic increase in the quantity and quality of feedback via public discourse. But we don't currently have any ideas for this that seem highly promising; my overall model of the world (as discussed in the previous paragraph) predicts that it would be very difficult; and the opportunity cost of such a project is higher than it used to be.

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-01T18:32:45.255Z · EA · GW

Thanks for the thoughts!

I'm not sure I fully understand what you're advocating. You talk about "only selectively engag[ing] with criticism" but I'm not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.

I agree that "we should be skeptical of our stories about why we do things, even after we try to correct for this." I'm not sure that the reasons I've given are the true ones, but they are my best guess. I note that the reasons I give here aren't necessarily very different from the reasons others making similar transitions would give privately.

I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-01T18:31:00.485Z · EA · GW

Thanks for the thoughts, Vipul! Responses follow.

(1) I'm sorry to hear that you've found my writing too vague. There is always a tradeoff between time spent, breadth of issues covered, and detail/precision. The posts you hold up as more precise are on narrower topics; the posts you say are too vague are attempts to summarize/distill views I have (or changes of opinions I've had) that stem from a lot of different premises, many hard to articulate, but that are important enough that I've tried to give people an idea of what I'm thinking. In many cases their aim is to give people an idea of what factors we are and aren't weighing, and to help people locate beliefs of ours they disagree (or might disagree) with, rather than to provide everything needed to evaluate our decisions (which I don't consider feasible).

While I concede that these posts have had limited precision, I strongly disagree with this: "the vagueness is not a bug, from your perspective, it's a corollary of trying to make your content really hard for people to take issue with." That is not my intention. The primary goal of these posts has been to help people understand where I'm coming from and where the most likely points of disagreement are likely to lie. Perhaps they failed at this (I suspect different readers feel differently about this), but that was what they were aiming to do, and if I hadn't thought they could do that, I wouldn't have written them.

(2) I agree with all of your thoughts here except for the way you've characterized my comments. Is there a part of this essay that you thought was making a universal claim about transparency, as opposed to a claim about my own experience with it and how it has affected my own behavior and principles? The quote you provide does not seem to point this way.

(3) My definition of "public discourse" does not exclude benefits that come from fundraising/advocacy/promotion. It simply defines "public discourse" as writing whose focus is on truth-seeking rather than those things. This post, and any Open Phil blog post, would count as "public discourse" by my definition, and any fundraising benefits of these posts would count as benefits of public discourse.

I also did not claim that the reputational effects of openness are skewed negative. I believe that the reputational effects of our public discourse have been net positive. I believe that the reputational effects of less careful public discourse would be skewed negative, and that has implications for how time-consuming it is for us to engage, which in turn has implications for how much we engage.

(4) We have incurred few costs from public discourse, but we are trying to avoid risks that we perceive. As for "who gets the blame," I didn't intend to cover that topic one way or the other in this post. The intent of the post was to help people understand how and why my attitude toward public discourse has changed and what to expect from me in the future.

Comment by holdenkarnofsky on Some Thoughts on Public Discourse · 2017-03-01T18:29:21.509Z · EA · GW

Thanks for the comments, everyone!

I appreciate the kind words about the quality and usefulness of our content. To be clear, we still have a strong preference to share content publicly when it seems it would be useful and when we don't see significant downsides. And generally, the content that seems most likely to be helpful has fairly limited overlap with the content that poses the biggest risks.

I have responded to questions and criticisms on the appropriate threads.

Comment by holdenkarnofsky on GiveWell and the problem of partial funding · 2017-02-17T05:14:13.135Z · EA · GW

(Continued from previous comment)

Thoughts on your recommendations. I appreciate your making suggestions, and providing helpful context on the spirit in which you intend them. Here I only address suggestions for Open Phil.

  • Maintaining a list of open investigations: I see some case for this, but at the moment we don't plan on it. I don't think we can succinctly and efficiently maintain such a list without incurring a number of risks (e.g., causing people to excessively plan on our support; causing controversy due to hasty communication or miscommunication). Instead, we encourage people who want to know whether we're working on something to contact us and ask.
  • We have considered and in some cases done some (limited) execution on all of the suggestions you make under "Symmetry," and all remain potential tools if we want to ramp up giving further in the future. I think they are all good ideas, perhaps things we should have done more of already, and perhaps things we will do more of later on. However, I do not think the situation is "symmetrical" as you imply because our mission - which we are building up expertise and capacity around optimizing for - is giving away large sums of money effectively and according to the basic stance of effective altruism. The same is not generally true of our grantees. We generally try to do something approximating "give to grantees up until the point where marginal dollars would be worse than our last dollar" (though of course very imprecisely and with many additional considerations). Finally, I'll add that any of the four options you list - and many more - are things we could probably find a way of doing if we put in some time and internal discussion, resulting in good outcomes. But we think that time and internal discussion is better spent on other priorities that will lead to better outcomes. In general, any new idea we pursue involves a fair amount of discussion and refinement, which itself has major opportunity costs, so we accept a degree of inertia in our policies and approaches.
  • For reasons stated above and in previous posts, I don't believe the optimal level of funding for top charities is 100% of the gap or 0%. I also wish to note that your comment "I expect fairly few donors would accept this offer. But it still seems like it would be a powerful, credible signal of cooperative intent." highlights what I suspect may be one of the most important disagreements underlying this discussion. As noted above, we are comfortable with "hacky" approaches to dilemmas that let us move on to our next priority, and we are very unlikely to undertake time-consuming projects with little expected impact other than to signal cooperative intent in a general and undirected way. For us, a disagreement whose importance is mostly symbolic is not likely to become a priority. We would be more likely to prioritize disagreements that implied we could do much more good (or much less harm) if we took some action, such that this action is competitive with our other priorities.
  • I think your final suggestion would have substantial costs, and don't agree that you've identified sufficient harms to consider it.

I'm not sure I've understood all of your points, but hopefully this is helpful in identifying which threads would be useful to pursue further. Thanks again for your thoughtful feedback.

Comment by holdenkarnofsky on GiveWell and the problem of partial funding · 2017-02-17T05:13:40.604Z · EA · GW

Hi Ben,

Thanks for putting so much thought into this topic and sharing your feedback.

I'm going to discuss the reasoning behind the "splitting" recommendation that was made in 2015, as well as our current stance, and how they relate to your points. I'll start with the latter because I think that will make this comment easier to follow. I'll then address some more specific points and suggestions.

I'm not addressing recommendations addressed to GiveWell - I think it will make more sense for someone more involved in GiveWell to do that - though I will address both the 2015 and 2016 decisions about how much to recommend that Good Ventures support GiveWell's top charities, because I was closely involved in those decisions.

Current stance on Good Ventures support for GiveWell's top charities. As noted here, we (Open Phil) currently feel that the "last dollar" probably beats GiveWell's top charities according to our (extrapolated) values. We are quite uncertain of this view at this time and are hoping to do a more thorough investigation and writeup this year. We recommended $50 million to top charities for the 2016 giving season, for reasons laid out in that post and not discussed in the original post on this thread.

You seem to find our take on the "last dollar" a difficult-to-justify conclusion (or at least difficult to square with the fact that we are currently well under eventual peak giving, and not closing the gap via the actions you list under "symmetry"). You argue that the key issue here is the question of returns to scale, and say that we should regrant to larger organizations if we think returns are increasing, and smaller organizations if returns are decreasing. But I don't think the question "Are returns to scale increasing or decreasing?" is a particularly core question here (nor does it have a single general answer). Instead, our reason for thinking our "last dollar" can beat top charities and many other options is largely bound up in our model of ourselves as people who aspire to become "experts" in the domain of giving away large amounts of money effectively and according to the basic stance of effective altruism. I've written about my model of broad market efficiency in the past; I don't think it is trivial to "beat the market," but nor do I think it is prohibitively difficult, and I expect that we can do so in the long run. Another key part of the view is that there is more than one plausible worldview under which it looks (in the long run) quite tractable to spend essentially arbitrary amounts of money in a way that has better value for money than top charities (this is also discussed in the post on our current view).

Previously, our best guess was different. We thought that the "last dollar" was worse than top charities - but not much worse, and with very low confidence. We fully funded things we thought were much better than the "last dollar" (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would've done slightly more good, but the dollars spent by others would've done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.

I think that an important factor playing into both decisions, and a potentially key factor causing you and me to see things differently, pertains to conservatism. For the 2015 decision in particular, we didn't have much time to think carefully about these issues, and "fully funding" might be the kind of thing we couldn't easily walk back (we worried about a consistent dynamic in which our entering a cause led to other donors' immediately fleeing it). It's often the case that when we need to make high-stakes decisions without sufficient time or information, we err on the side of preserving option value and avoiding particularly bad outcomes (especially those that pose risks to GiveWell or Open Phil as an organization); this often leads to "hacky" actions that are knowably not ideal for any particular set of facts and values, if we had confidently sorted these facts and values out (but we haven't).

Responses to more specific points

"First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?"

I don't think this is a case of "defecting" or "adversarial framing." We were trying to approximate the outcome we would've reached if we'd been able to have a friendly, open discussion and coordination with individual donors, which wasn't feasible.

"if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s 'fair share' seems more likely to be in excess of 80%, than a 50-50 split."

We expected individual giving to grow over time, and thought that it would grow less if we had a policy of fully funding top charities. Calculating "fair share" based on current giving alone, as opposed to giving capacity construed more broadly and over a longer term, would have created the kinds of problematic incentives we wrote that we were worried about. 50% is within the range of what I'd guess a long-term fair share would be; within that range, we chose 50% as a proportion that would (accurately) signal that the figure was fairly arbitrary, in order to commit credibly to splitting, as mentioned in the post.

"This ethical objection doesn’t make sense. It implies that it’s unethical to cooperate on the iterated Prisoner’s Dilemma."

The ethical objection was to being misleading, not to the game-theoretic aspects of the approach.

I don't follow your argument under "Influence via habituation vs track record." The reason there was "not enough money to cover the whole thing" was that we were unwilling to pay more than what we considered our fair share, due to the incentives it would create and the long-run implications for total positive impact. We were open about that. I also think that the "surface case" for low-engagement donors who didn't read our work was about as close to the truth as a surface case could be. (I would describe the "surface case" as something like: "If I give this money, then bednets will be delivered; if I do not, that will not happen." I do not believe that the majority of GiveWell donors - including very large donors - base their giving on Open Phil's opinions, or in many cases even know what Open Phil is.) I don't see how this situation implies any of your #1-#3, and I don't see how it is deceptive.

"Access via size" and "Independence via many funders" were not part of our reasoning.

(Continued in next comment)

Comment by holdenkarnofsky on How Should a Large Donor Prioritize Cause Areas? · 2016-05-03T19:16:17.312Z · EA · GW

I think the more uncertain you are, the more learning and option value matter, as well as some other factors I will probably discuss more in the future. I agree that committing to a cause, and helping support a field in its early stages, can increase room for more funding, but I think it's a pretty slow and unpredictable process. In the future we may see enough room for more funding in our top causes to transition to a more concentrated funding approach, but I think the tradeoffs implied in the OP have very limited relevance to the choices we're making today.

Comment by holdenkarnofsky on How Should a Large Donor Prioritize Cause Areas? · 2016-05-03T01:37:13.749Z · EA · GW

Thanks, all, for the very thoughtful post and comments!

At some point this year, I hope to write a post about our general reasons for wanting to put some resources into the causes that look best according to different plausible background worldviews and epistemologies. Dan Keys and Telofy touched on a lot of these reasons (especially Dan's #3 and #4).

I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving (those relating to farm animal suffering and direct existential risk) as massively and clearly better than others, with high certainty. If we agreed, our approach would be much more similar to what Michael suggests than it is now. We have big uncertainty about our cost-effectiveness estimates, especially as they pertain to issues like flow-through effects. I'll note that I've followed some of Michael's links but haven't ended up updating in the direction of more certainty about things he seems to be certain of (such as how we should weigh helping animals compared to helping humans).

We do think we've learned a lot about how to compare causes by exploring specific grants, and we think that in the long run, our current approach will yield important option value if we end up buying into a worldview/background epistemology that doesn't match our current best guess. It's also worth noting that our approach requires commitments to causes, so our choice of focus areas will change less frequently than our views (and with a lag).

I think our other biggest disagreement with Michael is about room for more funding. We are still ramping up knowledge and capacity and have certainly not maxed out what we can do in certain causes, including farm animal welfare, but I expect this to be pretty temporary. I expect that we will hit real bottlenecks to giving more pretty soon. In particular, I am highly skeptical that we could recommend $50 million with even reasonable effectiveness on potential risks from advanced artificial intelligence in the next year (though recommending smaller amounts will hopefully, over time, increase field capacity and make it possible to recommend much more later). We're not sure yet whether we want to prioritize wild animal suffering, but I think there is even more of a bottleneck here to effective spending in the reasonably near term.

Comment by holdenkarnofsky on Why the Open Philanthropy Project isn't currently funding organizations focused on promoting effective altruism · 2015-11-02T20:57:11.897Z · EA · GW

Thanks for the comments, all.

Telofy and kgallas: I'm not planning to write up an exhaustive list of the messages associated with EA that we're not comfortable with. We don't have full internal agreement on which messages are good vs. problematic, and writing up a list would be a bit of a project in itself. But I will give a couple of examples, speaking only for myself:

  1. I'm generally uncomfortable with (and disagree with) the "obligation" frame of EA. I'm particularly uncomfortable with messages along the lines of "The arts are a waste when there are people suffering," "You should feel bad about (or feel the need to defend) every dollar you spend on yourself beyond necessities," etc. I think messages along these lines make EA sound overly demanding/costly to affiliate with as well as intellectually misguided.

  2. I think there are a variety of messages associated with EA that communicate unwarranted confidence on a variety of dimensions, implying that we know more than we do about what the best causes are and to what extent EAs are "outperforming" the rest of the world in terms of accomplishing good. "Effective altruism could be the last social movement we ever need" and "Global poverty is a rounding error compared to other causes" are both examples of this; both messages have been prominently enough expressed to get into this article, and both messages are problematic in my view.

Telofy: my general answer on a given grant idea is going to be to ask whether it fits into any of our focus areas, and if not, to have a very high bar for it as a "one-off" grant. In this case, supporting ACE fits into the Farm Animal Welfare focus area, where we've recently made a new hire; it's too early to say where this sort of thing will rank in our priorities after Lewis has put some work into considering all the options.