My Q1 2022 donation to Free Migration Project 2021-12-25T02:38:25.913Z
Understanding Open Philanthropy's evolution on migration policy 2021-12-18T19:45:36.058Z
The positive case for a focus on achieving safe AI? 2021-06-25T04:01:24.056Z
How has biosecurity/pandemic preparedness philanthropy helped with coronavirus, and how might it help with similar future situations? 2020-03-13T18:50:11.290Z
Donor strategies for separating "how much" from "where" to donate 2019-09-16T00:54:26.443Z
My Q1 2019 EA Hotel donation 2019-04-01T02:23:23.107Z
My 2018 donations 2018-12-23T20:38:16.877Z
The AIDS/malaria puzzle: bleg 2017-09-26T15:55:37.077Z
Effective Altruism Forum web traffic from Google Analytics 2016-12-31T21:23:04.132Z
Looking for global health-related Wikipedia contributors 2016-12-16T22:19:23.712Z
GiveWell money moved in 2015: a review of my forecast and some future predictions 2016-05-15T20:41:08.108Z
Looking for Wikipedia article writers (topics include many of interest to effective altruists) 2016-04-17T04:37:21.267Z
Donation insurance 2015-12-20T22:33:57.990Z
Conditional donation commitment to GiveWell top-recommended charity 2015-12-20T06:16:41.829Z
GiveWell money moved forecasts and implications 2015-12-19T20:22:21.563Z
Should you donate to the Wikimedia Foundation? 2015-03-28T18:58:20.337Z


Comment by vipulnaik on Understanding Open Philanthropy's evolution on migration policy · 2021-12-20T04:48:43.395Z · EA · GW

Good point. My understanding is that Open Phil made a general decision to focus only on US policy for most of their policy areas, for the reason that there are high fixed costs to getting familiar with a policy space. In some areas like animal welfare they've gone beyond US policy, but those are areas where they are spending way more money.

Their grants to Labor Mobility Partnerships stand out as not being US-specific, though LaMP is still currently more focused on the US.

I do expect that if there are shovel-ready, easy-to-justify opportunities outside the US, Open Phil would take them.

Comment by vipulnaik on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-12-13T21:20:07.651Z · EA · GW

Hello! I'm wondering what implications the switch to rolling applications has for how payout reports are published. The payout reports don't include anything beyond April 1, 2021; previously there would be three reports per year, tied to the (discrete) grant rounds.

Comment by vipulnaik on Wikipedia editing is important, tractable, and neglected · 2021-12-03T22:37:48.669Z · EA · GW

Hi Darius!

I appreciate that you've raised this issue and provided a reasonably thorough discussion of it. I would like to highlight a bunch of aspects based on my experience editing Wikipedia as well as studying its culture in some depth. While the paid editing phase and the subsequent fallout inform my views partly, these are actually based on several years of experience before (and some after) that incident.

While none of what I say falsifies what you wrote, it is in tension with some of your tone and emphasis. So in some ways these observations are critical of your post.

How much reverence does Wikipedia's process deserve?

I think that, if your goal is to spend a lot of time editing Wikipedia, it's really important to study Wikipedia's policies -- both the de jure ones and the de facto ones. That's because the policies are not completely intuitive, and the enforcement can often be swift and unfriendly -- giving you little chance to recover once you get on the bad side of it.

So in that sense, it is important to respect and understand Wikipedia's policies.

But that is not the same as being reverent toward the policies and enforcement mechanisms. I think your post has some of that reverence, as well as a "just world" belief that the policies and their enforcement are sensible and just, and align with effective altruist ideals. For instance, you write:

Therefore, anyone considering making contributions to Wikipedia should become familiar with its rules, and in particular adhere to the requirement not to approach editing as an advocacy tool. This is important both because trying to paint an overly favourable picture of EA-related topics will, as Brian notes above, likely backfire, and because observing such a requirement is in line with EA's commitment to intellectual honesty and moral cooperation. Wikipedia is one of the world’s greatest altruistic projects—their contributors share many of our core values, and we should respect their norms and efforts to maintain Wikipedia’s high quality.


Don’t feel like you need to have read all articles about Wikipedia rules and norms before you can start to edit. While reading them upfront may help you avoid some frustrating experiences later, the biggest failure mode is getting overwhelmed and being discouraged from ever taking the first step on your editing journey. Most of Wikipedia’s rules and norms are commonsensical, and you are bound to become familiar with them as you gather editing experience.

In contrast, my take on understanding the Wikipedia system is that it bears many resemblances to other legal and bureaucratic systems -- many of the rules make sense in theory, and have good rationales, but their application is often really bad. Going in with a positive "just world" belief in Wikipedia seems like a recipe for falling down rather hard at the first incident. I think the best way is to be well-prepared in terms of understanding the dynamics and the kinds of attacks you may endure, so that then once you do get in there you have no false expectations, and if you do get into a fight you can bow down and stay cool without feeling rattled.

You've linked to Gwern's inclusionism article already; a few other links I recommend: Wikipedia Bureaucracy (continued), Robert Walker's answer on frustrating aspects of being a Wikipedia editor, and Gwern's piece on dark-side editing.

On that note, what kind of preparation is necessary?

Based on my experience editing Wikipedia, and seeing my edited articles spend several years surviving, growing, getting deleted, or shrinking -- all of which have happened to me -- I can say it's important to be prepared when editing Wikipedia in a few ways:

  • Prepare for your work getting deleted or maimed: On a process level, this means keeping off-Wikipedia backups (Issa and I implemented technical solutions to back up the content of articles we were editing automatically, in addition to manual syncing we did at every edit). During a mass deletion episode following the paid editing, we almost lost the content of several articles, but were fortunately able to retrieve it. At an emotional level, it means accepting the possibility that stuff you spent a lot of time writing can, sometimes immediately and sometimes after years, randomly get deleted or maimed beyond recognition. And even if reasons are proffered for the maiming or deletion, you are unlikely to consider them good reasons.

  • Prepare to be attacked or questioned in ways you might find ridiculous: This may not happen to you for years, and then may suddenly happen even if you are on your best behavior -- because somebody somewhere notices something. While there are a number of strategies to reduce the probability of this happening (don't get into fights, avoid editing controversial stuff, avoid overtly promotional or low-quality edits) they are no guarantee. And if you have a large corpus of edits, once somebody is suspicious of you, they can go after your whole body of work. The emotional and psychological preparation for that -- and the background knowledge of it so that you can make an informed decision to edit Wikipedia -- is important.
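The automated off-Wikipedia backup idea above can be sketched in a few lines. To be clear, this is a minimal illustration and not the tooling Issa and I actually used; it assumes the standard MediaWiki `action=raw` endpoint, and the function names are made up:

```python
import urllib.parse
import urllib.request

def raw_wikitext_url(title, domain="en.wikipedia.org"):
    """Build the URL that serves the current raw wikitext of a page."""
    query = urllib.parse.urlencode({"title": title, "action": "raw"})
    return "https://%s/w/index.php?%s" % (domain, query)

def backup_article(title, path):
    """Fetch the current wikitext of `title` and save one snapshot to `path`."""
    with urllib.request.urlopen(raw_wikitext_url(title)) as resp:
        text = resp.read().decode("utf-8")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

# Example (requires network access):
# backup_article("Mass deworming", "Mass_deworming.wiki")
```

A real setup would run something like this on a schedule and keep the snapshots under version control, so that a deleted article's last revision is always recoverable.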

A few specific tripping points for effective altruist projects that edit Wikipedia

If you do get into trouble on Wikipedia, keep in mind these likely truths about the other side (though this can vary a lot from situation to situation, and you could well be lucky enough for these not to apply to you):

  • The bulk of the people will be highly suspicious of you.
  • Those opposing you probably have a lot more time than you do and a better ability to navigate Wikipedia's channels.
  • They will not be impressed by your efforts to defend yourself, even against points you consider clearly illogical.
  • Efforts to point to noble goals (e.g. effective altruism) or measurement tools (e.g. pageviews) will make them more suspicious of you, as these will be taken as evidence of a conflict of interest.
  • Your efforts to recruit people through off-Wikipedia channels (e.g., this EA Forum post) may make matters worse, as it might lead to accusations of canvassing.
  • Being mindful of your feelings will not be a priority for them.

What kind of Wikipedia editing might still be safe and okay to do?

This will vary from person to person. I think the following are likely to be okay for anybody altruistically inclined but moderately risk-averse:

  • Drive-by fixes to spelling, grammar, punctuation, formatting, broken links, etc.: Once you have acquired basic familiarity with Wikipedia editing, making these fixes when you notice issues is quick and easy.
  • Substantive edits or even new page creations where you have fairly high confidence that your edits will pass under the radar of zealous attackers (this tends to work well for obscure but protected topics; some academic areas such as in higher mathematics could be like this).
  • Substantive edits or even new page creations where, even if the edit gets reverted or the page deleted, the output you create (in terms of update to your state of mind, or the off-Wikipedia copy of the updated content) makes it worthwhile.

A positive note to end on

I will end with a wholehearted commendation of the spirit of your post; as I see it, this is about being prosocial in a broad sense, "giving back" to a great resource, and finding opportunities to benefit multiple communities and work in a collaborative fashion with different groups to create more for the world. I generally favor producing public output while learning new topics; where the format and goals allow it, this could be Wikipedia pages! Issa Rice has even documented this "paper trail" approach I follow.

PS: I thank Issa Rice for some of the links and thoughts that I've included in this comment as well as for reviewing my draft of the comment. Responsibility for errors and omissions is fully mine; I did not incorporate all of Issa's feedback.

Comment by vipulnaik on Wikipedia editing is important, tractable, and neglected · 2021-11-29T18:50:08.166Z · EA · GW

Hi Linch! I have a loose summary of my sponsored Wikipedia editing efforts, which I have just updated to include more information and links.

For third-party coverage of the incident, see the Wayback Machine copy -- I'm linking to the Wayback Machine since that wiki seems to no longer exist; also, a warning that the site's general viewpoints are redpill, which might be a dealbreaker for some readers. But this particular article seems reasonably well done in terms of its reporting/coverage, and isn't too redpill.

Comment by vipulnaik on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-25T18:44:23.500Z · EA · GW

The link for Takeoff Speeds Discussion is erroring. It looks like the correct, public link is

Comment by vipulnaik on Announcing an updated drawing protocol for the donor lotteries · 2019-01-28T05:32:19.194Z · EA · GW

It looks like the NIST randomness beacon will be back in time for the draw date of the lottery: "NIST will reopen at 6:00 AM on Monday, January 28, 2019."

Might it make sense to return to the NIST randomness beacon for the drawing?

Comment by vipulnaik on In defence of epistemic modesty · 2017-10-30T01:00:23.246Z · EA · GW

The comments on "naming beliefs" by Robin Hanson (2008) appear to be where the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, "impression track records" by Katja (September 2017) is a recent blog post/article circulated in the rationalist community that revived the terminology.

Comment by vipulnaik on Pitfalls in Diversity Outreach · 2017-10-29T16:26:09.022Z · EA · GW

Do you still think so, in light of the heated discussion in the comments?

Comment by vipulnaik on Introducing fortify hEAlth: an EA-aligned charity startup · 2017-10-28T03:35:17.999Z · EA · GW

Against Malaria Foundation was started by a guy who had some business and marketing experience but no global health chops. It is now a GiveWell top charity.

Disclosure: I funded the creation of the latter page, which inspired the creation of the former.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T14:02:08.730Z · EA · GW

I'm not sure why you brought up the downvoting in your reply to my reply to your comment, rather than replying directly to the downvoted comment. To be clear, though, I did not downvote the comment, ask others to downvote the comment, or hear from others saying they had downvoted the comment.

Also, I could (and should) have been clearer that I was focusing only on points that I didn't see covered in the post, rather than providing an exhaustive list of points. I generally try to comment with marginal value-add rather than reiterating things already mentioned in the post, which I think is sound, but for others who don't know I'm doing that, it can be misleading. Thank you for making me notice that.


"""I think this may be part of the problem in this context. Some EAs seem to take the attitude (I'm exaggerating a bit for effect) that if there was a post on the internet about it once, it's been discussed."""

In my case, I was basing it on stuff explicitly, directly mentioned in the post on which I am commenting, and a prominently linked post. This isn't "there was a post on the internet about it once"; this is more like "it is mentioned right here, in this post". So I don't think my comment is an example of the problem you highlight.

Speaking to the general problem you claim happens, I think it is a reasonable concern. I don't generally endorse expecting people to have intricate knowledge of years' worth of community material. People who cite previous discussions should generally try to link as specifically as possible to them, so that others can easily know what they're talking about without having had a full map of past discussions.

But imo it's also bad to bring up points as if they are brand new, when they have already been discussed before, and especially when others in the discussion have already explicitly linked to past discussions of those points.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T02:16:59.959Z · EA · GW

I tried to avoid things that have already been discussed heavily and publicly in the community, and I think the math/philosopher angle is one that is often mentioned in the context of EA not being diverse enough. The post itself notes:

"""people who are both that and young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions."""

This is also mentioned in the post by Alexander Gordon-Brown that Kelly links to:

"""EA is heavy on mathematicians, programmers, economists and philosophers. Those groups can get a lot done, but they can't get everything done. If we want to grow, I think we could do with more PR types. Because we're largely web-based, people who understand how to make things visually appealing also seem valuable. My personal experience in London is that we would love more organisers, though I can imagine this varying by location."""

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:29:34.546Z · EA · GW

"I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes."

(1) Where did "straight" come into this picture? The author says that EAs are well-represented on sexual diversity (and maybe even overrepresented on some fairly atypical sexual orientations), and my comment (and the data I used) had nothing to say about sexual orientation.

(2) """it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes"""

I didn't say that desire to be effective and altruistic is heavily skewed toward men. I just said that membership in a specific community, or readership of a specific website, and things like that, can have significant gender skews, and that is not atypical. The audience for a specific community, like the effective altruist community, can be far smaller than the set of people with desire to be effective and altruistic.

For instance, if a fashion website has a 90% female audience (a not atypical number), that is not a claim that the "desire to look good" is that heavily skewed toward female. It means that the specific things that website caters to, the way it has marketed itself, etc. have resulted in it getting a female audience. Men could also desire to look good, albeit in ways that are very different from those catered to by that fashion website (or more broadly by the majority of present-day fashion websites).

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:16:19.699Z · EA · GW

I find it interesting that most of the examples given in the article conform to mainstream, politically correct opinion about who is and isn't overrepresented. A pretty similar article could be written about e.g. math graduate students with almost the exact list of overrepresented and underrepresented groups. In that sense it doesn't seem to get to the core of what unique blind spots or expansion problems EA might have.

An alternate perspective would be to look at minorities, subgroups, and geographical patterns that are way overrepresented in EAs relative to the world population, or even, say, the US population; this could help triangulate to blind spots in EA or ways that make it difficult for EA to connect with broader populations. A few things stand out.

Of these, I know at least (1) and (2) have put people off or been major points of concern.

(1) Heavy clustering in the San Francisco Bay Area and a few other population centers, excluding large numbers of people from being able to participate in EA while feeling a meaningful sense of in-person community. It doesn't help that the San Francisco Bay Area is one of the most notoriously expensive places in the world, and is located in a country (the United States) that is hard for most people to enter and live in.

(2) Overrepresentation of "poly" sexual orientations and behaviors relative to larger populations -- so that even those who aren't poly have trouble getting along in EA if they don't like rubbing shoulders with poly folks.

(3) Large proportion of people of Jewish descent. I don't think there's any problem with this, but some people might argue that this makes the ethics of EA heavily influenced by traditional Jewish ethical approaches, to the exclusion of other religious and ethical traditions. [This isn't just a reflection of the greater success of people of Jewish descent; I think people of Jewish descent are overrepresented among EAs even after controlling for education and income.]

(4) Overrepresentation of vegetarians and vegans. I'm not complaining, but others might argue that this reduces EAs' ability to connect with the culinary habits and food-related traditions of a lot of cultures.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T14:42:40.623Z · EA · GW

You report EA as being 70% male. How unusual is that for a skew? One comparison point, for which data is readily available, is the readership of websites that are open to read (no entry criteria, no member fees). Looking at the distribution of such websites, 70% seems like a relatively low level of skew. For instance, Politico and The Hill, politics news sites, see 70-75% male audiences, whereas a mainstream TV, entertainment, and celebrity site sees a 70% female audience.

(I'm not trying to pick anything too extreme; I'm picking things pretty close to the middle. A lot of topics have far more extreme skews, like programming, hardcore gaming, and fashion; there is more detail available on how the gender skew of websites differs based on the topic.)

Based on this, and similar data I've seen, a 70% skew in either gender direction feels pretty unremarkable to me in the context of today's broader society and the domain-specific skews that are common across both mainstream and niche domains. I expect something similar to be true for race/ethnicity based on the Quantcast and similar data but I haven't obtained that much familiarity with the numbers or their reliability.

Comment by vipulnaik on 80,000 Hours' 20 most enduringly popular pieces of research · 2017-10-24T15:31:09.907Z · EA · GW

Related: Top posts on LessWrong 1.0:

Mirror of the same post on LW 2.0 (but still top posts _of_ LW 1.0):

Disclosure: I sponsored work on this post.

Comment by vipulnaik on Effective Altruism Grants project update · 2017-10-04T16:22:41.560Z · EA · GW

Also related:

Comment by vipulnaik on Effective Altruism Grants project update · 2017-10-01T00:51:16.942Z · EA · GW

Thanks for the detailed post, Roxanne! I am a little confused by the status of the recipients and the way these grants are treated by recipients from an accounting/tax perspective.

First off, are all the grants made to individuals only, or are some of them made to corporations (such as nonprofits)? Your spreadsheet lists all the recipients as individuals, but the descriptions of the grants suggest that in at least some cases, the money is actually going to an organization that is (probably) incorporated. Three examples: Oliver Habryka for LessWrong 2.0 (which he has reported is a project under CFAR), Katja Grace for AI Impacts (which is a separate organization that used to be classified as a project of MIRI), and Kelly Witwicki (whose work is under the Sentience Institute). If the grant money for some grants is going to corporations rather than individuals, is there a way to see in which cases the grant is going to a corporation, and what the corporation is?

Secondly, I was wondering about the tax and reporting implications of the grants that are made to individuals. Do the receiving individuals have to treat the grants as personal income? What if somebody is coordinating a project involving multiple people and splitting the money across different people? Do you directly pay each of the individuals involved, or does the person doing the coordination receive the totality of the money as personal income and then distribute parts to the other people and expense those?

Comment by vipulnaik on Effective Altruism Grants project update · 2017-09-30T23:13:34.204Z · EA · GW

The number has now gone from 20,000 to 200,000. Is that what you intended? My crude calculation yields a figure closer to 20,000 than 200,000.

Comment by vipulnaik on Why donate to 80,000 Hours · 2017-09-18T01:31:05.085Z · EA · GW

I'm following up regarding this :).

Comment by vipulnaik on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T18:50:01.606Z · EA · GW

The subreddit stats used to be public (or rather, moderators could choose to make them public) but that option was removed by Reddit a few months ago.

I discussed Reddit stats a little bit in this article:

Comment by vipulnaik on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T18:47:16.772Z · EA · GW

I have been using PredictionBook for recording predictions related to GiveWell money moved; see for links to the predictions. Unfortunately searching on PredictionBook itself does not turn up all the predictions because they use Google, which does not index all pages or at least doesn't surface them in search results.

Comment by vipulnaik on Changes to the EA Forum · 2017-07-04T07:00:00.253Z · EA · GW

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment by vipulnaik on Update on Effective Altruism Funds · 2017-04-23T00:21:13.163Z · EA · GW

Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind it. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.

My main concern about this fund is that it's not really a "Global Health and Development" fund -- it's much more GiveWell-centric than global health- and development-centric. The decision to allocate all fund money to GiveWell's top charity reinforces some of my concerns, but it's actually something that is clear from the fund description.

From the description, it seems to be serving largely as a backup to GiveWell Incubation Grants (in cases where e.g. Good Ventures chooses not to fund the full amount) and as additional funding for GiveWell top charities.

This fund will support charities that the fund manager believes may be better in expectation than those recommended by GiveWell, a charity evaluator focused on outstandingly effective giving opportunities. For example, by pooling the funds of many individual donors, the fund could support new, but very promising global health charities in getting off the ground (e.g. Charity Science Health or No Lean Season). These organizations may not be able to meet GiveWell’s rigorous evaluation criteria at the moment, but may be able to meet the criteria in the future. If no such options are available, the fund will likely donate to GiveWell for granting. This means we think there is a strong likelihood that the fund will be at least as good as donating in accordance with GiveWell’s recommendations, but could be better in expectation.

Both the cited examples are recipients of GiveWell Incubation Grants, and in the pipeline for evaluation by GiveWell for top charity status. Even setting aside actual grantees, the value of the fund, according to the fund manager, is in terms of its value to GiveWell (emphasis mine):

Nonetheless, donating to this fund is valuable because it helps demonstrate to GiveWell that there is donor demand for higher-risk, higher-reward global health and development giving opportunities.

The GiveWell-centric nature of the fund is fine except that the fund's name suggests that it is a fund for global health and development, without affiliation to any institution.

Even beyond the GiveWell-as-an-organization-centered nature of the fund, there is a sense in which the fund reinforces the association of global health and development with quantifiable-and-low-risk, linear, easy buys. That association makes sense in the context of GiveWell (whose job it is to recommend linear-ish buys) but seems out of place to me here. Again quoting from the page about the fund:

Interventions in global health and development are generally tractable and have strong evidence to support them.

There are two distinct senses in which the statement could be interpreted:

  • There is large enough room for more funding for interventions in global health that have a strong evidence base, so that donors who want to stick to things with a strong evidence base won't run out of stuff to buy (i.e., lots of low-hanging fruit)
  • There's not much scope in global health for high-risk but high-expected value investments, because any good buy in global health would have a strong evidence base

I'd agree with the first interpretation, but the second interpretation seems quite false (looking at the Gates Foundation's portfolio shows a fair amount of risky, nonlinear efforts including new vaccine development, storage and surveillance technology breakthroughs, breakthroughs in toilet technology, etc.). The framing of the sentence, however, most naturally suggests the second interpretation, and moreover, may lead the reader to a careless conflation of the two. It seems to me like there's a lot of conflation in the EA community (and penumbra) between "global health and development" and "GiveWell current and potential top charities", and the setup of this EA Fund largely reflects that. So in that sense, my criticism isn't just of the fund but of what seems to me an implicit conflation.

Similar issues exist with two of the other funds, the animal welfare fund and the far future fund, but I think they are less concerning there. With "animal welfare" and "far future", the way the terms are used in EA Funds and in the EA community is different from the picture they'll conjure in the minds of people in general. But as far as I know, there isn't as much of an established, cohesive infrastructure of organizations, funding sources, etc. that is at odds with the EA community.* Whereas with global health and development, you have things like the WHO, Gates Foundation, Global Fund, and even an associated academic discipline, so the appropriation of the term for a fund that's somewhat of a GiveWell satellite seems jarring to me.

Some longer-term approaches that I think might help; obviously they wouldn't be changes you can make quickly:

(a) Rename funds so that the names capture more specifically the sort of things the funds are doing. e.g. if a fund is only being used for last-mile delivery of interventions as opposed to e.g. vaccine development, that can be specified within the fund name.

(b) Possibly have multiple funds within the same domain (e.g., global health & development) that capture different kinds of use cases (intervention delivery versus biomedical research) and have fund managers with relevant experience in the domains. e.g. it's possible that somebody with experience at the Gates Foundation, Global Fund, WHO, IHME, etc. could do fund allocation in some domains of global health and development better for some use cases.

Anyway, these are my thoughts. I'm not a contributor (or potential contributor, in the near term) to the funds, so take with appropriate amount of salt.

*It could be that if I had deeper knowledge of mainstream animal welfare and animal rights, or of mainstream far future stuff (like climate change) then I would find these jarring as well.

Comment by vipulnaik on Update on Effective Altruism Funds · 2017-04-22T15:53:10.408Z · EA · GW

I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don't see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or link to them from it)? That way anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.

The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2017-04-16T20:44:04.870Z · EA · GW

Public link with up-to-date data

Comment by vipulnaik on [deleted post] 2017-03-17T15:33:58.355Z

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in two ways: (1) the pages we edited, and the content we added, were disproportionately (though not exclusively) of interest to people in and around the EA-sphere, and (2) I selected some of the topics based on EA-aligned interests (an example would be global health and disease timelines).

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-03-02T00:17:15.351Z · EA · GW

Great points! (An upvote wasn't enough appreciation, hence the comment as well).

Comment by vipulnaik on Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) · 2017-02-26T02:10:09.765Z · EA · GW

Hi Dony,

The submission doesn't qualify as serious, and was past the deadline. So we won't be considering it.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-25T05:38:56.882Z · EA · GW

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague got 1 comment (a question that hasn't been answered), 1 comment (a single sentence praising the post), 6 comments, and 8 comments, respectively.

In contrast, the posts I identified as sufficiently specific (even though they tended toward the fairly technical side) got 17, 14, 27, and 7 comments, respectively.

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because it means more effort spent responding to and keeping up with comments.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:50.074Z · EA · GW

(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to the costs of publishing your work more clearly, yet you offer no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I am critical of your frustrating vagueness). This vagueness makes your claims about the risks of openness hard to evaluate in your case.

As a more general claim about being public, though, your position strikes me as misguided. The main obstacle to writing up stuff for the public is simply that writing takes a lot of time, but this is mostly a limitation on the part of the writer: the writer doesn't have a clear picture of what he or she wants to say, doesn't have a clear idea of how to convey it, or lacks the time and resources to put things together. Failure to do this is a failure on the part of the writer. Blaming readers for continually misinterpreting one's writing, or for carrying out witch hunts, is simply failing to take responsibility.

A more humble framing would highlight this fact and some of its difficult implications, e.g.: "As somebody in charge of a foundation that spends ~$100 million a year and recommends tens of millions in donations by others, I need to be very clear in my thinking and reasoning. Unfortunately, I have found that it's often easier and cheaper to spend millions of dollars in grants than to write up a clear public-facing document explaining the reasons for doing so. I'm very committed to writing publicly where possible (and you can see evidence of this in all the grant writeups for Open Phil and the detailed charity evaluations for GiveWell). However, there are many cases where writing up my reasoning is more daunting than signing off on millions of dollars in grants. I hope that we are able to figure out better approaches to reducing the costs of writing things up."

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:41.264Z · EA · GW

(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.

"By "public discourse," I mean communications that are available to the public and that are primarily aimed at clearly describing one's thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc."

If you exclude from public discourse any benefits pertaining to fundraising, advocacy, and promotion, then you are essentially stacking the deck against public discourse -- now any reputational or time-sink impacts are likely to be negative.

Here's an alternate perspective. Any public statement should be thought of both in terms of the object-level points it makes (the information it directly provides, or what it tries to convince people of), and secondarily in terms of how it affects the status and reputation of the person or organization making the statement, and/or their broader goals. For instance, when I wrote my post on Effective Altruism Forum web traffic, my direct goal was to provide information about traffic to the Effective Altruism Forum and what the patterns tell us about effective altruism movement growth, but an indirect goal was to highlight the value of data-driven analytics, and in particular website analytics, something I've championed in the past. Whether you choose to label the public statement as "fundraising", "advocacy", or something else is somewhat beside the point.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:24.276Z · EA · GW

(2) Overstated connotations of expertise with respect to the value of transparency and openness:

"Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits - more below). I'm aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse."

Your writing makes it appear as though you've left no stone unturned, tried every approach to transparency, and confirmed that the masses are found wanting. But digging into the facts supports a much weaker conclusion, namely: for the particular approach GiveWell used and the particular kind of content GiveWell shared, the people who responded in ways that made sense to you and were useful to you were restricted to a narrow pool. You offer no good reason why these findings would generalize to domains or expository approaches beyond the ones you've narrowly tried at GiveWell.

This doesn't mean GiveWell or Open Phil is obligated to try new approaches -- but it does suggest more humility in making claims about the broader value of transparency and openness.

There is a wealth of ways that people seek to make their work transparent. Public projects on GitHub make details of their code evolution and contributor list available by default, without any specific effort, because of the way the system is designed. This pays off to different extents for different kinds of projects: in some cases, there are a lot of issue reports and bugfixes from random strangers; in many others, nobody except the core contributors cares; in some, malicious folks find vulnerabilities in the code because it's so open. If you ran a few projects on GitHub and observed something about how frequently strangers make valuable commits or file bug reports, it would not behoove you to use that information to make broad claims about the value of putting projects on GitHub. Yet you seem to be doing just that based on a couple of things you ran (GiveWell, Open Phil).

Transparency and openness form a complex subject, and a lot of their value comes from a wide variety of downstream effects that apply differently in different contexts. Just a few of the considerations:

  • Precommitment, which gives more meaning to transparency (think research preregistration).
  • Transparent-by-definition processes and workflows (think tools like git on GitHub, or automatically and transparently updated accounting ledgers such as those on blockchains).
  • Computability and pluggability: stuff that is in a computable format and can therefore be plugged into other datasets or analyses with minimal effort by others, e.g., the Open Philanthropy Project grants database and the International Aid Transparency Initiative (both of which were used by Issa in collating summary information about grant trends and patterns), or donation logs (which I used to power donations lists).
  • Integrity and consistency forced by transparency: your data has to check out if you are making it transparently available. For example, when I made my contract work payments transparent, I had to make sure the entire payment system was consistent.

It seems that, at GiveWell, many of the key parts of transparency (precommitment, transparent-by-definition processes and workflows, computability and pluggability, integrity and consistency) are in minimal use. Given this rather limited use of transparency (which could be fine for you), it really doesn't make sense to argue broadly about the value of being transparent.

Here is what I'd consider a better way to frame this:

"At GiveWell, we made some of our reasoning and the output of our work transparent, and reaped a variety of benefits. However, we did not get widespread engagement from the general public for our work. Getting engagement from the general public was something we wanted and hoped to achieve but not the main focus of our work. We couldn't figure out the right strategy for doing it, and have deprioritized it. I hope that others can benefit from what worked and didn't work in our efforts to engage the public with our research, and come up with better strategies to engender public engagement. I should be clear that I am not making any broader claims about the value of transparency in contexts beyond ours."

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:06.193Z · EA · GW

(1) Frustrating vagueness and seas of generality: This post, as well as many other posts you have recently written, struck me as fairly vague. Even the posts where you were trying to be concrete were really hard for me to parse and get a grip on your precise arguments.

I didn't really reflect on this much with the previous posts, but reading your current post sheds some light: the vagueness is not a bug, from your perspective, it's a corollary of trying to make your content really hard for people to take issue with. And I think therein lies the problem. I think of specificity, falsifiability, and concreteness as keys to furthering discourse and helping actually converge on key truths and correcting error. By glorifying the rejection of these virtues, I think your writing does a disservice to public discourse.

For a point of contrast, several posts from GiveWell and Open Phil were, in my view, sufficiently specific that they added value to a conversation. Notice how most of those posts make a large number of very concrete claims and highlight their opposition to very specific other parties. That makes them targets of criticism and insult, but it really helps delineate an issue and pushes conversations forward. I'm interested in seeing more of this sort of stuff and fewer overly cautious, diplomatic posts like yours.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:16:53.312Z · EA · GW

Thank you for the illuminating post, Holden. I appreciate you taking the time to write this despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, which I attempt to articulate below.

In brief: (1) frustrating vagueness and seas of generality in your current post and recent posts; (2) overstated connotations of expertise with regard to transparency and openness; (3) artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative; and (4) repeatedly shifting the locus of blame to external critics rather than owning up to responsibility.

I'll post each point as a reply to this comment, since the overall text exceeds the length limit for a single comment.

Comment by vipulnaik on Changes in funding in the AI safety field · 2017-02-04T01:45:48.279Z · EA · GW

I appreciate posts like this -- they are very helpful (and would be more so if I were thinking of donating money or contributing in kind to the topic).

Comment by vipulnaik on Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) · 2017-01-28T01:20:29.021Z · EA · GW

Awesome, excited to see you flesh out your thinking and submit!

Comment by vipulnaik on How Should I Spend My Time? · 2017-01-17T21:42:38.097Z · EA · GW

"So if I could be expected to work 4380 hours over 2016-2019, earn $660K (95%: $580K to $860K) and donate $160K, that’s an expected earnings of $150.68 per hour worked. [...] I consider my entire earnings to be the altruistic value of this project."

What about taxes?

Comment by vipulnaik on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T06:24:38.903Z · EA · GW

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money moved jump from 2015 to 2016 will be less, or possibly even negative). I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment by vipulnaik on My 5 favorite posts of 2016 · 2017-01-06T16:29:31.962Z · EA · GW

My thoughts precisely!

Comment by vipulnaik on Tell us how to improve the forum · 2017-01-03T16:15:11.065Z · EA · GW

I haven't been able to successfully log in to EAF from my phone (which is a pretty old Windows Mobile phone, so might be something unique to it). That probably increases the number of pageviews I generated for EAF, because I revisit on desktop to leave a comment :).

Comment by vipulnaik on Individual Project Fund: Further Details · 2017-01-03T05:37:47.245Z · EA · GW

Great to hear about this, Jacob! As somebody who funds a lot of loosely similar activities in the "EA periphery" I have some thoughts and experience on the challenges and rewards of funding. Let me know if you'd like to talk about it.

You can get a list of stuff I've funded at

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2017-01-01T21:45:51.751Z · EA · GW

Thanks, I added the explication of the acronym at the beginning.

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2016-12-31T21:32:29.493Z · EA · GW

You can get data on the Facebook group(s) using tools like -- however, they can take a while to load all the data. A full analysis of that data would be worth another post.

Comment by vipulnaik on 2016 AI Risk Literature Review and Charity Comparison · 2016-12-31T20:48:40.252Z · EA · GW

Why does the post have "2017" in the title?

Comment by vipulnaik on Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation · 2016-12-31T18:48:05.575Z · EA · GW

Some people in the effective altruist community have argued that small donors should accept that they will use marginal charitable dollars less efficiently than large actors such as Open Phil, for lack of time, skill, and scale to find and choose between charitable opportunities. Sometimes this is phrased as advice that small donors follow GiveWell's recommendations, while Open Phil pursues other causes and strategies such as scientific research and policy.

The argument that I have heard is a little different. It is that the entry of big players like Open Phil has made it harder to have the old level of marginal impact with one's donation.


Marginal impact of one's donation now that Open Phil is plucking a lot of low-hanging fruit < Marginal impact of one's donation a few years ago ... (1)

Whereas the claim that you are critiquing is:

Marginal impact of one's donation < Marginal impact of Open Phil's donation ... (2)

Why does (1) matter? Some donors have fixed charity budgets, i.e., they wish to donate a certain amount every year to charity. For them, the challenge is just to find the best use of money, so even if marginal impacts are down across the board, it doesn't matter much, because all that matters is relative impact.

For other donors and potential donors, charitable donations compete with other uses of money. Therefore, whether or not one donates to charity, and how much one donates to charity, would depend on how large the marginal impact is. If the entry of players like Open Phil has reduced the marginal impact achievable, then that's good reason to donate less.

So I feel that the argument you are attacking isn't actually the right one to attack. Though you do address (1) a bit in the post, I think it would have made more sense to make it the main focus.

Comment by vipulnaik on Why donate to 80,000 Hours · 2016-12-28T23:09:46.767Z · EA · GW

As further evidence, a survey of meta-charity donors carried out by Open Phil and 80,000 Hours found that they expect to give about £4.5m this year, and not all will go to meta-charities. Given that CEA is aiming to raise £2.5m-£5m alone, the capacity of meta-charity donors is going to be used up this year. This means we need new meta-charity donors, or good meta opportunities will go unfunded.

Is there more information about this survey currently available, and/or are there plans to release more information? This is the first I am hearing about the survey, and it sounds like something that deserves standalone coverage.

Comment by vipulnaik on What is the expected value of creating a GiveWell top charity? · 2016-12-19T20:09:08.927Z · EA · GW

Thanks for updating the post! I still see the somewhat outdated sentence:

For example, a fifth top charity would likely lead Good Ventures to make an additional incentive grant of $2.5M that they would not have otherwise made.

Since GiveWell now has seven top charities, that should read "eighth" rather than "fifth".

Comment by vipulnaik on What is the expected value of creating a GiveWell top charity? · 2016-12-18T21:04:02.092Z · EA · GW

Your estimates could probably benefit a bit more by explicitly incorporating the 2016 top charity recommendations as well as information released in GiveWell's blog post about the subject. In particular:

  • Good Ventures is expected to donate $50 million to GiveWell top charities (+ special recognition charities) and is likely to allocate a similar amount for the next few years. This should be incorporated into estimation of total annual money moved (mostly in terms of reducing variance).

Due to the growth of the Open Philanthropy Project this year and its increased expectation of the size and value of the opportunities it may have in the future, we expect Good Ventures to set a budget of $50 million for its contributions to GiveWell top charities. The Open Philanthropy Project plans to write more about this in a future post on its blog.

  • The "top charity incentive" grant is now set at $2.5 million, up from $1 million (and therefore it is now 5% of Good Ventures' share of donations). This should factor into the estimate of the money moved to any charity. In particular, it sets a lower bound on absolute money moved, though of course the top charity incentive could change.

  • The addition of new 2016 top charities as well as the change to top charity incentive also make this part of your post outdated:

For example, a fifth top charity would likely lead Good Ventures to make an additional incentive grant of $1M that they would not have otherwise made

If your post was drafted prior to the release of the new top charities and you didn't get a chance to fully update it with the new information, it would be helpful to mention that in the post.

Comment by vipulnaik on Should you donate to the Wikimedia Foundation? · 2016-12-17T01:22:27.340Z · EA · GW

See also my recent post for more updates on editing and improving Wikipedia.

Comment by vipulnaik on What the EA community can learn from the rise of the neoliberals · 2016-12-11T00:45:05.948Z · EA · GW

Your post is yeoman's work and much appreciated.

There were a few areas where your reading of history seems to differ from mine, as well as a bunch of key distinctions that I believe should have made it into a piece of this length.

First, I think the piece gives too much credit to, and puts too much focus on, Hayek as an intellectual architect of neoliberalism. Hayek's work was influential, as was his impact on Fisher, but I don't think Hayek's work served as a blueprint for neoliberalism.

The significant focus on Hayek is coupled with a lack of focus on the key philosophical and methodological distinctions, and on actual successes and failures.

Philosophical and methodological distinctions

Neoliberalism isn't a school of economics. There were several fairly distinct schools of economics that can broadly be classified as neoliberal. The tradition Hayek was part of was the Austrian school. The Austrian school has a vibrant community (one that has flourished online), but it represents a fairly small minority of economists, and it has pretty significant methodological differences with mainstream economics, mostly in terms of rejecting some of mainstream economics' efforts at quantification. Notably, Austrians also look at money differently than monetarists do. With that said, Hayek's branch of the Austrian school has embraced many parts of mainstream economics.

And then there are the schools of economics that broadly fall under "neoclassical economics", such as the Chicago School, which uses a pretty large amount of quantification and uses price theory (inherently quantitative) as its base. Although Hayek interacted with much of the Chicago School and contributed somewhat to its thought, he isn't one of its central figures; several of its key figures predate Hayek. Unlike the Austrian school, the Chicago School has had a lot of success in getting mainstream recognition. The Chicago School is probably a key part of neoliberalism as people use the term, but, with the exception of a couple of people (most notably Milton Friedman), it had few explicit links with the intellectual activist movement to champion neoliberalism.

And then there are a bunch of other schools of thought, like New Keynesianism, that can also be broadly considered neoliberal (an example of a New Keynesian is Greg Mankiw, a former George W. Bush adviser), even though they are in some sense a continuation of the old Keynesianism.

Related to these fairly distinct (and separately motivated and originated) schools of thought are the different political philosophies that get bunched together as neoliberalism. Probably the most distinctive (and most minority) philosophy is modern libertarianism. This political philosophy and its associated intellectual infrastructure are what can be traced most closely to the sort of deliberate efforts you allude to (Hayek, Fisher, etc.), though a number of other key figures also show up (such as the Austrian economist and radical anarcho-capitalist Murray Rothbard, the explainer Leonard Read, and billionaire backers the Koch brothers). Libertarianism, which emphasizes both economic and "social" freedom, has had important successes and spillovers even though it hasn't caught on as a philosophy: things like opposition to conscription (a direct success) and opposition to the War on Drugs (a position that would later penetrate mainstream liberal views). And then there are also other non-libertarian but market-friendly liberal and market-friendly conservative think tanks and institutes that have flourished in recent decades.

Overall, I would say that the growth of "neoliberalism" has involved some good initial planning by key figures but resembles a Hayekian spontaneous order more than the execution of Hayek's central plan.

Actual successes and failures

The article makes neoliberalism appear to be a huge success. Many of the leading proponents of the various schools of neoliberalism take a fairly different view. For instance, when Hayek wrote "The Road to Serfdom", the non-war US federal welfare state was fairly small. Then, in the 1960s, welfare was expanded significantly. In the 1970s there were huge amounts of additional regulation that (depending on the school) could be treated as big negatives. Reaganism dialed back some of the changes, without fundamentally reversing them, just dialing them down in quantity. In the United States, according to various measures, economic freedom has been flat or has declined somewhat rather than moving steadily toward more freedom. (Globally, economic freedom as measured by various indices has increased, mostly as communist regimes have ended and some big economies like China and India have moved in a pro-market direction.)