My Q1 2019 EA Hotel donation

2019-04-01T02:23:23.107Z · score: 105 (40 votes)
Comment by vipulnaik on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-28T05:32:19.194Z · score: 2 (2 votes) · EA · GW

It looks like the NIST randomness beacon will be back in time for the draw date of the lottery. https://www.nist.gov/programs-projects/nist-randomness-beacon says "NIST will reopen at 6:00 AM on Monday, January 28, 2019."

Might it make sense to return to the NIST randomness beacon for the drawing?
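If the draw does return to the beacon, the mapping from a beacon pulse to a winning ticket could be sketched as follows. This is a hypothetical illustration: the draw rule and ticket count are made up for the example, and the API URL in the comment is an assumption rather than the lottery's actual protocol.

```python
def draw_winner(beacon_output_hex: str, num_tickets: int) -> int:
    """Map a NIST beacon pulse's output value (a hex string) to a
    winning ticket number in [0, num_tickets)."""
    # Interpret the pulse's output value as a big integer and reduce it
    # modulo the number of tickets. For ticket counts tiny relative to
    # the 512-bit output space, the modulo bias is negligible.
    value = int(beacon_output_hex, 16)
    return value % num_tickets

# Stand-in pulse output; in practice this would be fetched from the
# beacon after the announced draw time (assumed v2 endpoint:
# https://beacon.nist.gov/beacon/2.0/pulse/time/<unix-millis>).
pulse = "a" * 128  # 512 bits of hex, illustrative value only
winner = draw_winner(pulse, 100)
```

Because the pulse is published and signed by NIST, anyone can rerun the same computation after the draw and verify the winner independently.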

My 2018 donations

2018-12-23T20:38:16.877Z · score: 37 (12 votes)
Comment by vipulnaik on In defence of epistemic modesty · 2017-10-30T01:00:23.246Z · score: 6 (6 votes) · EA · GW

The comments on naming beliefs by Robin Hanson (2008) appear to be where the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, impression track records by Katja (September 2017) is a recent blog post/article, circulated in the rationalist community, that revived the terminology.

Comment by vipulnaik on Pitfalls in Diversity Outreach · 2017-10-29T16:26:09.022Z · score: 4 (4 votes) · EA · GW

Still think so, in light of the heated discussion in the comments at http://effective-altruism.com/ea/1g3/why_how_to_make_progress_on_diversity_inclusion/ ?

Comment by vipulnaik on Introducing fortify hEAlth: an EA-aligned charity startup · 2017-10-28T03:35:17.999Z · score: 7 (7 votes) · EA · GW

Against Malaria Foundation was started by a guy who had some business and marketing experience but no global health chops. It is now a GiveWell top charity.

https://issarice.com/against-malaria-foundation

https://timelines.issarice.com/wiki/Timeline_of_Against_Malaria_Foundation

Disclosure: I funded the creation of the latter page, which inspired the creation of the former.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T14:02:08.730Z · score: 7 (7 votes) · EA · GW

I'm not sure why you brought up the downvoting in your reply to my reply to your comment, rather than replying directly to the downvoted comment. To be clear, though, I did not downvote the comment, ask others to downvote the comment, or hear from others saying they had downvoted the comment.

Also, I could (and should) have been clearer that I was focusing only on points that I didn't see covered in the post, rather than providing an exhaustive list of points. I generally try to comment with marginal value-add rather than reiterating things already mentioned in the post, which I think is sound, but for others who don't know I'm doing that, it can be misleading. Thank you for making me notice that.

Also:

"""I think this may be part of the problem in this context. Some EAs seem to take the attitude (i'm exaggerating a bit for effect) that if there was a post on the internet about it once, it's been discussed."""

In my case, I was basing it on stuff explicitly, directly mentioned in the post on which I am commenting, and a prominently linked post. This isn't "there was a post on the internet about it once"; this is more like "it is mentioned right here, in this post". So I don't think my comment is an example of the problem you highlight.

Speaking to the general problem you claim happens, I think it is a reasonable concern. I don't generally endorse expecting people to have intricate knowledge of years' worth of community material. People who cite previous discussions should generally try to link as specifically as possible to them, so that others can easily know what they're talking about without having had a full map of past discussions.

But imo it's also bad to bring up points as if they are brand new, when they have already been discussed before, and especially when others in the discussion have already explicitly linked to past discussions of those points.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T02:16:59.959Z · score: 3 (3 votes) · EA · GW

I tried to avoid things that have already been discussed heavily and publicly in the community, and I think the math/philosopher angle is one that is often mentioned in the context of EA not being diverse enough. The post itself notes:

"""people who are both that and young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions."""

This is also mentioned in the post by Alexander Gordon-Brown that Kelly links to: http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/

"""EA is heavy on mathematicians, programmers, economists and philosophers. Those groups can get a lot done, but they can't get everything done. If we want to grow, I think we could do with more PR types. Because we're largely web-based, people who understand how to make things visually appealing also seem valuable. My personal experience in London is that we would love more organisers, though I can imagine this varying by location."""

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:29:34.546Z · score: 7 (7 votes) · EA · GW

"I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes."

(1) Where did "straight" come into this picture? The author says that EAs are well-represented on sexual diversity (and maybe even overrepresented on some fairly atypical sexual orientations), and my comment (and the data I used) had nothing to say about sexual orientation.

(2) """it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes"""

I didn't say that desire to be effective and altruistic is heavily skewed toward men. I just said that membership in a specific community, or readership of a specific website, and things like that, can have significant gender skews, and that is not atypical. The audience for a specific community, like the effective altruist community, can be far smaller than the set of people with desire to be effective and altruistic.

For instance, if a fashion website has a 90% female audience (a not atypical number), that is not a claim that the "desire to look good" is that heavily skewed toward female. It means that the specific things that website caters to, the way it has marketed itself, etc. have resulted in it getting a female audience. Men could also desire to look good, albeit in ways that are very different from those catered to by that fashion website (or more broadly by the majority of present-day fashion websites).

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:16:19.699Z · score: 16 (24 votes) · EA · GW

I find it interesting that most of the examples given in the article conform to mainstream, politically correct opinion about who is and isn't overrepresented. A pretty similar article could be written about e.g. math graduate students, with almost exactly the same list of overrepresented and underrepresented groups. In that sense it doesn't seem to get at the core of what unique blind spots or expansion problems EA might have.

An alternate perspective would be to look at minorities, subgroups, and geographical patterns that are way overrepresented in EAs relative to the world population, or even, say, the US population; this could help triangulate to blind spots in EA or ways that make it difficult for EA to connect with broader populations. A few things stand out.

Of these, I know at least (1) and (2) have put off people or been major points of concern.

(1) Heavy clustering in the San Francisco Bay Area and a few other population centers, excluding large numbers of people from being able to participate in EA while feeling a meaningful sense of in-person community. It doesn't help that the San Francisco Bay Area is one of the most notoriously expensive regions in the world, and also located in a country (the United States) that is hard for most people to enter and live in.

(2) Overrepresentation of "poly" sexual orientations and behaviors relative to larger populations -- so that even those who aren't poly have trouble getting along in EA if they don't like rubbing shoulders with poly folks.

(3) Large proportion of people of Jewish descent. I don't think there's any problem with this, but some people might argue that this makes the ethics of EA heavily influenced by traditional Jewish ethical approaches, to the exclusion of other religious and ethical traditions. [This isn't just a reflection of greater success of people of Jewish descent; I think EAs are overrepresented among Jews even after education and income controls].

(4) Overrepresentation of vegetarians and vegans. I'm not complaining, but others might argue that this reduces EAs' ability to connect with the culinary habits and food-related traditions of a lot of cultures.

Comment by vipulnaik on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T14:42:40.623Z · score: 18 (30 votes) · EA · GW

You report EA as being 70% male. How unusual a skew is that? One comparison point, for which data is abundant, is the readership of websites that are open-to-read (no entry criteria, no member fees). Looking at the distribution of such websites, 70% seems like a relatively low end of skew. For instance, Politico and The Hill, politics news sites, see 70-75% male audiences (https://www.quantcast.com/politico.com#demographicsCard and https://www.quantcast.com/thehill.com#demographicsCard) whereas nbc.com, a mainstream TV, entertainment, and celebrity site, sees a 70% female audience: https://www.quantcast.com/nbc.com#demographicsCard

(I'm not trying to pick anything too extreme; I'm picking things pretty close to the middle. A lot of topics have far more extreme skews, like programming, hardcore gaming, and fashion; see https://www.wikihow.com/Understand-Your-Website-Audience-Profile#Understanding_the_gender_composition_and_index_of_your_website_sub for more details on how the gender skew of websites differs based on the topic.)

Based on this, and similar data I've seen, a 70% skew in either gender direction feels pretty unremarkable to me in the context of today's broader society and the domain-specific skews that are common across both mainstream and niche domains. I expect something similar to be true for race/ethnicity based on the Quantcast and similar data but I haven't obtained that much familiarity with the numbers or their reliability.

Comment by vipulnaik on 80,000 Hours' 20 most enduringly popular pieces of research · 2017-10-24T15:31:09.907Z · score: 4 (4 votes) · EA · GW

Related: Top posts on LessWrong 1.0: http://lesswrong.com/lw/owa/lesswrong_analytics_february_2009_to_january_2017/

Mirror of the same post on LW 2.0 (but still top posts _of_ LW 1.0): https://www.lesserwrong.com/posts/SWNn53RryQgTzT7NQ/lesswrong-analytics-february-2009-to-january-2017

Disclosure: I sponsored work on this post.

Comment by vipulnaik on Effective Altruism Grants project update · 2017-10-04T16:22:41.560Z · score: 1 (1 votes) · EA · GW

Also related: https://www.facebook.com/vipulnaik.r/posts/10211030780941382

Comment by vipulnaik on Effective Altruism Grants project update · 2017-10-01T00:51:16.942Z · score: 1 (1 votes) · EA · GW

Thanks for the detailed post, Roxanne! I am a little confused by the status of the recipients and the way these grants are treated by recipients from an accounting/tax perspective.

First off, are all the grants made to individuals only, or are some of them made to corporations (such as nonprofits)? Your spreadsheet lists all the recipients as individuals, but the descriptions of the grants suggest that in at least some cases, the money is actually going to an organization that is (probably) incorporated. Three examples: Oliver Habryka for LessWrong 2.0 (which he has reported at http://lesswrong.com/r/discussion/lw/pes/lw_20_strategic_overview/ is a project under CFAR), Katja Grace for AI Impacts (which is a separate organization, that used to be classified as a project of MIRI), and Kelly Witwicki (whose work is under the Sentience Institute). If the grant money for some grants is going to corporations rather than individuals, is there a way to see in which cases the grant is going to a corporation, and what the corporation is?

Secondly, I was wondering about the tax and reporting implications of the grants that are made to individuals. Do the receiving individuals have to treat the grants as personal income? What if somebody is coordinating a project involving multiple people and splitting the money across different people? Do you directly pay each of the individuals involved, or does the person doing the coordination receive the totality of the money as personal income and then distribute parts to the other people and expense those?

Comment by vipulnaik on Effective Altruism Grants project update · 2017-09-30T23:13:34.204Z · score: 2 (2 votes) · EA · GW

It has now gone from 20,000 to 200,000. Is that what you intended? My crude calculation yields a number closer to 20,000 than 200,000.

The AIDS/malaria puzzle: bleg

2017-09-26T15:55:37.077Z · score: 6 (6 votes)
Comment by vipulnaik on Why donate to 80,000 Hours · 2017-09-18T01:31:05.085Z · score: 2 (2 votes) · EA · GW

I'm following up regarding this :).

Comment by vipulnaik on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T18:50:01.606Z · score: 0 (0 votes) · EA · GW

The subreddit stats used to be public (or rather, moderators could choose to make them public) but that option was removed by Reddit a few months ago.

https://www.reddit.com/r/ModSupport/comments/6atvgi/upcoming_changes_view_counts_users_here_now_and/

I discussed Reddit stats a little bit in this article: https://www.wikihow.com/Understand-Your-Website-Traffic-Variation-with-Time

Comment by vipulnaik on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T18:47:16.772Z · score: 1 (1 votes) · EA · GW

I have been using PredictionBook for recording predictions related to GiveWell money moved; see http://effective-altruism.com/ea/xn/givewell_money_moved_in_2015_a_review_of_my/#predictions-for-2016 for links to the predictions. Unfortunately searching on PredictionBook itself does not turn up all the predictions because they use Google, which does not index all pages or at least doesn't surface them in search results.

Comment by vipulnaik on Changes to the EA Forum · 2017-07-04T07:00:00.253Z · score: 3 (3 votes) · EA · GW

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment by vipulnaik on Update on Effective Altruism Funds · 2017-04-23T00:21:13.163Z · score: 5 (7 votes) · EA · GW

Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind it. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.

My main concern about this fund is that it's not really a "Global Health and Development" fund -- it's much more GiveWell-centric than global health- and development-centric. The decision to allocate all fund money to GiveWell's top charity reinforces some of my concerns, but it's actually something that is clear from the fund description.

From the description, it seems to be serving largely as a backup to GiveWell Incubation Grants (in cases where e.g. Good Ventures chooses not to fund the full amount) and as additional funding for GiveWell top charities.

"""This fund will support charities that the fund manager believes may be better in expectation than those recommended by GiveWell, a charity evaluator focused on outstandingly effective giving opportunities. For example, by pooling the funds of many individual donors, the fund could support new, but very promising global health charities in getting off the ground (e.g. Charity Science Health or No Lean Season). These organizations may not be able to meet GiveWell’s rigorous evaluation criteria at the moment, but may be able to meet the criteria in the future. If no such options are available, the fund will likely donate to GiveWell for granting. This means we think there is a strong likelihood that the fund will be at least as good as donating in accordance with GiveWell’s recommendations, but could be better in expectation."""

Both the cited examples are recipients of GiveWell Incubation Grants, and in the pipeline for evaluation by GiveWell for top charity status. Even setting aside actual grantees, the value of the fund, according to the fund manager, is in terms of its value to GiveWell (emphasis mine):

"""Nonetheless, donating to this fund is valuable because it helps demonstrate to GiveWell that there is donor demand for higher-risk, higher-reward global health and development giving opportunities."""

The GiveWell-centric nature of the fund is fine except that the fund's name suggests that it is a fund for global health and development, without affiliation to any institution.

Even beyond the GiveWell-as-an-organization-centered nature of the fund, there is a sense in which the fund reinforces the association of global health and development with quantifiable-and-low-risk, linear, easy buys. That association makes sense in the context of GiveWell (whose job it is to recommend linear-ish buys) but seems out of place to me here. Again quoting from the page about the fund:

"""Interventions in global health and development are generally tractable and have strong evidence to support them."""

There are two distinct senses in which the statement could be interpreted:

  • There is large enough room for more funding for interventions in global health that have a strong evidence base, so that donors who want to stick to things with a strong evidence base won't run out of stuff to buy (i.e., lots of low-hanging fruit)
  • There's not much scope in global health for high-risk but high-expected value investments, because any good buy in global health would have a strong evidence base

I'd agree with the first interpretation, but the second interpretation seems quite false (looking at the Gates Foundation's portfolio shows a fair amount of risky, nonlinear efforts including new vaccine development, storage and surveillance technology breakthroughs, breakthroughs in toilet technology, etc.). The framing of the sentence, however, most naturally suggests the second interpretation, and moreover, may lead the reader to a careless conflation of the two. It seems to me like there's a lot of conflation in the EA community (and penumbra) between "global health and development" and "GiveWell current and potential top charities", and the setup of this EA Fund largely reflects that. So in that sense, my criticism isn't just of the fund but of what seems to me an implicit conflation.

Similar issues exist with two of the other funds, the animal welfare fund and the far future fund, but I think they are less concerning there. With "animal welfare" and "far future", the way the terms are used in EA Funds and in the EA community differs from the picture they'll conjure in the minds of people in general. But as far as I know, there isn't an established, cohesive infrastructure of organizations, funding sources, etc. that is at odds with the EA community.* Whereas with global health and development, you have things like the WHO, the Gates Foundation, the Global Fund, and even an associated academic discipline, so the appropriation of the term for a fund that's somewhat of a GiveWell satellite seems jarring to me.

Some longer-term approaches that I think might help; obviously they wouldn't be changes you can make quickly:

(a) Rename funds so that the names capture more specifically the sort of things the funds are doing. e.g. if a fund is only being used for last-mile delivery of interventions as opposed to e.g. vaccine development, that can be specified within the fund name.

(b) Possibly have multiple funds within the same domain (e.g., global health & development) that capture different kinds of use cases (intervention delivery versus biomedical research) and have fund managers with relevant experience in the domains. e.g. it's possible that somebody with experience at the Gates Foundation, Global Fund, WHO, IHME, etc. could do fund allocation in some domains of global health and development better for some use cases.

Anyway, these are my thoughts. I'm not a contributor (or potential contributor, in the near term) to the funds, so take with appropriate amount of salt.

*It could be that if I had deeper knowledge of mainstream animal welfare and animal rights, or of mainstream far future stuff (like climate change) then I would find these jarring as well.

Comment by vipulnaik on Update on Effective Altruism Funds · 2017-04-22T15:53:10.408Z · score: 4 (4 votes) · EA · GW

I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don't see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or linked to from it)? That way anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.

The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2017-04-16T20:44:04.870Z · score: 1 (1 votes) · EA · GW

Public link with up-to-date data https://www.reddit.com/r/EffectiveAltruism/about/traffic

Comment by vipulnaik on [deleted post] 2017-03-17T15:33:58.355Z

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in these two ways: (1) The pages we edited, and the content we added, were disproportionately (though not exclusively) of interest to people in and around the EA-sphere, and (2) Some of the topics I worked on were selected based on EA-aligned interests (an example would be global health and disease timelines).

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-03-02T00:17:15.351Z · score: 1 (1 votes) · EA · GW

Great points! (An upvote wasn't enough appreciation, hence the comment as well).

Comment by vipulnaik on Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) · 2017-02-26T02:10:09.765Z · score: 0 (2 votes) · EA · GW

Hi Dony,

The submission doesn't qualify as serious, and was past the deadline. So we won't be considering it.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-25T05:38:56.882Z · score: 6 (6 votes) · EA · GW

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn't been answered)

http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)

http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments

http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/ got 8 comments

In contrast, the posts I identified as sufficiently specific (even though they tended on the fairly technical side)

http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ got 17 comments

http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ got 14 comments

http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms got 27 comments

http://blog.givewell.org/2016/12/12/amf-population-ethics/ got 7 comments

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because it means more effort is necessary to respond to and keep up with comments.
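Averaging the comment counts listed above makes the gap concrete (the grouping into "vague" and "specific" is my own classification from the list):

```python
# Comment counts on the four posts I identified as clearly vague,
# and on the four I identified as sufficiently specific.
vague = [1, 1, 6, 8]
specific = [17, 14, 27, 7]

mean_vague = sum(vague) / len(vague)          # 4.0 comments per post
mean_specific = sum(specific) / len(specific)  # 16.25 comments per post
```

By this crude measure, the specific posts drew roughly four times the engagement of the vague ones.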

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:50.074Z · score: 3 (13 votes) · EA · GW

(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to the costs of publishing your work more clearly, yet you give no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I am critical of your frustrating vagueness). This vagueness makes your claims about the risks of openness hard to evaluate in your case.

As a more general claim about being public, though, your claim strikes me as misguided. The main obstacle to writing up stuff for the public is simply that writing things up takes a lot of time, and this is mostly a limitation on the part of the writer: the writer doesn't have a clear picture of what he or she wants to say, doesn't have a clear idea of how to convey the idea clearly, or lacks the time and resources to put things together. Failure here is a failure on the part of the writer. Blaming readers for continually misinterpreting one's writing, or for carrying out witch hunts, is simply failing to take responsibility.

A more humble framing would highlight this fact, and some of its difficult implications, e.g.: "As somebody in charge of a foundation that is spending ~$100 million a year and recommending tens of millions in donations by others, I need to be very clear in my thinking and reasoning. Unfortunately, I have found that it's often easier and cheaper to spend millions of dollars in grants than write up a clear public-facing document on the reasons for doing so. I'm very committed to writing publicly where it is possible (and you can see evidence of this in all the grant writeups for Open Phil and the detailed charity evaluations for GiveWell). However, there are many cases where writing up my reasoning is more daunting than signing off on millions of dollars in money. I hope that we are able to figure out better approaches to reducing the costs of writing things up."

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:41.264Z · score: 7 (7 votes) · EA · GW

(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.

"By "public discourse," I mean communications that are available to the public and that are primarily aimed at clearly describing one's thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc."

If you exclude from public discourse any benefits pertaining to fundraising, advocacy, and promotion, then you are essentially stacking the deck against public discourse -- now any reputational or time-sink impacts are likely to be negative.

Here's an alternate perspective. Any public statement should be thought of both in terms of the object-level points it is making (specifically, the information it is directly providing or what it is trying to convince people of), and secondarily in terms of how it affects the status and reputation of the person or organization making the statement, and/or their broader goals. For instance, when I wrote http://effective-altruism.com/ea/15o/effective_altruism_forum_web_traffic_from_google/ my direct goal was to provide information about web traffic to the Effective Altruism Forum and what the patterns tell us about effective altruism movement growth, but an indirect goal was to highlight the value of using data-driven analytics, and in particular website analytics, something I've championed in the past. Whether you choose to label the public statement as "fundraising", "advocacy", or whatever, is somewhat beside the point.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:24.276Z · score: 8 (12 votes) · EA · GW

(2) Overstated connotations of expertise with respect to the value of transparency and openness:

"Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits - more below). I'm aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse."

Your writing makes it appear as though you've left no stone unturned, trying every approach to transparency, and found the masses wanting. But digging into the facts supports only a much weaker conclusion, which is: for the particular approach that GiveWell used and the particular kind of content that GiveWell shared, the people who responded in ways that made sense to you and were useful to you were restricted to a narrow pool. You offer no good reason why these findings would generalize to domains or expository approaches beyond the ones you've narrowly tried at GiveWell.

This doesn't mean GiveWell or Open Phil is obligated to try new approaches -- but it does suggest more humility in making claims about the broader value of transparency and openness.

There is a wealth of ways that people seek to make their work transparent. Public projects on GitHub make details about both their code evolution and contributor list available by default, without putting in any specific effort into it, because of the way the system is designed. This pays off to different extents for different kinds of projects; in some cases, there are a lot of issue reports and bugfixes from random strangers, in many others, nobody except the core contributors cares. In some, malicious folks find vulnerabilities in the code because it's so open. If you ran a few projects on GitHub and observed something about how frequently strangers make valuable commits or file bug reports, it would not behoove you to then use that information to make broad claims about the value of putting projects on GitHub. Well, you seem to be doing the same based on a couple of things you ran (GiveWell, Open Phil).

Transparency/openness is a complex subject, and a lot of its value comes from a wide variety of downstream effects that apply differently in different contexts. Just a few of the considerations:

  • Precommitment, which gives more meaning to transparency (think research preregistration)
  • Transparent-by-definition processes and workflows (think tools like git on GitHub, or automatically and transparently updated accounts ledgers such as those on blockchains)
  • Computability and pluggability: stuff that is in a computable format and can therefore be plugged into other datasets or analyses with minimal effort by others, e.g., the Open Philanthropy grants database and the International Aid Transparency Initiative (both of which were used by Issa in collating summary information about grant trends and patterns), or donation logs (which I used to power the donations lists at https://donations.vipulnaik.com/)
  • Integrity and consistency forced by transparency: basically, your data has to check out if you are making it transparently available; e.g., when I moved all my contract work payments to https://contractwork.vipulnaik.com/ , I had to make sure the entire payment system was consistent

It seems that, at GiveWell, many of the key parts of transparency (precommitment, transparent-by-definition processes and workflows, computability and pluggability, integrity and consistency) are in minimal use. Given this rather limited use of transparency (which could well be the right choice for you), it really doesn't make sense to argue broadly about the value of being transparent.

Here is what I'd consider a better way to frame this:

"At GiveWell, we made some of our reasoning and the output of our work transparent, and reaped a variety of benefits. However, we did not get widespread engagement from the general public for our work. Getting engagement from the general public was something we wanted and hoped to achieve but not the main focus of our work. We couldn't figure out the right strategy for doing it, and have deprioritized it. I hope that others can benefit from what worked and didn't work in our efforts to engage the public with our research, and come up with better strategies to engender public engagement. I should be clear that I am not making any broader claims about the value of transparency in contexts beyond ours."

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:17:06.193Z · score: 15 (19 votes) · EA · GW

(1) Frustrating vagueness and seas of generality: This post, as well as many other posts you have recently written (such as http://www.openphilanthropy.org/blog/radical-empathy , http://www.openphilanthropy.org/blog/worldview-diversification , http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing , http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/), struck me as fairly vague. Even the posts where you were trying to be concrete (e.g., http://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about , http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity) were really hard for me to parse; I struggled to get a grip on your precise arguments.

I didn't really reflect on this much with the previous posts, but reading your current post sheds some light: the vagueness is not a bug from your perspective; it's a corollary of trying to make your content really hard for people to take issue with. And I think therein lies the problem. I think of specificity, falsifiability, and concreteness as keys to furthering discourse, actually converging on key truths, and correcting error. By glorifying the rejection of these virtues, I think your writing does a disservice to public discourse.

For a point of contrast, here are some posts from GiveWell and Open Phil that I feel were sufficiently specific that they added value to a conversation: http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ , http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ , http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms , http://blog.givewell.org/2016/12/12/amf-population-ethics/ -- notice how most of these posts make a large number of very concrete claims and highlight their opposition to very specific other parties, which makes them targets of criticism and insult, but really helps delineate an issue and pushes conversations forward. I'm interested in seeing more of this sort of stuff and less of overly cautious diplomatic posts like yours.

Comment by vipulnaik on Some Thoughts on Public Discourse · 2017-02-24T21:16:53.312Z · score: 9 (11 votes) · EA · GW

Thank you for the illuminating post, Holden. I appreciate you taking the time to write this, despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, which I attempt to articulate below.

In brief: (1) Frustrating vagueness and seas of generality in your current post and recent posts, (2) Overstated connotations of expertise with regards to transparency and openness, (3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative, (4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility.

I'll post each point as a reply comment to this since the overall comment exceeds the length limits for a comment.

Comment by vipulnaik on Changes in funding in the AI safety field · 2017-02-04T01:45:48.279Z · score: 8 (8 votes) · EA · GW

I appreciate posts like this -- they are very helpful (and would be more so if I were thinking of donating money or contributing in kind to the topic).

Comment by vipulnaik on Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) · 2017-01-28T01:20:29.021Z · score: 2 (2 votes) · EA · GW

Awesome, excited to see you flesh out your thinking and submit!

Comment by vipulnaik on How Should I Spend My Time? · 2017-01-17T21:42:38.097Z · score: 1 (1 votes) · EA · GW

"So if I could be expected to work 4380 hours over 2016-2019, earn $660K (95%: $580K to $860K) and donate $160K, that’s an expected earnings of $150.68 per hour worked. [...] I consider my entire earnings to be the altruistic value of this project."

What about taxes?

Comment by vipulnaik on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T06:24:38.903Z · score: 14 (14 votes) · EA · GW

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money, as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money moved jump from 2015 to 2016 will be less, or possibly even negative). I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment by vipulnaik on My 5 favorite posts of 2016 · 2017-01-06T16:29:31.962Z · score: 0 (0 votes) · EA · GW

My thoughts precisely!

Comment by vipulnaik on Tell us how to improve the forum · 2017-01-03T16:15:11.065Z · score: 1 (1 votes) · EA · GW

I haven't been able to successfully log in to EAF from my phone (which is a pretty old Windows Mobile phone, so might be something unique to it). That probably increases the number of pageviews I generated for EAF, because I revisit on desktop to leave a comment :).

Comment by vipulnaik on Individual Project Fund: Further Details · 2017-01-03T05:37:47.245Z · score: 1 (1 votes) · EA · GW

Great to hear about this, Jacob! As somebody who funds a lot of loosely similar activities in the "EA periphery" I have some thoughts and experience on the challenges and rewards of funding. Let me know if you'd like to talk about it.

You can get a list of stuff I've funded at https://contractwork.vipulnaik.com

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2017-01-01T21:45:51.751Z · score: 1 (1 votes) · EA · GW

Thanks, I added the explication of the acronym at the beginning.

Comment by vipulnaik on Effective Altruism Forum web traffic from Google Analytics · 2016-12-31T21:32:29.493Z · score: 4 (4 votes) · EA · GW

You can get data on the Facebook group(s) using tools like http://sociograph.io -- however, they can take a while to load all the data. A full analysis of that data would be worth another post.

Effective Altruism Forum web traffic from Google Analytics

2016-12-31T21:23:04.132Z · score: 10 (8 votes)
Comment by vipulnaik on 2016 AI Risk Literature Review and Charity Comparison · 2016-12-31T20:48:40.252Z · score: 1 (1 votes) · EA · GW

Why does the post have "2017" in the title?

Comment by vipulnaik on Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation · 2016-12-31T18:48:05.575Z · score: 0 (0 votes) · EA · GW

Some people in the effective altruist community have argued that small donors should accept that they will use marginal charitable dollars less efficiently than large actors such as Open Phil, for lack of time, skill, and scale to find and choose between charitable opportunities. Sometimes this is phrased as advice that small donors follow GiveWell's recommendations, while Open Phil pursues other causes and strategies such as scientific research and policy.

The argument that I have heard is a little different. It is that the entry of big players like Open Phil has made it harder to have the old level of marginal impact with one's donation.

Basically:

Marginal impact of one's donation now that Open Phil is plucking a lot of low-hanging fruit < Marginal impact of one's donation a few years ago ... (1)

Whereas the claim that you are critiquing is:

Marginal impact of one's donation < Marginal impact of Open Phil's donation ... (2)

Why does (1) matter? Some donors have fixed charity budgets, i.e., they wish to donate a certain amount every year to charity. For them, then, the challenge is just to find the best use of money, so even if marginal impacts are down across the board, it doesn't matter much because all that matters is relative impact.

For other donors and potential donors, charitable donations compete with other uses of money. Therefore, whether or not one donates to charity, and how much one donates to charity, would depend on how large the marginal impact is. If the entry of players like Open Phil has reduced the marginal impact achievable, then that's good reason to donate less.

So I feel that the argument you are attacking isn't actually the correct one to attack. Though you do address (1) a bit in the post, I think it would have made more sense to make it the main focus.

Comment by vipulnaik on Why donate to 80,000 Hours · 2016-12-28T23:09:46.767Z · score: 0 (0 votes) · EA · GW

As further evidence, a survey of meta-charity donors carried out by Open Phil and 80,000 Hours found that they expect to give about £4.5m this year, and not all will go to meta-charities. Given that CEA is aiming to raise £2.5m-£5m alone, the capacity of meta-charity donors is going to be used up this year. This means we need new meta-charity donors, or good meta opportunities will go unfunded.

Is there more information about this survey currently available, and/or are there plans to release more information? This is the first I am hearing about the survey, and it sounds like something that deserves standalone coverage.

Comment by vipulnaik on What is the expected value of creating a GiveWell top charity? · 2016-12-19T20:09:08.927Z · score: 1 (1 votes) · EA · GW

Thanks for updating the post! I still see the somewhat outdated sentence:

For example, a fifth top charity would likely lead Good Ventures to make an additional incentive grant of $2.5M that they would not have otherwise made.

Since GiveWell now has seven top charities, that should read "eighth" rather than "fifth".

Comment by vipulnaik on What is the expected value of creating a GiveWell top charity? · 2016-12-18T21:04:02.092Z · score: 0 (0 votes) · EA · GW

Your estimates could probably benefit a bit more by explicitly incorporating the 2016 top charity recommendations as well as information released in GiveWell's blog post about the subject. In particular:

  • Good Ventures is expected to donate $50 million to GiveWell top charities (+ special recognition charities) and is likely to allocate a similar amount for the next few years. This should be incorporated into estimation of total annual money moved (mostly in terms of reducing variance).

Due to the growth of the Open Philanthropy Project this year and its increased expectation of the size and value of the opportunities it may have in the future, we expect Good Ventures to set a budget of $50 million for its contributions to GiveWell top charities. The Open Philanthropy Project plans to write more about this in a future post on its blog.

  • The "top charity incentive" grant is now set at $2.5 million, up from $1 million (and therefore it is now 5% of Good Ventures' share of donations). This should factor into the estimate of the money moved to any charity. In particular, it sets a lower bound on absolute money moved, though of course the top charity incentive could change.

  • The addition of new 2016 top charities as well as the change to top charity incentive also make this part of your post outdated:

For example, a fifth top charity would likely lead Good Ventures to make an additional incentive grant of $1M that they would not have otherwise made

If your post was drafted prior to the release of the new top charities and you didn't get a chance to update it fully to take the new information into account, it would be helpful to mention that in the post.

Comment by vipulnaik on Should you donate to the Wikimedia Foundation? · 2016-12-17T01:22:27.340Z · score: 0 (0 votes) · EA · GW

See also my recent post http://effective-altruism.com/ea/150/looking_for_global_healthrelated_wikipedia/ for more updates on editing and improving Wikipedia.

Looking for global health-related Wikipedia contributors

2016-12-16T22:19:23.712Z · score: 6 (6 votes)
Comment by vipulnaik on What the EA community can learn from the rise of the neoliberals · 2016-12-11T00:45:05.948Z · score: 6 (6 votes) · EA · GW

Your post is yeoman's work and much appreciated.

There were a few areas where your reading of history seems to differ from mine, as well as a bunch of key distinctions that I believe should have made it into a piece of this length.

First, I think the piece gives too much credit to and puts too much focus on Hayek as an intellectual architect of neoliberalism. Hayek's work was influential, as was his impact on Fisher, but I don't think Hayek's vision served as a blueprint for neoliberalism.

The significant focus on Hayek is coupled with a lack of focus on the key philosophical and methodological distinctions, and on the actual successes and failures.

Philosophical and methodological distinctions

Neoliberalism isn't a school of economics. There were several fairly distinct schools of economics that can broadly be classified as neoliberal. The tradition that Hayek was part of was the Austrian school. The Austrian school has a vibrant community (one that has flourished online), but it is a fairly small minority of economists. And it has pretty significant methodological differences with mainstream economics, mostly in terms of rejecting some of mainstream economics' efforts at quantification. Notably, Austrians also have a different way of looking at money than monetarists do. With that said, Hayek's branch of the Austrian school has embraced many parts of mainstream economics.

And then there are the schools of economics that broadly fall under "neoclassical economics", such as the Chicago School, which uses a pretty large amount of quantification and uses price theory (inherently quantitative) as its base. Although Hayek did interact with a lot of the Chicago School and contributed somewhat to its thought, he isn't one of its central figures: https://en.wikipedia.org/wiki/Chicago_school_of_economics#Scholars (the last few predate Hayek). Unlike the Austrian school, the Chicago School has had a lot of success in getting mainstream recognition. The Chicago School is probably a key part of neoliberalism as people refer to the term, but, with the exception of a couple of people (mostly Milton Friedman), it had little by way of explicit links with the intellectual activist movement to champion neoliberalism.

And then there are a bunch of other schools of thought, like New Keynesianism, that can also be broadly considered neoliberal (one example of a New Keynesian is Greg Mankiw, former George W. Bush adviser), though they are in some sense a continuation of the old Keynesianism.

Related to these fairly distinct (and separately motivated and originated) schools of thought are the different political philosophies that get bunched as neoliberalism. Probably the most distinctive (and most minority) philosophy is modern libertarianism. This political philosophy and the associated intellectual infrastructure are what can be traced most closely to the sort of deliberate efforts you allude to (Hayek, Fisher, etc.), though a number of other key figures also show up (such as the Austrian economist and radical anarcho-capitalist Murray Rothbard, explainer Walter Reed, and billionaire backers the Koch brothers). Libertarianism, which focuses on both economic and "social" freedom, has had important successes and spillovers even if it hasn't caught on as a philosophy (things like opposition to conscription, a direct success, and opposition to the War on Drugs, a position that would soon penetrate mainstream liberal views). And then there are also other non-libertarian but market-friendly liberal and market-friendly conservative think tanks and institutes that have flourished in recent decades.

Overall, I would say that the growth of "neoliberalism" has involved some good initial planning by key figures but resembles a Hayekian spontaneous order more than the execution of Hayek's central plan.

Actual successes and failures

The article makes neoliberalism appear to be a huge success. Many of the leading proponents of the various schools of neoliberalism take a fairly different view. For instance, when Hayek wrote "The Road to Serfdom", the non-war US federal welfare state was fairly small. Then, in the 1960s, welfare was expanded significantly. In the 1970s there were huge amounts of additional regulation that (depending on the school) could be treated as big negatives. Reaganism dialed back some of these changes without fundamentally reversing them, reducing them in degree rather than in kind. In the United States, according to various measures, economic freedom has been flat or has declined somewhat rather than moving steadily toward more freedom. (Globally, economic freedom as measured by various indices has increased, mostly as communist regimes have ended and some big economies like China and India have moved in a pro-market direction.)

Comment by vipulnaik on Contra the Giving What We Can pledge · 2016-12-04T20:57:02.408Z · score: 2 (2 votes) · EA · GW

"The general consensus is that utility of money goes as log(income), so giving a fixed percentage is more painful at lower incomes than higher ones"

Seems to me from the math that if utility is literally log(income), then giving a fixed percentage of income has exactly the same utility cost regardless of income level.
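A quick numeric check of this point (a sketch in Python; the income figures are illustrative, not from the post): under u = log(income), the cost of donating a fraction f is log(income) − log((1 − f)·income) = −log(1 − f), with income cancelling out.

```python
import math

def utility_cost(income, fraction):
    """Utility lost by donating `fraction` of income, assuming u = log(income)."""
    return math.log(income) - math.log(income * (1 - fraction))

# The cost collapses to -log(1 - fraction), the same at any income level:
print(utility_cost(30_000, 0.10))   # low earner donating 10%
print(utility_cost(300_000, 0.10))  # 10x the income, identical utility cost
print(-math.log(1 - 0.10))          # the closed form, -log(0.9)
```

So under the literal log assumption, a 10% pledge is equally "painful" at every income; the claim that it is more painful at lower incomes requires utility to be more concave than log.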

Comment by vipulnaik on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T05:44:28.909Z · score: 3 (5 votes) · EA · GW

The jargon used in this post is confusing. An "Open Letter" addressed to Gleb was indeed drafted by Jeff and others, but that's not the document that was published. As Jeff writes at http://www.jefftk.com/p/details-behind-the-inin-document:

I had initially hoped to create both a listing of concerns and an "open letter" that summarized the problem and recommended a course of action to the community, but we weren't able to agree on what that should be. Instead we decided we would just post the evidence doc to the EA forum, and let the community go from there.

Perhaps you drafted your post earlier, when Jeff was still planning to publish the open letter?

Comment by vipulnaik on Should you switch away from earning to give? Some considerations. · 2016-08-27T08:32:16.073Z · score: 0 (0 votes) · EA · GW

I see your point about long disclosures being cumbersome. I believe a better solution is to have a canonical long disclosure and simply to link to it with a brief description. I just added a canonical long disclosure about my relationship with GiveWell at http://vipulnaik.com/givewell/ and I have edited my two recent EAF posts about them to include a disclosure link to that. I will try to do the same for any further posts or comments I write about GiveWell, and write similar canonical long disclosures to link to for topics that I frequently write about and have had long, complicated associations with.

Comment by vipulnaik on Should you switch away from earning to give? Some considerations. · 2016-08-27T07:08:02.623Z · score: 0 (0 votes) · EA · GW

Did you mean "blurb" instead of "barb"?

Comment by vipulnaik on Should you switch away from earning to give? Some considerations. · 2016-08-27T06:42:28.303Z · score: -3 (9 votes) · EA · GW

I would like to see clearer disclosure of your institutional ties, insofar as knowledge of these ties might affect people's assessment of the direction in which your advice might be biased. Proactive disclosure would also help you preempt criticism or dismissal of your advice due to your institutional affiliations.

I'm also curious for others' thoughts on whether such disclosure would be helpful.

Here's a suggested disclosure.

"I am affiliated with the Centre for Effective Altruism (CEA) [description] and 80000 Hours [description]. I have written a book on the effective altruism movement and have been described as a "co-founder" of effective altruism. I also gave the closing keynotes at Effective Altruism Global in 2015 and 2016. Views expressed here are solely my own but are informed by my experience working at CEA and 80000 Hours and interacting with people in the context of the effective altruism movement. While I have a vested personal interest (in financial and prestige terms) in increased funding flowing to the effective altruism movement and to the two institutions I am affiliated with, I believe that my advice is not compromised by this vested interest."

GiveWell money moved in 2015: a review of my forecast and some future predictions

2016-05-15T20:41:08.108Z · score: 12 (12 votes)

Looking for Wikipedia article writers (topics include many of interest to effective altruists)

2016-04-17T04:37:21.267Z · score: 10 (12 votes)

Donation insurance

2015-12-20T22:33:57.990Z · score: 6 (8 votes)

Conditional donation commitment to GiveWell top-recommended charity

2015-12-20T06:16:41.829Z · score: 3 (3 votes)

GiveWell money moved forecasts and implications

2015-12-19T20:22:21.563Z · score: 8 (8 votes)

Should you donate to the Wikimedia Foundation?

2015-03-28T18:58:20.337Z · score: 17 (13 votes)