Posts

Racial Demographics at Longtermist Organizations 2020-05-01T16:02:16.202Z
Which Community Building Projects Get Funded? 2019-11-13T20:31:17.209Z

Comments

Comment by anonymouseaforumaccount on Announcing Effective Altruism Ventures · 2020-07-06T16:38:24.387Z · EA · GW

I’m glad someone is asking what happened with EA Ventures (EAV): it’s an important question that hasn’t yet received a satisfactory answer.

When EAV was discontinued, numerous people asked for a post-mortem of some type (e.g. here, here, and here) to help capture learning opportunities. But nothing formal was ever published. The “Celebrating Failed Projects” panel eventually shared a few lessons, but someone would need to watch an almost hour-long video (much of which does not relate to EAV) to see them all. And the lessons seem trivial (“if you’re doing a project which gives money to people, you need to have that money in your bank account first”) about as often as they seem insightful (“Finding excellent entrepreneurs is much, much harder than I thought it was going to be”).

If a proper post-mortem with community input had been conducted, I’m confident many other lessons would emerge*, including one prominent one: “Don’t over-promise and under-deliver.” This has obvious relevance to a grantmaking project that launched before it had lined up funds to grant (as far as I know EAV only made two grants: the one Jamie mentioned and a $19k grant to EA Policy Analytics). But it also relates to more mundane aspects of EAV: my understanding is that applicants were routinely given overly optimistic expectations about how quickly the process would move.

The missed opportunity to learn these lessons went on to impact other projects. As just one example, EA Grants was described as “the spiritual successor to EA Ventures”. And it did reflect the narrow lesson from that project, as it lined up money before soliciting grant applications. However, the big lesson wasn’t learned, and EA Grants consistently overpromised and under-delivered throughout its history. EA Grants announced plans to distribute millions of dollars more than it actually granted, repeatedly announced unrealistic and unmet plans to accept open applications, explicitly described educational grants as eligible when they were not, granted money to a very narrow set of projects, and (despite its public portrayal as a project capable of distributing millions of dollars annually) did not maintain an “appropriate operational infrastructure and processes [resulting] in some grant payments taking longer than expected [which in some cases] contributed to difficult financial or career situations for recipients.”

EAV and EA Grants have both been shuttered, and there’s a new management team in place at CEA. So if I had a sense that the new management had internalized the lessons from these projects, I wouldn’t bring any of this up. But CEA’s recently updated “Mistakes” page doesn’t mention over-promising/under-delivering, which makes me worry that’s not the case. That’s especially troubling because the community has repeatedly highlighted this issue: when CEA synthesized community feedback it had received, the top problem reported was “respondents mentioned several times that CEA ‘overpromised and under delivered’”. The most upvoted comment on that post? It was Peter Hurford describing that specific dynamic as “my key frustration with CEA over the past many years.”

To be fair, the “Mistakes” page discusses problems that are related to over-promising/under-delivering, such as acknowledging that “running too many projects from 2016-present” has been an “underlying problem.” But it’s possible to run too many projects without overpromising, and it’s possible to be narrowly focused on one or a few projects while still overpromising and under-delivering. “Running too many projects” explains why EA Grants had little or no dedicated staff in early 2018; it doesn’t explain why CEA repeatedly committed to scaling the project during that period despite not having the staff in place to execute. I agree CEA has had a problem of running too many projects, but I see the consistent over-promising/under-delivering dynamic as far more problematic. I hope that CEA will increasingly recognize and incorporate this recurring feedback from the EA community. And I hope that going forward, CEA will prioritize thorough post-mortems (that include stakeholder input) on completed projects, so that the entire community can learn as much as possible from them.

* Simple example: with the benefit of hindsight, it seems likely that EAV significantly overinvested in developing a complex evaluation model before the project launched, and that EAV’s staff may have had an inflated sense of their own expertise. From the EAV website at its launch:

“We merge expert judgment with statistical models of project success. We used our expertise and the expertise of our advisers to determine a set of variables that is likely to be positively correlated with project success. We then utilize a multi-criteria decision analysis framework which provides context-sensitive weightings to several predictive variables. Our framework adjusts the weighting of variables to fit the context of the projects and adjusts the importance of feedback from different evaluators to fit their expertise.”

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-15T19:20:19.017Z · EA · GW

As another resource on effective D&I practices, HBR just published a new piece on “Diversity and Inclusion Efforts that Really Work.” It summarizes a detailed report on this topic, which “offers concrete, research-based evidence about strategies that are effective for reducing discrimination and bias and increasing diversity within workplace organizations [and] is intended to provide practical strategies for managers, human resources professionals, and employees who are interested in making their workplaces more inclusive and equitable.”

Comment by anonymouseaforumaccount on 2019 Ethnic Diversity Community Survey · 2020-05-13T21:16:57.178Z · EA · GW

Very interesting to see this data; thanks so much for collecting it and writing it up! I hope future versions of the EA Survey will adopt some of your questions, to get a broader perspective.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-07T21:28:00.127Z · EA · GW

Thanks Ben! That’s an interesting reference point. I don’t think there are any perfect reference points, so it’s helpful to see a variety of them.

By way of comparison, 1.8% of my sample was black (.7%) or Hispanic (1.1%).

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-07T21:26:32.553Z · EA · GW

I don’t think placing no value on diversity is a PR risk simply because it’s a view held by an ideological minority. Few people, either in the general population or the EA community, think mental health is the top global priority. But I don’t think EA incurs any PR risk from community members who prioritize this cause. And I also believe there are numerous ways EA could add different academic backgrounds, worldviews, etc. that wouldn’t entail any material PR risk.

I want to be very explicit that I don’t think EA should seek to suppress ideas simply because they are an extreme view and/or carry PR risks (which is not to say those risks don’t exist, or that EAs should pretend they don’t exist). That’s one of the reasons why I haven’t been downvoting any comments in this thread even if I strongly disagree with them: I think it’s valuable for people to be able to express a wide range of views without discouragement.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-04T23:26:41.070Z · EA · GW

Glad this is something you're tracking. For reference, here's the relevant section of the annual review.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-04T22:12:10.865Z · EA · GW

To clarify, my comment about EA's political skew wasn't meant to suggest Larks doesn't care about viewpoint diversity. Rather, I was pointing out that the position of not caring about racial diversity is more extreme in a heavily left-leaning community than it would be in a heavily right-leaning community.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-04T22:09:29.670Z · EA · GW

Thanks Ben! Great to see 80K making progress on this front! And while I haven’t crunched the numbers, my impression is that 80K’s podcast has also been featuring a significantly more diverse set of guests than when the podcast first started; this also seems like a very positive development.

Given the nature of your work, 80K seems uniquely positioned to influence the makeup of the Longtermist ecosystem as a whole. Do you track the demographic characteristics of your pipeline: people you coach, people who apply for coaching, people who report plan changes due to your work, etc.? If not, is this something you’d ever consider?

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-04T22:07:58.182Z · EA · GW

Thanks Sky! I’ll be in touch over email.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-03T14:39:01.963Z · EA · GW

> Agreed - though many of the more successful diversity efforts are really just efforts to make companies nicer and more collaborative places to work (e.g. cross-functional teams, mentoring).

Agreed. This makes those sorts of policies all the more attractive in my opinion, since improving diversity is just one of the benefits.

> I'm also a little sceptical of the huge gains the HBR article suggests - do diversity task forces really increase the number of Asian men in management by a third? It suggests looking at Google as an example of "a company that's made big bets on [diversity] accountability... We should know in a few years if that moves the needle for them" - it didn't.

I’m also skeptical that particular programs will lead to huge gains. But I don’t think it’s fair to say that Google’s efforts to improve diversity haven’t worked. The article you cited on that was from 2017. Looking at updated numbers from Google’s site, the mix of new hires (which are less sticky than total employees) does seem to have shifted between 2014 (when Google began its initiatives) and 2018 (the most recent data available). These aren’t enormous gains, but new hires do seem to have become notably more diverse. I certainly wouldn’t look at this data and say that Google’s efforts didn’t move the needle.

| Group | 2014 | 2018 | Diff (pts) | Pct change |
|---|---|---|---|---|
| Women | 30.7% | 33.2% | +2.5 | +8% |
| Asian+ | 37.9% | 43.9% | +6.0 | +16% |
| Black+ | 3.5% | 4.8% | +1.3 | +37% |
| Latinx+ | 5.9% | 6.8% | +0.9 | +15% |
| Native American+ | 0.9% | 1.1% | +0.2 | +22% |
| White+ | 59.3% | 48.5% | -10.8 | -18% |

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-02T19:06:17.147Z · EA · GW

The HBR study you cite actually says the evidence shows that some types of programs do effectively improve diversity, but many companies employ outdated methods that can be counterproductive.

> Despite a few new bells and whistles, courtesy of big data, companies are basically doubling down on the same approaches they’ve used since the 1960s—which often make things worse, not better. Firms have long relied on diversity training to reduce bias on the job, hiring tests and performance ratings to limit it in recruitment and promotions, and grievance systems to give employees a way to challenge managers. Those tools are designed to preempt lawsuits by policing managers’ thoughts and actions. Yet laboratory studies show that this kind of force-feeding can activate bias rather than stamp it out. As social scientists have found, people often rebel against rules to assert their autonomy. Try to coerce me to do X, Y, or Z, and I’ll do the opposite just to prove that I’m my own person.
> In analyzing three decades’ worth of data from more than 800 U.S. firms and interviewing hundreds of line managers and executives at length, we’ve seen that companies get better results when they ease up on the control tactics. It’s more effective to engage managers in solving the problem, increase their on-the-job contact with female and minority workers, and promote social accountability—the desire to look fair-minded. That’s why interventions such as targeted college recruitment, mentoring programs, self-managed teams, and task forces have boosted diversity in businesses. Some of the most effective solutions aren’t even designed with diversity in mind.

The rest of the article has some good examples and data on which sort of programs work, and would probably be a good reference for anyone looking to design an effective diversity program.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-02T14:51:12.977Z · EA · GW

> However, my sense is that, despite problems with diversity in EA, this has been recognized, and the majority view is actually that diversity is important and needs to be improved (see for instance CEA's stance on diversity).

Also supporting this view, most of the respondents in 80K’s recent anonymous survey on diversity said they valued demographic diversity. The people who didn’t mention this explicitly generally talked about other types of diversity (e.g. epistemic and political) instead. And nobody expressed Larks’ view that they “do not place any value on diversity.” I agree with Hauke that this perspective carries PR risk, and in my opinion seems especially extreme in a community that politically skews ~20:1 left vs. right.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-02T14:29:46.515Z · EA · GW

A few points…

First, I’d very much like to see EA and/or Longtermist organizations hire people with “different academic backgrounds, different world views and different ideologies.” But I don’t think that would eliminate the need for improving diversity on other dimensions like race or gender, which can provide a different set of perspectives and experiences (see, for example, “when I find myself to be the only person of my group in the room I want to leave”) than could be captured by, for example, hiring more white males who studied art history.

Second, I’m not advocating for quotas, which I have a lot of concerns about. I’d prefer to look at interventions that could encourage talented minorities to apply. My prior is that there are headwinds that (on the margins) discourage minority applicants. As multiple respondents to 80K’s recent survey on diversity noted, there’s “a snowball effect there, where once you have a sufficiently non-diverse group it’s hard to make it more diverse.” If that effect is real, claims like “we hired a white male because he was the best candidate” become less meaningful since there might have been better minority candidates who didn’t apply in the first place.

Third, some methods of increasing minority applicants are extremely low cost. For example, I saw one recent job posting from a Longtermist organization that didn’t include language like one often sees in job descriptions along the lines of “We’re an equal opportunity employer and welcome applications from all backgrounds.” It’s basically costless to include that language, so I doubt any minorities see it and think “I’m going to apply because this organization cares about diversity.” But it’s precisely because this language is costless that not including it signals that an organization doesn’t care about diversity, which discourages minorities from applying (especially if they see that the organization’s existing team is very homogeneous).

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-02T13:59:38.840Z · EA · GW

How do you think cohorts like the self-identified conservatives in western democracies or the US intelligence community would view ideas coming from that hypothetical think tank? I’m pretty sure there’d be some skepticism, and that that skepticism would make it harder for the think tank to accomplish its goals. (I'm not arguing that they should be skeptical; I'm arguing that they would be skeptical.)

I agree we should expect a Chinese think tank to be largely staffed with Chinese people because of the talent pool it would be drawing from. I’ve provided a variety of possible reference classes for the Longtermist community; do you have views on what the appropriate benchmark should be?

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-02T13:56:57.238Z · EA · GW

Thanks for sharing your experience! It’s valuable to get the perspective of someone who’s been involved in the Longtermist community for so long, and I’m glad you haven’t felt excluded during that time.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-01T23:02:54.820Z · EA · GW

Thanks for sharing the survey data! I’ll update the post with those numbers.

> it seems somewhat risky to compare this to numbers "based on the pictures displayed on the relevant team page", since it seems like this will inevitably under-count mixed race people who appear white.

This is a fair point. For what it’s worth, I classified a handful of people who very well could be white as POC since it looked like they could possibly be mixed race. But these people probably accounted for something like 1% of my sample, far short of the 6.5% mixed race share of EA survey respondents. So it’s plausible that, because of this issue, diversity at longtermist organizations is pretty close to diversity in the EA community (though that’s not exactly a high bar).

On the other hand, I’d also note Asians are by far the largest category of POC in both my sample and the EA Community, so presumably a large share of the mixed white/non-white population is part white and part Asian. It seems reasonable to assume that ~1/2 of this group would have last names that suggest Asian heritage, but there weren’t many (any?) people in my sample with such names who looked white. This might indicate that my sample had fewer mixed race people than the EA Survey, which would make the issue you're raising less of a problem.

Interestingly, the EA survey data also has a surprisingly high (at least to me) number of mixed race respondents relative to the number of non-mixed POC. In the survey, 33% of people who aren’t non-mixed white are mixed race. For comparison, this figure is 15% at Stanford and 13% at Harvard. So I think the measurement issue you’re pointing out is much less of a problem for benchmarks other than the EA community.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-01T23:01:14.011Z · EA · GW

Thanks for pointing this out, good to know!

DeepMind was on my original list of organizations to include, but doesn’t have a team page on its website. In an earlier draft I mentioned that I would have otherwise included DeepMind, but one of the people I got feedback from (who I consider very knowledgeable about the Longtermist ecosystem) said they didn’t think it should be counted as a Longtermist organization so I removed that comment. And the same is true for OpenAI, FYI.

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-01T17:51:40.070Z · EA · GW

I think the discrepancy is related to mixed race people, a cohort I’m including in my POC figures. Since the 2019 survey question allowed for multiple responses, I calculated the percentage of POC by adding all the responses other than white, rather than taking 1 - % of white respondents (which results in the 13% you mention).

Thinking more about this in response to your question, it’d probably be more accurate to adjust my number by dividing by the sum of total responses (107%). That would bring my 21% figure down to 19%, still well above the figure for longtermist organizations. But I think the best way of looking at this would be to go directly to the survey data and calculate the percentage of respondents who did not describe themselves as entirely white. If anyone with access to the survey data can crunch this number, I'd be happy to edit my post accordingly.
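
For concreteness, here’s that adjustment as a quick calculation using the rounded figures above (the unrounded survey numbers would shift this slightly, which is presumably how one lands on 19%):

$$\frac{21\%}{107\%} \approx 19.6\%$$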

Comment by anonymouseaforumaccount on Racial Demographics at Longtermist Organizations · 2020-05-01T17:39:17.832Z · EA · GW

Thanks Milan!

I’d also add that this concern applies in a domestic context as well. Efforts to influence US policy will require broad coalitions, including support from the 23 congressional districts that are majority black. The representatives of those districts (among others) may well be skeptical of ideas coming from a community where just 3 of the 459 people in my sample (.7%) are black (as far as I can tell). And if you exclude Morgan Freeman (who is on the Future of Life Institute’s scientific advisory board but isn’t exactly an active member of the Longtermist ecosystem), black representation is under half of a percent.

Comment by anonymouseaforumaccount on Update on CEA's EA Grants Program · 2020-02-04T21:35:29.275Z · EA · GW

Thanks Nicole!

My strong prior (which it sounds like you disagree with) is that we should generally expect funding needs to increase over time. If that’s true, then EA Funds would need to grow by more than enough to offset EA Grants in order to keep pace with needs. More reliance on EA Funds would shift the mix of funding too: for instance, relatively more funding going to established organizations (which EA Grants doesn’t fund) and no natural source of funding for individuals working on Global Poverty (as that fund doesn’t grant to individuals).

I agree it would be helpful for Fund management teams to explicitly make it known if they think there are a lot of strong opportunities going unfunded. Similarly, if Fund managers think they have limited opportunities to make strong grants with additional funds, it would be good to know that too. I’ve been operating on the assumption that the funds all believe they have room for more funding; if that’s not the case, seems like an important thing to share.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-24T19:00:58.517Z · EA · GW

OK, thanks for explaining how this works!

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2020-01-21T20:49:34.851Z · EA · GW

Thanks for cleaning up this data Sam! I also appreciate your putting together that spreadsheet. It’d be great if the fund pages could link to it to make that info more easily accessible. And over time, I’d love to see that file evolve to include additional data about each fund’s grants with corresponding subtotals. I think that would be a big aid for donors trying to understand what types of grants each fund makes.

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2020-01-14T22:28:07.428Z · EA · GW

Thanks for this Joan! I may have some followup questions related to this, but will wait to see if they’re addressed by the forthcoming writeup as I want to be respectful of your time. I look forward to reading it!

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-14T22:24:43.941Z · EA · GW

I was asking about the end of November balances that were displayed throughout December. It sounds like those did not reflect grants instructed in November if I’m correctly understanding what you mean by “the current balances don’t account from the latest round that were only paid out last month”. Can you clarify the timing of when grants do get reflected in the fund balances? Do the fund balances get updated when the grants are recommended to CEA? Approved by CEA? Disbursed by CEA? When the payout reports are published? FWIW, I’d find it most helpful if they were updated when the payout reports are published, since that would make the information on the fund pages internally consistent.

Thanks for the update on the Global Development Fund!

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-14T22:24:06.838Z · EA · GW

1. Thanks for sharing this data!

2. Ah, that makes sense re: privacy issues. However, I’m a bit confused by this: “we do send out email newsletters with updates about how money from EA Funds has been spent.” Is this something new? I’ve given to EA Funds and organizations through the Funds platform for quite some time, and the only non-receipt email I’ve ever gotten from EA Funds was a message in late December soliciting donations and sharing the OP. To be clear, I’d love to see more updates and solicitations for donors (and not just during giving season), as I believe not asking past donors to renew their giving is likely leaving money on the table.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-14T22:23:05.013Z · EA · GW

Got it, thanks for clarifying!

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-14T22:22:33.799Z · EA · GW

I was asking about how large the EA Funds organization itself should be, but nice to get your thoughts on the management teams as well. Thank you!

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-14T22:22:01.128Z · EA · GW

> I basically tried to come up with an ontology that would make intuitive sense to the average donor, and then tried to address the shortcomings by using examples on our risk page. I agree with Oli that it doesn’t fully capture things, but I think it’s a reasonable attempt to capture an important sentiment (albeit in a very reductive way), especially for donors who are newer to the product and to EA.

Yeah, I think the current ontology is a pretty reasonable/intuitive way to address a complex issue. I’d update if I learned that concerns about “risk of abuse” were more common among donors than concerns about other types of risk, but my suspicion is that “risk of abuse” is mostly an issue for the LTFF, since it makes more grants to individuals and the grant that was recommended to Lauren serves as something of a lightning rod.

I do think, per my original question about the LTFF’s classification, that the LTFF is meaningfully more risky than the other funds along multiple dimensions of risk: relatively more funding of individuals vs. established organizations, more convoluted paths to impact (even for more established grantees), and more risk of abuse (largely due to funding more individuals and perhaps a less consensus-based grantmaking process).

> everyone will have their own sense of what they consider too risky, which is why we encourage donors to read through past grant reports and see how comfortable they feel before donating.

Now that the new Grantmaking and Impact section lists illustrative grants for each fund, I expect donors will turn to that section rather than clicking through each grant report and trying to mentally aggregate the results. But as I pointed out in another discussion, that section is problematic in that the grants it lists are often unrepresentative and/or incorrect, and even if it were accurate to begin with, the information would quickly grow stale.

As a solution (which other people seemed interested in), I suggested a spreadsheet that would list and categorize grants. If I created such a spreadsheet, would you be willing to embed it in the fund pages and keep it up to date as new grants are made? The maintenance is the kind of thing a (paid?) secretariat could help with.

Comment by anonymouseaforumaccount on Update on CEA's EA Grants Program · 2020-01-09T23:30:49.907Z · EA · GW

> It’s unclear to me how much of a shortage of funding there actually is, though, given what people expressed interest in doing this year. I think that a more important constraint in the ecosystem is not funding, but likely support and feedback for grantees and applicants. I think that it is very hard and time costly to give good feedback and support, but very important for individuals and early stage projects. This is part of why I’m excited about Charity Entrepreneurship’s incubation program. I am also exploring how the funding ecosystem may be able to provide more support to grantees to try to address this problem somewhat, though I expect this to be a hard problem to solve. On funding, if the Meta Fund sees funding shortages, I hope that they will make that known to donors, so that donors can fund the Meta Fund accordingly. To my knowledge, this has not been the case to date.

In your opinion, is this a recent development or do you think feedback was a larger constraint than funding even when EA Grants was more actively funding projects? If you think it’s a recent development, was the change driven by EA Grants, EA Funds, and other grantmakers funding the most funding-constrained projects, or did something else change?

Comment by anonymouseaforumaccount on Update on CEA's EA Grants Program · 2020-01-09T23:29:52.543Z · EA · GW

Thank you for your thoughtful response Nicole!

> Where the freed up resources go is dependent on donors. EA Grants never had (to my knowledge) multi-year commitments. For example, since I've started, it's been ~entirely funded by 1 anonymous donor.

Can you share any information about how likely it is these donors will fund similar projects through alternative means if EA Grants winds down? Do you know what the 1 anonymous donor is planning?

Taking a longer perspective, my understanding is that Open Phil funded the initial 2017 round of EA Grants (~$475k), and I’d guess they wouldn’t fund small early stage projects without a mechanism like EA Grants to do so through. Then in 2018, EA Grants awarded ~$850k through the referral round and some amount (that I haven’t seen announced) during the September 2018 round. Were these also funded by Open Phil? Do you have any sense of whether the funder(s) of these rounds funded similar projects through non-EA Grants channels in 2019? If not, is there any reason to expect them to fund these types of projects in 2020 or beyond? Are you able to share the amount granted from the September 2018 round, to help the community understand how much funding would need to be replaced if other channels need to fill the role EA Grants historically played?

Comment by anonymouseaforumaccount on Update on CEA's EA Grants Program · 2020-01-09T23:28:36.239Z · EA · GW

Yes, this was fixed. Thanks!

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-03T01:08:59.176Z · EA · GW

> To be clear, I am claiming that the section you are linking is not very predictive of how I expect CEA to classify our grants, and is not very predictive of the attitudes that I have seen from CEA and other stakeholders and donors of the funds, in terms of whether they will have an intuitive sense that a grant is "risky". Indeed, I think that page is kind of misleading and think we should probably rewrite it.
> I am concretely claiming that both CEA's attitudes, the attitudes of various stakeholders, and most donors attitudes is better predicted by the "risk of abuse" framing I have outlined. In that sense, I disagree with you that most donors will be primarily concerned about the kind of risk that is discussed on the EA Funds page.

If risk of abuse really is the big concern for most stakeholders, then I agree rewriting the risk page would make a lot of sense. Since that’s a fairly new page, I’d assumed it incorporated current thinking/feedback.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2020-01-03T00:24:05.490Z · EA · GW

Very helpful response! This (like much of the other detailed transparency you’ve provided) really helped me understand how you think about your grantmaking (strong upvote), though I wasn’t actually thinking about “risk of abuse” in my question.

I’d been thinking of “risk” in the sense that the EA Funds materials on the topic use the term: “The risk that a grant will have little or no impact.” I think this is basically the kind of risk that most donors will be most concerned about, and is generally a pretty intuitive framing. And while I’m open to counterarguments, my impression is that the LTFF’s grants are riskier in this sense than grants made by the other funds because they have longer and less direct paths to impact.

I think “risk of abuse” is an important thing to consider, but not something worth highlighting to donors through a prominent section of the fund pages. I’d guess that most donors assume that EA Funds is run in a way that “risk of abuse” is quite low, and that prospective donors would be turned off by lots of content suggesting otherwise. Also, I’m not sure “risk of abuse” is the right term. I’ve argued that some parts of EA grantmaking are too dependent on relationships and networks, but I’m much more concerned about unintentional biases than the kind of overt (and unwarranted) favoritism that “risk of abuse” implies. Maybe “risk of bias”?

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-31T18:58:52.561Z · EA · GW

Exciting to see the new dashboard! Two observations:

1) The donation figures seem somewhat off, or at least inconsistent with the payout report data. For example, the Meta Fund payout reports list >$2 million in “payouts to date”. But the dashboard shows donations of <$1.7 million from the inception of the fund through the most recent payout in November. Looks like the other funds have similar (though smaller) discrepancies.

2) I think the dashboard would be much more helpful if data for prior years were shown for the full year, not just YTD. That would essentially make all the data available and add a lot of valuable context for interpreting the current year YTD numbers. Otherwise, the dashboard will only be able to answer a very narrow set of questions once we move to the new year.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-31T00:36:41.500Z · EA · GW

Thanks for sharing this potential shift Oli. If the fund split, would the same managers be in charge of both funds or would you add a new management team? Also, would you mind giving a couple of examples of grants that you’d consider “medium-risk”? And do you see these grants as comparably risky to the “medium risk” grants made by the other funds, or just less risky than other grants made by the LTFF?

My sense is that the other funds are making “medium-risk” grants that have substantially simpler paths to impact. Using the Health Fund’s grant to Fortify Health as an example, the big questions are whether FH can get the appropriate nutrients into food and then get people to consume that food, as there’s already strong evidence micronutrient fortification works. By contrast, I’d argue that the LTFF’s mandate comes with a higher baseline level of risk since “it is very difficult to know whether actions taken now are actually likely to improve the long-term future.” (Of course, that higher level of risk might be warranted; I’m not making any claims about the relative expected values of grants made by different funds).

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-23T21:25:40.738Z · EA · GW

Btw, it's great that you're offering to have this post function as an AMA, but I think many/most people will miss that info buried in the commenting guidelines. Maybe add it to the post title and pin the post?

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-23T20:22:30.023Z · EA · GW

What’s the rationale behind categorizing the LTFF as “medium-high risk” vs. simply “high risk”? This isn’t meant as a criticism of the fund managers or any of the grants they’ve made, it’s just that trying to influence the LTF seems inherently high risk.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-23T20:14:43.111Z · EA · GW

> We think it’s likely that EA Funds would fare better if it were set up to run as its own organization, with its own director, and a team specifically focused on developing the product, while CEA will benefit from having a narrower scope.
> It’s still not clear what the best format for this would be, but in the medium term this will probably look like EA Funds existing as a separately managed organization within the Centre for Effective Altruism (à la 80,000 Hours or Forethought), reporting directly to CEA’s board. At first, CEA would still provide operational support in the form of payroll and grantmaking logistics etc., but eventually we may want to build capacity to do some or all of these things in-house.

Can you provide a ballpark estimate of how long the “medium term” is in this context? Should we expect to see this change in 2020?

Also, once this change is implemented, roughly how large would you expect a “right-sized” EA Funds team to be? This relates to Misha’s question about overhead, but on more of a forward-looking basis.

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-23T20:07:18.799Z · EA · GW

Two questions about the money EA Funds has processed for specific organizations rather than the Funds themselves (and thanks for sharing data on this type of giving via the new dashboard!):

1. How much of the money raised for organizations is “incremental” in the sense that giving through EA Funds allowed donors to claim tax deductions that they otherwise wouldn’t be able to get? As an example, I wouldn’t consider gifts to AMF through EA Funds to be incremental since US and UK donors could already claim tax deductions by giving directly to AMF. But I would call donations to ACE through EA Funds by UK donors incremental, since these gifts wouldn’t be tax deductible if it weren’t for EA Funds. (I recognize giving through EA Funds might have other benefits besides incremental tax deductibility, such as the ability to give to multiple organizations at once.)

2. How does donor stewardship work for gifts made directly to organizations? Do the organizations receive information about the donors and manage the relationships themselves, or does CEA handle the donor stewardship?

Comment by anonymouseaforumaccount on Effective Altruism Funds Project Updates · 2019-12-23T20:06:15.803Z · EA · GW

Thanks for this update!

Are the current fund balances (reasonably) accurate? Do they reflect the November grants made by the Animal and Meta Funds? And has the accounting correction you mentioned for the Global Health fund been implemented yet?

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2019-12-11T20:51:07.620Z · EA · GW

Distinguishing between grants made by the current management team and previous management makes sense (though it’d be good to state this methodology explicitly). If a spreadsheet does get built, the management regime would be a good piece of info to capture.

FWIW, for the Meta Fund “120k+ to Founders Pledge” looks quite reasonable if we only look at grants from the management team, but “250k+ to 80K” still seems like a poor way to describe the $415k granted to 80k by the team.

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2019-12-11T19:48:36.303Z · EA · GW

Re: up-to-date information on websites as a signal that CEA is making progress on this problem...

I was glad to see that the Meta Fund page was updated to include a link where people can apply. And more generally, I’m happy CEA is working to improve EA Funds and expect the recent design changes to the EA Funds pages to improve users’ experience.

However, I’m quite frustrated that the new “Grantmaking and Impact” section is written in a way that’s likely to be confusing or misleading for donors. In the OP, I noted concerns about concentration in the Meta Fund’s grantmaking. The two largest grantees (by a wide margin) are Founders Pledge and 80K, which have each received roughly half a million in grants.

But the Grantmaking and Impact section doesn’t mention this, instead referring to grants of “$250k+ to 80,000 Hours” and “$120k+ to Founders Pledge”. It may be technically accurate to describe ~$500k grants as “$250k+” or “$120k+”, but it's hard to expect people to read those descriptions and get an accurate sense of how much money the largest grantees have received or how concentrated the Meta Fund’s grantmaking has been. If one reads only the Grantmaking and Impact section, they’d get the impression that the largest grants were a few hundred thousand out of a total of “several million” granted, when the reality is that Founders Pledge and 80K together account for about half of the $2.05 million the Meta Fund has granted to date.

The Grantmaking and Impact section is problematic for all the funds, not just the Meta Fund, and generally appears to give a distorted picture of what the largest grantees have been (and no information on “impact”):

  • The Animal Fund describes “$250k+ to support research related to improving animal welfare”. ACE and Rethink Priorities both focus on this type of research, and have received grants of $500k and $475k respectively which aren’t mentioned.
  • The Long-term Future Fund lists “$440k+ to support researchers working on relevant topics” as its largest category, and the copy clarifies that these grants “support individual researchers – both those working in academia, and alongside it – to skill up, and to work on key problems.” There’s no mention of MIRI (~$580k in grants) or CFAR (~$325k in grants, plus another $150k that was recommended but not granted since another funder stepped in before the grant was made).
  • The Global Development Fund mentions “$3m+ to malaria prevention”. This significantly overstates (by ~$1 million) the amount the fund has actually granted to malaria prevention ($1.7 million to Malaria Consortium and $330k to AMF).

I support the intention of trying to give donors more clarity on the types of grants each EA Fund makes, but the current implementation of the Grantmaking and Impact section doesn’t really achieve that. It will also likely be hard to keep up to date.

I suggest creating a Google Sheet with: a list of all grants, the fund the grant came from, the date, a categorization (which would vary by fund but could be similar to the categories used in the Grantmaking and Impact section), and a subtotal for each category. That would make it easy to see all grants in one place (rather than clicking through each payout report), the categorization would be transparent, and the subtotals would update automatically as new grants were made.
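
Purely for illustration, such a sheet might be laid out like the sketch below. The rows are hypothetical placeholders (the “…” entries are not real dates or amounts, and the categories are just examples), and each subtotal could use a standard Google Sheets formula such as `=SUMIF(D:D, "Movement building", E:E)` so it updates automatically as new rows are added.

| Fund (col A) | Date (B) | Grantee (C) | Category (D) | Amount (E) |
|---|---|---|---|---|
| Meta | … | (grantee) | Movement building | … |
| Meta | … | (grantee) | Career advice | … |
| Long-Term Future | … | (grantee) | Individual research | … |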

Comment by anonymouseaforumaccount on Did Fortify Health receive $1 million from EA Funds? · 2019-12-09T23:44:30.463Z · EA · GW

Thanks for explaining how this works Sam. I’ve got a few followup questions about fund balances.

The animal and meta fund pages both show new grant reports with November payout dates: are these grants reflected in the fund balances? Both funds have end-of-November fund balances that are in the same ballpark as their November grants, suggesting they might not be updated.

> The Payout Reports shouldn't affect the Fund Balance, as that number is calculated directly from our accounting system. That said, this means it's subject to some of the vagaries of bookkeeping, which means we ask donors to treat it as an estimate.

This makes sense. Generally speaking, how accurate should we expect those estimates to be? Is it possible to say something along the lines of “we expect the fund balance estimates to be accurate plus or minus 10% and generally not off by more than $100,000”?

Comment by anonymouseaforumaccount on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-09T23:40:02.910Z · EA · GW

Thanks Aaron, that’s a helpful clarification. Focusing on “people shaping the overall direction of the EA movement” rather than just movement building seems like a sensible decision. But one drawback is that coming up with a list of those people is a much more subjective (and network-reliant) exercise than, for example, making a list of movement building organizations and inviting representatives from each of them.

Comment by anonymouseaforumaccount on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-09T23:38:27.773Z · EA · GW

Thank you for taking the time to respond, Max. I appreciate your engagement, your explanation of how the invitation process worked this year, and your willingness to acknowledge that CEA may have historically been too aggressive in how it has pushed longtermism and how it has framed the results of past surveys.

> In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes.

Very glad to hear this. As you note, implementing this sort of thing in practice can be tricky. As CEA starts designing new mechanisms, I’d love to see you gather input (as early as possible) from people who have expressed concern about CEA’s representativeness in the past (I’d be happy to offer opinions if you’d like). These also might be good people to serve as "external advisors" who generate suggestions for the invite list.

Good luck with the retreat! I look forward to seeing your strategy update once that’s written up.

Comment by anonymouseaforumaccount on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-06T17:13:10.304Z · EA · GW

Thanks Aaron. Glad to hear the invitee list included a broader list of organizations, and that you'll consider a more explicit discussion of potential selection bias effects going forward.

Comment by anonymouseaforumaccount on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-02T22:15:51.120Z · EA · GW

> At present, we see Leaders Forum as an event focused on movement building and coordination. We focus on inviting people who play a role in trying to shape the overall direction of the EA movement (whatever cause area they focus on), rather than people who mostly focus on direct research within a particular cause area. As you’d imagine, this distinction can be somewhat fuzzy, but that’s the mindset with which CEA approaches invites (though other factors can play a role).

I really wish this had been included in the OP, in the section that discusses the weaknesses of the data. That section seems to frame the data as a more or less random subset of leaders of EA organizations (“These results shouldn’t be taken as an authoritative or consensus view of effective altruism as a whole. They don’t represent everyone in EA, or even every leader of an EA organization.”)

When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify (e.g. did you email the survey to anyone or was it all done in person at the Leaders Forum?)

Some of the organizations I’d have expected to see included, even if the focus was movement building/coordination: GiveWell (strategy/growth staff, not pure research staff), LEAN, Charity Entrepreneurship, Vegan Outreach, Rethink Priorities, One for the World, Founders Pledge, etc. Most EAs would see these as EA organizations involved to some degree with movement building. But we’re not learning what they think, while we are apparently hearing from at least one org/person who “want to avoid being connected explicitly to the EA movement -- for example, if almost all their work happens in non-EA circles, where EA might have a mixed reputation.”

I’m worried that people who read this report are likely to misinterpret the data being presented as more broadly representative than it actually is (e.g. the implications of respondents believing ~30% of EA resources should go to AI work over the next 5 years are radically different if those respondents disproportionately omit people who favor other causes). I have the same concerns about how this survey was presented as Jacy Reese expressed about how the leaders survey from two years ago (which also captured a narrow set of opinions) was presented:

> My main general thought here is just that we shouldn't depend on so much from the reader. Most people, even most thoughtful EAs, won't read in full and come up with all the qualifications on their own, so it's important for article writers to include those themselves, and to include those upfront and center in their articles.

Lastly, I’ll note that there’s a certain irony in surveying only a narrow set of people, given that even among those respondents: “The most common theme in these answers [about problems in the EA community] seems to be the desire for EA to be more inclusive and welcoming. Respondents saw a lot of room for improvement on intellectual diversity, humility, and outreach, whether to distinct groups with different views or to the general population.” I suspect if a more diverse set of leaders had been surveyed, this theme would have been expressed even more strongly.

* GFI and Effective Giving both have London offices, but I’ve assumed their respondents were from other locations.

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2019-11-22T18:49:00.610Z · EA · GW

Thanks for this background info Sam! Glad to hear that people from more places were considered in the last recruitment round. Were there any candidates from the US-excluding-Bay (which has more EAs than the UK, Bay Area, and Australia combined)? Did the candidates from the Bay who were approached give any reasons for why they couldn’t/wouldn't take the position (e.g. is the volunteer nature of the role a limiting factor)?

I’ll email you some nominations (and FWIW, if the fund managers agree this is a priority it’d be great to also solicit nominations in higher profile places like the EA newsletter).

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2019-11-22T18:47:25.882Z · EA · GW

Thank you Harri! As mentioned in the OP, I think a publicity strategy that promotes CBGs in more places and has longer (or rolling) application windows would help attract more applicants.

To understand the geographical distribution of applications better, I suggest asking some of the better established groups in places that didn’t see many applications why they didn’t apply. And it might be worth reviewing early communications with attendees/applicants to the European Group Retreat to see if they encouraged CBG applications, as this could explain why there were so many applications from Europe relative to everywhere else (though I take Joan’s point about the retreat itself happening after CBG applications closed).

Comment by anonymouseaforumaccount on Which Community Building Projects Get Funded? · 2019-11-20T23:14:43.483Z · EA · GW

I agree that your list of “signals that we are making progress on this problem” would all be positive developments. And I recognize that since CEA is resetting some of its strategy, it’s a difficult time to make firm commitments, especially around short-term priorities.

However, there’s one near-term step that I think would be relatively easy and particularly high impact. I’d love to see CEA publish how much money it has regranted through EA Grants and the CBG program in 2018 and 2019, and a rough estimated range for how much it expects to grant through those programs in 2020. As I commented on the recent writeup on EA Grants, the public communication around these programs has indicated that they’d grant significantly more money than they appear to have granted in practice. For instance, CEA’s end-of-2018 fundraising post referred to “a regranting budget of $3.7M for EA Grants and EA Community Building Grants, which we use to fund other organizations, projects, individuals, and groups” but the grants that have been announced from those programs don’t even come close to that figure.

I think it’s critical for the EA community to get a more accurate understanding of how much funding has been/will be available, so that other funders and potential grantees can make informed decisions. As SiebeRozendal writes: “I'm afraid there is a dynamic where CB-efforts have trouble finding non-CEA funding because alternative funders believe CEA has got all the good opportunities covered.”

If CEA will be granting large amounts next year, that’s great: I think past grantees have mostly been good projects to fund, and as I argued in OP I think there are plenty more good opportunities out there. If CEA will only be making small grants, or doesn’t really know how much it will grant (due to uncertainty around the future of EA Grants, for example) other donors can adjust their behavior accordingly. But for that to happen, they need to be informed about CEA’s plans.