Posts

Historical EA funding data 2022-08-14T13:21:34.451Z
EA Chicago November Social 2021-11-18T23:57:22.451Z
Cash Transfers as a Simple First Argument 2021-04-17T15:00:27.305Z
Total Funding by Cause Area 2021-03-07T22:06:03.565Z

Comments

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-18T18:39:22.088Z · EA · GW

The biggest factor is the arrival of FTX, which has given more to infrastructure YTD than all other funders combined over the prior two years

Comment by TylerMaule (tylermaule) on What We Owe The Future is out today · 2022-08-18T18:28:59.560Z · EA · GW

Relevant excerpt from his prior 80k interview:

Rob Wiblin: ...How have you ended up five or 10 times happier? It sounds like a large multiple.

Will MacAskill: One part of it is being still positive, but somewhat close to zero back then...There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or just otherwise failing or something. And having a trigger action plan where, when that starts happening, I’m like, “OK, suddenly the top priority on my to-do list again is looking after my mental health.” Often that just means taking some time off, working out, meditating, and perhaps also journaling as well to recognize that I’m being a little bit crazy.

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-16T22:36:59.419Z · EA · GW

Yes, sorry, on reflection that seems totally reasonable

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-16T09:55:10.093Z · EA · GW

Yeah, it looked like grants had been announced roughly through June, so the methodology here was to divide by the proportion of grants dated Jan-Jun in prior years (0.49)
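The annualization described above can be sketched roughly as follows (the 0.49 Jan-Jun share is the figure from the comment; the function name and example numbers are illustrative):

```python
def annualize(ytd_total, jan_jun_share=0.49):
    """Project a full-year grant total from grants dated Jan-Jun,
    scaling by the share of prior years' grants that fell in that
    window (0.49 per the comment above)."""
    return ytd_total / jan_jun_share

# e.g. $49M granted Jan-Jun implies roughly $100M for the full year
full_year_estimate = annualize(49.0)
```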

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-16T09:51:14.638Z · EA · GW

I'm not sure that inflation makes sense—this money isn't being spent on bread :) I think most of these funds would alternatively be invested, and returning above inflation on average.

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-15T19:26:02.737Z · EA · GW

2012-present (the first longtermist grant was in 2015); no projection

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-15T19:15:58.007Z · EA · GW

Estimates for Open Phil:


Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-15T17:55:14.569Z · EA · GW

FTX has so far granted 10x more to AI stuff than OPP

This is not true; sorry, the Open Phil database labels are a bit misleading.

It appears that there is a nested structure to a couple of the Focus Areas, where e.g. 'Potential Risks from Advanced AI' is a subset of 'Longtermism', and when downloading the database only one tag is included. So for example, this one grant alone from March '22 was over $13M, with both tags applied, and shows up in the .csv as only 'Longtermism'. Edit: this is now flagged more prominently in the spreadsheet.

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-15T07:56:22.505Z · EA · GW

Many of the sources used here can't be automated, but the spreadsheet is simple to update

Comment by TylerMaule (tylermaule) on Historical EA funding data · 2022-08-15T07:41:11.272Z · EA · GW

Fixed

Comment by TylerMaule (tylermaule) on Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey) · 2022-05-19T19:08:44.314Z · EA · GW

EA does seem a bit overrepresented (sort of acknowledged here).

Possible reasons: (a) sharing was encouraged post-survey, with some forewarning; (b) EAs might be more likely than average to respond to a 'Student Values Survey'?

Comment by TylerMaule (tylermaule) on Increasing Demandingness in EA · 2022-04-30T00:01:30.493Z · EA · GW

I strongly agree with this comment, especially the last bit.

In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically 'switching to direct work'.

Comment by TylerMaule (tylermaule) on Increasing Demandingness in EA · 2022-04-29T23:19:08.454Z · EA · GW

Re footnote, the only public estimate I've seen is $400k-$4M here, so you're in the same ballpark.

Personally I think $3M/y is too high, though I too would like to see more opinions and discussion on this topic.

Comment by TylerMaule (tylermaule) on My bargain with the EA machine · 2022-04-29T16:55:57.490Z · EA · GW

I enjoyed this post and the novel framing, but I'm confused as to why you seem to want to lock in your current set of values—why is current you morally superior to future you?

Do I want my values changed to be more aligned with what’s good for the world? This is a hard philosophical question, but my tentative answer is: not inherently – only to the extent that it lets me do better according to my current values.

Speaking for myself personally, my values have changed quite a bit in the past ten years (by choice). Ten-years-ago-me would likely be doing something much different right now, but that's not a trade that the current version of myself would want to make. In other words, it seems like in the case where you opt for 'impactful toil', that label no longer applies (it is more like 'fun work' per your updated set of values).

Comment by TylerMaule (tylermaule) on The value of small donations from a longtermist perspective · 2022-02-28T00:18:17.621Z · EA · GW

Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.

Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct work is overwhelmingly effective has done harm.

There may be some who 'settle' for earning to give when direct work could have been more impactful, and there may be some who take away that donations are trivial and do neither. Obviously I would expect the former to be hugely overrepresented on the EA Forum.

Comment by TylerMaule (tylermaule) on Some thoughts on vegetarianism and veganism · 2022-02-15T03:16:18.437Z · EA · GW

See also

Offsetting the carbon cost of going from an all-chicken diet to an all-beef diet would cost $22 per year, or about 5 cents per beef-based meal. Since you would be saving 60 chickens, this is three chickens saved per dollar, or one chicken per thirty cents. A factory farmed chicken lives about thirty days, usually in extreme suffering. So if you value preventing one day of suffering by one chicken at one cent, this is a good deal.

Comment by TylerMaule (tylermaule) on Future-proof ethics · 2022-02-05T06:26:24.616Z · EA · GW

I didn't read the goal here as literally being to score points with future people, though I agree the post is phrased to imply that future ethical views will be superior.

Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.

In other words, this could alternatively be framed as 'backtesting ethics' or something, but 'future-proofing' speaks to (a) concern about repeating past mistakes (b) personal regret in future.

Comment by TylerMaule (tylermaule) on doing more good vs. doing the most good possible · 2022-01-02T23:39:46.724Z · EA · GW

I was especially interested in a point/thread you mentioned about people perceiving many charities as having similar effectiveness and that this may be an impediment to people getting interested in effective altruism


See here

A recent survey of Oxford students found that they believed the most effective global health charity was only ~1.5x better than the average — in line with what the average American thinks — while EAs and global health experts estimated the ratio is ~100x. This suggests that even among Oxford students, where a lot of outreach has been done, the most central message of EA is not yet widely known.

Comment by TylerMaule (tylermaule) on Has anything in the EA global health sphere changed since the critiques of "randomista development" 1-2 years ago? · 2021-12-04T16:13:50.886Z · EA · GW
  1. As Jackson points out, those willing to go the 'high uncertainty/high upside' route tend to favor far future or animal welfare causes. Even if we think these folks should consider more medium-term causes, comparing cost-effectiveness to GiveWell top charities may be inapposite.
  2. It seems like there is support for hits-based policy interventions in general, and Open Phil has funded at least some of this.
  3. The case for growth was based on historical success of pro-growth policy. Not only is this now less neglected, but much of the low-hanging fruit has been taken.
Comment by TylerMaule (tylermaule) on The Explanatory Obstacle of EA · 2021-12-02T04:21:10.542Z · EA · GW

Thanks for this—I have often wished I had a better elevator pitch for EA.

One thing I might add is some mention of just how wide the disparity can be amongst possible interventions, since this seems to be one of the most overlooked key ideas.

Comment by TylerMaule (tylermaule) on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T21:31:33.656Z · EA · GW

I believe both this post and Ben’s original ‘Funding Overhang’ post mentioned that this is an update towards a career with direct impact vs earning-to-give.

But earning-to-give is still very high impact in absolute terms.

Comment by TylerMaule (tylermaule) on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T14:09:55.808Z · EA · GW

Thanks for writing; I too have worried that many folks got the wrong impression here.

Comment by tylermaule on [deleted post] 2021-11-15T03:25:24.945Z

I do think that more generally there is an inefficiency with so many EAs independently sinking time into investment management. I don't think that the answer is safe/passive/crowdsourcing, though.

Instead, I think what might be valuable is some sort of 'EA Mutual Funds'—a menu of investment profiles, each tied to a fund/manager. Possible value-add:

  1. Consolidation of labor to fund manager (research, tax planning)
  2. Access to leverage/accreditation
  3. Save fees vs using a DAF
Comment by TylerMaule (tylermaule) on Make a $100 donation into $200 (or more) · 2021-11-01T21:57:12.066Z · EA · GW

Anyone know where the $250k is coming from? This is all I could find:

Matching funds are provided by generous donors who contribute to help amplify grassroots giving

Comment by TylerMaule (tylermaule) on EA Survey 2020 Series: Donation Data · 2021-10-29T16:36:52.489Z · EA · GW

Is there any consideration for Investing-to-Give in the survey?

  1. Is contributing to a DAF meant to count as a 'donation'? I would think yes, though not all I2G is done via a DAF
  2. According to this post, many in the community think deploying <5% of available capital/yr is currently optimal

Perhaps it could be interesting to ask for both 'amount donated' and 'amount earmarked for donation'?

Comment by TylerMaule (tylermaule) on An update in favor of trying to make tens of billions of dollars · 2021-10-17T15:38:33.759Z · EA · GW

Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs.


I think the core argument here is that not enough EAs try to start a company, as opposed to try and are rejected by VCs. IMO the point of seeding would be to take more swings.

Also, presumably the bar should be lower for an EA VC, because much of the founders' stake will also go to effective charity.

Comment by TylerMaule (tylermaule) on An update in favor of trying to make tens of billions of dollars · 2021-10-16T19:03:42.208Z · EA · GW

Would it not make sense to start some sort of 'EA Venture Capital' firm?

Surely more EAs would take this leap if provided with some runway/salary (in exchange for equity, which by this logic would be a phenomenal investment for patient philanthropy money)

Comment by TylerMaule (tylermaule) on What should "counterfactual donation" mean? · 2021-09-23T22:57:14.795Z · EA · GW

I think I agree that only the last two should qualify, but presently I would assume a weaker definition is most common.

I suppose this can create a bad incentive where someone offering a counterfactual donation then has to make sure to do something not charitable with that money later on? I guess in my view a ‘counterfactual donation’ really only ever makes sense when you have a strong prior the money would not otherwise be put to similar use.

Comment by TylerMaule (tylermaule) on EffectiveAltruismData.com: A Website for Aggregating and Visualising EA Data · 2021-09-20T04:06:27.123Z · EA · GW

This is very cool! I share your view that comprehensive data is important; it plays a big part in my personal e2g decision-making (and can be difficult to find).

If you haven't seen it already, this recent post by Ben Todd is probably the best source I know of as far as resource allocation.

  • Make a line plot of cumulative grants from Open Philanthropy (for each focus area individually and in total).
  • Do all the same plots I have for Open Philanthropy for EA Funds as well.

I made a rough attempt to this effect earlier this year (there you can also find a link to the source code).

Comment by TylerMaule (tylermaule) on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-30T14:31:05.280Z · EA · GW

That all seems reasonable.

Shouldn’t the displacement value be a factor though? This might be wrong, but my thinking is (a) the replacement person in the $1M job will on average give little or nothing to effective charity (b) the switcher has no prior experience or expertise in the non-profit sector, so presumably the next-best hire there is only marginally worse?

Comment by TylerMaule (tylermaule) on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-25T23:51:48.140Z · EA · GW

In reality I don't think we'd want to go that close to the breakeven point - because there may be better uses of money, due to the reputation costs of unusually high salaries, and because salaries are harder to lower than to raise (and so if uncertain, it's better to undershoot).

Good points, I agree it would be better to undershoot.

Still, even with the pessimistic assumptions, the high end of that $0.4-4M range seems quite unlikely.

Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work? If so, that would be a major update to my thinking.

Comment by TylerMaule (tylermaule) on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-22T14:54:32.452Z · EA · GW

Agreed; it's just a function of how many salaries you assume would have to be doubled alongside it to fill that one position

(a) Hopefully, doubling ten salaries to fill one is not a realistic model. Each incremental wage increase should expand the pool of available labor. If the EA movement is labor-constrained, I expect a more modest raise would cause supply to meet demand.

(b) Otherwise, we should consider that the organization was paying only half of market salary, which perhaps inflated their ‘effectiveness’ in the first place. Taking half of your market pay is itself an altruistic act, which is not counted towards the org’s costs. Presumably if these folks chose that pay cut, they would also choose to donate much of their excess salary (whether pay raise from this org, or taking a for-profit gig).

Comment by TylerMaule (tylermaule) on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-17T13:22:31.721Z · EA · GW

Doubling costs to get +10% labour doesn't seem like a great deal


I agree in principle, but in this case the alternative is eliminating $400k-4M of funding, which is much more expensive than doubling the salary of e.g. a research assistant.

To be clear, I am more skeptical of this valuation than I am actually suggesting doubling salaries. But conditional on one engaged donor entering the non-profit labor force being worth >$400k, it seems like the right call.

Comment by TylerMaule (tylermaule) on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-17T03:03:22.170Z · EA · GW

For each person in a leadership role, there’s typically a need for at least several people in the more junior versions of these roles or supporting positions — e.g. research assistants, operations specialists, marketers, ML engineers,...I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year

 

If this is true, why not spend way more on recruiting and wages? It's surprising to me that the upper bound could be so much larger than equivalent salary in the for-profit sector.

I might be missing something, but it seems to me the basic implication of the funding overhang is that EA should convert more of its money into 'talent' (via Meta spending or just paying more).

Comment by TylerMaule (tylermaule) on How are resources in EA allocated across issues? · 2021-08-11T03:15:11.883Z · EA · GW

Thanks for writing, and I agree it would be great to see more like this in future.

It does seem like 'ideal portfolio of resources' vs 'ideal split of funds donated this year' can be quite a bit different—perhaps a question for next time?

(see here for some similar funding estimates)

Comment by TylerMaule (tylermaule) on Cash Transfers as a Simple First Argument · 2021-04-20T11:34:03.899Z · EA · GW

Thanks for sharing! I like the way you phrased it in the interview, I think that’s a nice way to start.

Comment by TylerMaule (tylermaule) on Cash Transfers as a Simple First Argument · 2021-04-18T18:50:40.803Z · EA · GW

Hi Benjamin,

I totally forgot about that article, thank you for pointing it out! That is an excellent resource.

Your concern totally makes sense. Something I've been thinking about lately is whether EA should make a more concerted effort to promote 'streams' of varying fidelity intended for audiences which are coming from very different places.

Put another way: say I have a co-worker who every year gives to traditional, community-based charitable orgs, and has never considered giving that money elsewhere. Is this person more likely to spend the time on excellent and in-depth philosophical articles + podcasts I push on them, or engage with a more direct and irrefutable appeal to logic? I tend to think that the latter can serve as a gateway to the former.

Comment by TylerMaule (tylermaule) on Status update: Getting money out of politics and into charity · 2021-04-11T18:03:31.925Z · EA · GW

I see now that this and a couple other points were mentioned in Repledge++. One more I would add to the list:

'Relative advantage' in cash vs percentage terms could be a sticking point. In the case of a $10M/$8M split, giving $2M/$0 to the respective candidates seems unfair to candidate B, because $2M is infinitely more than $0 in percentage terms. Say this money was going to ad buys: instead of running 100 vs 80 ad spots, candidate A now runs 20 spots vs zero for candidate B, and is the only candidate on the airwaves.

I would argue that a fair split would be $1.111M vs $0.889M, but I'm not sure that supporters of candidate A would agree.
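The proportional split suggested above can be sketched as follows (a minimal illustration; the function name is mine, and the dollar figures are the ones from the example):

```python
def proportional_split(a, b):
    """Split the unmatched difference between two candidates'
    totals in proportion to those totals, rather than handing
    the entire difference to the larger side."""
    diff = abs(a - b)
    return diff * a / (a + b), diff * b / (a + b)

# $10M vs $8M: the $2M gap splits roughly $1.111M / $0.889M
share_a, share_b = proportional_split(10e6, 8e6)
```

This preserves the 10:8 ratio of the original contributions, so candidate A still holds the same percentage advantage after the split.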

Of course, if you assume that the platform is only a tiny fraction of total campaign contributions this is much less significant, but still worth a thought.

Comment by TylerMaule (tylermaule) on Status update: Getting money out of politics and into charity · 2021-04-10T22:37:10.880Z · EA · GW

I like the idea of political contributions going to charity, though I can't help thinking about the game theory implications here:

If I (a left-leaning person who prefers charity to political donations) felt strongly that much more money would come in on the Democrat side, I imagine I'd route my usual donation through this platform under the Republican candidate.

I guess it's difficult to imagine an actual Republican contributing to this platform unless they preferred giving to charity anyway. Arguably this platform would then only deplete the funds of one candidate (the Democrat), with much of the funds intended for charity in the first place. But still, to be clear, this would be a net positive contribution IMO.

Comment by TylerMaule (tylermaule) on Against neutrality about creating happy lives · 2021-03-18T13:50:57.366Z · EA · GW

Not taking a side here, but couldn't you get around this by framing your values as 'maximizing the sum of global utility'? This way there is no need to make a comparison between Joe and [absence of Joe]; I can simply say that Joe's existence has caused my objective function to increase.

Comment by TylerMaule (tylermaule) on Total Funding by Cause Area · 2021-03-10T15:11:05.011Z · EA · GW

Thanks for the reply, definitely gives me a lot to consider.

"Cause area" is also a pretty weird/arbitrary unit of analysis

Personally, I quite like the cause area distinction. One alternate definition I might propose is that a cause area is a subset of interventions which are plausibly cross-comparable. Direct comparisons across these cause areas are flimsy at best, and even if I felt strongly that one of them was the most effective, I would still value each of the others receiving non-trivial funding for the purposes of (a) hedging (b) worldview diversification.

Also I think the choice of what you are funding within each cause also matters a lot.

It certainly does, but so long as I donate via EA Funds or GiveWell, that decision is passed along to the very most qualified people I know of.

I'm not sure if this makes sense from a donor collaboration/coordination/cooperation standpoint

I might disagree here. Using base rate funding to inform decisions is no different than 'neglectedness' as a pillar of EA—if I had to be truly agnostic, I suppose I'd give money to climate change or to purchasing COVID vaccines.

That 80k article is very cool, though they also seem to agree: "If the community is unresponsive to what you do, you can (mostly) take a single-player approach to working out the best action."

a lot of these areas have very large individual donors that aren't captured

It would be great to know more about these donors, and specifically which orgs they donate to. It's starting to feel like a satisfactory measure of 'fundedness' would require a lot more future work.

I imagine your personal views about the difference in the value of cause areas will dominate this, given that causes might be 10x different whereas these gaps are only 5x at most.

The size of the gaps are dependent on my personal views, so I think we're in agreement here.

Comment by TylerMaule (tylermaule) on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T20:50:28.799Z · EA · GW

Also, while Open Phil's donations to GiveWell have remained at a similar level, the amount they direct to the EA movement as a whole has grown substantially:

Comment by TylerMaule (tylermaule) on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T20:27:15.878Z · EA · GW
  1. As Katja's response alludes to, the non-Open-Phil chunk of GiveWell has more than doubled since 2015 (plus EA Funds has gone from zero to $9M, etc.)
  2. I see a few comments at the Reddit/LessWrong versions of this post intimating that EA does not want [much] more money, or has stopped trying to fundraise. This was not my impression at all. Is it not true that even just considering GiveWell's top charities and near-misses, they could absorb many millions more without being saturated?
Comment by TylerMaule (tylermaule) on Total Funding by Cause Area · 2021-03-09T00:14:01.889Z · EA · GW

I was thinking along very similar lines with 'Limitations' #1. It would be much better to model this as a contribution function in four dimensions, rather than only counting 'EA dollars'. 

Not only would this require more data, but one would need to assign a multiplier to each separate intervention à la GiveWell moral weights. What fraction of a 'Global Health' dollar is counted when Bill Gates funds vaccine research? Could be interesting for future work.

Comment by TylerMaule (tylermaule) on What is the financial size of the Effective Altruism movement? · 2021-03-07T22:13:11.821Z · EA · GW

I estimate $263 million as of 2020.