Cash Transfers as a Simple First Argument

post by tylermaule · 2021-04-17T15:00:27.305Z · EA · GW · 4 comments




The beauty of GiveDirectly is that, while it is unlikely to be the very best use of funds on the margin, it probably has the most robust case of any intervention that is plausibly close to being the very best. It seems to me that GiveDirectly is immune to the knee-jerk criticisms of EA (besides 'charity starts at home', and even then there are some domestic cash transfer options).

As such, it has occurred to me that this is probably the best starter argument I can make to a friend who is EA-agnostic. I hope to flesh out that case here; read on for more detail, but the main points are as follows:

  1. There is a pretty clear logarithmic relationship between personal income and self-reported happiness, both within and across countries.
  2. Even if all we do is transfer cash, no strings attached, happiness data suggests an average American could have an altruistic return of ~40x and a rich American ~600x.
  3. (If you think that's good, GiveWell believes there are several options ~15x better!)
  4. Arguably, some of the 'cost' (the presumed dip in your happiness) could be recouped by the feeling that your actions are having a disproportionately positive impact.
Figure 1: Log-linearity of happiness/income, within and across countries
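The ~40x and ~600x figures above follow from back-of-the-envelope arithmetic: if happiness is linear in log income, the happiness gained per marginal dollar is inversely proportional to income, so the altruistic return of a transfer is roughly the ratio of donor income to recipient income. A minimal sketch (the income figures below are illustrative assumptions, not the post's underlying data):

```python
def altruistic_return(donor_income: float, recipient_income: float) -> float:
    """With happiness ~ log(income), marginal happiness per dollar is
    proportional to 1/income, so a marginal dollar is worth roughly
    donor_income / recipient_income times more to the recipient."""
    return donor_income / recipient_income

# Illustrative annual incomes in USD (assumed for the sketch):
GIVEDIRECTLY_RECIPIENT = 500   # roughly the world's poorest
AVERAGE_AMERICAN = 20_000
RICH_AMERICAN = 300_000

print(altruistic_return(AVERAGE_AMERICAN, GIVEDIRECTLY_RECIPIENT))  # 40.0
print(altruistic_return(RICH_AMERICAN, GIVEDIRECTLY_RECIPIENT))     # 600.0
```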

Data and Discussion

The log-linear relationship between income and happiness (as best represented by Our World in Data, e.g. Figures 1&2) was one of the main threads that first got me thinking with an EA bent. While admittedly many of the causes I now value can at first seem quite abstract and questionable, I found that this was very difficult to ignore from the start.

Although there's a lot going on in Figure 1, and the underlying data isn't fully public, just by zooming in and tracing some short trendlines we can make the following observations:

Figure 2: GDP vs Happiness on a linear scale



Comments sorted by top scores.

comment by Benjamin_Todd · 2021-04-18T16:40:06.918Z · EA(p) · GW(p)

Hi there, I agree it's an interesting opening argument. I was wondering if you had seen this article before, which takes a similar approach:

One concern about this approach is that I think it can seem obviously "low leverage" to the types of people focused on entrepreneurship, policy change and research (some of the people we most want to appeal to), and can give them the impression that EA is mainly about 'high confidence' giving rather than 'high expected value' giving, which is already one of the most common misconceptions out there.

Replies from: tylermaule
comment by tylermaule · 2021-04-18T18:50:40.803Z · EA(p) · GW(p)

Hi Benjamin,

I totally forgot about that article, thank you for pointing it out! That is an excellent resource.

Your concern totally makes sense. Something I've been thinking about lately is whether EA should make a more concerted effort to promote 'streams' of varying fidelity, intended for audiences coming from very different places.

Put another way: say I have a co-worker who every year gives to traditional, community-based charitable orgs, and has never considered giving that money elsewhere. Is this person more likely to spend time on the excellent, in-depth philosophical articles and podcasts I push on them, or to engage with a more direct and irrefutable appeal to logic? I tend to think that the latter can serve as a gateway to the former.

comment by Neel Nanda · 2021-04-18T22:10:56.832Z · EA(p) · GW(p)

I really like this example! I used it in an interview I gave about EA and thought it went down pretty well. My main concern with using it is that I don't personally fund direct cash transfers (or think they're anywhere near the highest impact thing), so I think it can misrepresent the movement, and that it's disingenuous to imply that EA is about robustly good things like this when I actually care most about things like AI Safety.

As a result, I frame the example like this (if I can have a high-context conversation):

  • Effectiveness, and identifying the highest impact interventions, is a cornerstone of EA. I think this is super important, because there's a really big spread between how much good different interventions do, much more than feels intuitive
  • Direct cash transfers are a proof of concept: There's good evidence that doubling your income increases your wellbeing by the same amount, no matter how wealthy you were to start with. We can roughly think of helping someone as just giving them money, and so increasing their income. The average person in the US has income about 100x the income of the world's poorest people, and so with the resources you'd need to double the income of an average American, you could help 100x as many of the world's poorest people!
    • Contextualise, and emphasise just how weird 100x differences are - these don't come up in normal life. It'd be like you were considering buying a laptop for $1000, shopped around for a bit, and found one just as good for $10! (Pick an example that I expect to resonate with big expenses the person faces, eg a laptop, car, rent, etc)
    • Emphasise that this is just a robust example as a proof of concept, and that in practice I think we can do way better - this just makes us confident that spread is out there, and worth looking for. Depending on the audience, maybe explain the idea of hits-based giving, and risk neutrality.
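The doubling argument in the bullets above can be sketched numerically: with log utility, each doubling of income adds the same wellbeing increment regardless of the starting point, so the money that would double one average American's income could instead double the incomes of roughly 100 of the world's poorest people. A rough sketch (the income figures are illustrative assumptions):

```python
import math

def wellbeing_gain(old_income: float, new_income: float) -> float:
    """Log utility: the gain depends only on the ratio new/old."""
    return math.log2(new_income / old_income)

# Doubling income yields the same gain at any starting level:
assert wellbeing_gain(500, 1_000) == wellbeing_gain(50_000, 100_000) == 1.0

# Illustrative (assumed) annual incomes in USD:
us_income, poorest_income = 50_000, 500
people_helped = us_income / poorest_income
# The $50,000 needed to double one American's income could instead
# double the incomes of `people_helped` of the world's poorest.
print(people_helped)  # 100.0
```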
Replies from: tylermaule
comment by tylermaule · 2021-04-20T11:34:03.899Z · EA(p) · GW(p)

Thanks for sharing! I like the way you phrased it in the interview, I think that’s a nice way to start.