Why not give 90%?

post by HaydenW · 2020-03-23T02:53:54.938Z · score: 50 (26 votes) · EA · GW · 26 comments

(Cross-posted from my own blog. I also gave a talk on this at EAGxAustralia 2019.)

Agape donates 10% of her income each year to effective charities. This donation brings far greater happiness (or welfare, or whatever you think is valuable) to the recipients than it’d bring to her. And 10% is no small thing. (You could say it’s a good attenth.)
But Agape has a comfortable life. She could donate up to 90% before it really got tough – before each dollar would do more good for her than it would for those in poverty, or those on factory farms, etc., whom she might benefit with it.
By only giving 10%, is Agape doing the wrong thing?

Agape and her dilemma might sound familiar. She’s me. She might be you too.

Personally, I give a lot less than 90%. So I worry about this a lot – am I doing something deeply immoral by not sacrificing much more than I do? I suspect that many effective altruists have the same worry, and perhaps even feel guilty about not doing absolutely everything they can.

And this doesn’t just apply to donations. It’s similar for careers – you might, for example, go into a less high-impact career, maybe in academia, doing research which is a bit less beneficial to the world than some of the things you could do, but you really enjoy that research. Or, you could go do a job that you don’t enjoy at all. Maybe earning to give in finance could be an example of this, if that’s something you wouldn’t be able to bear. Not that that’s the most impactful option for many people, but imagine your own hypothetical high-impact career path that you’d find unbearable. The question is: by choosing a career which is more enjoyable but a bit less impactful, am I doing the wrong thing?

Or, even more importantly, am I not being a ‘Proper Effective Altruist’™?

Here’s my preferred, and pretty standard, definition of effective altruism.

Note that there’s no moral claim here, in the sense that it doesn’t say anything about what you should do. Effective altruism is simply something you do, rather a belief about what you’re obligated to do. Regardless of someone’s moral beliefs, if they use some amount of their time, money, etc to help others as effectively as they can, then they’re ‘doing effective altruism’.

In Agape’s case, giving less than she could, there’s no direct conflict between that and effective altruism.

But a lot of effective altruists do think we have moral obligations to engage in effective altruism – e.g., Peter Singer. I do too. I believe the following, which I suspect Peter Singer would agree with.

And anyone who believes this should conclude that we should use a large portion of our resources to help others. That would probably involve giving 90% of our income to charity each year, or using the 80,000 hours in our careers to do whatever helps others most, no matter how unpleasant the job is.

Of course, this wouldn’t imply that we should give away 99% of our income, that we should sell every piece of clothing and wear potato sacks to work, that we should literally bring ourselves down to the poverty line, that perhaps we shouldn’t even spend money on water to shower. That would probably be counterproductive. If you want to keep earning money, or keep having an impact in other ways, you probably need to smell okay, and not show up to work in a potato sack. Or you’ll get fired. (Maybe not in academia though…) Plus, it’s probably worth paying to have a place to live, a good night’s sleep, a decent diet and so on, so you can keep your productivity up.

But basic hygiene and so on often won’t cost the majority of your income. Maybe 10%, leaving you 90% to give away. Or maybe it takes you 50% to satisfy those basic needs. It’ll depend on the person.

But whether it’s 90% or 50%, Obligatory, demanding effective altruism seems to entail that almost all of us should do a lot more than we currently do. So, are most of us doing something deeply wrong? Should Peter Singer be disappointed in us?

No, I don't think he should. I think we're doing okay. In fact, I think we may be required to give a lot less than 90%. Peter Singer should still be happy with us.

To make the case for this, I’ll first have to introduce you to Professor Procrastinate - a classic example from philosophy.

Professor Procrastinate
A student is applying for a grad job at the last minute and needs a reference. She emails Professor Procrastinate on Monday, asking him to write her a reference letter. If he says yes, she will send him more info on Tuesday. The reference is due on Friday.
Procrastinate is the best person to do the reference and he can do it on time. However, he knows that he (almost) certainly will not do it on time – he puts the probability of finishing at under 5%. He is a habitual procrastinator and will (almost) certainly choose not to finish it. This failure would not be due to outside factors (e.g., a natural disaster). What's more, Procrastinate's failure to deliver would have very bad consequences. The student would not have time to seek another reference, so would not get the job.
If Procrastinate says no on Monday, the student will ask Dr Reliable, who would write the reference on time. It wouldn’t be quite as good as Procrastinate’s – perhaps good enough to get the job at a lower salary - but this would be much better than no reference.
What should Procrastinate do on Monday? Say yes or no?

Of course, the best thing that Professor Procrastinate can do that week is to say yes on Monday and then write the reference on Tuesday-Friday. But we're interested in what he should do on Monday.

He can't control his future actions; he can't ensure that his future self will carry through. In fact, he's nearly certain that he won't. Perhaps past experience has shown him this over and over. And yes, that may make him a bad person. But does it mean that he should say yes this time?

Suppose Procrastinate and I were a two-person academic team, and it was up to me whether to answer emails – to say yes or no to the student – and then up to him to actually write the reference. Surely I shouldn't say yes unless I actually think he'll do it. If I'm almost certain he won't, I should say no. I think this situation is the same in the relevant ways as above – Present!Procrastinate and Future!Procrastinate are effectively separate agents. And Present!Procrastinate should say no – doing so saves the student from disappointment. (I talk more about the case for saying no in this post.)

Back to Agape now. Here's the case from above, but with some extra details. You might notice that it's starting to sound similar to the case of Professor Procrastinate.

Agape donates 10% of her income each year to effective charities. This donation brings far greater value to the recipients than it’d bring to her.
But Agape has a comfortable life. She could donate up to 90% before it really got tough – before each dollar would do more good for her than it would for those in poverty, or those on factory farms, etc., whom she might benefit with it.
Agape really likes a bit of luxury, e.g., her sports car. Without her luxury comforts, she'd be 'demotorvated', you might say.
By her best estimate, if she had to subsist on 10% of her income, there’s a 50% chance each year that her future self would give up on donating altogether and keep everything for herself. She has 40 years left in her career.
By only giving 10% this year, is Agape doing the wrong thing?

Why would she give up on her plan to donate? It might be due to burnout, which is discussed a lot in the effective altruism community. Or it might just be due to changing her mind later on. I think that too is a risk worth predicting and mitigating, just as it is for Professor Procrastinate.

If I was donating 90% every year, I think my probability of giving up permanently would be even higher than 50% each year. If I had zero time and money left to enjoy myself, my future self would almost certainly get demotivated and give up on this whole thing. Maybe I’d come back and donate a bit less but, for simplicity, let’s just assume that if Agape gives up, she stays given up.

And Agape has 40 years left in her career. Suppose she tries to give 90% each year. Then, over 40 years, the expected amount she donates is:

𝔼(amount donated) = 0.9 + 0.9×0.5 + 0.9×0.5^2 + ... + 0.9×0.5^39 ≈ 1.8

That's in units of "years' worth of income". And 1.8 years' worth of income isn't a huge amount.

What if she tried to donate just 10% each year? And she has no risk of giving up. Then we have:

𝔼(amount donated) = 0.1 + 0.1 + 0.1 + ... + 0.1 = 4

That's more than twice as much - more than twice the positive impact (assuming constant marginal impact per dollar). And, if we suppose that her income increases over time, then the difference would be even greater.

Now, what if she donated 20%? And that brought her up to 5% annual risk of giving up?

𝔼(amount donated) = 0.2 + 0.2×0.95 + 0.2×0.95^2 + ... + 0.2×0.95^39 ≈ 3.5

Again, that's less than if she donated 10% with no risk! And that seems surprising: she's giving twice as much per year as she was at 10% (and at 90% she was giving nine times as much). But the additional risk shrinks the expected total to less than the safe 10% option. That's because the probability of failure compounds over 40 years and ends up awfully high: a 5% chance of giving up each year amounts to an 86% chance of having given up by the end.

So Agape's total impact is more sensitive to changes in that annual probability of giving up than to how much she's actually donating each year. A similar result holds in general: whenever donating twice as much per year would incur an additional 5% annual risk of giving up, it's not worth it. An additional compounding 5% annual probability cuts your expected impact by more than half over 40 years. Which is pretty counterintuitive, but that's probability for you.
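These totals are just geometric series, so they're easy to check numerically. Here's a minimal sketch in Python (the function name is mine, and the probabilities are the made-up ones from above):

```python
def expected_donation(rate, annual_giveup_risk, years=40):
    """Expected total donated, in years' worth of income.

    Model from the post: the donor gives `rate` of her income each year
    until she permanently gives up; each year after the first there is an
    independent `annual_giveup_risk` chance that she has given up.
    """
    survival = 1.0  # probability she is still donating in a given year
    total = 0.0
    for _ in range(years):
        total += rate * survival
        survival *= 1 - annual_giveup_risk
    return total

print(round(expected_donation(0.9, 0.5), 2))   # 1.8
print(round(expected_donation(0.1, 0.0), 2))   # 4.0
print(round(expected_donation(0.2, 0.05), 2))  # 3.49
# Chance of having given up at some point across the 39 later years:
print(round(1 - 0.95 ** 39, 2))                # 0.86
```

The same function makes it easy to plug in your own guesses for donation rate and annual risk of giving up.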

Of course, I’ve made up all of Agape's probabilities. I don't know of any good data on the probability of effective altruists giving up based on their level of donation or career demandingness, so I’ve picked numbers out of thin air. If you, dear reader, want to figure out what to do in your own situation, you’ll have to figure out how much more likely you are to give up if you had a certain amount less spending money. You might be unaffected by that, or you might really struggle (as I probably would).

But, in general, I think we can justify donating less than 90%, since that'd be enough to make any of us very likely to give up. In fact, we might be able to justify donating a lot less, depending on how sensitive our motivation is to being deprived of nice things. Assuming that we’re at least a bit sensitive, we should probably donate quite a lot less.

In fact, donating a full 90% would then be reckless. You’d end up doing a lot less good. According to Obligatory, demanding effective altruism - the same view that initially recommended giving 90% - you’re actually morally obligated not to!

26 comments

Comments sorted by top scores.

comment by Halffull · 2020-03-24T19:38:22.971Z · score: 14 (9 votes) · EA(p) · GW(p)

I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

If we take the data from here with 0 grains of salt [EA · GW], you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is the object-level question is complicated :).

comment by Peter_Hurford · 2020-03-27T07:58:42.831Z · score: 15 (5 votes) · EA(p) · GW(p)

such as consistency and justification effects

And selection effects!

comment by HaydenW · 2020-03-25T11:22:47.758Z · score: 2 (2 votes) · EA(p) · GW(p)
I think this is actually quite a complex question.

Definitely! I simplified it a lot in the post.

If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up isn't very sensitive to the amount you donate, it's high, and your income isn't going to increase a whole lot over your lifetime. I think those first two things might be true of a lot of people. And so will the third thing, effectively, if your income doesn't increase by more than 2-3x.

If we take the data from here with 0 grains of salt [EA · GW], you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is the object-level question is complicated :).

My guess is that the main reason for that is that more devoted people tend to pledge higher amounts. I think if you took some of those 10%ers and somehow made them choose to switch to 50%, they'd be far more likely than before to give up.

But yeah, it's not entirely clear that P(giving up) increases with amount donated, or that either causally affects the other. I'm just going by intuition on that.

comment by Jamie_Harris · 2020-03-28T10:11:16.710Z · score: 4 (2 votes) · EA(p) · GW(p)

<<My guess is that the main reason for that is that more devoted people tend to pledge higher amounts.>>

That could account for part of it, though, according to this article, "multiple studies have demonstrated that people perform better when goals are set higher and made more challenging.” I haven't looked into this in more detail, but I've heard other social scientists who research behaviour change make similar claims (e.g. on this podcast).

My guess is that there's a sweet spot of challenge/demandingness that is optimal, and that that sweet spot varies substantially by the individual.

(PS thanks for this post, I've had similar thoughts before and like the theoretical demonstration in expected value terms of the risk of giving up.)

comment by Gregory_Lewis · 2020-03-25T12:15:34.912Z · score: 11 (4 votes) · EA(p) · GW(p)

Nice one. Apologies for once again offering my 'c-minor mood' key variation: Although I agree with the policy upshot, 'obligatory, demanding effective altruism' does have some disquieting consequences for agents following this policy in terms of their moral self-evaluation.

As you say, Agape does the right thing if she realises (similar to prof procrastinate) that although, in theory, she could give 90% (or whatever) of her income/effort to help others, in practice she knows this isn't going to work out, and so given she wants to do the most good, she should opt for doing somewhat less (10% or whatever), as she foresees being able to sustain this.

Yet the underlying reason for this is a feature of her character which should be the subject of great moral regret. Bluntly: she likes her luxuries so much that she can't abide being without them, despite being aware (inter alia) that a) many people have no choice but to go without the luxuries she licenses herself to enjoy; b) said self-provision implies grave costs to those in great need if (per impossibile) she could give more; c) her competing 'need' doesn't have great non-consequentialist defences (cf. if she was giving 10% rather than 90% due to looking after members of her family); d) there's probably not a reasonable story of desert for why she is in this fortunate position in the first place; e) she is aware of other people, similarly situated to her, who nonetheless do manage to do without similar luxuries and give more of themselves to help others.

This seems distinct from other prudential limitations a wise person should attend to. Agape, when making sure she gets enough sleep, may in some sense 'regret' she has to sleep for several hours each day. Yet it is wise for Agape to sleep enough, and needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait. It is also wise for Agape to give less in the OP given her disposition of, essentially, "I know I won't keep giving to charity unless I also have a sports car". But even if Agape can't help this no more than needing to sleep, this trait is blameworthy.

Agape is not alone in having blameworthy features of her character - I, for one, have many; moral saintliness is rare, and most readers probably could do more to make the world better were they better people. 'Obligatory, demanding effective altruism' would also make recommendations against responses to this fact which are counterproductive (e.g. excessive self-flagellation, scrupulosity). I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.[1] I often feel EAs tend to err in the direction of being estranged from their own virtue; but they should also try to avoid being too complaisant to their own vice.


[1] Cf. Kierkegaard, Sickness unto Death

Either in confused obscurity about oneself and one’s significance, or with a trace of hypocrisy, or by the help of cunning and sophistry which is present in all despair, despair over sin is not indisposed to bestow upon itself the appearance of something good. So it is supposed to be an expression for a deep nature which thus takes its sin so much to heart. I will adduce an example. When a man who has been addicted to one sin or another, but then for a long while has withstood temptation and conquered -- if he has a relapse and again succumbs to temptation, the dejection which ensues is by no means always sorrow over sin. It may be something else, for the matter of that it may be exasperation against providence, as if it were providence which had allowed him to fall into temptation, as if it ought not to have been so hard on him, since for a long while he had victoriously withstood temptation. But at any rate it is womanish [recte maudlin] without more ado to regard this sorrow as good, not to be in the least observant of the duplicity there is in all passionateness, which in turn has this ominous consequence that at times the passionate man understands afterwards, almost to the point of frenzy, that he has said exactly the opposite of that which he meant to say. Such a man asseverated with stronger and stronger expressions how much this relapse tortures and torments him, how it brings him to despair, "I can never forgive myself for it"; he says. And all this is supposed to be the expression for how much good there dwells within him, what a deep nature he is.

comment by willbradshaw · 2020-03-25T16:06:46.023Z · score: 4 (3 votes) · EA(p) · GW(p)

I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.

Out of interest, do you think this attitude for consequentialist reasons (e.g. such an attitude will lead to greater effort devoted towards self-improvement / pruning of not-actually-needed luxuries) or non-consequentialist ones (it's just inherently blameworthy to really want a sports car when children in Africa are starving)?

needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait

It's not really clear to me why a need for sleep is not blameworthy while a psychological attachment to luxuries is. One need is universal, while the other is particular, but I'm not sure that matters per se? And even that distinction breaks down if you posit that Agape needs more sleep than other people.

I think you could make the claim that in reality there is a difference in your ability to affect your future self's attitude to luxuries (e.g. by incrementally weaning yourself off them, cultivating mindfulness, etc.), such that regret is more useful in one case than the other, but if we assume ex hypothesi that that isn't the case (Agape's desire for a sports car is deep-seated and unshakeable) then I'm not sure whence the difference in blameworthiness comes.

comment by Gregory_Lewis · 2020-03-26T11:42:23.593Z · score: 6 (4 votes) · EA(p) · GW(p)

Part of the story, on a consequentialising-virtue account, is that desire for luxury is typically amenable to being changed in general, if not in Agape's case in particular. Thus her attitude of regret, rather than shrugging her shoulders, typically makes things go better – if not for her, then for third parties who have a shot at improving this aspect of themselves.

I think most non-consequentialist views (including ones I'm personally sympathetic to) would fuzzily circumscribe character traits where moral blameworthiness can apply even if they are incorrigible. To pick two extremes: if Agape was born blind, and this substantially impeded her from doing as much good as she would like, the commonsense view could sympathise with her regret, but insist she really has 'nothing to be sorry about'; yet if Agape couldn't help being a vicious racist, and this substantially impeded her from helping others (say, because the beneficiaries are members of racial groups she despises), this is a character-staining fault Agape should at least feel bad about even if being otherwise is beyond her - plausibly, it would recommend her make strenuous efforts to change even if both she and others knew for sure all such attempts are futile.

comment by ofer · 2020-03-25T12:17:13.136Z · score: 10 (8 votes) · EA(p) · GW(p)

Thanks for writing this!

I worry that people who are new to EA might read this post and get the impression that there's an expectation from people in EA to have some form of utilitarianism as their only intrinsic goal. So I'd like to flag that EA is a community of humans :). Humans are the result of human evolution—a messy process that roughly optimizes for inclusive fitness. It's unlikely that any human can be perfectly modeled as a utilitarian (with limited will power etcetera, but without any intrinsic goal that is selfish).

Of course, this does not imply we shouldn't have important discussions about burnout in EA. (In the case of the OP I would just pose the question a bit differently, maybe: "Should a utilitarian give 90%?").

comment by Khorton · 2020-03-25T13:35:33.044Z · score: 8 (5 votes) · EA(p) · GW(p)

Also, many EAs don't identify as utilitarians, like me!

comment by kbog · 2020-03-23T21:06:09.110Z · score: 7 (5 votes) · EA(p) · GW(p)

You're assuming that the probability of giving up each year is conditionally independent. In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Moreover, I rather doubt that the probability of turning selfish and giving up on Effective Altruism can be nearly as high as 50% in a given year. If it were that high, I think we'd have more evidence of it, in spite of the typical worries regarding how we can hear back from people who aren't interested anymore.

Also, this doesn't break your point, but I think percentages are the wrong way to think about this. In reality, donations should be much more dependent upon local cost of living than upon your personal salary. If COL is $40k and you make $50k then donate up to $10k. If COL is $40k and you make $200k then donate up to $160k.

People whose jobs are higher impact/higher-salary (they are correlated due to donation potential if nothing else) are likely to face more expensive costs of living and are also likely to obtain greater benefits from personal spending (averting a 1% chance of personal burnout is much more important if your job is high-impact, saving an hour out of your week is much more important if your hourly wage is higher, etc). So the appropriate amount of personal spending does scale somewhat with income. However this effect is weak enough that I think it makes more sense to usually think about thresholds rather than percentages.

comment by HaydenW · 2020-03-23T23:30:08.066Z · score: 4 (2 votes) · EA(p) · GW(p)
In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Yep, I agree. In general, the real-life case is going to be more complicated in a bunch of ways, which tug in both directions.

Still, I suspect that, even if someone managed to donate a lot for a few years, there'd still be some small independent risk of giving up each year. And even a small such risk cuts down your expected lifetime donations by quite a bit: e.g., a 1% p.a. risk of giving up for 37 years cuts down the expected value by 16% (and far more if your income increases over time).
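That 16% figure is easy to verify under the same independent-risk, flat-income model as in the post – a quick sketch (function name mine):

```python
def retained_fraction(annual_risk, years):
    """Expected lifetime donations with a constant annual risk of giving up,
    as a fraction of the no-risk expectation (flat income assumed)."""
    with_risk = sum((1 - annual_risk) ** k for k in range(years))
    return with_risk / years

# A 1% p.a. risk over 37 years cuts expected lifetime donations by ~16%:
print(round(1 - retained_fraction(0.01, 37), 2))  # 0.16
```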

Moreover, I rather doubt that the probability of turning selfish and giving up on Effective Altruism can be nearly as high as 50% in a given year. If it were that high, I think we'd have more evidence of it, in spite of the typical worries regarding how we can hear back from people who aren't interested anymore.

Yep, that seems right. Certainly at the 10% donation level, it should be a lot lower than 50% (I hope!). I was thinking of 50% p.a. as the probability of giving up after ramping up to 90% per year, at least in my own circumstances (living on a pretty modest grad student stipend).

Also, there's a little bit of relevant data on this in this post [EA · GW]. Among the 38 people that person surveyed, the dropout rate was >50% over 5 years. So it's pretty high at least. But not clear how much of that was due to feeling it was too demanding and then getting demotivated, rather than value drift.

Also, this doesn't break your point, but I think percentages are the wrong way to think about this. In reality, donations should be much more dependent upon local cost of living than upon your personal salary. If COL is $40k and you make $50k then donate up to $10k. If COL is $40k and you make $200k then donate up to $160k.

Yes, good point! I'd agree that that's a better way to look at it, especially for making broad generalisations over different people.

comment by David_Moss · 2020-03-26T13:25:00.882Z · score: 4 (4 votes) · EA(p) · GW(p)

There is detailed discussion of some closely related issues in this book chapter in the Effective Altruism: Philosophical Issues book edited by Hilary Greaves and Theron Pummer. The author discusses these in less detail in this post on PEA Soup.

I also ran a small survey to test effective altruists' views on the thought experiments discussed. I haven't gotten around to writing it up, due to more pressing tasks. I could also share the survey again here, if people are particularly interested.

comment by elifland · 2020-03-23T21:00:20.005Z · score: 4 (3 votes) · EA(p) · GW(p)
If I was donating 90% every year, I think my probability of giving up permanently would be even higher than 50% each year. If I had zero time and money left to enjoy myself, my future self would almost certainly get demotivated and give up on this whole thing. Maybe I’d come back and donate a bit less but, for simplicity, let’s just assume that if Agape gives up, she stays given up.

The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion. It would be helpful to have data to determine which of these intuitions are correct.

Perhaps we should be encouraging a strategy where people increase their percentage donated by a few percentage points per year until they find the highest sustainable level for them. Combined with a community norm of acceptance for reductions in amounts donated, people could determine their highest sustainable donation level while lowering risk of stopping donations entirely.

comment by HaydenW · 2020-03-23T23:05:10.349Z · score: 3 (2 votes) · EA(p) · GW(p)
The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion.

Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if it's even somewhat likely (say, 1% p.a.), that may make a far bigger dent in your expected lifelong donations than do risks of giving up partially.

Perhaps we should be encouraging a strategy where people increase their percentage donated by a few percentage points per year until they find the highest sustainable level for them. Combined with a community norm of acceptance for reductions in amounts donated, people could determine their highest sustainable donation level while lowering risk of stopping donations entirely.

That certainly sounds sensible to me!

comment by Pigman · 2020-03-27T12:00:44.997Z · score: -6 (6 votes) · EA(p) · GW(p)

As Effective Altruists, maybe the most effective thing to do would be to try to change the wealth distribution somehow...that could have the most positive and impactful effect.

Giving Pledge is one example of an effort to change that, but I'm not so sure of its efficacy....

Last time I checked, 1% of the world's population still holds at least half of the total wealth in the world. In 2013, Credit Suisse estimated that 3.2 billion individuals – more than two thirds of adults in the world – have wealth below US$10,000. With that in mind, I would say that the great majority of people aren't even motivated to give the 10%.

Logic would say that that gap in wealth will keep getting bigger and bigger, or at least has that potential.

comment by HaydenW · 2020-03-27T22:56:51.808Z · score: 3 (3 votes) · EA(p) · GW(p)

This is pretty off-topic, sorry.

comment by Pigman · 2020-03-28T13:31:20.694Z · score: 1 (1 votes) · EA(p) · GW(p)

I see, thanks for the feedback, wasn't aware of the forum's rules

comment by srh3 · 2020-06-15T18:18:48.500Z · score: -3 (3 votes) · EA(p) · GW(p)

Wow THIS is everything that's wrong with Effective Altruism. Let's post hoc justify our role in an oppressive system with a bunch of algorithms pulled from thin air and pats on the back. Changing the system? Totally off-topic.

comment by lucy.ea8 · 2020-03-23T06:34:05.628Z · score: -12 (13 votes) · EA(p) · GW(p)

Hey, interesting post. I don't think EAs should beat themselves up about how much they donate or don't donate. Anybody who gives 10% via EA has done more than 99.99% of humanity; that is worth a good night's sleep with a clear conscience.

However, the EA community should ask whether it is missing a cause priority. Why won't EA talk about race, gender, intersectionality, etc.? EA would be more effective if those lenses were used. Likewise, diversity in EA is poor; the papers and experts referred to are Western, with little representation of voices from people around the world. The just-concluded EA Global is a good example of this – even the "Global Health and Development" track featured two people from the USA.

"Global Health and Development" is itself problematic framing. The UN, via the UNDP, has published the Human Development Index; EA should at a minimum be focused on Human Development Indicators, not arbitrarily deciding according to its biases.


(To people who want to downvote please explain why)

comment by trammell · 2020-03-23T14:58:32.045Z · score: 7 (7 votes) · EA(p) · GW(p)

I downvoted the comment because it's off-topic.

comment by lucy.ea8 · 2020-03-23T23:48:50.672Z · score: 0 (0 votes) · EA(p) · GW(p)

Thanks trammell. I notice that only you told me why, I assume I got 5 downvotes at a minimum.

While not directly on topic, giving more is about bigger impact, and if D&I is poor, EA's impact is worse. That's why I responded. My thinking is that money is not the constraint; understanding, or the lack of it, is the constraint in improving the world. For that, EA needs open hearts and minds, not https://en.wikipedia.org/wiki/In-group_favoritism

comment by Manuel_Allgaier · 2020-03-25T10:23:19.029Z · score: 1 (1 votes) · EA(p) · GW(p)

Only trammel told me why

Maybe others downvoted for the same reason (off-topic), saw that trammell had already commented, and then just upvoted trammell's comment (5 upvotes) instead of writing the same thing themselves?

comment by Khorton · 2020-03-23T17:32:27.429Z · score: 6 (5 votes) · EA(p) · GW(p)

You could discuss this on a new top level post or this Facebook group: https://m.facebook.com/profile.php?id=1863633717221799&ref=content_filter

comment by kbog · 2020-03-24T02:40:31.471Z · score: 3 (2 votes) · EA(p) · GW(p)

To respond to the on-topic part of your post (I also downvoted because it's mostly off-topic), I don't see how you can shrug off the benefits of donating >10% as if 10% is good enough, while also saying that we must interview and read whole swathes of additional papers and people in the hope that some of it might be useful for achieving better cause prioritization. If you really want Effective Altruists to capture the benefits from reading non-Western scientific literature, then clearly you don't think that we can shrug our shoulders and say that we're good enough, and should recognize that donating more money is another way we can similarly do better. The two are actually fungible, as you can donate money to movement growth with advertisements targeted to foreign countries, or you can donate to cause prioritization efforts with researchers hired to survey, review and summarize the fields of literature that you think are valuable.

comment by lucy.ea8 · 2020-03-24T03:56:05.211Z · score: -2 (2 votes) · EA(p) · GW(p)

After spending more than half a billion dollars, and potentially directing more than 100 million dollars every year, the EA community has no understanding of why the HDI was created, and has no answer for why Education was dropped.

"Global health and development" = HDI - Education

It is not a question of money, it is a question of Diversity and Inclusion.

My hypothesis is that if humanity really understands how the world works then the problems can be solved easily, otherwise we will keep putting effort into less effective ways, sure EA is more effective but it still has far to go, the deficit in EA is not money it is understanding.

comment by kbog · 2020-03-24T14:47:59.840Z · score: 5 (6 votes) · EA(p) · GW(p)

You can receive answers to these claims by making a dedicated thread rather than hijacking the current one.