Posts

Defining Effective Altruism 2019-07-19T10:49:54.253Z · score: 59 (30 votes)
Age-Weighted Voting 2019-07-12T15:21:31.538Z · score: 52 (36 votes)
A philosophical introduction to effective altruism 2019-07-10T13:40:19.228Z · score: 49 (20 votes)
Aid Scepticism and Effective Altruism 2019-07-03T11:34:22.630Z · score: 66 (37 votes)
Announcing the new Forethought Foundation for Global Priorities Research 2018-12-04T10:36:06.536Z · score: 62 (38 votes)
Projects I'd like to see 2017-06-12T16:19:52.178Z · score: 29 (33 votes)
Introducing CEA's Guiding Principles 2017-03-08T01:57:00.660Z · score: 40 (44 votes)
[CEA Update] Updates from January 2017 2017-02-13T20:56:21.121Z · score: 9 (9 votes)
Introducing the EA Funds 2017-02-09T00:15:29.301Z · score: 46 (46 votes)
CEA is Fundraising! (Winter 2016) 2016-12-06T16:42:36.985Z · score: 9 (11 votes)
[CEA Update] October 2016 2016-11-15T14:49:34.107Z · score: 7 (9 votes)
Setting Community Norms and Values: A response to the InIn Open Letter 2016-10-26T22:44:30.324Z · score: 30 (38 votes)
CEA Update: September 2016 2016-10-12T18:44:34.883Z · score: 7 (11 votes)
CEA Updates + August 2016 update 2016-10-12T18:41:43.964Z · score: 7 (11 votes)
Should you switch away from earning to give? Some considerations. 2016-08-25T22:37:19.691Z · score: 14 (16 votes)
Some Organisational Changes at the Centre for Effective Altruism 2016-07-23T04:29:02.144Z · score: 31 (33 votes)
Call for papers for a special journal issue on EA 2016-03-14T12:46:39.712Z · score: 9 (11 votes)
Assessing EA Outreach’s media coverage in 2014 2015-03-18T12:02:38.223Z · score: 11 (11 votes)
Announcing a forthcoming book on effective altruism 2014-03-16T13:00:35.000Z · score: 1 (1 votes)
The history of the term 'effective altruism' 2014-03-11T02:03:32.000Z · score: 20 (16 votes)
Where I'm giving and why: Will MacAskill 2013-12-30T23:00:54.000Z · score: 1 (1 votes)
What's the best domestic charity? 2013-12-10T19:16:42.000Z · score: 1 (1 votes)
Want to give feedback on a draft sample chapter for a book on effective altruism? 2013-09-22T04:00:15.000Z · score: 0 (0 votes)
How might we be wildly wrong? 2013-09-04T19:19:54.000Z · score: 1 (1 votes)
Money can buy you (a bit) of happiness 2013-07-29T04:00:59.000Z · score: 0 (0 votes)
On discount rates 2013-07-22T04:00:53.000Z · score: 0 (0 votes)
Notes on not dying 2013-07-15T04:00:05.000Z · score: 1 (1 votes)
Helping other altruists 2013-07-01T04:00:08.000Z · score: 2 (2 votes)
The rules of effective altruism. Rule #1: don’t die 2013-06-24T04:00:29.000Z · score: 1 (1 votes)
Vegetarianism, health, and promoting the right changes 2013-06-07T04:00:43.000Z · score: 0 (0 votes)
On the robustness of cost-effectiveness estimates 2013-05-24T04:00:47.000Z · score: 1 (1 votes)
Peter Singer's TED talk on effective altruism 2013-05-22T04:00:50.000Z · score: 0 (0 votes)
Getting inspired by cost-effective giving 2013-05-20T04:00:41.000Z · score: 1 (1 votes)
$1.25/day - What does that mean? 2013-05-17T04:00:25.000Z · score: 0 (0 votes)
An example of do-gooding done wrong 2013-05-15T04:00:16.000Z · score: 3 (3 votes)
What is effective altruism? 2013-05-13T04:00:31.000Z · score: 9 (9 votes)
Doing well by doing good: careers that benefit others also benefit you 2013-04-18T04:00:02.000Z · score: 0 (0 votes)
To save the world, don’t get a job at a charity; go work on Wall Street 2013-02-27T05:00:23.000Z · score: 2 (2 votes)
Some general concerns about GiveWell 2012-12-23T05:00:10.000Z · score: 0 (2 votes)
GiveWell's recommendation of GiveDirectly 2012-11-30T05:00:28.000Z · score: 1 (1 votes)
Researching what we should 2012-11-12T05:00:37.000Z · score: 0 (0 votes)
The most important unsolved problems in ethics 2012-10-15T02:28:58.000Z · score: 6 (5 votes)
How to be a high impact philosopher, part II 2012-09-27T04:00:27.000Z · score: 0 (2 votes)
How to be a high impact philosopher 2012-05-08T04:00:25.000Z · score: 0 (2 votes)
Practical ethics given moral uncertainty 2012-01-31T05:00:01.000Z · score: 2 (2 votes)
Giving isn’t demanding* 2011-11-25T05:00:04.000Z · score: 0 (0 votes)

Comments

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:54:40.458Z · score: 2 (1 votes) · EA · GW

Thanks! If I had to guess now, I'd also favour a tapering-in system. And I think that surrogacy voting is pretty interesting, too.

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:53:35.648Z · score: 2 (1 votes) · EA · GW

Thanks! I hadn't seen that before!

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:51:45.979Z · score: 4 (2 votes) · EA · GW

Cool! Could you send me a link to the study?

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:40:34.563Z · score: 7 (4 votes) · EA · GW
> First of all I think using live political examples like this is not a great idea.

I don't think that a blanket ban on live political examples is a great norm. There are real risks of tribalism in using them, but we also just have a lot more information with which to test our views (compared to, say, how age-weighted voting would have affected the French Revolution). If we're worried about tribalism, we should just call tribalism out directly, rather than ban certain topics.

In this particular case, I found thinking about Brexit and the Scottish Independence Referendum helpful for testing my starting intuitions. In particular, it somewhat weakened my starting assumption that voters' political positions reflect rational self-interest - I don't really see the age-related discrepancies in people's votes on Brexit and Scottish Independence as being well explained by whether the position trades short-term benefits for long-term harms. (Rather than, say, by how much weight one puts on national sovereignty, which is a political view that might just go in and out of fashion.)

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:30:03.547Z · score: 2 (1 votes) · EA · GW
> (I personally think that I'm better at picking policies at 30 than 20, and expect to be better still at 40.)

Again, see my replies to Holly and Larks about where the median voter's age ends up. I'm going to add that point as an edit to the main post.


> Under your proposal the change happens when the next generation turns 18-37, but doesn't seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.

This is a good point, and my post overstates the case on this. There is still an important difference, though, which is that if there's a difference between the views of 60 year olds and 30 year olds, we can foresee there will be an intertemporal inconsistency and can choose to avoid it. Whereas if there's a difference between the views of 30 year olds and 0 year olds we (presumably) don't know about it and can't do anything about it.


There's another intertemporal-inconsistency consideration. If we assume rational self-interest and risk-aversion (just in the sense of consumption having diminishing utility), we should expect that earlier in life, people will prefer more redistributive policies (e.g. progressive tax and redistribution, a social safety net for disabilities, weighing the costs to prisoners of harsh penalties against the benefits of a lower crime rate). This is because they face uncertainty about how much they are going to earn, whether they are going to end up disabled, and whether they'll commit a crime. Older people, by contrast, know how things have turned out for them, and face much less risk: those who are wealthier will no longer support redistributive policies; those who know they aren't going to jail will prefer harsh-on-crime policies. The early age-weighting is therefore one way to hold people to the decisions they'd make ex ante. I think it's up for debate how much that matters, but it's appealing to me - I'm generally attracted to veil-of-ignorance arguments, and this makes political decision-making slightly more veil-of-ignorance-y.
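To make the diminishing-utility point concrete, here is a toy calculation (the numbers are mine, purely for illustration): suppose a young voter faces even odds of ending up with an income of 1 or 9 (in arbitrary units), and utility of consumption is u(c) = sqrt(c). Then:

    EU(no redistribution) = 0.5·sqrt(1) + 0.5·sqrt(9) = 0.5 + 1.5 = 2
    EU(full redistribution to the mean of 5) = sqrt(5) ≈ 2.24

Ex ante, the voter prefers redistribution; ex post, the lucky voter (income 9, utility 3) prefers none - exactly the lifetime shift in political preference described above.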

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:16:18.442Z · score: 2 (1 votes) · EA · GW

I mentioned this in response to Larks too, but one thing to bear in mind is that even using the weighting scheme I suggested in the post - which seemingly strongly favors young people - the median voter (in the US) would move from age 55 to age 40. So, at least assuming the median voter theorem is approximately accurate in this context, the key epistocratic question is about 40-year-olds vs 55-year-olds.


And if I had to choose now, I would also prefer a tapering system, where vote-weight starts off lower, then increases, and then decreases again. A benefit of that system is that you could make the 'voting age' a gradual progression rather than an immediate jump. Perhaps 12yr olds get a very weak vote, which scales up until 25, then scales down after 35.

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:09:45.083Z · score: 4 (2 votes) · EA · GW

I really like this proposal! And agree it's radically more tractable than such a major change to voting systems.

Comment by william_macaskill on Age-Weighted Voting · 2019-07-18T14:07:20.059Z · score: 8 (2 votes) · EA · GW

Hi, thanks so much for doing this! This is really interesting.

Something I think wasn't sufficiently clear from the post itself: even using the weighting scheme I suggested in the post, the median voter (in the US) would move from age 55 to age 40. (H/T Zach Groff for these numbers. Note this doesn't account for incentive effects - younger people becoming more likely to turn out to vote - which could lower the median age to a little under 40.) And under reasonable assumptions (with the most controversial being single-peaked preferences), the median voter is decisive. So it's not as if 20-year-olds would now be deciding what happens. On the epistocratic question, then, we should be asking whether we think 40-year-olds will make better decisions than 55-year-olds, not whether 20-year-olds make better decisions than 60-year-olds. I'd need to dig into the studies a lot more to determine whether 40-year-olds discount more steeply than 55-year-olds.
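As a sketch of how a claim like this could be checked, here's a minimal weighted-median calculation. Both the weighting function and the age distribution below are hypothetical placeholders - not the scheme from the post, and not real census data:

```python
# Sketch: the median voter's age under age-weighted voting.
# vote_weight and the voter distribution are illustrative stand-ins;
# real numbers would come from turnout-weighted census data.

def vote_weight(age):
    """Hypothetical weighting: young votes count more, tapering with age."""
    if age < 18:
        return 0.0
    return max(1.0, 4.0 - 0.05 * (age - 18))

voters = {age: 1000 for age in range(18, 91)}  # stylised flat distribution

def weighted_median_age(voters, weight_fn):
    """Return the age at which half the total vote-weight has accumulated."""
    total = sum(n * weight_fn(a) for a, n in voters.items())
    cumulative = 0.0
    for age in sorted(voters):
        cumulative += voters[age] * weight_fn(age)
        if cumulative >= total / 2:
            return age

print(weighted_median_age(voters, vote_weight))  # 42 here, vs ~54 unweighted
```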

And then: I've only done a quick scan of the studies you link to, but I don't think the discounting literature you're pointing to is actually all that relevant, because the timescales it looks at are so short: 90 days in one case, up to 6 months in another. The time horizons for the impact of political decisions, especially the most important ones, are on the order of years or decades - and over such timescales, discounting due to risk of death becomes a much bigger factor than discounting due to impulsiveness or impatience.

> Usually such a brief perusal of the literature would not give me a huge amount of confidence in the core claims; however in this case the conclusion should seem prima facie very plausible to anyone who has ever met a young boy.

Again, I think this depends on what timescales we're talking about. Sure, it seems prima facie plausible that someone who is 21 is more likely to prefer $5 today to $10 in a month's time than a 60 year old is. But (on the assumption of self-interest) I'd strongly wager that a 21 year old is more likely to prefer $100 in 40 years' time over $10 in a month's time than a 60 year old is, because the 21 year old is so much more likely to be around and be able to enjoy the benefits.
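A toy calculation makes this vivid (survival probabilities invented for the example): under pure self-interest, the value of '$100 in 40 years' is roughly P(alive in 40 years) × $100. If the 21-year-old has a ~95% chance of surviving 40 years and the 60-year-old a ~15% chance, that's an expected $95 versus $15 - at multi-decade horizons, the mortality component of the discount rate dominates, and it cuts against the old rather than the young.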


The altruism and age discussion is interesting, and I agree that if it were borne out it could form part of an epistocratic argument for the age-weighting going the other way around.

Comment by william_macaskill on A philosophical introduction to effective altruism · 2019-07-12T14:53:05.558Z · score: 8 (5 votes) · EA · GW

No, I don't think so. Moral realism vs anti-realism is orthogonal to whether one thinks we have a duty or merely an opportunity to be an effective altruist.

For example: a non-cognitivist would interpret my statement, 'You have a duty to give 10% of your income to charity', as an expression of the sentiment 'Hooray to giving away 10% of your income to charity' or 'Boo to not giving away 10% of your income to charity'. Alternatively, a subjectivist (who is sometimes classed as a moral realist, but of a 'non-robust' type) would interpret that statement as made true, in some sense, by the fact that I want you to give away 10% of your income to charity. Similarly, a relativist could claim it's true, but only relative to some standard of assessment.


I am talking about obligations in this Introduction (rather than 'opportunities'). But I'm not claiming that effective altruism is, by definition, about obligations to do good. I'm arguing that we have an obligation to use at least a significant proportion of our resources to do as much good as we can - i.e. we have an obligation to be partial effective altruists.



Comment by william_macaskill on Aid Scepticism and Effective Altruism · 2019-07-08T17:28:03.310Z · score: 8 (5 votes) · EA · GW

In order:

1. Yes, it's definitely taken seriously but it's currently widely misunderstood - associated very closely with Peter Singer's views.

2. I think that Larry himself is more sympathetic to what EA is doing after my and others' conversations with him, or at least has a more nuanced view. But in terms of bystanders - yes, from my impressions at the lectures I think the audience came out more EA-sympathetic than when they went in. And especially at the graduate level there's a lot of recent interest, driven primarily by GPI, and for that purpose it's important to engage with critiques, especially if they are high-profile.

3. Honestly, not really. Outsiders usually have some straw man perception of EA, and so the critiques aren't that helpful. The best critiques I've found have tended to come from insiders, but I'm hoping that will change as more unsympathetic academics better understand what EA is and isn't claiming. I do find engaging with philosophers who have very different views of morality (e.g. that there's just no such thing as 'the good') very helpful though.

Comment by william_macaskill on Why You Should Invest In Upgrading Democracy And Give To The Center For Election Science · 2018-12-15T19:59:11.844Z · score: 11 (7 votes) · EA · GW

As one data point: I'm very positive about CES, and think they're one of the best marginal uses of funding right now. (Note that Aaron didn't ask me to write this.)

(Ties: I've recommended a grant to CES from Open Phil before, and a further grant is under consideration at OP right now; even given this possible grant, CES would still need further funding in the coming years.)

Comment by william_macaskill on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-21T17:10:19.711Z · score: 16 (7 votes) · EA · GW

I second Julia in her apology. In hindsight, once I’d seen that you didn’t want the post shared I should have simply ignored it, and ensured you knew that it had been accidentally shared with me.

When it was shared with me, the damage had already been done, so I thought it made sense to start prepping a response. I didn’t think your post would change significantly, and at the time I thought it would be good for me to start going through your critique to see if there were indeed grave mistakes in DGB, and offer a speedy response for a more fruitful discussion. I’m sorry that I therefore misrepresented you. As you know, the draft you sent to Julia was quite a bit more hostile than the published version; I can only say that as a result of this I felt under attack, and that clouded my judgment.

Comment by william_macaskill on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-18T16:59:32.559Z · score: 33 (15 votes) · EA · GW

I agree with all the points you make here, including on the suggested upvote/downvote distribution, and on the nature of DGB. FWIW, my (current, defeasible) plan for any future trade books I write is that they'd be more highbrow (and more caveated, and therefore drier) than DGB.

I think that's the right approach for me, at the moment. But presumably at some point the best thing to do (for some people) will be wider advocacy (wider than DGB), which will inevitably involve simplification of ideas. So we'll have to figure out what epistemic standards are appropriate in that context (given that GiveWell-level detail is off the table).

Some preliminary thoughts on heuristics for this (these are suggestions only):

Standards we'd want to keep as high as ever:

  • Is the broad-brush picture of what is being conveyed accurate? Is there any easy way it could have been made more accurate?
  • Are the sentences being used to support this broad-brush picture warranted by the evidence?
  • Is this way of communicating the core message about as caveated and detailed as one can reasonably manage?

Standards we'd need to relax:

  • Does this communicate as much detail as possible with respect to the relevant claims?
  • Does this communicate all the strongest possible counterarguments to the key claim?
  • Does this include every reasonable caveat?

I think that a blogpost that does very well with respect to the above, without compromising on the clarity of the core message, is Max Roser's recent post: 'The world is much better; The world is awful; The world can be much better'.

Comment by william_macaskill on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-17T16:10:13.363Z · score: 72 (25 votes) · EA · GW

Hi Alexey,

I appreciate that you’ve taken the time to consider what I’ve said in the book at such length. However, I do think that there’s quite a lot that’s wrong in your post, and I’ll describe some of that below. Though I think you have noticed a couple of mistakes in the book, I think that most of the alleged errors are not errors.

I’ll just focus on what I take to be the main issues you highlight, and I won’t address the ‘dishonesty’ allegations, as I anticipate it wouldn’t be productive to do so; I’ll leave that charge for others to assess.

tl;dr:

  • Of the main issues you refer to, I think you've identified two mistakes in the book: I left out a caveat in my summary of the Baird et al (2016) paper, and I conflated overhead costs and CEO pay in a way that, on the latter aspect, was unfair to Charity Navigator.
  • In neither case are these errors egregious in the way you suggest. I think that: (i) claiming that the Baird et al (2016) paper should cause us to believe that there is 'no effect' on wages is a misrepresentation of that paper; (ii) my core argument against Charity Navigator, regarding their focus on 'financial efficiency' metrics like overhead costs, is successful and depicts Charity Navigator accurately.
  • I don’t think that the rest of the alleged major errors are errors. In particular: (i) GiveWell were able to review the manuscript before publication and were happy with how I presented their research; the quotes you give generally conflate how to think about GiveWell’s estimates with how to think about DCP2’s estimates; (ii) There are many lines of evidence supporting the 100x multiplier, and I don’t rely at all on the DCP2 estimates, as you imply.

(Also, caveating up front: for reasons of time limitations, I’m going to have to precommit to this being my last comment on this thread.)

(Also, Alexey’s post keeps changing, so if it looks like I’m responding to something that’s no longer there, that’s why.)

1. Deworming

Since the book came out, there has been much more debate about the efficacy of deworming. As I’ve continued to learn about the state and quality of the empirical evidence around deworming, I’ve become less happy with my presentation of the evidence around deworming in Doing Good Better; this fact has been reflected on the errata page on my website for the last two years. On your particular points, however:

Deworming vs textbooks

If textbooks have a positive effect, it's via how much children learn in school, rather than via an incentive for them to spend more time in school. So the fact that there doesn't seem to be good evidence for textbooks increasing test scores is pretty bad.

If deworming has a positive effect, it could be via a number of mechanisms, including increased school attendance or via learning more in school, or direct health impacts, etc. If there are big gains on any of these dimensions, then deworming looks promising. I agree that more days in school certainly aren’t good in themselves, however, so the better evidence is about the long-run effects.

Deworming’s long-run effects

Here’s how GiveWell describes the study on which I base my discussion of the long-run effects of deworming:

“10-year follow-up: Baird et al. 2016 compared the first two groups of schools to receive deworming (as treatment group) to the final group (as control); the treatment group was assigned 2.41 extra years of deworming on average. The study's headline effect is that as adults, those in the treatment group worked and earned substantially more, with increased earnings driven largely by a shift into the manufacturing sector.” Then, later: “We have done a variety of analyses to assess the robustness of the core findings from Baird et al. 2016, including reanalyzing the data and code underlying the study, and the results have held up to our scrutiny.”

You are correct that my description of the findings of the Baird et al paper was not fully accurate. When I wrote, “Moreover, when Kremer’s colleagues followed up with the children ten years later, those who had been dewormed were working an extra 3.4 hours per week and earning an extra 20 percent of income compared to those who had not been dewormed,” I should have included the caveat “among non-students with wage employment.” I’m sorry about that, and I’m updating my errata page to reflect this.

As for how much we should update on the basis of the Baird et al paper — that’s a really big discussion, and I’m not going to be able to add anything above what GiveWell have already written (here, here and here). I’ll just note that:

(i) Your gloss on the paper seems misleading to me. If you include people with zero earnings, of course it’s going to be harder to get a statistically significant effect. And the data from those who do have an income but who aren’t in wage employment are noisier, so it’s harder to get a statistically significant effect there too. In particular, see here from the 2015 version of the paper: “The data on [non-agricultural] self-employment profits are likely measured with somewhat more noise. Monthly profits are 22% larger in the treatment group, but the difference is not significant (Table 4, Panel C), in part due to large standard errors created by a few male outliers reporting extremely high profits. In a version of the profit data that trims the top 5% of observations, the difference is 28% (P < 0.10).”

(ii) GiveWell finds the Baird et al paper to be an important part of the evidence behind their support of deworming. If you disagree with that, then you’re engaged in a substantive disagreement with GiveWell’s views; it seems wrong to me to class that as a simple misrepresentation.

2. Cost-effectiveness estimates

Given the previous debate that had occurred between us on how to think and talk about cost-effectiveness estimates, and the mistakes I had made in this regard, I wanted to be sure that I was presenting these estimates in a way that those at GiveWell would be happy with. So I asked an employee of GiveWell to look over the relevant parts of the manuscript of DGB before it was published; in the end five employees did so, and they were happy with how I presented GiveWell’s views and research.

How can that fact be reconciled with the quotes you give in your blog post? It's because, in your discussion, you conflate two quite different issues: (i) how to represent the cost-effectiveness estimates provided by DCP2, or by single studies; (ii) how to represent the (in my view much more rigorous) cost-effectiveness estimates provided by GiveWell. Almost all the quotes from Holden that you give are about (i). But the quotes you criticise me for are about (ii). So, for example, when I say 'these estimates' are order-of-magnitude estimates, that's referring to (i), not to (ii).

There’s a really big difference between (i) and (ii). I acknowledge that back in 2010 I was badly wrong about the reliability of DCP2 and individual studies, and that GWWC was far too slow to update its web pages after the unreliability of these estimates came to light. But the level of time, care and rigour that have gone into the GiveWell estimates are much greater than those that have gone into the DCP2 estimates. It’s still the case that there’s a huge amount of uncertainty surrounding the GiveWell estimates, but describing them as “the most rigorous estimates” we have seems reasonable to me.

More broadly: Do I really think that you do as much good or more in expectation from donating $3500 to AMF as saving a child’s life? Yes. GiveWell’s estimate of the direct benefits might be optimistic or pessimistic (though it has stayed relatively stable over many years now — the median GiveWell estimate for ‘cost for outcome as good as averting the death of an individual under 5’ is currently $1932), but I really don’t have a view on which is more likely. And, what’s more important, the biggest consideration that’s missing from GiveWell’s analysis is the long-run effects of saving a life. While of course it’s a thorny issue, I personally find it plausible that the long-run expected benefits from a donation to AMF are considerably larger than the short-run benefits — you speed up economic progress just a little bit, in expectation making those in the future just a little bit better off than they would have otherwise been. Because the future is so vast in expectation, that effect is very large. (There’s *plenty* more to discuss on this issue of long-run effects — Might those effects be negative? How should you discount future consumption? etc — but that would take us too far afield.)

3. Charity Navigator

Let’s distinguish: (i) the use of overhead ratio as a metric in assessing charities; (ii) the use of CEO pay as a metric in assessing charities. The idea of evaluating charities on overheads and on the basis of CEO pay are often run together in public discussion, and are both wrong for similar reasons, so I bundled them together in my discussion.

Regarding (ii): CN-of-2014 did talk a lot about CEO pay: they featured CEO pay, in both absolute terms and as a proportion of expenditure, prominently on their charity evaluation pages (see, e.g. their page on Books for Africa), they had top-ten lists like, “10 highly-rated charities with low paid CEOs”, and “10 highly paid CEOs at low-rated charities” (and no lists of “10 highly-rated charities with high paid CEOs” or “10 low-rated charities with low paid CEOs”). However, it is true that CEO pay was not a part of CN’s rating system. And, rereading the relevant passages of DGB, I can see how the reader would have come away with the wrong impression on that score. So I’m sorry about that. (Perhaps I was subconsciously still ornery from their spectacularly hostile hit piece on EA that came out while I was writing DGB, and was therefore less careful than I should have been.) I’ve updated my errata page to make that clear.

Regarding (i): CN’s two key metrics for charities are (a) financial health and (b) accountability and transparency. (a) is in very significant part about the charities’ overheads ratios (in several different forms), where they give a charity a higher score the lower its overheads are, breaking the scores into five broad buckets: see here for more detail. The doughnuts for police officers example shows that a really bad charity could score extremely highly on CN’s metrics, which shows that CN’s metrics must be wrong. Similarly for Books for Africa, which gets a near-perfect score from CN, and features in its ‘ten top-notch charities’ list, in significant part because of its very low overheads, despite having no good evidence to support its program.

I represent CN fairly, and make a fair criticism of its approach to assessing charities. In the extended quote you give, they caveat that very low overheads are not make-or-break for a charity. But, on their charity rating methodology, all other things being equal they give a charity a higher score the lower the charity’s overheads. If that scoring method is a bad one, which it is, then my criticism is justified.

4. Life satisfaction and income and the hundredfold multiplier

The hundredfold multiplier

You make two objections to my 100x multiplier claim: that the DCP2 deworming estimate was off by 100x, and that the Stevenson and Wolfers paper does not support it.

But there are very many lines of evidence in favour of the 100x multiplier, which I reference in Doing Good Better. I mention that there are many independent justifications for thinking that there is a logarithmic (or even more concave) relationship between income and happiness on p.25, and in the endnotes on p.261-2 (all references are to the British paperback edition - yellow cover). In addition to the Stevenson and Wolfers lifetime satisfaction approach (which I discuss later), here are some reasons for thinking that the hundredfold multiplier obtains:

  • The experiential sampling method of assessing happiness. I mention this in the endnote on p.262, pointing out that my argument would be stronger on this method, because the relationship it finds between income and wellbeing is more concave than logarithmic, and is in fact bounded above.
  • Imputed utility functions from the market behaviour of private individuals and the actions of government. It's absolutely mainstream economic thought that utility varies with the log of income (that is, eta=1 in an isoelastic utility function) or something more concave (eta>1); see the short derivation below, after this list. I reference a paper that takes this approach on p.261: Groom and Maddison (2013), who estimate eta to be 1.5.
  • Estimates of cost to save a life. I discuss this in ch.2; I note that this is another strand of supporting evidence prior to my discussion of Stevenson and Wolfers on p.25: “It’s a basic rule of economics that money is less valuable to you the more you have of it. We should therefore expect $1 to provide a larger benefit for an extremely poor Indian farmer than it would for you or me. But how much larger? Economists have sought to answer this question through a variety of methods. We’ll look at some of these in the next chapter, but for now I’ll just discuss one [the Stevenson and Wolfers approach].” Again, you find 100x or more discrepancy in the cost to save a life in rich or poor countries.
  • Estimate of cost to provide one QALY. As with the previous bullet point.

Note, crucially, that the developing world estimates for cost to provide one QALY or cost to save a life come from GiveWell, not — as you imply — from DCP2 or any individual study.
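To spell out the logic of the logarithmic-utility strand mentioned in the list above (a standard derivation; the factor-of-100 income gap is illustrative): with u(c) = ln(c), marginal utility is u'(c) = 1/c, so for two incomes differing by a factor of 100,

    u'(c) / u'(100c) = (1/c) / (1/(100c)) = 100,

i.e. a marginal dollar buys roughly 100 times as much wellbeing at the lower income. With the more concave isoelastic form, u'(c) = c^(-eta), the ratio is 100^eta - so Groom and Maddison's eta = 1.5 would imply a multiplier of 100^1.5 = 1,000.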

Is there a causal relationship from income to wellbeing?

It’s true that there Stevenson and Wolfers only shows the correlation is between income and wellbeing. But that there is a causal relationship, from income to wellbeing, is beyond doubt. It’s perfectly obvious that, over the scales we’re talking, higher income enables you to have more wellbeing (you can buy analgesics, healthcare, shelter, eat more and better food, etc).

It’s true that we don’t know exactly the strength of the causal relationship. Understanding this could make my argument stronger or weaker. To illustrate, here’s a quote from another Stevenson and Wolfers paper, with the numerals in square brackets added in by me:

“Although our analysis provides a useful measurement of the bivariate relationship between income and well-being both within and between countries, there are good reasons to doubt that this corresponds to the causal effect of income on well-being. It seems plausible (perhaps even likely) that [i] the within-country well-being-income gradient may be biased upward by reverse causation, as happiness may well be a productive trait in some occupations, raising income. A different perspective, offered by Kahneman, et al. (2006), suggests that [ii] within-country comparisons overstate the true relationship between subjective well-being and income because of a “focusing illusion”: the very nature of asking about life satisfaction leads people to assess their life relative to others, and they thus focus on where they fall relative to others in regard to concrete measures such as income. Although these specific biases may have a more important impact on within-country comparisons, it seems likely that [iii] the bivariate well-being-GDP relationship may also reflect the influence of third factors, such as democracy, the quality of national laws or government, health, or even favorable weather conditions, and many of these factors raise both GDP per capita and well-being (Kenny, 1999). [iv] Other factors, such as increased savings, reduced leisure, or even increasingly materialist values may raise GDP per capita at the expense of subjective well-being. At this stage we cannot address these shortcomings in any detail, although, given our reassessment of the stylized facts, we would suggest an urgent need for research identifying these causal parameters.”

To the extent to which (i), (ii) or (iv) are true, the case for the 100x multiplier becomes stronger. To the extent to which (iii) is true, the case for the 100x multiplier becomes weaker. We don't know, at the moment, which of these are the most important factors. But, given that the wide variety of different strands of evidence listed in the previous section all point in the same direction, I think that estimating a 100x multiplier as a causal matter is reasonable. (Final point: noting again that all these estimates do not factor in the long-run benefits of donations, which would shift the ratio of benefits to others versus benefits to yourself even further in the direction of benefits to others.)

On the Stevenson and Wolfers data, is the relationship between income and happiness weaker for poor countries than for rich countries?

If it were the case that money does less to buy happiness (for any given income level) in poor countries than in rich countries, then that would be one counterargument to mine.

However, it doesn’t seem to me that this is true of the Stevenson and Wolfers data. In particular, it’s highly cherry-picked to compare Nigeria and the USA as you do, because Nigeria is a clear outlier in terms of how flat the slope is. I’m only eyeballing the graph, but it seems to me that, of the poorest countries represented (PHL, BGD, EGY, CHN, IND, PAK, NGA, ZAF, IDN), only NGA and ZAF have flatter slopes than USA (and even for ZAF, that’s only true for incomes less than $6000 or so); all the rest have slopes that are similar to or steeper than that of USA (IND, PAK, BGD, CHN, EGY, IDN all seem steeper than USA to me). Given that Nigeria is such an outlier, I’m inclined not to give it too much weight. The average trend across countries, rich and poor, is pretty clear.

Comment by william_macaskill on Projects I'd like to see · 2017-06-13T17:34:00.516Z · score: 6 (6 votes) · EA · GW

Thanks Owen!

Re ETG buy-out - yes, you're right. For people who think that CEA is a top donation target, hopefully we could just come to an agreement, since a trade wouldn't be possible, or would be prohibitively costly, if there were only slight differences in our views on which places were best to fund.

Re local group activities: These are just examples of some of the things I'd be excited about local groups doing, and I know that at least some local groups are funding constrained (e.g. someone is running them part-time, unpaid, and will otherwise need to get a job).

Re AI safety fellowship at ASI - as I understand it, that is currently funding constrained (they had great applicants who wanted to take the fellowship but ASI couldn't fund it). For other applications (e.g. Google Brain) it could involve, say, spending some amount of time during or after a physics or math PhD in order to learn some machine learning and be more competitive.

Re anthropogenic existential risks - ah, I had thought that it was only in presentation form. In which case: that paper is exactly the sort of thing I'd love to see more of.

Comment by william_macaskill on Announcing Effective Altruism Grants · 2017-06-13T11:19:07.435Z · score: 4 (4 votes) · EA · GW

It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.

Comment by william_macaskill on Projects I'd like to see · 2017-06-13T11:18:12.653Z · score: 7 (7 votes) · EA · GW

Yes, the money is raised; we have a pot of £500,000 in the first instance.

It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.

Comment by william_macaskill on How accurately does anyone know the global distribution of income? · 2017-04-11T20:18:03.990Z · score: 7 (7 votes) · EA · GW

"However, $700/year (= $1.91/day, =€1.80/day, =£1.53 /day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter."

One could live on that amount of money per day in the West. You'd live in a second-hand tent, and you'd scavenge food from bins (which would count towards your 'expenditure', because we're talking about consumption expenditure, but wouldn't count for much). Your life expectancy would be considerably lower than others' in the West, but probably not lower than Burkina Faso's life expectancy of 55 years (an example comparison; bear in mind that number includes infant mortality). Your life would suck very badly, but you wouldn't die, and it wouldn't be that dissimilar to the lives of the millions of people who live in makeshift slums or shanty towns and scavenge from dumps to make a living. (Such people aren't representative of all extremely poor people, but they are a notable fraction.)

Comment by william_macaskill on Intuition Jousting: What It Is And Why It Should Stop · 2017-04-04T19:00:51.736Z · score: 0 (0 votes) · EA · GW

"counts as an xrisk (and therefore as a GCR)"

My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people

(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).

So, as I was using the term, something being an x-risk does not entail it being a GCR. I'd count 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' as an x-risk but not a GCR.

Interesting (/worrying!) how we're understanding widely-used terms so differently.

Comment by william_macaskill on Intuition Jousting: What It Is And Why It Should Stop · 2017-04-04T18:55:55.975Z · score: 6 (6 votes) · EA · GW

Mea culpa that I switched from "impact on beings alive today" to "benefits over the next 50 years" without noticing.

Comment by william_macaskill on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-31T17:13:07.562Z · score: 1 (3 votes) · EA · GW

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.
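For the record, the arithmetic behind the quoted figures (the final step is my reconstruction of the bracketed '$30-ish', which the quote doesn't spell out): 200 hens per dollar × 2 years × 0.25 improvement = 100 hen-life-years per dollar, i.e. $0.01 per hen-life-year. If corporate campaigns are then taken to be ~100x as cost-effective as GiveWell top charities at roughly $3,000 per life saved, that gives $3,000 / 100 = $30 per equivalent life.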

Comment by william_macaskill on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-30T23:34:06.741Z · score: 9 (9 votes) · EA · GW

Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

Comment by william_macaskill on Introducing CEA's Guiding Principles · 2017-03-08T01:59:04.524Z · score: 12 (12 votes) · EA · GW

In my previous post I wrote: “The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and has some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.” I now think that's an incorrect statement. EA, currently, is all of the following: an idea/movement, a community, and a small group of organisations. On the 'movement' understanding of EA, analogues of EA don't have a community panel similar to what I suggested, and only some have 'guiding principles'. (Though communities and organisations, or groups of organisations, often do.)

Julia created a list of potential analogies here:

[https://docs.google.com/document/d/1aXQp_9pGauMK9rKES9W3Uk3soW6c1oSx68bhDmY73p4/edit?usp=sharing].

The closest analogy to what we want to do is given by the open source community: many but not all of the organisations within the open source community created their own codes of conduct, many of them very similar to each other.

Comment by william_macaskill on Introducing the EA Funds · 2017-02-11T00:07:00.083Z · score: 4 (4 votes) · EA · GW

One thing to note re diversification (which I do think is an important point in general) is that it's easy to think of Open Phil as a single agent when it's really a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the Funds.

For example, there might be a grant that a program officer wants to make, but there's internal disagreement about it, and the program officer doesn't have time (given opportunity cost) to convince others at Open Phil why it's a good idea. (This has been historically true for, say, the EA Giving Fund). Having a separate pool of money would allow them to fund things like that.

Comment by william_macaskill on Introducing the EA Funds · 2017-02-10T23:45:05.504Z · score: 10 (10 votes) · EA · GW

Thanks so much for this, Luke! If someone who dedicates half their working time to philanthropy, as you do, says "There is a limit to how much high quality due diligence one could do. It takes time to build relationships, analyse opportunities and monitor them" - that's pretty useful information!

Comment by william_macaskill on Introducing the EA Funds · 2017-02-10T07:45:25.453Z · score: 2 (2 votes) · EA · GW

Thanks! That's really helpful to know. The Funds are potentially solving a number of problems at once, and we know there's some demand for each of these problems to be solved, but not how much demand, so comments like this are very useful.

Comment by william_macaskill on Introducing the EA Funds · 2017-02-10T07:43:48.826Z · score: 4 (4 votes) · EA · GW

"On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects but even they had trouble generating funder interest?"

This isn't surprising if the model is just that new projects were uniformly less exciting than one might have expected: there were few projects above the bar for 'really cool project', and even they were only just above the bar, hence hard to get funding for.

Comment by william_macaskill on Selecting investments based on covariance with the value of charities · 2017-02-07T00:06:05.569Z · score: 10 (9 votes) · EA · GW

Thanks for this. Hauke Hillebrandt has been thinking about this concept of what he calls 'mission hedging' for a while; hopefully he'll weigh in.

In my view, the most potentially compelling example of this is shorting Facebook stock. From publicly available information, it seems that the large majority of Dustin Moskovitz and Cari Tuna's wealth is still in Facebook. If Facebook were to go under (unlikely, but possible), then the large majority of explicitly EA money would disappear. Given strongly diminishing returns, if you're interested in funding the areas that Open Phil funds that have a small gap (like AI or EA community growth), you'd therefore have a much bigger impact in the world in which Facebook decreases in value.
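A toy two-state version of that logic, with entirely hypothetical numbers: suppose EA funding totals $20bn if Facebook does fine (probability 0.9) and $2bn if it collapses (probability 0.1), and diminishing returns make a marginal dollar worth 10x as much in the low-funding state. Compare $1m held in an index fund (worth roughly $1m in either state) with an actuarially fair short position costing $1m that pays $10m only in the collapse state. In expected impact units (1 per dollar in the good state, 10 in the bad): index = 0.9 × $1m × 1 + 0.1 × $1m × 10 = 1.9m units; short = 0.1 × $10m × 10 = 10m units. Same expected dollars, roughly five times the expected impact, because the short concentrates money in the state where it is most valuable.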

Comment by william_macaskill on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-31T10:31:52.343Z · score: 10 (10 votes) · EA · GW

Hey, I haven't had much time to respond here, and won't for the next week, but just to say I'm really loving the statements of the concerns (AGB in particular, thank you for working through a position even though you're unsure of your views - I'd love this to become a more regular norm on here). My view is that this issue is sufficiently important that we should try to get all considerations, and all possible permutations of solutions to the issue, on the table; but I'm not wedded to any particular proposal at this stage, so all the comments are particularly helpful. I plan to write more in the near future.

Comment by william_macaskill on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-31T10:10:51.611Z · score: 5 (5 votes) · EA · GW

Thanks, Gleb, it's appreciated.

Comment by william_macaskill on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T09:28:04.868Z · score: 3 (3 votes) · EA · GW

I drafted the document afterwards, but didn't realise that the blog post was something different from the originally planned 'open letter'.

Comment by william_macaskill on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T09:24:06.862Z · score: 1 (1 votes) · EA · GW

Done!

Comment by william_macaskill on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T09:23:01.995Z · score: 13 (15 votes) · EA · GW

Hi Peter - thanks for this comment. I didn't mean to belittle all the non-CEA contribution to the forum, which is of course very great, and much greater than the CEA contribution to the forum. So I'm sorry if it came across that way. I only put in "which CEA runs" because I thought that many readers wouldn't know that we are involved at all with the forum, and so I wanted to give some explanation for why this might be an avenue of action for us at all. I've modified the text to "help to run" to make it more accurate.

Comment by william_macaskill on CEA Update: September 2016 · 2016-10-15T11:28:10.501Z · score: 4 (4 votes) · EA · GW

GPP has been fully folded into Special Projects. GPP had two tracks - policy research and outreach, and fundamental EA theory - and these now have their own distinct teams within the Special Projects division. The third team is philanthropic advising, which was previously under GWWC. Owen and Seb are continuing with Special Projects.

Comment by william_macaskill on CEA Update: September 2016 · 2016-10-15T11:25:54.012Z · score: 4 (4 votes) · EA · GW

My current plan is that, to a first approximation, we won't accept restricted donations, including to GWWC. (It's a fiction that truly restricted donations are possible, anyway.) But we will give donors the chance to express their preferences about how the money is to be used, which we'll consider in the aggregate when making strategic decisions. If donors think we're making major mistakes in the allocation of resources between different activities, I'd love to see that written up; it would be very helpful to us.

Comment by william_macaskill on CEA Updates + August 2016 update · 2016-10-13T10:36:50.254Z · score: 0 (0 votes) · EA · GW

Apologies; fixed now.

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-09-07T00:28:50.831Z · score: 0 (0 votes) · EA · GW

My assumption would be that basically everyone who reads this post knows who I am, and from the upvote/downvote ratio, it seems that others probably agree. But I don't think there's much harm in regularly using Carl's disclosure (except for his abominable American spelling of 'Centre' ;) ), as it's a reasonable general norm to have.

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-09-07T00:23:56.719Z · score: 0 (0 votes) · EA · GW

That's fair; if I use it again I'll try to make that explicit. The 15% also doesn't include skill-building in well-paid jobs as a stepping stone to direct work.

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-08-26T23:16:20.646Z · score: 2 (2 votes) · EA · GW

This is a nice idea!

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-08-26T22:56:45.454Z · score: 2 (2 votes) · EA · GW

It's pretty explicit in the original blogpost:

> One of the most common misconceptions that we’ve encountered about 80,000 Hours is that we’re exclusively or predominantly focused on earning to give. This blog post is to say definitively that this is not the case. Moreover, the proportion of people for whom we think earning to give is the best option has gone down over time.
>
> To get a sense of this, I surveyed the 80,000 Hours team on the following question: “At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?” (Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question).
>
> Will: 15%; Ben: 20%; Rob: 10%; Roman: 15%
>
> Instead, we think that most people should be doing things like politics, policy, high-value research, for-profit and non-profit entrepreneurship, and direct work for highly socially valuable organizations.

The purpose of the number was to show the view of 80k (which we perceived most people not to be aware of). I guess its usefulness depends on how reliable you think the gestalt judgments of the employees at 80k are.

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-08-26T22:52:55.171Z · score: 5 (5 votes) · EA · GW

Early on in 80k, when promoting earning to give, we regularly got the opposite argument: that what we were promoting was too much of a sacrifice! I just about agree with you, but I think it's unclear - there are a lot of people who want to do meaningful work, and don't care much about giving.

Comment by william_macaskill on Should you switch away from earning to give? Some considerations. · 2016-08-26T22:49:51.564Z · score: 3 (3 votes) · EA · GW

I think that e.g. talking to someone at 80k can help give you a sense of this - certainly better than nothing. If you're thinking of leaving earning to give, but people at 80k can think of several examples of people who are currently earning to give and have greater comparative advantage at direct work, then we can at least say that someone's making a mistake.

Comment by william_macaskill on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-25T17:59:38.094Z · score: 2 (2 votes) · EA · GW

Thanks! Lots of points here.

One thing: despite the confusing name, from CEA's perspective, EAO was the organisation that included EAG and EAV as parts.

Working with other groups: I hope the new structure will make it quite a bit easier for other groups to co-ordinate with CEA, because the structure will be substantially simpler.

'Exit assessment': This is slightly complicated by the fact that there's no simple "we tried this project and it didn't work" story here. But I do hope to be able to write more about what things we've learned at CEA in the near future.

Comment by william_macaskill on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-25T17:38:00.076Z · score: 3 (3 votes) · EA · GW

I'm sorry we can't say more at this stage. One downside of policy work is that much of it can't always have the same level of transparency as our other projects.

Comment by william_macaskill on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-25T17:35:52.173Z · score: 4 (4 votes) · EA · GW

Thanks!

Fundraising: The current plan is that CEA will fundraise for all projects (with me as lead on that). We'll update all donors every two weeks with info across all CEA projects (most individual projects already do this for their own donors), and have an annual review.

Earmarking: Fungibility has been a headache since forever; and in the past, 'restricting' to a particular project - even though we were very careful with the budget lines - wouldn't completely avoid fungibility concerns, because other donors are responsive to RFMF (room for more funding) and would then become a little less likely to donate to a project that's received more money.

The idea that's currently in my head, but not (yet) a policy, is that we to a first approximation only accept unrestricted donations, but that every donor is asked to 'vote' by telling us how, ideally, they would want their donation to be used. This 'vote' isn't binding on CEA, but gives us useful information about what smart people with money on the line think CEA should be doing more of. I take the views of our donors very seriously - they tend to be the external people who are most highly engaged with CEA's work - and so it wouldn't at all just be for show. I'd welcome ideas about other ways of doing donations.

And to be clear, previously restricted money to a CEA project will still be used in the manner it was restricted for, under the new CEA structure, unless the donor tells us that they're happy to lift the restriction.

Comment by william_macaskill on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-23T22:18:55.381Z · score: 7 (7 votes) · EA · GW

Thanks so much for this comment!

> Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki?

Yes, the default will be that everything we produce is published openly.

> I'd also challenge you to think about what CEA's "secret sauce" is for doing this research for donors in a way that's superior to whatever other group they would consult with in order to have it done.

In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we'd call EA charity recommendations. There's GiveWell / Open Phil, there's philanthropic advising that's very heavily about understanding the preferences of the donor and finding charities that 'fit' those preferences, and there seems to us to be a very significant gap in the middle.

> Some people have argued against this. I'm also skeptical.

In response to the linked-to article and notes:

1. I'm intuitively also very wary of EA engaging in partisan politics. Indeed, when I think of EA as applied to politics, I think of it as almost defined by being non-partisan and opposed to tribal politics: you come to views on policy on a case-by-case basis, weighing all the best evidence, deeply understanding all the various viewpoints (to the point of passing ideological Turing tests), and staying highly self-sceptical and on the lookout for ideological bias.

2. It's also a major issue that whether certain policies are even good or bad can be incredibly difficult to know. E.g. when I think about AI policy, I can think of things where I know the magnitude of the impact of the policy would be very great indeed, but have no idea about the sign of the impact. Or, e.g., being pro EU immigration to the UK 10 years ago: surely good! - until it ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn't thought about political equilibrium effects).

If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss from the outset that whole method of making the world better would be to far too quickly narrow down our options.

> This is an area where it plausibly does make sense to use a non-CEA label.

I agree that we need to think very carefully about what labels we use, and we should be very concerned with how the term 'effective altruism' might come to lose its meaning and value, or become the victim of malicious PR.

> As a broad question: I understand it's commonly advised in the business world to focus on a few "core competencies" and outsource most other functions. I'm curious whether this also makes sense in the nonprofit world.

Because of this general principle, I stress a lot about how many different things CEA is doing. I'm not sure whether the general principle is right and we're the exception to it, whether the principle just isn't right for the sort of organisation we are, or whether we're being irrational. My current instinct is that we should be aiming to focus more than we have done, and that we've just taken a good step in that direction.

Comment by william_macaskill on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-23T19:26:13.002Z · score: 2 (2 votes) · EA · GW

Thanks!

> People have argued for i) flatter organizational structure, ii) pivoting from charity evaluation to more fundamental research (in order to add more value over and above GiveWell), and iii) growing emphasis of the EA brand for a while, so it's good to see this feedback incorporated.

Yeah, I want CEA strategy to be guided significantly by the views of engaged members of the EA community. (Of course, that doesn't mean we'll always go with others' views - not least because different people regularly disagree.) This, it seems to me, has both inside and outside view support. Inside view: when I talk to engaged EAs, they often have interesting and well-reasoned views about what CEA should or should not be doing. Outside view: the current dedicated EAs are the equivalent of the 'early users' of EA as an idea, and the standard advice for startups is to pay a huge amount of attention to what early users want, and be responsive to that. I also simply see CEA's role in significant part as serving the EA community, so it's obviously important to know what that community thinks is most important.

Comment by william_macaskill on EA != minimize suffering · 2016-07-13T20:07:54.346Z · score: 16 (16 votes) · EA · GW

As a ‘well-known’ EA, I would say that you can reasonably say that EA has one of two goals: a) to ‘do the most good’ (leaving what ‘goodness’ is undefined); b) to promote the wellbeing of all (accepting that EA is about altruism in that it’s always ultimately about the lives of sentient creatures, but not coming down on a specific view of what wellbeing consists in). I prefer the latter definition (for various reasons; I think it’s a more honest representation of how EAs behave and what they believe), though I think that as the term is currently used either is reasonable. Although reducing suffering is an important component of EA under either framing, under neither is the goal simply to minimize suffering, and I don’t think that Peter Singer, Toby Ord or Holden Karnofsky (etc) would object to me saying that they don’t think of this as the only goal either.

Comment by william_macaskill on Against segregating EAs · 2016-01-23T13:21:43.600Z · score: 14 (16 votes) · EA · GW

+1 for 'Dedicated EAs' and 'EAs'. I think 80k internally could describe all it wants to describe in simple English using those terms. It's naturally a continuum. If you really really need to describe people who are into EA but not that dedicated then 'less dedicated' is fine. "Committed" could work too. (I understand 'dedicated' to mean: how highly someone scores on the product of 'into effectiveness' and 'into altruism'.)

-1 for 'full-time' and 'part-time'. I don't think it conveys what we mean (at least, it doesn't to me; I'd have been confused when I first heard it), and I'd personally find it annoying to be described as 'part-time'.

+10000 for ditching 'hardcore' and 'softcore'

Comment by william_macaskill on Peter Hurford thinks that a large proportion of people should earn to give long term · 2015-08-20T06:57:34.606Z · score: 1 (1 votes) · EA · GW

In general, in "talent constraint vs funding constraint" discussions I find it super important to be clear on exactly what question is being asked, as it's easy to talk past one another.