EA Diversity: Unpacking Pandora's Box 2015-02-01T00:40:05.862Z · score: 34 (25 votes)


Comment by agb on Jamie_Harris's Shortform · 2020-10-17T12:00:09.226Z · score: 8 (5 votes) · EA · GW

I also live in London, and bought a house in April 2016. So I've thought about these calculations a fair bit, and happy to share some thoughts here:

One quick note on your calculations is that stamp duty has been massively, but temporarily, cut due to COVID. You note it's currently £3k on a £560k flat. Normally it would be £18k. You can look at both sets of rates here.

When I looked at this, the calculation was heavily dependent on how often you expect to move. Every time you sell a home and buy a new one you incur large fixed costs; normally 2-4% of purchase price in stamp duty, 1-3% in estate agent fees, and a few other fixed costs which are minor in the context of the London property market but would be significant if you were looking at somewhere much cheaper (legal fees etc.). All of this seems well accounted for in your spreadsheet, but it means that if you expect to move every 1-3 years then the ongoing saving will be swamped by repeatedly incurring these costs.

There's also a somewhat fixed time cost; when I bought a home I estimate I spent the equivalent of 1 week of full-time work on the process (not the moving itself), most of which was spent doing things I wouldn't have needed to do for rented accommodation.

All told, for my personal situation in 2016 I thought I should only buy if I expected to stay in that flat for at least 5 years, and to make the calculation clear I would have wanted that to be more like 10 years. As a result, buying looks much better if you have outside factors already tying you down; a job that is very unlikely to be beaten, kids, a city you and/or your partner loves, etc.
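The break-even logic above can be sketched in a few lines. All figures below are hypothetical placeholders, not the numbers I actually used:

```python
# Hypothetical break-even sketch: years of ownership needed before the
# one-off costs of buying (and eventually selling) are recouped by the
# annual saving from owning rather than renting.
def break_even_years(price, stamp_duty_rate, agent_fee_rate,
                     other_fixed, annual_saving):
    fixed_costs = price * (stamp_duty_rate + agent_fee_rate) + other_fixed
    return fixed_costs / annual_saving

# Illustrative figures only: a £560k flat, 3% stamp duty, 2% estate agent
# fees on a later sale, £5k legal/other costs, £6k/year saving vs renting.
years = break_even_years(560_000, 0.03, 0.02, 5_000, 6_000)
print(round(years, 1))  # 5.5 years before buying comes out ahead
```

The point of the sketch is that the answer is dominated by the fixed costs, so doubling how long you stay roughly halves their annualised drag.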

This is a much closer calculation than your numbers suggest, because I don't think a 7.5% housing return is a sensible average to use going forward. I had something like a 2% real (~4% nominal, but I generally prefer to think in terms of real) estimate pencilled in for housing, and more like a 5% real (7% nominal) rate pencilled in for stocks. There's a longer discussion there, but the key point I would make is that interest rates have fallen dramatically in recent decades, boosting the value of assets which pay out streams of income, i.e. rent/dividends. It's unclear to me that the recent trend towards ever-lower rates can go much further, and markets don't expect it to, so I didn't want to tacitly assume that.

So far, that conservative estimate has been much closer to the mark: London house prices rose by roughly 1.5% annualised between April 2016 and March 2020. Then a pandemic hit, but I'm happy to exclude that from 'things I could have reasonably expected'.

Comment by agb on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T11:14:28.740Z · score: 8 (4 votes) · EA · GW

Thanks for your response.

I didn't actually interpret Lark's post as trying to contribute to the "ongoing prosecution-and-defence of Robin's character or work", but instead think it is trying to add to the cancel culture conversation more generally, using Robin's case as a useful example.

Sorry, this is on me. The original draft of that sentence read something like "I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin's character or work, so I'm not going to weigh in again on those specific points and request others replying to this comment do the same, instead focusing on the question of what rules we do/don't want in general".

I then cut the sentence down, but missed that in doing so it could now be read as implying that this was Larks' objective. That wasn't intentional, and I don't think this.

Comment by agb on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T20:53:59.631Z · score: 66 (28 votes) · EA · GW

I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:

-The detailed block-by-block approach to making the case for both cancel culture's prevalence and its potential harm to the movement.

-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.

-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.

But there's still a piece I think is missing. I don't fault Larks for this directly, since the post is already very long and covers a lot of ground, but it's the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.

> Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

> Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook.

Given that this post is titled 'advice for CEA and local groups', reading this made me hope that this post would end with some suggested 'rules and standards' for who we do and do not invite to speak at local events/EAG/etc. Where do we draw the line on 'behaving immorally'? I strongly agree that whatever rules are being applied should be applied consistently, and think this is most likely to happen when discussed and laid down in a transparent and pre-agreed fashion.

While I have personal views on the Munich case which I have laid out elsewhere, I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin's character or work. Moreover, my commitment to consistency and transparency is far stronger than my preference for any one set of rules over others. I also expect clear rules about what we will and won't allow at various levels to naturally insulate against cancel culture. To the extent I agree that cancel culture is an increasing problem, the priority on getting this clear and relying less on ad hoc judgements of individuals has therefore risen, and will likely continue to rise.

So, what rules should we have? What are valid reasons to choose not to invite a speaker?

Comment by agb on Suggestions for Online EA Discussion Norms · 2020-10-05T12:42:15.984Z · score: 20 (9 votes) · EA · GW

There was a string of writing on this topic or closely related topics early in the forum's life, especially w.r.t. talking about cause prioritisation, so here are some links to those posts. AFAIK, the advice within largely still holds.

Robert Wiblin, Six Ways To Get Along With People Who Are Totally Wrong*

Jess Whittlestone, Supportive Scepticism

Michelle Hutchinson and Jess Whittlestone, Supportive Scepticism in Practice

Owen Cotton-Barratt, Keeping the Effective Altruism movement welcoming

I should also thank Owen for linking to most of these in his comment on the first link, which made collecting these quite a lot easier.

Comment by agb on EA Relationship Status · 2020-09-22T10:19:14.225Z · score: 10 (3 votes) · EA · GW

The data I gave is ultimately survey data; the table you post is based on marriage certificates issued. This has advantages but one large disadvantage, namely ignoring marriages that take place overseas, while possibly counting marriages between two overseas residents that take place locally. It's mentioned on the 'Table 12 interpretation' tab:

> These statistics are based on marriages registered in England and Wales. Because no adjustment has been made for marriages taking place abroad, the true proportion of men and women ever married could be higher.

I followed that link to get any context on how big a deal this might be.

> In 2017, an estimated 104,000 UK residents went abroad to get married and an estimated 8,000 overseas residents married in the UK.

To put that number in context, there are roughly 240k marriages per year in the UK, presumably involving around 480k people, so the ~104k marrying abroad are over a fifth of the total.

I think survey data is just better for our current use case since we don't much care about sample noise; apart from the 'destination wedding' issue, I definitely want to count two immigrants who arrived in the UK already married, and I think they'll also appear in the survey but not the certificate-counting.

Comment by agb on EA Relationship Status · 2020-09-22T06:21:07.087Z · score: 5 (3 votes) · EA · GW

Source for the UK:22% figure? The ONS figures for 2019 (for married, not ever married) are:

Men 25-29: 15.7%

Women 25-29: 25.4%

Men 30-34: 42.4%

Women 30-34: 52.3%

These groups are all roughly the same size, so a combined 25-34 group would be around 34%. ‘Ever married’ should be 1-4 percentage points higher.
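Since the four cohorts are roughly equal in size, the combined figure is close to a simple average of the four ONS rates:

```python
# ONS 2019 'married' rates by cohort, from the comment above.
# The cohorts are roughly equal in size, so a plain average approximates
# the combined 25-34 rate.
rates = {
    "Men 25-29": 15.7,
    "Women 25-29": 25.4,
    "Men 30-34": 42.4,
    "Women 30-34": 52.3,
}
combined = sum(rates.values()) / len(rates)
print(round(combined))  # 34, i.e. around 34% married
```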

Comment by agb on EA Relationship Status · 2020-09-19T11:48:41.394Z · score: 7 (4 votes) · EA · GW
> I take the observation to be that 60% of EAs over 45 have married, where we'd expect 85%.

FWIW, and without speaking for Jeff, for Denise and me the original observation was something like 'percentage of people in nesting relationships around our age range (25-30) anecdotally seems sharply different in our EA versus similar-demographic non-EA circles'.

I consider religion a weak explanation for that, since we're definitely counting cohabiting couples, but the observation is also less well-founded and I'm far from confident that it generalises across the community well.

Comment by agb on Judgement as a key need in EA · 2020-09-13T11:55:09.394Z · score: 21 (6 votes) · EA · GW

How confident are you that the EALF survey respondents were using your relatively-narrow definition of judgment, rather than the dictionary definitions which, as you put it, "seem overly broad, making judgment a central trait almost by definition"?

I ask because scanning the other traits in the survey, they all seem like things where if I use common definitions I consider them useful for some or even many but not all roles, whereas judgment as usually defined is useful ~everywhere, making it unsurprising that it comes out on top. At least, that's why I've never paid attention to this particular part of the EALF survey results in the past.

But I appreciate you've probably spoken in person to a number of the EALF people and had a better chance to understand their views, so I'm mostly curious whether you feel those conversations support the idea that the other respondents were thinking of judgment in the narrower way you would use the term.

Comment by agb on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T15:30:55.682Z · score: 18 (8 votes) · EA · GW

Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run, I had missed that. Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.

That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.

Comment by agb on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T13:26:14.108Z · score: 15 (13 votes) · EA · GW

[EDIT: As Oli's next response notes, I'm misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]


Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under mine and Ben's first option, since it's just a non-numeric write-up of what Ben said:

> Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it's worth the cost

I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.


At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don't want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I'm generally willing to distance from the latter in order to be open to the former. That's from a purely cold-blooded 'maximise community output' perspective, never mind the human element.

Other than that, I have a number of disagreements with things you wrote, and for brevity I'm not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But the crux of the disagreement is here I think:

> it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you

I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward 'leopards don't change their spots' way.

I can't speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements I'm not likely to attend a small-group event that I know they will attend, because I put high probability that they will act in an overtly racist way towards me and I really can't be bothered to deal with that. I'm definitely not bringing my children to that event. It's not a matter of being 'triggered' per se, I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I'm very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.

There's also a large second-order effect that communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven't put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.

Comment by agb on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T00:23:38.798Z · score: 22 (14 votes) · EA · GW

I think you're unintentionally dodging both Aaron's and Ben's points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn't the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn't sound like what you're talking about.

> A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.

Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don't care about someone who would distance themselves from EA for this reason anyway (it's not an actual 'loss'). And I'm not sure which it is or (if the first) what percentages you would give.

Comment by agb on Will protests lead to thousands of coronavirus deaths? · 2020-07-08T18:05:55.898Z · score: 24 (10 votes) · EA · GW

For posterity, the only data I've seen on this question suggests that this has not played out the way the OP and many others (myself included) might have expected. The Economist ran an article* which links to this paper**. In short, cities with protests did not record discernible COVID case growth, at least as of a few weeks later. Moreover, quoting the paper (italics in original):

"Second, where there are social distancing effects, they only appear to materialize after the onset of the protests. Specifically, after the outbreak of an urban protest, we find, on average, an increase in stay-at-home behaviors in the primary county encompassing the city. That overall social distancing behavior increases after the mass protests is notable, as this finding contrasts with the general secular decline in sheltering-at-home taking place across the sample period (see Appendix Figure 6). Our findings suggest that any direct decrease in social distancing among the subset of the population participating in the protests is more than offset by increasing social distancing behavior among others who may choose to shelter-at-home and circumvent public places while the protests are underway. "

In other words, it seems that protestors being outside was more than offset by other people avoiding the protests and staying home.



Comment by agb on Pablo_Stafforini's Shortform · 2020-01-14T22:52:00.482Z · score: 1 (1 votes) · EA · GW

Pablo already replied, but FWIW I had the same irritation (and similarly had all posts pointed out to me by someone else after complaining to them about it). I think in my case the original assumption was that 'latest posts' meant what it sounds like, and on discovering that it wasn't I (lazily) assumed there wasn't a way to get what I wanted.

I don't have a constructive suggestion for a better name though.

Comment by agb on [updated] Global development interventions are generally more effective than climate change interventions · 2019-09-13T14:07:36.855Z · score: 5 (3 votes) · EA · GW

I agree with this. I would have assumed they would do (i), and other responses from people who actually read the paper make me think it might effectively be (iii). I don't think it's (ii).

Comment by agb on [updated] Global development interventions are generally more effective than climate change interventions · 2019-09-10T20:44:54.094Z · score: 49 (23 votes) · EA · GW
> If a climate change intervention has a cost-effectiveness of $417 / X per tonne of CO2 averted, then it is X times as effective as cash-transfers.

Wait a second.

I'm very confused by this sentence. Suppose, for the sake of argument, that all the impacts of emitting a tonne of CO2 are on people about as rich as present-day Americans, i.e. emitting a tonne of CO2 now causes people of that level of wealth to lose $417 at some point in the future. There is then no income adjustment necessary (I assume everything is being converted to something like present-day USD for present-day Americans, but I'm not actually sure and following the links didn't shed any light), so the post-income-adjustment number is still $417. Also suppose for the sake of argument that we can prevent this for $100.

This seems clearly worse than cash transfers to me under usual assumptions about log income being a reasonable approximation to wellbeing (as described in your first appendix), since we are effectively getting a 4.17x multiplier rather than a 50-100x multiplier. Yet the equation in the quote claims it is 4.17x more effective than cash transfers*.

What am I missing?

*Mathematically, I think the equation works iff the cash transfers in question are to people of comparable wealth to whatever baseline is being used to come up with the $417 figure. So if the baseline is modern-day Americans, that equation calculates how much better it is to avert CO2 emissions than to transfer cash to modern-day Americans.
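The worry above can be made concrete with a short calculation. The $100 cost and the 100x income ratio are illustrative assumptions for the sake of the example, not figures from the paper:

```python
# Sketch of the objection: suppose averting a tonne of CO2 costs $100 and
# prevents $417 of losses accruing to people as rich as present-day
# Americans (both figures illustrative).
cost_per_tonne = 100
damages_in_rich_dollars = 417
raw_multiple = damages_in_rich_dollars / cost_per_tonne  # 4.17x, in rich-person dollars

# Under a log-income wellbeing model, a marginal dollar to someone ~100x
# poorer (e.g. a GiveDirectly recipient) buys ~100x the wellbeing, so the
# rich-person damages must be deflated before comparing to cash transfers.
income_ratio = 100
value_vs_cash_transfers = raw_multiple / income_ratio
print(round(value_vs_cash_transfers, 3))  # 0.042: far worse than cash
# transfers, even though the quoted formula calls it "4.17x as effective"
```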

Comment by agb on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-08-16T20:39:45.754Z · score: 12 (8 votes) · EA · GW

Quick note on the 'bunching' hypothesis. While that particular post and suggestion is mostly an artefact of the US tax code and would lead to years that look like 20%/0%/20%/0%/etc., there's a similar-looking thing that can happen for non-US GWWC members, namely that their tax year often won't align with the calendar year (e.g. UK is 6th April - 5th April, Australia is 1st July - 30th June I believe).

In these cases I would expect compliant pledge takers to focus on hitting 10% in their local tax year, and when the EA survey asks about calendar years the effect will be that the average for that group is around 10%, but the actual percentage given will range anywhere from 0-20% (if ~10% is being given), often looking like 13% one calendar year, 8% the next, 11% the year after that, etc. In other words, they will appear to be meeting the pledge around 50% of the time in your data. Yet the pledge is being kept by all such members continuously through that period. Eyeballing your 2017 graph of the actual distributions of percentages given, there are a lot of people in the 8-10% range, who are the main candidates for this.
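A toy example of how this misalignment shows up in the data. The donor, salary, and donation dates below are entirely made up; the donor gives exactly 10% in every UK tax year, yet the calendar-year totals wobble:

```python
from collections import defaultdict

# Hypothetical UK donor: £50k salary, gives exactly £5k (10%) in each
# tax year (6 April - 5 April), but on dates that straddle calendar years.
salary = 50_000
donations = [  # (calendar_year, amount); each tax year sums to £5k
    (2015, 2_000), (2015, 3_000),  # tax year 2015/16, both paid early
    (2016, 1_500), (2017, 3_500),  # tax year 2016/17, split across April
    (2017, 5_000),                 # tax year 2017/18, paid before December
]

by_year = defaultdict(int)
for year, amount in donations:
    by_year[year] += amount
percentages = {y: 100 * a / salary for y, a in sorted(by_year.items())}
print(percentages)  # {2015: 10.0, 2016: 3.0, 2017: 17.0}
```

A calendar-year compliance check would flag 2016 as a missed year, even though this donor never lapsed on a tax-year basis.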

Since most US members and most non-US members alike have good reasons not to hit 10% in every calendar year, the number I find most compelling is the one in the bunching section that averages 2015 and 2016 donations (and finds 69% compliance when doing so). But that number suffers from not knowing if those people were actually GWWC members in 2015. It just knows they were members when they took the survey in 2017. GWWC had large growth around that time, so that's a thorny issue. Then the 2018 survey solves the 'when did they join' problem but can't handle any level of donations not exactly aligning with the 2017 calendar year.

My best guess thinking over all this would be that 73% of the GWWC members in this EA survey sample are compliant with the pledge, with extremely wide error bars (90% confidence interval 45% - 88%). I like Jeff's suggestion below as a way to start to reduce those error bars.

Comment by agb on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-20T22:11:06.076Z · score: 15 (9 votes) · EA · GW

Fair enough. I remain in almost-total agreement, so I guess I'll just have to try and keep an eye out for what you describe. But based on what I've seen within EA, which is evidently very different to what you've seen, I'm more worried about little-to-zero quantification than excessive quantification.

Comment by agb on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-14T19:07:11.362Z · score: 47 (17 votes) · EA · GW

I'm feeling confused.

I basically agree with this entire post. Over many years of conversations with Givewell staff or former staff, I can't readily recall speaking to anyone affiliated with Givewell who I could identify as substantively disagreeing with the suggestions in this post. But you obviously feel that some (reasonably large?) group of people disagrees with some (reasonably large?) part of your post. I understand a reluctance to give names, but focusing on Givewell specifically, as much of their thoughts on these matters are public record here, can you identify what specifically in that post or the linked extra reading you disagree with? Or are you talking to EAs-not-at-Givewell? Or do you think Givewell's blog posts are reasonable but their internal decision-making process nonetheless commits the errors they warn against? Or some possibility I'm not considering?

I particularly note that your first suggestion to 'entertain multiple models' sounds extremely similar to 'cluster thinking' as described and advocated-for here, and the other suggestions also don't sound like things I would expect Givewell to disagree with. This leaves me at a bit of a loss as to what you would like to see change, and how you would like to see it change.

Comment by agb on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T20:11:31.996Z · score: 22 (10 votes) · EA · GW

>Also, not to mention all the career paths that aren't earning to give or "work in an EA org"

While I share your concern about the way earning to give is portrayed, I think this issue might be even more pressing.

Comment by AGB on [deleted post] 2019-02-09T19:27:22.386Z

> But I would argue if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value of mitigating nuclear war.

For clarity's sake, I don't disagree with this. This does mean that your argument for overwhelming value of mitigating nuclear war is still predicated on developing a safe AI (or some other way of massively reducing the base rate) at a future date, rather than being a self-contained argument based solely on nuclear war being an x-risk. Which is totally fine and reasonable, but a useful distinction to make in my experience. For example, it would now make sense to compare whether working on safe AI directly or working on nuclear war in order to increase the number of years we have to develop safe AI is generating better returns per effort spent. This in turn I think is going to depend heavily on AI timelines, which (at least to me) was not obviously an important consideration for the value of working on mitigating the fallout of a nuclear war!

Comment by agb on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-25T08:25:02.965Z · score: 6 (6 votes) · EA · GW

I agree with this summary. Thanks Peter and sorry for the wordiness Milan, that comment ended up being more of a stream of consciousness that I’d intended.

Comment by agb on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-24T18:15:27.295Z · score: 10 (9 votes) · EA · GW

> I personally feel much more funding constrained / management capacity constrained / team culture “don’t grow too quickly” constrained than I feel “I need more talented applicants” constrained. I definitely don’t feel a need to trade away hundreds of thousands or millions of dollars in donations to get a good hire...

Something about this phrasing made me feel a bit 'off' when I first read this comment, like I'd just missed something important, but it took me a few days to pin down what it was.

I think this phrasing implicitly handles replaceability significantly differently to how I think the core orgs conventionally handle it. To illustrate with arbitrary numbers, let's say you have two candidates A and B for a position at your org, and A you think would generate $500k a year of 'value' after accounting for all costs, while B would generate $400k.

Level 0 thinking suggests that A applying to your org made the world $100k per year better off; if they would otherwise earn to give for $50k they shouldn't do that, but if they would otherwise EtG for $150k they should do that.

Level 0 thinking misses the fact that when A gets the job, B can go and do something else valuable. Right now I think the typical implicit level 1 assumption is that B will go and do something almost as valuable as the $400k, and so A should treat working for you as generating close to $500k value for the world, not $100k, since they free up a valuable individual.

In this world and with level 1 assumptions, your org doesn't want to trade away any more than $100k to get A into the applicant pool, but the world should be willing to trade $500k to get A into the pool. So there can be a large disparity between 'what EA orgs should recommend as a group' and 'what your org is willing to trade to get more talented applicants', without any actual conflict or disagreement over values or pool quality, in the style of your (1) / (2) / (3).

That being said, I notice that I'm a lot less sold on the level 1 assumptions than I used to be. I hadn't noticed that I now feel very differently to, say, 24 months ago until I was focusing on the question to write this reply, so I'm not sure exactly what has changed my mind about it, but I think it's what I perceive as a (much) higher level of EA unemployment or under-employment. Where I used to find the default assumption of B going and doing something almost as directly valuable credible, I now assign high (>50%) probability that B will either end up unemployed for a significant period of time, or end up 'keeping the day job' and basically earning-to-give for some much lower amount than the numbers EA orgs generally talk about. I feel like there's a large pool of standing applicants for junior posts already out there, and adding to the pool now is only worth the difference between the quality of the person added and the existing marginal person, and not the full amount as it was when the pool was much smaller.

How much this matters in practice obviously depends on what fraction of someone's total value to an org is 'excess' value relative to the next marginal hire, but my opinion based on private information about just who is in the 'can't get a junior job at an EA org' pool is that this pool is pretty high quality right now, and so I'm by-default sceptical that adding another person to it is hugely valuable. Which is a much more precise disagreement with the prevailing consensus than I previously had, so thanks for pointing me in a direction that helped me refine my thoughts here!
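The three sets of assumptions above can be sketched with the numbers from my example. The fallback earning-to-give figure for B is a hypothetical I'm adding here for illustration:

```python
# Replaceability sketch using the arbitrary numbers from the comment:
# candidate A would generate $500k/year in the role, candidate B $400k.
value_A, value_B = 500_000, 400_000

# Level 0: assume B does nothing valuable if A takes the job.
# A's counterfactual impact is just the quality difference.
level_0 = value_A - value_B  # $100k

# Level 1: assume B redeploys to something almost as valuable, so the
# world keeps ~all of B's $400k and A's full $500k counts.
level_1 = value_A  # $500k

# Revised view: with high EA un(der)employment, B instead falls back to
# something much less valuable (hypothetical $100k of earning to give).
value_B_fallback = 100_000
revised = value_A - (value_B - value_B_fallback)  # $200k
print(level_0, level_1, revised)  # 100000 500000 200000
```

On the revised assumptions, A joining the pool is worth a lot more than level 0 suggests but well short of the full level 1 figure, which is the disagreement I'm trying to pin down.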

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-20T10:18:22.961Z · score: 1 (1 votes) · EA · GW

Re. your first paragraph, I don’t know why you chose to reply to my comment specifically, since as far as I can tell I’ve never been asking ‘why do people hire slowly’.

I think I’ve already explained why I don’t agree with your later paragraphs and see little value in repeating myself, so we should probably just leave it there.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-19T08:21:14.625Z · score: 10 (10 votes) · EA · GW

> With your first two paragraphs, I just want to step back and point out that things get pretty confusing when you include all opportunity costs. When you do that, the return of every action except the single best action is zero or negative. Being close to zero is actually good. It's probably less confusing to think in terms of a ranked list of actions that senior staff could take.

True, but I haven't accounted for all the opportunity costs, just one of them, namely the 'senior staff time' opportunity cost. If you are in fact close to 0 after that cost alone (i.e. being in a situation where a new hire would use x time and generate $1m, but an alternative org-improvement action that could be taken internally would generate $950k), that isn't 'good', it's awful, because one of those actions incurs opportunity costs on the applicant side (namely, and at the risk of beating a dead horse, the cost of not earning to give), but the other does not.

So we could look at this as a ranked list of potential senior staff actions, but to do so usefully the ranking needs to be determined by numbers that account for all the costs and benefits to the wider world and only exclude senior staff time (i.e. use $1m minus opportunity cost to applicant minus salary minus financial cost of hiring process per successful hire etc.), not this gross $1m number.
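To make that accounting concrete, here's a toy calculation restating the hypothetical figures above in code (every number is illustrative, not a real estimate):

```python
# Toy comparison of two actions available to senior staff. Senior staff time is
# excluded from both sides, since that's the scarce resource being allocated.

def net_value(gross_value, applicant_opportunity_cost=0, other_costs=0):
    """Value of an action after subtracting costs borne by the wider world."""
    return gross_value - applicant_opportunity_cost - other_costs

# Option A: make a marginal hire. Gross output $1m, but the applicant gives up
# (say) $900k of earning-to-give donations elsewhere.
hire = net_value(1_000_000, applicant_opportunity_cost=900_000)

# Option B: an internal org-improvement project worth $950k, with no
# applicant-side opportunity cost.
internal = net_value(950_000)

# On gross numbers the hire looks better ($1m > $950k); on net numbers it is
# much worse, which is the point being made above.
print(hire, internal)  # 100000 950000
```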

Similarly, potential applicants to EA orgs making a ranked list of their options should include all costs and benefits that aren't tied to them, i.e. they should subtract senior staff time from the $1m number, if that hasn't been done for them already. Which is what I've in fact been recommending people do. But my experience is that people who haven't been directly involved in hiring during their career radically underestimate the cost of hiring, and many applicants fall into that camp, so for them to take account of this is not trivial. I mean, it's not trivial for the orgs either, but I do think it is relatively easier.

I also expect the orgs partially take account of the opportunity costs of staff time when reporting the dollar value figures, though it's hard to be sure. This is why next year we'll focus on in-depth interviews to better understand the figures rather than repeating the survey.

Given this conversation, I'm pretty skeptical of that? My experience with talking to EA org leaders is that if I beat this horse pretty hard they back down on inflated numbers or add caveats, but otherwise they say things that sound very like 'However, it would still be consistent with the idea that marginal hires are valuable and can have more impact by working at the org than by earning to give, since each generates $1m.', a statement it sounds like we now both agree is false or at least apt to mislead people who could earn to give for slightly less than $1m into thinking they should switch when they shouldn't.

For the benefit of third parties reading this thread, I have had conversations in this area with Ben and other org leaders in the past, and I actually think Ben thinks about these issues more clearly than almost anybody else I've spoken to. So the above paragraph should not be read as a criticism of him personally, rather a statement that 'if you can slip up and get this wrong, everybody else is definitely getting this wrong, and I speculate that you might be projecting a bit when you state they are getting it right'. The only thing Ben personally has done here is been kind enough to put the model in writing where I can more-easily poke holes in it.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T21:37:58.696Z · score: 4 (4 votes) · EA · GW

In that simple model, it appears to me that marginal hires are worth $1m minus the counterfactual use of senior staff time (hypothetically, and relevantly, suppose all the possible hires for a position decided to earn to give instead overnight. It would not be the case that the world was $1m worse off, because now senior staff are freed up to do other things). If there are in fact even more important things for senior management to focus on, this would be a negative number.

More realistically, we could assume orgs are prioritising marginal hiring correctly relative to their other activities (not obvious to me, but a reasonable outside view without delving into the org particulars I think), in which case the value of a marginal additional hire would simply be ~0.

So again, I appreciate the attempt to boil down the area of disagreement to the essentials, and even very largely agree with the essential models and descriptions you and Rob are using as they happen to match my experience working and recruiting for a different talent-constrained organisation, but feel like this kind of response is missing the point I'm trying to make, namely that these ex post numbers, even if accurate on their own terms, are not particularly relevant for comparison of options.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T07:41:00.796Z · score: 4 (4 votes) · EA · GW

Since most of the EA orgs in question are heavily constrained in hiring by whatever level of growth they can manage or feel comfortable with (that’s kinda the whole point of the OP, right?), it would not generally be my assumption that additional funds would be used for extra hiring compared to the counterfactual. I grant that if that is the assumption, these effects seem to cancel.

Other ways not listed in your last paragraph:

- earning to give to allow orgs to raise salaries

- earning to give to fund regranting

- earning to give to fund things like targeted advertising (you may have intended to cover this category in 'capital goods', I'm not sure)

These things are much closer to my model of where extra funding to at least CEA and 80k in at least the past 18 months has gone, not into additional hiring.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-12T23:06:33.102Z · score: 14 (10 votes) · EA · GW

Thanks for making the ex ante versus ex post distinction. But it makes me confused about the penultimate paragraph; if I am offered a job at an org and am comparing to earning to give, shouldn't I be using the (currently unpublicised) ex ante numbers, not these ex post numbers?

The risks of bad personal fit, costs of senior staff time, costs of fast hiring in general, and time taken to build trust are all still in the future at that point, and don't apply to the earning to give alternative. As far as I can tell, the only cost which has already been sunk at that point is the cost of evaluating me as a candidate. In my experience of working on the recruiting side of a non-EA org, this is far smaller than the other costs outlined, in particular the costs of training and building trust. I'm curious if the EA orgs feel differently.

In general though, I don’t think attempting to save these numbers by pointing out how hiring is subtly much more expensive than you would think interacts much with my objection to these numbers, since each additional reason you give me for why hiring is wildly expensive for EA orgs is yet another reason to prefer earning to give, precisely because it does not impose those costs! All these reasons simply mean the numbers should be lower than what you get from an ex post phrasing of the question, at least insofar as they are being used for what appears to be their intended purpose, namely comparison of options.

Comment by agb on CEA on community building, representativeness, and the EA Summit · 2018-08-19T22:08:07.737Z · score: 11 (11 votes) · EA · GW

(Speaking as a member of the panel, but not in any way as a representative of CEA).

It’s worth noting the panel hasn’t been consulted on anything in the last 12 months. I don’t think there’s anything necessarily wrong with this, especially since it was set up partly in response to the Intentional Insights affair and AFAIK there has been no similar event in that time, but I have a vague feeling that someone reading Julia’s posts would think it was more common, which I guess was part of the ‘question behind your question’, if that makes sense :)

Comment by agb on Should there be an EA crowdfunding platform? · 2018-05-03T12:49:25.835Z · score: 2 (2 votes) · EA · GW

Some way of distributing money to risky ventures, including fundraising, in global poverty and animal welfare should probably exist.

I think it's pretty reasonable if CEA doesn't want to do this because (a) they take a longtermist view and (b) they have limited staff capacity so aren't willing to divert many resources from (a) to anything else. In fact, given CEA's stated views it would be a bit strange if they acted otherwise. I know less about Nick, but I'm guessing the story there is similar.

I have a limited sense for what to do about this problem, and I don't know if the solution in the OP is actually a good idea, but recognising the disconnect between what people want and what we have is a start.

I may write more about this in the near future.

Comment by agb on Comparative advantage in the talent market · 2018-04-16T02:01:56.053Z · score: 1 (1 votes) · EA · GW

I agree with your last paragraph, but indeed think that you are being unreasonably idealistic :)

Comment by agb on Comparative advantage in the talent market · 2018-04-16T01:58:51.694Z · score: 3 (3 votes) · EA · GW

I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that the cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may consist of justifying, developing, and improving the accuracy of those exact beliefs.

Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At which point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.

I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.

Comment by agb on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-27T17:39:24.630Z · score: 0 (0 votes) · EA · GW

To chime in as someone who has very recently spent a lot of time in both London and SF, a 1.8:1 ratio (as in $1.8y is about the same as £y) is very roughly what I would have said for living costs between that pair, though living circumstances will vary significantly.

Pound to dollar exchange rates have moved a ton in the last few years, whereas I don't think local salaries or costs of living have moved nearly as much, so I expect that 1.8:1 heuristic to be more stable/useful than trying to do the same comparison including a currency conversion (depending what point in the last few years you picked/moved, that ratio would imply anywhere between a 1.05x increase and a 1.55x increase).
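As a rough check on that range (my arithmetic, using assumed $/£ exchange-rate endpoints of roughly 1.16 and 1.71 over the period; those endpoints are not figures from the comment above):

```python
# If living costs satisfy $1.8y ~= £y, then naively converting a London salary
# into dollars at the prevailing exchange rate implies a required salary
# increase of (1.8 / rate) when moving to SF.

cost_ratio = 1.8  # dollars of SF living cost per pound of London living cost
rates = {"high": 1.71, "low": 1.16}  # assumed $/£ endpoints in recent years

implied = {k: round(cost_ratio / r, 2) for k, r in rates.items()}
print(implied)  # {'high': 1.05, 'low': 1.55}
```

This is why the 1.8:1 heuristic is more stable than a currency-converted comparison: the implied increase swings with the exchange rate even though local costs barely moved.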

Comment by agb on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T06:55:06.693Z · score: 3 (3 votes) · EA · GW

(Disclaimer: I am Denise’s partner, have discussed this with her before, and so it’s unsurprising if I naturally interpreted her comment differently.)

Enthusiasm != consent. I'm not sure where enthusiasm made it into your charitable reading.

Denise’s comment was deliberately non-gendered, and we would both guess (though without data) that once you move to the fuzzy ‘insufficient evidence of consent’ section of the spectrum there will be lots of women doing this, possibly even accounting for the majority of such cases in some environments.

Comment by agb on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T15:34:39.990Z · score: 7 (7 votes) · EA · GW

So as a general principle, it's true that discussion of an issue filters out (underrepresents) people who find or have found the discussion itself unpleasant*. In this particular case I think that somewhat cuts both ways, since these discussions as they take place in wider society often aren't very pleasant in general, for either side. See this comic.

To put it more plainly, I could easily name a lot of people who will strongly agree with this post but won't comment for fear of criticism and/or backlash. Like you I don't think there is an easy fix for this.

*Ironically, this is part of what Kelly is driving at when she says that championing free speech can sometimes inhibit it.

Comment by agb on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T15:14:11.022Z · score: 0 (2 votes) · EA · GW

Either way, the effect is I really haven't felt like I've had too many discussions in EA about diversity. It's not like it's my favourite topic or anything.

It's extremely hard to generalize here because different geographies have such different stories to tell, but my personal take is that the level of (public) discussion about diversity within EA has dipped somewhat over time.

When I wrote the Pandora's Box post 2.5 years ago, I remember being sincerely worried that low-quality discussion of the issue would swamp a lot of good things that EA was accomplishing, and I wanted to build some consensus before that got out of hand. I can't really imagine feeling that way now.

Comment by agb on Effective altruism is self-recommending · 2017-05-07T11:42:00.844Z · score: 2 (2 votes) · EA · GW

I found the post, was struggling before because it's actually part of their career guide rather than a blog post.

Comment by agb on Effective altruism is self-recommending · 2017-05-06T13:55:28.026Z · score: 1 (1 votes) · EA · GW

Thanks for digging up those examples.

's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

This is a true dynamic, but to be specific about one of the examples I had in mind: A little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

Comment by agb on Update on Effective Altruism Funds · 2017-04-30T12:22:24.700Z · score: 0 (0 votes) · EA · GW

(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don't feel like I have a satisfactory handle on it, but also feel I should reply at this point.)

this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less or feel less strongly about it equally with the opinions of others.

This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not 'the funds have not been well-received', but rather 'the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way'. Maybe that is what you were aiming for originally and I just didn't read it that way.

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

True. That post was only a couple of months before this one though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I'm curious whether you personally perceive a change (drop) in popularity in your circles?

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

This story sounds plausibly true. It's a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so turning it over in my head for the past few days I'm still not sure how much weight to put on it.

Comment by agb on Update on Effective Altruism Funds · 2017-04-30T10:48:40.324Z · score: 1 (1 votes) · EA · GW

That seems like a good use of the upvote function, and I'm glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:

  1. I don't think it's a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track 'I agree with this' extremely strongly, even though lots of people (myself included) agree that ideally they shouldn't.

  2. In the hypothetical where there's lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:

Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea.

*Not sure if 'forgot' is quite the right word here, just mirroring your description of my comment as 'reminding' you.

Comment by agb on Effective altruism is self-recommending · 2017-04-30T10:28:51.835Z · score: 3 (3 votes) · EA · GW

The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations.

If this is basically saying 'we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas', then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as 'EAs' represent a wide range of commitment levels.

One reason for this is that depending which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they'll see wildly different distributions of commitment and similarly differing representation of various cause areas.

With that said, I'm not totally sure if that's the point you're making because my personal experience in London is that we've been going out of our way to make the above points for a while; what's an example of marketing which you think works to maintain a homogenous public image?

Comment by agb on Effective altruism is self-recommending · 2017-04-28T19:04:56.546Z · score: 9 (9 votes) · EA · GW

Trying to square this circle, because I think these observations are pretty readily reconcilable. My second-hand vague recollections from speaking to people at the time are:

  1. The programming had a moderate slant towards AI risk because we got Elon.
  2. The participants were generally very bullish on AI risk and other far-future causes.
  3. The 'Global poverty is a rounding error' crowd was a disproportionately-present minority.

Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.

Further, (2) and (3) aren't surprising if you think about the way San Francisco EAs are drawn differently to EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.

Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn't have the same issues, or at least it didn't to the best of my knowledge as a participant who cared very little for AI risk at the time. I can't speak to EAG Melbourne but I'd guess the same was true.

While (2) and (3) aren't really CEA's fault, there's a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I'm moderately sympathetic to this argument but it's very easy to make this kind of point with hindsight; I don't know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn't hear or read anyone complaining about this at EAG 2016 in this way, so maybe we did?

Comment by agb on Effective altruism is self-recommending · 2017-04-24T08:01:24.983Z · score: 13 (13 votes) · EA · GW

(Disclosure, I read this post, thought it was very thorough, and encouraged Ben to post it here.)

It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?

Just to balance this, I actually liked the Ponzi scheme section. I think that making the claim 'aspects of EA have Ponzi-like elements and this is a problem' without carefully explaining what a Ponzi scheme is and without explaining that Ponzi-schemes don't necessarily require people with bad intentions would have potential to be much more inflammatory. As written, this piece struck me as fairly measured.

Also, since the claims are aimed at a potentially-flawed general approach/mindset rather than being specific to current actions, zeroing in too much might be net counterproductive in this case; there's some balance to strike here.

Comment by agb on Update on Effective Altruism Funds · 2017-04-22T23:20:50.546Z · score: 2 (2 votes) · EA · GW

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect...

In the OP Kerry wrote:

The donation amounts we’ve received so far are greater than we expected, especially given that donations typically decrease early in the year after ramping up towards the end of the year.

CEA's original expectation of donations could just have been wrong, of course. But I don't see a failure of logic here.

Re. your last paragraph, Kerry can confirm or deny but I think he's referring to the fact that a bunch of people were surprised to see GWWC start recommending the EA funds and close down the GWWC trust recently (e.g.? Not sure if there were other cases.), when CEA hadn't actually officially given the funds a 'green light' yet. So not referring to the same set of criticisms you are talking about. I think 'confusion at GWWC's endorsement of EA funds' is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.

*I got the e-mail from GWWC announcing this on 11th April. I got CEA's March 2017 update saying they'd decided to continue with the funds later on the same day, but I think that goes to a much narrower list and in the interim I was confused and was going to ask someone about it. Checking now it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don't read that.

Comment by agb on Update on Effective Altruism Funds · 2017-04-22T22:51:44.428Z · score: 6 (6 votes) · EA · GW

So I probably disagree with some of your bullet points, but unless I'm missing something I don't think they can be the crux of our disagreement here, so for the sake of argument let's suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.

...I still don't see how to get from here to (for example) 'The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time'. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they 'should', why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry's post introducing EA Ventures has just 16 upvotes*.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.

FWIW I'm very open to suggestions on how we could settle this question more definitively. I expect CEA pushing ahead with the funds if the community as a whole really is net-negative on them would indeed be a mistake. I don't have any great ideas at the moment though.


Comment by agb on Update on Effective Altruism Funds · 2017-04-22T12:51:34.190Z · score: 15 (15 votes) · EA · GW

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community is as-a-whole positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA funds, that's fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-16T12:40:24.973Z · score: 2 (2 votes) · EA · GW

Was this intended as a response to my comment? I didn't bring up the $70k figure or the $200k figure. I did take up one part of your argument (the 'minimum standards' part) and try to explain why I don't think using a $2k - $5k minimum as equivalent to the median Indian actually makes sense.

Advantage of the "bailey": makes people feel extremely guilty and more likely to donate money or sign the pledge.

FWIW I doubt this is actually true. I have generally strongly preferred to understate people's relative income rather than overstate it when 'selling' the pledge, because it shrinks the inferential distance.

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-08T16:51:58.488Z · score: 3 (3 votes) · EA · GW

If minimum standards rise to $90,000 and I'm earning $100,000, I would argue they do probably affect me substantially and my original premise of 'minimum standards that basically don't affect me' no longer holds. For example, I might start putting substantial money aside to make sure I can meet the minimum if I lose my job, which will eat into my standard of living. That's why I used numbers where I think that statement does actually hold ($10,000 minimum versus $100,000 income).

that's a nice fantasy but in reality the way the west works is if you are a single young male and you have less than enough money to afford rent, there is no safety net in many places, especially the USA and the UK. You are thrown into the homelessness trap.

Sure, this is why I said 'hypothetically' and 'in 50 years'. I'm not sure your above claim is true in the UK even as of today in any case.

(UK benefits are a bit of a maze so I'm wary of saying anything too general, but running through one website and trying to select answers that correspond to '22 year old single healthy male living in my area with no source of income', I get an entitlement of £8,300 per year, most of which (around £5,200) is meant to cover the cost of shared housing. Eyeballing that number I think £100pw should indeed be enough to get a room in a shared property at the low end of the housing market around here.

I think it is also true that a 21 year old wouldn't get that entitlement because they are supposed to live with their parents, but there are meant to be 'protections' in place where that isn't possible for whatever reason. I haven't dug further than that.)

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-06T19:21:14.983Z · score: 9 (9 votes) · EA · GW

I think your last paragraph is plausibly true and relevant, but this is a common argument and it has common rebuttals, one of which I'm going to try and lay out here.

However, $700/year (= $1.91/day, =€1.80/day, =£1.53 /day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter. Clearly, the median person in India is better off than a dead person.

The basics of survival are food, water, accommodation and medical care. Medical care is normally provided by the state for the poorest in the West, so let's set that to one side for a moment. For the rest, we set a lot of minimum standards on what is available to buy: you can't buy rice below some minimum safety standard, even if that very low-quality rice is more analogous to the rice eaten by a poor Indian person; virtually all (maybe actually all?) dwellings in the US have running water; and so on.

This presents difficult problems for making these comparisons, and I think it's part of what Rob is talking about in his point (2). One method that comes to mind is to take your median Indian and find a rich Indian who is 10x richer, then work out how that person compares to poor Americans since (hopefully) the goods they buy have significantly more overlap. Then you might be able to stitch your income distributions together and say something like [poor Indian] = [Rich Indian] / 10 = [Poor American] / 10 = [Rich American] / 100. I have some memory that this is what some of the researchers building these distributions actually do but I can't recall the details offhand; maybe someone more familiar can fill in the blanks.
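To make the stitching idea concrete, here is a minimal sketch with entirely hypothetical numbers (the function name and the specific ratios are mine, not from any actual study): you only ever compare people roughly 10x apart, where their consumption baskets overlap enough for the comparison to be meaningful, then multiply the pairwise ratios together.

```python
# Illustrative sketch of "stitching" income distributions together via
# pairwise comparisons between groups with overlapping consumption baskets.

def chain_ratio(steps):
    """Multiply a chain of pairwise income ratios into one overall ratio."""
    ratio = 1.0
    for step in steps:
        ratio *= step
    return ratio

# Hypothetical pairwise comparisons:
steps = [
    10,  # a rich Indian is ~10x the median (poor) Indian
    1,   # that rich Indian ≈ a poor American (comparable baskets)
    10,  # a rich American is ~10x the poor American
]

print(chain_ratio(steps))  # → 100: rich American ≈ 100x the median Indian
```

Each individual step is defensible because the two groups buy similar goods; it's only the overall 100x figure that never has to be justified by a single direct comparison.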

A realistic minimum amount of money to not die in the west is probably $2000-$5000/year, again without gifts or handouts, implying that to be 100 times richer than the average Indian, you have to be earning at least $200,000-$500,000 net of tax (or at least net of that portion of tax which isn't spent on things that benefit you - which at that level is almost all of it, unless you are somehow getting huge amounts of government money spent on you in particular).

Building on the above, hypothetically suppose over the next 50 years the West continues on its current trend of getting richer and putting more minimum standards in place; the minimum to survive in the West is now $10,000 per year and the now-much-richer countries have a safety net that enables everyone to reach this. However, in India nothing happens.

Is it now true that I need at least $1,000,000 per year to be 100x richer than the median Indian? That seems perverse. Supposing my income started at $100,000 and stayed constant in real terms throughout, why do increases in minimum standards that basically don't affect me (I was already buying higher-than-minimum-quality things) and don't at all affect the median Indian make me much poorer relative to the median Indian? As a result I think this particular section 'proves too much'.
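The perversity can be made concrete with a toy calculation (all numbers hypothetical, and the variable names are mine): a plain ratio of incomes is unchanged by a rise in Western minimum standards, whereas measuring income in multiples of the local survival minimum makes me look poorer even though nothing about anyone's consumption changed.

```python
# Toy comparison of two definitions of "how much richer", hypothetical numbers.

my_income = 100_000      # constant in real terms throughout the 50 years
indian_income = 700      # median Indian income, also unchanged

def income_ratio(a, b):
    return a / b

# Definition 1: plain ratio of incomes. Unaffected by Western minimum standards.
print(income_ratio(my_income, indian_income))  # ~143, before and after

# Definition 2: income in multiples of the *local* survival minimum.
western_minimum_now = 3_500      # say, within the $2,000-$5,000 range
western_minimum_future = 10_000  # after 50 years of rising standards

print(my_income / western_minimum_now)     # ~29 "survival multiples" today
print(my_income / western_minimum_future)  # 10 "survival multiples" later
```

Under the second definition I appear to get roughly 3x poorer relative to the median Indian without anything real changing for either of us, which is the sense in which the argument proves too much.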

Comment by agb on Concrete project lists · 2017-03-26T10:04:11.980Z · score: 12 (12 votes) · EA · GW

I'm sympathetic to this view, though I think the EA funds have some EA-Ventures-like properties; charities in each of the fund areas presumably can pitch themselves to the people running the funds if they so choose.

One difference that has been pointed out to me in the past is that for (e.g.) EA Ventures you have to put a lot of up-front work into your actual proposal. That's time-consuming and costly if you don't get anything out of it. That's somewhat different to handing some trustworthy EA an unconditional income and saying 'I trust your commitment and your judgment, go and do whatever seems most worthwhile for 6/12/24 months'. It's plausible to me that the latter involves less work on both donor and recipient side for some (small) set of potential recipients.

With that all said, better communication of credible proposals still feels like the relatively higher priority to me.

Comment by agb on Advisory panel at CEA · 2017-03-09T20:11:43.060Z · score: 2 (2 votes) · EA · GW

Hi Alasdair,

Perhaps to mitigate my meandering can the members of the council give one example of something the CEA has done in the last 12 months they are willing to publicly disagree with?

Well, I'm far from sold on the principles and panel being a good idea in the first place. But everything in the linked comment is low confidence, some of it doesn't apply given the actual implementation, and certainly it's not obvious to me that it's a bad idea (i.e. I have a small positive but extremely uncertain EV).

For something that happened that I more robustly disagree with, a lot of the marketing around EA Global last year concerned me. I didn't go, so I only heard about it secondhand, and so I didn't feel best-placed to raise it directly, but from a distance I think pretty much everything Kit said in this thread re. marketing was on point.

With that said, I think there is definitely some version of what you are saying that I would agree with; I certainly would consider myself very much an EA 'insider', albeit one who has no particular personal interest in CEA itself doing well except insofar as it helps the community do well. I'm not sure of the best way for CEA (or EA in general, for that matter; this isn't just their responsibility) to hear from people who are genuinely external or peripheral to EA, except that I think a small panel of people is probably not it.