Posts

EA Diversity: Unpacking Pandora's Box 2015-02-01T00:40:05.862Z · score: 34 (25 votes)

Comments

Comment by agb on Will protests lead to thousands of coronavirus deaths? · 2020-07-08T18:05:55.898Z · score: 19 (9 votes) · EA · GW

For posterity, the only data I've seen on this question suggests that this has not played out the way the OP and many others (myself included) might have expected. The Economist ran an article* which links to this paper**. In short, cities with protests did not record discernible COVID case growth, at least as of a few weeks later. Moreover, quoting the paper (italics in original):

"Second, where there are social distancing effects, they only appear to materialize after the onset of the protests. Specifically, after the outbreak of an urban protest, we find, on average, an increase in stay-at-home behaviors in the primary county encompassing the city. That overall social distancing behavior increases after the mass protests is notable, as this finding contrasts with the general secular decline in sheltering-at-home taking place across the sample period (see Appendix Figure 6). Our findings suggest that any direct decrease in social distancing among the subset of the population participating in the protests is more than offset by increasing social distancing behavior among others who may choose to shelter-at-home and circumvent public places while the protests are underway. "

In other words, it seems that protestors being outside was more than offset by other people avoiding the protests and staying home.

* https://www.economist.com/graphic-detail/2020/06/30/black-lives-matter-protests-did-not-cause-an-uptick-in-covid-19-cases

** https://www.nber.org/papers/w27408

Comment by agb on Pablo_Stafforini's Shortform · 2020-01-14T22:52:00.482Z · score: 1 (1 votes) · EA · GW

Pablo already replied, but FWIW I had the same irritation (and similarly had all posts pointed out to me by someone else after complaining to them about it). I think in my case the original assumption was that 'latest posts' meant what it sounds like, and on discovering that it didn't, I (lazily) assumed there wasn't a way to get what I wanted.

I don't have a constructive suggestion for a better name though.

Comment by agb on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-09-13T14:07:36.855Z · score: 5 (3 votes) · EA · GW

I agree with this. I would have assumed they would do (i), and other responses from people who actually read the paper make me think it might effectively be (iii). I don't think it's (ii).

Comment by agb on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-09-10T20:44:54.094Z · score: 49 (23 votes) · EA · GW
> If a climate change intervention has a cost-effectiveness of $417 / X per tonne of CO2 averted, then it is X times as effective as cash-transfers.

Wait a second.

I'm very confused by this sentence. Suppose, for the sake of argument, that all the impacts of emitting a tonne of CO2 are on people about as rich as present-day Americans, i.e. emitting a tonne of CO2 now causes people of that level of wealth to lose $417 at some point in the future. There is then no income adjustment necessary (I assume everything is being converted to something like present-day USD for present-day Americans, but I'm not actually sure and following the links didn't shed any light), so the post-income-adjustment number is still $417. Also suppose for the sake of argument that we can prevent this for $100.

This seems clearly worse than cash transfers to me under usual assumptions about log income being a reasonable approximation to wellbeing (as described in your first appendix), since we are effectively getting a 4.17x multiplier rather than a 50-100x multiplier. Yet the equation in the quote claims it is 4.17x more effective than cash transfers*.
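To make the arithmetic explicit, here is my toy example as a calculation (a minimal sketch; all the numbers are from my hypothetical above, not from the post):

```python
# Toy numbers from my hypothetical above (mine, not the post's).
harm_per_tonne = 417   # $ of future damage per tonne of CO2, all falling on
                       # people about as rich as present-day Americans
cost_to_avert = 100    # $ to avert one tonne

# The quoted formula: cost-effectiveness of $417 / X per tonne means
# "X times as effective as cash transfers", i.e. X = 417 / cost.
claimed_multiplier = harm_per_tonne / cost_to_avert  # 4.17

# Under the log-income assumption, $1 to someone ~100x poorer is worth ~100x
# more, so cash transfers to the global poor carry a ~50-100x multiplier.
cash_transfer_multiplier = 50  # lower end of the usual range

print(claimed_multiplier)                             # 4.17
print(claimed_multiplier < cash_transfer_multiplier)  # True: worse than transfers
```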

What am I missing?

*Mathematically, I think the equation works iff the cash transfers in question are to people of comparable wealth to whatever baseline is being used to come up with the $417 figure. So if the baseline is modern-day Americans, that equation calculates how much better it is to avert CO2 emissions than to transfer cash to modern-day Americans.

Comment by agb on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-08-16T20:39:45.754Z · score: 12 (8 votes) · EA · GW

Quick note on the 'bunching' hypothesis. While that particular post and suggestion is mostly an artefact of the US tax code and would lead to years that look like 20%/0%/20%/0%/etc., there's a similar-looking thing that can happen for non-US GWWC members, namely that their tax year often won't align with the calendar year (e.g. UK is 6th April - 5th April, Australia is 1st July - 30th June I believe).

In these cases I would expect compliant pledge takers to focus on hitting 10% in their local tax year. When the EA survey asks about calendar years, the effect will be that the average for that group is around 10%, but the actual percentage given in any one calendar year could range anywhere from 0% to 20% (if ~10% is being given), more often looking like 13% one calendar year, 8% the next, 11% the year after that, etc. In other words, they will appear to be meeting the pledge around 50% of the time in your data, yet the pledge is being kept by all such members continuously through that period. Eyeballing your 2017 graph of the actual distribution of percentages given, there are a lot of people in the 8-10% range, who are the main candidates for this.
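To illustrate the mechanism (a minimal sketch with invented numbers, not survey data):

```python
import random

# A pledger earning $50k who gives exactly 10% in every UK tax year
# (6 April - 5 April); the timing of gifts within each tax year is random,
# so calendar-year totals wobble around 10%.
random.seed(0)
income = 50_000
annual_gift = 0.10 * income

calendar_totals = {year: 0.0 for year in range(2014, 2020)}
for tax_year in range(2014, 2019):
    # Some fraction of this tax year's giving lands before 31 December,
    # the rest in the next calendar year.
    frac_before_jan = random.random()
    calendar_totals[tax_year] += annual_gift * frac_before_jan
    calendar_totals[tax_year + 1] += annual_gift * (1 - frac_before_jan)

for year, total in sorted(calendar_totals.items()):
    print(year, f"{100 * total / income:.1f}%")
# Prints figures like 8%, 11%, 13% in different calendar years (the first and
# last years are partial), even though the pledge is kept continuously.
```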

Since both US and non-US members have good reasons not to hit exactly 10% in every calendar year, the number I find most compelling is the one in the bunching section that averages 2015 and 2016 donations (and finds 69% compliance when doing so). But that number suffers from not knowing whether those people were actually GWWC members in 2015; it just knows they were members when they took the survey in 2017. GWWC had large growth around that time, so that's a thorny issue. The 2018 survey then solves the 'when did they join' problem, but can't handle donations not aligning exactly with the 2017 calendar year.

My best guess, weighing all of this, is that 73% of the GWWC members in this EA survey sample are compliant with the pledge, with extremely wide error bars (90% confidence interval 45% - 88%). I like Jeff's suggestion below as a way to start reducing those error bars.

Comment by agb on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-20T22:11:06.076Z · score: 15 (9 votes) · EA · GW

Fair enough. I remain in almost-total agreement, so I guess I'll just have to try and keep an eye out for what you describe. But based on what I've seen within EA, which is evidently very different to what you've seen, I'm more worried about little-to-zero quantification than excessive quantification.

Comment by agb on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-14T19:07:11.362Z · score: 47 (17 votes) · EA · GW

I'm feeling confused.

I basically agree with this entire post. Over many years of conversations with Givewell staff or former staff, I can't readily recall speaking to anyone affiliated with Givewell who I'd expect to substantively disagree with the suggestions in this post. But you obviously feel that some (reasonably large?) group of people disagrees with some (reasonably large?) part of your post. I understand a reluctance to give names, but focusing on Givewell specifically, since much of their thinking on these matters is public record here, can you identify what specifically in that post or the linked extra reading you disagree with? Or are you talking about EAs-not-at-Givewell? Or do you think Givewell's blog posts are reasonable but their internal decision-making process nonetheless commits the errors they warn against? Or some possibility I'm not considering?

I particularly note that your first suggestion to 'entertain multiple models' sounds extremely similar to 'cluster thinking' as described and advocated-for here, and the other suggestions also don't sound like things I would expect Givewell to disagree with. This leaves me at a bit of a loss as to what you would like to see change, and how you would like to see it change.

Comment by agb on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T20:11:31.996Z · score: 22 (10 votes) · EA · GW

>Also, not to mention all the career paths that aren't earning to give or "work in an EA org"

While I share your concern about the way earning to give is portrayed, I think this issue might be even more pressing.

Comment by AGB on [deleted post] 2019-02-09T19:27:22.386Z

> But I would argue if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value of mitigating nuclear war.

For clarity's sake, I don't disagree with this. This does mean that your argument for the overwhelming value of mitigating nuclear war is still predicated on developing a safe AI (or some other way of massively reducing the base rate) at a future date, rather than being a self-contained argument based solely on nuclear war being an x-risk. Which is totally fine and reasonable, but a useful distinction to make in my experience. For example, it would now make sense to compare whether working on safe AI directly or working on nuclear war in order to increase the number of years we have to develop safe AI is generating better returns per effort spent. This in turn I think is going to depend heavily on AI timelines, which (at least to me) were not obviously an important consideration for the value of working on mitigating the fallout of a nuclear war!

Comment by agb on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-25T08:25:02.965Z · score: 6 (6 votes) · EA · GW

I agree with this summary. Thanks Peter, and sorry for the wordiness Milan; that comment ended up being more of a stream of consciousness than I'd intended.

Comment by agb on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-24T18:15:27.295Z · score: 10 (9 votes) · EA · GW

> I personally feel much more funding constrained / management capacity constrained / team culture "don't grow too quickly" constrained than I feel "I need more talented applicants" constrained. I definitely don't feel a need to trade away hundreds of thousands or millions of dollars in donations to get a good hire...

Something about this phrasing made me feel a bit 'off' when I first read this comment, like I'd just missed something important, but it took me a few days to pin down what it was.

I think this phrasing implicitly handles replaceability significantly differently to how I think the core orgs conventionally handle it. To illustrate with arbitrary numbers, let's say you have two candidates A and B for a position at your org, where you think A would generate $500k a year of 'value' after accounting for all costs, while B would generate $400k.

Level 0 thinking suggests that A applying to your org made the world $100k per year better off; if A would otherwise earn to give $50k they should take the job, but if they would otherwise EtG $150k they should keep earning to give.

Level 0 thinking misses the fact that when A gets the job, B can go and do something else valuable. Right now I think the typical implicit level 1 assumption is that B will go and do something almost as valuable as the $400k, and so A should treat working for you as generating close to $500k value for the world, not $100k, since they free up a valuable individual.

In this world and with level 1 assumptions, your org doesn't want to trade away any more than $100k to get A into the applicant pool, but the world should be willing to trade $500k to get A into the pool. So there can be a large disparity between 'what EA orgs should recommend as a group' and 'what your org is willing to trade to get more talented applicants', without any actual conflict or disagreement over values or pool quality, in the style of your (1) / (2) / (3).
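In code form (a minimal sketch using the arbitrary dollar figures from above):

```python
value_A = 500_000  # annual 'value' A would generate at your org, net of all costs
value_B = 400_000  # the same for B, the next-best candidate

# Level 0: A's impact is just the gap over the replacement candidate.
level0_value = value_A - value_B  # $100k/year

# Level 1: displaced B does something else almost as valuable, so the world
# gains nearly the full amount when A joins the pool.
value_B_elsewhere = 400_000  # the implicit assumption I question below
level1_value = (value_A - value_B) + value_B_elsewhere  # ~$500k/year

# Your org should trade at most ~$100k to get A into the pool; the world
# should be willing to trade ~$500k. Both can hold without any disagreement.
print(level0_value, level1_value)
```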


That being said, I notice that I'm a lot less sold on the level 1 assumptions than I used to be. I hadn't noticed that I now feel very differently to, say, 24 months ago until I focused on the question to write this reply, so I'm not sure exactly what has changed my mind, but I think it's what I perceive as a (much) higher level of EA unemployment or under-employment. Where I used to find the default assumption of B going and doing something almost as directly valuable credible, I now assign high (>50%) probability that B will either end up unemployed for a significant period of time, or end up 'keeping the day job' and basically earning-to-give for some much lower amount than the numbers EA orgs generally talk about. I feel like there's a large pool of standing applicants for junior posts already out there, and adding to the pool now is only worth the difference in quality between the person added and the existing marginal person, not the full amount as it was when the pool was much smaller.

How much this matters in practice obviously depends on what fraction of someone's total value to an org is 'excess' value relative to the next marginal hire, but my opinion based on private information about just who is in the 'can't get a junior job at an EA org' pool is that this pool is pretty high quality right now, and so I'm by-default sceptical that adding another person to it is hugely valuable. Which is a much more precise disagreement with the prevailing consensus than I previously had, so thanks for pointing me in a direction that helped me refine my thoughts here!

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-20T10:18:22.961Z · score: 1 (1 votes) · EA · GW

Re. your first paragraph, I don’t know why you chose to reply to my comment specifically, since as far as I can tell I’ve never been asking ‘why do people hire slowly’.

I think I’ve already explained why I don’t agree with your later paragraphs and see little value in repeating myself, so we should probably just leave it there.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-19T08:21:14.625Z · score: 10 (10 votes) · EA · GW

> With your first two paragraphs, I just want to step back and point out that things get pretty confusing when you include all opportunity costs. When you do that, the return of every action except the single best action is zero or negative. Being close to zero is actually good. It's probably less confusing to think in terms of a ranked list of actions that senior staff could take.

True, but I haven't accounted for all the opportunity costs, just one of them, namely the 'senior staff time' opportunity cost. If you are in fact close to 0 after that cost alone (i.e. being in a situation where a new hire would use x time and generate $1m, but an alternative org-improvement action that could be taken internally would generate $950k), that isn't 'good', it's awful, because one of those actions incurs opportunity costs on the applicant side (namely, and at the risk of beating a dead horse, the cost of not earning to give), but the other does not.

So we could look at this as a ranked list of potential senior staff actions, but to do so usefully the ranking needs to be determined by numbers that account for all the costs and benefits to the wider world and only exclude senior staff time (i.e. use $1m minus opportunity cost to applicant minus salary minus financial cost of hiring process per successful hire etc.), not this gross $1m number.
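As a sketch of the netting I have in mind (every figure below is invented for illustration):

```python
gross_value = 1_000_000  # the ex post 'value of the hire' figure, as reported
applicant_etg = 150_000  # donations the applicant forgoes by not earning to give
salary = 60_000          # salary and overheads for the role
hiring_cost = 40_000     # financial cost of the hiring round per successful hire

# The number that should feed a ranked list of senior staff actions:
# everything netted out except senior staff time itself.
net_value = gross_value - applicant_etg - salary - hiring_cost
print(net_value)  # 750,000: compare this, not the gross $1m, to alternatives
```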

Similarly, potential applicants to EA orgs making a ranked list of their options should include all costs and benefits that aren't tied to them, i.e. they should subtract senior staff time from the $1m number, if that hasn't been done for them already. Which is what I've in fact been recommending people do. But my experience is that people who haven't been directly involved in hiring during their career radically underestimate the cost of hiring, and many applicants fall into that camp, so for them to take account of this is not trivial. I mean, it's not trivial for the orgs either, but I do think it is relatively easier.

> I also expect the orgs partially take account of the opportunity costs of staff time when reporting the dollar value figures, though it's hard to be sure. This is why next year we'll focus on in-depth interviews to better understand the figures rather than repeating the survey.

Given this conversation, I'm pretty skeptical of that? My experience with talking to EA org leaders is that if I beat this horse pretty hard they back down on inflated numbers or add caveats, but otherwise they say things that sound very much like 'However, it would still be consistent with the idea that marginal hires are valuable and can have more impact by working at the org than by earning to give, since each generates $1m.', a statement it sounds like we now both agree is false or at least apt to mislead people who could earn to give for slightly less than $1m into thinking they should switch when they shouldn't.

For the benefit of third parties reading this thread, I have had conversations in this area with Ben and other org leaders in the past, and I actually think Ben thinks about these issues more clearly than almost anybody else I've spoken to. So the above paragraph should not be read as a criticism of him personally, rather a statement that 'if you can slip up and get this wrong, everybody else is definitely getting this wrong, and I speculate that you might be projecting a bit when you state they are getting it right'. The only thing Ben personally has done here is been kind enough to put the model in writing where I can more-easily poke holes in it.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T21:37:58.696Z · score: 4 (4 votes) · EA · GW

In that simple model, it appears to me that marginal hires are worth $1m minus the counterfactual use of senior staff time (hypothetically, and relevantly, suppose all the possible hires for a position decided to earn to give instead overnight. It would not be the case that the world was $1m worse off, because now senior staff are freed up to do other things). If there are in fact even more important things for senior management to focus on, this would be a negative number.

More realistically, we could assume orgs are prioritising marginal hiring correctly relative to their other activities (not obvious to me, but a reasonable outside view without delving into the org particulars I think), in which case the value of a marginal additional hire would simply be ~0.

So again, I appreciate the attempt to boil down the area of disagreement to the essentials, and even very largely agree with the essential models and descriptions you and Rob are using as they happen to match my experience working and recruiting for a different talent-constrained organisation, but feel like this kind of response is missing the point I'm trying to make, namely that these ex post numbers, even if accurate on their own terms, are not particularly relevant for comparison of options.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T07:41:00.796Z · score: 4 (4 votes) · EA · GW

Since most of the EA orgs in question are heavily constrained in hiring by whatever level of growth they can manage or feel comfortable with (that’s kinda the whole point of the OP, right?), it would not generally be my assumption that additional funds would be used for extra hiring compared to the counterfactual. I grant that if that is the assumption, these effects seem to cancel.

Other ways not listed in your last paragraph:

- earning to give to allow orgs to raise salaries

- earning to give to fund regranting

- earning to give to fund things like targeted advertising (you may have intended to cover this category in 'capital goods', I'm not sure)

These things are much closer to my model of where extra funding to at least CEA and 80k in at least the past 18 months has gone, not into additional hiring.

Comment by agb on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-12T23:06:33.102Z · score: 14 (10 votes) · EA · GW

Thanks for making the ex ante versus ex post distinction. But it makes me confused about the penultimate paragraph; if I am offered a job at an org and am comparing it to earning to give, shouldn't I be using the (currently unpublicised) ex ante numbers, not these ex post numbers?

The risks of bad personal fit, costs of senior staff time, costs of fast hiring in general, and time taken to build trust are all still in the future at that point, and don't apply to the earning to give alternative. As far as I can tell, the only cost which has already been sunk at that point is the cost of evaluating me as a candidate. In my experience of working on the recruiting side of a non-EA org, this is far smaller than the other costs outlined, in particular the costs of training and building trust. I'm curious if the EA orgs feel differently.

In general though, I don’t think attempting to save these numbers by pointing out how hiring is subtly much more expensive than you would think interacts much with my objection to these numbers, since each additional reason you give me for why hiring is wildly expensive for EA orgs is yet another reason to prefer earning to give, precisely because it does not impose those costs! All these reasons simply mean the numbers should be lower than what you get from an ex post phrasing of the question, at least insofar as they are being used for what appears to be their intended purpose, namely comparison of options.

Comment by agb on CEA on community building, representativeness, and the EA Summit · 2018-08-19T22:08:07.737Z · score: 11 (11 votes) · EA · GW

(Speaking as a member of the panel, but not in any way as a representative of CEA).

It’s worth noting the panel hasn’t been consulted on anything in the last 12 months. I don’t think there’s anything necessarily wrong with this, especially since it was set up partly in response to the Intentional Insights affair and AFAIK there has been no similar event in that time, but I have a vague feeling that someone reading Julia’s posts would think it was more common, which I guess was part of the ‘question behind your question’, if that makes sense :)

Comment by agb on Should there be an EA crowdfunding platform? · 2018-05-03T12:49:25.835Z · score: 2 (2 votes) · EA · GW

> Some way of distributing money to risky ventures, including fundraising, in global poverty and animal welfare should probably exist.

I think it's pretty reasonable if CEA doesn't want to do this because (a) they take a longtermist view and (b) they have limited staff capacity so aren't willing to divert many resources from (a) to anything else. In fact, given CEA's stated views it would be a bit strange if they acted otherwise. I know less about Nick, but I'm guessing the story there is similar.

https://www.centreforeffectivealtruism.org/ceas-current-thinking/

I have a limited sense for what to do about this problem, and I don't know if the solution in the OP is actually a good idea, but recognising the disconnect between what people want and what we have is a start.

I may write more about this in the near future.

Comment by agb on Comparative advantage in the talent market · 2018-04-16T02:01:56.053Z · score: 1 (1 votes) · EA · GW

I agree with your last paragraph, but indeed think that you are being unreasonably idealistic :)

Comment by agb on Comparative advantage in the talent market · 2018-04-16T01:58:51.694Z · score: 3 (3 votes) · EA · GW

I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that the cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may be justifying, developing, and improving the accuracy of those exact beliefs.

Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At which point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.

I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.

Comment by agb on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-27T17:39:24.630Z · score: 0 (0 votes) · EA · GW

To chime in as someone who has very recently spent a lot of time in both London and SF, a 1.8:1 ratio (as in $1.8y is about the same as £y) is very roughly what I would have said for living costs between that pair, though living circumstances will vary significantly.

Pound to dollar exchange rates have moved a ton in the last few years, whereas I don't think local salaries or costs of living have moved nearly as much, so I expect that 1.8:1 heuristic to be more stable/useful than trying to do the same comparison including a currency conversion (depending on what point in the last few years you picked/moved, that ratio would imply anywhere between a 1.05x increase and a 1.55x increase).
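To spell out where that range comes from (a sketch; the exchange rates below are the rough endpoints implied by my 1.05x-1.55x range, not precise historical quotes):

```python
living_cost_ratio = 1.8  # $1.8 in SF buys roughly what £1 buys in London

for gbp_usd in (1.16, 1.35, 1.71):  # assumed low / middle / high over the period
    # Convert a £y London salary at the market rate, then compare with the
    # $1.8y needed for the same standard of living in SF.
    implied_change = living_cost_ratio / gbp_usd
    print(f"GBP/USD {gbp_usd}: comparison 'shows' a {implied_change:.2f}x increase")
# 1.8 / 1.71 = ~1.05x at one extreme; 1.8 / 1.16 = ~1.55x at the other.
```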

Comment by agb on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T06:55:06.693Z · score: 3 (3 votes) · EA · GW

(Disclaimer: I am Denise’s partner, have discussed this with her before, and so it’s unsurprising if I naturally interpreted her comment differently.)

Enthusiasm != consent. I'm not sure where enthusiasm made it into your charitable reading.

Denise’s comment was deliberately non-gendered, and we would both guess (though without data) that once you move to the fuzzy ‘insufficient evidence of consent’ section of the spectrum there will be lots of women doing this, possibly even accounting for the majority of such cases in some environments.

Comment by agb on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T15:34:39.990Z · score: 7 (7 votes) · EA · GW

So as a general principle, it's true that discussion of an issue filters out (underrepresents) people who find or have found the discussion itself unpleasant*. In this particular case I think that somewhat cuts both ways, since these discussions as they take place in wider society often aren't very pleasant in general, for either side. See this comic.

To put it more plainly, I could easily name a lot of people who will strongly agree with this post but won't comment for fear of criticism and/or backlash. Like you I don't think there is an easy fix for this.

*Ironically, this is part of what Kelly is driving at when she says that championing free speech can sometimes inhibit it.

Comment by agb on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T15:14:11.022Z · score: 0 (2 votes) · EA · GW

> Either way, the effect is I really haven't felt like I've had too many discussions in EA about diversity. It's not like it's my favourite topic or anything.

It's extremely hard to generalize here because different geographies have such different stories to tell, but my personal take is that the level of (public) discussion about diversity within EA has dipped somewhat over time.

When I wrote the Pandora's Box post 2.5 years ago, I remember being sincerely worried that low-quality discussion of the issue would swamp a lot of good things that EA was accomplishing, and I wanted to build some consensus before that got out of hand. I can't really imagine feeling that way now.

Comment by agb on Effective altruism is self-recommending · 2017-05-07T11:42:00.844Z · score: 2 (2 votes) · EA · GW

I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.

Comment by agb on Effective altruism is self-recommending · 2017-05-06T13:55:28.026Z · score: 1 (1 votes) · EA · GW

Thanks for digging up those examples.

> EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

> Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

> It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

This is a true dynamic, but to be specific about one of the examples I had in mind: A little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

Comment by agb on Update on Effective Altruism Funds · 2017-04-30T12:22:24.700Z · score: 0 (0 votes) · EA · GW

(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don't feel like I have a satisfactory handle on it, but also feel I should reply at this point.)

> this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less or feel less strongly about it equally with the opinions of others.

This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not 'the funds have not been well-received', but rather 'the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way'. Maybe that is what you were aiming for originally and I just didn't read it that way.

> But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

True. That post was only a couple of months before this one though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I'm curious whether you personally perceive a change (drop) in popularity in your circles?

> My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

This story sounds plausibly true. It's a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so turning it over in my head for the past few days I'm still not sure how much weight to put on it.

Comment by agb on Update on Effective Altruism Funds · 2017-04-30T10:48:40.324Z · score: 1 (1 votes) · EA · GW

That seems like a good use of the upvote function, and I'm glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:

  1. I don't think it's a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track 'I agree with this' extremely strongly, even though lots of people (myself included) agree that ideally they shouldn't.

  2. In the hypothetical where there's lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:

> Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea.

*Not sure if 'forgot' is quite the right word here, just mirroring your description of my comment as 'reminding' you.

Comment by agb on Effective altruism is self-recommending · 2017-04-30T10:28:51.835Z · score: 3 (3 votes) · EA · GW

> The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations.

If this is basically saying 'we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas', then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as 'EAs' represent a wide range of commitment levels.

One reason for this is that depending which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they'll see wildly different distributions of commitment and similarly differing representation of various cause areas.

With that said, I'm not totally sure if that's the point you're making because my personal experience in London is that we've been going out of our way to make the above points for a while; what's an example of marketing which you think works to maintain a homogeneous public image?

Comment by agb on Effective altruism is self-recommending · 2017-04-28T19:04:56.546Z · score: 9 (9 votes) · EA · GW

Trying to square this circle, because I think these observations are pretty readily reconcilable. My second-hand vague recollections from speaking to people at the time are:

  1. The programming had a moderate slant towards AI risk because we got Elon.
  2. The participants were generally very bullish on AI risk and other far-future causes.
  3. The 'Global poverty is a rounding error' crowd was a disproportionately-present minority.

Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.

Further, (2) and (3) aren't surprising if you think about the way San Francisco EAs are drawn differently to EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.

Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn't have the same issues, or at least it didn't to the best of my knowledge as a participant who cared very little for AI risk at the time. I can't speak to EAG Melbourne but I'd guess the same was true.

While (2) and (3) aren't really CEA's fault, there's a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I'm moderately sympathetic to this argument but it's very easy to make this kind of point with hindsight; I don't know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn't hear or read anyone complaining about this at EAG 2016 in this way, so maybe we did?

Comment by agb on Effective altruism is self-recommending · 2017-04-24T08:01:24.983Z · score: 12 (12 votes) · EA · GW

(Disclosure, I read this post, thought it was very thorough, and encouraged Ben to post it here.)

> It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?

Just to balance this, I actually liked the Ponzi scheme section. I think that making the claim 'aspects of EA have Ponzi-like elements and this is a problem' without carefully explaining what a Ponzi scheme is and without explaining that Ponzi-schemes don't necessarily require people with bad intentions would have potential to be much more inflammatory. As written, this piece struck me as fairly measured.

Also, since the claims are aimed at a potentially-flawed general approach/mindset rather than being specific to current actions, zeroing in too much might be net counterproductive in this case; there's some balance to strike here.

Comment by agb on Update on Effective Altruism Funds · 2017-04-22T23:20:50.546Z · score: 2 (2 votes) · EA · GW

> I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect...

In the OP Kerry wrote:

> The donation amounts we’ve received so far are greater than we expected, especially given that donations typically decrease early in the year after ramping up towards the end of the year.

CEA's original expectation of donations could just have been wrong, of course. But I don't see a failure of logic here.

Re. your last paragraph, Kerry can confirm or deny, but I think he's referring to the fact that a bunch of people were surprised to see GWWC start recommending the EA funds and close down the GWWC trust recently (that's the example I know of; I'm not sure if there were other cases) when CEA hadn't actually officially given the funds a 'green light' yet. So he's not referring to the same set of criticisms you are talking about. I think 'confusion at GWWC's endorsement of EA funds' is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.

*I got the e-mail from GWWC announcing this on 11th April. I got CEA's March 2017 update saying they'd decided to continue with the funds later on the same day, but I think that goes to a much narrower list and in the interim I was confused and was going to ask someone about it. Checking now it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don't read that.

https://www.centreforeffectivealtruism.org/blog/cea-update-march-2017/

Comment by agb on Update on Effective Altruism Funds · 2017-04-22T22:51:44.428Z · score: 6 (6 votes) · EA · GW

So I probably disagree with some of your bullet points, but unless I'm missing something I don't think they can be the crux of our disagreement here, so for the sake of argument let's suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.

...I still don't see how to get from here to (for example) 'The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time'. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they 'should', why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry's post introducing EA Ventures has just 16 upvotes*.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.

FWIW I'm very open to suggestions on how we could settle this question more definitively. I expect CEA pushing ahead with the funds if the community as a whole really is net-negative on them would indeed be a mistake. I don't have any great ideas at the moment though.

*http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/

Comment by agb on Update on Effective Altruism Funds · 2017-04-22T12:51:34.190Z · score: 15 (15 votes) · EA · GW

> Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community is as-a-whole positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA funds, that's fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-16T12:40:24.973Z · score: 2 (2 votes) · EA · GW

Was this intended as a response to my comment? I didn't bring up the $70k figure or the $200k figure. I did take up one part of your argument (the 'minimum standards' part) and try to explain why I don't think using a $2k - $5k minimum as equivalent to the median Indian actually makes sense.

> Advantage of the "bailey": makes people feel extremely guilty and more likely to donate money or sign the pledge.

FWIW I doubt this is actually true. I have generally strongly preferred to understate people's relative income rather than overstate it when 'selling' the pledge, because it shrinks the inferential distance.

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-08T16:51:58.488Z · score: 3 (3 votes) · EA · GW

If minimum standards rise to $90,000 and I'm earning $100,000, I would argue they do probably affect me substantially and my original premise of 'minimum standards that basically don't affect me' no longer holds. For example, I might start putting substantial money aside to make sure I can meet the minimum if I lose my job, which will eat into my standard of living. That's why I used numbers where I think that statement does actually hold ($10,000 minimum versus $100,000 income).

> that's a nice fantasy but in reality the way the west works is if you are a single young male and you have less than enough money to afford rent, there is no safety net in many places, especially the USA and the UK. You are thrown into the homelessness trap.

Sure, this is why I said 'hypothetically' and 'in 50 years'. I'm not sure your above claim is true in the UK even as of today in any case.

(UK benefits are a bit of a maze so I'm wary of saying anything too general, but running through one website (www.entitledto.co.uk) and trying to select answers that correspond to '22 year old single healthy male living in my area with no source of income', I get an entitlement of £8,300 per year, most of which (around £5,200) is meant to cover the cost of shared housing. Eyeballing that number I think £100pw should indeed be enough to get a room in a shared property at the low end of the housing market around here.

I think it is also true that a 21 year old wouldn't get that entitlement because they are supposed to live with their parents, but there are meant to be 'protections' in place where that isn't possible for whatever reason. I haven't dug further than that.)

Comment by agb on How accurately does anyone know the global distribution of income? · 2017-04-06T19:21:14.983Z · score: 9 (9 votes) · EA · GW

I think your last paragraph is plausibly true and relevant, but this is a common argument and it has common rebuttals, one of which I'm going to try and lay out here.

> However, $700/year (= $1.91/day, = €1.80/day, = £1.53/day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter. Clearly, the median person in India is better off than a dead person.

The basics of survival are food, water, accommodation and medical care. Medical care is normally provided by the state for the poorest in the West, so let's set that to one side for a moment. For the rest, we set a lot of minimum standards on what is available to buy; you can't get rice below some minimum safety standard even if that very low-quality rice is more analogous to the rice eaten by a poor Indian person, and I would guess virtually all (maybe actually all?) dwellings in the US have running water, etc.

This presents difficult problems for making these comparisons, and I think it's part of what Rob is talking about in his point (2). One method that comes to mind is to take your median Indian and find a rich Indian who is 10x richer, then work out how that person compares to poor Americans since (hopefully) the goods they buy have significantly more overlap. Then you might be able to stitch your income distributions together and say something like [poor Indian] = [Rich Indian] / 10 = [Poor American] / 10 = [Rich American] / 100. I have some memory that this is what some of the researchers building these distributions actually do but I can't recall the details offhand; maybe someone more familiar can fill in the blanks.
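Written out as a chain of ratios (my sketch of that method; the 10x steps are purely illustrative):

$$\frac{\text{rich American}}{\text{median Indian}} = \underbrace{\frac{\text{rich American}}{\text{poor American}}}_{\approx 10} \times \underbrace{\frac{\text{poor American}}{\text{rich Indian}}}_{\approx 1} \times \underbrace{\frac{\text{rich Indian}}{\text{median Indian}}}_{\approx 10} \approx 100$$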

> A realistic minimum amount of money to not die in the west is probably $2000-$5000/year, again without gifts or handouts, implying that to be 100 times richer than the average Indian, you have to be earning at least $200,000-$500,000 net of tax (or at least net of that portion of tax which isn't spent on things that benefit you - which at that level is almost all of it, unless you are somehow getting huge amounts of government money spent on you in particular).

Building on the above, hypothetically suppose over the next 50 years the West continues on its current trend of getting richer and putting more minimum standards in place; the minimum to survive in the West is now $10,000 per year and the now-much-richer countries have a safety net that enables everyone to reach this. However, in India nothing happens.

Is it now true that I need at least $1,000,000 per year to be 100x richer than the median Indian? That seems perverse. Supposing my income started at $100,000 and stayed constant in real terms throughout, why do increases in minimum standards that basically don't affect me (I was already buying higher-than-minimum-quality things) and don't at all affect the median Indian make me much poorer relative to the median Indian? As a result I think this particular section 'proves too much'.

Comment by agb on Concrete project lists · 2017-03-26T10:04:11.980Z · score: 12 (12 votes) · EA · GW

I'm sympathetic to this view, though I think the EA funds have some EA-Ventures-like properties; charities in each of the fund areas presumably can pitch themselves to the people running the funds if they so choose.

One difference that has been pointed out to me in the past is that for (e.g.) EA Ventures you have to put a lot of up-front work into your actual proposal. That's time-consuming and costly if you don't get anything out of it. That's somewhat different to handing some trustworthy EA an unconditional income and saying 'I trust your commitment and your judgment, go and do whatever seems most worthwhile for 6/12/24 months'. It's plausible to me that the latter involves less work on both donor and recipient side for some (small) set of potential recipients.

With that all said, better communication of credible proposals still feels like the relatively higher priority to me.

Comment by agb on Advisory panel at CEA · 2017-03-09T20:11:43.060Z · score: 2 (2 votes) · EA · GW

Hi Alasdair

> Perhaps to mitigate my meandering can the members of the council give one example of something the CEA has done in the last 12 months they are willing to publicly disagree with?

Well, I'm far from sold on the principles and panel being a good idea in the first place. But everything in the linked comment is low confidence, some of it doesn't apply given the actual implementation, and certainly it's not obvious to me that it's a bad idea (i.e. I have a small positive but extremely uncertain EV).

For something that happened that I more robustly disagree with, a lot of the marketing around EA Global last year concerned me. I didn't go, so I only heard about it secondhand, and so I didn't feel best-placed to raise it directly, but from a distance I think pretty much everything Kit said in this thread re. marketing was on point.

With that said I think there is definitely some version of what you are saying that I would agree with; I certainly would consider myself very much an EA 'insider', albeit one who has no particular personal interest in CEA itself doing well except insofar as it helps the community do well. I'm not sure what the best way for CEA (or EA in general for that matter; this isn't just their responsibility) to hear from people who are genuinely external or peripheral to EA is, except that I think a small panel of people is probably not it.

Comment by agb on Donating To High-Risk High-Reward Charities · 2017-02-14T20:28:10.691Z · score: 3 (3 votes) · EA · GW

For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely-or-entirely outsourced their donation decision to Givewell. Givewell has also written about this in some depth, back in 2011 and probably more recently as well.

http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/

Key quote:

"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."

Comment by agb on Introducing the EA Funds · 2017-02-13T19:13:04.390Z · score: 10 (10 votes) · EA · GW

Just to give a perspective from the 'other' (donor) side:

I was excited about EA Ventures, partly because of the experimental value (both as an experiment in itself, and its effect on encouraging other people to experiment). I also agreed with the decision to cease operation when it did, and I think Kerry's read of the situation basically concurs with my own experience.

Also, as Kerry said, I think a large part of what happened here was that "the best projects are often able to raise money on their own without an intermediary to help them". At the time EA Ventures was running, EA was (and may still be) a close-enough community that I was finding out about several of the opportunities EAV was presenting via my own network, without EAV's help. That's not at all to say EAV was providing zero value in those cases since they were also filtering/evaluating, but it meant that the most promising charity (in my opinion) that I heard about via EAV was something I was already funding and keen to continue to fund up to RFMF/percentage-of-budget constraints.

Comment by agb on CEA is Fundraising! (Winter 2016) · 2016-12-28T20:58:26.170Z · score: 2 (2 votes) · EA · GW

Second this. I'm guessing part of what's going on in the $3.1 versus £1.8 is to do with reserves, but it would be useful to get confirmation. Also, the Google Sheet linked doesn't have numbers that I can line up with anything else in the blog post, I think because it has numbers for CEA UK only and ignores CEA US (but that's speculation).

Comment by agb on We Must Reassess What Makes a Charity Effective · 2016-12-24T13:42:00.661Z · score: 17 (17 votes) · EA · GW

Thank you for the top level post. It's much easier to engage here than on the various comment threads.

I have some clarifying questions about your claims, and in particular I would like to have a better understanding of where and why you disagree with Givewell's/AMF's read of the situation. You say that they are simply ignoring these issues, implying that they would agree with you if they paid attention. I don't think this is true, as detailed on a point-by-point basis below.

"However hard they work, they can’t make enough nets to combat the malaria-carrying mosquito. Enter vociferous Hollywood movie star who rallies the masses and goads Western governments to collect and send 100,000 mosquito nets to the affected region, at the cost of a million dollars. The nets arrive, the nets are distributed, and a ‘good’ deed is done."

It seems the implied premise here is that 100,000 nets is more than that region actually needed? For example if the region needs 200,000 nets per year, only currently has 50,000 being manufactured per year, and some foreign donors distribute 100,000 nets per year, then I would have thought there was a lot of room for the local factory. This goes double if the donated nets are targeted to the poorest areas, while the factory presumably will prefer to sell to the richer areas.

Far from ignoring this issue, GiveWell pays a lot of attention to how many more nets affected regions can usefully absorb in its analysis of AMF's Room For More Funding, and concludes that there is huge scope for more nets which AMF is unlikely to get close to filling any time soon; see the quote below. If you disagree with them on this concrete level, it would be worth saying why.

I agree that if we get anywhere close to filling local net gaps, it's likely not worth displacing local capacity, or at the very least we should seriously weigh the downsides of doing so. Though unless I'm missing something, the most obvious solution would be for AMF (or whoever) to buy the nets locally; the origin of the donations isn't actually the problem here, just where the nets are manufactured.

"Dr. Renshaw roughly estimated that there will be a funding gap for 100 million nets in 2018-2020. She estimated that the gap in Nigeria would account for a quarter of the total gap, or about 25 million nets. This assumes that funders other than the Global Fund (including AMF) will maintain their current level of support for LLINs in this period. Dr. Renshaw believed that less funding from the Global Fund would be available for LLINs because of changes in the way it is structuring its funding."

I'm not sure what you're trying to get at with your planners-versus-searchers quote. AMF does a lot of things that sound like a 'searcher' in your dichotomy. It looks for local distribution partners whose methods vary by country, and follows up to check whether the nets are actually being used. Nor does it pick countries and areas at random, but rather on the basis of its assessment of need, and in at least one major case in response to a request. Can you say more about why you consider this a 'planning' approach?

"The NMCP has been working with AMF for a relatively short period of time. Their working relationship has proceeded relatively smoothly thus far, especially since AMF has shown willingness to negotiate and compromise on some areas to conform with the country's specific scenario."

"AMF told us that it has been receiving more funding requests since it started funding larger distributions, and notes that its largest commitment so far—10.6 million LLINs in Uganda in 2017—was made in response to an in-bound request."

Finally, reading your first two criticisms, I was inclined to suggest GiveDirectly as something you might be willing to support. So I read your third section with interest, but I don't think I understand it.

"[Give Directly] simply [does] the work for a community, instead of building capacity and increasing autonomy and dependence. This is great for the organization, since it ensures that the community will need aid forever, by destroying the infrastructure that the community previously used to make a living. If you get rid of the need for structures which produce food, or organizations which provide jobs, they will go out of business, so that the community will be unable to return to them when the aid money eventually dries up."

I'm very confused by this section. For instance, by what mechanism do you propose GiveDirectly gets 'rid of the need for structures which produce food'? Unsurprisingly, giving people extra cash increases the amount they spend on food (among many other things):

"Treatment households consumed about $51 more per month (95% CI: $32 to $70) than control households. About half of this additional consumption was on food. This additional consumption also included increased spending on social expenditures and various other expenditures."

Comment by agb on Thoughts on the "Meta Trap" · 2016-12-23T06:56:46.960Z · score: 0 (0 votes) · EA · GW

Sure, I think we're on the same page here.

I'm hoping/planning to plug both of those holes (the lack of org-specific criticism, and the uncompiled general arguments in favour) in the next few weeks, so did want to double-check that there wasn't a canonical piece I was missing.

Comment by agb on Thoughts on the "Meta Trap" · 2016-12-22T18:18:54.909Z · score: 0 (0 votes) · EA · GW

I think the arguments in favor of meta are intuitive, but not easy to find. For one thing, the orgs' posts tend to be org-specific (unsurprisingly) rather than a general defense of meta work. In fact, to the best of my knowledge the best general arguments have never been made on the forum at the top level, because it's sort-of-assumed that everybody knows them. So while you're saying Peter's post is the only such post you could find, that's still more than the reverse (and with your post, it's now 2 - 0).

At the comment level it's easy to find plenty of examples of people making anti-meta arguments.

Comment by agb on Thoughts on the "Meta Trap" · 2016-12-22T07:15:32.122Z · score: 2 (2 votes) · EA · GW

"I agree, I think it's just disproportionately the case that donors to meta work are not taking into account these considerations."

What makes you think this? I found this post interesting, but not new; it's all stuff I've thought about quite hard before. I would have thought I was roughly representative of meta donors here (I certainly know people who have thought harder), though I'd be happy for other such donors to contradict me.

Comment by agb on A Different Take on President Trump · 2016-12-18T17:01:07.001Z · score: 1 (1 votes) · EA · GW

I just wanted to reply to deal with one factual claim:

"A better approach would be to try to find crime by ethnicity, crime by religion, or crime by immigrant nationality. Unfortunately, I can’t find those exact stats (probably because they would be incendiary)."

LMGTFY

"We have stats from some countries for crime by immigrant nationality. Muslim countries top these charts."

Um, no? Here's from the link above:

Poland: 4742

Romania: 3952

Lithuania: 2561

Ireland: 2503

Jamaica: 2323

India: 1902

Somalia: 1384

France: 1384

Italy: 1357

Portugal: 1202

Not a lot of Muslim countries there; in particular, Pakistan and Bangladesh are notably absent. Yet here's the top 10 countries by overall population of foreign nationals in London, from Wikipedia (a rough per-capita comparison of the two lists is sketched after this second list):

India: 262,247

Poland: 158,300

Ireland: 129,807

Nigeria: 114,718

Pakistan: 112,457

Bangladesh: 109,948

Jamaica: 87,467

Sri Lanka: 84,542

France: 66,654

Somalia: 65,333
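To make the comparison concrete, here is a rough sketch (mine, not from the original comment) of how one might line the two lists up per capita. I'm assuming the first list gives counts of recorded offences by nationality and the second gives London's foreign-national populations; both assumptions, and whether the two lists even cover the same geography, should be checked against the linked sources before leaning on the output.

```python
# Offences by nationality (first list above) -- units assumed, see caveat.
offences = {
    "Poland": 4742, "Romania": 3952, "Lithuania": 2561, "Ireland": 2503,
    "Jamaica": 2323, "India": 1902, "Somalia": 1384, "France": 1384,
    "Italy": 1357, "Portugal": 1202,
}

# Foreign-national populations in London (Wikipedia list above).
population = {
    "India": 262_247, "Poland": 158_300, "Ireland": 129_807,
    "Nigeria": 114_718, "Pakistan": 112_457, "Bangladesh": 109_948,
    "Jamaica": 87_467, "Sri Lanka": 84_542, "France": 66_654,
    "Somalia": 65_333,
}

# Offences per 1,000 residents, for nationalities appearing in both top tens.
# Caveat: if the offence counts are national rather than London-only, these
# rates are inflated and only the ranking is meaningful.
for country in sorted(offences.keys() & population.keys(),
                      key=lambda c: -offences[c] / population[c]):
    rate = 1000 * offences[country] / population[country]
    print(f"{country}: {rate:.1f} per 1,000")

# Pakistan and Bangladesh appear in the population top ten but not the
# offence top ten, which is the commenter's point.
```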

And in another entertaining example of MSM bias against immigrants, note how the Mail describes one in four London crimes being committed by foreign nationals as an 'immigrant crimewave', even though over 35% of London's population is foreign-born. Also, even that claim was originally exaggerated; see the correction at the bottom.

That's likely the true reason you were struggling to find these stats, by the way: incendiary stats about immigrants are easy to find, while the more prosaic ones, showing that immigrants are less likely to commit crime than native-born people, tend to be buried in government reports (until an outlet like the Mail decides to report them and deliberately mislead people about their relevance).

Comment by agb on A Different Take on President Trump · 2016-12-10T16:23:46.724Z · score: 5 (5 votes) · EA · GW

"Even if concerns about cultural clashes with Muslims did not motivate a large percent of Leave voters, it could still be the case that those concerns did motivate many of the influencers behind Leave."

I certainly grant that this influence-via-influencers argument seems like a more plausible causal mechanism, though it also seems difficult to falsify, so I'm not sure how much weight to put on it.

"Of course, if you ask people in polls, they are going to under-report their concerns about mass low-skilled Muslim immigration because they don't want to be seen as racist."

Under-report? Sure. But the 'shy Tory'/'shy Trump' effects are generally only on the order of a few percentage points, while for the world to really look the way you say it looks, people would have to be under-reporting by huge margins. What reason do you have for thinking that? Is it a falsifiable one? I ask because it seems kinda unreasonable for you to say 'people are highly concerned about Muslim immigration in particular', for me to say 'no they aren't, see survey', and for you to say 'ah well, obviously huge numbers of people are really concerned, they just don't want to admit it'. If direct survey data doesn't convince you otherwise, what would?

"Since many Western countries are totalitarian states full of thought policing, and critics of Muslim immigration can result in visits by police, then it's no surprise that opinion polls are failing to capture how populations actually feel."

You just gave many examples of high-profile politicians criticising Muslim immigration. Many newspaper columnists criticise it daily (remember, the mainstream newspapers here are right-wing/anti-immigration). Those people don't get arrested. So I don't know exactly what that man did to merit a police visit, but it seems clear that either (a) it was more serious/threatening than mere criticism, or (b) that particular police force is particularly over-zealous. Without more details it's hard to judge. But either way, it's not something the general population has to worry about or would worry about.

Incidentally, the article you link to here is a great example of why I don't consider Breitbart a reliable source. It states* that 1,000 refugees were being relocated to a tiny island of 6,500 people, but if you check its source for that number, you discover that the refugees are actually being spread across the whole of west central Scotland.

*"The tiny Isle of Bute in the Firth of Clyde, which had a total population of just 6,498 in 2011, is expected to take in around 1,000 Syrian migrants"

"More families are set to arrive on Bute over the next few weeks, which will bring the total to 28 adults and 31 children, topping up the small 6,300-strong population. They are among the first of about 1,000 refugees who are to be re-located around the west central area of Scotland after the British Government agreed to take a total of 20,000 Syrian refugees by 2020."

"Comparison to American crime rates is confounded because America is a highly multiethnic society of groups with very different rates of criminality. Highly violent urban populations skew US crime statistics (which is rarely taken into account in the debates about gun control). If your reference point for a peaceful society is US crime rates, then your standards are too low."

All agreed, I would be horrified if Europe reached American levels of violent crime. But that makes it sound very strange to European ears when Americans talk about 'Law and Order breaking down'. If that's true for us, it's definitely true for you.

But I did also point out (and give sources) that violent crime is at historically low levels within Britain itself, so I can also use the reference point of 'Britain 20 years ago' and get much the same conclusion, which indeed seems a lot more reasonable.

"Note that this kind of civil unrest would not show up in homicide statistics, which suggests that it’s the wrong metric."

Agreed. I only used it because I expected you to complain about massive under-reporting if I used anything else; it's hard to massively under-report murders. What metric would you suggest?

"While I am glad to hear that you don’t feel in danger in Tower Hamlets, the environment in the UK looks pretty bad. Sharia parades, Rotherham, Muslim patrols, and scuffles with EDL and Britain First: it’s too much dirt to explain away."

Not really, it's quite easy to explain away. I'm going to mirror your 'mainstream media' argument back at you, I'm afraid: the mainstream media is right-wing, wants to eliminate those 'precious, precious leftist votes' and bolster support for nationalist politics, and does this by a mixture of making things up, ignoring examples to the contrary, and blowing fairly minor events out of all proportion. There are plenty of examples where the general public's beliefs about the number of immigrants, their rates of criminality, their rates of worklessness, etc. are completely disjoint from reality, and always in the direction that makes the immigrants look worse (I can give many examples to this effect if required, but I'm in a bit of a rush so I won't do it right now). That's what a concerted brainwashing campaign over many years can achieve.

The people most immune to such a campaign are the people actually living on the ground since they can confirm or deny the reports directly, and they indeed tend to be much less concerned than the general population.

Comment by agb on A Different Take on President Trump · 2016-12-09T20:57:07.401Z · score: 17 (12 votes) · EA · GW

Like Michael, I'm sorry you're getting downvoted; this is pretty detailed stuff that is very helpful for understanding your views.

However, I strongly disagree with your description of Europe at the object level. I'm going to focus on the UK because I have the most local knowledge there and it's specifically mentioned in some of your points (Rotherham, Brexit). So to be specific:

(1) As you appear to acknowledge, concerns about immigration in the UK have skyrocketed over the past 20 or so years. However, this immigration has mostly not been from Muslim countries; rather, it's been from EU countries, see link and link. These immigrants from EU countries are overwhelmingly White and Christian. A particularly large number came from Poland. By contrast, most of the Muslims in the UK are ethnically from Pakistan or Bangladesh, where there have been small decreases in migration. That doesn't sound like a world where people are mostly concerned about immigration because of a clash of cultures, and specifically a culture clash with Muslims. The fact that many people voted for Brexit on grounds of immigration further supports a different interpretation: blocking EU migration will mostly block White Christian workers from Eastern Europe and do next to nothing to block further immigration from Pakistan and Bangladesh, which is a very odd policy to vote for if your concern is specifically Muslim immigration.

And we can also see evidence of this more directly: this survey shows that people feel very similarly about migrants from Eastern Europe and migrants from 'Muslim countries like Pakistan' (I grant they feel slightly worse, as in a few percentage points worse, about the latter); namely, they feel positively if they have work and negatively if they don't. Again, this seems highly inconsistent with a massive culture clash along religious grounds; there should be sharp disparities in how people feel about the two groups.

In short, I think the view that UK concern about immigration and voting for Brexit is primarily driven by worries about the inability of Muslim immigrants to integrate is thoroughly contradicted by the available data and by people's own statements on who and what they are concerned about.

(2)

"Mass migration continues because politicians want it, and the media is in bed with them (consider which political parties the media supports, and which political parties the migrants will vote for)."

That might be true in the US, but the mainstream media in the UK, especially the newspapers, are generally pro-Conservative (i.e. right-wing) and anti-immigration. See chart, and be aware that the Mail and the Sun have by far the largest readerships.

(3) You talk about 'law and order breaking down' and elevated crime rates which aren't being talked about. In fact, survey data suggests violent crime is at historically low levels (search for "Trends in Crime Survey for England and Wales violence, year ending December 1981 to year ending June 2016").

I happen to live in one of the areas you would probably describe as a Muslim 'no-go' area, in Tower Hamlets, which is a part of London with a population of around 300,000, 45% Muslim (the highest percentage in London). So as a random aside I checked the crime rate for my area, focusing on the homicide rate since that's where I'd expect the official figures to be best. There's been one homicide in the last two years. If the area had the average American homicide rate of 4 per 100,000 per year, I'd expect 24 over that period. Apparently you should move.
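Spelling out that back-of-the-envelope expectation (my reconstruction of the arithmetic, using the figures above):

$$300{,}000 \times \frac{4}{100{,}000\ \text{per year}} \times 2\ \text{years} = 24\ \text{expected homicides}$$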

That's a very tongue-in-cheek comment of course, but quite a decent chunk of the London EA community is based in Tower Hamlets, and hopefully that helps you see why the 'country is falling apart' suggestions sound somewhat absurd to them; while British Muslims might be more violent than your average Briton (though controlling for income seems important here), they still appear to be safer than your average American.

(4) Finally, you refer to the 'migrant crisis' in a few places in your writing. This phrase is usually deployed to refer to the crisis resulting from the huge wave of immigration starting in about 2015, but presumably you appreciate that this can't be linked to Rotherham; the dates don't line up (Rotherham being 1997-2013). So I'm left wondering what you are referring to, and would appreciate further clarification on this point.

Comment by agb on A Different Take on President Trump · 2016-12-09T17:37:45.145Z · score: 6 (5 votes) · EA · GW

FWIW, I saw plenty of people on Facebook arguing that Clinton was more likely to cause a nuclear war, even from within my liberal bubble. Significantly more than I saw arguing the reverse in fact.

On the object-level point, I basically agree with kbog. Even if I think Trump is unlikely to cause a nuclear war on account of being chummy with Russia, it's easy to imagine him damaging the structures that constrain the president, and then the next populist won't be so chummy with Russia (AFAIK it's hardly a popular position within the US to be nice to Russia, though low confidence in that).