Posts

AGB's Shortform 2020-12-28T16:13:17.562Z
EA Diversity: Unpacking Pandora's Box 2015-02-01T00:40:05.862Z

Comments

Comment by AGB on EA Debate Championship & Lecture Series · 2021-04-11T11:46:25.378Z · EA · GW

I even explicitly said I am less familiar with BP as a debate format.

The fact that you are unfamiliar with the format, and yet are making a number of claims about it, is pretty much exactly my issue. Lack of familiarity is an anti-excuse for overconfidence.

The OP is about an event conducted in BP. Any future events will presumably also be conducted in BP. Information about other formats is only relevant to the extent that they provide information about BP. 

I can understand not realising how large the differences between formats are initially, and so assuming information from other formats has strong relevance at first, which is why I was sympathetic to your original comment, but a bunch of people have pointed this out by now.

I expect substantiated criticisms of BP as a truth-seeking device (of which there are many!) to look more like the stuff that Ben Pace is saying here, and less like the things you are writing. In brief, I think the actual biggest issues are:

  1. 15-minute prep makes for a better game but for very evidence-light arguments.
  2. Judges are explicitly not supposed to reward applause lights, but they are human, so sometimes they do.
  3. It's rarely a good idea to explicitly back down, even on an issue you are clearly losing. Instead you end up making a lot of 'even if' statements. I think Scott did a good job of explaining why that's not ideal in collaborative discussions (search for "I don’t like the “even if” framing.").

(1) isn't really a problem on the meta (read: relevant) level, since it's very obvious; mostly I think this ends up teaching the useful lesson 'you can prove roughly anything with ungrounded arguments'. (2) and (3) can inculcate actual bad habits, which I would worry about more if EA  wasn't already stuffed full of those habits and if my personal experience didn't suggest that debaters are pretty good at dropping those habits outside of the debates themselves. Still, I think they are things reasonable people can worry about.

By contrast, criticisms I think mostly don't make sense:

  • Goodharting 
  • Anything to the effect of 'the speakers might end up believing what they are saying', especially at top levels. Like, these people were randomly assigned positions, have probably been assigned the roughly opposite position at some point, and are not idiots. 

Finally, even after a re-read and showing your comment to two other people seeking alternative interpretations, I think you did say the thing you claim not to have said. Perhaps you meant to say something else, in which case I'd suggest editing to say whatever you meant to say. I would suggest an edit myself, but in this case I don't know what it was you meant to say.

Comment by AGB on EA Debate Championship & Lecture Series · 2021-04-10T08:00:18.034Z · EA · GW

You did give some responses elsewhere, so a few thoughts on your responses:

But this is really far from the only way policy debate is broken. Indeed, a large fraction of policy debates end up not debating the topic at all, but end up being full of people debating the institution of debating in various ways, and making various arguments for why they should be declared the winner for instrumental reasons. This is also pretty common in other debate formats.

(Emphasis added). This seems like a classic case for 'what do you think you know, and how do you think you know it?'. 

Here's why I think I know the opposite: the standard in British Parliamentary judging is to judge based on the 'Ordinary Intelligent Voter', defined as follows:

In particular, judges are asked to conceive of themselves as if they were a hypothetical ‘ordinary intelligent voter’ (sometimes also termed ‘average reasonable person’ or ‘informed global citizen’). This hypothetical ordinary intelligent voter doesn’t have pre-formed views on the topic of the debate and isn’t convinced by sophistry, deception or logical fallacies. They are well informed about political and social affairs but lack specialist knowledge. They are open-minded and concerned to decide how to vote – they are thus willing to be convinced by the debaters who provide the most compelling case for or against a certain policy. They are intelligent to the point of being able to understand and assess contrasting arguments (including sophisticated arguments), that are presented to them; but they keep themselves constrained to the material presented unless it patently contradicts common knowledge or is otherwise wildly implausible.

This definition is basically designed to be hard to Goodhart. It's still easy for judging cultures to take effect and either reward or fail to punish unhelpful behaviour, and personally I would list 'speaking too fast' under this, but nothing in that definition is likely to lead to people 'debating the institution of debating'. So unsurprisingly, I saw vanishingly little of this. Scanning down recent WUDC finals, the only one where the speakers appear to come close to doing this is the one where the motion itself is "This house believes that University Debating has done more harm than good". Correspondingly, I see no cases where they end up 'not debating the topic at all'. 

The debates I participated in in high-school had nobody talking fast. But it had people doing weird meta-debate, and had people repeatedly abusing terrible studies because you can basically never challenge the validity or methodology of a study, or had people make terrible rhetorical arguments, or intentionally obfuscate their arguments until they complete it in the last minute so the opposition would have no time to respond to it.

I mean, I'm sorry you had terrible judges or a terrible format I guess? I judged more high school debates than virtually anyone during my time at university, and these are not things I would have allowed to fly, because they are not things I consider persuasive to the Ordinary Intelligent Voter; the 'isn't convinced by sophistry, deception or logical fallacies' clause seems particularly relevant. 

On that note, I don't think it's a coincidence that a significant fraction of my comments on this forum are about challenging errors of math or logic. My rough impression is that other users often notice something is wrong, but struggle to identify it precisely, and so say nothing. It should be obvious why I'm keen on getting more people who are used to structuring their thoughts in such a way that they can explain the exact perceived error. Such exactness has benefits even when the perception is wrong and the original argument holds, because it's easier to refute the refutation. 

I might be wrong here, but I currently don't really believe that recruiting from the debate community is going to increase our cognitive diversity on almost any important dimension.

The Oxbridge debating community at least is pretty far to the right of the EA community, politically speaking. I consider this an important form of cognitive diversity, but YMMV. 

***

Overall, I'm left with the distinct impression that you've made up your mind on this based on a bad personal experience, and that nothing is likely to change that view. Which does happen sometimes when there isn't much in the way of empirical data (after all, there's sadly no easy way for me to disprove your claim that a large fraction of BP debates end up not debating the topic at all...), and isn't a bad reasoning process per se, but confidence in such views should necessarily be limited. 

Comment by AGB on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-07T18:25:31.611Z · EA · GW

Thanks for this, pretty interesting analysis.

Every time I come across an old post in the EA forum I wonder if the karma score is low because people did not get any value from it or if people really liked it and it only got a lower score because fewer people were around to upvote it at that time.

The other thing going on here is that the karma system got an overhaul when forum 2.0 launched in late 2018, giving some users 2x voting power and also introducing strong upvotes. Before that, one vote was one karma. I don't remember exactly when the new system came in, but I'd guess this is the cause of the sharp rise on your graph around December 2018. AFAIK, old votes were never re-weighted, which is why if you go back through comments on old posts you'll see a lot of things with e.g. +13 karma and 13 total votes, a pattern I don't recall ever seeing since. 

Partly as a result, most of the karma that old posts have will have come from people going back and upvoting them later once the new system was implemented, e.g. from memory my post from your list was around +10 for most of its life, and has drifted to its current +59 over the past couple of years.

This jumps out to me because I'm pretty sure that post was not a particularly high-engagement post even at the time it was written, but it's the second-highest 2015 post on your list. I think this is because it's been linked back to a fair amount and so can partially benefit from the karma inflation.

(None of which is meant to take away from the work you've done here, just providing some possibly-helpful context.)

Comment by AGB on EA Debate Championship & Lecture Series · 2021-04-07T13:20:31.701Z · EA · GW

I think these concerns are all pretty reasonable, but also strongly discordant with my personal experience, so I figured it would help third parties if I explained the key insights/skills that I think I learned, or that were strongly reinforced, through my debating experience. 

Three notable caveats on that experience:

  • I spent more time judging debates than I did speaking in them, which is moderately unusual. It's plausible to me that judging was much more useful.
  • It was 8-12 years ago, and my independent impression is that the top levels of the sport have degenerated somewhat since (e.g. I watched world-class debaters speak and while they spoke fast, I've never seen anything like the link Oli posted).
  • I approached debating with a mindset of 'this is an area I am naturally weak in and want to get better at', so it was always more likely to complement my natural quantitative approach to figuring things out, rather than replacing it.

(Edit: Since some other discussions on this thread are talking about various formats, I should also add that my experience is entirely inside British Parliamentary debate.)

All in all, I think it's very plausible Oli's experience was closer to a typical 2021 experience than mine. But mostly I'm just not sure; for one thing, I'd bet that the 'cram as many points in as possible' strategy is still much less prevalent at lower levels. 

With that out of the way, here are things I picked up that I think are important and useful for truth-tracking, as opposed to persuasion.

  • Actually listening to the arguments that have been made, in a way that means I could repeat them back with at-least-comparable eloquence to the speaker. Put another way, I think debating made me much better at ideological Turing tests.
  • A healthy skepticism of the power of arguments and inner-sense-of-conviction as a truth-tracking device, particularly whenever you are talking to someone smarter and more charismatic than yourself, or whenever you've just done something like give a speech (or write a blog post/comment!) in favour of a particular conclusion, or whenever you are surrounded by a group of people who all think the same way. This is very closely related to Epistemic Learned Helplessness. It seems like Scott realised this by reading pseudohistory books, see below quote, but my parallel 'oh shit' moment was being thoroughly out-argued and convinced by much better debaters in favour of A, and then being equally out-argued by debaters in favour of not-A. Unlike Scott's experience, I think those people could argue circles around me on virtually every topic. Which just makes it even more obvious you need a better approach.
  • Being able to generate (some) strong arguments against things I strongly believe and being able to do it independently. It's pretty common for novice debaters who are highly committed socialists to be unable to come up with any arguments for free markets, or vice-versa. I often see similar patterns, including on that exact issue but also on many other issues, within EA groups. I think getting better at this is critical if we want to do more policy work. Closely related: Policy debates should not appear one-sided. I'm also reminded of Haidt's work on moral foundations and how liberals tend to ignore some of the foundations.
  • Identifying critical disagreements, areas that if they resolved one way would likely result in a win for one side, and if they resolved the other way would win for the other side. These are very close to, though not quite the same as, CFAR's concept of a double crux.

To state the hopefully-obvious, I doubt debating is the optimal way to learn any of this. If I was talking to an EA without debating experience who really wanted to pick up the things I picked up, I'd advise them to read and reflect on the above links, and probably a few other related links I didn't think of, rather than getting involved in competitive debating, partly for reasons Oli gives and partly for time reasons. I did it primarily because it was fun and the fact it happened to be (imo) useful was a bonus, not unlike the reasons I played Chess or strategy games. That and the fact that half those posts didn't even exist back in 2009. 

At the same time, if I want to learn things from a conversation with someone that I disagree with, and all I know is that I have the choice between talking to someone with or without debating experience, I'm going with the first person. Past experience has taught me that the conversation is likely to be more efficient, more focused on cruxes and falsifiable beliefs, and thus less frustrating.

And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable...

Comment by AGB on How much does performance differ between people? · 2021-04-03T11:27:05.690Z · EA · GW

So taking a step back for a second, I think the primary point of collaborative written or spoken communication is to take the picture or conceptual map in my head and put it in your head, as accurately as possible. Use of any terms should, in my view, be assessed against whether those terms are likely to create the right picture in a reader's or listener's head. I appreciate this is a somewhat extreme position.

If every time you use the term heavy-tailed (and it's used a lot - a quick CTRL + F tells me it's in the OP 25 times) I have to guess from context whether you mean the mathematical or the commonsense definition, it's more difficult to parse what you actually mean in any given sentence. If someone is reading and doesn't even know that those definitions substantially differ, they'll probably come away with bad conclusions.

This isn't a hypothetical corner case - I keep seeing people come to bad (or at least unsupported) conclusions in exactly this way, while thinking that their reasoning is mathematically sound and thus nigh-incontrovertible. To quote myself above:

The above, in my opinion, highlights the folly of ever thinking 'well, log-normal distributions are heavy-tailed, and this should be log-normal because things got multiplied together, so the top 1% must be at least a few percent of the overall value'.

If I noticed that use of terms like 'linear growth' or 'exponential growth' were similarly leading to bad conclusions, e.g. by being extrapolated too far beyond the range of data in the sample, I would be similarly opposed to their use. But I don't, so I'm not. 

If I noticed that engineers at firms I have worked for were obsessed with replacing exponential algorithms with polynomial algorithms because they are better in some limit case, but worse in the actual use cases, I would point this out and suggest they stop thinking in those terms. But this hasn't happened, so I haven't ever done so. 

I do notice that use of the term heavy-tailed (as a binary) in EA, especially with reference to the log-normal distribution, is causing people to make claims about how we should expect this to be 'a heavy-tailed distribution' and how important it therefore is to attract the top 1%, and so...you get the idea.

Still, a full taboo is unrealistic and was intended as an aside; closer to 'in my ideal world' or 'this is what I aim for in my own writing' than a practical suggestion to others. As I said, I think the actual suggestions made in this summary are good - replacing the question 'is this heavy-tailed or not' with 'how heavy-tailed is this' should do the trick - and I hope to see them become more widely adopted.

Comment by AGB on How much does performance differ between people? · 2021-04-03T10:55:30.498Z · EA · GW

Briefly on this, I think my issue becomes clearer if you look at the full section.

If we agree that log-normal is more likely than normal, and log-normal distributions are heavy-tailed, then saying 'By contrast, [performance in these jobs] is thin-tailed' is just incorrect? Assuming you meant the mathematical senses of heavy-tailed and thin-tailed here, which I guess I'm not sure if you did.

This uncertainty and resulting inability to assess whether this section is true or false obviously loops back to why I would prefer not to use the term 'heavy-tailed' at all, which I will address in more detail in my reply to your other comment.

Ex-post performance appears ‘heavy-tailed’ in many relevant domains, but with very large differences in how heavy-tailed: the top 1% account for between 4% to over 80% of the total. For instance, we find ‘heavy-tailed’ distributions (e.g.  log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier

Comment by AGB on How much does performance differ between people? · 2021-04-02T18:31:40.021Z · EA · GW

Hi Max and Ben, a few related thoughts below. Many of these are mentioned in various places in the doc, so seem to have been understood, but nonetheless have implications for your summary and qualitative commentary, which I sometimes think misses the mark. 

  • Many distributions are heavy-tailed mathematically, but not in the common use of that term, which I think is closer to 'how concentrated is the thing into the top 0.1%/1%/etc.', and thus 'how important is it I find top performers' or 'how important is it to attract the top performers'. For example, you write the following:

What share of total output should we expect to come from the small fraction of people we’re most optimistic about (say, the top 1% or top 0.1%) – that is, how heavy-tailed is the distribution of ex-ante performance? 

  • Often, you can't derive this directly from the distribution's mathematical type. In particular, you cannot derive it from whether a distribution is heavy-tailed in the mathematical sense. 
  • Log-normal distributions are particularly common and are a particular offender here, because they tend to occur whenever lots of independent factors are multiplied together. But here is the approximate* fraction of value that comes from the top 1% in a few different log-normal distributions (see the simulation sketch after this list):
    EXP(N(0,0.0001)) -> 1.02%
    EXP(N(0,0.001)) -> 1.08%
    EXP(N(0,0.01)) -> 1.28%
    EXP(N(0,0.1)) -> 2.22%
    EXP(N(0,1)) -> 9.5%
  • For a real-world example, geometric Brownian motion is the most common model of stock prices, and produces a log-normal distribution of prices, but models based on GBM actually produce pretty thin tails in the commonsense use, which are in turn much thinner than the tails in real stock markets, as (in?)famously chronicled in Taleb's Black Swan among others. Since I'm a finance person who came of age right as that book was written, I'm particularly used to thinking of the log-normal distribution as 'the stupidly-thin-tailed one', and have a brief moment of confusion every time it is referred to as 'heavy-tailed'. 
  • The above, in my opinion, highlights the folly of ever thinking 'well, log-normal distributions are heavy-tailed, and this should be log-normal because things got multiplied together, so the top 1% must be at least a few percent of the overall value'. Log-normal distributions with low variance are practically indistinguishable from normal distributions. In fact, as I understand it many oft-used examples of normal distributions, such as height and other biological properties, are actually believed to follow a log-normal distribution.
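For anyone who wants to sanity-check these figures, here is a minimal simulation sketch (not the exact calculation I ran; it assumes Python with NumPy and reads N(0, v) as a normal distribution with variance v) that reproduces the top-1% shares above to within sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(samples, top_frac=0.01):
    """Fraction of the total accounted for by the top `top_frac` of samples."""
    cutoff = np.quantile(samples, 1 - top_frac)
    return samples[samples >= cutoff].sum() / samples.sum()

# Log-normal EXP(N(0, v)) for several variances v; sigma is the standard deviation.
for variance in [0.0001, 0.001, 0.01, 0.1, 1]:
    x = rng.lognormal(mean=0, sigma=np.sqrt(variance), size=100_000)
    print(f"EXP(N(0, {variance})): top 1% holds {top_share(x):.2%} of the total")
```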

***

I'd guess we agree on the above, though if not I'd welcome a correction. But I'll go ahead and flag bits of your summary that look weird to me assuming we agree on the mathematical facts:

By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier [1]: the top 1% account for 3-3.7% of the total.

I haven't read the meta-analysis, but I'd tentatively bet that much like biological properties these jobs actually follow log-normal distributions and they just couldn't tell (and weren't trying to tell) the difference. 

These figures illustrate that the difference between ‘thin-tailed’ and ‘heavy-tailed’ distributions can be modest in the range that matters in practice

I agree with the direction of this statement, but it's actually worse than that: depending on the tail of interest "heavy-tailed distributions" can have thinner tails than "thin-tailed distributions"! For example, compare my numbers for the top 1% of various log-normal distributions to the right-hand-side of a standard N(0,1) normal distribution where we cut off negative values (~3.5% in top 1%).  
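For completeness, a sketch of that comparison under the same assumptions (Python with NumPy; the 'thin-tailed' benchmark is a standard normal with negative values discarded):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard N(0,1) draws with negative values cut off (a half-normal distribution).
z = rng.normal(0, 1, size=200_000)
z = z[z > 0]

cutoff = np.quantile(z, 0.99)
print(f"Top 1% share of the half-normal: {z[z >= cutoff].sum() / z.sum():.2%}")
# Comes out around 3.5%, i.e. more concentrated than several of the log-normals above.
```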

 

It's also somewhat common to see comments like this from 80k staff (This from Ben Todd elsewhere in this thread):

You can get heavy tailed outcomes if performance is the product of two normally distributed factors (e.g. intelligence x effort).

You indeed can, but like the log-normal distribution this will tend to have pretty thin tails in the common use of the term. For example, multiplying two N(100,225) distributions together, chosen because this is roughly the distribution of IQ, gets you a distribution where the top 1% account for 1.6% of the total. Looping back to my above thought, I'd also guess that performance on jobs like cook and mail-carrier look very close to this, and empirically were observed to have similarly thin tails (aptitude x intelligence x effort might in fact be the right framing for these jobs).
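Again, a rough sketch of that calculation (assumes Python with NumPy, and reads N(100,225) as mean 100 and variance 225, i.e. the usual IQ standard deviation of 15):

```python
import numpy as np

rng = np.random.default_rng(0)

# Product of two independent N(100, 15^2) factors, e.g. intelligence x effort.
a = rng.normal(100, 15, size=100_000)
b = rng.normal(100, 15, size=100_000)
perf = a * b

cutoff = np.quantile(perf, 0.99)
print(f"Top 1% share of the product: {perf[perf >= cutoff].sum() / perf.sum():.2%}")
# Comes out around 1.6%, matching the figure above.
```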

***

Ultimately, the recommendation I would give is much the same as the bottom line presented, which I was very happy to see. Indeed, I'm mostly grumbling because I want to discourage anything which treats heavy-tailed as a binary property**, as parts of the summary/commentary tend to (see above).

Some advice for how to work with these concepts in practice:

  • In practice, don’t treat ‘heavy-tailed’ as a binary property. Instead, ask how heavy the tails of some quantity of interest are, for instance by identifying the frequency of outliers you’re interested in (e.g. top 1%, top 0.1%, …) and comparing them to the median or looking at their share of the total. [2]
  • Carefully choose the underlying population and the metric for performance, in a way that’s tailored to the purpose of your analysis. In particular, be mindful of whether you’re looking at the full distribution or some tail (e.g. wealth of all citizens vs. wealth of billionaires).

*Approximate because I was lazy and just simulated 10000 values to get these and other quoted numbers. AFAIK the true values are not sufficiently different to affect the point I'm making. 

**If it were up to me, I'd taboo the term 'heavy-tailed' entirely, because having an oft-used term whose mathematical and commonsense notions differ is an obvious recipe for miscommunication in a STEM-heavy community like this one. 

Comment by AGB on Forget replaceability? (for ~community projects) · 2021-03-31T17:49:20.203Z · EA · GW

I want to push back against a possible interpretation of this moderately strongly.

If the charity you are considering starting has a 40% chance of being 2x better than what is currently being done on the margin, and a 60% chance of doing nothing, I very likely want you to start it, naive 0.8x EV be damned. I could imagine wanting you to start it at much lower numbers than 0.8x, depending on the upside case. The key is to be able to monitor whether you are in the latter case, and stop if you are. Then you absorb a lot more money in the 40% case, and the actual EV becomes positive even if all the money comes from EAs.

If monitoring is basically impossible and your EV estimate is never going to get more refined, I think the case for not starting becomes clearer. I just think that's actually pretty rare?

From the donor side in areas and at times where I've been active, I've generally been very happy to give 'risky' money to things where I trust the founders to monitor and stop or switch as appropriate, and much more conservative (usually just not giving) if I don't. I hope and somewhat expect other donors are willing to do the same, but if they aren't that seems like a serious failure of the funding landscape. 

Comment by AGB on RyanCarey's Shortform · 2021-03-31T09:31:06.929Z · EA · GW

I have a few thoughts here, but my most important one is that your (2), as phrased, is an argument in favour of outreach, not against it. If you update towards a much better way of doing good, and any significant fraction of the people you 'recruit' update with you, you presumably did much more good via recruitment than via direct work. 

Put another way, recruitment defers the question of how to do good into the future, and is therefore particularly valuable if we think our ideas are going to change/improve particularly fast. By contrast, recruitment (or deferring to the future in general) is less valuable when you 'have it all figured out'; you might just want to 'get on with it' at that point. 

***

It might be easier to see with an illustrated example: 

Let's say in the year 2015 you are choosing whether to work on cause P, or to recruit for the broader EA movement. Without thinking about the question of shifting cause preferences, you decide to recruit, because you think that one year of recruiting generates (e.g.) two years of counterfactual EA effort at your level of ability.

In the year 2020, looking back on this choice, you observe that you now work on cause Q, which you think is 10x more impactful than cause P. With frustration and disappointment, you also observe that a 'mere' 25% of the people you recruited moved with you to cause Q, and so your original estimate of two years actually became six months (actually more because P still counts for something in this example, but ignoring that for now).

This looks bad because six months < one year, but if you focus on impact rather than time spent then you realise that you are comparing one year of work on cause P, to six months of work on cause Q. Since cause Q is 10x better, your outreach 5x outperformed direct work on P, versus the 2x you thought it would originally.
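To make the arithmetic explicit (a toy sketch in Python; all the numbers are the illustrative ones from this example, not estimates):

```python
# Illustrative numbers from the example above.
value_p = 1              # impact per year of work on cause P
value_q = 10             # impact per year of work on cause Q (10x cause P)
recruit_multiplier = 2   # years of counterfactual EA effort generated per year of recruiting
retention = 0.25         # fraction of recruits who later move with you to cause Q

direct_work = 1 * value_p                              # one year spent directly on P
recruiting = recruit_multiplier * retention * value_q  # ignoring the 75% still working on P
print(direct_work, recruiting)  # 1 vs 5: recruiting 5x outperforms direct work on P
```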

***

You can certainly plug in numbers where the above equation will come out the other way - suppose you had 99% attrition - but I guess I think they are pretty implausible? If you still think your (2) holds, I'm curious what (ballpark) numbers you would use. 

Comment by AGB on Some quick notes on "effective altruism" · 2021-03-25T00:16:10.184Z · EA · GW

+1. A short version of my thoughts here is that I’d be interested in changing the EA name if we can find a better alternative, because it does have some downsides, but this particular alternative seems worse from a strict persuasion perspective.

Most of the pushback I feel when talking to otherwise-promising people about EA is not really as much about content as it is about framing: it’s people feeling EA is too cold, too uncaring, too Spock-like, too thoughtless about the impact it might have on those causes deemed ineffective, too naive to realise the impact living this way will have on the people who dive into it. I think you can see this in many critiques.

(Obviously, this isn’t universal; some people embrace the Spock-like-mindset and the quantification. I do, to some extent, or I wouldn’t be here. But I’ve been steadily more convinced over the years that it’s a small minority.)

You can fight this by framing your ideas in warmer terms, but it does seem like starting at ‘Global Priorities community’ makes the battle more uphill. And I find losing this group sad, because I think the actual EA community is relatively warm, but first impressions are tough to overcome.

Low confidence on all of the above, would be happy to see data.

Comment by AGB on Progress Open Thread: March 2021 · 2021-03-23T23:18:08.472Z · EA · GW

I think similar adjustments should be made if you are extrapolating to crimes with very different prevalence. For example, the US murder rate is 4-5x that of the UK, but I wouldn’t expect the US to have that many more bike thefts.

Proxy seems fine if you’re focused on which country/city/etc. has higher overall crime, rather than estimating magnitude.

(FWIW, attempts at Googling the above suggest ~300k bike thefts per year in the UK versus 2m in the US; the US population is 5x bigger, so that's only 1.33x the UK rate. A quick check on bicycle sales in the two countries does not suggest that this is because of very different cycling rates. No links because on phone, but above is very rough anyway. I'm left with somewhat greater confidence that the gap is in fact <<4x, like 1.2x - 2x, though.)

Similar comments could be made about extrapolating from the large number of US billionaires (way more per capita than any other country IIRC) to the relative rates of people earning more than $200k/$50k/etc. That case might be more intuitive.

Comment by AGB on Progress Open Thread: March 2021 · 2021-03-23T19:56:42.815Z · EA · GW

(Arguably nitpicking, in the sense that I suspect this would not change the bottom line, posted because the use of stats here raised my eyebrows)

For some calibration, risk of drug abuse, which is a reasonable baseline for other types of violent behavior as well, is about 2-3x in adopted children. This is not conditioning on it being a teenager adoption, which I expect would likely increase the ratio to more something like 3-4x, given the additional negative selection effects. 

Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse

For the benefit of those who didn't click through the link, the rate on their chosen measure is very roughly 3.5% for adoptees versus roughly 1.5% for the general population, which I assume is where the 2-3x came from. I also buy that by adopting a teenager this number is going to be pushed up towards the foster child outcomes (~8%); a guess like 5% ("3-4x") seems reasonable.  

But you can't directly extrapolate from the ratio on a rare outcome to a typical outcome, e.g. a 20% -> 67% (67 = 20 * 5 / 1.5)  change in the absolute likelihood of sibling abuse, which I think is basically what you are doing here, though do correct me if I'm wrong since there were some numbers you gave I couldn't follow. The statistical intuition going into that is rough, but here's a concrete, if technical, example: 

A 1.5% bad tail outcome in a normal distribution means you are 2.17 standard deviations below the mean, a 5% tail outcome means you are 1.64 SDs below the mean, and so you would go 1.5% -> 5% just by dropping the mean by 0.53 SDs. But this would only move a 20% likelihood outcome to 38%, well short of 67% or even your 60%. To get a 20% outcome to 60% you need a 1.1 SD move, which would be equivalent to a 1.5% outcome becoming 14%. The choice of normal distribution in the above is arbitrary, but I expect the pattern to hold among reasonable choices for this case. 
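Here's that arithmetic as a short sketch (assumes Python with SciPy; everything below is the illustrative normal-distribution model from the paragraph above, not adoption data):

```python
from scipy.stats import norm

# Shift in the mean (in SDs) needed to move a 1.5% lower-tail outcome to 5%.
shift = norm.ppf(0.05) - norm.ppf(0.015)      # ~0.53 SD
print(norm.cdf(norm.ppf(0.20) + shift))       # a 20% outcome becomes ~38%, well short of 60%+

# Shift needed to move a 20% outcome to 60%, and what that implies for the 1.5% tail.
big_shift = norm.ppf(0.60) - norm.ppf(0.20)   # ~1.1 SD
print(norm.cdf(norm.ppf(0.015) + big_shift))  # the 1.5% tail outcome becomes ~14%
```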

In less technical language: you don't have to move a distribution very much to change the probability of tail outcomes by a lot, whereas almost by definition you do have to move a distribution a lot to change the probability of typical outcomes by a lot. 

Comment by AGB on Responses and Testimonies on EA Growth · 2021-03-23T17:20:31.428Z · EA · GW

I agree with a lot of this, and I appreciated both the message and the effort put into this comment. Well-substantiated criticism is very valuable.

I do want to note that GWWC being scaled back was flagged elsewhere, most explicitly in Ben Todd's comment (currently 2nd highest upvoted on that thread). But for example, Scott's linked reddit comment also alludes to this, via talking about the decreased interest in seeking financial contributions. 

But it's true that in neither case would I expect the typical reader to come away with the impression that a mistake was made, which I think is your main point and a good one. This is tricky because I think there's significant disagreement about whether this was a mistake or a correct strategic call, and in some cases I think what is going on is that the writer thinks the call was correct (in spite of CEA now thinking otherwise), rather than simply refusing to acknowledge past errors.

Comment by AGB on What do you make of the doomsday argument? · 2021-03-19T17:51:19.786Z · EA · GW

With low confidence, I think I agree with this framing.

If correct, then I think the point is that seeing us at an 'early point in history' updates us against a big future, but the fact we exist at all updates in favour of a big future, and these cancel out.

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?

Comment by AGB on Politics is far too meta · 2021-03-19T16:10:05.262Z · EA · GW

Weirdly, I found this post a bit 'too meta', in the sense that there are a lot of assertions and not a lot of effort to provide evidence or otherwise convince me that these claims are actually true. Some claims I agree with anyway (e.g. I think you can reasonably declare political feasibility 'out-of-scope' in early-stage brainstorming), some I don't. Here's the bit that my gut most strongly disagrees with:

A good test is to ask, when right things are done on the margin, what happens? When we move in the direction of good policies or correct statements, how does the media react? How does the public react?

This does eventually happen on almost every issue.

The answer is almost universally that the change is accepted.

***

This is a pure ‘they’ll like us when we win.’ Everyone’s defending the current actions of the powerful in deference to power.

I agree this is a good test. But I have the opposite gut sense of what happens when this test has been carried out. In general, I think there is backlash. Examples:

After Obama was elected, did the people who were not sold on a Black person being president disappear 'in deference to power'? Not exactly.

After Trump was elected on the back of anti-immigration statements and enacted anti-immigration policies, support for immigration rose.

In the UK, there has also been a sharp rise in support for immigration since 2010 (when the Conservatives came to power with a mandate to reduce immigration), including during the Brexit referendum campaign and Brexit itself; i.e. while the UK's effective policy on immigration has been becoming tighter, public opinion has been moving in the exact opposite direction. While I don't have the data to hand, I'm fairly sure the opposite happened during the high-immigration years of the 2000s; as public policy got looser, public opinion got tighter. So this is a case where regardless of whether you think the good policy is 'reducing immigration' or 'increasing immigration', it seems clear that your favoured path being taken did not lead to a virtuous cycle of more support for the good thing.

A less talked-about example: some politicians in the UK traced the Brexit vote to a conservative backlash after the legalisation of gay marriage; this is the clearest 'good policy/right thing' of the bunch in my personal view, and yet still it was not accepted once implemented.

*** 

There are more examples, but you get the idea. I'd be interested in any attempt to make the opposite case, that people do generally fall in line, on the object-level. 

I have noted this happening a lot with COVID specifically, but I considered it an anomaly that probably has something to do with the specifics of the COVID situation (at a guess, the fact that it hasn't been going on for that long, so a rather large number of people don't have settled and hard-to-change views, instead they just go along with whatever authority is doing), rather than a generalisable truth we can apply in other areas. 

Comment by AGB on What Makes Outreach to Progressives Hard · 2021-03-17T11:18:33.351Z · EA · GW

This has been a philosophical commitment since the early days of EA, yet information on how we (or the charities we prioritize) actually confirm with recipients that our programs are having the predicted positive impact on them receives, AFAICT, little attention in EA.

[Within footnote] As an example, after ten minutes of searching I could not find information on GiveWell's overall view on this subject on their website.

 

FWIW, the most closely related Givewell article I'm aware of is How not to be a "white in shining armor". Relevant excerpts (emphasis in original):

We fundamentally believe that progress on most problems must be locally driven. So we seek to improve people’s abilities to make progress on their own, rather than taking personal responsibility for each of their challenges. How can we best accomplish this?...

A common and intuitively appealing answer is letting locals drive philanthropic projects...At the same time, we have noted some major challenges of doing things this way. Which locals should be put in charge?...

Another approach to “putting locals in the driver’s seat” is quite different. It comes down to acknowledging that as funders, we will always be outsiders, so we should focus on helping with what we’re good at helping with and leave the rest up to locals...

It’s not that we think global health and nutrition are the only important, or even the most important, problems in the developing world. It’s that we’re trying to focus on what we can do well, and thus maximally empower people to make locally-driven progress on other fronts.

Comment by AGB on Why do so few EAs and Rationalists have children? · 2021-03-17T10:51:30.735Z · EA · GW

Worth noting that you might get increased meaningfulness in exchange for the lost happiness

FWIW, I think this accidentally sent this subthread off on a tangent because of the phrasing of 'in exchange for the lost happiness'. 

My read of the stats, similar to this Vox article and to what Robin actually said, is that people with children (by choice) are neither more nor less happy on average than childless people (by choice), so any substantial boost to meaning should be seen as a freebie, rather than something you had to give up happiness for.

I think there's a related error where people look at the costs of having children (time, money, etc.) and conclude that it's not worth it if the children aren't even making you happy at the end of all that. But this doesn't make sense, at least from a selfish perspective: the parents in these studies were also paying all those costs, their childless counterparts were not, and yet the bottom line was essentially no overall effect, suggesting that children are either providing something which makes up for these costs or that the costs are not as big as people sometimes make out (my suspicion as a father of two is that it's a bit of both). And so as Vox put it:

Bottom line: The evidence we have suggests having children doesn't affect a person's happiness much one way or another. But that evidence is limited by people selecting into the path they think is best for them. So: If you want to have kids, have kids. If you don't want to have kids, don't have kids. The happiness literature isn't going to make the decision for you.

Comment by AGB on What Makes Outreach to Progressives Hard · 2021-03-17T09:40:37.280Z · EA · GW

Whether this is a ‘good’ answer would depend on your audience, but I think one true answer from a typical EA would be ‘I care about those things too, but I think that the global poor/nonhuman animals/future generations are even more excluded from decision-making (and therefore ignored) than POC/women/LGBT groups are, so that’s where I focus my limited time and money’.

I don’t actually think the cause area challenge is quite what is going on here; I can easily imagine advancing those things being considered cause areas if they had a stronger case.

Comment by AGB on What Makes Outreach to Progressives Hard · 2021-03-16T16:19:39.075Z · EA · GW

But also, I think a lot of people that end up at HLS don't think in those sort of Marxist/socialist class terms, but rather just have a sort of strong Rawslian egalitarianism commitment.

I also think many people at HLS are hilariously unaware of their class privilege.

FWIW, I strongly agree with both of these statements for Oxbridge in the UK as well. 

The latter I think is a combination of a common dynamic where most people think they are closer to the middle of the income spectrum than they are, plus a natural human tendency to focus on the areas where you are being treated poorly or unfairly over the areas where you are being treated well. 

Comment by AGB on Why do so few EAs and Rationalists have children? · 2021-03-15T00:36:07.952Z · EA · GW

To this I would add:

Beware of the selection effect where I’d expect people with kids are less likely to come to meetups, less likely to post on this forum, etc. than EAs with overall-similar levels of involvement, so it can look like there are fewer than is actually the case, if you aren’t counting carefully.

For EA clusters in very-high-housing-cost areas specifically (Milan mentioned the Bay), I wouldn’t be surprised if the broader similar demographic is also avoiding children, since housing is usually the largest direct financial cost of having children, so you may need to control for that as well.

(I think I agree there’s still some difference here, just flagging some confounders beyond what Buck mentioned.)

Comment by AGB on How to make people appreciate asynchronous written communication more? · 2021-03-10T22:23:45.222Z · EA · GW

Writing is just a lot more time-consuming to cover equivalent ground in my experience. I occasionally make the mistake of getting into multi-hour text conversations with people, and almost invariably look back and think we could have covered the same ground in a phone call lasting <25% as long.

Comment by AGB on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T19:46:00.741Z · EA · GW

Scattered thoughts on this, pointing in various directions. 

TL;DR: Measuring and interpreting movement growth is complicated.

Things I'm relatively confident about:

  1. You need to be careful about whether the thing you are looking at is a proxy for 'size of EA' or a proxy for a derivative, i.e. 'how fast is EA growing'. I think Google Trends searches for 'Effective Altruism' are mostly the latter; it's something people might do on the way into the movement, but not something I would ever do.
  2. After correcting for (1), my rough impression is that EA grew super-linearly up to about 2016, and then approximately linearly after that up to about March 2020. Interpretation of many metrics since COVID is complicated by, well, COVID. One salient-to-me way to think about linear growth is that each year some fraction of the new crop of university students discover EA and some fraction of them take to it.
  3. Givewell money moved is obviously going to be impacted by a shift away from global poverty/health as a focus area within the movement. We have survey data which suggests this has happened over the time period in question.  In that  context, a 93% increase in non-Open-Phil money moved to the shrinking cause area between 2015 and 2019 is pretty good.
    • OTOH, when looking at any kind of money moved over time you need to remember that EA's non-Open-Phil financial power should increase regardless of the number of people increasing. The movement is young and full of the types of people who have had large income increases between 2015 and 2020. For example, while I couldn't find the data quickly, I believe >>50% of GWWC members were students in 2015 and <50% are now.
    • Also on that hand, I expect most of Givewell's donors don't self-identify as EAs. Whether this matters is unclear, makes it a bit of a weak proxy though.
  4. I really don't think it makes sense to treat Alienation/Demandingness as a constant. Scott's response to this matches my impressions, and one of the things it flags is how the level of demands on proto-EAs has increased, in my opinion by a lot. I think this is true in both the 'level of dedication required' sense and the 'level/specificity of skills required' sense.
    • This is particularly salient, perhaps too salient, to me for personal reasons. I am a top-third Maths graduate from a top university who has donated roughly half of my income to date, but I don't quite hit the type and level of dedication/skill that I perceive is desired, and partly as a result I doubt I would have gotten involved in the movement if I had been born 6 years later. I want to be explicit that this isn't necessarily a problem - that judgement is very sensitive to beliefs about relative values of different types of individuals - I'm just providing a personal anecdote that if it generalises would serve as partial explanation for the tailing off of growth rates.

Things I am less confident about: 

  1. While the level of demands has increased, I think EA's online spaces are actually less supportive than they used to be, creating a gap that can easily leave people disillusioned, especially if they are geographically distant from major movement hubs. Many in-person spaces seem to be healthier, but are always going to grow less rapidly.
  2. The other gap leaving people disillusioned is the lack of actual things to do, especially at 'entry levels' of dedication but also at higher levels if you strike out on job applications. I chuckle sadly every time I read this piece, in particular the paragraph quoted below.
  3. I do actually agree a lot of people who get seriously involved (say >20% dedication, including anyone who has changed their career path for EA reasons) in EA seem to have liked it as soon as they hear about it. But:
    1. I think this is at best a partial account of why growth has stalled, because as of now my impression is that essentially nobody (<10% of university students) has heard about it.
    2. If you lower the dedication bar at all I get a lot more positive on the possibility of convincing people. Partly this is for personal reasons, my closest friend from university has taken the GWWC pledge and I'm >50% confident he would not have done so if it weren't for me talking about EA. I just don't expect him to 'go beyond' that pledge or otherwise engage with the community.
    3. If I do check my own path, I was introduced to EA or Rationality at least three times before something stuck: I saw Toby Ord give an interview, was nudged into reading parts of the Sequences by my first job, then a university friend pointed me to HPMOR, then finally a different university friend interned at GWWC.  So on the one hand it does seem with hindsight like these communities kept knocking on my door, but on the other I didn't actually do anything with the first few points of contact. For me it was when I was asked to do something concrete, achievable and valuable that I switched my attention. I know others' mileage varies a lot here; some people are particularly drawn to the intellectual aspect for example.  But it means that even 'innate' EAs might need exposure to the representation of EA that matches what they are looking for.
    4. Finally, there are at least two candidate explanations of 'many people who are seriously involved with EA liked it as soon as they heard about it', if true. EA could be innate, or we could suck at providing the incentive gradients/incremental support necessary to turn less-committed people into more-committed people. Both would create that pattern.

Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.

Comment by AGB on Complex cluelessness as credal fragility · 2021-03-05T23:46:31.419Z · EA · GW

How to manage deep uncertainty over the long-run ramifications of ones decisions is a challenge across EA-land - particularly acute for longtermists, but also elsewhere: most would care about risks about how in the medium term a charitable intervention could prove counter-productive

This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules. 

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.

Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an 'inverse logic of the larder' (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".

I think you were trying to draw a distinction, but FWIW this feels structurally similar to the 'AMF impact population growth/economic growth' argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. 'promoting as much concern for animal welfare as possible'). Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well-placed to observe or comment on either way.

b) Early (or motivated) stopping across crucial considerations.

This I agree is a problem.  I'm not sure if thinking in terms of cluelessness makes it better or worse; I've had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I've been unconvinced of every case and think said interlocutor is 'stopping early' and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it's actually quite hard to come up with an intervention that doesn't credibly impact one of those). 

I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires, everything has risk attached, and trying to entirely avoid such is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance though. 

Given my fairly deflationary OP, I don't think these problems are best described as cluelessness

Point taken. Given that, I hope this wasn't too much of a hijack, or at least that it was an interesting one. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment. 

Comment by AGB on Brainstorm: What questions will the general public find most interesting about charities and causes? · 2021-03-03T23:22:27.561Z · EA · GW

Your comment reminded me of this post, whose ideas I like as a starting point for handling this type of question:

https://forum.effectivealtruism.org/posts/DYr7kBpMpmbygBiEq/the-privilege-of-earning-to-give

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-19T23:01:19.097Z · EA · GW

Not sure if you were referring to that particular post or the whole sequence. If I follow it correctly, I think that particular post is trying to answer the question 'how can we plausibly impact the long-term future, assuming it's important to do so'. I think it's a pretty good treatment of that question!

But I wouldn't mentally file that under cluelessness as I understand the term, because that would also be an issue under ordinary uncertainty. To the extent you explain how cluelessness is different to garden-variety uncertainty and why we can't deal with it in the same way(s), that happens earlier in your sequence of posts, and so far I have not been moved away from the objection you try to address in your second post. That said, if you read the (long) exchange with MichaelStJules above, you can see him trying to move me, and at minimum succeeding in giving me a better picture of where the disagreements might be. 

Edit: I guess what I'm really saying is that the part of that sequence which seems useful and interesting to me - the last bit - could also have been written and would be just as important if we were merely normally-uncertain about the future, as opposed to cluelessly-uncertain. 

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-19T22:47:10.120Z · EA · GW

I think if you reject incomparability, you're essentially assuming away complex cluelessness and deep uncertainty.

That's really useful, thanks, at the very least I now feel like I'm much closer to identifying where the different positions are coming from. I still think I reject incomparability; the example you gave didn't strike me as compelling, though I can imagine it compelling others. 

So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn't reveal a ranking, since you've eliminated the option I'd want to give, "I really don't know". You could force me to choose among wrong answers to other questions, too.

I would say it's reality that's doing the forcing. I have money to donate currently; I can choose to donate it to charity A, or B, or C, etc., or to not donate it. I am forced to choose and the decision has large stakes; 'I don't know' is not an option ('wait and do more research' is, but that doesn't seem like it would help here). I am doing a particular job as opposed to all the other things I could be doing with that time; I have made a choice and for the rest of my life I will continue to be forced to choose what to do with my time. Etc. 

It feels intuitively obvious to me that those many high-stakes forced choices can and should be compared in order to determine the all-things-considered best course of action, but it's useful to know that this intuition is apparently not shared. 

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-16T08:15:05.447Z · EA · GW

Thanks again. I think my issue is that I’m unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I’d generally say you have three options: choose A, choose B, or be indifferent.

Incomparability in this context seems to imply that one could be indifferent between A and B, prefer C to A, yet be indifferent between C and B. That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity; I'm curious if you have a concrete example where this feels intuitive?

For the second half, note I said among all actions being taken. If ‘business as usual’ includes action A which is dominated by action B, we can improve things by replacing A with B.

Comment by AGB on Deference for Bayesians · 2021-02-16T06:24:24.362Z · EA · GW

I mostly agree with this. Of course, to notice that you have to know that (2)/(3) are part of the 'expert belief set' (or at least it really helps), which you easily might not have done if you relied on Twitter/Facebook/headlines for your sense of 'expert views'.

And indeed, I had conversations where pointing those things out to people updated them a fair amount towards thinking that masks were worth wearing.

In other words, even if you go and read the expert view directly and decide it doesn’t make sense, I expect you to end up in a better epistemic position than you would otherwise be; it’s useful for both deference and anti-deference, and imo will strongly tend to push you in the ‘right’ direction for the matter at hand.

Edit: Somewhat independently, I'd generally like our standards to be higher than 'this argument/evidence could be modified to preserve the conclusion'. I suspect you don't disagree, but I'm stating it explicitly because leaning too hard on that in a lot of different areas is one of the larger factors leading me to be unhappy with the current state of EA discourse.

Comment by AGB on Deference for Bayesians · 2021-02-14T12:01:41.438Z · EA · GW

A quibble on the masks point because it annoys me every time it's brought up. As you say, it's pretty easy to work out that masks stop an infected person from projecting nearly as many droplets into the air when they sneeze, cough, or speak, study or no study. But virtually every public health recommendation that was rounded off as 'masks don't work' did in fact recommend that infected people should wear masks. For example, the WHO advice that the Unherd article links to says:

Among the general public, persons with respiratory symptoms or those caring for COVID-19 patients at home should receive medical masks

Similarly, here's the actual full statement from Whitty in the UK:

Prof Whitty said: “In terms of wearing a mask, our advice is clear: that wearing a mask if you don’t have an infection reduces the risk almost not at all. So we do not advise that.”

“The only people we do sometimes use masks for are people who have got an infection and that is to help them to stop it spreading around," he added.

As for the US, here's Scott Alexander's summary of the debate in March:

As far as I can tell, both sides agree on some points.

They agree that N95 respirators, when properly used by trained professionals, help prevent the wearer from getting infected.

They agree that surgical masks help prevent sick people from infecting others. Since many sick people don’t know they are sick, in an ideal world with unlimited mask supplies everyone would wear surgical masks just to prevent themselves from spreading disease.

So 'the experts' did acknowledge, often quite explicitly, that masks should stop infected people spreading the infection, as the video and just plain common sense would suggest. 

This is mostly a quibble because I think it's pretty plausible you know this, and I do agree that the downstream messaging was pretty bad: it was mostly rounded off to 'masks don't work', and there was a strong tendency to double down on that as it became contentious, as opposed to (takes deep breath) 'Masks are almost certainly worthwhile for infected people, and we don't really know much of anything for asymptomatic people, but supplies are limited so maybe they aren't the priority right now, though they could be worth it very soon.' Admittedly the first is much more pithy.

But it's not entirely frivolous either; I've had many conversations with people (on both sides) over the past several months who appeared to be genuinely unaware that the WHO et al. were recommending that infected people wear masks; they just blindly assumed that the media messaging matched what the experts were saying at the time. So I suggest that regardless of one's thoughts on myopic empiricism, capabilities of experts, etc., one easy improvement when trying to defer to experts is to go and read what the experts are actually saying, rather than expecting a click-chasing headline writer to have accurately summarised it for you.

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-14T11:29:02.796Z · EA · GW

I certainly wouldn't walk on by, but that's mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by.

This makes some sense, but to take a different example, I've followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some level of complex cluelessness applies here if it applies anywhere, and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists. 

As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of those situations should lead to a certain degree of skepticism that they are in fact what is really going on. But I don't want to spend too much time throwing around speculation about intentions relative to focusing on the object-level arguments made, so will leave this train of thought here.

Comment by AGB on In diversity lies epistemic strength · 2021-02-14T10:54:38.054Z · EA · GW

I'm claiming the latter, yes. I do agree it's hard to prove, but I place high subjective credence (~88%) on it. Put simply, if I can directly observe factors that would tend to lower the representation of WEIRD ethnic minorities, I don't necessarily need an estimate of the percentage of WEIRD people who are ethnic minorities, or even of the percentage of people in EA who are from ethnic minorities. I only need to think that the factors are meaningful enough to lead to meaningful differences in representation, and that they are not being offset by comparably meaningful factors in the other direction. Some of these factors are innocuous, some less so.

But if you're interested in public attempts to take the direct comparison route, which I fully acknowledge would be stronger evidence if done well, you might find this post of relevance. (Note I'm not necessarily advocating for the concrete suggestions in the post, mostly linking for the counts at the start.)

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-14T10:43:08.242Z · EA · GW

Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.

Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that's very different to the cluelessness line of attack; put harshly, it's the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which, if I'm being honest, is probably part of why I find it difficult to take these arguments seriously; we are in fact in that situation all the time, whether we realise it or not, and if we wouldn't ignore the drowning child on our doorstep we shouldn't entirely ignore the ones half a world away...unless we are unfortunately forced to do so by the need to save even greater numbers / prevent even greater suffering.

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T22:37:57.176Z · EA · GW

Thanks for the response, but I don't think this saves it. In the below I'm going to treat your ranges as being about the far future impacts of particular actions, but you could substitute 'all the impacts of particular actions' if you prefer.

In order for there to be useful things to say, you need to be able to compare the ranges. And if you can rank the ranges ("I would prefer 2 to 1", "I am indifferent between 3 and 4", etc.), and that ranking obeys basic rules like transitivity, that seems equivalent to collapsing all the ranges to single numbers. Collapsing two actions to the same number is fine. So in your example I could arbitrarily assign a 'score' of 0 to action 1, a score of 1 to action 2, and scores of 2 to each of 3 and 4.

Then my decision rule just switches from 'do the thing with highest expected value' to 'do (one of) the things with highest score', and the rest of the argument is essentially unchanged: either every possible action has the same score or it doesn't. If some things have higher scores than others, then replacing a lower score action with a higher score action is a way to tractably make the far future better.

Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B. 
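To make the 'collapse the ranking to scores' step concrete, here is a minimal sketch in Python. The four action names and the pairwise judgements are invented purely for illustration; the only assumption doing any work is that the judgements compare every pair and obey transitivity.

```python
# Invented pairwise judgements over four hypothetical actions.
# '>' means the first option is preferred; '=' means indifferent.
prefers = {
    ("action_2", "action_1"): ">",
    ("action_3", "action_1"): ">",
    ("action_4", "action_1"): ">",
    ("action_3", "action_2"): ">",
    ("action_4", "action_2"): ">",
    ("action_3", "action_4"): "=",
}
options = ["action_1", "action_2", "action_3", "action_4"]

def beats(a, b):
    """True if a is strictly preferred to b under the judgements above."""
    return prefers.get((a, b)) == ">"

# Collapse the ranking to scores: count how many other options each one beats.
# For a ranking that compares every pair and is transitive, this reproduces it exactly.
scores = {o: sum(beats(o, other) for other in options) for o in options}

# Decision rule: 'do (one of) the things with the highest score'.
best = max(options, key=scores.get)
print(scores)  # {'action_1': 0, 'action_2': 1, 'action_3': 2, 'action_4': 2}
print(best)    # 'action_3' (tied with 'action_4')
```

The specific numbers don't matter; any assignment that preserves the ordering gives the same decisions, which is why the assignment can be arbitrary.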

Did I miss an out? It's been a while since I've had to think about weak orderings...

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T22:02:54.068Z · EA · GW

Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I'm not really sure how that resolves things.


If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it's reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren't really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a waste of time for reasons in my first comment, and it sounds like we agree on that, but at that point it's reasonable to say that 'AMF could be good or bad, I'm not really sure, because I've chosen to focus my limited time and attention elsewhere'. There's no deep or fundamental uncertainty here, just a classic example of triage leading us to prioritise promising-looking paths over unpromising-looking ones.

For the same reason, I don't see anything wrong with that quote from Greaves; coming from someone who thinks we can tractably impact the far future and that the far future is massively morally relevant, it makes a lot of sense. If it came from someone who thought it was impossible to tractably impact the future, I'd want to dig into it more. 

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T17:13:45.317Z · EA · GW

I'm not sure how to parse this 'expectation that is neither positive nor negative nor zero but still somehow impacts decisions' concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck and this collapses back to being decision-irrelevant.

I have read the paper. I'm surprised you think it's well-explained there, since it's pretty dense. Accordingly, I won't pretend I understood all of it. But I do note it ends as follows (emphasis added):

It is not at all obvious on reflection, however, what the phenomenon of cluelessness really amounts to. In particular, it (at least at first sight) seems difficult to capture within an orthodox Bayesian model, according to which any given rational agent simply settles on some particular precise credence function, and the subjective betterness facts follow. Here, I have explored various possibilities within an ‘imprecise-credence’ model. Of these, the most promising account – on the assumption that the phenomenon of cluelessness really is a genuine and deep one – involved a ‘supervaluational’ account of the connection between imprecise credences and permissibility. It is also not at all obvious, however, how deep or important the phenomenon of cluelessness really is. In the context of effective altruism, it strikes many as compelling and as deeply problematic. However, mundane, everyday cases that have a similar structure in all respects I have considered are also ubiquitous, and few regard any resulting sense of cluelessness as deeply problematic in the latter cases. It may therefore be that the diagnosis of would-be effective altruists’ sense of cluelessness, in terms of psychology and/or the theory of rationality, lies quite elsewhere.

And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I'm pointing to anyway. In other words, I'm not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work. 

Perhaps we could find some other interventions for which that's the case to a much lesser extent. If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we're doing is beneficial and of how beneficial it is? I think the answer is yes.

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T13:49:28.282Z · EA · GW

So yes we are in fact predictably influencing the far future by giving to AMF, in that we know we will be affecting the number of people who will live in the future. However, I wouldn't say we are influencing the far future in a 'tractable way' because we're not actually making the future better (or worse) in expectation

 

If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.

Cluelessness only has potential to be interesting if we can plausibly reduce how clueless we are with investigation (this is a lot of Greg's point in the OP); in this sense the simple/complex difference Greaves identifies is not quite the action-relevant distinction. If, having investigated, far future impacts meaningfully alter the AMF analysis, this is precisely because we have decided that AMF meaningfully impacts the far future in at least one way that is good or bad in expectation, i.e. we can tractably impact the far future. 

Put simply, if we cannot affect the far future in expectation at all, then logically AMF cannot affect the far future in expectation. If AMF does not affect the far future in expectation, far future effects need not concern its donors. 

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T12:28:39.763Z · EA · GW

I think there's a difference between the muddy concept of 'cause areas' and actual specific charities/interventions here. At the level of cause areas there could be overlap: I agree that if you think the Most Important Thing is to expand the moral circle, then there are things in the animal-substitute space that might be interesting. But I'd be surprised and suspicious (not infinitely suspicious, just moderately so) if the actual bottom-line charity-you-donate-to was the exact same thing as what you got to when trying to minimise the suffering of animals in the present day. Tobias makes a virtually identical point in the post you link to, so we may not disagree, apart from perhaps thinking about the word 'intervention' differently.

Most animal advocacy efforts are focused on helping animals in the here and now. If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective.5

Similarly, I could imagine a longtermist concluding that if you look back through history, attempts to e.g. prevent extinction directly or implement better governance seem like they would have been critically hamstrung by a lack of development in the relevant fields, e.g. economics, and the general difficulty of imagining the future. But attempts to grow the economy and advance science seem to have snowballed in a way that impacts the future and also incidentally benefits the present. So in that way you could end up with a longtermist-inspired focus on things like 'speed up economic growth' or 'advance important research' which arguably fall under the 'near-term human-centric welfare' area on some categorisations of causes. But you didn't get there from that starting point, and again I expect your eventual specific area of focus to be quite different. 

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-13T12:10:22.059Z · EA · GW

If we have good reason to expect important far future effects to occur when donating to AMF, important enough to change the sign if properly included in the ex ante analysis, that is equivalent to (actually somewhat stronger than) saying we can tractably influence the far future, since by stipulation AMF itself now meaningfully and predictably influences the far future. I currently don't think you can believe the first and not the second, though I'm open to someone showing me where I'm wrong.

Comment by AGB on In diversity lies epistemic strength · 2021-02-13T09:21:40.181Z · EA · GW

FWIW, I don’t think your argument goes through for ethnic diversity either; EA is much whiter than its WEIRD base. I agree aiming to match the ethnic diversity of the world would be a mistake.

(Disclaimer: Not white)

Comment by AGB on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-12T23:05:57.812Z · EA · GW

Spitballing here, but have you considered putting some thoughts to this effect on your website? Currently, the relevant part of the 80k website reads as follows.

Why wasn’t I accepted?

Unfortunately, due to overwhelming demand, we can’t advise everyone who applies. However, we’re confident that everyone who is reading this has what it takes to lead a fulfilling, high impact career. Our key ideas series contains lots of our best advice on this topic – we hope you’ll find it useful.

If you’re thinking of re-applying, you can improve your chances by:

  1. Reading our key ideas series.
  2. Using our planning tool, which we developed to help people think through their own decisions.

You can also get involved in our community to get help from other people trying to do good with their careers.

This is OK as far as it goes, but to me it does feel a little like a fake-positive 'I'm sure you'll do just fine, whoever-you-are!'. Pointing out things like the fact that you have very little information to go on, and that you're optimising for the people you can help most rather than making some kind of pure 'how valuable is this person' call, seems like it could help soften the blow at the margin, though I appreciate it'll never make that large a difference given the other things you mentioned.

Comment by AGB on Complex cluelessness as credal fragility · 2021-02-12T13:45:30.878Z · EA · GW

Many of the considerations regarding the influence we can have on the deep future seem extremely hard, but not totally intractable, to investigate. Offering naive guestimates for these, whilst lavishing effort to investigate easier but less consequential issues, is a grave mistake. The EA community has likely erred in this direction.

***

Yet others, those of complex cluelessness, do not score zero on ‘tractability’. My credence in “economic growth in poorer countries is good for the longterm future” is fragile: if I spent an hour (or a week, or a decade) mulling it over, I would expect my central estimate to change, although my remaining uncertainty to be only a little reduced. Given this consideration has much greater impact on what I ultimately care about, time spent on this looks better than time further improving the estimate of immediate impacts like ‘number of children saved’. It would be unwise to continue delving into the latter at the expense of the former. Would that we do otherwise.

 

Currently, I straightforwardly disagree with both of these claims; I do not think it is a good use of 'EA time' to try and pin down exactly what the long term (say >30 years out) effects of AMF donations are, or for that matter of any other action originally chosen for its short term benefits (such as...saving a drowning child). I feel very confused about why some people seem to think it is a good use of time, and would appreciate any enlightenment offered on this point.

The below is somewhat blunt, because currently cluelessness arguments appear to me to be almost entirely without decision-relevance. At the same time, a lot of people who I think are smarter than me appear to think they are relevant and interesting for something and someone, and I'm not quite clear what the something or who the someone is, so my confidence is necessarily more limited than the tone would suggest. Then again, this thought is clearly not limited to me, e.g. Buck makes a very similar point here. I'd really love to get an example, even a hypothetical one, of where any of this would actually matter at the level of making donation/career/etc. decisions. Or, if everybody agrees it would not impact decisions, an explanation of why it is a good idea to spend any 'EA time' on this at all.

***

For the challenge of complex cluelessness to have bite in the case of AMF donations, it seems to me that we need something in the vicinity of these two claims:

  1. The expected long term consequences of our actions dominate the expected short term consequences, in terms of their moral relevance.
  2. We can tractably make progress on predicting what those long term consequences are, beyond simple 1-to-1 extrapolation from the short term consequences, by considering those consequences directly.

In short, I claim that once we truly believe (1) and (2), AMF would no longer be on our list of donation candidates.

For example, suppose you go through your suggested process of reflection, come to the conclusion AMF will in fact tractably boost economic growth in poorer countries, that such growth is one of the best ways to improve the longterm future, and that AMF's impact on growth is morally far more important than the considerations which motivated the choice of AMF in the first place, namely its impact on child mortality in the short term. Satisfied that you have now met the challenge of cluelessness, should you go ahead and click the 'donate' button?

I think you obviously shouldn't. It seems obvious to me that at minimum you should now go and find an intervention that was explicitly chosen for the purpose of boosting long term growth by the largest amount possible. Since AMF was apparently able to predictably and tractably impact the long term via incidentally impacting growth, it seems like you should be able to do much better if you actually try, and for the optimal approach to improve the long term future to be increasing growth via donating to AMF would be a prime example of Surprising and Suspicious Convergence. In fact, the opening quote from that piece seems particularly apt here:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

Similar thoughts would seem to apply to other possible side-effects of AMF donations as well: population growth impacts, impacts on animal welfare (wild or farmed), etc. In no case do I have reason to think that AMF is a particularly powerful lever to move those things, and so if I decide that any of them is the Most Important Thing then AMF would not even be on my list of candidate interventions.

Indeed, none of the people I know who think that the far future is massively morally important and that we can tractably impact it focus their do-gooding efforts on AMF, or anything remotely like AMF. To the extent they give money to AMF-like things, it is for appearances' sake, for personal comfort, or as a hedge against their central beliefs about how to do good being very wrong (see e.g. this comment by Aaron Gertler). As a result, cluelessness arguments appear to me to be addressing a constituency that doesn't actually exist, and attempts to resolve cluelessness are a 'solution looking for a problem'.

If cluelessness arguments are intended to have an impact on the actual people donating to short term interventions as a primary form of doing good, they need to engage with the actual disagreements those people have, namely the questions of whether we can actually predict the size/direction of the longterm consequences despite the natural lack of feedback loops (see e.g. KHorton's comment here or MichaelStJules' comment here), the empirical question of whether the impacts of our actions do in fact wax or wane over time (see e.g. reallyeli here), or the legion of potential philosophical objections.

Instead, cluelessness arguments appear to me to assume away all the disagreements, and then say 'if we assume the future is massively morally relevant compared to the present, that we can tractably and predictably impact said future, and a broadly consequentialist approach, then one should be longtermist in order to maximise good done'. Which is in fact true, but unexciting once made clear.

Comment by AGB on Retention in EA - Part III: Retention Comparisons · 2021-02-10T22:55:36.212Z · EA · GW

So my first reaction to the Youth Ministry Adherence data was basically the opposite of yours, in that I looked at it and thought 'seems like they are doing a (slightly) better job of retention'. Reviewing where we disagree, I think there's a tricky thing here about distinguishing between 'dropout' rates and 'decreased engagement' rates. Ben Todd's estimates which you quote are explicitly trying to estimate the former, but when you compare to:

those listed as “engaged disciples” who continue to self-report as “high involvement”

...I think you might end up estimating the latter. 'High involvement' was the highest of four possible levels, and 'Engaged disciple' was also the highest of four possible levels. By default I'd look at the number who are neither moderately nor highly involved, i.e. 8% rather than 42%.

More generally, my understanding is that Ben was counting someone as not having dropped out if they were still doing something as engaged as fulfilling a GWWC pledge, based on the quote below. So if you start at a much higher level than that (like...attending the Weekend Away), there's a lot of room to decrease engagement substantially while still being above the bar and not having 'dropped out'. Which in turn means I'd generally be aiming to have similar leeway for 'regression to the mean' in the comparisons. Or you can compare everything to Ben's GWWC dropout number of 40%, which has no such leeway, as you do with the similarly-no-leeway case of vegetarianism.

I appreciate this is a highly subjective call though; this is very much just my two cents. I could easily imagine changing my mind if I looked at the Youth Ministry information more closely and decided that 'Engaged Disciple' actually constituted some kind of 'super-high-involvement' category.

The list contains 69 names. Several of the team went over the names checking who is still involved. We started with our own knowledge, and then checked our own engagement data for ambiguous cases.

We counted someone as ‘still involved’ if they were doing something as engaged as fulfilling a GWWC pledge.

On this basis, we counted about 10 people we think have ‘dropped out’, which is 14% of the total in 6 years.

Comment by AGB on Retention in EA - Part III: Retention Comparisons · 2021-02-07T16:03:47.082Z · EA · GW

Cool series, thanks for sharing on the forum. One nitpick:

ACE estimates that the average vegetarian stays vegetarian for 3.9-7.2 years, implying a five-year dropout rate of 14-26%.

I'm not sure how your rate is being calculated from ACE's figures here, but at first pass it seems wrong? Since 5 years is within but slightly towards the lower end of the range given for how long the average vegetarian stays vegetarian, I'd assume we'd end up with something more like a ~45% five-year dropout rate. By contrast, a 14-26% five-year dropout rate would suggest that >50% of vegetarians are still vegetarian after two such periods, i.e. 10 years. 
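Spelling out the arithmetic behind that last sentence, as a rough sketch that assumes the same dropout rate applies to each successive five-year period:

```python
# Rough check of the implied retention numbers, assuming the same dropout
# rate applies to each successive five-year period (a simplifying assumption).
for five_year_dropout in (0.14, 0.26):
    ten_year_retention = (1 - five_year_dropout) ** 2
    print(f"5-year dropout of {five_year_dropout:.0%} implies "
          f"{ten_year_retention:.0%} still vegetarian after 10 years")
# 5-year dropout of 14% implies 74% still vegetarian after 10 years
# 5-year dropout of 26% implies 55% still vegetarian after 10 years
```

That looks hard to square with an average stay of 3.9-7.2 years.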

If I'm misunderstanding either stat, just let me know and I will happily retract.

Comment by AGB on Introduction to Longtermism · 2021-01-30T12:55:30.478Z · EA · GW

Thank you for this! This is not the kind of post that I expect to generate much discussion, since it's relatively uncontroversial in this venue, but is the kind of thing I expect to point people to in future. 

I want to particularly draw attention to a pair of related quotes partway through your piece:

I've tried explaining the case for longtermism in a way that is relatively free of jargon. I've argued for a fairly minimal version — that we may be able to influence the long-run future, and that aiming to achieve this is extremely good and important. The more precise versions of longtermism from the proper philosophical literature tend to go further than this, making some claim about the high relative importance of influencing the long-run future over other kinds of morally relevant activities.

***

Please note that you do not have to buy into strong longtermism in order to buy into longtermism!

Comment by AGB on Money Can't (Easily) Buy Talent · 2021-01-23T17:11:43.076Z · EA · GW

I was surprised to discover that this doesn't seem to have already been written up in detail on the forum, so thanks for doing so. The same concept has been written up in a couple of other (old) places, one of which I see you linked to and I assume inspired the title:

Givewell: We can't (simply) buy capacity

80000 Hours: Focus more on talent gaps, not funding gaps

The 80k article also has a disclaimer and a follow-up post that felt relevant here; it's worth being careful about a word as broad as 'talent':

Update April 2019: We think that our use of the term ‘talent gaps’ in this post (and elsewhere) has caused some confusion. We’ve written a post clarifying what we meant by the term and addressing some misconceptions that our use of it may have caused. Most importantly, we now think it’s much more useful to talk about specific skills and abilities that are important constraints on particular problems rather than talking about ‘talent constraints’ in general terms. This page may be misleading if it’s not read in conjunction with our clarifications.

Comment by AGB on richard_ngo's Shortform · 2021-01-21T14:05:31.414Z · EA · GW

But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?


I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when I cash out people's claims it usually turns out they are asserting 10x - 100x multipliers, not 100x - 1000x multipliers, let alone higher than that. It appears the divergence in our bottom lines is coming from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn't actually think their cause is best under my values, I should just move on. 

As an aside, I know you wrote recently that you think more work is being done by EA's empirical claims than by its moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree they can save lives in the developing world incredibly cheaply, in fact usually giving lower numbers than I think are possible. We aren't actually that far apart on the empirical state of affairs. They just don't want to. They aren't refusing to because they have even better things to do, because most people do very little. Or as Rob put it:

Many people donate a small fraction of their income, despite claiming to believe that lives can be saved for remarkably small amounts. This suggests they don’t believe they have a duty to give even if lives can be saved very cheaply – or that they are not very motivated by such a duty.

I think that last observation would also be my answer to 'what evidence do we have that we aren't in the second world?' Empirically, most people don't care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it's debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF were the best way to improve animal welfare.

Comment by AGB on richard_ngo's Shortform · 2021-01-21T09:10:00.097Z · EA · GW

I think we’re still talking past each other here.

You seem to be implicitly focusing on the question ‘how certain are we these will turn out to be best’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities’.

Listing unaccounted-for second-order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Currently, I'm not seeing a clear argument for that. 'Might have wildly large impacts', 'very rough estimates', 'policy can have enormous effects'... these are all phrases that increase uncertainty rather than concretely change EVs, and so are decision-irrelevant. (That's not quite true; we should penalise rough things' calculated EV more in high-uncertainty environments due to winners' curse effects, but that's secondary to my main point here.)

Another way of putting it is that this is the difference between your confidence level that what you currently think is best will still be what you think is best 20 years from now, versus trying to identify the best all-things-considered donation opportunity right now with your limited information.

So concretely, I think it’s very likely that in 20 years I’ll think one of the >20 alternatives I’ve briefly considered will look like it was a better use of my money that Givewell charities, due to the uncertainty you’re highlighting. But I don’t know which one, and I don’t expect it to outperform 20x, so picking one essentially at random still looks pretty bad.

A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn’t happened. That’s the relevance of those decisions to me, rather than any belief that they’ve done a secret Uber-Analysis.
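Tangentially, here's a toy simulation of the winners'-curse parenthetical a couple of paragraphs up. The setup (20 options, all with true value zero, estimates equal to true value plus noise) is invented purely to show the direction of the effect:

```python
import random

random.seed(0)

def average_overshoot(noise_sd, n_options=20, trials=20_000):
    """Average amount by which the highest-estimate option overstates its true value,
    when every option's true value is 0 and estimates are true value plus noise."""
    total = 0.0
    for _ in range(trials):
        estimates = [random.gauss(0, noise_sd) for _ in range(n_options)]
        # True values are all zero, so the winning estimate is pure overshoot.
        total += max(estimates)
    return total / trials

# The rougher the estimates, the more the apparently 'best' option flatters itself.
print(average_overshoot(noise_sd=0.1))  # roughly 0.19
print(average_overshoot(noise_sd=1.0))  # roughly 1.9
```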

Comment by AGB on richard_ngo's Shortform · 2021-01-20T16:20:29.628Z · EA · GW

Thanks for the write-up. A few quick additional thoughts on my end:

  • You note that OpenPhil still expect their hits-based portfolio to moderately outperform Givewell in expectation. This is my understanding also, but one slight difference of interpretation is that it leaves me very baseline skeptical that most 'systemic change' charities people suggest would also outperform, given the amount of time Open Phil has put into this question relative to the average donor. 
  • I think it's possible-to-likely I'm mirroring your 'overestimating how representative my bubble was' mistake, despite having explicitly flagged this type of error before because it's so common. In particular, many (most?) EAs first encounter the community at university, whereas my first encounter was after university, and it wouldn't shock me if student groups were making more strident/overconfident claims than I remember in my own circles. On reflection I now have anecdotal evidence of this from 3 different groups.
  • Abstaining on the 'what is the best near-term human-centric charity' question, and focusing on talking about the things that actually appear to you to be among the best options, is a response I strongly support. I really wish more longtermists took this approach, and I also wish EAs in general would use 'we' less and 'I' more when talking about what they think about optimal opportunities to do good.

Comment by AGB on My mistakes on the path to impact · 2021-01-16T11:59:42.784Z · EA · GW

(Disclaimer: I am OP’s husband)

As it happens, there are a couple of examples in this post where poor or distorted versions of 80k advice arguably caused harm relative to no advice; over-focus on working at EA orgs due to ‘talent constraint’ claims probably set Denise’s entire career back by ~2 years for no gain, and a simplistic understanding of replaceability was significantly responsible for her giving up on political work.

Apart from the direct cost, such events leave a sour taste in people’s mouths and so can cause them to dissociate from the community; if we’re going to focus on ‘recruiting’ people while they are young, anything that increases attrition needs to be considered very carefully and skeptically.

I do agree that in general it's not that hard to beat 'no advice'; rather, a lot of the need for care comes from simplistic advice's natural tendency to crowd out nuanced advice.

I don’t mean to bash 80k here; when they become aware of these things they try pretty hard to clean it up, they maintain a public list of mistakes (which includes both of the above), and I think they apply way more thought and imagination to the question of how this kind of thing can happen than most other places, even most other EA orgs. I’ve been impressed by the seriousness with which they take this kind of problem over the years.

Comment by AGB on The Folly of "EAs Should" · 2021-01-12T08:12:59.240Z · EA · GW

The ‘any decent shot’ is doing a lot of work in that first sentence, given how hard the field is to get into. And even then you only say ‘probably stop’.

There’s a motte/bailey thing going on here, where the motte is something like ‘AI safety researchers probably do a lot more good than doctors’ and the bailey is ‘all doctors who come into contact with EA should be told to stop what they are doing and switch to becoming (e.g.) AI safety researchers, because that’s how bad being a doctor is’.

I don’t think we are making the world a better place by doing the second; where possible we should stick to ‘probably’ and communicate the first, nuance and all, as you did do here but as Khorton is noting people often don’t do in person.