Posts

AMA: Owen Cotton-Barratt, RSP Director 2020-08-28T14:20:18.846Z · score: 75 (29 votes)
"Good judgement" and its components 2020-08-19T23:30:38.412Z · score: 58 (23 votes)
What is valuable about effective altruism? Implications for community building 2017-06-18T14:49:56.832Z · score: 14 (18 votes)
A new reference site: Effective Altruism Concepts 2016-12-05T21:20:03.946Z · score: 22 (24 votes)
Why I'm donating to MIRI this year 2016-11-30T22:21:20.234Z · score: 34 (34 votes)
Should effective altruism have a norm against donating to employers? 2016-11-29T21:56:36.528Z · score: 11 (15 votes)
Donor coordination under simplifying assumptions 2016-11-12T13:13:14.314Z · score: 7 (7 votes)
Should donors make commitments about future donations? 2016-08-30T14:16:51.942Z · score: 18 (17 votes)
An update on the Global Priorities Project 2015-10-07T16:19:32.298Z · score: 5 (7 votes)
Cause selection: a flowchart [link] 2015-09-10T11:52:07.140Z · score: 10 (10 votes)
How valuable is movement growth? 2015-05-14T20:54:44.210Z · score: 21 (23 votes)
[Link] Discounting for uncertainty in health 2015-05-07T18:43:33.048Z · score: 4 (4 votes)
Neutral hours: a tool for valuing time 2015-03-04T16:33:41.087Z · score: 9 (9 votes)
Report -- Allocating risk mitigation across time 2015-02-20T16:34:47.403Z · score: 6 (6 votes)
Long-term reasons to favour self-driving cars 2015-02-13T18:40:16.440Z · score: 8 (8 votes)
Increasing existential hope as an effective cause? 2015-01-10T19:55:08.421Z · score: 10 (10 votes)
Factoring cost-effectiveness 2014-12-23T12:12:08.789Z · score: 5 (5 votes)
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T11:49:13.771Z · score: 11 (11 votes)
Estimating the cost-effectiveness of research 2014-12-11T10:50:53.679Z · score: 9 (9 votes)
Effective policy? Requiring liability insurance for dual-use research 2014-10-01T18:36:15.177Z · score: 9 (9 votes)
Cooperation in a movement supporting diverse causes 2014-09-23T10:47:11.357Z · score: 18 (18 votes)
Why we should err in both directions 2014-08-21T02:23:06.000Z · score: 7 (6 votes)
Strategic considerations about different speeds of AI takeoff 2014-08-13T00:18:47.000Z · score: 3 (3 votes)
How to treat problems of unknown difficulty 2014-07-30T02:57:26.000Z · score: 3 (3 votes)
On 'causes' 2014-06-24T17:19:54.000Z · score: 1 (1 votes)
Human and animal interventions: the long-term view 2014-06-02T00:10:15.000Z · score: 3 (7 votes)
Keeping the effective altruist movement welcoming 2014-02-07T01:21:18.000Z · score: 15 (11 votes)

Comments

Comment by owen_cotton-barratt on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-14T23:31:38.435Z · score: 12 (6 votes) · EA · GW

Thanks! This largely seems rather better.

One paragraph where you've lost the meaning is:

On the right is a factorisation that I think makes the quantity easier to interpret and measure. But it is only justifiable if the terms I've added cancel out, so I'm going to present the case for why I think it is.

I'm not claiming that my original was the easiest to follow, but the point that needs justifying is not that the terms cancel (that's mathematically trivial), but that the decomposition is actually an improvement in terms of ease of understanding or ease of estimation, relative to the term on the left of the equation.

Comment by owen_cotton-barratt on Some thoughts on EA outreach to high schoolers · 2020-09-14T23:03:08.211Z · score: 11 (10 votes) · EA · GW

I don't want to name individuals on a public forum, but noting that there are at least a couple of individuals at FHI who passed through one of the programmes you mention (I don't know about counterfactual attribution).

Comment by owen_cotton-barratt on Judgement as a key need in EA · 2020-09-13T21:46:25.501Z · score: 6 (5 votes) · EA · GW

I'm actually confused about what you mean by your definition. I have an impression about what you mean from your post, but if I try to just go off the wording in your definition I get thrown by "calibrated". I naturally want to interpret this as something like "assigns confidence levels to their claims that are calibrated", but that seems ~orthogonal to having the right answer more often, which means it isn't that large a share of what I care about in this space (and I suspect is not all of what you're trying to point to).

Now I'm wondering: does your notion of judgement roughly line up with my notion of meta-level judgement? Or is it broader than that?

Comment by owen_cotton-barratt on Judgement as a key need in EA · 2020-09-13T15:15:08.685Z · score: 4 (2 votes) · EA · GW

For one data point, I filled in the EALF survey and had in mind something pretty close to what I wrote about in the post Ben links to. I don't remember paying much attention to the parenthetical definition -- I expect I read it as a reasonable attempt to gesture towards the thing that we all meant when we said "good judgement" (though on a literal reading it's something much narrower than I think even Ben is talking about).

I think that good judgement in the broad sense is useful ~everywhere, but that:

  • It's still helpful to try to understand it, to know better how to evaluate it or improve at it;
  • For reasons Ben outlines, it's more important for domains where feedback loops are poor;
  • The cluster Ben is talking about gets disproportionately more weight in importance for thinking about strategic directions.

Comment by owen_cotton-barratt on An argument for keeping open the option of earning to save · 2020-09-09T23:03:11.097Z · score: 3 (2 votes) · EA · GW

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

Other holders of financial capital may not have enough resources to realistically make up for that.

Thanks for pulling this out, I think this is the heart of the argument. (I think it's quite valuable to show how the case relies on this, as it helps to cancel a possible reading where everyone should assume that they personally will have better judgement than the aggregate community.)

I think it's an interesting case, and worth considering carefully. We might want to consider:

  1. Whether this will actually lead to incorrect spending
    • My central best guess is that there will be enough flow of other money into longtermist-aligned purposes that this won't be an issue in coming decades, but I'm quite uncertain about that
  2. What are the best options for mitigating it?
    • Earning to save is certainly one possibility, but we could also consider e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists

Comment by owen_cotton-barratt on An argument for keeping open the option of earning to save · 2020-09-09T22:52:38.062Z · score: 7 (2 votes) · EA · GW

Thanks for the thoughtful reply!

On reflection I realise that in some sense the heart of my objection to the post was in its vibe, and I think I was subconsciously trying to correct for this by leaning into the vibe (for my response) of "this seems wrong-footed".

But I do think the post tries to caveat a lot and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting.

I quite agree that it's good if even minor considerations can be considered in a quick post. I think the issue is that the tone of the post is kind of didactic, let-me-explain-all-these-things (and the title is "an argument for X", and the post begins "I used to think not-X"): taken together, these project quite a sense of "X is solid", and while it's great that it had lots of explicit disclaimers about this just being one consideration etc., I don't think they really do the work of cancelling the tone for feeding into casual readers' gut impressions.

For an exaggerated contrast, imagine if the post read like:

A quick thought on earning-to-save

I've been wondering recently about whether earning-to-save could make sense. I'm still not sure what I think, but I did come across a perspective which could justify it.

[argument goes here]

What do people think? I haven't worked out how big a deal this seems compared to the considerations against earning to save (and some of them are pretty substantial), so it might still be a pretty bad idea overall.

I think that would have triggered approximately zero of my vibe concerns.

Alternatively I think it could have worked to have a didactic post on "Considerations around earning-to-save" that felt like it was trying to collect the important considerations (which I'm not sure have been well laid out anywhere, so there might not be a canonical sense of which arguments are "new") rather than particularly emphasise one consideration.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T16:20:06.777Z · score: 6 (3 votes) · EA · GW

I didn't downvote, but I also didn't even understand whether you were agreeing with me or disagreeing with me (and strongly suspected that "would have to" was an error in either case).

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T01:07:21.227Z · score: 16 (13 votes) · EA · GW

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

That's fine! :)

In turn, an apology: my controversial view has baited you into response, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try and exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it's helpful for the exhibition to be able to draw attention to features of a specific instance, and you're providing what-seems-like-implicit-permission for me to do that. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA.

To be clear: I strongly agree with this, and this was a big part of what I was trying to say above.

So donating to a seeing eye dog charity isn't really a good thing to do.

This is non-central, but FWIW I disagree with this. Donating to the guide dog charity usually is a good thing to do (relative to important social norms where people have property rights over their money), it's just that it turns out there are fairly accessible actions which are quite a lot better.

Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different.

This, I'm afraid, is the type of statement that really bugs me. It's trying to collapse a complex issue onto simple dimensions, draw a simple conclusion there, and project it back to the original complex world. But in doing so it's thrown common-sense out of the window!

If I believed that choosing to follow a ve*an diet usually didn't have an opportunity cost, I would expect to see:

  • People usually willing to go ve*an for a year for some small material gain
    • In theory, if there were truly no opportunity cost, even something trivial like $10 should suffice, but I think many non-ve*ans would be unwilling to do this even for $1000
    • [As an aside, I think taxes on meat would probably be a good policy that might well be accessible]
  • Almost everyone who goes ve*an for ethical reasons keeping it up
    • In fact some significant proportion of people stop

Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals.

I certainly don't claim this in any utilitarian comparison of welfare. But now the argument seems almost precisely analogous to:

"You could help the poorest people in the world a tremendous amount for the cost of a cup of coffee. Since your welfare shouldn't outweigh theirs, you should forgo that cup of coffee, and every other small luxury in your life, to give more to them."

I think EA correctly rejects this argument, and that it's correct to reject its analogue as well. (I think the argument is stronger for ve*anism than giving to the poor instead of buying coffee; but I also think that there are better giving opportunities than giving directly to the poor, and that when you work it through the coffee argument ends up being stronger than the corresponding one for ve*anism.)

---

Again, I'm not claiming that EAs shouldn't be ve*an. I think it's a morally virtuous thing to do!

But I don't think EAs have a monopoly on virtue. I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?" then the implicature is that this is a bigger deal than, say, moving from giving away 7% of your income to giving away 8%, and that this is badly misleading.

Notes:

  • There may be some people for whom the opportunity cost is trivial
    • I think there are probably quite a few people for whom the opportunity cost is actually negative -- i.e. it's overall easier for them to be ve*an than not
  • I would feel very good about encouragement to check whether people fall into one of these buckets, as in cases where they do then dietary change may be a particularly efficient way to do good
  • I'd also feel very good about moral exhortation to be ve*an that was explicit that it wasn't grounded in EA thinking, like:
    • "Many EAs try to be morally serious in all aspects of their lives, beyond just trying to optimise for the most good achievable. This leads us to ve*anism. You might want to consider it."

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T16:00:11.696Z · score: 13 (4 votes) · EA · GW

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision?

There was something of an active decision here. It was partly based on a sense that the returns had been good when I'd previously invested attention in mentoring junior researchers, and partly on a sense that there was a significant bottleneck here for the research community.

2. What do you think makes running RSP your comparative advantage (assuming you think that)? 

Overall I'm not sure what my comparative advantage is! (At least in the long term.) 

I think:

  • Some things which make me good at research mentoring are:
    • being able to get up to speed on different projects quickly
    • holding onto a sense of why we're doing things, and connecting to larger purposes
    • finding that I'm often effective in 'reactive' mode rather than 'proactive' mode
      • (e.g. I suspect this AMA has the highest ratio of public-written-words / time-invested of anything substantive I've ever done)
    • being able to also connect to where the researcher in front of me is, and what their challenges are
  • There are definitely parts of running RSP which seem not my comparative advantage (and I'm fortunate enough to have excellent support from project managers who have taken ownership of a lot of the programme)

3. Any thoughts on how to test or build one's skills for that sort of role/pathway?

  • Read a lot of research. Form views (and maybe talk to others) about which pieces are actually valuable, and how. Try to work out what seems bad even about good pieces, or what seems good even about bad pieces.
  • Be generous with your time looking to help others with their projects. Check in with them afterwards to see if they found it useful. (Try to ask in a way which makes it safe for them to express that they did not.)
  • Try your own hand at research: first-hand experience of the challenges is helpful for mentoring others through them.

(I've focused on the pathway of "research mentorship"; I think there are other parts you were asking about which I've ignored.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:48:43.246Z · score: 8 (3 votes) · EA · GW

Gee, this is really hard to measure.

I'd guess that somewhere between 10% and 30% is done as part of something that we'd naturally call the "standard academic process"?

I think that there are some good reasons for deviation, and some things that academic norms provide that we may be missing out on.

I think academia is significantly set up as a competitive process, where part of the game is to polish your idea and present it in the best light. This means:

  • It encourages you to care about getting credit, and people are discouraged from freely sharing early-stage ideas that they might turn into papers, for fear of being scooped
    • This seems broadly bad
  • It encourages people to put in the time to properly investigate the ins and outs of an idea, and find the clearest framing of it, making it more efficient for later readers
    • This seems broadly good

I'd like it if we could work out how to get more of the good here with less of the bad. That could mean doing a larger proportion of things within some version of the academic process, or could mean working out other ways to get the benefits.

There's also a credentialing benefit to doing things within the academic process. I think this is non-negligible, but also that if you do really high-quality work anywhere, people will observe this and come to you, so I don't think it's necessary to rest on that credentialing.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:40:57.826Z · score: 4 (3 votes) · EA · GW

This is an interesting question, but I don't think there's a decent short-answer version; it's more like investing several hours or not at all.

So I'll take this as a prompt to consider the several-hour version, but won't answer for now.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:36:35.361Z · score: 4 (2 votes) · EA · GW

Malevolence seems potentially important to me, although I mostly haven't been thinking about it (except a bit about psychopathy and its absence). Things more like game-theoretic dynamics are where a good portion of my attention has been ... but I don't want to claim this means they're more important.

[meta: this is a short answer because while I might have things to say about crisper questions within this space, for saying things-in-general I think it makes more sense to wait until I have coherent enough ideas to publish something.]

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:32:46.249Z · score: 4 (2 votes) · EA · GW

Good question.

Of the two options I'd be tempted to say it's more of a priority to spread the underlying arguments, but actually I think something more nuanced: it's a priority to keep engaging with people about the underlying arguments, finding where there seems to be the greatest discomfort and turning a critical eye on the arguments there, looking to see if we can develop stronger versions of them.

I think that talking about the tentative conclusions along with this is important, both for growing the network of people sympathetic to them, and for providing a concrete instantiation of what is meant by the underlying philosophy (too much risk of talking past each other or getting lost in abstraction-land without this).

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:28:37.840Z · score: 4 (2 votes) · EA · GW

I guess I think that "decision-making under deep uncertainty" is mostly too broad a category to be able to say useful things about (although maybe we can draw together useful lessons that seem to hold in a variety of more specialised contexts), and we're better off looking at more particular setups and reasoning about those.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:24:28.970Z · score: 6 (3 votes) · EA · GW

I don't feel like I'm at all an expert in biosecurity careers, but I agree that directionally they seem more credentialist.

I think this is a consideration against RSP, although it doesn't feel like an overwhelming one, since:

  • It could be a reasonable option before a PhD
    • This is particularly relevant if taking the time to think about what you want to work on allows you to do a PhD in which your work is much closer to things you eventually care about
    • (similarly it could be a good option for some people after a PhD)
  • There may well be some roles (now or in the future) which are less credential-locked

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:18:56.937Z · score: 5 (3 votes) · EA · GW

Generally Oxford lectures are open to any university members, although:

  • They wouldn't generally get "academic credit" for this
  • They wouldn't necessarily be able to join accompanying classes (although we might be able to arrange this)
  • I've no idea what the situation is now that so many things are remote because of COVID-19

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:16:27.607Z · score: 6 (4 votes) · EA · GW

There's a class of things which feel majorly helpful, but it's hard to distinguish between whether I was helped by the background in pure mathematics, or whether I have some characteristics which both helped me in mathematics and help me now (I suspect it's some of both):

  • Being good at framing things
    • Turning things over in my head, looking for the angle which makes them most parsimonious, and easiest to comprehend clearly
    • Relatedly, feeling happy to dive in and try to make up theory, but keep it grounded by "this has to actually explain the things we want to know about"
    • These are useful skills when faced with domains where we haven't yet settled on paradigms which we're satisfied capture the important parts of what we care about
  • Generally keeping track of precisely what are the epistemic statuses of different claims, and how they interact
    • This is a useful skill for domains where we're projecting out beyond things we can easily check empirically

Then there are some cases where I was more directly applying some mathematical thinking, e.g.:

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:01:55.302Z · score: 12 (5 votes) · EA · GW

Estimating the value of research seems really hard to me (and this is significantly true even in retrospect).

That said, some candidates are:

  • Work making the point that we should give outsized attention to mitigating risks that might manifest unexpectedly soon, since we're the only ones who can
    • At the time it didn't seem unusually valuable, but I think it was relatively soon after (a few months) that I saw some people changing behaviour in light of the point, which increased my sense of its importance
  • Work on cost-effectiveness of research of unknown difficulty, particularly the principle of using log returns when you don't know where to start
    • Felt sort-of important at the time, although I think the kind of value I anticipated hasn't really manifested
    • I have felt like it's been useful for my thinking about pragmatic prioritisation in a variety of domains (and I've seen some others get some value from that); however, the logarithm is an obvious-enough functional form that maybe it didn't really add much
  • Maybe something where it was more about dissemination of ideas than finding deep novel insights (I think it's very hard to draw a line between what counts as "research" or what doesn't), such as Prospecting for Gold, or How valuable is movement growth?
    • Quite a few people have told me that they got something out of one or both of those pieces, although it's extremely hard to assess the counterfactuals
    • I felt like I was doing something significant in these cases (particularly when writing the talk Prospecting for Gold)
  • Overall I'd be hard pressed to choose between the above, although I'd tend to guess these are more valuable than most other pieces I've done (excepting some recent work that I don't yet want to judge, and with the caveat that I'm surely forgetting some)
    • That said, some of the more policy-ish pieces of research might still turn out to be the most valuable, if they got picked up somewhere important, but so far I'll not count them

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T13:38:02.790Z · score: 7 (4 votes) · EA · GW

Something like: it seems like the people we're taking on the programme are doing kind of good things, but when we dig into counterfactual analysis it seems like they might on average have done more if they hadn't joined the programme (perhaps because e.g. normal academic pressures are surprisingly helpful motivationally, or because we're fostering a community which is too inward-looking).

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T13:34:39.408Z · score: 7 (4 votes) · EA · GW

Something like: it catalysed the creation of a whole stream of major new projects (led by scholars who used the space afforded by the programme to think seriously about possibilities, and who are well-networked with the broader x-risk ecosystem which makes coordination and recruitment easier).

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T13:18:14.727Z · score: 2 (1 votes) · EA · GW

Will most of the value of RSP come from direct work done by scholars or from scholars [and program] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]

I want to say "yes, by indirect influence", but I expect that this will be true also of most cases of consulting policy-makers (this would remain true even if you got to set policies directly, as I think that most things we do have value filtered through what future people do). This makes me think I'm somehow using a different lens on the world which makes it hard to answer this question directly.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T13:15:26.588Z · score: 3 (2 votes) · EA · GW

When working on a paper, do you think the value comes from field-building or from a small personal chance of, say, coming up with a crucial consideration?

This question doesn't quite feel right to me. I think that when working on a paper I normally have an idea of what insights I want it to convey. The value might be in field-building, or the direct value of disseminating that insight (not counting its spillover to field-building).

Work that might find crucial insights feels like it happens before the paper-writing stage. I try to spend some time in that mode. 

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T13:12:12.091Z · score: 7 (4 votes) · EA · GW

Of these, I think RSP is most aiming at "next-generation", with "this generation" a significant secondary target.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T12:12:23.347Z · score: 17 (4 votes) · EA · GW

For RSP, I think that:

  • In starting RSP, I had an implicit theory of change in my head
    • There are quite a few facets of this (mechanisms for value produced, a continuum of hypotheses, etc.)
    • One important facet (particularly for early-RSP) was a sense of "pretty sure there's significant value available via something in this vicinity, let's try it and see if we can home in"
  • I explicitly share and communicate parts of this model to the extent that it's accessible for me to do so
    • This involved some conversations with people before RSP started, and some presenting thoughts to the research scholars as the programme started, and periodically returning to it
  • As RSP has developed and other people have become major stakeholders, they've developed their own implicit theories of change
    • We make some space to discuss these / exchange models
  • As RSP matures, it will make more sense to pin down a theory of change and have it explicit and shared
    • The facet of "let's work out what here is good" will naturally diminish, and we'll work out which other facets are best to lean on

Some general thoughts:

  • Advantages of having an explicit theory of change:
    • Makes it easier to sync up about direction/priorities/reasons for doing things
    • Makes it easier for people to engage critically, or otherwise to notice mistakes and course-correct
  • Disadvantages of having an explicit theory of change:
    • Easy to have the case where your best expression of something is dumber than your real internal sense of it
      • In this case it may be preferable to be guided by the internal sense rather than the explicit version
      • (this is at least some distant relative of Goodhart's law)
    • To the extent that you're going to be guided by your internal sense rather than an explicit version, sharing something as an explicit theory of change can be misleading
  • In general I think it's good to encourage lots of explicit discussion about theories of change
    • Ideally without committing to reaching an "answer", but having that as a goal may be helpful for prompting the discussion

I think that I find the disadvantages quite emotionally resonant, which may pull me to err too far in the direction of not being explicit. I have appreciated some cases where people have pushed me towards "let's have a discussion where we're pretty explicit about best guesses".

I think FHI has an explicit theory of change even less than RSP does; my guess is that Nick Bostrom is also averse to incurring the costs of these disadvantages (and maybe more strongly so than me), but that's speculation.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-01T22:58:13.423Z · score: 12 (8 votes) · EA · GW

That in thinking about community/movement building, it's more important to consider something like how people should be -- e.g. what virtues should be cultivated/celebrated -- rather than just what people should do (although of course both matter). 

(That's in impression space. I have various drafts related to this, and I hope to get something public up in the next few months, so I'll leave it brief for now.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-01T22:42:03.650Z · score: 28 (16 votes) · EA · GW

That personal dietary choices are important on consequentialist effectiveness grounds. 

I actually think there are lots of legitimate and powerful reasons for EAs to consider veg*nism, such as:

  • A ~deontological belief that it's wrong to eat animals
  • A desire for lifestyle choices that help you connect with what you care about
  • Signalling caring
  • A desire for shared culture with people you share values with

... but it feels to me almost intellectually dishonest to have it be part of an answer to someone saying "OK, I'm bought into the idea that I should really go after what's important, what do I do now?"

(I'm not vegetarian, although I do try to only consume animals I think have had reasonable welfare levels, for reasons in the vicinity of the first two listed above. I still have some visceral unease about the idea of becoming vegetarian that is like "but this might be mistaken for being taken in by intellectually dishonest arguments".)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-01T15:47:43.719Z · score: 8 (4 votes) · EA · GW

I'm just using "deep uncertainty" to refer to a theme of situations where there are challenges about how you get going. I'm not thinking of it as a crisp referent. 

I guess that complex cluelessness would be a subclass of cases of deep uncertainty in my ontology, but I also mean to include e.g. normative uncertainty; Knightian uncertainty; heuristics for estimating probabilities when you don't really know where to start.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-01T15:37:39.575Z · score: 5 (3 votes) · EA · GW

We're doing a combination of:

  • Looking at what people go on from RSP to do
  • Surveys (& conversations) asking research scholars how useful they have found RSP (and in what that value consists), and what they guess they would have done otherwise
  • Comparison of the above with some people who narrowly didn't join RSP (for one reason or another)
  • Looking at to what extent work done by research scholars while on the programme is directly useful
  • Our (=RSP management's) independent impressions of whether / how much we've helped people

(I think we're still finding our feet with this.)

There are a relatively small number of individuals who have gone through the programme, and it's important to us to protect their privacy, so at the moment we don't have plans to publish any of this. When we have slightly more data I kind of like the idea of publishing some aggregate summaries, but I haven't thought seriously about whether this will be possible to do in a way which is properly privacy-preserving while also actually useful to readers.

Comment by owen_cotton-barratt on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T20:20:25.741Z · score: 7 (5 votes) · EA · GW

I like the point about chains of onwards infections. 

Actually there might be a dynamic where the number of people you expect to infect is also roughly proportional to your exposure, so the total cost goes with something like the square of your exposure? (If your exposure is small, your personal risk dominates, as it doesn't have the squared term.)
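
To make the shape of that concrete, here is a minimal sketch in Python. The structure (personal risk linear in exposure, onward harm roughly quadratic) is just the dynamic described above; all constants and the contact-scaling assumption are hypothetical, not taken from the microCOVID model.

```python
# Minimal sketch (hypothetical numbers): total expected cost of an activity,
# where exposure drives both your own infection risk and, if you are infected,
# how many others you would go on to expose.

PERSONAL_COST = 1.0          # cost you place on getting infected yourself (arbitrary units)
COST_PER_ONWARD_CASE = 3.0   # cost you place on each onward infection you cause
CONTACTS_SCALE = 0.5         # assumption: infectious contacts scale linearly with exposure

def total_cost(exposure: float) -> float:
    """exposure: probability of getting infected, taken as proportional to activity level."""
    personal = PERSONAL_COST * exposure
    # Onward harm ~ P(you get infected) * (contacts, which also scale with exposure):
    # this is the quadratic term from the comment above.
    onward = COST_PER_ONWARD_CASE * exposure * (CONTACTS_SCALE * exposure)
    return personal + onward

for e in (0.001, 0.01, 0.1):
    # At small exposures the linear (personal) term dominates.
    print(f"exposure {e:.3f}: total cost {total_cost(e):.5f}")
```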

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T19:53:17.337Z · score: 4 (2 votes) · EA · GW

Sorry that was poorly worded.

I mean for various activities X, estimating how many resources end up devoted to longtermist ends as a result of X (and what the lags are).

e.g. some Xs = writing articles about longtermism; giving talks in schools; talking about EA but not explicitly longtermism; outreach to foundations; consultancy to help people give better according to their values (and clarify those values); ...

Comment by owen_cotton-barratt on An argument for keeping open the option of earning to save · 2020-08-31T17:51:43.618Z · score: 13 (10 votes) · EA · GW

Hmm, this argument feels confused to me.

You say you take (1) to be obvious, but I think that you're treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system. In fact with the right opportunities even a maximally patient longtermist will want to give 100% of capital (or more, via borrowing) in a given year. (If those opportunities will quickly enough return more capital that's smart+aligned with their values.)
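
To illustrate why the optimal giving percentage is endogenous, here is a minimal sketch with made-up growth rates (both rates are assumptions for illustration, not estimates): if granting now seeds aligned capital that compounds faster than invested money grows, giving everything now dominates even for a maximally patient funder.

```python
# Minimal sketch (hypothetical rates): a maximally patient funder compares letting
# capital compound in the market against granting it now to opportunities that
# "return" smart, value-aligned capital at a faster rate.

MARKET_RATE = 0.05   # assumed annual return on invested capital
ALIGNED_RATE = 0.12  # assumed annual growth of aligned capital seeded by granting now

def compound(principal: float, rate: float, years: int) -> float:
    return principal * (1.0 + rate) ** years

YEARS = 30
print(f"invest, then give later:    {compound(1.0, MARKET_RATE, YEARS):.2f}")
print(f"give now, let it compound:  {compound(1.0, ALIGNED_RATE, YEARS):.2f}")
# With these (made-up) rates the second option ends up far larger, so the optimal
# giving percentage depends on the available opportunities, not on patience alone.
```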

So the argument really feels like:

Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you're smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.

I think if you don't have reason to believe you'll do better than the aggregate community then this shouldn't get much weight; if you do have such reason then it's legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?

It's quite possible I'm mischaracterising your argument somehow! But at present I'm worried that this isn't really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T14:55:05.349Z · score: 14 (4 votes) · EA · GW

Ahh, I think I was interpreting your general line of questioning as being:

A) Absent ability to get sufficient mentorship within EA circles, should people go outside to get mentorship?

... whereas this comment makes me think you were more asking:

B) Since research mentorship/management is such a bottleneck, should we get people trying to skill up a lot in that?

I think that some of the most important skills for research mentorship from an EA perspective include transferring intuitions about what is important to work on, and that this will be hard to learn properly outside an EA context (although there are probably some complementary skills one can effectively learn).

I do think that if the questions were in the vein of B) I'm more wary in my agreement: I kind of think that research mentorship is a valuable skill to look for opportunities to practice, but a little hard to be >50% of what someone focuses on? So I'm closer to encouraging people doing research that seems valuable to look for opportunities to do this as well. I guess I am positive on people practising mentorship generally, or e.g. reading a lot of different pieces of research and forming inside views on what makes some pieces seem more valuable. I think the demand for these skills will become slightly less acute but remain fairly high for at least a decade.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T11:07:54.476Z · score: 6 (3 votes) · EA · GW

Is it about the research scholars themselves spending part of their career before or after RSP outside of EA orgs?

Roughly, yes. e.g. I think several people currently at RSP have had some career outside first, and I think that they are typically deriving some real benefit from that (i.e. RSP is providing a complement rather than a substitute for the experience they have already).

(Not claiming that RSP is only for people with such experience!)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T11:02:46.093Z · score: 4 (2 votes) · EA · GW

Perhaps you mean something like "people who are decent at working out what strategies and interventions we should pursue amongst the innumerable possibilities"? (As opposed to what fine-grained decisions individual people/orgs should make on a day to day level.)

Yes, I think that's mostly a better characterisation. 

(There's definitely some grey area, as e.g. I think that people who are good at the thing I'm pointing to are in touch with the reasons behind a choice of intervention, in a way that feeds into some of the decisions about how to implement it on a day-to-day level.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T06:07:00.787Z · score: 13 (5 votes) · EA · GW

1. Do all of those claims seem true to you?

Yes, with some important comments:

  • I don't think this is centrally about "researchers", but about "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities"
    • This is a class we need more of in EA (and particularly longtermist EA); research is one of the (major) applications of such people, but far from the only one
  • Mentorship/management is more like a thousand small things than two big things
    • Often people will be better off learning from multiple strong mentors than one, because they'll be good at different subcomponents
  • There are very substantial reasons beyond this to spend part of one's (research) career outside of explicitly EA orgs, particularly if you get an opportunity to work with outstanding people
    • Such as:
      • You can better learn the specialist knowledge belonging to the relevant domain by spending time working with top experts
        • Or idiosyncratic-but-excellent pieces of mentorship
      • To the extent that EA has important insights that are relevant in many domains, working closely with smart people is a good opportunity to share those insights
      • It's a powerful way to develop a network
    • I gave the reasons above in terms of "how this seems locally good", but it might be more natural to think about it globally, and notice that a version of EA which is very insular and just builds up its own stuff kind-of cut off from the rest of intellectual endeavour seems way worse (in expectation) than a version which has lots of surface area and good interfaces

2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it?

Hmm, I think that I'm less conceiving of this as a problem-to-be-fixed than you are. Partially it's because I do see these substantial benefits of spending part of one's career outside of explicitly EA orgs -- I don't think it's important that everyone does this (and it doesn't have to be at the start of their career), but important that there's at least a solid fraction of people who have done so.

That said, I do think it's somewhat a problem, and there are people (whether or not they've already spent part of their career outside of explicitly EA orgs) who would be in a good position to contribute directly to EA work if only they had the right mentorship. I think maybe we're on the way to having the most acute versions of it fixed (though I'm not that confident about that), but I think the basic dynamic will remain true for a long time.

4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?

I think things like RSP are a good way to address a facet of this problem, of getting people towards "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities". I think that this can be significantly complementary to people spending part of their career outside of EA orgs.

(I think this last paragraph in particular may not be very clear. Feel free to poke at what doesn't make sense.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-31T05:15:24.165Z · score: 17 (5 votes) · EA · GW

I'm not sure I really believe that "patient vs impatient longtermists" cleaves the world at its joints. I'll use the terms to mean something like resources aimed at reducing existential risk over the next fifty years or so, versus aiming to be helpful on a timescale of over a century?

In either case I think it depends a lot on the resource in question. Many resources (e.g. people's labour) are not fully fungible with one another, so it can depend quite a bit on comparative advantage.

If we're talking about financial resources, these are fairly fungible. There I tend to think (still applies to both "patient" and "impatient" flavours of longtermism):

  • It doesn't make so much sense to analyse at the level of the individual donor
  • Instead we should think about the portfolio we want longtermist capital as a whole to be spread across, and what are good ways to contribute to that portfolio at the margin
    • Sometimes particular donors will have comparative advantage in giving to certain places (e.g. they have high visibility on a giving opportunity so it's less overhead for them to assess it, and it makes sense for them to fill it)
    • Sometimes it's more about coordinating to have roughly the right amount total spent, and not fritter away too much on donor-of-last-resort type games
  • Some particular opportunities around now look excellent, but the scale of them is such that it's hard to absorb a large fraction of longtermist capital over (say) the next five years, so it makes sense for some money to be held in traditional investments

Then more specialised statements:

  • From a "patient" perspective, we want to invest in anything which grows the pool of informed longtermist resources faster than traditional investment, e.g. some versions of:
    • value-spreading
      • this is a lazy catch-all term which includes a ton of very different looking activities; I think getting better resolution on strategies that are worth employing here is pretty important
    • producing better intellectual material about what longtermists should do (includes research + dissemination, and works by increasing the degree people are informed plus by making the set of ideas seem more legit+solid, so easier to attract further people)
    • (specialised) education
    • research into what type of activities have good long-term returns for longtermists
  • From an "inpatient" perspective, we want to invest in opportunities which are plausibly on critical paths to averting existential catastrophes, e.g. some versions of:
    • lots of the things mentioned for the "patient" perspective 
      • (there's a continuum here, and if you were so impatient you just wanted to work on averting existential risk that would manifest in the next five years, the patient strategies don't look so great)
    • research into understanding and characterising the nature and likelihood of imminent risks
      • + work which sets this out clearly, and can be usefully understood by many people (I think that broad understanding of risks is a big step towards reducing them)
    • work to develop careers of people who might be well-placed to do relevant work
      • analogously, work to develop institutions that could play a useful role

Overall, I don't think it makes sense to imagine that one of the "patient" or "impatient" perspectives is correct. I think that the correct longtermist portfolio certainly includes substantial amounts of both of these classes of investments. For financial resources, I think that over the next several years it's likely that the meaningful margins are all about which giving opportunities rise above the bar of saving (rather than which class deserves more money).

For non-financial resources (e.g. the career of a specified individual) it's more plausible to ask about tradeoffs between patient and impatient perspectives. I think it may usually be better if decisions here come down to comparative advantage rather than high-level views of the tradeoffs between "patient" and "impatient".

If I pinned myself down and forced myself to name one bullet point above that I think I'd like to see slightly more of (at the expense of the other bullet points), then at-least-in-the-moment-of-writing-this, I'd say "research into what type of activities have good long-term returns for longtermists". But I think this is correctly quite a small slice of our portfolio: I just want it to be a slightly-less-small slice. (I'd have similar views about some particular subfields of various of the other activity-types.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-30T11:55:43.757Z · score: 4 (2 votes) · EA · GW

I think that often the topology of things in low dimensions ends up interestingly different to in high dimensions -- roughly when your dimensionality gets big enough (often 3, 4, or 5 is "big enough") there's enough space to do the things you want without things getting in the way.

One of the proofs I know takes advantage of the fact that S¹ × D³ (which is not simply connected) has boundary S¹ × S², which is also the boundary of D² × S² (which is simply connected); there isn't room for the analogous trick a dimension down.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-30T11:26:09.014Z · score: 5 (3 votes) · EA · GW

Do you mean that they will fail to approximate the fully rational behaviour and sometimes be more biased when they try to approximate it? 

Roughly yes. They might even exactly match the fully rational behaviour on some dimension under consideration, but in so doing be a worse approximation overall to full rationality.

I think a proper study of full rationality and boundedly rational actors would look at limits of behaviour as you impose weaker and weaker computational constraints. I think that it could be really useful to understand which properties of the fully rational actor are converged upon in a reasonable time and basically hold for powerful-enough boundedly rational actors, and which e.g. only hold in the very limit when the actor's comprehension ability is large compared to the world.

My instinct in response to the optimal top marginal tax rate being zero is that their model is probably missing very important features (which might be hard to measure or quantify).

Yes, I think it is missing imperfect information and bounded rationality. (TBC, I don't think that anyone working in optimal tax theory thinks that top marginal rates should actually be zero.) I think the theorem is pretty clear that in the perfect information case with all actors rational the top rate should be zero (basically needs an additional assumption about smoothness of preferences, but that's pretty reasonable). And although this sounds surprising, it is just correct!

To set up an example that's about bounded rationality in particular, suppose:

  • The taxpayers are fully rational
  • You, the tax-setter, have a lot of giant spreadsheets which express all of the taxpayer preferences for different levels of work/consumption, marginal value of public funds etc. (so theoretically full information)
  • You now get to set all the tax rates (which could be quite complicated)
  • If you were fully rational and could calculate everything out, you would be able to set optimal tax policy
  • But calculating everything out is too much of a mess, and you can't do it
  • You know for certain that the optimal solution would have a marginal top rate of zero somewhere
  • But as you can't work out where that is, and as having a marginal top rate of zero is not that important, you'll probably decide on a set of tax rates without a marginal top rate of zero, even though you know that that is certainly wrong

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-30T10:22:29.395Z · score: 4 (2 votes) · EA · GW

Would a fully rational actor need to have a universal prior? Wouldn't they need to have justified one choice of a universal prior over all others? It seems like there might be a hard first step here that could prevent them from committing to a single joint probability distribution. Maybe you'd want a prior over universal priors, but then where would that come from?

I'd usually think of being fully rational as giving constraints after your choice of prior; there are questions about whether some priors are better than others, but you can treat that separately.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-29T22:34:48.917Z · score: 12 (6 votes) · EA · GW

One that comes to mind:

Theorem: Every finitely presented group is the fundamental group of some compact 4-manifold.

I like it because:

  • It's a universal claim relating two very broad classes of objects, such that when I look at the statement I think "wow, how would you even start thinking about how to prove that?"
  • There's a proof which is geometric, elegant, and short 
    • In fact there are multiple quite different geometric proofs!

[With apologies for the fact that this likely makes no sense to most readers.]

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-29T22:14:51.670Z · score: 7 (4 votes) · EA · GW

Meta: the last time I looked into any literature around this was about 5-6 years ago (and I wasn't thorough then), so I really don't know if this perspective is represented somewhere in the debate.

In case it isn't, and if any reader feels like they would like to take on the hard work of fleshing out details and seeing what problems it does/doesn't address, and writing it up for a paper, I'd be really happy to hear that that had been done. (Also feel free to reach out if that might be you and you'd want to discuss.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-29T22:09:08.211Z · score: 8 (5 votes) · EA · GW

Meta: I really appreciated being asked this question! It made me realise I no longer felt confused about ambiguity aversion.

(I think the last time I thought explicitly about it, I'd have said "seems like ambiguity aversion is a good heuristic in some circumstances and that generates the intuitions in favour of it, but it's irrational", and the time before I'd have said "I think ambiguity aversion is irrational".)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-29T22:06:54.936Z · score: 22 (8 votes) · EA · GW

I think the debate about ambiguity aversion mostly comes down to a bucket error about the meaning of "rational":

  • I think that a fully rational actor would:
    • not exhibit ambiguity aversion
    • commit to a single joint probability distribution
  • I think for boundedly rational actors:
    • ambiguity aversion is a (very) useful heuristic
      • particularly if you're in an environment which is or might be partially designed by other agents who could stand to benefit from your loss
    • it can make sense to hold onto ranges of probabilities
      • e.g. maybe you think event X has probability between 10% and 20%; that's enough to determine what to do for lots of policy decisions, and in cases where it doesn't determine what to do you can consider whether it's worth the time investment to sharpen your probability estimate (a minimal sketch of this check follows this list)
  • I think it's a bad (but frequently made, at least implicitly) assumption that boundedly rational actors should mimic the behaviour of fully rational actors in cases where they can work out what that is
    • For a particularly vivid example of (something at least strongly analogous to) this assumption breaking, see the theorem in the optimal taxation literature that the top marginal tax rate should be zero
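
As a minimal sketch of the probability-range point above (payoff numbers invented for illustration): expected value is linear in the probability, so checking the two endpoints of a range tells you whether the range already settles a binary decision.

```python
# Minimal sketch (hypothetical payoffs): does a probability *range* already settle
# a decision, or is it worth paying to sharpen the estimate?

def expected_value(p: float, payoff_if_x: float, payoff_if_not_x: float) -> float:
    return p * payoff_if_x + (1.0 - p) * payoff_if_not_x

def settled(p_low: float, p_high: float, payoff_if_x: float, payoff_if_not_x: float) -> bool:
    """EV is linear in p, so 'act vs. don't act' flips at most once across the range:
    the decision is settled iff the EV of acting has the same sign at both endpoints."""
    return (
        (expected_value(p_low, payoff_if_x, payoff_if_not_x) > 0)
        == (expected_value(p_high, payoff_if_x, payoff_if_not_x) > 0)
    )

# Event X has probability somewhere in [0.10, 0.20]:
print(settled(0.10, 0.20, payoff_if_x=1.0, payoff_if_not_x=-1.0))   # True: don't act, whatever p is
print(settled(0.10, 0.20, payoff_if_x=1.0, payoff_if_not_x=-0.15))  # False: worth sharpening the estimate
```
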
Comment by owen_cotton-barratt on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-26T09:18:26.534Z · score: 5 (3 votes) · EA · GW

(I was counting you in the other "half of the rest", i.e. people I'd had more contact with than a single conversation, so probably wouldn't be regarded as "chosen from cold applications".)

Comment by owen_cotton-barratt on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-25T13:16:22.401Z · score: 14 (4 votes) · EA · GW

For just under half of the people we took, I think nobody on the selection committee knew them at all before the application process. For about half of the rest, they'd had a single conversation with me (& I think usually not with anyone else on the committee).

[From memory; haven't checked carefully.]

Comment by owen_cotton-barratt on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-25T13:04:50.842Z · score: 8 (4 votes) · EA · GW

Note that those ratios are [number starting on programme]/[number of applications]. In fact a few people were made offers and declined, so I think on the natural way of understanding acceptance rate it's a little higher.

Comment by owen_cotton-barratt on "Good judgement" and its components · 2020-08-20T22:42:01.333Z · score: 11 (5 votes) · EA · GW

Gosh, I wasn't (explicitly) thinking about branding at all. This is something I've been finding useful in my personal ontology, and I actually wasn't thinking about sharing it publicly until Ben suggested it; I thought "oh, that makes sense" and tidied up the notes to post here (with some quick mental checks that they didn't seem somehow harmful). I'm mildly embarrassed that I hadn't thought about questions of how it could interact with branding of ideas -- but in some recent reflection I realised I was probably underweighting the value of making thinking public even when imperfect, so I'm not certain that there was any meta-level error here.

I think that there are legitimate questions here for me anyway, though: how much does my conception line up with LessWrong-style rationality, and/or why am I not just using that mental bucket? Definitely this is in a similar space. I guess I tend to think of "rationality" as referring to both a goal (think well) and a culture / {set of content designed to facilitate that}. I'm wanting to refer to the objective but without taking much of a stance on the informational content that people should consume to get better at it. I feel like there are lots of people in the world with a lot of elements of good judgement who have never heard of EA/rationality. I want to be able to point to them and what they're doing well, rather than have something that feels like a particular (niche?) school of thinking, so I don't really want strong associations with either EA or LessWrong.

Comment by owen_cotton-barratt on "Good judgement" and its components · 2020-08-20T14:14:33.440Z · score: 2 (1 votes) · EA · GW

I don't think there's always a clear line between implicit and explicit heuristics, e.g. often I think they might start out as implicit and then be made (partially) explicit in the process of reflecting on them. 

If you're going to import an explicit heuristic I think that it's usually a good idea to have a good understanding of its mechanism. But you might forgo this requirement if you have enough trust in its provenance. Also moderately often I think hearing an explicit heuristic from someone else gives you a hypothesis that you can now pay some attention to and see how it performs in different contexts and then work out whether you want to give it any weight in your decision-making. (I think a lot of distilled advice has something of this nature.)

Comment by owen_cotton-barratt on "Good judgement" and its components · 2020-08-20T10:31:13.692Z · score: 6 (4 votes) · EA · GW

Thanks. I think this is kind of nuanced, but here are some statements in the vicinity I agree with:

  • Heuristics and understanding of the world are not separate magisteria, and can inform each other
    • Understanding can tell us the implications of following different heuristics and let us choose
    • Noticing that a heuristic seems to work well can lead us to question what about the world makes it work well (or just provide evidence for worlds where that heuristic would work well over ones where it wouldn't)
    • In general time spent thinking and exploring the interplay between these often seems valuable to me
  • Having a good understanding of why a heuristic is good can increase our trust in that heuristic
  • Lack of understanding of why a heuristic is good, when we've spent time looking for such understanding, is evidence against the heuristic
    • Particularly if we can't even see a plausible mechanism it can be significant evidence

On the other hand I think I disagree with your statement taken literally:

  • I think usually heuristics are employed at a micro-scale, they're implicit, and there are a lot of them: we simply don't get to have a good understanding of most of them
  • Even for heuristics that are explicit and have been promoted to our conscious attention, we sometimes justifiably have more trust in the heuristic than in our understanding of the underlying mechanisms
    • e.g. I do think "avoid doing sketchy things" is often a useful heuristic; my evidence base for this includes a bunch of direct and reported observations, as well as social proof of others' views. I'm sure I don't fully understand the boundaries of how to apply it (even the specification of "sketchy" is done implicitly). I've thought about why it seems good to avoid sketchy things, and have a partial understanding of mechanisms, but I'm sure there's a lot of detail I don't understand there. But I don't think that I need to fully understand those details to get value out of the heuristic. I also would prefer that my past self put some weight on this heuristic, even before I'd tried to think through the mechanisms (although I'm glad I've done that thinking).
Comment by owen_cotton-barratt on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T15:44:26.442Z · score: 19 (9 votes) · EA · GW

Maybe: "We should give outsized attention to risks that manifest unexpectedly early, since we're the only people who can."

(I think this is borderline major? The earliest occurrence I know of was 2015 but it's sufficiently simple that I wouldn't be surprised if it was discovered multiple times and some of them were earlier.)