# AMA: Owen Cotton-Barratt, RSP Director

post by Owen_Cotton-Barratt · 2020-08-28T14:20:18.846Z · score: 75 (29 votes) · EA · GW · 80 comments

I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!

comment by MichaelA · 2020-08-29T07:30:27.572Z · score: 35 (14 votes) · EA(p) · GW(p)

Does FHI or the RSP have a relatively explicit, shared theory of change? Do different people have different theories of change, but these are still relatively explicit and communicated between people? Is it less explicit than that?

Whichever is the case, could you say a bit about why you think that's the case?

comment by Owen_Cotton-Barratt · 2020-09-02T12:12:23.347Z · score: 17 (4 votes) · EA(p) · GW(p)

For RSP, I think that:

• In starting RSP, I had an implicit theory of change in my head
• There are quite a few facets of this (mechanisms for value produced, a continuum of hypotheses, etc.)
• One important facet (particularly for early-RSP) was a sense of "pretty sure there's significant value available via something in this vicinity, let's try it and see if we can home in"
• I explicitly share and communicate parts of this model to the extent that it's accessible for me to do so
• This involved some conversations with people before RSP started, and some presenting thoughts to the research scholars as the programme started, and periodically returning to it
• As RSP has developed and other people have become major stakeholders, they've developed their own implicit theories of change
• We make some space to discuss these / exchange models
• As RSP matures, it will make more sense to pin down a theory of change and have it explicit and shared
• The facet of "let's work out what here is good" will naturally diminish, and we'll work out which other facets are best to lean on

Some general thoughts:

• Advantages of having an explicit theory of change:
• Makes it easier to sync up about direction/priorities/reasons for doing things
• Makes it easier for people to engage critically, or otherwise to notice mistakes and course-correct
• Disadvantages of having an explicit theory of change:
• Easy to have the case where your best expression of something is dumber than your real internal sense of it
• In this case it may be preferable to be guided by the internal sense rather than the explicit version
• (this is at least some distant relative of Goodhart's law)
• To the extent that you're going to be guided by your internal sense rather than an explicit version, sharing something as an explicit theory of change can be misleading
• In general I think it's good to encourage lots of explicit discussion about theories of change
• Ideally without committing to reaching an "answer", but having that as a goal may be helpful for prompting the discussion

I think that I find the disadvantages quite emotionally resonant, which may pull me to err too far in the direction of not being explicit. I have appreciated some cases where people have pushed me towards "let's have a discussion where we're pretty explicit about best guesses".

FHI, I think, has even less of an explicit theory of change than RSP does; my guess is that Nick Bostrom is also averse to incurring the costs of these disadvantages (and maybe more strongly so than me), but that's speculation.

comment by MichaelA · 2020-09-02T12:56:31.638Z · score: 2 (1 votes) · EA(p) · GW(p)

I've quoted the part from "Some general thoughts:" to the second-last paragraph in a new comment on my earlier question post Do research organisations make theory of change diagrams? Should they? [EA · GW] (I flagged that you weren't talking about ToC diagrams in particular.) Hope that's ok.

comment by Misha_Yagudin · 2020-09-01T20:32:55.846Z · score: 3 (2 votes) · EA(p) · GW(p)

A related question: what fraction of your and RSP's impact do you expect to come from direct work, and what fraction from community/field-building?

E.g.

• When working on a paper, do you think the value comes from field-building or from a small personal chance of, say, coming up with a crucial consideration?
• Will most of RSP's value come from direct work done by scholars, or from scholars [and the program] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]

comment by Misha_Yagudin · 2020-09-01T20:37:18.193Z · score: 6 (4 votes) · EA(p) · GW(p)

Oh, even better! In your What Does (and Doesn’t) AI Mean for Effective Altruism? [? · GW], slide four speaks about different timelines: immediate (~5 years), this generation (~15), next-generation (~40), distant (~100). Which timelines are you optimizing RSP for?

comment by Owen_Cotton-Barratt · 2020-09-02T13:12:12.091Z · score: 7 (4 votes) · EA(p) · GW(p)

Of these, I think RSP is most aiming at "next-generation", with "this generation" a significant secondary target.

comment by Owen_Cotton-Barratt · 2020-09-02T13:15:26.588Z · score: 3 (2 votes) · EA(p) · GW(p)

When working on a paper, do you think the value comes from field-building or from a small personal chance of, say, coming up with a crucial consideration?

This question doesn't quite feel right to me. I think that when working on a paper I normally have an idea of what insights I want it to convey. The value might be in field-building, or the direct value of disseminating that insight (not counting its spillover to field-building).

Work that might find crucial insights feels like it happens before the paper-writing stage. I try to spend some time in that mode.

comment by Misha_Yagudin · 2020-09-03T14:26:51.529Z · score: 1 (1 votes) · EA(p) · GW(p)

Yeah, on reflection the framing of "working on a paper" is not quite right. So let me be more specific:

• Prospecting for Gold's impact comes from promoting a certain established way of thinking [≈ econ 101 and ITN] within the EA community and (unclear if intended or not) also from providing local communities with an excellent discussion topic.
• The expected value of the work on cost-effectiveness of research seems to be dominated by the chance of stumbling on considerations useful to EA researchers, GiveWell, 80K's career recommendations, etc.
• The impact of work on moral uncertainty seems to come primarily from field-building. Doing EA-relevant research within a prestigious branch of philosophy increases the odds that the more pressing EA questions will be addressed by the next generation of academics.

There are other potential reasons to do research: say, one might prefer to fully concentrate on mentoring but needs to do research for the second-order effects (having prestige for hiring; having scholars' respect for better mentorship; having fresh meta-cognitive observations to empathize with mentees for better advising). I am curious: which impact pathways do you prioritize?

I feel the most confused about moral uncertainty because it doesn't resonate with my taste and my knowledge of the subject and of field politics is very limited. I hope my oversimplification doesn't diminish/misrepresent your work too much.

comment by Owen_Cotton-Barratt · 2020-09-02T13:18:14.727Z · score: 2 (1 votes) · EA(p) · GW(p)

Will most of RSP's value come from direct work done by scholars, or from scholars [and the program] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]

I want to say "yes, by indirect influence", but I expect that this will be true also of most cases of consulting policy-makers (this would remain true even if you got to set policies directly, as I think that most things we do have value filtered through what future people do). This makes me think I'm somehow using a different lens on the world which makes it hard to answer this question directly.

comment by Neel Nanda · 2020-08-29T05:40:41.486Z · score: 25 (13 votes) · EA(p) · GW(p)

Suppose, in 10 years, that the Research Scholars Programme has succeeded way beyond what you now expect. What happened?

comment by MichaelA · 2020-08-29T06:53:55.106Z · score: 17 (8 votes) · EA(p) · GW(p)

Interesting question!

Related: Suppose that, in 10 years, the RSP seems to have had no impact.* What signs would reveal this? And what seem the most likely explanations for the lack of impact?

*There already seem to be some indicators of impact, so feel free to interpret this as "seems to have had no impact after 2020", or as "seems to have had no impact after 2020, plus the apparent impacts by 2020 all ended up washing out over time".

comment by Owen_Cotton-Barratt · 2020-09-02T13:38:02.790Z · score: 7 (4 votes) · EA(p) · GW(p)

Something like: it seems like the people we're taking on the programme are doing kind of good things, but when we dig into counterfactual analysis it seems like they might on average have done more if they hadn't joined the programme (perhaps because e.g. normal academic pressures are surprisingly helpful motivationally, or because we're fostering a community which is too inward-looking).

comment by Owen_Cotton-Barratt · 2020-09-02T13:34:39.408Z · score: 7 (4 votes) · EA(p) · GW(p)

Something like: it catalysed the creation of a whole stream of major new projects (led by scholars who used the space afforded by the programme to think seriously about possibilities, and who are well-networked with the broader x-risk ecosystem which makes coordination and recruitment easier).

comment by tamgent · 2020-08-29T15:23:35.475Z · score: 21 (8 votes) · EA(p) · GW(p)

Is there any impact measurement of RSP currently? I appreciate it is unusually hard, but have you had any thoughts on good ways to go about this?

comment by Owen_Cotton-Barratt · 2020-09-01T15:37:39.575Z · score: 5 (3 votes) · EA(p) · GW(p)

We're doing a combination of:

• Looking at what people go on from RSP to do
• Surveys (& conversations) asking research scholars how useful they have found RSP (and in what that value consists), and what they guess they would have done otherwise
• Comparison of the above with some people who narrowly didn't join RSP (for one reason or another)
• Looking at to what extent work done by research scholars while on the programme is directly useful
• Our (=RSP management's) independent impressions of whether / how much we've helped people

(I think we're still finding our feet with this.)

There are a relatively small number of individuals who have gone through the programme, and it's important to us to protect their privacy, so at the moment we don't have plans to publish any of this. When we have slightly more data I kind of like the idea of publishing some aggregate summaries, but I haven't thought seriously about whether this will be possible to do in a way which is properly privacy-preserving while also actually useful to readers.

comment by MichaelA · 2020-08-29T07:22:47.120Z · score: 20 (8 votes) · EA(p) · GW(p)

I've heard many people express the view that in EA, and perhaps especially in longtermism:

• There are a lot of people who could potentially be good/great researchers, but have limited experience thus far
• There is too little capacity to mentor and manage these people
• This is partly because the best candidates for doing that are also able to do very valuable research themselves, or other things like outreach, so the opportunity costs for them are very high
• This results in an untapped pool of potential talent, and also makes it harder to fix this problem itself, because it limits the pipeline of new mentors and managers as well
• So it'd be highly valuable for more people to build skills in research as well as in mentorship/management, to address this problem
• And maybe this pushes in favour of starting one's research career outside of explicitly EA orgs, e.g. in academia, to draw on the mentorship capacity there

1. Do all of those claims seem true to you?

2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it? (E.g., maybe there are a lot of people already "in the pipeline", reducing the need for new people to enter it.)

3. Do you think there are other ways to potentially address this problem (if it exists) that deserve more attention or that I didn't mention above?

4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?

comment by Owen_Cotton-Barratt · 2020-08-31T06:07:00.787Z · score: 13 (5 votes) · EA(p) · GW(p)

1. Do all of those claims seem true to you?

• I don't think this is centrally about "researchers", but about "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities"
• This is a class we need more of in EA (and particularly longtermist EA); research is one of the (major) applications of such people, but far from the only one
• Mentorship/management is more like a thousand small things than two big things
• Often people will be better off learning from multiple strong mentors than one, because they'll be good at different subcomponents
• There are very substantial reasons beyond this to spend part of one's (research) career outside of explicitly EA orgs, particularly if you get an opportunity to work with outstanding people
• Such as:
• You can better learn the specialist knowledge belonging to the relevant domain by spending time working with top experts
• Or idiosyncratic-but-excellent pieces of mentorship
• To the extent that EA has important insights that are relevant in many domains, working closely with smart people is a good opportunity to share those insights
• It's a powerful way to develop a network
• I gave the reasons above in terms of "how this seems locally good", but it might be more natural to think about it globally, and notice that a version of EA which is very insular and just builds up its own stuff kind-of cut off from the rest of intellectual endeavour seems way worse (in expectation) than a version which has lots of surface area and good interfaces

2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it?

Hmm, I think that I'm less conceiving of this as a problem-to-be-fixed than you are. Partially it's because I do see these substantial benefits of spending part of one's career outside of explicitly EA orgs -- I don't think it's important that everyone does this (and it doesn't have to be at the start of their career), but important that there's at least a solid fraction of people who have done so.

That said, I do think it's somewhat a problem, and there are people (whether or not they've already spent part of their career outside of explicitly EA orgs) who would be in a good position to contribute directly to EA work if only they had the right mentorship. I think maybe we're on the way to having the most acute versions of it fixed (though I'm not that confident about that), but I think the basic dynamic will remain true for a long time.

4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?

I think things like RSP are a good way to address a facet of this problem, of getting people towards "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities". I think that this can be significantly complementary to people spending part of their career outside of EA orgs.

(I think this last paragraph in particular may not be very clear. Feel free to poke at what doesn't make sense.)

comment by MichaelA · 2020-08-31T08:37:52.989Z · score: 7 (2 votes) · EA(p) · GW(p)

Hmm, I think that I'm less conceiving of this as a problem-to-be-fixed than you are.

I think my second question was broad and vague.

I could operationalise part of it as: "Do you expect there's still high expected value in more people starting now at trying to get good at 'research mentorship/management'? Do you expect the same would be true if they started on that in, e.g., 2 years? Or do you think that, by the time people got good at this if they start now, the 'gap' will have been largely filled?"

It sounds like you think the answer is essentially "Yes, there's still high expected value in this"?

I'd agree that there are other strong arguments for many people working outside of explicitly EA orgs. And I think many EAs - myself included - are biased towards and often overemphasise working at explicitly EA orgs.

But "jobs/projects that are unusually good for getting better at 'research mentorship/management'" includes various jobs both within and outside of EA, as well as excluding various jobs both within and outside of EA. So I think the questions in this comment are distinct from - though somewhat related to - the question "Should more people work outside of EA orgs?"

comment by Owen_Cotton-Barratt · 2020-08-31T14:55:05.349Z · score: 14 (4 votes) · EA(p) · GW(p)

Ahh, I think I was interpreting your general line of questioning as being:

A) Absent ability to get sufficient mentorship within EA circles, should people go outside to get mentorship?

... whereas this comment makes me think you were more asking:

B) Since research mentorship/management is such a bottleneck, should we get people trying to skill up a lot in that?

I think that some of the most important skills for research mentorship from an EA perspective include transferring intuitions about what is important to work on, and that this will be hard to learn properly outside an EA context (although there are probably some complementary skills one can effectively learn).

I do think that if the questions were in the vein of B) I'm more wary in my agreement: I kind of think that research mentorship is a valuable skill to look for opportunities to practise, but a little hard to be >50% of what someone focuses on? So I'm closer to encouraging people doing research that seems valuable to look for opportunities to do this as well. I guess I am positive on people practising mentorship generally, or e.g. reading a lot of different pieces of research and forming inside views on what makes some pieces seem more valuable. I think the demand for these skills will become slightly less acute but remain fairly high for at least a decade.

comment by MichaelA · 2020-08-31T18:52:22.824Z · score: 3 (2 votes) · EA(p) · GW(p)

I think I had both of those lines of questioning in mind, but didn't make this explicit. Thanks for your responses :)

comment by MichaelA · 2020-08-31T08:37:22.039Z · score: 4 (2 votes) · EA(p) · GW(p)

Thanks for that interesting response!

[...] research is one of the (major) applications of such people, but far from the only one

Other than research, what do you see as fitting in this category? I'd guess it includes grantmaking, and making high-level/strategic organisation decisions. And I'd guess it wouldn't include working out which accounting firm an organisation should use. But I'm unsure about both of those guesses, and especially about things "in between" them.

Perhaps you mean something like "people who are decent at working out what strategies and interventions we should pursue amongst the innumerable possibilities"? (As opposed to what fine-grained decisions individual people/orgs should make on a day to day level.)

I think that this can be significantly complementary to people spending part of their career outside of EA orgs.

I'm not sure I know what you mean by this (as you anticipate!). Is it about the research scholars themselves spending part of their career before or after RSP outside of EA orgs? Or about the research scholars complementing other people working elsewhere?

comment by Owen_Cotton-Barratt · 2020-08-31T11:07:54.476Z · score: 6 (3 votes) · EA(p) · GW(p)

Is it about the research scholars themselves spending part of their career before or after RSP outside of EA orgs?

Roughly, yes. e.g. I think several people currently at RSP have had some career outside first, and I think that they are typically deriving some real benefit from that (i.e. RSP is providing a complement rather than a substitute for the experience they have already).

(Not claiming that RSP is only for people with such experience!)

comment by Owen_Cotton-Barratt · 2020-08-31T11:02:46.093Z · score: 4 (2 votes) · EA(p) · GW(p)

Perhaps you mean something like "people who are decent at working out what strategies and interventions we should pursue amongst the innumerable possibilities"? (As opposed to what fine-grained decisions individual people/orgs should make on a day to day level.)

Yes, I think that's mostly a better characterisation.

(There's definitely some grey area, as e.g. I think that people who are good at the thing I'm pointing to are in touch with the reasons behind a choice of intervention, in a way that feeds into some of the decisions about how to implement it on a day-to-day level.)

comment by John_Maxwell (John_Maxwell_IV) · 2020-09-03T09:37:22.886Z · score: 7 (4 votes) · EA(p) · GW(p)

Why not just have the people who need mentorship serve as "research personal assistants" to improve the productivity of people who are qualified to provide mentorship? (This describes something which occurs between professors and graduate students, right?)

comment by MichaelA · 2020-09-03T10:09:12.688Z · score: 4 (2 votes) · EA(p) · GW(p)

I've heard the view that more EAs should consider being research assistants to seemingly highly skilled EA researchers[1], both for their own learning and to improve those researchers' productivity. Is this what you mean?

I didn't deliberately exclude mention of this from my above comment; I just didn't think to include it. And now that you mention it (or something similar), I'd be interested in Owen's take on this as well :)

[1] One could of course also do this for highly skilled non-EA researchers working in relevant areas. I just haven't heard that suggested as often; I'm not sure if there are good reasons for that.

comment by Neel Nanda · 2020-08-29T15:14:30.600Z · score: 19 (10 votes) · EA(p) · GW(p)

What common belief in EA do you most strongly disagree with?

comment by Owen_Cotton-Barratt · 2020-09-01T22:42:03.650Z · score: 28 (16 votes) · EA(p) · GW(p)

That personal dietary choices are important on consequentialist effectiveness grounds.

I actually think there are lots of legitimate and powerful reasons for EAs to consider veg*nism, such as:

• A ~deontological belief that it's wrong to eat animals
• Signalling caring
• A desire for shared culture with people you share values with

... but it feels to me almost intellectually dishonest to have it be part of an answer to someone saying "OK, I'm bought into the idea that I should really go after what's important, what do I do now?"

(I'm not vegetarian, although I do try to only consume animals I think have had reasonable welfare levels, for reasons in the vicinity of the first two listed above. I still have some visceral unease about the idea of becoming vegetarian that is like "but this might be mistaken for being taken in by intellectually dishonest arguments".)

comment by jackmalde · 2020-09-02T21:42:09.672Z · score: 15 (8 votes) · EA(p) · GW(p)

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA. If something is a good thing, and doing it doesn't really have an opportunity cost, then it seems to me that a consequentialist EA should do it regardless of just how good it is.

To illustrate my point, one can say it's a good thing to donate to a seeing eye dog charity. In a way it is, but an EA would say it isn't because there is an opportunity cost as you could instead donate to the Against Malaria Foundation for example which is more effective. So donating to a seeing eye dog charity isn't really a good thing to do.

Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different. It doesn't stop you doing something else. Therefore even if it realises a small benefit it seems worth it (and for the record I don't think the benefit is small).

Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals. From a utilitarian view I'd imagine this is unlikely to be true. I happen to think avoiding the suffering of even one animal is significant, similarly to the fact that we think it would be highly significant to save just one human life. And following a vegan diet for a while will benefit way more than just one animal anyway.

comment by Owen_Cotton-Barratt · 2020-09-03T01:07:21.227Z · score: 16 (13 votes) · EA(p) · GW(p)

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

That's fine! :)

In turn, an apology: my controversial view has baited you into response, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try and exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it's helpful for the exhibition to be able to draw attention to features of a specific instance, and you're providing what-seems-like-implicit-permission for me to do that. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA.

To be clear: I strongly agree with this, and this was a big part of what I was trying say above.

So donating to a seeing eye dog charity isn't really a good thing to do.

This is non-central, but FWIW I disagree with this. Donating to the guide dog charity usually is a good thing to do (relative to important social norms where people have property rights over their money); it's just that it turns out there are fairly accessible actions which are quite a lot better.

Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different.

This, I'm afraid, is the type of statement that really bugs me. It's trying to collapse a complex issue onto simple dimensions, draw a simple conclusion there, and project it back to the original complex world. But in doing so it's thrown common-sense out of the window!

If I believed that choosing to follow a ve*an diet usually didn't have an opportunity cost, I would expect to see:

• People usually willing to go ve*an for a year for some small material gain
• In theory if there was no opportunity cost, even for something trivial like $10, but I think many non-ve*ans would be unwilling to do this even for $1000
• [As an aside, I think taxes on meat would probably be a good policy that might well be accessible]
• Almost everyone who goes ve*an for ethical reasons keeping it up
• In fact some significant proportion of people stop

Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals.

I certainly don't claim this in any utilitarian comparison of welfare. But now the argument seems almost precisely analogous to:

"You could help the poorest people in the world a tremendous amount for the cost of a cup of coffee. Since your welfare shouldn't outweigh theirs, you should forgo that cup of coffee, and every other small luxury in your life, to give more to them."

I think EA correctly rejects this argument, and that it's correct to reject its analogue as well. (I think the argument is stronger for ve*anism than giving to the poor instead of buying coffee; but I also think that there are better giving opportunities than giving directly to the poor, and that when you work it through the coffee argument ends up being stronger than the corresponding one for ve*anism.)

---

Again, I'm not claiming that EAs shouldn't be ve*an. I think it's a morally virtuous thing to do!

But I don't think EAs have a monopoly on virtue. I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?" then the implicature is that this is a bigger deal than, say, moving from giving away 7% of your income to giving away 8% - and I think that implicature is badly misleading.

Notes:

• There may be some people for whom the opportunity cost is trivial
• I think there are probably quite a few people for whom the opportunity cost is actually negative -- i.e. it's overall easier for them to be ve*an than not
• I would feel very good about encouragement to check whether people fall into one of these buckets, as in cases where they do then dietary change may be a particularly efficient way to do good
• I'd also feel very good about moral exhortation to be ve*an that was explicit about not being grounded in EA thinking, like:
• "Many EAs try to be morally serious in all aspects of their lives, beyond just trying to optimise for the most good achievable. This leads us to ve*anism. You might want to consider it."
comment by jackmalde · 2020-09-03T06:10:11.992Z · score: 15 (7 votes) · EA(p) · GW(p)

I'm not 100% sure, but we may be defining opportunity cost differently. I'm drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may inhibit you from doing something else that is more effective. Even if going vegan didn't have any opportunity cost (which is what I'm arguing is true in most cases), people may still not want to do it due to high perceived personal cost (e.g. thinking vegan food isn't tasty). I'm not claiming there is no personal cost - that is indeed why people don't go / stay vegan - although I do think personal costs are unfortunately overblown.

Without addressing all of your points in detail, I think a useful thought experiment might be to imagine a world where we are eating humans, not animals. E.g. say there are mentally-challenged humans of comparable intelligence/capacity to suffer to non-human animals, and we farm them in poor conditions and eat them, causing their suffering. I'd imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps not, and it would actually be deontological grounds?). If you would go vegan in the thought experiment but not in the real world, then you're probably speciesist to some degree, which I ultimately don't think can be defended.

I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?"

EA is sometimes described as doing the most good (the most common definition) and sometimes as finding the most effective ways to do good. These can be construed as two different things. I would say that under the first definition being vegan naturally becomes part of the conversation, for the reasons I have mentioned (little to no opportunity cost).

Also, we may be fundamentally disagreeing on the scale of the benefits of going vegan on consequentialist grounds - I think they are quite considerable. Indeed, "signalling caring", as you put it, can then convince others to consider veganism, in which case you can get a snowball of positive effects. But that's a whole other discussion.

P.S. I agree we can probably improve the way veganism is messaged in EA and it's possible I am part of the problem!

comment by Bella_Forristal · 2020-09-06T07:17:18.870Z · score: 4 (3 votes) · EA(p) · GW(p)

Thanks for this interesting discussion; for others who read this and were interested, I thought I'd [EA · GW] link [EA · GW] some [EA · GW] previous EA discussions on this topic in case it's helpful :)

One brief addition: I think the kind of conscientious omnivorism you describe ('I do try to only consume animals I think have had reasonable welfare levels') might have similar opportunity costs to veg*nism, and there's some not-very-conclusive psychological literature suggesting that, since it is a finer-grained rule than 'eat no animals', it might even be harder to follow.

Obviously, this depends very much on what we mean by opportunity cost, and it also depends on how one goes about only trying to eat happy animals. I'm not sure what the best answer to either of those questions is.

comment by Denis Drescher (Telofy) · 2020-09-05T21:33:31.601Z · score: 4 (2 votes) · EA(p) · GW(p)

I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it to be enlightening.

I see a tension between the following two arguments that I find plausible:

1. Some people run into health issues due to a vegan diet despite correct supplementation. In most cases it’s probably because of incorrect or absent supplementation, but probably not in all. This means that a highly productive EA doing highly important work may, with a small probability, cease to be as productive. Since they’ve probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would’ve inflicted if they had [eaten some beef and had some milk](https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html). So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let’s assume for the moment that the person can make that tradeoff correctly for themselves.) (A toy version of this expected-value comparison is sketched just after this list.)
2. There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. This increase in anomie (roughly, lack of trust and cohesion) may be small in expectation but has a vast expected societal effect. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.
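
To make the structure of argument (1) concrete, here is a minimal sketch of the expected-value comparison it rests on. Every number below is a hypothetical placeholder rather than a claim about actual health risks or welfare weights; the linked page is where one would look for real inputs on the suffering side.

```python
# Toy expected-value comparison for argument (1). All numbers are
# hypothetical placeholders -- the conclusion flips with different inputs.

p_health_issue   = 0.01       # hypothetical: chance of a productivity-harming
                              # health issue despite correct supplementation
productivity_hit = 0.10       # hypothetical: fraction of output lost if it happens
value_of_work    = 1_000_000  # hypothetical: value of the person's work (arbitrary units)

expected_cost_of_strict_veganism = p_health_issue * productivity_hit * value_of_work

suffering_from_hedging = 500  # hypothetical: suffering (same units) caused by the
                              # small amounts of beef and milk that hedge the risk

if expected_cost_of_strict_veganism > suffering_from_hedging:
    print("Under these inputs, argument (1) favours hedging with a bit of beef/milk.")
else:
    print("Under these inputs, strict veganism wins on expected value.")
```

Argument (2) is precisely the worry that this calculation leaves out a norm-erosion term that is hard to put a number on.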

I see a bit of a Laffer curve here (like an upside-down U) where upholding societal rules that are completely unheard of has little effect, and violating societal rules that are extremely well established has little effect again (except that you go to prison). The middle section is much more interesting, and this is where I generally advise to tread softly. (But I’m also against stealing.)

The way I resolve this tension for myself is to assess whether in my immediate environment – the people who are most likely to be directly influenced by me – a norm is potentially about to emerge. If that is the case, and I approve of the norm, I try to always uphold that norm to at least an above-average level.

Well, and then there are a few more random caveats:

1. As the norm not to harm other animals for food becomes stronger, it’ll be less socially awkward for people (outside vegan circles) to eat vegan food. Social effects were (last time I checked) still the second most common reason for vegan recidivism.
2. As the norm not to harm other animals for food becomes stronger, more effort will be put into providing properly fortified food to make supplementation automatic.
3. Eroding a budding social norm because it comes at a cost to one’s own goals seems like the sort of freeriding that I think the EA community needs to be very careful about. In some cases the conflict is only due to lacking idealization of preferences or only between instrumental rather than terminal goals or the others would defect against us in any case, but we don’t know any of this to be the case here. The first comes down to unanswered questions of population ethics, the second to the exact tradeoffs between animal suffering and health risks for a particular person, and the third to how likely animal rights activists are to badmouth AI safety, priorities research, etc. – probably rarely.
4. Being vegan among EAs, young, educated people, and other disproportionately antispeciesist groups may be more important than being vegan in a community of hunters.
5. A possible, unusual conclusion to draw from this is to be a “private carnivore”: You only eat vegan food in public, and when people ask you whether you’re vegan, you tell them that you think eating meat is morally bad, a bad norm, and shameful, and so you only do it in private and as rarely as possible. No lies or pretense.
6. There’s also the option of moral offsetting, which I find very appealing (despite these criticisms [EA · GW] – I think I somewhat disagree with my five-year-old comment there now), but it doesn’t seem to quite address the core issue here.
7. Another argument you mentioned to me at an EAGx was something along the lines that it’ll be harder to attract top talent in field X (say, AI safety) if they not only have to subscribe to X being super important but also have to be vegan. Friends of mine solve that by keeping those things separate. Yes, the catering may be vegan, but otherwise nothing indicates that there’s any need for them to be vegan themselves. (That conversation can happen, if at all, in a personal context separate from any ties to field X.)

comment by Milton · 2020-09-02T20:56:46.550Z · score: -11 (6 votes) · EA(p) · GW(p)

Additionally, you would have to be okay with a pedophile justifying molesting children on the same grounds, which, ehm, seems a bit repugnant.

comment by Milton · 2020-09-03T12:44:04.936Z · score: 1 (1 votes) · EA(p) · GW(p)

Would any of you who downvoted the comment above be willing to state why?

comment by Max_Daniel · 2020-09-03T13:56:41.718Z · score: 29 (10 votes) · EA(p) · GW(p)

I didn't downvote your comment, but was close to doing so. (I generally downvote few comments, maybe in some sense "too few".)

The reason why I considered downvoting: You claim that an argument implies a view widely seen as morally repugnant, and additionally (i.e. the claim alone is not sufficient):

• You are not as clear as I think you could have been that you don't actually ascribe the morally repugnant view to Owen, as opposed to mentioning this as a reductio ad absurdum precisely because you don't think anyone accepts the morally repugnant conclusion.
• You use more charged language than is necessary to make your point. E.g. instead of saying "repugnant" you could have said something like "which presumably no-one is willing to accept". Similarly, it's not relevant whether the perpetrator in your claim is a pedophile. (But it's good to avoid even the faintest suggestion that someone in this debate is claimed to be a pedophile.)
• I'm not able to follow your reasoning, and suspect you may have misunderstood the comment you're responding to. Most significantly, the above comment doesn't argue that anything is morally okay, simpliciter - it just argues that a certain kind of moral objection, namely an appeal to bad consequences, doesn't work for certain actions. It even explicitly lists other moral reasons against these actions. (Granted, it does suggest that these reasons aren't so strong that the action is clearly impermissible in all circumstances.) But even setting this aside, I'm not sure why you think the above comment has the implication you think it has.

I don't know for sure why anyone downvoted, but moderately strongly suspect they had similar reasons.

Here's a version of your point which is still far from optimal on the above criteria (e.g. I'd probably have avoided the child abuse example altogether) but which I suspect wouldn't have been downvoted:

I think your argument proves too much. It implies, for instance, that it's not clearly impermissible to harm humans in ways similar to those in which non-human animals are harmed when slaughtered for food. [Say 1-2 sentences about why you think this.] As a particularly drastic example, consider that virtually everyone agrees that sexual abuse of children is not permissible under any circumstances. Your argument seems to imply that there would only be a much weaker moral prohibition against child abuse. Clearly we cannot accept this conclusion. So there must be something wrong with your argument.

comment by Khorton · 2020-09-03T14:31:18.983Z · score: 13 (6 votes) · EA(p) · GW(p)

I strong-upvoted this because it's super clear and detailed and the kind of thing I want to see more of on the Forum. But, just to avoid confusion: I haven't actually read the original comment, so I don't know if this analysis is right.

comment by Owen_Cotton-Barratt · 2020-09-03T16:20:06.777Z · score: 6 (3 votes) · EA(p) · GW(p)

I didn't downvote, but I also didn't even understand whether you were agreeing with me or disagreeing with me (and strongly suspected that "would have to" was an error in either case).

comment by Neel Nanda · 2020-08-29T06:00:17.512Z · score: 14 (6 votes) · EA(p) · GW(p)

What do you think is the most valuable research you've produced so far? Did you think it would be so valuable at the time?

comment by Owen_Cotton-Barratt · 2020-09-02T15:01:55.302Z · score: 12 (5 votes) · EA(p) · GW(p)

Estimating the value of research seems really hard to me (and this is significantly true even in retrospect).

That said, some candidates are:

• Work making the point that we should give outsized attention to mitigating risks that might manifest unexpectedly soon, since we're the only ones who can
• At the time it didn't seem unusually valuable, but I think it was relatively soon after (a few months) that I saw some people changing behaviour in light of the point, which increased my sense of its importance
• Work on cost-effectiveness of research of unknown difficulty, particularly the principle of using log returns when you don't know where to start
• Felt sort-of important at the time, although I think the kind of value I anticipated hasn't really manifested
• I have felt like it's been useful for my thinking in a variety of domains, thinking about pragmatic prioritisation (and I've seen some others get some value from that); however, the logarithm is an obvious-enough functional form that maybe it didn't really add much (a toy version of this model is sketched after this list)
• Maybe something where it was more about dissemination of ideas than finding deep novel insights (I think it's very hard to draw a line between what counts as "research" and what doesn't), such as Prospecting for Gold [? · GW], or How valuable is movement growth? [? · GW]
• Quite a few people have told me that they got something out of one or both of those pieces, although it's extremely hard to assess the counterfactuals
• I felt like I was doing something significant in these cases (particularly when writing the talk Prospecting for Gold)
• Overall I'd be hard pressed to choose one of the above, although I'd tend to guess these are more valuable than most other pieces I've done (excepting some recent work that I don't yet want to judge, and with the caveat that I'm surely forgetting some)
• That said, some of the more policy-ish pieces of research might still turn out to be the most valuable, if they got picked up somewhere important, but so far I'll not count them
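
(A minimal sketch of the log-returns idea from the second bullet, as I understand it: if the resources needed to crack a problem are unknown and your prior over that difficulty is roughly log-uniform, then the chance of success grows linearly in the log of resources spent. The bounds `d_min` and `d_max` below are hypothetical, and this illustrates the principle rather than reproducing the original model.)

```python
import math

def p_success(resources, d_min=1.0, d_max=1e6):
    """P(problem solved | we spend `resources`), assuming the required
    resources D are log-uniformly distributed over [d_min, d_max]."""
    if resources <= d_min:
        return 0.0
    if resources >= d_max:
        return 1.0
    return math.log(resources / d_min) / math.log(d_max / d_min)

for r in [10, 100, 1_000, 10_000]:
    print(f"R = {r:>6}: P(success) = {p_success(r):.2f}")

# Each tenfold increase in resources buys the same increment of success
# probability -- logarithmic returns, a natural default when you
# "don't know where to start".
```
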
comment by Neel Nanda · 2020-08-29T05:59:46.472Z · score: 13 (7 votes) · EA(p) · GW(p)

You have a pure maths research background. What areas/problems do you think this background and way of thinking give you the strongest comparative advantage at?

Can you give any examples of times your background has felt like it helped you come to valuable insights?

comment by Owen_Cotton-Barratt · 2020-09-02T15:16:27.607Z · score: 6 (4 votes) · EA(p) · GW(p)

There's a class of things which feel majorly helpful, but it's hard to distinguish between whether I was helped by the background in pure mathematics, or whether I have some characteristics which both helped me in mathematics and help me now (I suspect it's some of both):

• Being good at framing things
• Turning things over in my head, looking for the angle which makes them most parsimonious, and easiest to comprehend clearly
• Relatedly, feeling happy to dive in and try to make up theory, but keep it grounded by "this has to actually explain the things we want to know about"
• These are useful skills when faced with domains where we haven't yet settled on paradigms which we're satisfied capture the important parts of what we care about
• Generally keeping track of precisely what are the epistemic statuses of different claims, and how they interact
• This is a useful skill for domains where we're projecting out beyond things we can easily check empirically

Then there are some cases where I was more directly applying some mathematical thinking, e.g.:

comment by jackmalde · 2020-08-29T19:08:32.010Z · score: 12 (5 votes) · EA(p) · GW(p)

Would you currently prefer a marginal resource to be used by an impatient longtermist (i.e. to reduce existential risk) or by a patient longtermist (i.e. to invest for the future)? Assume both would spend their resource as effectively as possible.

Where do you think the impatient longtermist would spend their resource and where do you think the patient longtermist would spend their resource?

Finally, how do you best think we should proceed to answer these questions with more certainty?

P.S. there may well have been a much simpler way to formulate these questions, feel free to reformulate if you want to!

comment by Owen_Cotton-Barratt · 2020-08-31T05:15:24.165Z · score: 17 (5 votes) · EA(p) · GW(p)

I'm not sure I really believe that "patient vs impatient longtermists" cleaves the world at its joints. I'll use the terms to mean something like resources aimed at reducing existential risk over the next fifty years or so, versus aiming to be helpful on a timescale of over a century?

In either case I think it depends a lot on the resource in question. Many resources (e.g. people's labour) are not fully fungible with one another, so it can depend quite a bit on comparative advantage.

If we're talking about financial resources, these are fairly fungible. There I tend to think (still applies to both "patient" and "impatient" flavours of longtermism):

• It doesn't make so much sense to analyse at the level of the individual donor
• Instead we should think about the portfolio we want longtermist capital as a whole to be spread across, and what are good ways to contribute to that portfolio at the margin
• Sometimes particular donors will have comparative advantage in giving to certain places (e.g. they have high visibility on a giving opportunity so it's less overhead for them to assess it, and it makes sense for them to fill it)
• Sometimes it's more about coordinating to have roughly the right amount total spent, and not fritter away too much on donor-of-last-resort type games
• Some particular opportunities around now look excellent, but their scale is such that they couldn't absorb a large fraction of longtermist capital over (say) the next five years, so it makes sense for some money to be held in traditional investments

Then more specialised statements:

• From a "patient" perspective, we want to invest in anything which grows the pool of informed longtermist resources faster than traditional investment, e.g. some versions of:
• (this is a lazy catch-all term which includes a ton of very different-looking activities; I think getting better resolution on which strategies are worth employing here is pretty important)
• producing better intellectual material about what longtermists should do (includes research + dissemination, and works by increasing the degree to which people are informed, plus by making the set of ideas seem more legit+solid, so easier to attract further people)
• (specialised) education
• research into what type of activities have good long-term returns for longtermists
• From an "inpatient" perspective, we want to invest in opportunities which are plausibly on critical paths to averting existential catastrophes, e.g. some versions of:
• lots of the things mentioned for the "patient" perspective
• (there's a continuum here, and if you were so impatient you just wanted to work on averting existential risk that would manifest in the next five years, the patient strategies don't look so great)
• research into understanding and characterising the nature and likelihood of imminent risks
• + work which sets this out clearly, and can be usefully understood by many people (I think that broad understanding of risks is a big step towards reducing them)
• work to develop careers of people who might be well-placed to do relevant work
• analogously, work to develop institutions that could play a useful role

Overall, I don't think it makes sense to imagine that one of the "patient" or "impatient" perspectives is correct. I think that the correct longtermist portfolio certainly includes substantial amounts of both of these classes of investments. For financial resources, I think that over the next several years it's likely that the meaningful margins are all about which giving opportunities rise above the bar of saving (rather than which class deserves more money).

For non-financial resources (e.g. the career of a specified individual) it's more plausible to ask about tradeoffs between patient and impatient perspectives. I think it may usually be better if decisions here come down to comparative advantage rather than high-level views of the tradeoffs between "patient" and "impatient".

If I pinned myself down and forced myself to name one bullet point above that I think I'd like to see slightly more of (at the expense of the other bullet points), then, at least in the moment of writing this, I'd say "research into what type of activities have good long-term returns for longtermists". But I think this is correctly quite a small slice of our portfolio: I just want it to be a slightly-less-small slice. (I'd have similar views about some particular subfields of various of the other activity-types.)

comment by jackmalde · 2020-08-31T19:25:14.811Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks for this detailed reply! I appreciate these aren't questions with simple answers.

research into what type of activities have good long-term returns for longtermists

Do you mind elaborating slightly on what you mean here? To me this just reads as finding out the best activities to do if you're a longtermist, but given that you say it's a "small slice of our portfolio" I suspect it is more specific.

comment by Owen_Cotton-Barratt · 2020-08-31T19:53:17.337Z · score: 4 (2 votes) · EA(p) · GW(p)

Sorry that was poorly worded.

I mean for various activities X, estimating how many resources end up devoted to longtermist ends as a result of X (and what the lags are).

e.g. some Xs = writing articles about longtermism; giving talks in schools; talking about EA but not explicitly longtermism; outreach to foundations; consultancy to help people give better according to their values (and clarify those values); ...

comment by jackmalde · 2020-08-31T20:02:27.050Z · score: 1 (1 votes) · EA(p) · GW(p)

Ah OK thanks, that makes sense. Certainly seems worthwhile to have more research into this

comment by MichaelA · 2020-08-29T07:34:08.217Z · score: 12 (5 votes) · EA(p) · GW(p)

What do you believe* that seems important and that you think most EAs/longtermists/people at FHI would disagree with you about?

*Perhaps in terms of your independent impression, before updating on others' views.

comment by Owen_Cotton-Barratt · 2020-09-01T22:58:13.423Z · score: 12 (8 votes) · EA(p) · GW(p)

That in thinking about community/movement building, it's more important to consider something like how people should be -- e.g. what virtues should be cultivated/celebrated -- rather than just what people should do (although of course both matter).

(That's in impression space. I have various drafts related to this, and I hope to get something public up in the next few months, so I'll leave it brief for now.)

comment by MichaelStJules · 2020-08-29T01:57:51.969Z · score: 12 (5 votes) · EA(p) · GW(p)

Do you think Ellsberg preferences and/or uncertainty/ambiguity aversion are irrational?

Do you think it's a requirement of rationality to commit to a single joint probability distribution, rather than use multiple distributions or ranges of probabilities?

Related papers:

comment by Owen_Cotton-Barratt · 2020-08-29T22:06:54.936Z · score: 22 (8 votes) · EA(p) · GW(p)

I think the debate about ambiguity aversion mostly comes down to a bucket error about the meaning of "rational":

• I think that a fully rational actor would:
• not exhibit ambiguity aversion
• commit to a single joint probability distribution
• I think for boundedly rational actors:
• ambiguity aversion is a (very) useful heuristic
• particularly if you're in an environment which is or might be partially designed by other agents who could stand to benefit from your loss
• it can make sense to hold onto ranges of probabilities
• e.g. maybe you think event X has probability between 10% and 20%; that's enough to determine what to do for lots of policy decisions, and in cases where it doesn't determine what to do you can consider whether it's worth the time investment to sharpen your probability estimate (a toy version of this is sketched after this list)
• I think it's a bad (but frequently made, at least implicitly) assumption that boundedly rational actors should mimic the behaviour of fully rational actors in cases where they can work out what that is
• For a particularly vivid example of (something at least strongly analogous to) this assumption breaking, see the theorem in the optimal taxation literature that the top marginal tax rate should be zero
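
(A minimal sketch of the "ranges of probabilities" heuristic from the earlier bullet; the range and payoffs are hypothetical. Since expected value is linear in the probability, checking the two endpoints of the range tells you whether the decision is already settled or worth sharpening.)

```python
def expected_value(p, value_if_x, value_if_not_x):
    """Expected value of acting, given P(X) = p."""
    return p * value_if_x + (1 - p) * value_if_not_x

# Hypothetical inputs: P(X) is somewhere in [0.10, 0.20], and acting
# pays 100 if X happens and -5 if it doesn't.
p_low, p_high = 0.10, 0.20
ev_low = expected_value(p_low, 100, -5)
ev_high = expected_value(p_high, 100, -5)

# Linearity in p means the endpoints bound the EV over the whole range.
if min(ev_low, ev_high) > 0:
    print("Act: every probability in the range favours acting.")
elif max(ev_low, ev_high) < 0:
    print("Don't act: every probability in the range favours not acting.")
else:
    print("Unsettled: consider paying to sharpen the probability estimate.")
```
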
comment by Owen_Cotton-Barratt · 2020-08-29T22:09:08.211Z · score: 8 (5 votes) · EA(p) · GW(p)

Meta: I really appreciated being asked this question! It made me realise I no longer felt confused about ambiguity aversion.

(I think the last time I thought explicitly about it, I'd have said "seems like ambiguity aversion is a good heuristic in some circumstances and that generates the intuitions in favour of it, but it's irrational", and the time before I'd have said "I think ambiguity aversion is irrational".)

comment by Owen_Cotton-Barratt · 2020-08-29T22:14:51.670Z · score: 7 (4 votes) · EA(p) · GW(p)

Meta: the last time I looked into any literature around this was about 5-6 years ago (and I wasn't thorough then), so I really don't know if this perspective is represented somewhere in the debate.

In case it isn't, and if any reader feels like they would like to take on the hard work of fleshing out details and seeing what problems it does/doesn't address, and writing it up for a paper, I'd be really happy to hear that that had been done. (Also feel free to reach out if that might be you and you'd want to discuss.)

comment by MichaelStJules · 2020-08-30T03:07:55.419Z · score: 2 (1 votes) · EA(p) · GW(p)

Separating fully and boundedly rational actors is very helpful.

Would a fully rational actor need to have a universal prior? Wouldn't they need to have justified one choice of a universal prior over all others? It seems like there might be a hard first step here that could prevent them from committing to a single joint probability distribution. Maybe you'd want a prior over universal priors, but then where would that come from?

Maybe this is the only place where multiple distributions can creep in for a fully rational actor, and all other probabilities would be based on your universal prior and observations.

I think it's a bad (but frequently made, at least implicitly) assumption that boundedly rational actors should mimic the behaviour of fully rational actors in cases where they can work out what that is

• For a particularly vivid example of (something at least strongly analogous to) this assumption breaking, see the theorem in the optimal taxation literature that the top marginal tax rate should be zero

Do you mean that they will fail to approximate the fully rational behaviour and sometimes be more biased when they try to approximate it? My instinct in response to the optimal top marginal tax rate being zero is that their model is probably missing very important features (which might be hard to measure or quantify).

comment by Owen_Cotton-Barratt · 2020-08-30T11:26:09.014Z · score: 5 (3 votes) · EA(p) · GW(p)

Do you mean that they will fail to approximate the fully rational behaviour and sometimes be more biased when they try to approximate it?

Roughly yes. They might even exactly match the fully rational behaviour on some dimension under consideration, but in so doing be a worse approximation overall to full rationality.

I think a proper study of full rationality and boundedly rational actors would look at limits of behaviour as you impose weaker and weaker computational constraints. I think that it could be really useful to understand which properties of the fully rational actor are converged upon in a reasonable time and basically hold for powerful-enough boundedly rational actors, and which e.g. only hold in the very limit when the actor's comprehension ability is large compared to the world.

My instinct in response to the optimal top marginal tax rate being zero is that their model is probably missing very important features (which might be hard to measure or quantify).

Yes, I think it is missing imperfect information and bounded rationality. (To be clear, I don't think that anyone working in optimal tax theory thinks that top marginal rates should actually be zero.) I think the theorem is pretty clear that in the perfect-information case with all actors rational, the top rate should be zero (it basically needs an additional assumption about smoothness of preferences, but that's pretty reasonable). And although this sounds surprising, it is just correct! (A compressed sketch of the standard argument is below.)
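
As a hedged gloss on why the theorem holds (reconstructed from the standard optimal-taxation reasoning, not quoted from this thread):

```latex
% Sketch of the zero-top-rate argument (editorial reconstruction).
% Let $T$ be an optimal schedule, $\bar{y}$ the highest income anyone
% chooses under it, and suppose the top marginal rate $T'(\bar{y}) > 0$.
% Consider the modified schedule
\[
  \tilde{T}(y) =
  \begin{cases}
    T(y)       & \text{if } y \le \bar{y},\\
    T(\bar{y}) & \text{if } y > \bar{y}.
  \end{cases}
\]
% $\tilde{T}$ raises exactly the same revenue (no one earned above
% $\bar{y}$), but the top earner now faces a zero marginal rate and so
% weakly prefers to earn more -- a Pareto improvement. That contradicts
% the optimality of $T$, so the marginal rate at the very top must be
% zero (this is where the smoothness-of-preferences assumption is used).
```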

To set up an example that's about bounded rationality in particular, suppose:

• The taxpayers are fully rational
• You, the tax-setter, have a lot of giant spreadsheets which express all of the taxpayer preferences for different levels of work/consumption, marginal value of public funds etc. (so theoretically full information)
• You now get to set all the tax rates (which could be quite complicated)
• If you were fully rational and could calculate everything out, you would be able to set optimal tax policy
• But calculating everything out is too much of a mess, and you can't do it
• You know for certain that the optimal solution would have a marginal top rate of zero somewhere
• But as you can't work out where that is, and as having a marginal top rate of zero is not that important, you'll probably decide on a set of tax rates without a marginal top rate of zero, even though you know that that is certainly wrong
comment by Owen_Cotton-Barratt · 2020-08-30T10:22:29.395Z · score: 4 (2 votes) · EA(p) · GW(p)

Would a fully rational actor need to have a universal prior? Wouldn't they need to have justified one choice of a universal prior over all others? It seems like there might be a hard first step here that could prevent them from committing to a single joint probability distribution. Maybe you'd want a prior over universal priors, but then where would that come from?

I'd usually think of being fully rational as giving constraints after your choice of prior; there are questions about whether some priors are better than others, but you can treat that separately.
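
One way to make the "prior over priors" point precise (a standard observation, added here as a hedged aside rather than anything from the original exchange): for an ideal Bayesian, a distribution over priors collapses into a single mixture prior, so the hierarchy doesn't by itself reintroduce multiple distributions.

```latex
% If $w$ puts weight $w_i$ on candidate prior $\pi_i$, the induced
% belief over hypotheses $\theta$ is the single mixture prior
\[
  p(\theta) = \sum_i w_i \, \pi_i(\theta).
\]
% So uncertainty about the prior is itself just another prior; the
% question of how to justify the weights $w_i$ is where the original
% justification problem reappears.
```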

comment by Misha_Yagudin · 2020-08-28T19:34:44.042Z · score: 12 (6 votes) · EA(p) · GW(p)

Hey Owen, you have a background in mathematics. What is your favorite theorem/proof/object/definition/algorithm/conjecture/...?

comment by Owen_Cotton-Barratt · 2020-08-29T22:34:48.917Z · score: 12 (6 votes) · EA(p) · GW(p)

One that comes to mind:

Theorem: Every finitely presented group is the fundamental group of some compact 4-manifold.

I like it because:

• It's a universal claim relating two very broad classes of objects, such that when I look at the statement I think "wow, how would you even start thinking about how to prove that?"
• There's a proof which is geometric, elegant, and short
• In fact there are multiple quite different geometric proofs!

[With apologies for the fact that this likely makes no sense to most readers.]

comment by Max_Daniel · 2020-08-30T06:27:27.362Z · score: 7 (3 votes) · EA(p) · GW(p)

(FWIW, I hadn't heard of that theorem before but don't feel that surprised by the statement. But I'm quite curious whether the proofs provide an intuitive understanding of why we need 4 dimensions.

Maybe this is hindsight bias, but I feel like if you had asked me "Can we get any member of [broad but appropriately restricted class of groups] as fundamental group of a [sufficiently general class of manifolds]?" my immediate reply would have been "uh, I'd need to think about this, but it's at least plausible that the answer is yes", whereas there is no way I'd have intuitively said "yes, but we need at least four dimensions".)

comment by Owen_Cotton-Barratt · 2020-08-30T11:55:43.757Z · score: 4 (2 votes) · EA(p) · GW(p)

I think that the topology of things in low dimensions often ends up interestingly different from that in high dimensions -- roughly, when your dimensionality gets big enough (often 3, 4, or 5 is "big enough"), there's enough space to do the things you want without things getting in the way.

One of the proofs I know takes advantage of the fact that $S^1 \times D^3$ (which is not simply connected) has boundary $S^1 \times S^2$, which is also the boundary of $D^2 \times S^2$ (which is simply connected); there isn't room for the analogous trick a dimension down.
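
(For the curious: a compressed sketch of that surgery construction, reconstructed from the standard argument rather than quoted from this thread. The specific manifolds above are likewise reconstructions of symbols lost in extraction.)

```latex
% Editorial sketch. Given a finite presentation
% $G = \langle g_1,\dots,g_n \mid r_1,\dots,r_m \rangle$:
% 1. Start with the connected sum of $n$ copies of $S^1 \times S^3$,
\[
  M_0 = (S^1 \times S^3) \,\#\, \cdots \,\#\, (S^1 \times S^3),
\]
% a compact 4-manifold with $\pi_1(M_0)$ free on $n$ generators.
% 2. Represent each relator $r_j$ by an embedded loop; a tubular
% neighbourhood of the loop is a copy of $S^1 \times D^3$.
% 3. Surgery: cut out that $S^1 \times D^3$ and glue in $D^2 \times S^2$
% along the common boundary $S^1 \times S^2$. By van Kampen this kills
% the relator without introducing new generators, so after all $m$
% surgeries the result is a compact 4-manifold with $\pi_1 \cong G$.
```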

comment by Ben_West · 2020-08-31T22:39:00.357Z · score: 11 (7 votes) · EA(p) · GW(p)

My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.

Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)

comment by Gregory_Lewis · 2020-09-04T16:19:22.762Z · score: 6 (3 votes) · EA(p) · GW(p)

FWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP).

So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer term career planning).

comment by Owen_Cotton-Barratt · 2020-09-02T15:24:28.970Z · score: 6 (3 votes) · EA(p) · GW(p)

I don't feel like I'm at all an expert in biosecurity careers, but I agree that directionally they seem more credentialist.

I think this is a consideration against RSP, although it doesn't feel like an overwhelming one, since:

• It could be a reasonable option before a PhD
• This is particularly relevant if taking the time to think about what you want to work on allows you to do a PhD in which your work is much closer to things you eventually care about
• (similarly it could be a good option for some people after a PhD)
• There may well be some roles (now or in the future) which are less credential-locked
comment by aman-patel · 2020-08-29T16:31:17.207Z · score: 10 (4 votes) · EA(p) · GW(p)

How do you think the EA community can improve its interactions and cooperation with the broader global community, especially those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or to simply grow the network of people sympathetic to EA causes, even if they disagree with the principles of EA?

comment by Owen_Cotton-Barratt · 2020-09-02T15:32:46.249Z · score: 4 (2 votes) · EA(p) · GW(p)

Good question.

Of the two options I'd be tempted to say it's more of a priority to spread the underlying arguments, but actually I think something more nuanced: it's a priority to keep engaging with people about the underlying arguments, finding where there seems to be the greatest discomfort and turning a critical eye on the arguments there, looking to see if we can develop stronger versions of them.

I think that talking about the tentative conclusions along with this is important both for growing the network of people sympathetic to those conclusions, and for providing a concrete instantiation of what is meant by the underlying philosophy (there's too much risk of talking past each other or getting lost in abstraction-land without this).

comment by MichaelA · 2020-08-29T07:11:06.232Z · score: 10 (4 votes) · EA(p) · GW(p)

You've done research that seems to me very valuable, and now (I imagine) spend a lot of your time on something more like "facilitating and mentoring other researchers", in your role running the RSP.

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision? What would've made you not make that decision, or what would lead you to switch back to a larger focus on doing your own research?

2. What do you think makes running RSP your comparative advantage (assuming you think that)? More generally, what do you think makes that sort of "research facilitation/mentorship" someone's comparative advantage?

3. Any thoughts on how to test or build one's skills for that sort of role/pathway? (I guess I currently consider things like research management, project management at a research org, and coordinating fellowships to be in the same broad category. This may not be the best way of grouping things.)

(Feel free to just pick one question, or just say related things!)

comment by Owen_Cotton-Barratt · 2020-09-02T16:00:11.696Z · score: 13 (4 votes) · EA(p) · GW(p)

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision?

There was something of an active decision here. It was partly based on a sense that the returns had been good when I'd previously invested attention in mentoring junior researchers, and partly on a sense that there was a significant bottleneck here for the research community.

2. What do you think makes running RSP your comparative advantage (assuming you think that)?

Overall I'm not sure what my comparative advantage is! (At least in the long term.)

I think:

• Some things which make me good at research mentoring are:
• being able to get up to speed on different projects quickly
• holding onto a sense of why we're doing things, and connecting to larger purposes
• finding that I'm often effective in 'reactive' mode rather than 'proactive' mode
• (e.g. I suspect this AMA has the highest ratio of public-written-words / time-invested of anything substantive I've ever done)
• being able to also connect to where the researcher in front of me is, and what their challenges are
• There are definitely parts of running RSP which seem not my comparative advantage (and I'm fortunate enough to have excellent support from project managers who have taken ownership of a lot of the programme)

3. Any thoughts on how to test or build one's skills for that sort of role/pathway?

• Read a lot of research. Form views (and maybe talk to others) about which pieces are actually valuable, and how. Try to work out what seems bad even about good pieces, or what seems good even about bad pieces.
• Be generous with your time looking to help others with their projects. Check in with them afterwards to see if they found it useful. (Try to ask in a way which makes it safe for them to express that they did not.)
• Try your own hand at research. First-hand experience of challenges is helpful for this.

(I've focused on the pathway of "research mentorship"; I think there are other parts you were asking about which I've ignored.)

comment by MichaelA · 2020-08-29T06:58:37.597Z · score: 10 (3 votes) · EA(p) · GW(p)

Thanks for doing this AMA!

Recently I've been thinking around the themes of how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI).

Do you think "malevolence" [EA · GW] (essentially, high levels of traits like Machiavellianism, narcissism,  psychopathy, and/or sadism) may play an important role here? Or do other psychological traits, biases, and limitations seem far more important? Or values? Or things like game-theoretic dynamics, how groups interact, institutional structures,  etc.?

(Feel free to just talk about this area in the terms that make sense to you, rather than answering that particular framing of the question.)

comment by Owen_Cotton-Barratt · 2020-09-02T15:36:35.361Z · score: 4 (2 votes) · EA(p) · GW(p)

Malevolence seems potentially important to me, although I mostly haven't been thinking about it (except a bit about psychopathy and its absence). Things more like game-theoretic dynamics are where a good portion of my attention has been ... but I don't want to claim this means they're more important.

[meta: this is a short answer because while I might have things to say about crisper questions within this space, for saying things-in-general I think it makes more sense to wait until I have coherent enough ideas to publish something.]

comment by MichaelStJules · 2020-08-28T22:36:46.955Z · score: 9 (6 votes) · EA(p) · GW(p)

Which approaches and directions for decision-making under deep uncertainty seem most promising? Are there any that seem likely to be rational but not (apparently?) too permissive, like Mogensen's maximality rule?

Which approaches do you see people using or endorsing that you think are bad (e.g. irrational)?

comment by Owen_Cotton-Barratt · 2020-09-02T15:28:37.840Z · score: 4 (2 votes) · EA(p) · GW(p)

I guess I think that "decision-making under deep uncertainty" is mostly too broad a category to be able to say useful things about (although maybe we can draw together useful lessons that seem to hold in a variety of more specialised contexts), and we're better off trying to look at more particular setups and reasoning about those.

comment by Misha_Yagudin · 2020-09-01T20:43:19.029Z · score: 8 (4 votes) · EA(p) · GW(p)

What intellectual progress did you make in the 2010s? (See SSC and Gwern's essays on the question.)

comment by Owen_Cotton-Barratt · 2020-09-02T15:40:57.826Z · score: 4 (3 votes) · EA(p) · GW(p)

This is an interesting question, but I don't think there's a decent short-answer version; it's more like investing several hours or not at all.

So I'll take this as a prompt to consider the several-hour version, but won't answer for now.

comment by Linch · 2020-08-29T22:08:28.616Z · score: 8 (4 votes) · EA(p) · GW(p)

What percentage of "EA intellectual work" is done as part of the standard academic process? From your perspective, how far away is it from the optimal distribution?

comment by Owen_Cotton-Barratt · 2020-09-02T15:48:43.246Z · score: 8 (3 votes) · EA(p) · GW(p)

Gee, this is really hard to measure.

I'd guess that somewhere between 10% and 30% is done as part of something that we'd naturally call the "standard academic process"?

I think that there are some good reasons for deviation, and some things that academic norms provide that we may be missing out on.

I think academia is significantly set up as a competitive process, where part of the game is to polish your idea and present it in the best light. This means:

• It encourages you to care about getting credit, and people are discouraged from freely sharing early-stage ideas that they might turn into papers, for fear of being scooped
• It encourages people to put in the time to properly investigate the ins and outs of an idea, and find the clearest framing of it, making it more efficient for later readers

I'd like it if we could work out how to get more of the good here with less of the bad. That could mean doing a larger proportion of things within some version of the academic process, or could mean working out other ways to get the benefits.

There's also a credentialing benefit to doing things within the academic process. I think this is non-negligible, but also that if you do really high-quality work anywhere, people will notice and come to you, so I don't think it's necessary to rest on that credentialing.

comment by MichaelStJules · 2020-08-28T22:45:42.378Z · score: 6 (3 votes) · EA(p) · GW(p)

What's the difference between deep uncertainty and (complex) cluelessness?

comment by Owen_Cotton-Barratt · 2020-09-01T15:47:43.719Z · score: 8 (4 votes) · EA(p) · GW(p)

I'm just using "deep uncertainty" to refer to a theme of situations where there are challenges about how you get going. I'm not thinking of it as a crisp referent.

I guess that complex cluelessness would be a subclass of cases of deep uncertainty in my ontology, but I also mean to include e.g. normative uncertainty; Knightian uncertainty; heuristics for estimating probabilities when you don't really know where to start.

comment by Linch · 2020-08-30T02:23:17.487Z · score: 2 (5 votes) · EA(p) · GW(p)

I could probably figure this out online, so don't answer if you don't have a quick answer cached, but is it difficult for RSP scholars (who are not admitted through other channels) to take other classes/do other studies at Oxford, either at the philosophy department or elsewhere? For example, if someone's interested in classes in philosophy, public health, ML, or statistical methods.

comment by Linch · 2020-09-03T00:17:07.922Z · score: 5 (4 votes) · EA(p) · GW(p)

(I'm amused at the distribution of votes on this question).

comment by Owen_Cotton-Barratt · 2020-09-02T15:18:56.937Z · score: 5 (3 votes) · EA(p) · GW(p)

Generally Oxford lectures are open to any university members, although:

• They wouldn't generally get "academic credit" for this
• They wouldn't necessarily be able to join accompanying classes (although we might be able to arrange this)
• I've no idea what the situation is now that so many things are remote because of COVID-19
comment by Linch · 2020-09-03T00:17:23.974Z · score: 2 (1 votes) · EA(p) · GW(p)

Thanks a lot!