Posts

Everyday Longtermism 2021-01-01T17:39:29.452Z
Good altruistic decision-making as a deep basin of attraction in meme-space 2021-01-01T17:11:06.906Z
Web of virtue thesis [research note] 2021-01-01T16:21:19.522Z
Blueprints (& lenses) for longtermist decision-making 2020-12-21T17:25:15.087Z
"Patient vs urgent longtermism" has little direct bearing on giving now vs later 2020-12-09T14:58:21.548Z
AMA: Owen Cotton-Barratt, RSP Director 2020-08-28T14:20:18.846Z
"Good judgement" and its components 2020-08-19T23:30:38.412Z
What is valuable about effective altruism? Implications for community building 2017-06-18T14:49:56.832Z
A new reference site: Effective Altruism Concepts 2016-12-05T21:20:03.946Z
Why I'm donating to MIRI this year 2016-11-30T22:21:20.234Z
Should effective altruism have a norm against donating to employers? 2016-11-29T21:56:36.528Z
Donor coordination under simplifying assumptions 2016-11-12T13:13:14.314Z
Should donors make commitments about future donations? 2016-08-30T14:16:51.942Z
An update on the Global Priorities Project 2015-10-07T16:19:32.298Z
Cause selection: a flowchart [link] 2015-09-10T11:52:07.140Z
How valuable is movement growth? 2015-05-14T20:54:44.210Z
[Link] Discounting for uncertainty in health 2015-05-07T18:43:33.048Z
Neutral hours: a tool for valuing time 2015-03-04T16:33:41.087Z
Report -- Allocating risk mitigation across time 2015-02-20T16:34:47.403Z
Long-term reasons to favour self-driving cars 2015-02-13T18:40:16.440Z
Increasing existential hope as an effective cause? 2015-01-10T19:55:08.421Z
Factoring cost-effectiveness 2014-12-23T12:12:08.789Z
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T11:49:13.771Z
Estimating the cost-effectiveness of research 2014-12-11T10:50:53.679Z
Effective policy? Requiring liability insurance for dual-use research 2014-10-01T18:36:15.177Z
Cooperation in a movement supporting diverse causes 2014-09-23T10:47:11.357Z
Why we should err in both directions 2014-08-21T02:23:06.000Z
Strategic considerations about different speeds of AI takeoff 2014-08-13T00:18:47.000Z
How to treat problems of unknown difficulty 2014-07-30T02:57:26.000Z
On 'causes' 2014-06-24T17:19:54.000Z
Human and animal interventions: the long-term view 2014-06-02T00:10:15.000Z
Keeping the effective altruist movement welcoming 2014-02-07T01:21:18.000Z

Comments

Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-08T10:35:22.283Z · EA · GW

I spent a little while thinking about this. My guess is that of the activities I list:

  • Alice and Bob's efforts look comparable to donating (in external benefit/effort) when the longtermist portfolio is around $100B-$1T/year
  • Clara's efforts look comparable to donating when the longtermist portfolio is around $1B-$10B/year
  • Diya's efforts look comparable to donating when the longtermist portfolio is around $10B-$100B/year
  • Elmo's efforts are harder to assess, because they're closer to directly trying to grow longtermist support, so the value diminishes as the existing portfolio gets larger (just as it does for donations), and it depends more on the underlying quality

All of those numbers are super crude and I might well disagree with myself if I came back later and estimated again. They also depend on lots of details (like how good the individuals are at executing on those strategies).

Perhaps most importantly, they're excluding the internal benefits -- if these activities are (as I suggest) partly good for practicing some longtermist judgement, then I'd really want to see them as a complement to donation rather than just a competitor.

Comment by owen_cotton-barratt on AGB's Shortform · 2021-01-05T21:11:15.477Z · EA · GW

One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.

The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).

Comment by owen_cotton-barratt on AGB's Shortform · 2021-01-05T20:55:44.187Z · EA · GW

Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation but these diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
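
(A minimal sketch of the branching-process calculation behind that claim -- my own illustration, with the Poisson parameter 1.1 taken from the sentence above; the extinction probability from a single founder is the smallest fixed point of the offspring distribution's probability generating function, and from N founders it's that number raised to the power N:)

```python
import math

LAM = 1.1  # mean number of descendants per individual, as in the toy model above

def extinction_prob(lam: float, iters: int = 1000) -> float:
    # Galton-Watson process with Poisson(lam) offspring: the extinction probability q
    # (starting from one individual) is the smallest fixed point of the PGF
    # f(s) = exp(lam * (s - 1)); iterating f from 0 converges to it.
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

q = extinction_prob(LAM)
print(f"extinction probability from 1 founder:     {q:.3f}")          # ~0.82
print(f"survival probability from 1 founder:       {1 - q:.3f}")      # ~0.18, i.e. macroscopic
print(f"extinction probability from 1000 founders: {q ** 1000:.1e}")  # effectively zero
```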

That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high is the unavoidable background rate of such crises (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).

On current understanding, I think the lower bounds on the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts, for as long as we remain concentrated enough in space), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though it's also plausible that it's higher, because there are risks that aren't observed/understood).

Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.

Comment by owen_cotton-barratt on Blueprints (& lenses) for longtermist decision-making · 2021-01-04T14:06:00.845Z · EA · GW

My primary blueprint is as follows:

I want the world in 30 years time to be in as good a state as it can be in order to face whatever challenges that will come next.

I like this! I sometimes use a perspective which is pretty close (though often think about 50 years rather than 30 years, and hold it in conjunction with "what are the challenges we might need to face in the next 50 years?"). I think 30 vs 50 years is a kind-of interesting question. I've thought about 50 because if I imagine e.g. that we're going to face critical junctures with the development of AI in 40 years, that's within the scope where I can imagine it being impacted by causal pathways that I can envision -- e.g. critical technology being developed by people who studied under professors who are currently students making career decisions. By 60 years it feels a bit too tenuous for me to hold on to.

I kind of agree that if you're looking at policy specifically, a shorter time horizon feels good.

Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-04T12:30:01.151Z · EA · GW

I appreciate the pushback!

I have two different responses (somewhat in tension with each other):

  1. Finding "everyday" things to do will necessitate identifying what's good to do in various situations, even when that isn't the highest-value activity an individual could be undertaking
    • This is an important part of deepening the cultural understanding of longtermism, rather than having all of the discussion be about what's good to do in a particular set of activities that's had strong selection pressure on it
      • This is also important for giving people inroads to be able to practice different aspects of longtermism
      • I think it's a bit like how informal EA discourse often touches on how to do everyday things efficiently (e.g. "here are tips for batching your grocery shopping") -- it's not that these are the most important things to be efficient about, but that all-else-equal it's good, and it's also very good to give people micro-scale opportunities to put efficiency-thinking into practice
    • Note however that my examples would be better if they had more texture:
      • Discussion of the nuance of better or worse versions of the activities discussed could be quite helpful for conveying the nuance of what is good longtermist action
      • To the extent that these are far from the highest value activities those people could be undertaking, it seems important to be up-front about that: keeping tabs on what's relatively important is surely an important part of the (longtermist) EA culture
  2. I'm not sure how much I agree with "probably much less positive than some other things that could be done even by 'regular people', even once there are millions or tens of millions of longtermists"
    • I'd love to hear your ideas for things that you think would be much more positive for those people in that world
      • My gut feeling is that they are at the level of "competitive uses of time/attention (for people who aren't bought into reorienting their whole lives) by the time there are tens of millions of longtermists"
        • It seems compatible with that feeling that there could be some higher-priority things for them to be doing as well -- e.g. maybe some way of keeping immersed in longtermist culture, by being a member of some group -- but that those reach saturation or diminishing returns
        • I think I might be miscalibrated about this; I think it would be easier to discuss with some concrete competition on the table
    • Of course to the extent that these actually are arguably competitive actions, if I believe my first point, maybe I should have been looking for even more everyday situations
      • e.g. could ask "what is the good longtermist way to approach going to the shops? meeting a romantic partner's parents for the first time? deciding how much to push yourself to work when you're feeling a bit unwell?"
Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-04T11:36:55.226Z · EA · GW

Thanks, I agree with both of those points.

Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-04T11:27:24.384Z · EA · GW

I really appreciate you highlighting these connections with other pieces of thinking -- a better version of my post would have included more of this kind of thing.

Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-04T10:56:03.304Z · EA · GW

Some further suggestions:

  1. Be more cooperative. (There are arguments about increasing cooperation, especially from people working on reducing S-risks, but I couldn't find any suitable resource in a brief search)
  2. Take a strong stance against narrow moral circles.
  3. Have a good pitch prepared about longtermism and EA broadly. Balance confidence with adequate uncertainty.
  4. Have a well-structured methodology for getting interested acquaintances more involved with EA.
  5. Help friends in EA/longtermism more.
  6. Strengthen relationships with friends who have a high potential to be highly influential in the future.

I basically like all of these. I think there might be versions which could be bad, but they seem like a good direction to be thinking in. 

I'd love to see further exploration of these -- e.g. I think any of your six suggestions could deserve a top-level post going into the weeds (& ideally reporting on experiences from trying to implement it). I feel most interested in #3, but not confidently so.

Comment by owen_cotton-barratt on Everyday Longtermism · 2021-01-04T10:46:56.456Z · EA · GW

I think that the suggestions here, and most of the arguments, should apply to "Everyday EA", which isn't necessarily longtermistic. I'd be interested in your thoughts about where exactly we should make a distinction between everyday longtermist actions and non-longtermist everyday actions.

I agree that quite a bit of the content seems not to be longtermist-specific. But I was approaching it from a longtermist perspective (where I think the motivation is particularly strong), and I haven't thought it through so carefully from other angles.

I think the key dimension of "longtermism" that I'm relying on is the idea that the longish-term (say 50+ years) indirect effects of one's actions are a bigger deal in expectation than the directly observable effects. I don't think that that requires e.g. any assumptions about astronomically large futures. But if you thought that such effects were very small compared to directly observable effects, then you might think that the best everyday actions involved e.g. saving money or fundraising for charities you had strong reason to believe were effective.

Comment by owen_cotton-barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-04T09:59:26.770Z · EA · GW

Yes, that's the kind of thing I had in the back of my mind as I wrote that.

I guess I actually think:

  • On average moving people further into the basin should lead to more useful work
  • Probably we can identify some regions/interventions where this is predictably not the case
    • It's unclear how common such regions are
Comment by owen_cotton-barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T14:32:15.484Z · EA · GW

I have a sense that a large part of the success of scientific norms comes down to their utility being immediately visible.

I agree with this. I don't think science has the attractor property I was discussing, but it has this other attraction of being visibly useful (which is even better). I was trying to use science as an example of the self-correction mechanism.

Or perhaps I am having a semantic confusion: is science self-propagating in that scientists, once cultivated, go on to cultivate others?

Yes, this is the sense of self-propagating that I intended.

Comment by owen_cotton-barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T14:28:41.549Z · EA · GW

In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor.

I think that this is a fair summary of my first point (it also needs enough truth seeking to realise that spreading the approach is valuable). It doesn't really speak to the point about being self-correcting/improving.

I'm not trying to claim that it's obviously the strongest memeplex in the long term. I'm saying that it has some particular strengths (which make me more optimistic than before I was aware of those strengths).

I think another part of my thinking there is that actually quite a lot of people have altruistic preferences already, so it's not like trying to get buy-in for a totally arbitrary goal.

Comment by owen_cotton-barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-01T20:59:42.548Z · EA · GW

Why does it need to rely on spreading without too much questioning?

(BTW I'm using "meme" in the original general sense not the more specific "internet meme" usage; was that obvious enough?)

Comment by owen_cotton-barratt on What’s the low resolution version of effective altruism? · 2021-01-01T13:55:34.119Z · EA · GW

I agree with this. I think "do-gooding for nerds" might be preferable to "charity for nerds", but probably "charity for nerds" is closer to current perceptions.

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T11:30:43.367Z · EA · GW

I read your second critique as implicitly saying "there must be a mistake in the argument", whereas I'd have preferred it to say "the things that might be thought to follow from this argument are wrong (which could mean a mistake in the argument that's been laid out, or in how its consequences are being interpreted)".

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T11:26:51.595Z · EA · GW

I agree that there's a tension in how we're talking about it. I think that Greaves+MacAskill are talking about how an ideal rational actor should behave -- which I think is informative but not something to be directly emulated by boundedly rational actors.

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:39:07.192Z · EA · GW

I think this might be a case of the-devil-is-in-the-details.

I'm in favour of people scanning the horizon for major problems whose negative impacts are not yet being felt, and letting that have some significant impact on which nearer-term problems they wrestle with. I think that a large proportion of things that longtermists are working on are problems that are at least partially or potentially within our foresight horizons. It sounds like maybe you think there is current work happening which is foreseeably of little value: if so I think it could be productive to debate the details of that.

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:32:55.744Z · EA · GW

Cool. I do think that when trying to translate your position into the ontology used by Greaves+MacAskill, it sounds less like "longtermism is wrong" and more like "maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks".

I think that's a pretty interestingly different objection and if it's what you actually want to say it could be important to make sure that people don't hear it as "longtermism is wrong" (because that could lead them to look at the wrong type of thing to try to refute you).

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T02:36:10.765Z · EA · GW

I will focus on two aspects of strong longtermism, henceforth simply longtermism. First, the underlying arguments inoculate themselves from criticism by using arbitrary assumptions on the number of future generations. Second, ignoring short-term effects destroys the means by which we make progress — moral, scientific, artistic, and otherwise.

I found it helpful that you were so clear about these two aspects of what you are saying. My responses to the two are different.

On the first, I think resting on possibilities of large futures is a central part of the strength of the case for longtermism. It doesn't feel like inoculation from criticism to put the strong argument forward. Of course this only applies to the argument for longtermism in the abstract and not for particular actions people might want to take; I think that using such reasoning in favour of particular actions tends to be weak (inoculation is sometimes attempted but it is ineffectual).

On the second, I think this might be an important and strong critique, but it is a critique of how the idea is presented and understood rather than of the core tenets of longtermism; indeed one could make the same arguments starting from an assumption that longtermism was certainly correct, but being worried that it would be self-defeating.

So I'm hearing the second critique (perhaps also the first but it's less clear) as saying that the "blueprints" (in the sense of https://forum.effectivealtruism.org/posts/NdSoipXQhdzozLqW4/blueprints-and-lenses-for-longtermist-decision-making ) people commonly get for longtermism are bad (on both shorttermist and longtermist grounds). Does that sound mostly-correct to you?

Comment by owen_cotton-barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T00:04:47.620Z · EA · GW

It is certainly possible to accuse me of taking the phrase “ignoring the effects” too literally. Perhaps longtermists wouldn’t actually ignore the present and its problems, but their concern for it would be merely instrumental. In other words, longtermists may choose to focus on current problems, but the reason to do so is out of concern for the future.

My response is that attention is zero-sum. We are either solving current pressing problems, or wildly conjecturing what the world will look like in tens, hundreds, and thousands of years. If the focus is on current problems only, then what does the “longtermism” label mean? If, on the other hand, we’re not only focused on the present, then the critique holds to whatever extent we’re guessing about future problems and ignoring current ones.

I agree that attention is a limited resource, but it feels like you're imagining that split attention leads to something like linear interpolation between focused attention on either end; in fact I think it's much better than that, and that the two kinds of attention are complementary. For example we need to wrestle with problems we face today to give us good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.

I actually think that in the longtermist ideal world (where everyone is on board with longtermism), over 90% of attention -- perhaps over 99% -- would go to things that look like problems already. But at the present margin in the actual world, the longtermist perspective is underappreciated and so looks particularly valuable.

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-19T16:59:33.366Z · EA · GW

They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0.

Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people holding this position.) I think there are enough variables in the world that have some nonzero expected impact on the long term future that for very many actions we can usually hazard guesses about their impact on at least some such variables, and hence about the expected impact of the individual actions (of course in fact one will be wrong in a good fraction of cases, but we're talking about in expectation).

Note I feel fine about people saying of lots of activities "gee I haven't thought about that one enough, I really don't know which way it will come out", but I think it's a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-18T16:26:50.842Z · EA · GW

I think it's a combination of a couple of things.

  1. I'm not fully bought into strong longtermism (nor, I suspect, are Greaves or MacAskill), but on my inside view it seems probably-correct.

When I said "likely", that was covering the fact that I'm not fully bought in.

  2. I'm taking "strong longtermism" to be a concept in the vicinity of what they said (and meaningfully distinct from "weak longtermism", for which I would not have said "by far"), which I think is a natural category they are imperfectly gesturing at. I don't agree with a literal reading of their quote, because it's missing two qualifiers: (i) it's overwhelmingly what matters rather than the only thing; & (ii) of course we need to think about shorter-term consequences in order to make the best decisions for the long term.

Both (i) and (ii) are arguably technicalities (and I guess that the authors would cede the points to me), but (ii) in particular feels very important.

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-18T16:08:16.207Z · EA · GW

I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains, but the thrust of why I was making that point was this: I think that for pushing out the boundaries of collective knowledge it's roughly correct to adopt the idealistic stance I was recommending; & I think that Vaden is engaging in earnest and noticing enough important things that there's a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to encourage, rather than only encouraging activity that is likely to lead to the most-correct beliefs within the convex hull of things people already understand).

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-17T23:55:53.992Z · EA · GW

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :) 

It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.

I'm confused about the claim 

>I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), that 

>The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized.

So my interpretation had been that they were using a technical sense of "evaluating actions", meaning something like "if we had access to full information about consequences, how would we decide which ones were actually good".

However, on a close read I see that they're talking about ex ante effects. This makes me think that this is at least confusingly explained, and perhaps confused. It now seems most probable to me that they mean something like "we can ignore the effects of the actions contained in the first 100 years, except insofar as those feed into our understanding of the longer-run effects". But the "except insofar ..." clause would be concealing a lot, since 100 years is so long that almost all of our understanding of the longer-run effects must go via guesses about the long-term goodness of the shorter-run effects.

[As an aside, I've been planning to write a post about some related issues; maybe I'll move it up my priority stack.]

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures? 

I like the question; I think this may be getting at something deep, and I want to think more about it.

Nonetheless, my first response was: while I can't write this down, if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.
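
To make that hand-wave a notch more concrete (my own back-of-envelope formalisation, with made-up symbols): if there are at most $T$ time steps and at most $C$ distinguishable configurations of the accessible universe at each step, then the set of possible futures $\Omega$ satisfies $|\Omega| \le C^T < \infty$. One can then take the $\sigma$-algebra to be the full power set $2^\Omega$, and any assignment of non-negative credences summing to 1 over the finitely many futures is a well-defined probability measure, so expectations of bounded quantities are automatically well-defined.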

The reason I want to think more about it is that I think there's something interesting about the interplay between objective and subjective probabilities here. How much should it help me as a boundedly rational actor to know that in theory a fully rational actor could put a measure on things, if it's practically immeasurable for me?

Considering  that the Open Philanthropy Project has poured millions into AI Safety, that its listed as a top cause by 80K, and that EA's far-future-fund makes payouts to AI safety work, if Shivani's reasoning isn't to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harshness in tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.

Sorry, I made an error here in just reading Vaden's quotation of Shivani's reasoning rather than looking at it in full context.

In the construction of the argument in the paper Shivani is explicitly trying to compare the long-term effects of action A to the short-term effects of action B (which was selected to have particularly good short-term effects). The paper argues that there are several cases where the former is larger than the latter. It doesn't follow that A is overall better than B, because the long-term effects of B are unexamined.

The comparison of AMF to AI safety that was quoted felt like a toy example to me because it obviously wasn't trying to be a full comparison between the two, but was rather being used to illustrate a particular point. (I think maybe the word "toy" is not quite right.)

In any case I consider it a minor fault of the paper that one could read just the section quoted and reasonably come away with the impression that comparing the short-term number of lives saved by AMF with the long-term number of lives expected to be saved by investing in AI safety was the right way to compare between those two opportunities. (Indeed one could come away with the impression that the AMF price to save a life was the long-run price, but in the structure of the argument being used they need it to be just the short-term price.)

Note that I do think AI safety is very important, and I endorse the actions of the various organisations you mention. But I don't think that comparing some long-term expectation on one side with a short-term expectation on the other is the right argument for justifying this (particularly versions which make the ratio-of-goodness scale directly with estimates of the size of the future), and that was the part I was objecting to. (I think this argument is sometimes seen in earnest "in the wild", and arguably on account of that the paper should take extra steps to make it clear that it is not the argument being made.)

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-17T10:24:31.013Z · EA · GW

I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant. 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-16T23:49:31.554Z · EA · GW

People are united across time working for the good! Each generation does what it can to make the world a little bit better for its descendants, and in this way we are all united. 

I meant if everyone were actively engaged in this project. (I think there are plenty of people in the world who are just getting on with their thing, and some of them make the world a bit worse rather than a bit better.)

Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants"; there will be some interesting content about which dimensions of betterness we pay most attention to (e.g. I think that the longtermist lens on things makes some dimension like "how much does the world have its act together on dealing with possible world-ending catastrophes?" seem really important).

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-16T23:43:31.503Z · EA · GW

Also my hope was that this would highlight a methodological error (equating made up numbers to real data) that could be rectified, whether or not you buy my other arguments about longtermism.  I'd be a lot more sympathetic with longtermism in general if the proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities  (and not subjective probabilities with objective ones, derived from data). 

I'm sympathetic to something in the vicinity of your complaint here: striving to compare like with like, and being cognizant of the weaknesses of the comparison when that's impossible (e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper, I think it would rightly get a lot of criticism).

(I don't think that "subjective" and "objective" are quite the right categories here, btw; e.g. even the GiveWell estimates of cost-to-save-a-life include some subjective components.)

In terms of your general sympathy with longtermism -- it makes sense to me that the behaviour of its proponents should affect your sympathy with those proponents. And if you're thinking of the position as a political stance (who you're allying yourself with, etc.) then it makes sense that it could affect your sympathy with the position. But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see -- whether or not anyone actually made them. (Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there, but I still think it's worth thinking about.)

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-16T23:28:06.884Z · EA · GW

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). 

I think it proves both too little and too much.

Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we were certain that the accessible universe were finite (as is suggested by (my lay understanding of) current physical theories), and we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

In that world, would you be happy to drop your complaints? I don't really think you should, so it would be good to understand what the real heart of the issue is.

Too much, in the sense that if we apply the argument naively then it appears to rule out using EVs as a decision-making tool in many practical situations (where subjective probabilities are fed into the process), including many where we have practical experience of it and it has a good track record.

Overall, my take is something like:

  • This is a technical obstruction around use of EVs, and one which might turn out to be important
  • We know that EVs seem like a really important/useful tool in a wide range of domains
    • Including:
      • ones with small probabilities (e.g. seatbelts)
      • ones based on subjective probabilities (e.g. talk to traders about their use of them)
  • Since EVs seem useful at least for reasoning about finite-horizon worlds, it would be way premature to discard them
    • Instead let's keep on using them and see where it gets us
    • Let's remain cautious, particularly in cases which most risk brushing up against pathologies
    • Let's give the technical obstruction a bit of attention, and see if we can come up with anything better (see e.g. Tarsney's work on stochastic dominance)

If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN.  

[Mostly an aside] I think the example has been artificially simplified to make the point cleaner for an audience of academic philosophers, and if you take account of indirect effects from giving to AMF then properly we should be comparing NaN to NaN. But I agree that we should not be trying to make any longtermist decisions by literally taking expectations of the number of future lives saved.

Does this not refute at least 1 / 2 of the assumptions longtermism needs to 'get off the ground'?  

Not in my view. I don't think we should be using expectations over future lives as a fundamental decision-making tool, but I do think that thinking in terms of expectations can be helpful for understanding possible future paths. I think it's a moderately robust point that the long-term impacts of our actions are predictably a bigger deal than the short-term impacts -- and this point would survive for example artificially capping the size of possible futures we could reach.

(I think it's a super important question how longtermists should make decisions; I'll write up some more of my thoughts on this sometime.)

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-15T22:56:23.021Z · EA · GW

In response to the plea at the end (and quoting of Popper) to focus on the now over the utopian future: I find myself sceptical and ultimately wanting to disagree with the literal content, and yet feeling that there is a good deal of helpful practical advice there:

  • I don't think that we must focus on the suffering now over thinking about how to help the further-removed future
    • I do think that if all people across time were united in working for the good, then our comparative advantage of being the only people who could address current issues (for both their intrinsic and instrumental value) would mean that a large share of our effort would be allocated to this
  • I do think that attempts to focus on hard-to-envision futures risk coming to nothing (or worse) because of poor feedback loops
    • In contrast tackling issues that are within our foresight horizon allows us to develop experience and better judgement about how to address important issues (while also providing value along the way!)
    • I don't think this means we should never attempt such work; rather we should do so carefully, and in connection with what we can learn from wrestling with more imminent challenges
Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-15T22:36:30.759Z · EA · GW

As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying. I think that on simple models of future growth -- such as are often used in practice -- it does, but if you give some credence to wild futures with crazy growth rates, then it's easy to make the entire thing undefined even with a positive discount rate for pure time preference.
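
(A toy construction of the sort I have in mind -- my own sketch, not something from the paper under discussion: write $\beta = 1/(1+\delta) < 1$ for the discount factor coming from a pure rate of time preference $\delta > 0$, and suppose utilities are non-negative. If you place credence $p > 0$ on a scenario in which utility grows forever at rate $g > \delta$, then

$$\mathbb{E}\Big[\sum_t \beta^t u_t\Big] \;\ge\; p \sum_t \Big(\frac{1+g}{1+\delta}\Big)^t = \infty,$$

so the expectation is infinite despite the positive discount rate; and if you also allow negative utilities with symmetric credence on an equally extreme bad scenario, it becomes a genuinely undefined $\infty - \infty$.)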

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-15T22:20:12.924Z · EA · GW

Regarding the point about the expectation of the future being undefined: I think this is correct and there are a number of unresolved issues around exactly when we should apply expectations, how we should treat them, etc.

Nonetheless I think that we can say that they're a useful tool on lots of scales, and many of the arguments about the future being large seem to bite without relying on getting far out into the tails of our hypothesis space. I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.

To see more discussion of this topic, I particularly recommend Daniel Kokotajlo's series of posts on tiny probabilities of vast utilities.

Comment by owen_cotton-barratt on A case against strong longtermism · 2020-12-15T22:04:48.499Z · EA · GW

Thanks! I think that there's quite a lot of good content in your critical review, including some issues that really should be discussed more. In my view there are a number of things to be careful of, but ultimately not enough to undermine the longtermist position. (I'm not an author on the piece you're critiquing, but I agree with enough of its content to want to respond to you.)

Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications. I think this is a useful type of criticism, but one that often leads me to suspect that neither side is simply incorrect, and instead to look for a good synthesis position which understands all of the important points. (Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.)

The point I most appreciate you making is that it seems like strong longtermism could be used to justify ignoring all sorts of pressing present problems. I think that this is justifiably concerning, and deserves attention. However my view is more like "beware naive longtermism" (rather like "beware naive utilitarianism") rather than thinking that the entire framework is lost.

To expand on that:

  • I think that a properly interpreted version of strong longtermism would not recommend that the world ignores present-day issues
    • Indeed, building towards the version of the future which is most likely to produce really good long-term outcomes will mean both removing acute pains and issues for the world, and broadly fostering good decision-making (which would lead to people solving urgent and tractable issues)
    • Of course there are a lot of things wrong with the world and we're not very close to optimal global allocation of resources, so I think it's acceptable as a form of triage to say "right now while these extremely pressing global issues (existential risk etc.) are so severely neglected, we'd prefer to devote marginal resources there than to solving immediate suffering"
  • I think that "strong longtermism" (as analysed by philosophers) won't end up being the best version of action-guiding advice to spread (even on longtermist grounds), because there will be too much scope for naive interpretation; rather we'll end up building up a deeper repertoire of things to communicate

(I'll address a few other points in replies to this comment, for better threading and because they seem less centrally important to me.)

Comment by owen_cotton-barratt on The Fermi Paradox has not been dissolved · 2020-12-14T18:01:06.505Z · EA · GW

I think this is a good argument against multiple really hard steps, but doesn't say that much about the possibility of one extremely hard step.

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-14T15:50:10.747Z · EA · GW

I definitely think it's relevant that we can get a better sense of the considerations, and that this will affect our future decisions.

To a first-order approximation, waiting a few years means passing on the immediate (non-financial) investment opportunities. These may have analogues later, but there could be low-hanging fruit which will remain forever unpicked if we pass now, and this is what drives the impetus to invest even given the rate of improving knowledge. Whether those are ultimately good enough to invest in now comes back to being a matter of messy empirics.

As a meta-consideration, I'm particularly excited about investments which will help us in the future to assess which investments are worthwhile. This could mean early experiments with different types of thing, or putting work into measuring the effects of different investments.

Overall I still feel in favour of a bit more spending at current margins, but far from wanting longtermists to spend down all of our capital this year. But I'm super interested to have discussion about which funded/unfunded things actually do or don't represent stock-market-beating investments for the community.

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-14T15:05:24.011Z · EA · GW

I think that these categories make some sense to gesture with, and describe reasonable paradigm cases, but actually the lines between the categories are super blurry, such that it's hard to use them too much as the basis for subsequent analysis.

For instance, as you point out some things that "look like object level work are also 'meta'". But some other weird cases:

  • Some things that sound like "meta" might not be justifiable as long-term investments. For instance running an early-career programme for people to get into field X that seems undersupplied, if the people entering field X won't do so with a good understanding and motivation linked to the reasons for it being a priority in the first place.
  • Since a lot of work will have both object-level and meta-level effects it seems hard to draw a line between them such that we could even start counting "0.5%"
    • I basically don't know how to do this for current spending
    • You're talking about it in terms of "aims", which I think is getting at something real, but also gives a lot of weird cases:
      • I think it could mean that the same activity counts as "object" or "meta" depending on who's funding it, and what their aims are in doing so
      • I think lots of time people won't have a clean idea that one of these is "the aim"; they'll have a sense that the activity is good (which will connect to impressions about its various effects)
      • I think "aim of impact" is kind of a weird way of putting it, since almost all activities in the longtermist space only hope to have impact quite indirectly factored through other people's actions, so it's hard even in principle to know where the line should be
  • If Alice uses her savings to do a PhD for career capital reasons, that counts as investment, but if I give her a scholarship for the same reasons, does that count as meta rather than investment?

Overall I think I'd prefer to think about "how good are various opportunities as investments in the longtermist community?", as well as "how good are various opportunities at making progress towards other proxies-for-good that we've identified?". Activities can score well on either, both, or neither of these, rather than being classed as one type or the other.

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T23:06:57.614Z · EA · GW

Thanks, I think this is really useful to unpack.

I do agree with all of this, but one important point wasn't salient to me at the time of writing the post: that you want the resources returned to be under direction as sophisticated as your future self's, or else they should get discounted, and that this might constitute a narrow target. I'm uncertain how narrow a target it is, but I think that getting clarity on that seems quite important, as it could affect judgements about which opportunities are good investments.

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T22:21:19.510Z · EA · GW

Thanks for the link! (I think it's self-promotional but clearly not selfish; it's just helpful to connect this with previous discussion, and I hadn't seen it before.)

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T22:12:49.843Z · EA · GW

In the abstract I agree that you could think that. But I'd make some of the same claims for the urgent longtermist as the patient longtermist: that some of the best investment opportunities are probably non-financial, and we should be trying to make use of those before going on to financial investments. (There's a question about whether at current margins we're already using them up.)

I think there are some principled reasons to be unsurprised if the best available non-financial investment opportunities are better than the best available financial investment opportunities. Financial investment is a competitive market; there are lots of people who have money and want more money, and so for a given risk tolerance (and without lots of work) you can't expect to massively outperform what others are making.

There are also markets (broadly understood) competing for buy-in to worldviews. At first glance these might look less attractive to enter into, since they seem to be (roughly) zero-sum. But unlike the financial case, capital is not fungible across worldviews, so we shouldn't assume that market forces mean that the returns from the best opportunities can't get too good (or they'd be taken by others). And I'm not concerned about the zero-sum point, because I don't think that the longtermist worldview is just an arbitrary set of beliefs; I think that it has ~truth on its side, and providing people with arguments plus encouraging them to reflect will on average be quite good for its market share (and to the extent that it isn't, maybe that's a sign that it's getting something wrong). This is a pretty major advantage and makes it plausible that there are some really excellent opportunities available. Then I think growth over the last few years is evidence that at least some of the activities people engage in have really good returns; the crucial question is how many really good ones are being left on the table.

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T21:45:07.347Z · EA · GW

I guess "large part of what I was trying to say" was an overstatement; it's an illustrative facet of the large thing (which is that there isn't the obvious coupling).

Anyhow thanks for the pointer, I made some small language tweaks to the second part quoted (which hopefully help).

Comment by owen_cotton-barratt on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T18:00:04.719Z · EA · GW

But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades.

Yes, I totally agree with this. Indeed a large part of what I was trying to say was that I'm more sympathetic to this strategy right now for "urgent longtermists" than "patient longtermists" (although it happens that I mostly still think it's beaten by non-financial investment opportunities which will pay off soon enough).

[LMK if you found something I wrote confusing; I could consider editing to improve clarity.]

Comment by owen_cotton-barratt on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-14T23:31:38.435Z · EA · GW

Thanks! This largely seems rather better.

One paragraph where you've lost the meaning is:

On the right is a factorisation that I think makes the quantity easier to interpret and measure. But it is only justifiable if the terms I've added cancel out, so I'm going to present the case for why I think it is.

I'm not claiming that my original was the easiest to follow, but the point that needs justifying is not that the terms cancel (that's mathematically trivial), but that the decomposition is actually an improvement in terms of ease of understanding or ease of estimation, relative to the term on the left of the equation.
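
(For what it's worth, the generic shape of the move -- not the exact equation from the talk, which I won't try to reproduce here -- is just multiplying and dividing by some intermediate quantity $S$:

$$\frac{\text{value}}{\text{extra resources}} \;=\; \frac{\text{value}}{S} \times \frac{S}{\text{extra resources}},$$

which holds trivially whenever $S \neq 0$. The substantive claim is that each factor on the right is easier to understand or estimate than the ratio on the left.)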

Comment by owen_cotton-barratt on Some thoughts on EA outreach to high schoolers · 2020-09-14T23:03:08.211Z · EA · GW

I don't want to name individuals on a public forum, but noting that there are at least a couple of individuals at FHI who passed through one of the programmes you mention (I don't know about counterfactual attribution).

Comment by owen_cotton-barratt on Judgement as a key need in EA · 2020-09-13T21:46:25.501Z · EA · GW

I'm actually confused about what you mean by your definition. I have an impression about what you mean from your post, but if I try to just go off the wording in your definition I get thrown by "calibrated". I naturally want to interpret this as something like "assigns confidence levels to their claims that are calibrated", but that seems ~orthogonal to having the right answer more often, which means it isn't that large a share of what I care about in this space (and I suspect is not all of what you're trying to point to).

Now I'm wondering: does your notion of judgement roughly line up with my notion of meta-level judgement? Or is it broader than that?

Comment by owen_cotton-barratt on Judgement as a key need in EA · 2020-09-13T15:15:08.685Z · EA · GW

For one data point, I filled in the EALF survey and had in mind something pretty close to what I wrote about in the post Ben links to. I don't remember paying much attention to the parenthetical definition -- I expect I read it as a reasonable attempt to gesture towards the thing that we all meant when we said "good judgement" (though on a literal reading it's something much narrower than I think even Ben is talking about).

I think that good judgement in the broad sense is useful ~everywhere, but that:

  • It's still helpful to try to understand it, to know better how to evaluate it or improve at it;
  • For reasons Ben outlines, it's more important for domains where feedback loops are poor;
  • The cluster Ben is talking about gets disproportionately more weight when thinking about strategic directions.
Comment by owen_cotton-barratt on An argument for keeping open the option of earning to save · 2020-09-09T23:03:11.097Z · EA · GW

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

Other holders of financial capital may not have enough resources to realistically make up for that.

Thanks for pulling this out, I think this is the heart of the argument. (I think it's quite valuable to show how the case relies on this, as it helps to cancel a possible reading where everyone should assume that they personally will have better judgement than the aggregate community.)

I think it's an interesting case, and worth considering carefully. We might want to consider:

  1.  Whether this will actually lead to incorrect spending?
    • My central best guess is that there will be enough flow of other money into longtermist-aligned purposes that this won't be an issue in coming decades, but I'm quite uncertain about that
  2. What are the best options for mitigating it?
    • Earning to save is certainly one possibility, but we could also consider e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists
Comment by owen_cotton-barratt on An argument for keeping open the option of earning to save · 2020-09-09T22:52:38.062Z · EA · GW

Thanks for the thoughtful reply!

On reflection I realise that in some sense the heart of my objection to the post was in vibe, and I think I was subconsciously trying to correct for this by leaning into the vibe (for my response) of "this seems wrongfooted".

But I do think the post tries to caveat a lot and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting.

I quite agree that it's good if even minor considerations can be considered in a quick post. I think the issue is that the tone of the post is kind of didactic, let-me-explain-all-these-things (and the title is "an argument for X", and the post begins "I used to think not-X"): combined, these project quite a sense of "X is solid", and while it's great that it had lots of explicit disclaimers about this just being one consideration etc., I don't think they really do the work of cancelling out that tone when it comes to casual readers' gut impressions.

For an exaggerated contrast, imagine if the post read like:

A quick thought on earning-to-save

I've been wondering recently about whether earning-to-save could make sense. I'm still not sure what I think, but I did come across a perspective which could justify it.

[argument goes here]

What do people think? I haven't worked out how big a deal this seems compared to the considerations against earning to save (and some of them are pretty substantial), so it might still be a pretty bad idea overall.

I think that would have triggered approximately zero of my vibe concerns.

Alternatively, I think it could have worked to have a didactic post on "Considerations around earning-to-save" that felt like it was trying to collect the important considerations (which I'm not sure have been well laid out anywhere, so there might not be a canonical sense of which arguments are "new") rather than particularly emphasising one consideration.

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T16:20:06.777Z · EA · GW

I didn't downvote, but I also didn't even understand whether you were agreeing with me or disagreeing with me (and strongly suspected that "would have to" was an error in either case).

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T01:07:21.227Z · EA · GW

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

That's fine! :)

In turn, an apology: my controversial view has baited you into responding, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try to exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; it's helpful for the exhibition to be able to draw attention to features of a specific instance; and you're providing what-seems-like-implicit-permission for me to do that. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA.

To be clear: I strongly agree with this, and this was a big part of what I was trying say above.

So donating to a seeing eye dog charity isn't really a good thing to do.

This is non-central, but FWIW I disagree with this. Donating to the guide dog charity usually is a good thing to do (relative to important social norms where people have property rights over their money); it's just that it turns out there are fairly accessible actions which are quite a lot better.

Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different.

This, I'm afraid, is the type of statement that really bugs me. It's trying to collapse a complex issue onto simple dimensions, draw a simple conclusion there, and project it back onto the original complex world. But in doing so it's thrown common sense out of the window!

If I believed that choosing to follow a ve*an diet usually didn't have an opportunity cost, I would expect to see:

  • People usually being willing to go ve*an for a year in exchange for some small material gain
    • In theory, if there were no opportunity cost, something trivial like $10 should be enough; but I think many non-ve*ans would be unwilling to do this even for $1000
    • [As an aside, I think taxes on meat would probably be a good policy that might well be accessible]
  • Almost everyone who goes ve*an for ethical reasons keeping it up
    • In fact, a significant proportion of people stop

Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals.

I certainly don't claim that it does, in any utilitarian comparison of welfare. But now the argument seems almost precisely analogous to:

"You could help the poorest people in the world a tremendous amount for the cost of a cup of coffee. Since your welfare shouldn't outweigh theirs, you should forgo that cup of coffee, and every other small luxury in your life, to give more to them."

I think EA correctly rejects this argument, and that it's correct to reject its analogue as well. (I think the argument is stronger for ve*anism than for giving to the poor instead of buying coffee; but I also think that there are better giving opportunities than giving directly to the poor, and that when you work it through, the coffee argument ends up being stronger than the corresponding one for ve*anism.)

---

Again, I'm not claiming that EAs shouldn't be ve*an. I think it's a morally virtuous thing to do!

But I don't think EAs have a monopoly on virtue. I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?" then the implicature is that this is a bigger deal than, say, moving from giving away 7% of your income to giving away 8%; and I think that implicature is badly misleading.

Notes:

  • There may be some people for whom the opportunity cost is trivial
    • I think there are probably quite a few people for whom the opportunity cost is actually negative -- i.e. it's overall easier for them to be ve*an than not
  • I would feel very good about encouragement to check whether people fall into one of these buckets, since in cases where they do, dietary change may be a particularly efficient way to do good
  • I'd also feel very good about moral exhortation to be ve*an that was explicit about not being grounded in EA thinking, like:
    • "Many EAs try to be morally serious in all aspects of their lives, beyond just trying to optimise for the most good achievable. This leads us to ve*anism. You might want to consider it."
Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T16:00:11.696Z · EA · GW

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision?

There was something of an active decision here. It was partly based on a sense that the returns had been good when I'd previously invested attention in mentoring junior researchers, and partly on a sense that there was a significant bottleneck here for the research community.

2. What do you think makes running RSP your comparative advantage (assuming you think that)? 

Overall I'm not sure what my comparative advantage is! (At least in the long term.) 

I think:

  • Some things which make me good at research mentoring are:
    • being able to get up to speed on different projects quickly
    • holding onto a sense of why we're doing things, and connecting to larger purposes
    •  finding that I'm often effective in 'reactive' mode rather than 'proactive' mode 
      • (e.g. I suspect this AMA has the highest ratio of public-written-words / time-invested of anything substantive I've ever done)
    • being able to also connect to where the researcher in front of me is, and what their challenges are
  • There are definitely parts of running RSP which don't seem like my comparative advantage (and I'm fortunate enough to have excellent support from project managers who have taken ownership of a lot of the programme)

3. Any thoughts on how to test or build one's skills for that sort of role/pathway?

  • Read a lot of research. Form views (and maybe talk to others) about which pieces are actually valuable, and how. Try to work out what seems bad even about good pieces, or what seems good even about bad pieces.
  • Be generous with your time looking to help others with their projects. Check in with them afterwards to see if they found it useful. (Try to ask in a way which makes it safe for them to express that they did not.)
  • Try your own hand at research. First-hand experience of the challenges is helpful for this kind of role.

(I've focused on the pathway of "research mentorship"; I think there are other parts you were asking about which I've ignored.)

Comment by owen_cotton-barratt on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-02T15:48:43.246Z · EA · GW

Gee, this is really hard to measure.

I'd guess that somewhere between 10% and 30% is done as part of something that we'd naturally call the "standard academic process"?

I think that there are some good reasons for deviation, and some things that academic norms provide that we may be missing out on.

I think academia is significantly set up as a competitive process, where part of the game is to polish your idea and present it in the best light. This means:

  • It encourages you to care about getting credit, and people are discouraged from freely sharing early-stage ideas that they might turn into papers, for fear of being scooped
    • This seems broadly bad
  • It encourages people to put in the time to properly investigate the ins and outs of an idea, and find the clearest framing of it, making it more efficient for later readers
    • This seems broadly good

I'd like it if we could work out how to get more of the good here with less of the bad. That could mean doing a larger proportion of things within some version of the academic process, or could mean working out other ways to get the benefits.

There's also a credentialing benefit to doing things within the academic process. I think this is non-negligible, but also that if you do really high-quality work anywhere, people will observe this and come to you, so I don't think it's necessary to rest on that credentialing.