Posts

Forget replaceability? (for ~community projects) 2021-03-31T14:41:23.899Z
Everyday Longtermism 2021-01-01T17:39:29.452Z
Good altruistic decision-making as a deep basin of attraction in meme-space 2021-01-01T17:11:06.906Z
Web of virtue thesis [research note] 2021-01-01T16:21:19.522Z
Blueprints (& lenses) for longtermist decision-making 2020-12-21T17:25:15.087Z
"Patient vs urgent longtermism" has little direct bearing on giving now vs later 2020-12-09T14:58:21.548Z
AMA: Owen Cotton-Barratt, RSP Director 2020-08-28T14:20:18.846Z
"Good judgement" and its components 2020-08-19T23:30:38.412Z
What is valuable about effective altruism? Implications for community building 2017-06-18T14:49:56.832Z
A new reference site: Effective Altruism Concepts 2016-12-05T21:20:03.946Z
Why I'm donating to MIRI this year 2016-11-30T22:21:20.234Z
Should effective altruism have a norm against donating to employers? 2016-11-29T21:56:36.528Z
Donor coordination under simplifying assumptions 2016-11-12T13:13:14.314Z
Should donors make commitments about future donations? 2016-08-30T14:16:51.942Z
An update on the Global Priorities Project 2015-10-07T16:19:32.298Z
Cause selection: a flowchart [link] 2015-09-10T11:52:07.140Z
How valuable is movement growth? 2015-05-14T20:54:44.210Z
[Link] Discounting for uncertainty in health 2015-05-07T18:43:33.048Z
Neutral hours: a tool for valuing time 2015-03-04T16:33:41.087Z
Report -- Allocating risk mitigation across time 2015-02-20T16:34:47.403Z
Long-term reasons to favour self-driving cars 2015-02-13T18:40:16.440Z
Increasing existential hope as an effective cause? 2015-01-10T19:55:08.421Z
Factoring cost-effectiveness 2014-12-23T12:12:08.789Z
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T11:49:13.771Z
Estimating the cost-effectiveness of research 2014-12-11T10:50:53.679Z
Effective policy? Requiring liability insurance for dual-use research 2014-10-01T18:36:15.177Z
Cooperation in a movement supporting diverse causes 2014-09-23T10:47:11.357Z
Why we should err in both directions 2014-08-21T02:23:06.000Z
Strategic considerations about different speeds of AI takeoff 2014-08-13T00:18:47.000Z
How to treat problems of unknown difficulty 2014-07-30T02:57:26.000Z
On 'causes' 2014-06-24T17:19:54.000Z
Human and animal interventions: the long-term view 2014-06-02T00:10:15.000Z
Keeping the effective altruist movement welcoming 2014-02-07T01:21:18.000Z

Comments

Comment by Owen_Cotton-Barratt on Concerns with ACE's Recent Behavior · 2021-04-22T09:36:25.629Z · EA · GW

I didn't downvote (because as you say it's providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I'm reminded of "missing moods"; it seems like there's a legitimate position of "it would be great to have time to hash this out but unfortunately we find it super time consuming so we're not going to", but that position would naturally come with a mood of sadness that there wasn't time to get into things, whereas the mood here feels more like "why do we have to put up with you morons posting inaccurate critiques?". And perhaps that's a reasonable position, but it at least leaves a kind of bad taste.

Comment by Owen_Cotton-Barratt on "Good judgement" and its components · 2021-04-17T23:45:53.010Z · EA · GW

Yeah my quick guess is that (as for many complex skills) g is very helpful, but that it's very possible to be high g without being very good at the thing I'm pointing at (partially because feedback loops are poor, so people haven't necessarily had a good training signal for improving).

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-11T20:10:26.753Z · EA · GW

I guess I significantly agree with all of the above, and I do think it would have been reasonable for me to mention these considerations.  But since I think the considerations tend to blunt rather than solve the issues, and since I think the audience for my post will mostly be well aware of these considerations,  it still feels fine to me to have omitted mention of them? (I mean, I'm glad that they've come up in the comments.)

I guess I'm unsure whether there's an interesting disagreement here. 

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-09T08:13:06.522Z · EA · GW

Yeah, I totally agree that if you're much more sophisticated than your (potential) donors you want to do this kind of analysis. I don't think that applies in the case of what I was gesturing at with "~community projects", which is where I was making the case for implicit impact markets.

Assuming that the buyers in the market are sophisticated:

  1. in the straws case, they might say "we'll pay $6 for this output" and the straw org might think "$6 is nowhere close to covering our operating costs of $82,000" and close down
  2. I think too much work is being done by your assumption that the cost effectiveness can't be increased. In an ideal world, the market could create competition which drives both orgs to look for efficiency improvements
Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-08T07:17:48.511Z · EA · GW

This kind of externality should be accounted for by the market (although it might be that the modelling effectively happens in a distributed way rather than any one person thinking about it all).

So you might get VCs who become expert in judging when early-stage projects are a good bet. Then people thinking of starting projects can somewhat outsource the question to the VCs by asking "could we get funding for this?"

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-07T22:50:18.428Z · EA · GW

Moral trade is definitely relevant here. Moral trade basically deals with cases of fundamental differences in values (as opposed to coordination issues arising from differences in available information etc.).

I haven't thought about this super carefully, but it seems like a nice property of impact markets is that they'd simultaneously handle the moral trade issues and the coordination issues. Like in the example of donors wishing to play donor-of-last-resort, it's ambiguous whether this desire is driven by irreconcilably different values or different empirical judgements about what's good.

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-07T22:42:23.039Z · EA · GW

I agree that these considerations would blunt the coordination issues some.

> So I think that a proposal for "Implicit impact markets without infrastructure" should probably include as one element a reminder for people to take these considerations into account.

I guess I think that it should include that kind of reminder if it's particularly important to account for these things under an implicit impact markets set-up. But I don't think that; I think they're important to pay attention to all of the time, and I'm not in the business (in writing this post) of providing reminders about everything that's important.

In fact I think it's probably slightly less important to take them into account if you have (implicit or explicit) impact markets, since the markets would relieve some of the edge that it's otherwise so helpful to blunt via these considerations.

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-07T22:34:51.744Z · EA · GW

Yeah, Shapley values are a particular instantiation of a way that you might think the implicit credit split would shake out. There are some theoretical arguments in favour of Shapley values, but I don't think the case is clear-cut. However in practice they're not going to be something we can calculate on-the-nose, so they're probably more helpful as a concept to gesture with.
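(For concreteness, here's a minimal sketch of what an on-the-nose Shapley computation would involve, using a made-up three-party project and a made-up coalition value function v; in practice we never have anything like this value function, which is part of why I say it's more useful as a concept to gesture with.)

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley value for each player, given a coalition value function `value`."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# Made-up example: the project produces 100 units of value with everyone involved,
# 60 with the funder plus any one other party, and nothing without the funder.
def v(coalition):
    if "funder" in coalition and len(coalition) >= 2:
        return 100.0 if len(coalition) == 3 else 60.0
    return 0.0

print(shapley_values(["funder", "founder", "staff"], v))
# -> funder ~53.3, founder ~23.3, staff ~23.3 (the shares sum to 100)
```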

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-07T22:31:42.785Z · EA · GW

Of course "non-EA funding" will vary a lot in its counterfactual value. But roughly speaking I think that if you are pulling in money from places where it wouldn't have been so good, then on the implicit impact markets story you should get a fraction of the credit for that fundraising. Whether or not that's worth pursuing will vary case-to-case.

Basically I agree with Michael that it's worth considering but not always worth doing. Another way of looking at what's happening is that starting a project which might appeal to other donors creates a non-transferrable fundraising opportunity. Such opportunities should be evaluated, and sometimes pursued.

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-04-07T22:25:56.721Z · EA · GW

I agree that in principle you could model all of this out explicitly, but it's the type of situation where I think explicit modelling can easily get you into a mess (because there are enough complicated effects that you can easily miss something which changes the answer), and it also puts the cognitive work in the wrong part of the system (the job of funders is to work out what would be the best use of their resources; the job of the charities is to provide them with all relevant information to help them make the best decision).

I think impact markets (implicit or otherwise) actually handle this reasonably well. When you're starting a charity, you're considering investing resources in pursuit of a large payoff (which may not materialise). Because you're accepting money to do that, you have to give up a fraction of the prospective payoff to the funders. This could change the calculus of when it's worth launching something.

Comment by Owen_Cotton-Barratt on Everyday longtermism in practice · 2021-04-06T20:00:24.700Z · EA · GW

I like the jumping in! I think using vignettes as a starting point for discussion of norms has some promise.

In these cases, I imagine it being potentially fruitful to have more-discussion-per-vignette about both whether the idea captured is a good one (I think it's at least unclear in some of your examples), as well as how good it would be if the norm were universalised ... we don't want to spend too much attention on promoting norms that, while positive, just aren't a very big deal.

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-03-31T14:48:36.029Z · EA · GW

Default expectations of credit

Maybe we should try to set default expectations of how much credit for a project goes to different contributors? With the idea that not commenting is a tacit endorsement that the true credit split is probably in that ballpark (or at least that others can reasonably read it that way).

One simple suggestion might be to split credit into four equal parts: to founding/establishing the org and setting it in a good direction (including early funders and staff); to current org leadership; to current org staff; and to current funders. I do expect substantial deviation from that in particular cases, but it's not obvious to me that any of the buckets is systematically too big or too small, so maybe it's reasonable as a starting point?

Comment by Owen_Cotton-Barratt on Forget replaceability? (for ~community projects) · 2021-03-31T14:46:26.318Z · EA · GW

Inefficiencies from inconsistent estimates of value

Broadening from just considering donations, there's a worry that the community as a whole might be able to coordinate to get better outcomes than we're currently managing. For instance opinions about the value of earning to give vary quite a bit; here's a sketch to show how that can go wrong:

Alice and Beth could each go into direct work or into earning-to-give. We represent their options by plotting a point showing how much they would achieve on the relevant dimension for each option. The red and green points show some possibilities for what Alice and Beth might together achieve by each picking one of their options. There are two more points in that choice set, one on each axis, where both people go into direct work or both go into earning to give. It's unclear in this example what the optimal outcome is, but it is clear that the default point is not optimal, since it's dominated by the one marked "accessible".
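To make the failure mode concrete, here's a minimal sketch with made-up numbers (the dollar figures and "exchange rates" below are purely illustrative, not from the original sketch):

```python
# Hypothetical options: (dollars raised if earning to give, units of direct work otherwise)
alice = {"etg": 300_000, "direct": 0.8}
beth = {"etg": 100_000, "direct": 1.0}

# Each acts on her own (inconsistent) estimate of what a unit of direct work is worth.
alice_rate = 500_000  # Alice: a unit of direct work ~ $500k
beth_rate = 80_000    # Beth: a unit of direct work ~ $80k

def choose(person, rate):
    """Pick whichever option the person herself values more highly."""
    return "direct" if person["direct"] * rate > person["etg"] else "etg"

def outcome(alice_choice, beth_choice):
    """Total (money, direct work) the pair achieves."""
    money = (alice["etg"] if alice_choice == "etg" else 0) + (beth["etg"] if beth_choice == "etg" else 0)
    direct = (alice["direct"] if alice_choice == "direct" else 0) + (beth["direct"] if beth_choice == "direct" else 0)
    return money, direct

default = outcome(choose(alice, alice_rate), choose(beth, beth_rate))
swapped = outcome("etg", "direct")
print(default)  # (100000, 0.8): Alice does direct work, Beth earns to give
print(swapped)  # (300000, 1.0): more money AND more direct work -- the default is dominated
```

Any shared exchange rate between the two dimensions would avoid the dominated point; the problem arises only because Alice and Beth are acting on different ones.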

This doesn't quite fit in the hierarchy of approaches to donor coordination, but it is one of the issues that fully explicit impact markets should be able to help resolve. How much would implicit impact markets help? Maybe if they were totally implicit and "strengths of ask" were always made qualitatively rather than quantitatively it wouldn't help so much (since everyone would understand "strength" relative to what they think of as normal for the importance of money or direct work, and Alice and Beth have different estimates of that 'normal'). But if a fraction of projects move to providing quantitative estimates (while still not including any formal explicit market mechanisms), that might be enough to relieve the inefficiencies.

Comment by Owen_Cotton-Barratt on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T21:42:16.644Z · EA · GW

Definitely didn't mean to shut down conversation! I felt like I had a strong feeling that it was not an option on the table (because of something like coherence reasons -- cf. my reply to Jonas -- not because it seemed like a bad or too-difficult idea). But I hadn't unpacked my feeling. I also wasn't sure whether I needed to, or whether when I posted everyone would say something like "oh, yeah, sure" and it would turn out to be a boring point. This was why I led with "I don't know how much of an outlier I am"; I was trying to invite people to let me know if this was a boring triviality after it was pointed out, or if it was worth trying to unpack.

P.S. I appreciate having what seemed bad about the phrasing pointed out.

Comment by Owen_Cotton-Barratt on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T21:07:48.831Z · EA · GW

Hmm, no, I didn't mean something that feels like pessimism about coordination ability, but rather that (roughly speaking) the thing you get if you try to execute a "change the name of the movement" operation is not the same movement with a different name, but a different (albeit heavily overlapping) movement with the new name. And so it's better understood as a coordinated heavy switch to emphasising the new brand than as just a renaming (although I think the truth is actually somewhere in the middle).

I don't think that's true if the name change is minor so that the connotations are pretty similar. I think that switching from "effective altruism" to "efficient do-gooding" is a switch which could more or less happen (you'd have a steady trickle of people coming in from having read old books or talked to people who were familiar with the old name, but "effective altruism, now usually called efficient do-gooding" would mostly work). But the identity of the movement is (at least somewhat) characterised by its name and how people understand it and relate to it. If you shifted to a name like "global priorities" with quite different connotations, I think that it would change people's relationship with the ideas, and you would probably find a significant group of people who said "well I identify with the old brand, but not with the new brand", and then what do you say to them? "Sorry, that brand is deprecated" doesn't feel like a good answer.

(I sort of imagine you agree with all of this, and by "change the name of the movement" you mean something obviously doable like getting a lot of web content and orgs and events and local groups to switch over to a new name. My claim is that that's probably better conceived of in terms of its constituent actions than in terms of changing the name of the movement.)

Comment by Owen_Cotton-Barratt on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T00:35:09.141Z · EA · GW

I don't know how much of an outlier I am, but I feel like "change the name of the movement" is mostly not an option on the table. Rather there's a question about how much (or when) to emphasise different labels, with the understanding that the different labels will necessarily refer to somewhat different things. (This is a different situation than an organisation considering a rebrand; in the movement case people who preferred the connotations of the older label are liable to just keep using it.)

Anyhow, I like your defence of "effective altruism", and I don't think it should be abandoned (while still thinking that there are some contexts where it gets used but something else might be better).

Comment by Owen_Cotton-Barratt on Name for the larger EA+adjacent ecosystem? · 2021-03-19T18:45:33.892Z · EA · GW

I agree that this is potentially an issue. I think it's (partially) mitigated the more it's used to refer to ideas rather than people, and the more it's seen to be a big (and high prestige) thing.

Comment by Owen_Cotton-Barratt on Name for the larger EA+adjacent ecosystem? · 2021-03-19T08:24:02.565Z · EA · GW

Maybe the obvious suggestion then is "new enlightenment"? I googled, and the term has some use already (e.g. in a talk by Pinker), but it feels pretty compatible with what you're gesturing at. I guess it would suggest a slightly broader conception (more likely to include people or groups not connected to the communities you named), but maybe that's good?

Comment by Owen_Cotton-Barratt on Name for the larger EA+adjacent ecosystem? · 2021-03-19T00:20:37.517Z · EA · GW

Thanks, makes sense. This makes me want to pull out the common characteristics of these different groups and use those as definitional (and perhaps realise we should include other groups we're not even paying attention to!), rather than treat it as a purely sociological clustering. Does that seem good?

Like maybe there's a theme about trying to take the world and our position in it seriously?

Comment by Owen_Cotton-Barratt on Name for the larger EA+adjacent ecosystem? · 2021-03-18T22:52:54.249Z · EA · GW

Could you say a little more about the context(s) where a name seems useful?

(I think it's often easier to think through what's wanted from a name when you understand the use case, and sometimes when you try to do this you realise it was a slightly different thing that you really wanted to name anyway.)

Comment by Owen_Cotton-Barratt on Should I transition from economics to AI research? · 2021-03-01T23:31:03.742Z · EA · GW

Note that I think that the mechanisms I describe aren't specific to economics, but cover academic research generally -- and will also include most of the ways in which most AI safety researchers (even those not in academia) will have impact.

There are potentially major crux moments around AI, so there's also the potential to do an excellent job engineering real transformative systems to be safe at some point (but most AI safety researchers won't be doing that directly). I guess that perhaps the indirect routes to impact for AI safety might feel more exciting because they're more closely connected to the crucial moments -- e.g. you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems are using, or hope to support a culture of responsibility among AI researchers, to make it less likely that people at the key time ignore something they shouldn't.

Comment by Owen_Cotton-Barratt on Should I transition from economics to AI research? · 2021-03-01T18:28:44.665Z · EA · GW

> Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

It seems quite wrong to me to present this as so clear-cut. I think if we don't get major extra funding the professional longtermist community might plateau at a stable size in perhaps the low thousands. A successful quantitative trader could support several more people at the margin (a very successful trader could support dozens). If you're a good fit for the crowd, it might also be a good group to network with.

If you're particularly optimistic about future funding growth, or pessimistic about community growth, you might think it's unlikely we end up in that world in a realistic timeframe, but there's likely to still be some hedging value.

To be clear, I mostly wouldn't want people in the OP's situation to drop the PhD to join a hedge fund. But it's worth understanding that e.g. the main routes to impact in academic research are probably: 

  1. Providing leadership for the academic field from within the field, including:
    1. Paradigm-setting
    2. Culture-setting
  2. Helping students orient to what's important, and providing space for them to work on more important projects
  3. Using academia as a springboard to affect non-academic projects (e.g. being an advisor on particular policy topics, or providing solid support for claims that are broadly useful)

I think for some people those just aren't going to be a great personal fit (even if they can achieve conventional "success" in academia!), so it's worth considering other options.

In this particular case, I'm kind of excited about getting more longtermist economists. But whether it makes sense for the OP to be such a person might depend on e.g. how disillusioned they are with the field.

Comment by Owen_Cotton-Barratt on Alternatives to donor lotteries · 2021-02-16T00:38:05.767Z · EA · GW

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving (but I would usually recommend them to be happy with that regular lottery!).

Btw, I'm now understanding your suggestions as not really alternatives to the donor lottery, since I don't think you buy into its premises, but alternatives to e.g. EA Funds.

(In support of the premise of respecting individual autonomy about where to allocate money: I think that making requests to pool money in a way where rich donors expect to lose control would risk making EA pattern-match at a surface level to a scam, and might drive people away. For a more extreme version of this, imagine someone claiming that as soon as you've decided to donate some money you should send it all to the One True EA Collective fund, so that it can be fairly distributed, and that it would be a weird propagation of wealth to allow rich people to take any time to think about where to give their money. Whether or not you think an optimal taxation system would equalise wealth much more, I think it's fairly clear that the extreme bid that everyone pool donations would be destructive because it would put off donors.)

Comment by Owen_Cotton-Barratt on Alternatives to donor lotteries · 2021-02-15T11:48:08.563Z · EA · GW

By dominant action I mean "is ~at least as good as other actions on ~every dimension, and better on at least one dimension".

> My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one.

I don't think donor lotteries are primarily about collective giving. As a donor lottery entrant, I'd be just as happy giving $5k for a 5% chance of controlling a $100k pot of pooled winnings as entering a regular lottery where I could give $5k for a 5% chance of winning $100k (which I would then donate)*. In either case I think I'll do more than 20x as much good with $100k than $5k (mostly since I can spend longer thinking and investigating), so it's worthwhile in expectation.

* Except that I usually don't have good access to that kind of lottery (maybe there would also be tax implications, although perhaps it's fine if the money is all being donated). So the other donors are a logistical convenience, but not an integral part of the idea.
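To spell out the expected-value arithmetic (the 25-units figure below is just an illustrative stand-in for "more than 20x as much good"):

```python
def expected_good(stake, pot, good_with_stake, good_with_pot):
    """Compare donating the stake directly vs entering a lottery for the pot.

    good_with_stake / good_with_pot are the donor's own (hypothetical) estimates
    of the total good they'd achieve by carefully allocating each amount.
    """
    p_win = stake / pot
    return good_with_stake, p_win * good_with_pot

direct, via_lottery = expected_good(5_000, 100_000, good_with_stake=1.0, good_with_pot=25.0)
print(direct, via_lottery)  # 1.0 vs 1.25: the lottery wins in expectation iff good_with_pot > 20 * good_with_stake
```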

> My understanding is that past people selected to allocate the pool haven't tended to delegate that allocation power. And indeed if you're strongly expecting to do so, why not just give the allocation power to that person beforehand, either over your individual donation (e.g. through an EA fund) or over a pool. Why go through the lottery stage?

I don't know that they should strongly expect to do so. But in any case the reason for going through the lottery stage is simple: maybe you'd want to take 50 hours thinking about whether to delegate and to whom, and vetting possible people to delegate to. That time might not be worth spending for a $5k donation, but become worth spending for a $100k donation. (Additionally the person you want to delegate to might be more likely to take the duty seriously for a larger amount of money.)

Comment by Owen_Cotton-Barratt on Alternatives to donor lotteries · 2021-02-14T23:09:16.728Z · EA · GW

I think your analysis of the alternatives is mostly from the perspective of "what will lead to optimal allocation of resources at the group level?"

But the strongest case for donor lotteries, in my view, isn't in these terms at all. Rather, it's that entering a lottery is often a dominant action from the perspective of the individual donor (if most other things they would consider giving to don't exhibit noticeably diminishing returns over the amount they are attempting to get in the lottery). The winner of a lottery need not be the allocator for the money; they can instead e.g. decide to take longer thinking about whom they want to delegate allocation power to (I actually think this might often be the "technically correct" move; I don't know how often lottery winners act this way). This dominance argument would go through for a much smaller proportion of possible donors for your alternatives. I'm interested if you see another reason that people would donate to these?

Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-08T10:35:22.283Z · EA · GW

I spent a little while thinking about this. My guess is that of the activities I list:

  • Alice and Bob's efforts look comparable to donating (in external benefit/effort) when the longtermist portfolio is around $100B-$1T/year
  • Clara's efforts looks comparable to donating when the longtermist portfolio is around $1B-$10B/year
  • Diya's efforts look comparable to donating when the longtermist portfolio is around $10B-$100B/year
  • Elmo's efforts are harder to assess because they're closer to directly trying to grow longtermist support, so the value diminishes as the existing portfolio gets larger just as for donations, and it depends more on underlying quality

All of those numbers are super crude and I might well disagree with myself if I came back later and estimated again. They also depend on lots of details (like how good the individuals are at executing on those strategies).

Perhaps most importantly, they're excluding the internal benefits -- if these activities are (as I suggest) partly good for practicing some longtermist judgement, then I'd really want to see them as a complement to donation rather than just a competitor.

Comment by Owen_Cotton-Barratt on AGB's Shortform · 2021-01-05T21:11:15.477Z · EA · GW

One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.

The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).

Comment by Owen_Cotton-Barratt on AGB's Shortform · 2021-01-05T20:55:44.187Z · EA · GW

Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a nonzero chance of extinction in each generation, but these chances diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
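A minimal sketch of that calculation, using the standard branching-process result that the extinction probability is the smallest fixed point of the offspring distribution's generating function (for Poisson(λ) offspring, q = exp(λ(q−1))); the 100-founder line is just an illustration of "making it through the initial rocky period":

```python
import math

def extinction_probability(lam, iters=200):
    """Extinction probability of a Galton-Watson process with Poisson(lam) offspring:
    the smallest fixed point of q = exp(lam * (q - 1)), found by iterating from 0."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

q = extinction_probability(1.1)
print(q)         # ~0.82: a single founding lineage still usually dies out
print(q ** 100)  # ~4e-9: once the population reaches 100, eventual extinction is vanishingly unlikely
```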

That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high is the unavoidable background rate of such crises (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).

On current understanding I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though also plausible it's higher because there are risks that aren't observed/understood).

Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.

Comment by Owen_Cotton-Barratt on Blueprints (& lenses) for longtermist decision-making · 2021-01-04T14:06:00.845Z · EA · GW

> My primary blueprint is as follows:
>
> I want the world in 30 years time to be in as good a state as it can be in order to face whatever challenges will come next.

I like this! I sometimes use a perspective which is pretty close (though often think about 50 years rather than 30 years, and hold it in conjunction with "what are the challenges we might need to face in the next 50 years?"). I think 30 vs 50 years is a kind-of interesting question. I've thought about 50 because if I imagine e.g. that we're going to face critical junctures with the development of AI in 40 years, that's within the scope where I can imagine it being impacted by causal pathways that I can envision -- e.g. critical technology being developed by people who studied under professors who are currently students making career decisions. By 60 years it feels a bit too tenuous for me to hold on to.

I kind of agree that, if looking at policy specifically, a shorter time horizon feels good.

Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-04T12:30:01.151Z · EA · GW

I appreciate the pushback!

I have two different responses (somewhat in tension with each other):

  1. Finding "everyday" things to do will necessitate identifying what's good to do in various situations which aren't the highest-value activity an individual can be undertaking
    • This is an important part of deepening the cultural understanding of longtermism, rather than have all of the discussion be about what's good to do in a particular set of activities that's had strong selection pressure on it
      • This is also important for giving people inroads to be able to practice different aspects of longtermism
      • I think it's a bit like how informal EA discourse often touches on how to do everyday things efficiently (e.g. "here are tips for batching your grocery shopping") -- it's not that these are the most important things to be efficient about, but that all-else-equal it's good, and it's also very good to give people micro-scale opportunities to put efficiency-thinking into practice
    • Note however that my examples would be better if they had more texture:
      • Discussion of the nuance of better or worse versions of the activities discussed could be quite helpful for conveying the nuance of what is good longtermist action
      • To the extent that these are far from the highest value activities those people could be undertaking, it seems important to be up-front about that: keeping tabs on what's relatively important is surely an important part of the (longtermist) EA culture
  2. I'm not sure how much I agree with "probably much less positive than some other things that could be done even by 'regular people', even once there are millions or tens of millions of longtermists"
    • I'd love to hear your ideas for things that you think would be much more positive for those people in that world
      • My gut feeling is that they are at the level of "competitive uses of time/attention (for people who aren't bought into reorienting their whole lives) by the time there are tens of millions of longtermists"
        • It seems compatible with that feeling that there could be some higher-priority things for them to be doing as well -- e.g. maybe some way of keeping immersed in longtermist culture, by being a member of some group -- but that those reach saturation or diminishing returns
        • I think I might be miscalibrated about this; I think it would be easier to discuss with some concrete competition on the table
    • Of course to the extent that these actually are arguably competitive actions, if I believe my first point, maybe I should have been looking for even more everyday situations
      • e.g. could ask "what is the good longtermist way to approach going to the shops? meeting a romantic partner's parents for the first time? deciding how much to push yourself to work when you're feeling a bit unwell?"
Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-04T11:36:55.226Z · EA · GW

Thanks, I agree with both of those points.

Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-04T11:27:24.384Z · EA · GW

I really appreciate you highlighting these connections with other pieces of thinking -- a better version of my post would have included more of this kind of thing.

Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-04T10:56:03.304Z · EA · GW

> Some further suggestions:
>
> 1. Be more cooperative. (There are arguments about increasing cooperation, especially from people working on reducing S-risks, but I couldn't find any suitable resource in a brief search)
> 2. Take a strong stance against narrow moral circles.
> 3. Have a good pitch prepared about longtermism and EA broadly. Balance confidence with adequate uncertainty.
> 4. Have a well-structured methodology for getting interested acquaintances more involved with EA.
> 5. Help friends in EA/longtermism more.
> 6. Strengthen relationships with friends who have a high potential to be highly influential in the future.

I basically like all of these. I think there might be versions which could be bad, but they seem like a good direction to be thinking in. 

I'd love to see further exploration of these -- e.g. I think any of your six suggestions could deserve a top-level post going into the weeds (& ideally reporting on experiences from trying to implement it). I feel most interested in #3, but not confidently so.

Comment by Owen_Cotton-Barratt on Everyday Longtermism · 2021-01-04T10:46:56.456Z · EA · GW

> I think that the suggestions here, and most of the arguments, should apply to "Everyday EA", which isn't necessarily longtermistic. I'd be interested in your thoughts about where exactly we should make a distinction between everyday longtermist actions and non-longtermist everyday actions.

I agree that quite a bit of the content seems not to be longtermist-specific. But I was approaching it from a longtermist perspective (where I think the motivation is particularly strong), and I haven't thought it through so carefully from other angles.

I think the key dimension of "longtermism" that I'm relying on is the idea that the longish-term (say 50+ years) indirect effects of one's actions are a bigger deal in expectation than the directly observable effects. I don't think that that requires e.g. any assumptions about astronomically large futures. But if you thought that such effects were very small compared to directly observable effects, then you might think that the best everyday actions involved e.g. saving money or fundraising for charities you had strong reason to believe were effective.

Comment by Owen_Cotton-Barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-04T09:59:26.770Z · EA · GW

Yes, that's the kind of thing I had in the back of my mind as I wrote that.

I guess I actually think:

  • On average moving people further into the basin should lead to more useful work
  • Probably we can identify some regions/interventions where this is predictably not the case
    • It's unclear how common such regions are
Comment by Owen_Cotton-Barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T14:32:15.484Z · EA · GW

> I have a sense that a large part of the success of scientific norms comes down to their utility being immediately visible.

I agree with this. I don't think science has the attractor property I was discussing, but it has this other attraction of being visibly useful (which is even better). I was trying to use science as an example of the self-correction mechanism.

> Or perhaps I am having a semantic confusion: is science self-propagating in that scientists, once cultivated, go on to cultivate others?

Yes, this is the sense of self-propagating that I intended.

Comment by Owen_Cotton-Barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T14:28:41.549Z · EA · GW

> In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor.

I think that this is a fair summary of my first point (it also needs enough truth seeking to realise that spreading the approach is valuable). It doesn't really speak to the point about being self-correcting/improving.

I'm not trying to claim that it's obviously the strongest memeplex in the long term. I'm saying that it has some particular strengths (which make me more optimistic than before I was aware of those strengths).

I think another part of my thinking there is that actually quite a lot of people have altruistic preferences already, so it's not like trying to get buy-in for a totally arbitrary goal.

Comment by Owen_Cotton-Barratt on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-01T20:59:42.548Z · EA · GW

Why does it need to rely on spreading without too much questioning?

(BTW I'm using "meme" in the original general sense not the more specific "internet meme" usage; was that obvious enough?)

Comment by Owen_Cotton-Barratt on [deleted post] 2021-01-01T13:55:34.119Z

I agree with this. I think "do-gooding for nerds" might be preferable to "charity for nerds", but probably "charity for nerds" is closer to current perceptions.

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T11:30:43.367Z · EA · GW

I read your second critique as implicitly saying "there must be a mistake in the argument", whereas I'd have preferred it to say "the things that might be thought to follow from this argument are wrong (which could mean a mistake in the argument that's been laid out, or in how its consequences are being interpreted)".

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T11:26:51.595Z · EA · GW

I agree that there's a tension in how we're talking about it. I think that Greaves+MacAskill are talking about how an ideal rational actor should behave -- which I think is informative but not something to be directly emulated for boundedly rational actors.

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:39:07.192Z · EA · GW

I think this might be a case of the-devil-is-in-the-details.

I'm in favour of people scanning the horizon for major problems whose negative impacts are not yet being felt, and letting that have some significant impact on which nearer-term problems they wrestle with. I think that a large proportion of things that longtermists are working on are problems that are at least partially or potentially within our foresight horizons. It sounds like maybe you think there is current work happening which is foreseeably of little value: if so I think it could be productive to debate the details of that.

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:32:55.744Z · EA · GW

Cool. I do think that when I try to translate your position into the ontology used by Greaves+MacAskill, it sounds less like "longtermism is wrong" and more like "maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks".

I think that's a pretty interestingly different objection, and if it's what you actually want to say it could be important to make sure that people don't hear it as "longtermism is wrong" (because that could lead them to look at the wrong type of thing to try to refute you).

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T02:36:10.765Z · EA · GW

> I will focus on two aspects of strong longtermism, henceforth simply longtermism. First, the underlying arguments inoculate themselves from criticism by using arbitrary assumptions on the number of future generations. Second, ignoring short-term effects destroys the means by which we make progress — moral, scientific, artistic, and otherwise.

I found it helpful that you were so clear about these two aspects of what you are saying. My responses to the two are different.

On the first, I think resting on possibilities of large futures is a central part of the strength of the case for longtermism. It doesn't feel like inoculation from criticism to put the strong argument forwards. Of course this only applies to the argument for longtermism in the abstract and not for particular actions people might want to take; I think that using such reasoning in favour of particular actions tends to be weak (inoculation is sometimes attempted but it is ineffectual).

On the second, I think this might be an important and strong critique, but it is a critique of how the idea is presented and understood rather than of the core tenets of longtermism; indeed one could make the same arguments starting from an assumption that longtermism was certainly correct, but being worried that it would be self-defeating.

So I'm hearing the second critique (perhaps also the first but it's less clear) as saying that the "blueprints" (in the sense of https://forum.effectivealtruism.org/posts/NdSoipXQhdzozLqW4/blueprints-and-lenses-for-longtermist-decision-making ) people commonly get for longtermism are bad (on both shorttermist and longtermist grounds). Does that sound mostly-correct to you?

Comment by Owen_Cotton-Barratt on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T00:04:47.620Z · EA · GW

> It is certainly possible to accuse me of taking the phrase "ignoring the effects" too literally. Perhaps longtermists wouldn't actually ignore the present and its problems, but their concern for it would be merely instrumental. In other words, longtermists may choose to focus on current problems, but the reason to do so is out of concern for the future.
>
> My response is that attention is zero-sum. We are either solving current pressing problems, or wildly conjecturing what the world will look like in tens, hundreds, and thousands of years. If the focus is on current problems only, then what does the "longtermism" label mean? If, on the other hand, we're not only focused on the present, then the critique holds to whatever extent we're guessing about future problems and ignoring current ones.

I agree that attention is a limited resource, but it feels like you're imagining that split attention leads to something like linear interpolation between focused attention on either end; in fact I think it's much better than that, and that attention on the two parts is complementary. For example we need to wrestle with problems we face today to give us good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.

I actually think that in the longtermist ideal world (where everyone is on board with longtermism) over 90% of attention -- perhaps over 99% -- would go to things that already look like problems. But at the present margin in the actual world the longtermist perspective is underappreciated, so it looks particularly valuable.

Comment by Owen_Cotton-Barratt on A case against strong longtermism · 2020-12-19T16:59:33.366Z · EA · GW

> They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0.

Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people holding this position.) I think there are enough variables in the world that have some nonzero expected impact on the long term future that for very many actions we can usually hazard guesses about their impact on at least some such variables, and hence about the expected impact of the individual actions (of course in fact one will be wrong in a good fraction of cases, but we're talking about in expectation).

Note I feel fine about people saying of lots of activities "gee I haven't thought about that one enough, I really don't know which way it will come out", but I think it's a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.

Comment by Owen_Cotton-Barratt on A case against strong longtermism · 2020-12-18T16:26:50.842Z · EA · GW

I think it's a combination of a couple of things.

  1. I'm not fully bought into strong longtermism (nor, I suspect, are Greaves or MacAskill), but on my inside view it seems probably-correct.

When I said "likely", that was covering the fact that I'm not fully bought in.

  1. I'm taking "strong longtermism" to be a concept in the vicinity of what they said (and meaningfully distinct from "weak longtermism", for which I would not have said "by far"), that I think is a natural category they are imperfectly gesturing at. I don't agree with with a literal reading of their quote, because it's missing two qualifiers: (i) it's overwhelmingly what matters rather than the only thing; & (ii) of course we need to think about shorter term consequences in order to make the best decisions for the long term.

Both (i) and (ii) are arguably technicalities (and I guess that the authors would cede the points to me), but (ii) in particular feels very important.

Comment by Owen_Cotton-Barratt on A case against strong longtermism · 2020-12-18T16:08:16.207Z · EA · GW

I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains, but the thrust of why I was making that point was: I think for pushing out the boundaries of collective knowledge it's roughly correct to adopt the idealistic stance I was recommending; & I think that Vaden is engaging in earnest and noticing enough important things that there's a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to be encouraged, rather than just encouraging activity that is likely to lead to the most-correct beliefs among the convex hull of things people already understand).

Comment by Owen_Cotton-Barratt on A case against strong longtermism · 2020-12-17T23:55:53.992Z · EA · GW

> Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :)

It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.

> I'm confused about the claim
>
> > I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.
>
> This seems in direct opposition to what the authors say (and what Vaden quoted above), that
>
> > The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years
>
> I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized.

So my interpretation had been that they were using a technical sense of "evaluating actions", meaning something like "if we had access to full information about consequences, how would we decide which ones were actually good".

However, on a close read I see that they're talking about ex ante effects. This makes me think that this is at least confusingly explained, and perhaps confused. It now seems most probable to me that they mean something like "we can ignore the effects of the actions contained in the first 100 years, except insofar as those feed into our understanding of the longer-run effects". But the "except insofar ..." clause would be concealing a lot, since 100 years is so long that almost all of our understanding of the longer-run effects must go via guesses about the long-term goodness of the shorter-run effects.

[As an aside, I've been planning to write a post about some related issues; maybe I'll move it up my priority stack.]

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures? 

I like the question; I think this may be getting at something deep, and I want to think more about it.

Nonetheless, my first response was: while I can't write this down, if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.
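Roughly, the counting argument I have in mind, with C and T as hypothetical finite bounds on the number of particle configurations per time step and on the number of time steps:

```latex
% Hypothetical finite bounds: C particle configurations per time step, T time steps.
\[ |\Omega| \;\le\; C^{T} \;<\; \infty \]
% A finite sample space supports the full power set as the sigma-algebra, and any
% normalised weights p as a probability measure:
\[ \mathcal{F} = 2^{\Omega}, \qquad P(A) = \sum_{\omega \in A} p(\omega), \qquad \sum_{\omega \in \Omega} p(\omega) = 1 \]
% so expected utilities are well-defined for any bounded utility function U:
\[ \mathbb{E}[U] = \sum_{\omega \in \Omega} U(\omega)\, p(\omega). \]
```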

The reason I want to think more about it is that I think there's something interesting about the interplay between objective and subjective probabilities here. How much should it help me as a boundedly rational actor to know that in theory a fully rational actor could put a measure on things, if it's practically immeasurable for me?

> Considering that the Open Philanthropy Project has poured millions into AI Safety, that it's listed as a top cause by 80K, and that EA's far-future fund makes payouts to AI safety work, if Shivani's reasoning isn't to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harshness in tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.

Sorry, I made an error here in just reading Vaden's quotation of Shivani's reasoning rather than looking at it in full context.

In the construction of the argument in the paper Shivani is explicitly trying to compare the long-term effects of action A to the short-term effects of action B (which was selected to have particularly good short-term effects). The paper argues that there are several cases where the former is larger than the latter. It doesn't follow that A is overall better than B, because the long-term effects of B are unexamined.

The comparison of AMF to AI safety that was quoted felt like a toy example to me because it obviously wasn't trying to be a full comparison between the two, but was rather being used to illustrate a particular point. (I think maybe the word "toy" is not quite right.)

In any case I consider it a minor fault of the paper that one could read just the section quoted and reasonably come away with the impression that comparing the short-term number of lives saved by AMF with the long-term number of lives expected to be saved by investing in AI safety was the right way to compare between those two opportunities. (Indeed one could come away with the impression that the AMF price to save a life was the long-run price, but in the structure of the argument being used they need it to be just the short-term price.)

Note that I do think AI safety is very important, and I endorse the actions of the various organisations you mention. But I don't think that comparing some long-term expectation on one side with a short-term expectation on the other is the right argument for justifying this (particularly versions which make the ratio-of-goodness scale directly with estimates of the size of the future), and that was the part I was objecting to. (I think this argument is sometimes seen in earnest "in the wild", and arguably on account of that the paper should take extra steps to make it clear that it is not the argument being made.)

Comment by Owen_Cotton-Barratt on A case against strong longtermism · 2020-12-17T10:24:31.013Z · EA · GW

I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant. 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.