What is valuable about effective altruism? Implications for community building 2017-06-18T14:49:56.832Z · score: 14 (18 votes)
A new reference site: Effective Altruism Concepts 2016-12-05T21:20:03.946Z · score: 22 (24 votes)
Why I'm donating to MIRI this year 2016-11-30T22:21:20.234Z · score: 34 (34 votes)
Should effective altruism have a norm against donating to employers? 2016-11-29T21:56:36.528Z · score: 11 (15 votes)
Donor coordination under simplifying assumptions 2016-11-12T13:13:14.314Z · score: 7 (7 votes)
Should donors make commitments about future donations? 2016-08-30T14:16:51.942Z · score: 18 (17 votes)
An update on the Global Priorities Project 2015-10-07T16:19:32.298Z · score: 5 (7 votes)
Cause selection: a flowchart [link] 2015-09-10T11:52:07.140Z · score: 10 (10 votes)
How valuable is movement growth? 2015-05-14T20:54:44.210Z · score: 21 (23 votes)
[Link] Discounting for uncertainty in health 2015-05-07T18:43:33.048Z · score: 4 (4 votes)
Neutral hours: a tool for valuing time 2015-03-04T16:33:41.087Z · score: 9 (9 votes)
Report -- Allocating risk mitigation across time 2015-02-20T16:34:47.403Z · score: 6 (6 votes)
Long-term reasons to favour self-driving cars 2015-02-13T18:40:16.440Z · score: 8 (8 votes)
Increasing existential hope as an effective cause? 2015-01-10T19:55:08.421Z · score: 10 (10 votes)
Factoring cost-effectiveness 2014-12-23T12:12:08.789Z · score: 5 (5 votes)
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T11:49:13.771Z · score: 11 (11 votes)
Estimating the cost-effectiveness of research 2014-12-11T10:50:53.679Z · score: 9 (9 votes)
Effective policy? Requiring liability insurance for dual-use research 2014-10-01T18:36:15.177Z · score: 9 (9 votes)
Cooperation in a movement supporting diverse causes 2014-09-23T10:47:11.357Z · score: 18 (18 votes)
Why we should err in both directions 2014-08-21T02:23:06.000Z · score: 5 (5 votes)
Strategic considerations about different speeds of AI takeoff 2014-08-13T00:18:47.000Z · score: 3 (3 votes)
How to treat problems of unknown difficulty 2014-07-30T02:57:26.000Z · score: 3 (3 votes)
On 'causes' 2014-06-24T17:19:54.000Z · score: 1 (1 votes)
Human and animal interventions: the long-term view 2014-06-02T00:10:15.000Z · score: 3 (7 votes)
Keeping the effective altruist movement welcoming 2014-02-07T01:21:18.000Z · score: 15 (11 votes)


Comment by owen_cotton-barratt on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T15:44:26.442Z · score: 4 (2 votes) · EA · GW

Maybe: "We should give outsized attention to risks that manifest unexpectedly early, since we're the only people who can."

(I think this is borderline major? The earliest occurrence I know of was 2015 but it's sufficiently simple that I wouldn't be surprised if it was discovered multiple times and some of them were earlier.)

Comment by owen_cotton-barratt on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:03:02.775Z · score: 6 (3 votes) · EA · GW

Do you have a sense of how long the lag typically is between an insight first being had and its being recognised as major? I think this might often be several years.

Maybe the dynamic I'm imagining is: "At time T0, someone suggests X as a joke. At time T1, someone seriously posits X: it makes sense to them but they haven't managed to explain it to anyone. At T2, they've explained it in conversation and a small fraction of other people believe it. At T3, there's a first blog post which kind of explains it but to many readers it doesn't feel that well supported. At T4, it's believed by 10% of the relevant community. At T5, someone else makes a better writeup, which sets out more of a solid basis for it. At T6, it's relatively widely accepted as a major insight."

Was it novel at T0 or T1? (or later?) When does it get to count as major? (Is this just in the eyes of the observer?)

Comment by owen_cotton-barratt on Do impact certificates help if you're not sure your work is effective? · 2020-02-12T14:58:49.027Z · score: 16 (7 votes) · EA · GW

This use-case for impact certificates isn't predicated on trusting the market more than yourself (although that might be a nice upside). It's more like a facilitated form of moral trade, where people with different preferences about what altruistic work happens all end up happier on account of switching so that people can work on things they can make more progress on rather than the things they personally want to bet on. (There are some reasons to be sceptical about how often this will actually be a good trade, because there can be significant comparative advantage to working on a project you believe in, from both motivation and having a clear sense of the goals; however I expect at least some of the time there would be good trades.)

On your second concern, I think that working in this way should basically be seen as a special case of earning to give. You're working for an employer whose goals you don't directly believe in because they will pay you a lot (in this case in impact certificates), which you can use to further things you do believe in. Sure, there's a small degree to which people might interpret your place of work as an endorsement, but I don't think this is one of the principal factors feeding into our collective epistemic processes (particularly since you can explicitly disavow it; and in a world where this happens often, others may be aware of the possibility even before disavowal), so I wouldn't give it too much weight in the decision.

Comment by owen_cotton-barratt on Are we living at the most influential time in history? · 2019-09-14T22:29:37.667Z · score: 9 (4 votes) · EA · GW

I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.

The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak or trough close to one end), and so on…

If we assume a big future and you just told me the number of people in each generation, I think my prior might be something like 20% that the most hingey moment was in the past, 1% that it was in the next 10 centuries, and the rest after that. After I notice that hingeyness is about influence, and causality gives a time asymmetry favouring early times, I think I might update to >50% that it was in the past, and 2% that it would be in the next 10 centuries.

(I might start with some similar prior about when the strongest person lives, but then when I begin to understand something about strength the generating mechanisms which suggest that the strongest people would come early and everything would be diminishing thereafter seem very implausible, so I would update down a lot on that.)

Comment by owen_cotton-barratt on 'Longtermism' · 2019-08-19T09:02:03.547Z · score: 21 (6 votes) · EA · GW

I think some of the differences in opinion about what the definition should be may be arising because there are several useful but distinct concepts:
A) an axiological position about the value of future people (as in Hilary's suggested minimal definition)
B) a decision-guiding principle for personal action (as I proposed in this comment)
C) a political position about what society should do (as in your suggested minimal definition)

I think it's useful to have terms for each of these. There is a question about which if any should get to claim "longtermism".

I think that for use A), precision matters more than catchiness. I like Holly's proposal of "temporal cosmopolitanism" for this.

To my mind B) is the meaning that aligns closest with the natural language use of longtermism. So my starting position is that it should get use of the term. If there were a strong reason for it not to do so, I suppose you could call it e.g. "being guided by long-term consequences".

I think there is a case to be made that C) is the one in the political sphere, and which would therefore make best use of the catchy term. I do think that if "longtermism" is to refer to the political position, it would be helpful to make it as unambiguous as possible that it is a political position. This could perhaps be achieved by making "longtermist" an informal short form of the more proper "member of the longtermism movement". Overall, though, I feel uncompelled by this case, and favour just using "longtermist" for the thing it sounds most like — which in my mind is B).

Comment by owen_cotton-barratt on 'Longtermism' · 2019-08-19T01:24:54.291Z · score: 19 (10 votes) · EA · GW

I'm uneasy about loading empirical claims about how society is doing into the definition of longtermism (mostly part (ii) of your definition). This is mostly from wanting conceptual clarity, and wanting to be able to talk about what's good for society to do separately and considering what it's already doing.

An example where I'm noticing the language might be funny: I want to be able to talk about a hypothetical longtermist society, perhaps one we aspire to, where essentially everyone is on board with longtermism. But if the definition is society-relative, this is really hard to do. I might say that longtermism is true, but that we should try to get more and more people to adopt longtermism; then longtermism will become false, and we won't actually want people to be longtermists any more (though we would still want them to be longtermist about 2019).

I think this happens because "longtermism" doesn't really sound like it's about a problem, so our brains don't want to parse it that way.

How about a minimal definition which tries to dodge this issue:
> Longtermism is the view that the very long term effects of our actions should be a significant part of our decisions about what to do today

(You gesture at this in the post, saying "I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes"; I agree, and prefer to just build the definition around that.)

Comment by owen_cotton-barratt on 'Longtermism' · 2019-08-19T00:51:20.518Z · score: 14 (7 votes) · EA · GW

I agree with the sentiment that clause (i) is stronger than it needs to be. I don't really think this is because it would be good to include other well-specified positions like exponential discounting, though. It's more that it's taking a strong position, and that position isn't necessary for the work we want the term to do. On the other hand I also agree that "nonzero" is too weak. Maybe there's a middle ground using something like the word "significant"?

[For my own part intellectual honesty might make me hesitate before saying "I agree with longtermism" with the given definition — I think it may well be correct, but I'm noticeably less confident than I am in some related claims.]

Comment by owen_cotton-barratt on “Just take the expected value” – a possible reply to concerns about cluelessness · 2017-12-21T19:52:45.341Z · score: 9 (8 votes) · EA · GW

> By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness.

I was pretty surprised by this sentence. Maybe you could say more precisely what you mean?

I take the core concern of cluelessness to be that perhaps we have no information about which options are best. Expected value gives a theoretical out to that (with some unresolved issues around infinite expectations for actors with unbounded utility functions). Approximations to expected value that humans can implement are, as you point out, kind of messy and opaque, but that's a feature of human reasoning in general, and doesn't seem particularly tied to expected value. Is that what you're pointing at?

Comment by owen_cotton-barratt on Announcing the 2017 donor lottery · 2017-12-18T14:15:22.029Z · score: 1 (1 votes) · EA · GW

I don't quite have an algorithm in mind for this. In practice I think it would likely be easy to find solutions for dividing tickets, but perhaps one would want something more fully specified first.

With a well-specified algorithm and an understanding that it was well-behaved, one could imagine shrinking the block size right down to give people flexibility over their lottery size and reduce the liability of the guarantor. There is perhaps an advantage to having a canonical size for developing buy-in to the idea, though.

Comment by owen_cotton-barratt on Announcing the 2017 donor lottery · 2017-12-18T14:11:49.544Z · score: 0 (0 votes) · EA · GW

A simple variation on the current system would allow people to opt into lottery-ing up further (to the scale of the total donor lottery pot):

Ask people what scale they would like to lottery to. If $100k, allocate them a range of tickets in one block as in the current system. If (say) $300k, split their tickets between three blocks, giving them the same range in each block. If their preferred scale exceeds the total pot, just give them correlated tickets on all available blocks.

If there's a conflict between preferences for small and large lotteries, so that they're not simultaneously satisfiable (I think this is somewhat unlikely in practice, unless someone comes in with $90k hoping to lottery up to $100k), first satisfy those who want smaller totals, then divide the rest as fairly as possible.
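A toy sketch of how this could work, assuming (as in the current system) that one shared winning number is drawn and applied to every block, so holding the same ticket range in k blocks means winning all k together. The function names and the greedy choice of blocks are mine, purely for illustration, and any range left uncovered in a block falls to the guarantor's backstop:

```python
BLOCK_SIZE = 100_000

def allocate(entries, n_blocks):
    """entries: list of (name, amount, preferred_scale), with preferred_scale
    roughly a multiple of BLOCK_SIZE. Donors wanting smaller lotteries are
    satisfied first; a donor wanting scale k*BLOCK_SIZE gets the same ticket
    range in k blocks (capped at the number of available blocks)."""
    allocations = []
    next_free = [0] * n_blocks  # next unallocated ticket number per block
    for name, amount, scale in sorted(entries, key=lambda e: e[2]):
        k = min(max(1, scale // BLOCK_SIZE), n_blocks)
        # Greedily pick the k blocks with the most room (a simple heuristic,
        # not necessarily the fairest possible division).
        blocks = sorted(range(n_blocks), key=lambda b: next_free[b])[:k]
        lo = max(next_free[b] for b in blocks)  # aligned start across blocks
        hi = lo + amount
        if hi > BLOCK_SIZE:
            raise ValueError(f"no aligned room left for {name}")
        for b in blocks:
            next_free[b] = hi  # gaps below lo in a block go to the guarantor
        allocations.append((name, lo, hi, blocks))
    return allocations

def winners(allocations, draw):
    """One shared draw in [0, BLOCK_SIZE) decides every block at once, so a
    donor holding the same range in k blocks wins all k together."""
    return [(name, [b for b in blocks if lo <= draw < hi])
            for name, lo, hi, blocks in allocations]
```

So a donor who put in $20k hoping to lottery up to $300k either wins three blocks at once or nothing, with probability proportional to their $20k.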

Comment by owen_cotton-barratt on Announcing the 2017 donor lottery · 2017-12-16T22:28:50.691Z · score: 6 (6 votes) · EA · GW

> We think that it's in the spirit of the lottery that someone who does useful research that would be of interest to other donors should publish it (or give permission for CEA to publish their grant recommendation). Also, if they convince others to donate then they'll be causing additional grants to go to their preferred organization(s). We'll strongly encourage winners to do so, however, in the interests of keeping the barriers to entry low, we haven't made it a hard requirement.

It seems like even strong social pressure might be a significant barrier to entry. I feel excited about entering a donor lottery, and would feel less excited if I thought I'd feel accountable if I won (I might still enter, but it seems like a significant cost).

Would an attitude of "we think it's great if you want to share (and we could help you with communication) but there's no social obligation" capture the benefits? That's pretty close to what you were saying already, but the different tone might be helpful for some people.

Comment by owen_cotton-barratt on Causal Network Model III: Findings · 2017-11-23T23:15:36.136Z · score: 2 (2 votes) · EA · GW

Thanks for the write-up!

I found the figures for existential-risk-reduced-per-$ with your default values a bit suspiciously high. I wonder if the reason for this is in endnote [2], where you say:

> say one researcher year costs $50,000

I think this is too low as the figure to use in this calculation, perhaps by around an order of magnitude.

Firstly, that is a very cheap researcher-year even just paying costs. Many researcher salaries are straight-up higher, and costs should include overheads.

A second factor is that having twice as much money doesn't come close to buying you twice as much (quality-adjusted) research. In general it is hard to simply pay money to produce more of some of these specialised forms of labour. For instance see the recent 80k survey of willingness to pay of EA orgs to bring forward recent hires, where the average willingness to forgo donations to move a senior hire forward by three years was around $4 million.

Comment by owen_cotton-barratt on Earning to Give as Costly Signalling · 2017-06-24T19:53:31.226Z · score: 1 (1 votes) · EA · GW

Seems like it's suggesting it as costly signalling at the level of the movement rather than the individuals. It's a stretch from normal use, but that's kind of the strength of analogies?

Comment by owen_cotton-barratt on What is valuable about effective altruism? Implications for community building · 2017-06-24T14:29:39.406Z · score: 0 (0 votes) · EA · GW

This is a great question and I think deserves further thought.

Helping people consider their values was one of the major goals Daniel Kokotajlo and I had in designing this flowchart. One possible activity would be to read through and/or discuss parts of that.

Comment by owen_cotton-barratt on Projects I'd like to see · 2017-06-13T11:18:26.302Z · score: 9 (9 votes) · EA · GW

I was a bit confused by some of these. Posting questions/comments here in case others have the same thoughts:

> Earning-to-give buy-out
>
> You're currently earning to give, because you think that your donations are doing more good than your direct work would. It might be that we think that it would be more valuable if you did direct work. If so we could donate a proportion of the amount that you were donating to wherever you were donating it, and you would move into work.

This made more sense to me after I realised that we should probably assume the person doesn't think CEA is a top donation target. Otherwise they would have an empirical disagreement about whether they should be doing direct work, and it's not clear how the offer helps resolve that (though it's obviously worth discussing).

> Anti-Debates / Shark Tank-style career choice discussions / Research working groups

These are all things that might be good, but it's not obvious how funding would be a bottleneck. Might be worth saying something about that?

> For those with a quantitative PhD, it could involve applying for the Google Brain Residency program or AI safety fellowship at ASI.

Similarly I'm confused what the funding is meant to do in these cases.

> I'd be keen to see more people take ideas that we think we already know, but haven't ever been put down in writing, and write them up in a thorough and even-handed way; for example, why existential risk from anthropogenic causes is greater than the existential risk from natural causes

I think you were using this as an example of the type of work, rather than a specific request, but some readers might not know that there's a paper forthcoming on precisely this topic (if you mean something different from that paper, I'm interested to know what!).

Comment by owen_cotton-barratt on Considering Considerateness: Why communities of do-gooders should be exceptionally considerate · 2017-06-01T22:21:00.526Z · score: 7 (7 votes) · EA · GW

Not as pithy, but just a flag that I think the question implicitly raised by Tom's comment and the answer in David's are pretty important. This is a community which is willing to update actions based on theoretical arguments about what's important. Of course I don't expect an article to totally change people's beliefs -- let alone behaviours -- but if it has a fraction of that effect I'd count it as cheap.

Comment by owen_cotton-barratt on Considering Considerateness: Why communities of do-gooders should be exceptionally considerate · 2017-06-01T09:18:22.458Z · score: 4 (4 votes) · EA · GW

I think you're right that there's a failure mode of not asking people for things. I don't think that not-asking is in general the more considerate action, though -- often people would prefer to be given the opportunity to help (particularly if it feels like an opportunity rather than a demand).

I suppose the general point is: avoid the trap of overly-narrow interpretations of considerateness (just like it was good to avoid the trap of overly-narrow interpretations of consequences of actions).

Comment by owen_cotton-barratt on Returns Functions and Funding Gaps · 2017-05-25T10:24:17.825Z · score: 1 (1 votes) · EA · GW

Fair question. This argument is all conditioned on A not actually having good ways to expand capacity -- the case is that even then the funds are comparably good given to A as elsewhere. The possibility of A in fact having useful expansion might make it noticeably better than the alternative, which is what (to my mind) drives the asymmetry.

Comment by Owen_Cotton-Barratt on [deleted post] 2017-05-16T10:37:59.243Z

DALYs do use a more defensible analysis; GiveWell aren't using DALYs. This has some good and some bad aspects (related to the discussion in this post, although in this case the downside of defensibility is more that it doesn't let you incorporate considerations that aren't fully grounded).

The problem with just using DALYs is that on many views they overweight infant mortality (here's my view on some of the issues, but the position that they overweight infant mortality is far from original). With an internal agreement that they significantly overweight infant mortality, it becomes untenable to just continue using DALYs, even absent a fully rigorous alternative. Hence falling back on more ad hoc but somewhat robust methods, like asking people to consider it and using a median.

[I'm just interpreting GW decision-making from publicly available information; this might easily turn out to be a misrepresentation.]

Comment by Owen_Cotton-Barratt on [deleted post] 2017-05-08T13:42:36.453Z

Yes, I think this is a significant concern with this version of the model (somewhat less so with the original cruder version using something like medians, but that version also fails to pick up on legitimate effects of "what if these variables are all in the tails"). Combining the variables as you suggest is the easiest way to patch it. More complex would be to add in explicit time-dependency.

Comment by Owen_Cotton-Barratt on [deleted post] 2017-04-26T11:45:34.132Z

I largely agree with these considerations about the distribution of net impact of interventions (although with some possible disagreements, e.g. I think negative funging is also possible).

However, I actually wasn't trying to comment on this at all! I was talking about the distribution of people's estimates of impact around the true impact for a given intervention. Sorry for not being clearer :/

Comment by Owen_Cotton-Barratt on [deleted post] 2017-04-25T14:59:15.286Z

The fact that sometimes people's estimates of impact are subsequently revised down by several orders of magnitude seems like strong evidence against evidence being normally distributed around the truth. I expect that if anything it is broader than lognormally distributed. I also think that extra pieces of evidence are likely to be somewhat correlated in their error, although it's not obvious how best to model that.
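To make the contrast concrete, here's a small sketch (my numbers, not anything from the thread) of how often a log-normal error model produces the kind of multi-order-of-magnitude misses we observe, which an additive normal model essentially never does:

```python
import random

def tail_fraction(sigma_logs, factor, n=100_000, seed=0):
    """Fraction of multiplicative errors exceeding `factor`, when the
    base-10 log of the error is normal with sd `sigma_logs` -- i.e. the
    estimate is log-normally distributed around the truth."""
    rng = random.Random(seed)
    hits = sum(10 ** rng.gauss(0.0, sigma_logs) > factor for _ in range(n))
    return hits / n

# With errors whose sd is one order of magnitude, roughly 2% of estimates
# overshoot the truth by a factor of 100 -- rare, but expected to show up
# in a large pool of estimates. A normal (additive) error model with
# comparable typical error sizes essentially never produces 100x misses.
```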

Comment by owen_cotton-barratt on Update on Effective Altruism Funds · 2017-04-25T09:03:38.902Z · score: 3 (5 votes) · EA · GW

I find your comments painfully uncharitable, which really reduces my inclination to engage. If you can't find an interpretation of my comment which isn't just about the optimizer's curse I don't feel like helping you right now.

Agree that vetoes aren't the right solution, though (indeed they are themselves subject to a unilateralist's curse, perhaps of a worse type).

Comment by owen_cotton-barratt on Update on Effective Altruism Funds · 2017-04-24T16:51:27.216Z · score: 4 (6 votes) · EA · GW

The kind of set-up where it would apply:

- An easy-to-evaluate opportunity which produces 1 util/$, which everyone correctly evaluates
- 100 hard-to-evaluate opportunities, each of which actually produces 0.1 util/$, but where everyone has an independent estimate of cost-effectiveness which is a log-normal centered on the truth

Then any given individual is likely to think one of the 100 is best and donate there. If they all pooled their info, they would instead all donate to the first opportunity.

Obviously the numbers and functional form here are implausible -- I chose them for legibility of the example. It's a legitimate question how strongly the dynamic applies in practice. But it seems fairly clear to me that it can apply. You suggested there's a symmetry with donating too little -- I think this is broken because people are selecting the top option, so they are individually running into the optimizer's curse.
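A minimal Monte Carlo version of this setup (the numbers are from the example above; the simulation code and its noise parameter are mine):

```python
import math
import random

def fraction_choosing_easy(n_donors=1_000, n_hard=100, sigma=1.5, seed=0):
    """Each donor correctly evaluates the easy option at 1 util/$, and sees
    an independent log-normal estimate (median at the true 0.1 util/$,
    log-sd `sigma`) for each hard option. Donors give wherever their own
    estimate is highest."""
    rng = random.Random(seed)
    chose_easy = 0
    for _ in range(n_donors):
        best = 1.0  # the correctly evaluated easy option
        picked_easy = True
        for _ in range(n_hard):
            estimate = 0.1 * math.exp(rng.gauss(0.0, sigma))
            if estimate > best:
                best = estimate
                picked_easy = False
        chose_easy += picked_easy
    return chose_easy / n_donors

# With 100 noisy options, the maximum of the estimates almost always
# exceeds 1, so nearly every donor individually backs a hard option (the
# optimizer's curse); pooling the estimates -- e.g. averaging them in log
# space -- would recover ~0.1 for each hard option and send all the money
# to the easy one.
```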

Comment by owen_cotton-barratt on Update on Effective Altruism Funds · 2017-04-23T09:10:49.020Z · score: 8 (14 votes) · EA · GW

Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.

I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps -- particularly if it's made explicit that this is a function of the reader and writing, and not the writing alone.

Comment by owen_cotton-barratt on Update on Effective Altruism Funds · 2017-04-23T08:54:29.174Z · score: 2 (4 votes) · EA · GW

The basic dynamic applies. Think it's pretty reasonable to use the name to point loosely in such cases, even if the original paper didn't discuss this extension.

Comment by owen_cotton-barratt on Hard-to-reverse decisions destroy option value · 2017-03-26T11:13:12.354Z · score: 2 (2 votes) · EA · GW

> Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis.

I agree with some versions of this view. For what it's worth I think there may be a selection effect in terms of the people you're talking to, though (perhaps in terms of the organisations they've chosen to work with): I don't think there's anything like consensus about this among the researchers I've talked to.

Comment by owen_cotton-barratt on Hard-to-reverse decisions destroy option value · 2017-03-25T15:10:04.880Z · score: 5 (7 votes) · EA · GW

I think that the value of this type of work comes from: (i) making it easier for people entering the community to come up to the frontier of thought on different issues; (ii) building solid foundations for our positions, which makes it easier to go take large steps in subsequent work.

Cf. Olah & Carter's recent post on research debt.

Comment by owen_cotton-barratt on Why I left EA · 2017-02-21T09:43:53.315Z · score: 4 (6 votes) · EA · GW

Really liked this comment. Would be happy to see a top level post on the issue.

Comment by owen_cotton-barratt on CHCAI/MIRI research internship in AI safety · 2017-02-17T21:41:03.234Z · score: 3 (3 votes) · EA · GW

Actually, that's probably overridden by a heuristic of not trying to second-guess decisions as a donor. What I mean, rather, is something like: please say if you thought this was a good idea but were budget-constrained.

Comment by owen_cotton-barratt on CHCAI/MIRI research internship in AI safety · 2017-02-17T21:37:35.300Z · score: 5 (5 votes) · EA · GW

Awesome, strongly pro this sort of thing.

You don't mention covering travel expenses. Do you intend to? If not, would you consider donations to let you do so? (Haven't thought about it much, but my heuristics suggest good use of marginal funds.)

Comment by owen_cotton-barratt on Strategic considerations about different speeds of AI takeoff · 2017-02-12T10:45:52.023Z · score: 2 (2 votes) · EA · GW

I think you get an adjustment from that, but that it should be modest. None of the arguments we have so far about how difficult to expect the problem to be seem very robust, so I think it's appropriate to have a somewhat broad prior over possible difficulties.

I think the picture you link to is plausible if the horizontal axis is interpreted as a log scale. But this changes the calculation of marginal impact quite a lot, so that you probably get more marginal impact towards the left than in the middle of the curve. (I think it's conceivable to end up with well-founded beliefs that look like that curve on a linear scale, but that this requires (a) very good understanding of what the problem actually is, & (b) justified confidence that you have the correct understanding.)

Comment by owen_cotton-barratt on Introducing the EA Funds · 2017-02-11T23:20:23.544Z · score: 9 (9 votes) · EA · GW

Presumably there's an operational cost to CEA in setting up / running the funds? I'd thought this was what Tom was asking about.

Comment by owen_cotton-barratt on Introducing the EA Funds · 2017-02-11T11:23:57.602Z · score: 5 (5 votes) · EA · GW

I think this is an important point. But it's worth acknowledging there's a potential downside to this too -- perhaps the bar of getting others on board is a useful check against errors of individual judgement.

Comment by owen_cotton-barratt on Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation · 2016-12-31T17:50:49.381Z · score: 2 (2 votes) · EA · GW

I think "in expectation" is meant to mean that they can access a probability of having large donation size and time investment. You might say "stochastically".

Comment by owen_cotton-barratt on Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation · 2016-12-31T17:18:28.234Z · score: 9 (8 votes) · EA · GW

Thanks for such a thorough exploration of the advantages of scaling up, and why small donors may be able to beat larger institutions at the margin. I'd previously (prior to the other thread) thought that there was typically not that much to gain (or lose) from entering a lottery, but I'm now persuaded that it's probably a good idea for many small donors.

I still see a few reasons one might prefer not to commit to a lottery:

1) If you see significant benefit to the community of more people seriously thinking through donation decisions, you might prefer to reserve enough of your donation to be allocated personally that you will take the process seriously (even if you give something else to a donor lottery). Jacob Steinhardt discusses this in his recent donation post. I'm sympathetic to this for people who actively want to take some time to think through where to give (but I don't think that's everyone).

2) If you prefer giving now over giving later, you may wish to make commitments about future donations to help charities scale up faster. This is much harder to do with donor lotteries. If you trusted other lottery entrants enough you could all commit to donating to it in the future, with the ability to make commitments about next year's allocation of funds being randomised today. But that's a much higher bar of trust than the current lottery requires. Alternatively you could borrow money to donate more (via the lottery) today. If you think that there are significant advantages to the lottery and to giving earlier, this strategy might be correct, even if borrowing to give to a particular charity is often beaten by making commitments about future donations. But if you think you're only getting a small edge from entering the lottery, this might be smaller than the benefit of being able to make commitments, and so not worthwhile.

3) If you think you might be in a good position to recognise small giving opportunities which are clearly above the bar for the community as a whole to fund, it could make sense for you to reserve some funds to let you fill these gaps in a low-friction manner. I think this is most likely to be the case for people who are doing direct work in high-priority areas. Taking such opportunities directly can avoid having to pull the attention of large or medium-sized funders. This is similar to the approach of delegating to another small donor, where the small donor is future-you.

Comment by owen_cotton-barratt on Donor lotteries: demonstration and FAQ · 2016-12-31T16:32:30.846Z · score: 0 (0 votes) · EA · GW

This seems like a reasonable concern, and longer term building good institutions for donor lotteries seems valuable.

However, I suspect there may be more overheads (and possible legal complications) in trying to run it as part of an existing charity. In the immediate term, I wonder if there are enough people you do trust who could give character references that would work for this? (You implied trust in GiveWell, and I believe Paul and Carl are fairly well known to several GiveWell staff. On the other hand, you might think that GiveWell's institutional reputation is more valuable than the individual reputations of the people who work there, and so be more inclined to trust a project it backs, not because you know more about it, but because it has more at stake.)

Comment by owen_cotton-barratt on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-23T19:12:19.983Z · score: 4 (4 votes) · EA · GW

Is there a written version of this anywhere? I'm interested in the content of the argument, but I don't like video.

Comment by owen_cotton-barratt on Thoughts on the "Meta Trap" · 2016-12-23T10:55:26.157Z · score: 0 (0 votes) · EA · GW

I think it's not quite what you're looking for, but I wrote How valuable is movement growth?, which is an article analysing the long-term counterfactual impact of different types of short-term movement growth effects. (It doesn't properly speak to the empirical question of how short-term effort into meta work translates into short-term movement growth effects.)

Comment by owen_cotton-barratt on What is the expected value of creating a GiveWell top charity? · 2016-12-19T11:03:01.598Z · score: 0 (0 votes) · EA · GW

I think this has removed the pathology. There's still more variation in this number, but that comes from greater uncertainty about the amount of senior staff time needed. If the decision-relevant question is "how many of these could we do sequentially?", then it's appropriate to weight this uncertainty in this way.

Comment by owen_cotton-barratt on What is the expected value of creating a GiveWell top charity? · 2016-12-18T18:59:21.751Z · score: 1 (1 votes) · EA · GW

This doesn't look fixed to me (though possibly I'm seeing an older cached version?). I no longer see negative numbers in the summary statistics, but you're still dividing by quantities involving normal distributions -- these have a small chance of being extremely close to zero or even negative. Because the denominator has positive probability density near zero, the ratio is heavy-tailed, and the expectation of the resulting distribution is undefined.

Empirically, I think this is happening, because: (i) the sampling seems unstable -- refreshing the page a few times gives me quite different answers each time; and (ii) the "sensitivity" tool in Guesstimate suggests something funny is going on (but I'm not sure exactly how that diagnostic works, so take this with a pinch of salt).

To avoid this, I'd change all of the normal distributions that you may end up dividing by to log-normals.
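A quick way to see the difference is to simulate the two choices of denominator. This is an illustrative sketch, not the actual Guesstimate model: the parameters below are made-up stand-ins for an uncertain benefit divided by an uncertain cost.

```python
import numpy as np

def ratio_mean(denominator_sampler, n=100_000, seed=0):
    """Monte Carlo estimate of E[numerator / denominator]."""
    rng = np.random.default_rng(seed)
    numerator = rng.normal(10.0, 2.0, n)  # some uncertain benefit (assumed)
    return float(np.mean(numerator / denominator_sampler(rng, n)))

# Denominator as a normal: nonzero density near zero, so the ratio is heavy-tailed.
normal_denom = lambda rng, n: rng.normal(5.0, 2.0, n)
# Denominator as a log-normal with a similar centre: strictly positive.
lognormal_denom = lambda rng, n: rng.lognormal(np.log(5.0), 0.4, n)

normal_means = [ratio_mean(normal_denom, seed=s) for s in range(5)]
lognormal_means = [ratio_mean(lognormal_denom, seed=s) for s in range(5)]

print("normal denominator:    ", np.round(normal_means, 2))     # jumps around across seeds
print("log-normal denominator:", np.round(lognormal_means, 2))  # stable across seeds
```

Re-running with different seeds mimics refreshing the Guesstimate page: the normal-denominator estimates never settle down (the underlying expectation doesn't exist), while the log-normal version converges.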

Comment by owen_cotton-barratt on What is the expected value of creating a GiveWell top charity? · 2016-12-18T15:05:40.867Z · score: 2 (2 votes) · EA · GW

Can you explain why attributing all impact to senior staff increases the width of the confidence interval (in log space)? I'd naively expect this to remove a source of uncertainty.

I had a quick look at the Guesstimate model, and I think what's going on is that you just have much wider error bars on how much senior staff time will be taken; but you include scenarios with negative senior staff time(!), which may contribute significantly to the expectation of the value-per-year figure but aren't very meaningful. Am I just confused?

Comment by owen_cotton-barratt on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2016-12-09T16:52:01.729Z · score: 2 (4 votes) · EA · GW

So you'd put the probability of CEV working at between 90 and 99 percent?

No, rather lower than that (80%?). But I think we're more likely to attain only somewhat-flawed versions of the future without something CEV-ish. This reduces my estimate of the value of getting those flawed futures roughly right, relative to getting good outcomes in worlds which do achieve something like CEV. I think that ex post this probably provides another very large discount factor, and the significant chance that it does provides another modest ex-ante discount factor (maybe another 80%; none of my numbers here are deeply considered).

Comment by owen_cotton-barratt on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2016-12-09T15:20:52.530Z · score: 5 (7 votes) · EA · GW

Thanks for the write-up. I'm excited about people presenting well thought-through cases for the value of different domains.

I want to push back a bit against the claim that the problem is time-sensitive. If we needed to directly specify what we valued to a powerful AI, then it would be crucial that we had a good answer to that by the time we had such an AI. But an alternative to directly specifying what it is that we value is to specify the process for working out what to value (something in the direction of CEV). If we can do this, then we can pass the intellectual work of this research off to the hypothesised AI. And this strategy looks generally very desirable for various robustness reasons.

Putting this together, I think that there is a high probability that consciousness research is not time-critical. This is enough to make me discount its value by perhaps one-to-two orders of magnitude. However, it could remain high-value even given such a discount.

(I agree that in the long run it's important. I haven't looked into your work beyond this post, so I don't (yet) have much of a direct view of how tractable the problem is to your approach. At least I don't see problems in principle.)

Comment by owen_cotton-barratt on A new reference site: Effective Altruism Concepts · 2016-12-08T15:19:10.327Z · score: 3 (3 votes) · EA · GW

Your two suggestions are both close to things we had in mind (on the first, we were thinking less of someone who's very new than of someone who's somewhat engaged already, who may be up to speed on some areas but wants to learn more about others).

Another use case is helping people who are considering doing research or strategy work to orient themselves with respect to the whole space of current thinking. This can help people to understand how different parts of research translate into better decisions, which in turn can help them to pick more crucial questions to work on. The hierarchical structure can also make it more apparent if there's a topic which should be worked on but hasn't been: rather than just explore out from existing streetlights we can spot where there are big patches of darkness. This might be strengthened by your suggestion of a hybrid wiki/forum (we talked about something in this direction, and our feeling was "could be cool, revisit later").

Comment by owen_cotton-barratt on A new reference site: Effective Altruism Concepts · 2016-12-08T15:07:02.844Z · score: 3 (3 votes) · EA · GW

UI is not really my area, so I'll leave that to others except to say:

  • Thanks for all the comments! I think that more work into the UI is going to be important, and critical voices are helpful for this.
  • In development a lot of this lived in workflowy, and it was noticeably worse to use than now. (But perhaps there was a different way of setting it up which would have worked better.)

On strategy, the general idea is not that everyone reads the whole thing, but that people can explore local areas they're interested in. This should avoid the need to cut anything off into a glossary (although the guidance for how to start engaging could improve; I agree that idealized ethical decision making content is irrelevant for most users so should probably be less prominent). This should let people engage with and become experts on aspects of EA-relevant research and have a rough idea of how it fits in with other areas, without needing to be expert on those other areas. One of the important reasons for laying it out in an approximately-logical tree was that we think this could help people to spot where there are gaps in the research that haven't been noticed.

Comment by owen_cotton-barratt on CEA is Fundraising! (Winter 2016) · 2016-12-08T09:39:43.091Z · score: -1 (1 votes) · EA · GW

You could give a little (3x3? 5x5?) voting grid: usefulness is one dimension, agreement is another. Users have the option of hiding one of the dimensions, and maybe this is the default.

Comment by owen_cotton-barratt on CEA is Fundraising! (Winter 2016) · 2016-12-07T19:10:49.693Z · score: 3 (3 votes) · EA · GW

I know this is the stated meaning, and I usually think it's correct to act on. In some cases when usage deviates from this, though, I'm not actually sure that people are making a mistake.

I think that happens most often on short statements of opinion. In such cases, there's not much ambiguity about how useful the comment was (opinions are always somewhat useful but don't contain amazing new insights). It's more useful to get a cheap instant poll of how widespread that opinion is in the community.


  • I'm not confident in this, but to the extent that it seems wrong, it would be because we thought posting short opinions was generally unhelpful. (I'd find that claim more plausible on LW, but it's still dubious there.) Otherwise, to convey information about the distribution of opinions, lots of people need to post.
  • Separate buttons, as Benito suggests below, might well be preferable. In particular, they'd avoid the ambiguity of cases like the one at hand, which will mostly be read as an expression of opinion but also gives some considerations in its favour.
  • This "instant poll" effect is to my mind the strongest reason for having voting scores on posts be public anyway. Maybe if there were separate buttons only the "agree/disagree" one would get displayed, and the "useful/not-useful" would be used to determine display-order for posts.
  • I was going to down-vote Ryan's comment to express that I disagree ;) But then I noticed that it was unusually helpful that he'd raised the point explicitly as it made it easier to have this conversation, and didn't know what to do.
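For concreteness, the arrangement in the third bullet could be sketched as follows. This is a hypothetical data model in Python, not actual forum code: each comment accumulates separate "useful" and "agree" scores, usefulness silently drives display order, and agreement is shown publicly as an instant poll.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    useful: int = 0  # hidden; determines ranking
    agree: int = 0   # public; acts as a cheap opinion poll

    def vote(self, useful=0, agree=0):
        self.useful += useful
        self.agree += agree

comments = [Comment("short opinion"), Comment("detailed argument")]
comments[0].vote(useful=1, agree=5)   # widely shared view, little new insight
comments[1].vote(useful=4, agree=-1)  # unpopular but informative

# Sort by usefulness; display only the agreement score.
ranked = sorted(comments, key=lambda c: c.useful, reverse=True)
for c in ranked:
    print(f"{c.text}: agreement {c.agree:+d}")
```

The informative comment surfaces first even though it's unpopular, while the short opinion still reports how widespread the view is -- separating the "was this worth reading?" signal from the "do I agree?" signal.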
Comment by owen_cotton-barratt on Contra the Giving What We Can pledge · 2016-12-05T22:18:00.204Z · score: 2 (2 votes) · EA · GW

Re. firebombing, I think the force of the argument there rests on the idea that everyone agrees there were lots of reasonable alternatives that were better, i.e. that it was unusually bad.

I don't think you think that's true in the case of the GWWC pledge?

Comment by owen_cotton-barratt on Why I'm donating to MIRI this year · 2016-12-05T21:34:03.232Z · score: 1 (1 votes) · EA · GW

Here's my current high-level take on the difference in our perspectives:

  • It is ambiguous whether MIRI's work is actually useful theory-building that they are just doing a poor job of communicating clearly, or whether it isn't building towards something useful.
  • I tend towards giving them the benefit of the doubt / hedging that they are doing something valuable.
  • The Open Phil review takes a more sceptical position, that if they can't clearly express the value of the work, maybe there is not so much to it.