The ITN framework, cost-effectiveness, and cause prioritisation 2019-10-06T05:26:24.879Z · score: 84 (36 votes)
What should Founders Pledge research? 2019-09-09T17:41:04.073Z · score: 51 (19 votes)
[Link] New Founders Pledge report on existential risk 2019-03-28T11:46:17.623Z · score: 39 (13 votes)
The case for delaying solar geoengineering research 2019-03-23T15:26:13.119Z · score: 49 (20 votes)
Insomnia: a promising cure 2018-11-16T18:33:28.060Z · score: 38 (19 votes)
Concerns with ACE research 2018-09-07T14:56:25.737Z · score: 31 (28 votes)
New research on effective climate charities 2018-07-11T13:51:23.354Z · score: 19 (19 votes)
The counterfactual impact of agents acting in concert 2018-05-27T10:54:03.677Z · score: 4 (10 votes)
Climate change, geoengineering, and existential risk 2018-03-20T10:48:01.316Z · score: 16 (15 votes)
Economics, prioritisation, and pro-rich bias   2018-01-02T22:33:36.355Z · score: 3 (9 votes)
We're hiring! Founders Pledge is seeking a new researcher 2017-12-18T12:30:02.429Z · score: 4 (4 votes)
Capitalism and Selfishness 2017-09-15T08:30:54.508Z · score: 12 (14 votes)
How should we assess very uncertain and non-testable stuff? 2017-08-17T13:24:44.537Z · score: 18 (18 votes)
Where should anti-paternalists donate? 2017-05-04T09:36:53.654Z · score: 10 (10 votes)
The asymmetry and the far future 2017-03-09T22:05:26.700Z · score: 9 (17 votes)


Comment by halstead on Why we think the Founders Pledge report overrates CfRN · 2019-11-06T23:15:52.898Z · score: 2 (1 votes) · EA · GW

I agree with that. My response is (1) to contextualise this by saying that this feature is true of almost all CEAs, (2) to say that I don't think the counterfactual use of funds is very good in comparison to effective spending on deforestation prevention.

Comment by halstead on Why we think the Founders Pledge report overrates CfRN · 2019-11-05T14:11:11.900Z · score: 2 (1 votes) · EA · GW

Hello, my response was about the counterfactual value of funds to REDD+ - i.e. what govts and the private sector would spend money on. It is analogous to a donation to FHI: Sanjay is proposing that we should discount money to REDD+ projects because part of the money would otherwise have gone to global development. In the same way, one could argue that money donated to FHI would otherwise have gone to global development and discount by that. This is in principle correct, but it tends not to be done.

Comment by halstead on Why we think the Founders Pledge report overrates CfRN · 2019-11-05T09:41:15.484Z · score: 8 (5 votes) · EA · GW

Hi Sanjay, thanks for writing this. As we have discussed, I agree with some of this and disagree with other parts.

1. On whether the pledged funds will be forthcoming. I agree that the pessimistic estimate of funds forthcoming was probably too high, though I haven't looked at how much money has actually come out in the past year. However, I don't think this has that big an effect on the CEA, because the pessimistic estimate also assumes a cost per tonne of $30 (vs the $5 per tonne that you assume here) to abate CO2 through deforestation prevention. In the model, this offsets the potential overestimate of the forthcoming funds by a factor of 6, which makes the end estimate similar to the one you produce. I'm also not sure it is right to anchor so much on how much money has been disbursed so far, given that the model assesses the money that will be disbursed through REDD+ over all time, not just the preceding year.
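To make the offsetting effect concrete, here is a toy sketch with invented figures (not the actual CEA inputs): tonnes abated scale as funds divided by cost per tonne, so a roughly 6x-higher cost assumption cancels a roughly 6x overestimate of the forthcoming funds.

```python
# Toy illustration with invented figures, not the actual CEA inputs:
# tonnes of CO2 abated = funds disbursed / cost per tonne.

def tonnes_abated(funds_usd, cost_per_tonne_usd):
    """CO2 abated (tonnes) for a given spend and unit cost."""
    return funds_usd / cost_per_tonne_usd

# Critic's framing: a smaller funds estimate at $5/t.
critic = tonnes_abated(1e9, 5.0)

# Report's pessimistic case: ~6x more funds, but at $30/t.
pessimistic = tonnes_abated(6e9, 30.0)

# The two end estimates coincide: the 6x cost assumption offsets
# the 6x difference in assumed forthcoming funds.
print(critic, pessimistic)
```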

2. On the counterfactual impact of funds. I agree that this is in principle a gap in the CEA. However, this criticism also applies to almost all CEAs I have ever seen. Accounting for all counterfactuals in CEA models is very hard. Moreover, as you note, we do try to account for the counterfactual in the model by trying to estimate how much of the additional funding for REDD+ counterfactually contributes to additional CO2 reductions. We do this in the section where we discuss the interaction between carbon pricing and the effect of freeing up relatively cheap forestry offsets. The argument is that carbon is priced at a very low level worldwide (<$10/t), so opening up <$10/t offsets does free up additional funds for climate change that would not otherwise have gone to climate change. This also applies to planting trees, since REDD+ in principle covers such activity, so I don't think that could be a reason to downgrade CfRN's cost-effectiveness.

I agree that the funds spent on REDD+ could have gone to global development and this isn't accounted for in the model, but (1) to put this criticism in context, this is also true of almost all other CEAs that I have seen - you could do this in a CEA for FHI, for example: money to them could have gone to global development. It becomes very unwieldy to measure such things. (2) Standard EA wisdom is that a lot of govt global development spending isn't very impactful. It is also of course hard to know how to trade off CO2 and global development metrics, but this seems to me at most a reason to very modestly reduce your estimate of CfRN's cost-effectiveness. I personally think that climate change is clearly better than global development from a long-termist point of view, so directing money to the former is far better than directing money to the latter.

On counterfactual private sector funds, I'm not sure I agree with this. The government compulsion we refer to in the report assumes that governments impose a carbon price of <$10/t. For the reasons mentioned above, I don't think there are many other <$10/t offsets aside from forests.

3. Insufficient incentive funds. This is definitely a concern about REDD+, and I had hoped it would have (1) picked up more over the last year (maybe it has; I haven't checked) and (2) constrained Bolsonaro's policies more due to the financial incentives (though I haven't looked into this this year either).

I'm not sure I agree that this is a good reason not to support CfRN. One could also argue that this makes it especially important to make sure REDD+ does not collapse and get replaced by nothing/something worse. It is (I think we agree) in principle a good idea, but there is a fair way to go on the implementation side. But I can also see the force of your argument.

4. Future private sector demand for compliance-grade offsets. I agree that the rationale surrounding the chart was a mistake and should have considered other possible reasons for the decline in demand. However, as you say, this isn't the only piece of evidence that we produce for this estimate. The argument is that carbon pricing will incentivise private companies to buy high-grade offsets. I still think this is true. I agree that it is unlikely that corporates will buy such offsets for the extra security of having an impact, though this was not part of our argument for the private sector funding projections.

The idea is not that CfRN ensure that the private funding goes through the registry and exchange but rather that REDD+ offsets are recognised as high enough quality to be included in carbon pricing schemes, incentivising corporates to buy such offsets.

5. On it being overly generous to assign all of these benefits to CfRN. I think this is a philosophical difference in measuring counterfactual impact. Some evaluators give orgs a portion of 'the credit' for some amount of impact, but I don't think this is correct. We measure the impact of CfRN as a speed up in deforestation prevention, rather than giving them a portion of the credit, which I don't think is an idea that makes conceptual sense.

I do think it is plausible that if CfRN had not existed, agreement on a system for forestry protection would have been delayed for 2-5 years and arguably much longer (it is extremely hard to say). (This also means that the Paris Agreement would probably have been delayed by many years.) So, I do think it is plausible that CfRN have counterfactually released massive amounts of money for forests despite having a small budget. It is important to remember that CfRN are unusual in that they are an intergovernmental org and have a seat at the table at climate negotiations, where they represent all of the world's largest rainforest countries except Brazil.

These disagreements aside, I encourage more efforts at checking charity recommendations rather than taking them on faith, so thanks again for doing this. Also, Founders Pledge has hired a new climate policy expert and we will be revisiting our climate research over the next few months and will assess our old recommendations and hopefully add new ones.

Comment by halstead on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-18T14:51:15.194Z · score: 3 (2 votes) · EA · GW

Hello Michael. This feels like going too far in an anti-ITN direction. On the point about scores going to infinity, this feels like an edge case where things break down rather than something which renders the framework useless. Price elasticities also have this feature, for example, but still seem useful.
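A hedged sketch of that edge case, with made-up numbers: in the quantified ITN decomposition, neglectedness is roughly the proportional increase in resources bought by an extra dollar, which diverges as current resources go to zero.

```python
# Toy ITN score with invented numbers. Neglectedness is modelled here
# as the proportional increase in resources from one extra dollar,
# which diverges as current resources approach zero.

def itn_score(scale, tractability, current_resources_usd, extra_usd=1.0):
    neglectedness = extra_usd / current_resources_usd
    return scale * tractability * neglectedness

well_funded = itn_score(scale=100.0, tractability=0.1, current_resources_usd=1e6)
tiny_field = itn_score(scale=100.0, tractability=0.1, current_resources_usd=1e-3)

# The score explodes for the near-zero-resource field, even though
# nothing about the underlying problem has changed.
print(well_funded, tiny_field)
```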

On defining a problem, there have to be restrictions on what you are comparing in order to make the framework useful. Nevertheless, it does seem that there is a meaningful sense in which we can compare eg malaria and diabetes in terms of scale and neglectedness, and that this could be a useful comparison to make.

Overall, I do think the ITN framework can be useful sometimes. If you knew nothing else about two problems aside from their importance and neglectedness and one was very important and neglected and one was not, then that would indeed be a reason to favour the former. Sometimes, problems will dominate others in terms of the three criteria considered at low resolution, and there the framework will again be useful.

Where I have my doubts is in it being used to make decisions in the hard high stakes cases. There, we need to use the best available arguments on marginal cost-effectiveness, not this very zoomed out perspective. eg we need to discuss whether technical AI safety research can indeed make progress.

Comment by halstead on Shapley values: Better than counterfactuals · 2019-10-11T09:18:35.162Z · score: 5 (3 votes) · EA · GW

Thanks for this interesting post. As I argued in the post that you cite and as George Bridgwater notes below, I don't think you have identified a problem in the idea of counterfactual impact here, but have instead shown that you sometimes cannot aggregate counterfactual impact across agents. As you say, CounterfactualImpact(Agent) = Value(World with agent) - Value(World without agent).

Suppose Karen and Andrew have a one night stand which leads to Karen having a baby George (and Karen and Andrew otherwise have no effect on anything). In this case, Andrew's counterfactual impact is:

Value (world with one night stand) - Value (world without one night stand)

The same is true for Karen. Thus, the counterfactual impact of each of them taken individually is an additional baby George. This doesn't mean that the counterfactual impact of Andrew and Karen combined is two additional baby Georges. In fact, the counterfactual impact of Karen and Andrew combined is also given by:

Value (world with one night stand) - Value (world without one night stand)

Thus, the counterfactual impact of Karen and Andrew combined is an additional baby George. There is nothing in the definition of counterfactual impact which implies it can always be aggregated across agents.

This is the difference between "if me and Karen hadn't existed, neither would George" and "If I hadn't existed, neither would George, and if Karen hadn't existed neither would George, therefore if me and Karen hadn't existed, neither would two Georges." This last statement is confused, because the babies referred to in the antecedent are the same.
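The one-night-stand example can be sketched in a few lines (a toy model; the value function simply counts baby Georges):

```python
# Toy model of the example: counterfactual impact is value(world with
# the agent(s)) minus value(world without them). George exists only
# if both Karen and Andrew are in the world.

def value(world):
    """Value of a world = number of baby Georges in it."""
    return 1 if {"Karen", "Andrew"} <= world else 0

world = {"Karen", "Andrew"}

impact_andrew = value(world) - value(world - {"Andrew"})          # one George
impact_karen = value(world) - value(world - {"Karen"})            # one George
impact_joint = value(world) - value(world - {"Karen", "Andrew"})  # one, not two

# Individual impacts do not sum to the joint impact: counterfactual
# impact cannot always be aggregated across agents.
print(impact_andrew, impact_karen, impact_joint)
```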

I discuss other examples in the comments to Joey's post.


The counterfactual understanding of impact is how almost all voting theorists analyse the expected value of voting. EAs tend to think that voting is sometimes altruistically rational because of the small chance of being the one pivotal voter and making a large counterfactual difference. On the Shapley value approach, the large counterfactual difference would be divided by the number of winning voters. Firstly, to my knowledge almost no-one in voting theory assesses the impact of voting in this way. Secondly, this would, I think, imply that voting is never rational, since in any large election the prospective pay-off of voting would be divided by the potential set of winning voters and so would be >100,000x smaller than on the counterfactual approach.
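As a rough sketch of the difference (all figures invented): under the counterfactual approach, the expected value of voting is the pivotality probability times the social benefit; under the Shapley-style division described above, that figure is further split among the winning voters.

```python
# Rough sketch with invented figures; real election models differ.

p_pivotal = 1e-7        # assumed chance your vote is decisive
social_benefit = 1e10   # assumed value of the better outcome winning
winning_voters = 5e6    # assumed number of voters on the winning side

# Counterfactual approach: expected value of casting a vote.
ev_counterfactual = p_pivotal * social_benefit

# Shapley-style division: the same pay-off split among winning voters.
ev_shapley_style = ev_counterfactual / winning_voters

# The Shapley-style figure is millions of times smaller, which is why
# this approach would make voting look irrational in large elections.
print(ev_counterfactual, ev_shapley_style)
```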

Comment by halstead on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-07T15:20:08.190Z · score: 3 (11 votes) · EA · GW

I disagree that 80k should transition towards a £3k retreat + no online content model, but it doesn't seem worth getting into why here.

On premises, here is the top definition I have found from googling: "a previous statement or proposition from which another is inferred or follows as a conclusion". This fits with my (and CFAR's) characterisation of double cruxing. I think we're agreed that the question is which premises you disagree on cause your disagreement. Given that definition, it is logically impossible for double cruxing to extend beyond this characterisation.

Comment by halstead on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-07T15:10:15.899Z · score: 6 (4 votes) · EA · GW

Yes, I don't fully understand why they're not legible. A 4-day workshop seems pretty well-placed for a carefully done impact evaluation.

Comment by halstead on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-10-07T02:28:55.038Z · score: 3 (2 votes) · EA · GW

On the biting-the-bullet answer, that doesn't seem plausible to me. The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death. Per proponents of cluelessness, I could argue "maybe it will make me look cool to smoke, and that will increase my chances of getting a desirable partner" or something like that. In that sense, the sign of the effect of smoking on my own interests is not certain. Nevertheless, I think it is irrational to smoke. I don't think a Parfitian understanding of identity would help here, because then my refusal to smoke would be altruistic - I would be helping out my future self.

The dodge-the-bullet answer is more plausible, and I may follow up with more later.

Comment by halstead on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-10-07T02:18:06.278Z · score: 3 (2 votes) · EA · GW

On the latter, yes that is a good point - there are general features at play here, so I retract my previous comment. However, it still seems true that your rational credal state will always depend to a very significant extent on the particular facts.

I find the use of the long-termist point of view a bit weird as applied to the AMF example. AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.

Comment by halstead on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-10-07T02:14:53.843Z · score: 3 (2 votes) · EA · GW


Here is a good paper on this -

Comment by halstead on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-07T02:03:36.967Z · score: 30 (14 votes) · EA · GW

Thanks for this.

If the retreats are valuable, one would expect them to communicate genuinely useful concepts and ideas. Which ideas that CFAR teaches do you think are most useful?

On the payment model, imagine that instead of putting their material on choosing a high impact career online, 80k charged people £3000 to have 4 day coaching and networking retreats in a large mansion, afterwards giving them access to the relevant written material. I think this would shave off ~100% of the value of 80k. The differences between the two organisations don't seem to me to be large enough to make a relevant difference to this analysis when applied to CFAR. Do you think there is a case for 80k to move towards the CFAR £3k retreat model?


On double cruxing, here is how CFAR defines it:

"Let’s say you have a belief, which we can label A (for instance, “middle school students should wear uniforms”), and that you’re in disagreement with someone who believes some form of ¬A.  Double cruxing with that person means that you’re both in search of a second statement B, with the following properties:

1. You and your partner both disagree about B as well (you think B, your partner thinks ¬B)

2. The belief B is crucial for your belief in A; it is one of the cruxes of the argument.  If it turned out that B was not true, that would be sufficient to make you think A was false, too.

3. The belief ¬B is crucial for your partner’s belief in ¬A, in a similar fashion."

So, if I were to double crux with you, we would both establish which were the premises we disagree on that cause our disagreement. B is a premise in the argument for A. This is double cruxing, right?

You say:

"if you ask me "what are my premises for the belief that Nature is the most prestigious science journal?" then I definitely won't have a nice list of premises I can respond with, but if you ask me "what would change my mind about Nature being the most prestigious science journal?" I might be able to give a reasonably good answer and start having a productive conversation"

Your answer could be expressed in the form of premises, right? Premises are just propositions that bear on the likelihood of the conclusion.

Comment by halstead on Defending Philanthropy Against Democracy · 2019-10-07T01:07:11.075Z · score: 5 (3 votes) · EA · GW

This is a great post, which I think will be useful for the community!

Comment by halstead on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-06T14:43:14.208Z · score: 29 (20 votes) · EA · GW

I'm interested in the recommendation of CFAR (though I appreciate it is not funded by the LTFF). What do you think are the top ideas regarding epistemics that CFAR has come up with that have helped EA/the world?

You mention double cruxing in the other post discussing CFAR. Rather than an innovation, isn't this merely agreeing on which premise you disagree on? Similarly, isn't murphyjitsu just the pre-mortem, which was defined by Kahneman more than a decade ago?

I also wonder why CFAR has to charge people for their advice. Why don't they write down all of their insights and put it online for free?

Comment by halstead on [link] Andreas Mogensen's "Maximal Cluelessness" · 2019-10-06T05:50:07.327Z · score: 8 (5 votes) · EA · GW

I'm pretty sceptical of arguments for cluelessness. Some thoughts:

  • Knightian uncertainty seems to me never rational. There are strong arguments that credence functions should be sharp. Even if you can bound your credences very broadly with intervals, it seems like you would never be under Knightian uncertainty given your information - your credal state is always somewhere between 0 and 1, and surely your mean estimate will differ between different problems.
  • Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.
  • I don't see how you could make a general argument for cluelessness with respect to all decisions made by the community. You could make an argument that the sign of the expected benefits of EA actions is much more uncertain than has been acknowledged. I don't see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?
  • Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?
  • Related to the above, in the AMF vs Make-A-Wish Foundation example, I don't actually agree that we are as uncertain as suggested. e.g. you list studies citing different effects of life saving on fertility, saying "Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions". This seems to be asking for the reaction "what are we to do in the face of all this methodological complexity?" But an economist would actually have an answer to this - cross-country comparisons with cross-sectional data are out of fashion, for example.
  • Overall, arguments about cluelessness seem to merely reassert that the world is complex and we should think carefully before acting. I don't see how it points to some deep permanent feature of our epistemic situation.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-06T01:35:47.611Z · score: 7 (4 votes) · EA · GW

Ok, cheers. I disagree with that, but feel we have reached the end of productive argument.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-05T07:10:00.353Z · score: 4 (2 votes) · EA · GW

What do you make of my 'offensive beliefs' poll idea and questions?

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-05T02:40:29.479Z · score: 7 (4 votes) · EA · GW

There are two issues here. The less important one is - (1) are people's beliefs that many of these opinions are taboo rational? I think not, and have discussed the reasons why above.

The more important one is (2) - this poll is a blunt instrument that encourages people to enter offensive opinions that threaten the reputation of the movement. If there were a way to do this with those opinions laundered out, then I wouldn't have a problem.

This has been done in a very careless way, without due thought to the very obvious risks.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-05T02:29:33.624Z · score: 11 (6 votes) · EA · GW

They have a section on 'why do this?' and don't discuss any of the obvious risks which suggests they haven't thought properly about the issue. I think a good norm to propagate would be - people put a lot of thought into whether they should publish posts that could potentially damage the movement. Do you agree?

Suppose I am going to run a poll on 'what's the most offensive thing you believe - anonymous public poll for effective altruists'. (1) do you think I should have to publicly explain why I am doing this? (2) do you think I should run this poll and publish the results?

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T09:57:19.131Z · score: 35 (12 votes) · EA · GW

Hi Ben,

Thanks for this; it is useful (upvoted).

1. I think we disagree on the empirical facts here. EA seems to me unusually open to considering rational arguments for unfashionable positions. People in my experience lose points for bad arguments, not for weird conclusions. I'd be very perplexed if someone were not willing to discuss whether or not utilitarianism is false (or whether remote working is bad etc) in front of EAs, and would think someone was overcome by irrational fear if they declined to do so. Michael Plant believes one of the allegedly taboo opinions here (mental health should be a priority) and is currently on a speaking tour of EA events across the Far East.

2. This is a good point and updates me towards the usefulness of the survey, but I wonder whether there is a better way to achieve this that doesn't carry such clear reputational risks for EA.

3. The issue is not whether my colleagues have sufficient publicly accessible reason to believe that EA is full of good people acting in good faith (which they do), but whether this survey weighs heavily or not in the evidence that they will actually consider. i.e. this might lead them not to consider the rest of the evidence that EA is mostly full of good people working in good faith. I think there is a serious risk of that.

4. As mentioned elsewhere in the thread, I'm not saying that EA should embrace political level self-restraint. What I am saying is that there are sometimes reasons to self-censor holding forth on all of your opinions in public when you represent a community of people trying to achieve something important. The respondents to this poll implicitly agree with that given that they want to remain anonymous. For some of these statements, the reputational risk of airing them anonymously does not transfer from them to the EA movement as a whole. For other statements, the reputational risk does transfer from them to the community as a whole.

Do you think anyone in the community should ever self-censor for the sake of the reputation of the movement? Do you think scientists should ever self-censor their views?

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T09:40:39.687Z · score: 10 (8 votes) · EA · GW

This post actively encourages people to post their least acceptable views online, so seems bad by this argument.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T09:39:34.687Z · score: 8 (10 votes) · EA · GW

Hi, you start with a straw man here - I'm not requesting that they write a whole essay, I'm just requesting that they put some thought into the potential downsides, rather than zero thought (as occurred here). As I understand your view, you think the person has no obligation to put any thought into whether publishing this post is a good idea or not. I have to say I find this an implausible and strange position.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T04:18:26.228Z · score: 17 (10 votes) · EA · GW

I respect your view Oli, but I don't think the person organising it put sufficient thought into the downsides of doing a poll such as this. They didn't discuss any of the obvious risks in the 'why this is a valuable exercise' section.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T00:18:07.116Z · score: 13 (9 votes) · EA · GW

The political analogy was an example; it was not meant to say that standard political constraints should apply to EA. The thought applies to any social movement, e.g. for people involved in environmentalism, radical exchange or libertarianism. If I were a libertarian and someone came to me saying "why don't we run a poll of libertarians on opinions they are scared to air publicly and then publish those opinions online for the world to see", I think it would be pretty obvious that this would be an extremely bad idea.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T00:11:58.279Z · score: 53 (24 votes) · EA · GW

Hi Ben,

I see little upside in knowing almost all of what is said here, but see lots of downside.

(1) For some (most?) of these opinions, there isn't any social pressure not to air them. Indeed, as several people have already noted, some of these topics are already the subject of extensive public debate by people who like EA. (negative utilitarianism is plausible, utilitarianism is false, human enhancement is good, abortion is bad, remote working might lead to burnout, scepticism about polyamory, mental health is important etc). No value is added in airing these things anonymously.

(2) Some seem to be discussed less often but it is not clear why. eg if people want to have a go at CFAR publicly, I don't really see what is stopping them as long as their arguments are sensible. It's not as though criticising EA orgs is forbidden. I've criticised ACE publicly and as far as I know, this hasn't negatively affected me. People have pretty brutally criticised the long-term future fund formation and grants. etc.

(3) A small minority of these might reveal truths about flaws in the movement that there is social pressure not to air. (this is where the positive value comes from).

(4) For the most important subset of beyond the pale views, there is a clear risk of people not wholly bought into EA seeing this and this being extremely offputting. This is a publicly published document which could be found by the media or major philanthropists when they are googling what effective altruism is. It could be shared on facebook by someone saying "look at all the unpleasant things that effective altruists think". In general, this post allows people to pass on reputational damage they might personally bear from themselves to the movement as a whole.

Unfortunately, I can speak from first hand experience on the harm that this post has done. This post has been shared within the organisation I work for and I think could do very large damage to the reputation of EA within my org. I suspect that this alone makes the impact of this poll clearly net negative. I hope the person who set up this post sees that and reconsiders setting up a similar poll in the future.

Comment by halstead on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-02T09:53:10.240Z · score: 23 (27 votes) · EA · GW

This post seems to me clearly net-negative for EA for PR reasons, so I would argue against running a poll like this in the future. If you got a load of Labour or Conservative voters to express opinions they wouldn't be happy to express in public, you would end up with a significant subsection being offensive/beyond the usual pale, which would be used to argue against the worth of those social movements. The same applies here.

[update: i'm not saying EA should self-censor like political parties. This was an example to illustrate the broader point]

Comment by halstead on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-26T13:04:11.257Z · score: 2 (1 votes) · EA · GW

Another beef I have is defining what an institution actually is. Institutionalists in economics often start by defining them as the 'rules of the game', which is vague as it is, but then the term gets extended to mean 'stuff' in the empirical investigations of the impact of institutions.

Comment by halstead on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-13T09:58:15.576Z · score: 6 (2 votes) · EA · GW

If you actually think that the only thing that matters is wellbeing, then personhood doesn't matter, so it makes sense that you would endorse these conclusions in this thought experiment.

Comment by halstead on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-09-12T10:22:59.843Z · score: 6 (6 votes) · EA · GW

I would agree with this. My understanding is that the IAMs are so unmoored from reality as to be basically useless. They don't try to account for the risk of catastrophic impacts, and the damage functions are chosen in part for mathematical tractability rather than fidelity to what climate change will actually be like. This is why I would object to claims such as "new research shows that the social cost of carbon is $477".

This also seems like an area in which expert elicitation won't be very accurate. We're talking about impacts 100 years into the future for a problem heavily dependent on political developments which are extremely difficult to predict.

Comment by halstead on What should Founders Pledge research? · 2019-09-11T09:18:30.403Z · score: 11 (4 votes) · EA · GW

Thanks a lot for this Ryan. Re promoting science, what do you make of the worry that the long-term sign of the effect of improving science is unclear, because it doesn't produce differential technological development and instead broadly accelerates the growth of all knowledge, including potentially harmful knowledge?

Comment by halstead on What should Founders Pledge research? · 2019-09-10T13:55:25.742Z · score: 9 (2 votes) · EA · GW

I think we will be able to convince enough of them to donate to high-impact areas regardless of what they are

Comment by halstead on What should Founders Pledge research? · 2019-09-09T22:18:53.272Z · score: 11 (6 votes) · EA · GW

I would expect it to be in the millions/yr, though I don't think I should throw about specific figures on the forum.

Comment by halstead on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T22:13:31.967Z · score: 14 (7 votes) · EA · GW

Inevitably, utilitarians would bite the bullet here, since ex hypothesi, there is more utility in the world in which all beings are replaced with beings with higher utility.

I think the question is whether this implication renders utilitarianism implausible. I have several observations.

(1) The assumption of the thought experiment is that the correct way to assess moral theories is to test them against intuitions about lots of particular cases. And utilitarianism has plenty of counterintuitive implications about particular cases: eg the one in the main post, the repugnant conclusion, counting sadistic pleasure, and so on ad infinitum. The problem is that I don't think this is the correct way to assess moral theories.

Many of the moral intuitions people have are best explained by the fact that those intuitions would be useful to have in the ancestral environment, rather than that they apprehend moral reality. eg incest taboos are strong across all cultures, as are beliefs that wrongdoers simply deserve punishment regardless of the benefits of punishment. These would be evolutionarily useful, which makes it hard for us to shake these beliefs. I don't think the belief that subjective wellbeing is intrinsically good is debunkable in the same way, though discussing that is beyond the scope of this post.

Analogy: the current state of moral philosophy is similar to maths if mathematicians judged mathematical proofs and theories on the basis of how intuitive they are. In that world, people's intuition against the solution to the Monty Hall Problem would be thought a good reason to try to build an alternative theory of probability. This form of maths wouldn't get very far. By the same token, moral philosophy doesn't get far in producing agreement because it uses a predictably bad moral epistemology that overwhelmingly focuses on intuitions about particular cases.

(2) Rough outline argument:

a. Subjective experience is all that matters about the world. (Imagine a world without subjective experience - why would it matter? Imagine a world in which people complete their plans but feel nothing - why would it be good?)

b. Personal identity doesn't matter. (See Parfit, or imagine you were vaporised in your sleep and a perfect clone appeared a millisecond afterwards. Why would this be bad?)

From a and b, with some plausible additional premises, you eventually end up with utilitarianism. This means you have to bite the bullet mentioned in the text, and you also find the bullet plausible because you accept a and b and the other premises.

Related to (1), I think a response to utilitarianism that started in the right way would attack these basic premises a and b, along with the other premises. eg It would try and show that something aside from subjective experience matters fundamentally.

Comment by halstead on What should Founders Pledge research? · 2019-09-09T21:43:08.667Z · score: 8 (5 votes) · EA · GW

Yes, it's something like that, except that we do make specific recommendations, which are suited to their core values, and that they typically make donations via our donor advised fund rather than directly.

Comment by halstead on Age-Weighted Voting · 2019-07-17T08:34:20.497Z · score: 7 (4 votes) · EA · GW

You might also want to look at Brighouse and Fleurbaey's Democracy and Proportionality where they argue that people should get power in proportion to their stake in a decision.

Comment by halstead on The case for delaying solar geoengineering research · 2019-05-17T13:31:40.768Z · score: 2 (1 votes) · EA · GW

I agree it's not technically the right name, but people generally know what it means which was important for a blogpost. In the paper I actually call it the mitigation obstruction argument. I explicitly discuss the irrationality assumption required for the mitigation obstruction argument in my paper. I think the question of how irrationally people/governments will respond to research is an open one.

Comment by halstead on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-03T08:26:18.961Z · score: 11 (4 votes) · EA · GW

3. I have a sceptical prior against EU studies of scientific issues because the EU has taken an anti-science stance on many issues under pressure from the environmental movement - see e.g. the effective prohibition of GMOs. The fact that the report you cite advocates for increased organic farming adds weight to my scepticism. The report also says that the estimate of the economic costs is extremely uncertain and potentially a massive overestimate.

4. There are many things in the world that impose substantial economic costs, including inefficient taxation, labour market regulation, failure to invest in R&D, etc. While they may indeed create economic costs, I fail to see the connection to existential risk.

5. While it is a small part of your portfolio, there is limited political attention for existential risk, and if CSER starts advocating for the view that biodiversity loss deserves serious consideration as a factor relevant to existential risk, that comes at a cost. In this case, the fact that Partha Dasgupta is an influential person is a negative, because he risks distracting policymakers from the genuine risks.

Comment by halstead on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-03T08:05:19.102Z · score: 10 (6 votes) · EA · GW

There are lots of risk factors for societal resilience to catastrophes, including all contemporary political and economic problems. The key question is how much of a risk they are and I have yet to see any evidence that biodiversity loss is among the top ones.

Comment by halstead on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-02T16:28:49.833Z · score: 47 (14 votes) · EA · GW

Can you explain what the mechanism is whereby biodiversity loss creates existential risk? And if biodiversity loss is an existential risk, how big a risk is it? Should 80k be getting people to go into conservation science or not?

There are independent reasons to think that the risk is negligible. Firstly, according to wikipedia, during the Eocene epoch ~50m years ago, there were thousands fewer genera than today. We have made ~1% of species extinct, and we would have to continue at current rates of species extinctions for at least 200 years to return to Eocene levels of biodiversity. And yet, even though significantly warmer than today, the Eocene marked the dawn of thousands of new species. So, why would we expect the world 200 years hence to be inhospitable to humans if it wasn't inhospitable for all of the species emerging in the Eocene, which are/were significantly less numerous than humans and significantly less capable of a rational response to problems?

Secondly, as far as I am aware, evidence for pressure-induced non-linear ecosystem shifts is very limited. This is true for a range of ecosystems. Linear ecosystem damage seems to be the norm. If so, this leaves more scope for learning about the costs of our damage to ecosystems and correcting any damage we have done.

Thirdly, ecosystem services are overwhelmingly a function of the relations within local ecosystems, rather than of global trends in biodiversity. Upon discovering Hawaii, the Polynesians eliminated so many species that global decadal extinction rates would have been exceptional. This has next to no bearing on ecosystem services outside Hawaii. Humanity is an intelligent species and will be able to see if other regions are suffering from biodiversity loss and make adjustments accordingly. Why would all regions be so stupid as to ignore lessons from elsewhere? Also, is biodiversity actually decreasing in the rich world? I know forest cover is increasing in many places. Population is set to decline in many rich countries in the near future, and environmental impact per person is declining on many metrics.

I also find it surprising that you cite the Kareiva and Carranza paper in support of your claims, for this paper in fact directly contradicts them:

"The interesting question is whether any of the planetary thresholds other than CO2 could also portend existential risks. Here the answer is not clear. One boundary often mentioned as a concern for the fate of global civilization is biodiversity (Ehrlich & Ehrlich, 2012), with the proposed safety threshold being a loss of greater than 0.001% per year (Rockström et al., 2009). There is little evidence that this particular 0.001% annual loss is a threshold—and it is hard to imagine any data that would allow one to identify where the threshold was (Brook, Ellis, Perring, Mackay, & Blomqvist, 2013; Lenton & Williams, 2013). A better question is whether one can imagine any scenario by which the loss of too many species leads to the collapse of societies and environmental disasters, even though one cannot know the absolute number of extinctions that would be required to create this dystopia.

"While there are data that relate local reductions in species richness to altered ecosystem function, these results do not point to substantial existential risks. The data are small-scale experiments in which plant productivity, or nutrient retention is reduced as species numbers decline locally (Vellend, 2017), or are local observations of increased variability in fisheries yield when stock diversity is lost (Schindler et al., 2010). Those are not existential risks. To make the link even more tenuous, there is little evidence that biodiversity is even declining at local scales (Vellend et al., 2013, Vellend et al., 2017). Total planetary biodiversity may be in decline, but local and regional biodiversity is often staying the same because species from elsewhere replace local losses, albeit homogenizing the world in the process. Although the majority of conservation scientists are likely to flinch at this conclusion, there is growing skepticism regarding the strength of evidence linking trends in biodiversity loss to an existential risk for humans (Maier, 2012; Vellend, 2014). Obviously if all biodiversity disappeared civilization would end—but no one is forecasting the loss of all species. It seems plausible that the loss of 90% of the world's species could also be apocalyptic, but no one is predicting that degree of biodiversity loss either. Tragic, but plausible is the possibility of our planet suffering a loss of as many as half of its species. If global biodiversity were halved, but at the same time locally the number of species stayed relatively stable, what would be the mechanism for an end-of-civilization or even end-of-human-prosperity scenario? Extinctions and biodiversity loss are ethical and spiritual losses, but perhaps not an existential risk."

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-18T20:06:53.486Z · score: 10 (9 votes) · EA · GW

Energy for Humanity is a great underfunded pro-nuclear NGO working in the EU. Clean Air Task Force and Third Way are also great.

I also think the current emphasis on solar and wind in some places could be a barrier to sensible low carbon policies in the long-term, because intermittent renewables don't combine well with nuclear. Pairing the two, as France bizarrely recently considered doing, just makes nuclear run below capacity when the sun is shining, which doesn't make economic sense.

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-18T20:02:30.666Z · score: 2 (1 votes) · EA · GW

I'll focus on point 2 because I think it is the most important. I don't see the argument for it being true that for the vast majority of people, working on climate change promises more leverage on the problem of nuclear war, than does working directly on nuclear war. Nuclear war is easier to make progress on, more neglected and more important than climate change.

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-18T19:56:42.089Z · score: 4 (3 votes) · EA · GW

Yes I think you are in fact right that plausible priors do seem to exclude ECS above 5 degrees.

You pick out a major problem in drawing conclusions about ECS: the IPCC does not explain how it arrives at its pdf of ECS, and the estimate seems to be produced somewhat subjectively from various current estimates from instrumental and paleoclimatic data, and from their own expert judgement as to what weight to give to different studies. I think this means that they give some weight to pdfs with a very fat tail, which seems to be wrong given their use of uniform priors. This might mean that their tail estimate is too high.

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-18T19:34:51.074Z · score: 44 (20 votes) · EA · GW

I agree that the environmental movement is extremely poor at optimisation. This being said, there are a number of very large philanthropists and charities who do take a sensible approach to climate change, so I don't think this is a case in which EAs could march in and totally change everything. Much of Climateworks' giving takes a broadly EA approach, and they oversee the giving of numerous multi-billion dollar foundations. Gates also does some sensible work on the energy innovation side. Nevertheless, most money in the space does seem to be spent very badly, e.g. on opposing nuclear power. This consideration might even make the environmental movement net negative wrt climate, though I haven't crunched any numbers on that.

I would also add that sensible EA answers in this space face substantial opposition from the environmental movement. I think a rational analysis argues in favour of advocating for nuclear and carbon capture, for energy innovation in general, and for financial incentives for preventing deforestation. All of these things are opposed quite strongly by different constituencies in the environmental movement. Maybe the one thing most people can agree on is carbon pricing, but that is hard to get through for other reasons.

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-17T21:12:33.367Z · score: 21 (9 votes) · EA · GW

On Bayesianism - this is an important point. The very heavy tailed estimates all use a "zero information" prior with an arbitrary cut-off at eg 10 degrees or 20 degrees. (I discuss this in my write-up). This is flawed and more plausible priors are available which thin out the tails a lot.

However, I don't think you need this to get to there being substantial tail risk. Eyeballing the ECS estimates that use plausible priors, there's still something like a 1-5% chance of ECS being >5 degrees, which means that from 1.5 doublings of GHG concentrations, which seems plausible, there's a 1-5% chance of ~7 degrees of warming.
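As a rough sketch of the arithmetic behind that last claim (illustrative numbers only, using the standard approximation that equilibrium warming scales linearly with the number of CO2-equivalent doublings):

```python
import math

# Back-of-envelope, not a climate model:
# equilibrium warming ≈ ECS (degrees per doubling) × number of doublings.
ecs_tail = 5.0     # degrees C per doubling; the ~1-5% tail value under plausible priors
doublings = 1.5    # plausible rise in GHG concentrations (2**1.5 ≈ 2.8x pre-industrial)

warming = ecs_tail * doublings
print(round(warming, 1))            # 7.5 — roughly the "~7 degrees" in the comment
print(round(2 ** doublings, 1))     # 2.8 — the implied concentration multiple
```

The point is just that a tail probability on ECS carries straight through to the same tail probability on peak warming, scaled by the assumed number of doublings.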

Comment by halstead on Does climate change deserve more attention within EA? · 2019-04-17T21:04:45.752Z · score: 54 (18 votes) · EA · GW

Thanks for this. It's useful for the community to think about this kind of thing and this is well-argued.


1. It's a good point that since the top AI fields seem oversubscribed, it might be worth some people moving into the next best causes. Another possibility is that they should wait until the number of organisations catches up with the number of people. It might even be that the most valuable option is having a reserve of a large number of people who could, with some probability, be a good fit for the highest-impact orgs, even though most of these people never end up working for high-impact orgs. This puts a new slant on the demandingness of EA: rather than making sacrifices by donating, EAs make sacrifices by being prepared to accept the substantial probability of themselves never having impact. This would be hard to take psychologically, but might be the right thing to do in a crowded talent space.

2. On indirect risks, another point I make in the FP report is that while climate change is an indirect stressor of other risks, this suggests to me that working on those terminal risks directly would be a better bet than working on climate change since climate change is such an indirect stressor, is very crowded and seems difficult to make progress on. What do you think of that argument?


3. I don't think it is right that problems with high tractability should be de-prioritised. I think what you mean is that we should focus on things that shift the long-term trajectory of humanity. But these could be highly tractable. e.g. the problem of not starting nuclear war was tractable for Vasili Arkhipov, but plausibly had large long-term effects. Having looked at it in some depth, climate change does look an intractable problem overall and this is indeed a reason not to work on it.

4. Another good point on how there could be increasing returns to scale in climate change, as we could affect the huge pool of funds going to the space through engagement.

5. Really, the ITN perhaps shouldn't be used when we have cost-effectiveness estimates. On the 80k rendering, ITN is literally a cost-effectiveness estimate. But we now have cost-effectiveness estimates of climate charities. If we can make plausible estimates of the impact of bio, AI and nuclear, then we should use those rather than appealing to the ITN. The same goes for the use of time as well as money.

6. It is premature to say that work on climate change is tractable. I think careful analysis is needed to figure out whether the things you list are indeed a good bet compared to other things that EAs could do.


7. Climate Action Tracker suggests that on current policy, we are in for 3.1 to 3.5C, which is different to the 'baseline' trajectory estimate that you give. I think the current policy trajectory is most relevant for that part of your argument. (But note that this is only by 2100.)

8. The impact of climate change on food production is in fact predicted to be fairly modest, as I discuss here. Yields might fall by 10-20%, but this will be in the context of rising productivity and improvement in the other factors that determine the supply of food.

9. The emphasis on water shortage throughout is a bit overblown. We don't need to ration water, we just need to price it properly (which is efficient rationing). If we did that, there would be no water problems today or in the future, anywhere (provided people had enough money).

Comment by halstead on The case for delaying solar geoengineering research · 2019-03-31T19:52:35.460Z · score: 2 (1 votes) · EA · GW

2. I don't think this is right, for reasons discussed in this Nature paper. Firstly, solar geoengineering could be used to slow the rate of warming even if it is deployed temporarily. You could deploy it over e.g. a fifty year period and thereby delay the point at which we reach peak warming, and then taper it out gradually. Secondly, as you say, an exception is if CO2 emissions stay above zero. Solar geoengineering could in principle buy us time to abate emissions and to take CO2 out of the atmosphere in which case it would not have to be deployed for the full lifetime of CO2 in the atmosphere. In this case, solar geo would slow the rate of warming and reduce peak warming.

Thirdly, I don't see why solar geoengineering would ever be stopped suddenly once we started. The reasons for this are discussed in the Parker and Irvine piece on solar geoengineering. All countries would have a reason to prevent it from stopping suddenly and would have the means to do so given how cheap it is. A catastrophe causing termination would have to be extraordinarily specific.

3. To clarify, is your point here that we should focus on mitigation because then we'll be left with some spare oil come a later catastrophe?

Comment by halstead on The case for delaying solar geoengineering research · 2019-03-27T18:54:08.815Z · score: 2 (1 votes) · EA · GW


I'm not completely sure I follow why your first paragraph is a critique. I don't expect governance to improve on its own. My claim is that we do not need 50 years of governance research to get governance to a sufficiently good level should we need to deploy solar geoengineering in the future. The hope is that we will be wise enough not to have to use it because we will start serious mitigation, and I'm worried that geoengineering research could be one of many factors that could derail those efforts.

It is true that developing geoengineering technology would create incentives to improve governance mechanisms for geoengineering. I'm not sure why that is a critique of my argument.

I agree that war is unlikely for the reasons you outline.

Comment by halstead on Apology · 2019-03-26T16:47:34.153Z · score: 10 (3 votes) · EA · GW

Was deleted for tone, no interesting content

Comment by halstead on Apology · 2019-03-25T22:42:46.279Z · score: 10 (4 votes) · EA · GW

Ok thanks, understood. I hope it wasn't grasping at straws, but maybe this debate has got too sidetracked and should draw to a close.

Comment by halstead on Apology · 2019-03-25T19:23:26.901Z · score: 11 (8 votes) · EA · GW

We were debating the claim "Hmm, it is not at all clear to me that the accusations that are being discussed here [the Brown accusations] are separate from the accusations that appear to have caused his apology." Julia Wise's comments has confirmed that the claims were separate. The term 'separate' here means 'different instance of sexual harassment'.

Comment by halstead on Apology · 2019-03-25T19:11:38.493Z · score: 10 (8 votes) · EA · GW

The question is about probabilities of guilt/innocence. If multiple people accuse you of sexual or non-sexual harassment over the course of at least 7 years in different communities, then you are either extremely unlucky or you have actually harassed people. He also admits guilt.