How well did EA-funded biorisk organisations do on Covid? 2021-06-02T17:25:41.175Z
Deference for Bayesians 2021-02-13T12:33:05.556Z
[Link post] Are we approaching the singularity? 2021-02-13T11:04:02.579Z
How modest should you be? 2020-12-28T17:47:10.799Z
Instructions on potential insomnia cure 2020-10-12T13:56:53.111Z
High stakes instrumentalism and billionaire philanthropy 2020-07-19T19:19:41.206Z
What is a good donor advised fund for small UK donors? 2020-04-29T09:56:50.097Z
How hot will it get? 2020-04-18T20:29:59.579Z
Pangea: The Worst of Times 2020-04-05T15:13:23.612Z
Covid-19 Response Fund 2020-03-31T17:22:05.999Z
Growth and the case against randomista development 2020-01-16T10:11:51.136Z
Is mindfulness good for you? 2019-12-29T20:01:28.762Z
The ITN framework, cost-effectiveness, and cause prioritisation 2019-10-06T05:26:24.879Z
What should Founders Pledge research? 2019-09-09T17:41:04.073Z
[Link] New Founders Pledge report on existential risk 2019-03-28T11:46:17.623Z
The case for delaying solar geoengineering research 2019-03-23T15:26:13.119Z
Insomnia: a promising cure 2018-11-16T18:33:28.060Z
Concerns with ACE research 2018-09-07T14:56:25.737Z
New research on effective climate charities 2018-07-11T13:51:23.354Z
The counterfactual impact of agents acting in concert 2018-05-27T10:54:03.677Z
Climate change, geoengineering, and existential risk 2018-03-20T10:48:01.316Z
Economics, prioritisation, and pro-rich bias   2018-01-02T22:33:36.355Z
We're hiring! Founders Pledge is seeking a new researcher 2017-12-18T12:30:02.429Z
Capitalism and Selfishness 2017-09-15T08:30:54.508Z
How should we assess very uncertain and non-testable stuff? 2017-08-17T13:24:44.537Z
Where should anti-paternalists donate? 2017-05-04T09:36:53.654Z
The asymmetry and the far future 2017-03-09T22:05:26.700Z


Comment by Halstead on Book recommendation -- The Citizen's Guide to Climate Success by Mark Jaccard · 2021-07-07T07:56:53.314Z · EA · GW

Yep - that's a key argument and I think he is right. Offsetting is likely harmful in my view.

Comment by Halstead on Book recommendation -- The Citizen's Guide to Climate Success by Mark Jaccard · 2021-07-06T09:44:23.122Z · EA · GW

One important point he argues is that there are serious political economy barriers to carbon pricing. Jaccard himself worked to set up carbon pricing in Canada, but is very sceptical that it is the best thing to advocate for given political economy constraints. I think EAs sometimes miss this point and advocate for carbon pricing as the first-best solution. Unfortunately, we are in the nth-best world.

Comment by Halstead on Book recommendation -- The Citizen's Guide to Climate Success by Mark Jaccard · 2021-07-03T07:52:59.835Z · EA · GW

Agree this is a great book!

Comment by Halstead on How large can the solar system's economy get? · 2021-07-01T10:49:13.406Z · EA · GW

It is clear that energy consumption cannot continue to grow exponentially for much more than 1,000 years. But it might be argued that we can continue to extract ever more economic value from less and less energy, especially with VR. This is discussed in the debate between Robin Hanson and Bryan Caplan, and by Toby Ord in the comments.

See the comment here by Max Daniel:

"(i) there are limits in how much value (whether in an economic or moral sense) we can produce per unit of available energy, and (ii) we will eventually only be able to expand the total amount of available energy subexponentially (there can only be so much stuff in a given volume of space, and the amount of available space is proportional to the speed of light cubed - polynomial rather than exponential growth)."



In 275, 345, and 400 years, [assuming current growth rates of global power demand] we demand all the sunlight hitting land and then the earth as a whole, assuming 20%, 100%, and 100% conversion efficiencies, respectively. In 1350 years, we use as much power as the sun generates. In 2450 years, we use as much as all hundred-billion stars in the Milky Way galaxy.

(Tom Murphy)
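Murphy-style timescales can be reproduced with a few lines of compound-growth arithmetic. The starting demand (~18 TW) and 2.3%/yr growth rate below are illustrative assumptions for this sketch, not figures taken from the quoted sources:

```python
import math

def years_until(target_tw, current_tw=18.0, annual_growth=0.023):
    """Years of continuous compound growth until power demand reaches target_tw terawatts."""
    return math.log(target_tw / current_tw) / annual_growth

SUNLIGHT_ON_EARTH_TW = 1.74e5        # total solar power intercepted by Earth (approx.)
SUN_OUTPUT_TW = 3.85e14              # total power output of the Sun (approx.)
MILKY_WAY_TW = SUN_OUTPUT_TW * 1e11  # ~100 billion Sun-like stars

print(round(years_until(SUNLIGHT_ON_EARTH_TW)))  # ~400 years
print(round(years_until(SUN_OUTPUT_TW)))         # ~1,300 years
print(round(years_until(MILKY_WAY_TW)))          # ~2,400 years
```

With these rough inputs the results land close to the figures in the quote, which is the point: at a constant few-percent growth rate, even astronomical energy budgets are exhausted within a few thousand years.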


Comment by Halstead on Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis · 2021-06-27T12:54:09.899Z · EA · GW

I agree that this is a problem and had previously raised the question in a post on the Forum (though it is my lowest-scoring post ever, so evidently lots of people disagree with my argument!).

This issue became especially clear in early attempts by economists to put a value on the life of people across countries. Since people in poor countries took on greater risk for less money, their lives were valued at a fraction of those in rich countries. 

Another example is tickets. Suppose that we are selling tickets to the final of Euro 2020 and that Warren Buffett buys all the tickets for the game because he likes to watch games in empty stadia. Economists often say that because willingness to pay tracks utility across persons and the tickets went to the highest bidder, this outcome enhances social welfare compared to a world in which the ticket prices were kept artificially low. But obviously the fact that Buffett is willing to pay so much for the tickets is more a reflection of his massive wealth and peculiar tastes than of the fact that his welfare would actually be enhanced more than everyone else's.

Economists then try to solve this by having independent fairness or equity constraints. But the market outcome is bad on utilitarian grounds. 

Comment by Halstead on Climate change questions for Johannes Ackva and John Halstead · 2021-06-06T16:37:50.698Z · EA · GW

Victor and Cullenward's Making Climate Policy Work is good.

On the science side, for an overview, I would recommend just reading the summary for policymakers or the technical summary of the IPCC 2013 Physical Science Basis report.

For long-termist/x-risk takes, the following are good:

King et al Climate Change a Risk Assessment

Hansen et al, Climate Sensitivity, sea level and atmospheric CO2

Clark et al, Consequences of twenty-first-century policy for multi-millennial climate and sea-level change

Comment by Halstead on Climate change questions for Johannes Ackva and John Halstead · 2021-06-06T16:32:09.389Z · EA · GW


I agree that climate change is not neglected, but I view that as a bit of a weak steer when deciding whether to work on it, for reasons I outline here. Neglectedness is one determinant of how cost-effective it is to work on a problem, but there are many others. Taking the example of AI safety: it is more neglected than climate change, but I have almost no idea how to make progress on this problem, whereas with climate change there is quite a clear path to making a difference. It also might be true that certain solutions within climate are more neglected than others, e.g. CCS and nuclear are neglected.

I think to make cause prioritisation decisions when the stakes are high, we actually need to sit down and figure out directly which cause is more cost-effective to work on. 

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-22T18:20:09.422Z · EA · GW

Hi Aaron, I appreciate this and understand the thought process behind the decision. I do generally agree that it is important to provide evidence for this kind of thing, but there were reasons not to do so in this case, which made it a bit unusual.

Comment by Halstead on Insomnia with an EA lens: Bigger than malaria? · 2021-05-19T17:43:56.910Z · EA · GW

I have written up the instructions for CBT-i here for those interested -

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-12T19:34:11.576Z · EA · GW

Simon Beard is providing the foreword for his forthcoming book, and Luke Kemp has provided a supporting quote for it.

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-12T14:46:52.879Z · EA · GW

I'm pretty surprised and disappointed by this warning. I made 3 claims about ways that Phil has interacted with me. 

  1. I didn't share the Facebook messages because I thought it would be a breach of privacy to share a private message thread without Phil's permission, and I don't want to talk to him, so I can't get his permission.
  2. I also don't especially want to link to the piece calling me a racist, which anyone familiar with Phil's output would already know about, in any case.
  3. There is a reason I didn't share the screenshot of the paedophilia/rape accusations, which is that I thought it would be totally unfair to the people accused. This is why I called them 'celebrities' rather vaguely.

As you say, I have shown all of these claims to be true in private in any case. 

This feels a lot like punishing someone for having the guts to call out a vindictive individual in the grip of a lifelong persecution complex. As illustrated by the upvotes on my comments, lots of people agree with me, but didn't want to say anything, for whatever reason. If you were going to offer any sanction for anyone, I would have thought it would be the people at CSER, such as Simon Beard and Luke Kemp, who have kept collaborating with him and endorsing his work for the last few years, despite knowing about the behaviour that you have just banned him for. 

Comment by Halstead on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-18T21:00:08.763Z · EA · GW

Echoing what Max says, I think this paper comes from the assumption that a lot of population ethics has gone down the wrong track of trying to craft theories, in a somewhat ad hoc manner, that avoid the repugnant conclusion. It is difficult to see how else these people could make this point: restating arguments that others have made before, in some cases several decades ago, would not be publishable because they are not novel. This strikes me as something of a (frustrated?) last resort to try to make the discipline acknowledge that there might be a problem in the way it has been going for thirty years.

I suppose one alternative would have been to publish this on a philosophy blog, but then it would necessarily have got less reach than getting it in a top journal. 

Although unusual in philosophy, the practice is widespread in science. Scientists often write short letters criticising published articles; these can be light on substantive argument but reiterate a view held among some prominent researchers.

Finally, I think it is useful to have more surveys of what different researchers in a field believe, and this is one such instance of that - it tells us that several of the world's best moral philosophers are willing to accept this thing that everyone else seems to think is insane. 

Comment by Halstead on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-31T12:41:47.031Z · EA · GW

I think a key point is that bioethics usually involves applying particular moral theories, which is not that interesting an exercise from a philosophical point of view. That's why the best philosophers are often drawn to higher-level theoretical questions, such as the truth or otherwise of consequentialism or rights-based theories, or whether and how we should respond to moral uncertainty. Consequently, the true ethics experts (if they really exist) are not likely to be studying bioethics. As they say in the podcast, it is also weird that bioethics has this special status as a field with a distinct set of experts who get to veto public policy. In most areas of public policy, the economists get to decide what happens (subject to political constraints), and the outcomes are usually much better!

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T08:57:44.202Z · EA · GW

If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on Facebook and tagging me in the comments, and then calling me and others nazis. Why do you and your colleagues continue to extensively collaborate with him?

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him. 

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-08T18:39:14.157Z · EA · GW

It is very generous to characterise Torres' post as insightful and thought-provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation, and one that he very obviously throws around due to his own personal vendettas against certain people; e.g. despite many of his former colleagues at CSER also being long-termists, he doesn't call them nazis, because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-03-02T10:26:45.572Z · EA · GW

On species extinctions, you cite the Thomas et al estimate that climate change would cause "15-37% of all species to become ‘committed to extinction’ by mid-century". This paper has been subject to an avalanche of criticism. For example, there is a good review here, and strong counter-evidence discussed at length here.  I think it would be useful to the reader to provide this context. 

Also, this is just one study (also the most pessimistic), and I think one would get a better view by providing an overview of the literature. The IPBES report that you also cite says: "For instance, a synthesis of many studies estimates that the fraction of species at risk of extinction due to climate change is 5 per cent at 2°C warming, rising to 16 per cent at 4.3°C warming." 4.3 degrees is the median outcome at 2100 on the high emissions pathway. Being committed to extinction is also very different to being at risk of extinction. This suggests that the risk is a lot lower than the Thomas et al estimate suggests.

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-03-01T14:25:18.270Z · EA · GW

The factors you mention therefore seem to increase vulnerability, but only in a weak sense:

  • Some of the factors don't seem relevant at all (phosphorus depletion)
  • The food system will be much less vulnerable in the future than it is today, despite these factors
  • Some other event would have to do 99% of the work in bringing about a global food catastrophe

Comment by Halstead on EA Updates for March 2021 · 2021-02-26T19:06:00.755Z · EA · GW

Thanks for taking the time to do this!

Comment by Halstead on Deference for Bayesians · 2021-02-20T14:52:56.310Z · EA · GW

I think I would find it very hard to update away from the view that the minimum wage reduces demand for labour. Maybe if there were an extremely well-done RCT showing no effect from a large minimum wage increase of $10, I would update. Incidentally, here is discussion of an RCT on the minimum wage which illustrates where the observational studies might be going wrong. The RCT shows that employers reduced hours worked, which wouldn't show up in observational studies, which mainly study disemployment effects.

I am very conscious of the fact that almost everyone I have ever tried to convince of this view on the minimum wage remains wholly unmoved. I should make it clear that I am in favour of redistribution through tax credits, subsidies for childcare and that kind of thing. I think the minimum wage is not a smart way to help lower income people. 

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-02-20T13:28:05.258Z · EA · GW

I would agree with that - climate change seems like it could have very bad humanitarian costs for poor agrarian societies that look set to experience low economic growth this century. I do, though, find it very difficult to see how it could lead to a collapse of the global food system.

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-02-19T20:41:53.016Z · EA · GW

Thanks for sharing this. 

Regarding food, you suggest that due to climate change, soil erosion, water scarcity, and phosphorus depletion, there are risks to the global food supply that could constitute a global catastrophe. What do you think is the probability of this occurring in the next 30 or 80 years?

I am sceptical of this. Crop yields for almost all crops have increased by 200% since 1980, despite warming of about 0.8 degrees since then. The crop effects of climate change you outline, which are typically on the order of up to 20% losses for major food crops at 5 degrees, should be set in this context. Various studies suggest that yields will increase by 25% to 150% by 2050, e.g. UN FAO; Wiebe. The yield damage estimates you cite seem like they will be outpaced by technological progress unless there is a massive trend break in agricultural productivity.

On phosphorus, according to a report by the IFDC funded by USAID, global phosphorus reserves (those which can currently be economically extracted, so this is a dynamic figure) will last for 300-400 years. "Based on the data gathered, collated and analyzed for this report, there is no indication that a 'peak phosphorus' event will occur in 20-25 years. IFDC estimates of world phosphate rock reserves and resources indicate that phosphate rock of suitable quality to produce phosphoric acid will be available far into the future. Based on the data reviewed, and assuming current rates of production, phosphate rock concentrate reserves to produce fertilizer will be available for the next 300-400 years." The US Geological Survey says "World resources of phosphate rock are more than 300 billion tons. There are no imminent shortages of phosphate rock."

On water scarcity, agriculture is about 4% of global GDP and declining. If water became enough of a constraint on agriculture to threaten a global catastrophe, why would we not throw money at the problem, for example by spending more on water supply and desalination, or by ending subsidies for agricultural uses of water? Have any middle- or high-income countries ever failed to produce more than enough food because of lack of water?

On soil erosion, the UN report on soil says: "A synthesis of meta-analyses on the soil erosion-productivity relationship suggests that a global median loss of 0.3 percent of annual crop yield due to erosion occurs. If this rate of loss continues unchanged into the future, a total reduction of 10 percent of potential annual yield by 2050 would occur." Again, this is in the context of otherwise increasing yields. Soil erosion rates are also declining in various regions.

Comment by Halstead on Deference for Bayesians · 2021-02-17T15:12:18.500Z · EA · GW

As I mention in the post, it's not just theory and common sense, but also evidence from other domains. If the demand curve for low-skilled labour is vertical, then it is all but impossible that a massive influx of Cuban workers during the Mariel boatlift had close to zero effect on native US wages. Nevertheless, that is what the evidence suggests.

I am happy to be told of other theoretical explanations of why minimum wages don't reduce demand for labour. The ones I am aware of in the literature are a monopsonistic buyer of labour (clearly not the case), or one could give up on the view that firms are systematically profit-seeking (which also doesn't seem true).

The claims that are wrong are the ones I highlight in the post, viz. that the empirical evidence is all that matters when forming beliefs about the minimum wage. Most empirical research isn't that good and cannot distinguish signal from noise when there are small treatment effects; e.g. the Card and Krueger research that started the whole debate off got its data by telephoning fast food restaurants.

Comment by Halstead on Deference for Bayesians · 2021-02-17T14:57:31.970Z · EA · GW

2. I would disagree on economics. I view the turn of economics towards high causal identification and complete neglect of theory as a major error, for reasons I touch on here. The discipline has moved from investigating important things to trivial things with high causal identification. The trend towards empirical behavioural economics is also, in my view, a fad with almost no practical usefulness. (To reiterate my point on the minimum wage: the negative findings are almost certainly false; they are what you would expect to find for a small treatment effect and noisy data in observational studies. Even before reading the literature, given the belief that the effect of a minimum wage increase of $3 is small but negative, I would still expect to find a lot of studies finding no effect, because empirical research is not very good, so one should not update much on those negative findings. If you think the demand curve for low-skilled labour is vertical, then the phenomenon of ~0 effect on native US wages after a massive influx of low-skilled labour from Cuba is inexplicable. And the literature is very mixed - it's not like all the studies find no effect; that is a misconception.)

3. I agree that focusing on base rates is important, but that doesn't seem to get at the myopic empiricism issue. For example, the base rate of vaccine efficacy dropping off a cliff after 22 days is very low, but that was not established in the initial AstraZeneca study. To form that judgement, one needs evidence from other domains, which myopic empiricists ignore.

4. I'm not sure where we disagree there. I don't think EAs should stay rooted in empiricism if that means 'form judgements only on the basis of the median published scientific study', which is the view I criticise. I'm not saying we should become less empirical - I think we should take account of theory but also empirical evidence from other domains, which as I discuss many other experts refuse to do in some cases. 

I'm not saying that we should be largely cut off from observation and experiment and should just deduce from theory. I'm saying that the myopic empiricist approach is not the right one. 

Comment by Halstead on Deference for Bayesians · 2021-02-17T08:44:14.686Z · EA · GW

This is maybe getting too bogged down in the object-level. The general point is that if you have a confident prior, you are not going to update on uncertain observational evidence very much. My argument in the main post is that ignoring your prior entirely is clearly not correct and that is driving a lot of the mistaken opinions I outline.

Tangentially, I stand by my position on the object-level - I actually think that 98% is too low! For any randomly selected good I can think of, I would expect a price floor to reduce demand for it in >99% of cases. Common sense aside, the only theoretical reason this might not be true is if the market for labour is monopsonistic, and that is just obviously not the case. There is also evidence from the immigration literature suggesting that native wages are barely affected by a massive influx of low-skilled labour, which implies a near-horizontal demand curve. There is also the point that if you are slightly Keynesian, you think that involuntary unemployment is caused by the failure of wages to adjust downward; legally forbidding them from doing this must cause unemployment.

Comment by Halstead on Deference for Bayesians · 2021-02-16T14:11:01.524Z · EA · GW

Hello, my argument was that there are certain groups of experts you can ignore or put less weight on because they have the wrong epistemology. I agree that the median expert might have got some of these cases right. (I'm not sure that's true in the case of nutrition, however.)

The point in all these cases re priors is that one should have a very strong prior, which will not be shifted much by flawed empirical research. One should have a strong prior that the efficacy of the vaccine won't drop off massively for the over-65s even before this is studied.

One can see the priors vs evidence case for the minimum wage more formally using Bayes' theorem. Suppose my prior that minimum wages reduce demand for labour is 98%, which is reasonable. I then learn that one observational study has found that they have no effect on demand for labour. Given the flaws in empirical research, let's say there is a 30% chance of a study finding no effect conditional on there being an effect. We might then put a 70% chance on a study finding no effect conditional on there being no effect.

Then my posterior is (0.3 × 0.98)/(0.3 × 0.98 + 0.7 × 0.02) ≈ 95.5%.

So I am still very sure that minimum wages have an effect even if there is one study showing the contrary. FWIW, my reading of the evidence is that most studies do find an effect on demand for labour, so after assimilating it all, one would probably end up where one's prior was. This is why the value of information of research into the minimum wage is so low.
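The update described above can be spelled out in a few lines, using the same numbers as in the text:

```python
# Posterior that minimum wages reduce demand for labour, after seeing one null study.
prior = 0.98                   # P(effect) before seeing the study
p_null_given_effect = 0.30     # P(study finds nothing | real effect), given noisy research
p_null_given_no_effect = 0.70  # P(study finds nothing | no effect)

# Bayes' theorem: P(effect | null result)
posterior = (p_null_given_effect * prior) / (
    p_null_given_effect * prior + p_null_given_no_effect * (1 - prior)
)
print(round(posterior, 3))  # ≈ 0.955
```

The posterior barely moves from the 98% prior because, with research this noisy, a null result is only a little more than twice as likely under "no effect" as under "effect".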

On drinking in pregnancy, I don't think this is driven by people's view of acceptable risk, but rather by a myopic empiricist view of the world. Oster's book is the go-to for data-driven parents, and she claims that small amounts of alcohol have no effect, not that alcohol has a small effect but is worth the risk. (Incidentally, the latter claim is also clearly false - it obviously isn't worth the risk.)

On your final point, I don't think one can or should aim to give an account of whether relying on theory or common sense is always the right thing to do. I have highlighted some examples where failure to rely on theory and evidence from other domains leads people astray. Epistemology is complicated and this insight may of course not be true in all domains. For a comprehensive account of how to approach cases such as these, one cannot say much more than that the true theory of epistemology is Bayesianism and to apply that properly you need to be apprised of all of the relevant information in different fields.

Comment by Halstead on [Link post] Are we approaching the singularity? · 2021-02-15T10:57:10.695Z · EA · GW

As a matter of interest, where do papers such as this usually get discussed? Is it in personal conversation or in some particular online location?

Comment by Halstead on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T12:57:01.338Z · EA · GW

Thanks for writing this. I disagree that EAs should prioritise this cause area and I disagree with the analysis of the cause-specific arguments. 

Firstly, I think it is good for happy people to come into existence, but this is ignored here. 

On climate change, I generally think Drawdown is not a reliable source. The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which currently accounts for a tiny fraction of emissions. Working on low-carbon technology, by contrast, can affect global emissions, and policy change in the US or EU can affect a much larger fraction of emissions.

I'm also strongly sceptical of the quoted cost of preventing a pregnancy: $10 seems far too low. This seems similar to the kind of mainstream charity cost-to-save-a-life estimate that EAs have criticised for a while.

Comment by Halstead on Deference for Bayesians · 2021-02-14T11:46:41.730Z · EA · GW

Thanks for sharing that piece, it's a great counterpoint. I have a few thoughts in response. 

Strevens argues that myopic empiricism drives people to do useful experiments which they perhaps might not have done if they stuck to theory. This seems to have been true in the case of physics. However, there is also a mountain of cases of wasted research effort, some of them discussed in my post. The value of information from e.g. most studies on the minimum wage and observational nutritional epidemiology is minuscule in my opinion. Indeed, it's plausible that the majority of social science research is wasted money, per the claims of the meta-science movement.

I agree that it's not totally clear whether it would be positive if, in general, people tried to put more weight on theory and common sense. But some reliance on theory and common sense is just unavoidable. So this is a question of how much reliance we put on them, not whether to rely on them at all. For example, to make judgements about whether we should act on the evidence on masks, we need to make judgements about the external validity of studies, which necessarily involves making some theoretical judgements about the mechanism by which masks work, which the empirical studies confirm. The true logical extension of myopic empiricism is the inability to infer anything from any study: "We showed that one set of masks worked in a series of studies in the US in 2020, but we don't have a study of whether this other set of masks works in Manchester in 2021, so we don't know whether they work."

I tend to think it would be positive if scientists gave up on myopic empiricism and shifted to being more explicitly Bayesian. 

Comment by Halstead on Deference for Bayesians · 2021-02-14T11:24:26.712Z · EA · GW

Hi,  thanks for this. 

I'm not making a claim that rationalists are more accurate than the standard experts. I actually don't think that is true, e.g. rationalists think you obviously should one-box in Newcomb's problem (which I think is wrong, as do most decision theorists). The comments on Greg Lewis' post discuss the track record of the rationalists, and I largely agree with the pessimistic view there. I also largely agree with the direction and spirit of Greg's main post.

My post is about what someone who accepts  the tenets of Bayesianism would do given the beliefs of experts. In the examples I mention, some experts have gone wrong by not taking account of their prior when forming beliefs (though there are other ways to fall short of the Bayesian standard, such as not updating properly given a prior). I think this flaw has been extremely socially damaging during the pandemic.

 I don't think this implies anything about deferring to the views of actual rationalists, which would require a sober assessment of their track record. 

Comment by Halstead on [Link post] Are we approaching the singularity? · 2021-02-14T11:07:08.907Z · EA · GW

Thanks for outlining the tests.

I'm not really sure what he thinks the probability of the singularity before 2100 is. My reading was that, given his tests, he probably doesn't think the singularity is (e.g.) >10% likely before 2100. 2 of the 7 tests suggest a singularity after 100 years, and 5 of them fail. It might be worth someone asking him for his view on that.

Comment by Halstead on Promoting EA to billionaires? · 2021-01-24T21:01:51.797Z · EA · GW

There's also Effective Giving Netherlands

Comment by Halstead on Why I'm concerned about Giving Green · 2021-01-24T21:00:04.500Z · EA · GW

For what it's worth, as someone who has thought about climate policy and philanthropy on and off for the last 3 years, I would also agree with this critique, and for the reasons Johannes (jackva) gives, I don't think the responses succeed. It's good to see these issues being discussed openly and constructively by both sides. 

Comment by Halstead on The Folly of "EAs Should" · 2021-01-11T12:01:41.009Z · EA · GW

I don't think there's any need to apologise! I was trying to make the case that I don't think you showed how we could distinguish reasonable and unreasonable uses of normative claims.

Comment by Halstead on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-09T20:31:41.215Z · EA · GW

What do you think the next 4 years have in store for the US, especially concerning the probability of a major change in institutions and order there?

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T17:39:37.286Z · EA · GW

Hi, thanks for the reply!

In that case, the argument has a bit of a motte-and-bailey feel. In various places you make claims such as:

  • "The Folly of 'EAs Should'"
  • "One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid"; 
  • "So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful"; 
  • "Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints."
  • "and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions"

These seem to be claims to the effect that (1) we should (almost) never make normative claims, and (2) we should be strongly sceptical about knowing that one path is better from an EA point of view than another. But I don't see a defence of either of these claims in the piece. For example, I don't see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good.

If the claim is the weaker one that EAs can sometimes be overconfident in their view of the best way forward, or use language that can be off-putting, then that may be right. But that seems different to the "never say that some choices EAs make are better than others" claim, which is suggested elsewhere in the piece.

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T11:30:29.239Z · EA · GW

I think this is consistent with Will's definition because you can view the 'should' claims as what we should do conditional on us accepting the goal of doing the most good using reason and evidence. 

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T11:22:13.654Z · EA · GW

Thanks for taking the time to put this together. 

At the start, you seem to suggest that we should not use 'should' because of moral uncertainty, and then you gloss this as a claim about cooperation. Moral uncertainty is intrapersonal, whereas moral cooperation is interpersonal. It might be the case that my credence is split between Theory 1 and Theory 2, but that everyone else has the exact same credal split. In this case, there is no need for interpersonal cooperation between people with conflicting moral beliefs because there is unanimity. Rather, the puzzle I face is how to act under moral uncertainty, which is a very different point.

In general, I think you have raised some sensible considerations about whether and how we might go about making EA more popular, such as around framing. But I think the idea that we should avoid talking about what EAs should do is untenable. Even while writing this comment, I have found it impossible not to say what EAs should do. Indeed, at several points in your post you make normative claims about what EA should do:

  • "So I think we should discuss why "Effective Altruism" implying that there are specific and clear preferable options for "Effective Altruists" is often harmful"
  • "Specifically, we should be wary of making the project exclusive rather than inclusive."
  • In the section on EA beyond small and weird, your argument is that maybe EA should be big and weird.
  • In the section on fragmentation, if I have interpreted you correctly, you are saying some people should not be overconfident about their cause commitments given peer disagreement.
  • In the section on human variety, you say that EAs shouldn't have narrow career paths.

Without making some normative claims about what EAs should and should not do, I don't see how EA could remain a distinctive movement. I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or private school, and that is what makes EA distinctive. Moreover, criticising the cause choices of EA actors just seems fundamental to the project. If our aim is to do the most good, then we should criticise approaches that seem unpromising.

As an example, Hauke and I wrote a piece criticising GiveWell's reliance on RCTs. I took this to be an argument about what GiveWell or other EA research orgs should do with their staff time. How would you propose reframing this?

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:03:32.642Z · EA · GW

Hi. The A population and the Z population are both composed of merely possible future people, so person-affecting intuitions can't ground the repugnance. Some impartialist theories (e.g. critical level utilitarianism) are explicitly designed to avoid the repugnant conclusion.

The case is analogous to the debate in aggregation about whether one should cure a billion headaches or save someone's life. 

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T17:15:10.677Z · EA · GW

Second comment, on your critique of Meacham...

As a (swivel-eyed) totalist, I'm loath to stick up for a person-affecting view, but I don't find your 'extremely radical implications' criticism of the view compelling, and I think it is an example of an unpromising way of approaching moral reasoning in general. The approach I am thinking of here is one that selects theories by meeting intuitive constraints rather than by looking at the deeper rationales for the theories.

I think a good response for Meacham would be that if you find the rationale for his theory compelling, then it is simply correct that it would be better to stop everyone existing. Similarly, totalism holds that it would be good to make everyone extinct if there is net suffering over pleasure (including among wild animals). Many might also find this counter-intuitive. But if you actually believe the deeper theoretical arguments for totalism, then this is just the correct answer. 

I agree that Meacham's view on extinction is wrong, but that is because of the deeper theoretical reasons - I think adding happy people to the world makes that world better, and I don't see an argument against that in the paper. 

The Impossibility Theorems show formally that we cannot have a theory that satisfies people's intuitions about cases. So, we should not use isolated case intuitions to select theories. We should instead focus on deeper rationales for theories. 

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T16:53:27.689Z · EA · GW

Thanks a lot for taking the time to do this Arden, I found it useful. I have a couple of comments.

Firstly, on the repugnant conclusion. I have long found the dominant dialectic in population ethics a bit strange. We (1) have this debate about whether merely possible future people are worthy of our ethical consideration, and then (2) people start talking about a conclusion that they find repugnant because of aggregation of low quality lives. The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future; it stems rather from the way totalism aggregates low quality lives. This repugnance is irrelevant to questions of population ethics. It's a bit like if we were talking about the totalist view of population ethics, and then people started talking about the experience machine or other criticisms of hedonism: this may be a valid criticism of totalism, but it is beside the point - which is whether merely possible future people matter.

Related to this:

(1) There are perfect current generation analogues of the repugnant conclusion. Imagine you could give a medicine that provides a low quality life to billions of currently existing people, or give a different medicine to a much smaller number of people, granting them brilliant lives. The literature on aggregation also discusses the 'headaches vs death' case, which seems exactly analogous.

(2) For this reason, we shouldn't expect person-affecting views to avoid the repugnant conclusion. For one thing, some impartialist views, like critical level utilitarianism, avoid the repugnant conclusion. For another thing, the A population and the Z population are merely possible future people, so most person-affecting theories will say that they are incomparable.

Meacham's view avoids this with its saturating relation in which possible future people are assigned counterparts. But (1) there are current generation analogues to the RC as discussed above, so this doesn't actually solve the (debatable) puzzle of the RC. 

(2) Meacham's view would imply that if the people in the much larger population had on average lives only slightly worse than people in the small population (A), then the smaller population would still be better. Thus, Meacham's view solves the repugnant conclusion but only by discounting aggregation of high quality lives, in some circumstances. This is not the solution to the repugnant conclusion that people wanted.

Comment by Halstead on How modest should you be? · 2020-12-31T17:20:37.846Z · EA · GW

I agree that lots of these considerations are important. On 2) especially, I agree that being epistemically modest doesn't make things easy because choosing the right experts is a non-trivial task. One example of this is using AI researchers as the correct expert group on AGI timelines, which I have myself done in the past. AI researchers have shown themselves to be good at producing AI research, not at forecasting long-term AI trends, so it's really unclear that this is the right way to be modest in this case. 

On 4 also - I agree. I think coming to a sophisticated view will often involve deferring to some experts on specific sub-questions using different groups of experts. Like maybe you defer to climate science on what will happen to the climate, philosophers on how to think about future costs, economists on the best way forward, etc. Identifying the correct expert groups is not always straightforward. 

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T20:40:59.001Z · EA · GW

The benefits of GiveWell's charities are worked out as health or economic benefits which are realised in the future. e.g. AMF is meant to be good because it allows people who would have otherwise died to live for a few more years. If you are agnostic about whether everyone will go extinct tomorrow, then you must be agnostic about whether people will actually get these extra years of life. 

Comment by Halstead on How modest should you be? · 2020-12-28T19:20:39.063Z · EA · GW

Hi Michael, I'm blushing!

Yes I think that would be a reasonable view to believe, but my point here is just about what role the object-level reasons should play in our epistemics. I do think something like a middle way is the right path, though I don't have a fully worked out theory. There is a good discussion of the topic here by Michael Huemer. I should note that I am generally very pro at least figuring out what the experts think about a topic in order to form reasonable views - the views of others should weigh heavily in our reasoning, especially given the widespread tendency to overconfidence. The idea of just ignoring all the object-level reasons seems wrong to me, however.

On my definition, continental philosophy is a form of philosophy that puts little to no value on clarity in writing. I think this is because the work of continental philosophers lacks substantive merit - when you have nothing to say, a good strategy is to be unclear; when you have no cards, all you can do is bluff. This leads to passages such as this from Hegel:

"This is a light that breaks forth on spiritual substance, and shows absolute content and absolute form to be identical; - substance is in itself identical with knowledge. Self-consciousness thus, in the third place, recognizes its positive relation as its negative, and its negative as its positive, - or, in other words, recognizes these opposite activities as the same i.e. it recognizes pure Thought or Being as self-identity, and this again as separation. This is intellectual perception; but it is requisite in order that it should be in truth intellectual, that it should not be that merely immediate perception of the eternal and the divine which we hear of, but should be absolute knowledge. This intuitive perception which does not recognize itself is taken as starting-point as if it were absolutely presupposed; it has in itself intuitive perception only as immediate knowledge, and what it perceives it does not really know, - for, taken at its best, it consists of beautiful thoughts, but not knowledge."

Or this from Foucault:

"An intrinsic archaeological contradiction is not a fact, purely and simply, that it is enough to state as a principle or explain as an effect. It is a complex phenomenon that is distributed over different levels of the discursive formation. Thus, for systematic Natural History and methodical Natural History, which were in constant opposition for a good part of the eighteenth century, one can recognize: an inadequation of the objects (in the one case one describes the general appearance of the plant; in the other certain predetermined variables; in the one case, one describes the totality of the plant, or at least its most important parts, in the other one describes a number of elements chosen arbitrarily for their taxonomic convenience; sometimes one takes account of the plant's different states of growth and maturity, at others one confines one's attention to a single moment, a stage of optimum visibility); a divergence of enunciative modalities (in the case of the systematic analysis of plants, one applies a rigorous perceptual and linguistic code, and in accordance with a constant scale; for methodical description, the codes are relatively free, and the scales of mapping may oscillate); an incompatibility of concepts (in the 'systems', the concept of generic character is an arbitrary, though misleading mark to designate the genera; in the methods this same concept must include the real definition of the genus); lastly, an exclusion of theoretical options (systematic taxonomy makes 'fixism' possible, even if it is rectified by the idea of a continuous creation in time, gradually unfolding the elements of the tables, or by the idea of natural catastrophes having disturbed by our present gaze the linear order of natural proximities, but excludes the possibility of a transformation that the method accepts without absolutely implying it)."

A central confusion for continental philosophers is acceptance of the 'worst argument in the world', which is that "We can know things only

  • as they are related to us
  • under our forms of perception and understanding
  • insofar as they fall under our conceptual schemes
  • from our cultural/economic perspective
  • insofar as they are formulated in language.

So, we cannot know things as they are in themselves." This is a common argument at the basis of relativism of different kinds.

I think this is an interesting test case for epistemic modesty because from the outside, these people look a lot like experts. It is only by understanding some philosophy that you could reasonably discount their epistemic virtue. 

Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-28T16:15:37.508Z · EA · GW

My thought would be that getting the level of international coordination required would be extremely hard. (I am speaking from a position of ignorance here.)

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T22:23:58.904Z · EA · GW

Another way to look at this. What do you think is the probability that everyone will go extinct tomorrow? If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff.

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:54:12.817Z · EA · GW

Yes thanks my mistake - edited above

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:26:27.690Z · EA · GW

If you refuse to claim that the chance of nuclear war up to 2100 is greater than 0.000000000001%, then I don't see how you could make a good case to work on it over some other possible intuitively trivial action, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:20:46.257Z · EA · GW

Do you for example think there is a more than 50% chance that it is greater than 10 billion?

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:19:03.123Z · EA · GW

You say that "there are good arguments for working on the threat of nuclear war". As I understand your argument, you also say we cannot rationally distinguish between the claim "the chance of nuclear war in the next 100 years is 0.00000001%" and the claim "the chance of nuclear war in the next 100 years is 1%". If you can't rationally put probabilities on the risk of nuclear war, why would you work on it?

Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-27T20:06:55.343Z · EA · GW

I would like to see these sorts of bio-catastrophes discussed in more detail. On my naive understanding, the threat of engineered pandemics seems likely to usher in an age of disruption and surveillance and to completely undermine current liberal democratic norms.