Deference for Bayesians 2021-02-13T12:33:05.556Z
[Link post] Are we approaching the singularity? 2021-02-13T11:04:02.579Z
How modest should you be? 2020-12-28T17:47:10.799Z
Instructions on potential insomnia cure 2020-10-12T13:56:53.111Z
High stakes instrumentalism and billionaire philanthropy 2020-07-19T19:19:41.206Z
What is a good donor advised fund for small UK donors? 2020-04-29T09:56:50.097Z
How hot will it get? 2020-04-18T20:29:59.579Z
Pangea: The Worst of Times 2020-04-05T15:13:23.612Z
Covid-19 Response Fund 2020-03-31T17:22:05.999Z
Growth and the case against randomista development 2020-01-16T10:11:51.136Z
Is mindfulness good for you? 2019-12-29T20:01:28.762Z
The ITN framework, cost-effectiveness, and cause prioritisation 2019-10-06T05:26:24.879Z
What should Founders Pledge research? 2019-09-09T17:41:04.073Z
[Link] New Founders Pledge report on existential risk 2019-03-28T11:46:17.623Z
The case for delaying solar geoengineering research 2019-03-23T15:26:13.119Z
Insomnia: a promising cure 2018-11-16T18:33:28.060Z
Concerns with ACE research 2018-09-07T14:56:25.737Z
New research on effective climate charities 2018-07-11T13:51:23.354Z
The counterfactual impact of agents acting in concert 2018-05-27T10:54:03.677Z
Climate change, geoengineering, and existential risk 2018-03-20T10:48:01.316Z
Economics, prioritisation, and pro-rich bias   2018-01-02T22:33:36.355Z
We're hiring! Founders Pledge is seeking a new researcher 2017-12-18T12:30:02.429Z
Capitalism and Selfishness 2017-09-15T08:30:54.508Z
How should we assess very uncertain and non-testable stuff? 2017-08-17T13:24:44.537Z
Where should anti-paternalists donate? 2017-05-04T09:36:53.654Z
The asymmetry and the far future 2017-03-09T22:05:26.700Z


Comment by Halstead on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-18T21:00:08.763Z · EA · GW

Echoing what Max says, I think this paper comes from the assumption that a lot of population ethics has gone down the wrong track of trying to craft theories, in a somewhat ad hoc manner, that avoid the repugnant conclusion. It is difficult to think of how else these people could try to make this point, given that making the same points that others have made before, in some cases several decades ago, would not be publishable because they are not novel. This strikes me as something of a (frustrated?) last resort to try to make the discipline acknowledge that there might be a problem in the way it has been going for thirty years. 

I suppose one alternative would have been to publish this on a philosophy blog, but then it would necessarily have got less reach than getting it in a top journal. 

Although unusual in philosophy, the practice is widespread in science: scientists often write short letters criticising published articles, letters that are light on substantive argument but reiterate a view held among some prominent researchers. 

Finally, I think it is useful to have more surveys of what different researchers in a field believe, and this is one such instance of that - it tells us that several of the world's best moral philosophers are willing to accept this thing that everyone else seems to think is insane. 

Comment by Halstead on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-31T12:41:47.031Z · EA · GW

I think a key point is that bioethics usually involves applying particular moral theories, which is not that interesting an exercise from a philosophical point of view. That's why the best philosophers are often drawn to higher level theoretical questions, such as the truth or otherwise of consequentialism or rights-based theories, or whether and how we should respond to moral uncertainty. Consequently, the true ethics experts (if they really exist) are not likely to be studying bioethics. As they say in the podcast, it is also weird that bioethics has this special status as a field with a distinct set of experts who get to veto public policy. In most areas of public policy, the economists get to decide what happens (subject to political constraints), and the outcomes are usually much better!

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T08:57:44.202Z · EA · GW

If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on facebook and tagging me in the comments, and then calling me and others nazis. Why do  you and your colleagues continue to extensively collaborate with him? 

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him. 

Comment by Halstead on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-08T18:39:14.157Z · EA · GW

It is very generous to characterise Torres' post as insightful and thought-provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation, and one that he very obviously throws around due to his own personal vendettas against certain people, e.g. despite many of his former colleagues at CSER also being long-termists, he doesn't call them nazis because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.  

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-03-02T10:26:45.572Z · EA · GW

On species extinctions, you cite the Thomas et al estimate that climate change would cause "15-37% of all species to become ‘committed to extinction’ by mid-century". This paper has been subject to an avalanche of criticism. For example, there is a good review here, and strong counter-evidence discussed at length here.  I think it would be useful to the reader to provide this context. 

Also, this is just one study (also the most pessimistic), and I think one would get a better view by providing an overview of the literature. The IPBES report that you also cite says "For instance, a synthesis of many studies estimates that the fraction of species at risk of extinction due to climate change is 5 per cent at 2°C warming, rising to 16 per cent at 4.3°C warming." 4.3 degrees is the median outcome at 2100 on the high emissions pathway. Being committed to extinction is also very different to being at risk of extinction. This suggests that the risk is a lot lower than the Thomas et al estimate suggests. 

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-03-01T14:25:18.270Z · EA · GW

The factors you mention therefore seem to increase vulnerability, but merely in the following sense:

  • Some of the factors don't seem relevant at all (phosphorous depletion)
  • The food system will be much less vulnerable in the future vs today despite these factors.
  • Some other event would have to do 99% of the work in bringing about a global food catastrophe

Comment by Halstead on EA Updates for March 2021 · 2021-02-26T19:06:00.755Z · EA · GW

thanks for taking the time to do this!

Comment by Halstead on Deference for Bayesians · 2021-02-20T14:52:56.310Z · EA · GW

I think I would find it very hard to update away from the view that the minimum wage reduces demand for labour. Maybe if there were an extremely well done RCT showing no effect from a large minimum wage increase of $10, I would update. Incidentally, here is discussion of an RCT on the minimum wage which illustrates where the observational studies might be going wrong. The RCT shows that employers reduced hours worked, which wouldn't show up in observational studies, which mainly study disemployment effects.

I am very conscious of the fact that almost everyone I have ever tried to convince of this view on the minimum wage remains wholly unmoved. I should make it clear that I am in favour of redistribution through tax credits, subsidies for childcare and that kind of thing. I think the minimum wage is not a smart way to help lower income people. 

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-02-20T13:28:05.258Z · EA · GW

I would agree with that - climate change seems like it could have very bad humanitarian costs for poor agrarian societies that look set to experience low economic growth this century. I do, though, find it very difficult to see how it could lead to a collapse of the global food system.

Comment by Halstead on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-02-19T20:41:53.016Z · EA · GW

Thanks for sharing this. 

Regarding food, you suggest that due to climate change, soil erosion, water scarcity, and phosphorus depletion, there are risks to the global food supply that could constitute a global catastrophe. What do you think is the probability of this occurring in the next 30 or 80 years?

I am sceptical of this. Crop yields for almost all crops have increased by 200% since 1980, despite warming of about 0.8 degrees since then. The crop effects of climate change you outline, which are typically on the order of up to 20% losses for major food crops at 5 degrees, should be set in this context. Various studies suggest that yields will increase by 25% to 150% by 2050, e.g. UN FAO; Wiebe. The yield damage estimates you cite seem likely to be outpaced by technological progress, unless there is a massive trend break in agricultural productivity. 

On phosphorus, according to a report by the IFDC funded by USAID, global phosphorus reserves (those which can currently be economically extracted, so this is a dynamic figure) will last for 300-400 years. "Based on the data gathered, collated and analyzed for this report, there is no indication that a "peak phosphorus" event will occur in 20-25 years. IFDC estimates of world phosphate rock reserves and resources indicate that phosphate rock of suitable quality to produce phosphoric acid will be available far into the future. Based on the data reviewed, and assuming current rates of production, phosphate rock concentrate reserves to produce fertilizer will be available for the next 300-400 years." The US Geological Survey says "World resources of phosphate rock are more than 300 billion tons. There are no imminent shortages of phosphate rock."

On water scarcity, agriculture is about 4% of global GDP and declining. If water became enough of a constraint on agriculture to threaten a global catastrophe, why would we not throw some money or wisdom at the problem, for example by spending more on water infrastructure and desalination, or by ending subsidies for agricultural uses of water? Have any middle or high income countries ever failed to produce more than enough food because of lack of water?

On soil erosion, the UN report on soil says "A synthesis of meta-analyses on the soil erosion-productivity relationship suggests that a global median loss of 0.3 percent of annual crop yield due to erosion occurs. If this rate of loss continues unchanged into the future, a total reduction of 10 percent of potential annual yield by 2050 would occur". Again, this is in the context of otherwise increasing yields. Soil erosion rates are also declining in various regions. 

Comment by Halstead on Deference for Bayesians · 2021-02-17T15:12:18.500Z · EA · GW

As I mention in the post, it's not just theory and common sense, but also evidence from other domains. If the demand curve for low skilled labour is vertical, then it is all but impossible that a massive influx of Cuban workers during the Mariel boatlift had close to zero effect on native US wages. Nevertheless, that is what the evidence suggests. 

I am happy to be told of other theoretical explanations of why minimum wages don't reduce demand for labour. The ones I am aware of in the literature are monopsonistic buyer of labour (clearly not the case), or one could give up on the view that firms are systematically profit-seeking (also doesn't seem true). 

The claims that are wrong are the ones I highlight in the post, viz. that the empirical evidence is all that matters when forming beliefs about the minimum wage. Most empirical research isn't that good and cannot distinguish signal from noise when there are small treatment effects, e.g. the Card and Krueger research that started the whole debate off got its data by telephoning fast food restaurants.

Comment by Halstead on Deference for Bayesians · 2021-02-17T14:57:31.970Z · EA · GW

2. I would disagree on economics. I view the turn of economics towards high causal identification and complete neglect of theory as a major error, for reasons I touch on here. The discipline has moved from investigating important things to trivial things with high causal identification. The trend towards empirical behavioural economics is also in my view a fad with almost no practical usefulness. (To reiterate my point on the minimum wage - the negative findings are almost certainly false: they are what you would expect to find for a small treatment effect and noisy data in observational studies. Even before reading the literature, if I believed that the effect of a minimum wage increase of $3 is small but negative, I would still expect to find a lot of studies finding no effect, because empirical research is not very good, so one should not update much on those negative findings. If you think the demand curve for low skilled labour is vertical, then the phenomenon of ~0 effect on native US wages after a massive influx of low skilled labour from Cuba is inexplicable. And the literature is very mixed - it's not like all the studies find no effect; that is a misconception.)

3. I agree that focusing on base rates is important, but that doesn't seem to get at the myopic empiricism issue. For example, the base rate of vaccine efficacy dropping off a cliff after 22 days is very low, but that was not established in the initial AstraZeneca study. To form that judgement, one needs evidence from other domains, which myopic empiricists ignore. 

4. I'm not sure where we disagree there. I don't think EAs should stay rooted in empiricism if that means 'form judgements only on the basis of the median published scientific study', which is the view I criticise. I'm not saying we should become less empirical - I think we should take account of theory but also empirical evidence from other domains, which as I discuss many other experts refuse to do in some cases. 

I'm not saying that we should be largely cut off from observation and experiment and should just deduce from theory. I'm saying that the myopic empiricist approach is not the right one. 

Comment by Halstead on Deference for Bayesians · 2021-02-17T08:44:14.686Z · EA · GW

This is maybe getting too bogged down in the object-level. The general point is that if you have a confident prior, you are not going to update on uncertain observational evidence very much. My argument in the main post is that ignoring your prior entirely is clearly not correct and that is driving a lot of the mistaken opinions I outline.

Tangentially, I stand by my position on the object-level - I actually think that 98% is too low! For any randomly selected good I can think of, I would expect a price floor to reduce demand for it in >99% of cases. Common sense aside, the only theoretical reason this might not be true is if the market for labour is monopsonistic, and that is just obviously not the case. There is also evidence from the immigration literature which suggests that native wages are barely affected by a massive influx of low skilled labour, which implies a near horizontal demand curve. There is also the point that if you are slightly Keynesian, you think that involuntary unemployment is caused by the failure of wages to adjust downward; legally forbidding them from doing this must cause unemployment.  

Comment by Halstead on Deference for Bayesians · 2021-02-16T14:11:01.524Z · EA · GW

Hello, my argument was that there are certain groups of experts you can ignore or put less weight on because they have the wrong epistemology. I agree that the median expert might have got some of these cases right. (I'm not sure that's true in the case of nutrition however)

The point in all these cases re priors is that one should have a very strong prior, which will not be shifted much by flawed empirical research. One should have a strong prior that the efficacy of the vaccine won't drop off massively for the over 65s even before this is studied.  

One can see the priors vs evidence case for the minimum wage more formally using Bayes' theorem. Suppose my prior that minimum wages reduce demand for labour is 98%, which is reasonable. I then learn that one observational study has found that they have no effect on demand for labour. Given the flaws in empirical research, let's say there is a 30% chance of a study finding no effect conditional on there being an effect. We might put a symmetrical probability on the other case: a 70% chance of a null result if minimum wages in fact have no effect. 

Then my posterior is (.3*.98)/(.3*.98 + .7*.02) ≈ 95.5%. 

So I am still very sure that minimum wages have an effect even though there is one study showing the contrary. FWIW, my reading of the evidence is that most studies do find an effect on demand for labour, so after assimilating it all, one would probably end up close to where one's prior was. This is why the value of information of research into the minimum wage is so low. 
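
Tangentially, the arithmetic above can be sketched in a few lines of Python (a toy illustration only; the 30%/70% likelihoods are the made-up figures from this comment, not estimates from the literature):

```python
def posterior_effect(prior, p_null_given_effect=0.3, p_null_given_no_effect=0.7):
    """P(minimum wages reduce labour demand | one null study),
    by Bayes' theorem, with the likelihoods assumed above."""
    numerator = p_null_given_effect * prior
    return numerator / (numerator + p_null_given_no_effect * (1 - prior))

print(round(posterior_effect(0.98), 3))  # 0.955: one null study barely moves the prior

# Even a run of five null studies leaves the posterior well above 40%:
p = 0.98
for _ in range(5):
    p = posterior_effect(p)
print(round(p, 3))  # 0.415
```

Each null result multiplies the odds by only 0.3/0.7 = 3/7, which is why a strong prior survives a handful of weak null findings - and why the value of information of any single study here is so small.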

On drinking in pregnancy, I don't think this is driven by people's view of acceptable risk, but rather by a myopic empiricist view of the world. Oster's book is the go-to for data-driven parents, and she claims that small amounts of alcohol have no effect, not that they have a small effect but are worth the risk. (Incidentally, the latter claim is also clearly false - it obviously isn't worth the risk.)

On your final point, I don't think one can or should aim to give an account of whether relying on theory or common sense is always the right thing to do. I have highlighted some examples where failure to rely on theory and evidence from other domains leads people astray. Epistemology is complicated and this insight may of course not be true in all domains. For a comprehensive account of how to approach cases such as these, one cannot say much more than that the true theory of epistemology is Bayesianism and to apply that properly you need to be apprised of all of the relevant information in different fields.

Comment by Halstead on [Link post] Are we approaching the singularity? · 2021-02-15T10:57:10.695Z · EA · GW

As a matter of interest, where do papers such as this usually get discussed? Is it in personal conversation or in some particular online location?

Comment by Halstead on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T12:57:01.338Z · EA · GW

Thanks for writing this. I disagree that EAs should prioritise this cause area and I disagree with the analysis of the cause-specific arguments. 

Firstly, I think it is good for happy people to come into existence, but this is ignored here. 

On climate change, I generally think Drawdown is not a reliable source. The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which currently accounts for a tiny fraction of emissions. Working on low carbon technology, by contrast, can affect global emissions, and policy change in the US or EU can affect a much larger fraction of emissions. 

I'm also strongly sceptical of the cost to prevent a pregnancy provided here: $10 seems far too low. This seems similar to the kind of mainstream charity cost-to-save-a-life estimate that EAs have criticised for a while.

Comment by Halstead on Deference for Bayesians · 2021-02-14T11:46:41.730Z · EA · GW

Thanks for sharing that piece, it's a great counterpoint. I have a few thoughts in response. 

Strevens argues that myopic empiricism drives people to do useful experiments which they perhaps might not have done if they stuck to theory. This seems to have been true in the case of physics. However, there is also a mountain of cases of wasted research effort, some of them discussed in my post. The value of information from, e.g., most studies on the minimum wage and observational nutritional epidemiology is minuscule in my opinion. Indeed, it's plausible that the majority of social science research is wasted money, per the claims of the meta-science movement. 

I agree that it's not totally clear if it would be positive if in general people tried to put more weight on theory and common sense. But some reliance on theory and common sense is just unavoidable. So, this is a question of how much reliance we put on that, not whether to do it at all. For example, to make judgements about whether we should act on the evidence of whether masks work, we need to make judgements about the external validity of studies, which necessarily involves making some theoretical judgements about the mechanism by which masks work, which the empirical studies confirm. The true logical extension of myopic empiricism is the inability to infer anything from any study: "We showed that one set of masks worked in a series of studies in the US in 2020, but we don't have a study of whether this other set of masks works in Manchester in 2021, so we don't know whether they work". 

I tend to think it would be positive if scientists gave up on myopic empiricism and shifted to being more explicitly Bayesian. 

Comment by Halstead on Deference for Bayesians · 2021-02-14T11:24:26.712Z · EA · GW

Hi,  thanks for this. 

I'm not making a claim that rationalists are more accurate than the standard experts. I actually don't think that is true, e.g. rationalists think you obviously should one-box in Newcomb's problem (which I think is wrong, as do most decision theorists). The comments of Greg Lewis' post discuss the track record of the rationalists, and I largely agree with the pessimistic view there. I also largely agree with the direction and spirit of Greg's main post.

My post is about what someone who accepts  the tenets of Bayesianism would do given the beliefs of experts. In the examples I mention, some experts have gone wrong by not taking account of their prior when forming beliefs (though there are other ways to fall short of the Bayesian standard, such as not updating properly given a prior). I think this flaw has been extremely socially damaging during the pandemic.

 I don't think this implies anything about deferring to the views of actual rationalists, which would require a sober assessment of their track record. 

Comment by Halstead on [Link post] Are we approaching the singularity? · 2021-02-14T11:07:08.907Z · EA · GW

Thanks for outlining the tests.

I'm not really sure what he thinks the probability of the singularity before 2100 is. My reading was that, given his tests, he probably doesn't think the singularity is (e.g.) >10% likely before 2100. 2 of the 7 tests suggest the singularity after 100 years and 5 of them fail. It might be worth someone asking him for his view on that.

Comment by Halstead on Promoting EA to billionaires? · 2021-01-24T21:01:51.797Z · EA · GW

There's also Effective Giving Netherlands

Comment by Halstead on Why I'm concerned about Giving Green · 2021-01-24T21:00:04.500Z · EA · GW

For what it's worth, as someone who has thought about climate policy and philanthropy on and off for the last 3 years, I would also agree with this critique, and for the reasons Johannes (jackva) gives, I don't think the responses succeed. It's good to see these issues being discussed openly and constructively by both sides. 

Comment by Halstead on The Folly of "EAs Should" · 2021-01-11T12:01:41.009Z · EA · GW

I don't think there's any need to apologise! I was trying to make the case that I don't think you showed how we could distinguish reasonable and unreasonable uses of normative claims

Comment by Halstead on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-09T20:31:41.215Z · EA · GW

What do you think the next 4 years have in store for the US, especially concerning the probability of a major change in institutions and order there?

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T17:39:37.286Z · EA · GW

Hi, thanks for the reply!

The argument now has a bit of a motte and bailey feel, in that case. In various places you make claims such as 

  • "The Folly of "EAs Should"
  • "One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid"; 
  • "So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful"; 
  • "Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints."
  • "and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions"

These seem to be claims to the effect that (1) we should (almost) never make normative claims (2) strong scepticism about knowing that one path is better from an EA point of view than another.  But I don't see a defence of either of these claims in the piece. For example, I don't see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good. 

If the claim is the weaker one that sometimes EAs can be overconfident in their view of the best way forward or use language that can be off-putting, then that may be right. But that seems different to the "never say that some choices EAs make are better than others" claim, which is suggested  elsewhere in the piece

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T11:30:29.239Z · EA · GW

I think this is consistent with Will's definition because you can view the 'should' claims as what we should do conditional on us accepting the goal of doing the most good using reason and evidence. 

Comment by Halstead on The Folly of "EAs Should" · 2021-01-06T11:22:13.654Z · EA · GW

Thanks for taking the time to put this together. 

At the start, you seem to suggest that we should not use 'should' because of moral uncertainty, and then you gloss this as a claim about cooperation. Moral uncertainty is intrapersonal, whereas moral cooperation is interpersonal. It might be the case that my credence is split between Theory 1 and Theory 2, but that everyone else has the exact same credal split. In this case, there is no need for interpersonal cooperation between people with conflicting moral beliefs because there is unanimity. Rather, the puzzle I face is to act under moral uncertainty, which is a very different point. 

In general, I think you have raised some sensible considerations about whether and how we might go about making EA more popular, such as around framing. But I think the idea that we should avoid talking about what EAs should do is untenable. Even while writing this comment, I have found it impossible not to say what EAs should do. Indeed, at several points in your post you make normative claims about what EA should do 

  • "So I think we should discuss why "Effective Altruism" implying that there are specific and clear preferable options for "Effective Altruists" is often harmful"
  • "Specifically, we should be wary of making the project exclusive rather than inclusive."
  • In the section on EA beyond small and weird, your argument is maybe EA should be big and weird.
  • In the section on fragmentation, if I have interpreted you correctly, you are saying some people should not be overconfident about their cause commitments given peer disagreement.
  • In the section on human variety, you say that EAs shouldn't have narrow career paths

Without making some normative claims about what EAs should and should not do, I don't see how EA could remain a distinctive movement. I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or to their private school, and that is what makes EA distinctive. Moreover, criticising the cause choices of EA actors just seems fundamental to the project. If our aim is to do the most good, then we should criticise approaches to that that seem unpromising. 

As an example, Hauke and I wrote a piece criticising GiveWell's reliance on RCTs. I took this to be an argument about what GiveWell or other EA research orgs should do with their staff time. How would you propose reframing this?

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:03:32.642Z · EA · GW

Hi. The A population and the Z population are both composed of merely possible future people, so person-affecting intuitions can't ground the repugnance. Some impartialist theories (critical level utilitarianism) are explicitly designed to avoid the repugnant conclusion. 

The case is analogous to the debate in aggregation about whether one should cure a billion headaches or save someone's life. 

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T17:15:10.677Z · EA · GW

Second comment, on your critique of Meacham...

As a (swivel-eyed) totalist, I'm loath to stick up for a person-affecting view, but I don't find your 'extremely radical implications' criticism of the view compelling and I think it is an example of an unpromising way of approaching moral reasoning in general. The approach I am thinking of here is one that  selects theories by meeting intuitive constraints rather than by looking at the deeper rationales for the theories. 

I think a good response for Meacham would be that if you find the rationale for his theory compelling, then it is simply correct that it would be better to stop everyone existing. Similarly, totalism holds that it would be good to make everyone extinct if there is net suffering over pleasure (including among wild animals). Many might also find this counter-intuitive. But if you actually believe the deeper theoretical arguments for totalism, then this is just the correct answer. 

I agree that Meacham's view on extinction is wrong, but that is because of the deeper theoretical reasons - I think adding happy people to the world makes that world better, and I don't see an argument against that in the paper. 

The Impossibility Theorems show formally that we cannot have a theory that satisfies people's intuitions about cases. So, we should not use isolated case intuitions to select theories. We should instead focus on deeper rationales for theories. 

Comment by Halstead on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T16:53:27.689Z · EA · GW

Thanks a lot for taking the time to do this Arden, I found it useful. I have a couple of comments

Firstly, on the repugnant conclusion. I have long found the dominant dialectic in population ethics a bit strange. We (1) have this debate about whether merely possible future people are worthy of our ethical consideration and then (2) people start talking about a conclusion that they find repugnant because of aggregation of low quality lives. The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future; it is rather from the way totalism aggregates low quality lives. This repugnance is irrelevant to questions of population ethics. It's a bit like if we were talking about the totalist view of population ethics, and then people started talking about the experience machine or other criticisms of hedonism: this may be a valid criticism of totalism but it is beside the point - which is whether merely possible future people matter. 

Related to this:

(1) There are current generation perfect analogues of the repugnant conclusion. Imagine you could give a medicine that secures a low quality life for billions of currently existing people, or give a different medicine to a much smaller number of people, giving them brilliant lives. The literature on aggregation also discusses the 'headaches vs death' case, which seems exactly analogous.

(2) For this reason, we shouldn't expect person-affecting views to avoid the repugnant conclusion. For one thing, some impartialist views, like critical level utilitarianism, avoid the repugnant conclusion. For another, the A population and the Z population are merely possible future people, so most person-affecting theories will say that they are incomparable. 

Meacham's view avoids this with its saturating relation in which possible future people are assigned counterparts. But (1) there are current generation analogues to the RC as discussed above, so this doesn't actually solve the (debatable) puzzle of the RC. 

(2) Meacham's view would imply that if the people in the much larger population had on average lives only slightly worse than people in the small population (A), then the smaller population would still be better. Thus, Meacham's view avoids the repugnant conclusion, but only by discounting the aggregation of high quality lives in some circumstances. This is not the solution to the repugnant conclusion that people wanted.

Comment by Halstead on How modest should you be? · 2020-12-31T17:20:37.846Z · EA · GW

I agree that lots of these considerations are important. On 2) especially, I agree that being epistemically modest doesn't make things easy because choosing the right experts is a non-trivial task. One example of this is using AI researchers as the correct expert group on AGI timelines, which I have myself done in the past. AI researchers have shown themselves to be good at producing AI research, not at forecasting long-term AI trends, so it's really unclear that this is the right way to be modest in this case. 

On 4 also - I agree. I think coming to a sophisticated view will often involve deferring to some experts on specific sub-questions using different groups of experts. Like maybe you defer to climate science on what will happen to the climate, philosophers on how to think about future costs, economists on the best way forward, etc. Identifying the correct expert groups is not always straightforward. 

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T20:40:59.001Z · EA · GW

The benefits of GiveWell's charities are worked out as health or economic benefits which are realised in the future. e.g. AMF is meant to be good because it allows people who would have otherwise died to live for a few more years. If you are agnostic about whether everyone will go extinct tomorrow, then you must be agnostic about whether people will actually get these extra years of life. 

Comment by Halstead on How modest should you be? · 2020-12-28T19:20:39.063Z · EA · GW

Hi Michael, I'm blushing!

Yes I think that would be a reasonable view to believe, but my point here is just about what role the object-level reasons should play in our epistemics. I do think something like a middle way is the right path, though I don't have a fully worked out theory. There is a good discussion of the topic here by Michael Huemer. I should note that I am generally very pro at least figuring out what the experts think about a topic in order to form reasonable views - the views of others should weigh heavily in our reasoning, especially given the widespread tendency to overconfidence. The idea of just ignoring all the object-level reasons seems wrong to me, however.

On my definition of continental philosophy, it is a form of philosophy that puts little to no value on clarity in writing. I think this is because the work of continental philosophers lacks substantive merit - when you have nothing to say, a good strategy is to be unclear; when you have no cards, all you can do is bluff. This leads to passages such as this, from Hegel:

"This is a light that breaks forth on spiritual substance, and shows absolute content and absolute form to be identical; - substance is in itself identical with knowledge. Self-consciousness thus, in the third place, recognizes its positive relation as its negative, and its negative as its positive, - or, in other words, recognizes these opposite activities as the same i.e. it recognizes pure Thought or Being as self-identity, and this again as separation. This is intellectual perception; but it is requisite in order that it should be in truth intellectual, that it should not be that merely immediate perception of the eternal and the divine which we hear of, but should be absolute knowledge. This intuitive perception which does not recognize itself is taken as starting-point as if it were absolutely presupposed; it has in itself intuitive perception only as immediate knowledge, and what it perceives it does not really know, - for, taken at its best, it consists of beautiful thoughts, but not knowledge."

Or this, from Foucault:

"An intrinsic archaeological contradiction is not a fact, purely and simply, that it is enough to state as a principle or explain as an effect. It is a complex phenomenon that is distributed over different levels of the discursive formation. Thus, for systematic Natural History and methodical Natural History, which were in constant opposition for a good part of the eighteenth century, one can recognize: an inadequation of the objects (in the one case one describes the general appearance of the plant; in the other certain predetermined variables; in the one case, one describes the totality of the plant, or at least its most important parts, in the other one describes a number of elements chosen arbitrarily for their taxonomic convenience; sometimes one takes account of the plant's different states of growth and maturity, at others one confines one's attention to a single moment, a stage of optimum visibility); a divergence of enunciative modalities (in the case of the systematic analysis of plants, one applies a rigorous perceptual and linguistic code, and in accordance with a constant scale; for methodical description, the codes are relatively free, and the scales of mapping may oscillate); an incompatibility of concepts (in the 'systems', the concept of generic character is an arbitrary, though misleading mark to designate the genera; in the methods this same concept must include the real definition of the genus); lastly, an exclusion of theoretical options (systematic taxonomy makes 'fixism' possible, even if it is rectified by the idea of a continuous creation in time, gradually unfolding the elements of the tables, or by the idea of natural catastrophes having disturbed by our present gaze the linear order of natural proximities, but excludes the possibility of a transformation that the method accepts without absolutely implying it)."

A central confusion for continental philosophers is acceptance of the 'worst argument in the world', which runs: "We can know things only

  • as they are related to us
  • under our forms of perception and understanding
  • insofar as they fall under our conceptual schemes
  • from our cultural/economic perspective
  • insofar as they are formulated in language.

So, we cannot know things as they are in themselves." This argument lies at the basis of relativism of different kinds. 

I think this is an interesting test case for epistemic modesty because from the outside, these people look a lot like experts. It is only by understanding some philosophy that you could reasonably discount their epistemic virtue. 

Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-28T16:15:37.508Z · EA · GW

My thought would be that getting the level of international coordination required would be extremely hard. (I am speaking from a position of ignorance here.)

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T22:23:58.904Z · EA · GW

Another way to look at this. What do you think is the probability that everyone will go extinct tomorrow? If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff.

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:54:12.817Z · EA · GW

Yes thanks my mistake - edited above

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:26:27.690Z · EA · GW

If you refuse to claim that the chance of nuclear war up to 2100 is greater than 0.000000000001%, then I don't see how you could make a good case to work on it over some other possible intuitively trivial action, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:20:46.257Z · EA · GW

Do you for example think there is a more than 50% chance that it is greater than 10 billion?

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:19:03.123Z · EA · GW

You say that "there are good arguments for working on the threat of nuclear war". As I understand your argument, you also say we cannot rationally distinguish between the claim "the chance of nuclear war in the next 100 years is 0.00000001%" and the claim "the chance of nuclear war in the next 100 years is 1%". If you can't rationally put probabilities on the risk of nuclear war, why would you work on it?
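A toy expected-value sketch (every number here is an illustrative placeholder of mine, not a figure from the exchange) of why agnosticism between those two probability claims leaves prioritisation undetermined: the expected benefit of the work differs by eight orders of magnitude between them.

```python
# Placeholder numbers: a hypothetical death toll and a hypothetical
# fractional risk reduction achieved by working on the problem.
deaths_if_war = 1e9
risk_reduction = 0.001

# The two probability claims from the comment: 0.00000001% vs 1%.
for p_war in (1e-10, 1e-2):
    expected_deaths_averted = p_war * risk_reduction * deaths_if_war
    print(f"P(war)={p_war}: ~{expected_deaths_averted} expected deaths averted")
```

Under the low claim the work averts a ten-thousandth of an expected death and is plainly worse than trivial alternatives; under the high claim it averts thousands and dominates them. Without some probability judgment, the comparison cannot be made at all.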

Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-27T20:06:55.343Z · EA · GW

I would like to see these sorts of bio-catastrophes discussed in more detail. On my naive understanding, the threat of engineered pandemics seems likely to usher in an age of disruption and surveillance, and to completely undermine current liberal democratic norms.

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:01:20.773Z · EA · GW

I'm not sold on the cluelessness-type critique of long-termism. The arguments here focus on things we might do now or soon to reduce the direct risk posed by threats such as AI, bio, or nuclear war. But even if we are clueless about those direct interventions, this doesn't undermine the expected value of other long-termist activities. 

  1. Gathering more information about the direct risks. If we are clueless about what to do, the value of information from further research must be extremely high, on long-termism. 
  2. Building the community of people concerned about the long-term e.g. through community building. 
  3. Investing in the stock market and punting the "what to do" question to the future. 
Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:45:45.791Z · EA · GW

On the point about the arbitrariness of estimates of the size of the future - what is your probability distribution across the size of the future population, provided there is not an existential catastrophe?

Comment by Halstead on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:33:50.972Z · EA · GW

I have a few comments on the critique of Bayesian epistemology, a lot of which I think is mistaken.

  1. You say "It frames the search for knowledge in terms of beliefs (which we quantify with numbers, and must update in accordance with Bayes rule, else risk rationality-apostasy!)." I don't think anyone denies that Bayes' theorem is true: it is mathematically proven. The most common criticism of Bayesianism is that it is "too subjective". I don't really understand what this means, but few sensible people deny Bayes' theorem.
  2. "It has imported valid statistical methods used in economics and computer science, and erroneously applied them to epistemology, the study of knowledge creation." Economics and computer science are epistemic enterprises. If Bayesianism is the right approach in these fields, it will be difficult to show it is not the right approach in other domains, such as political science, forecasting, or the other questions that long-termists are interested in.
  3. "It is based on confirmation as opposed to falsification". Falsificationism is implausible as a philosophy of science. Despite his popularity among scientists who are given one philosophy of science class, Karl Popper was a scientific irrationalist who denied that scientific knowledge has increased over the last few hundred years (on this, I would recommend David Stove's Scientific Irrationalism). If you deny that observations confirm scientific theories, then you have no reason to believe scientific theories which are supported by observational evidence, such as that smoking causes lung cancer.
  4. "It leads to paradoxes". Lots of smart philosophers deny that Pascal's mugging is a genuine paradox.
  5. [redacted - sorry misread the quote]
  6. "It relies on the provably false probabilistic induction". Popper was a scientific irrationalist because he denied the rationality of induction. If you deny the rationality of induction, then you must be sceptical about all scientific theories that purport to be confirmed by observational evidence. Inductive sceptics must hold that if you jumped out of a tenth-floor balcony, you would be just as likely to float upwards as fall downwards. Equally, do you think that smoking causes lung cancer? Do you think that scientific knowledge has increased over the last 200 years? If you do, then you're not an inductive sceptic. Inductive scepticism can't be used to ground a criticism that distinguishes uncertain long-termist probability estimates from probability estimates based on "hard data": e.g. GiveWell's estimates of the effectiveness of bednets are based on induction - they use data from studies showing that bednets have reduced the incidence of malaria.
  7. "(ironically, it’s precisely this aspect of Bayesianism which is so dubious: its inability to reject any hypothesis)." This isn't true. Bayesianism rejects some hypotheses. E.g. it assigns zero probability to some hypotheses, such as those that are logically or analytically false, like "smoking does and does not increase the risk of lung cancer". It also assigns very low probability to some hypotheses that are not logically or analytically false but have little to no observational support, such as "smoking does not increase the risk of lung cancer". If 'reject' means "assigns <0.001% probability to", then Bayesianism obviously does reject some hypotheses.
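To make the point in (7) concrete, here is a minimal sketch (my own illustration, with made-up likelihoods, not anything from the original exchange) of how iterated Bayesian updating drives a poorly supported hypothesis towards zero probability - which is rejection in any practically meaningful sense.

```python
# Illustrative sketch: repeated Bayesian updating pushes the probability
# of a hypothesis with little observational support towards zero.

def update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' theorem."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# H: "smoking does not increase the risk of lung cancer".
# Assume each new study is far more likely under not-H than under H.
p = 0.5  # start agnostic
for _ in range(20):  # twenty independent studies
    p = update(p, likelihood_h=0.1, likelihood_not_h=0.9)

print(p)  # tiny: far below any sensible rejection threshold
```

On these placeholder likelihoods, twenty studies drive the posterior many orders of magnitude below a 0.001% threshold, so a Bayesian "rejects" H in exactly the operational sense described above.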
Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-27T18:03:24.089Z · EA · GW

Some of the probability estimates seem a fair bit too high to me. 

In case 2, you say the risk of extinction is between 1% and 10%. In that world, there would be 800 million survivors - about the same as the world population in 1750. On the one hand, agriculture would be harder due to a nuclear/asteroid winter - maybe this leads to a >10-fold reduction in agricultural land? But on the other hand, we have massively more scientific and technical knowledge today, which has led to a >600% increase in yield since 1750 for some major food crops. I suspect that, on balance, this would make food supply harder, but not enough to produce a 1% risk of extinction (barring another catastrophe).
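A back-of-the-envelope version of this comparison (every number below is an illustrative assumption of mine, not a claim from the comment, and it treats the 1750 population as a rough proxy for 1750 carrying capacity):

```python
# Toy comparison of post-catastrophe food capacity against a 1750 baseline.
# All figures are rough placeholders for the sake of the sketch.
population_1750 = 8e8          # approximate world population in 1750
land_fraction_remaining = 0.1  # assumed >10-fold loss of usable farmland
yield_multiplier = 6.0         # assumed ~6x yield gain from modern knowledge

# People feedable relative to the 1750 baseline:
capacity = population_1750 * land_fraction_remaining * yield_multiplier
print(capacity)
```

On these made-up numbers, capacity lands at roughly 480 million against 800 million survivors - severe famine, but nowhere near the total collapse of food supply that a 1% extinction risk would seem to require.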

A general comment - I think it would be good to make an explicit model which explains how you arrived at a probability estimate - this makes it easier to pinpoint disagreements and to understand underlying reasoning. 

Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-27T17:24:37.245Z · EA · GW

Thanks a lot for this. I'm not really sure why the risk of meltdown of nuclear power plants is mentioned here. This intuitively seems like a marginal concern, and I would want to see more argument for meltdowns being a risk worth considering. 

  1. In the scenario you are considering, the main source of death is something like nuclear winter, in which people would die over the course of a few months. In that case, people would have enough time to turn the nuclear power plants off before they had the chance to melt down. If the catastrophe killed everyone in the surrounding area immediately, there would be incentives for others to go and turn the plants off. 
  2. In the massive Chernobyl meltdown, "As of mid-2005, however, fewer than 50 deaths had been directly attributed to radiation from the disaster, almost all being highly exposed rescue workers, many who died within months of the accident but others who died as late as 2004." Up to 4000 deaths could be attributed to the disaster in the longer term - so a series of large meltdowns seems likely only to mildly reduce life expectancy over the course of a few decades, which doesn't seem like a meaningful contribution to the catastrophe. The death toll from the Fukushima meltdown is similarly small and long-term. 
Comment by Halstead on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-26T15:54:13.003Z · EA · GW

A quick thought on the probability terminology. I think it would be better to just use numbers for the probabilities rather than to assign numbers to technical terms. I found myself going back to the glossary a lot to remind myself what each term means. Moreover, people are prone to forget the definitions given, and so information can get lost, especially if people only engage with specific sections. The IPCC does a similar thing with its probability terminology, and I think this has led to a lot of confusion. 

Comment by Halstead on Introducing High Impact Athletes · 2020-12-03T21:22:15.375Z · EA · GW

Very excited to see this. One other initiative to be aware of is Juan Mata's Common Goal, which encourages footballers to pledge 1% of their salary.

Comment by Halstead on How much does a vote matter? · 2020-11-02T17:58:41.650Z · EA · GW

I thought this was really good - thanks for writing it. Jason Brennan is a notable smart sceptic of the duty to vote. Do we know what he thinks of this?

Comment by Halstead on N-95 For All: A Covid-19 Policy Proposal · 2020-10-28T13:03:19.316Z · EA · GW

Thanks for this - really interesting post! A quick point on the moral hazard worry. I think there is a confusion in many moral hazard arguments between (1) "this intervention would increase risky behaviour", and (2) "this intervention would increase risky behaviour, which would thereby make the net benefits of the intervention too low to be worthwhile or even negative". (2) is the one we should be worried about. In other places, I have tried to call this a 'pernicious moral hazard' to distinguish it from (1), as it is easy to move too quickly from showing that there is a moral hazard to showing that the intervention is a bad idea. 

While it is possible that widespread use of N-95 masks would increase risky behaviour, it also seems very unlikely to make the net benefits of the intervention not worthwhile. I have looked at several real-world examples of moral hazards and struggled to find a case where the moral hazard effects made the intervention not worthwhile. (One possible exception is improvements in the quality of American football helmets, which enabled players to tackle with their heads, leading to extra concussions.) It doesn't seem plausible that what you propose is a pernicious moral hazard.

Comment by Halstead on Can we drive development at scale? An interim update on economic growth work · 2020-10-28T10:18:24.086Z · EA · GW

It also seems like this comment could be made on any post that is not about long-termism, so there doesn't seem to be anything especially relevant to this post here. If we don't know whether growth is good in the long term, then we presumably also don't know whether eradicating malaria is either. 

Also, I think growth plausibly is good from a long-termist point of view because it shortens the time of perils. It also has lots of beneficial political effects, as it prevents zero-sum rent-seeking and encourages socially valuable activity.

Comment by Halstead on Hiring engineers and researchers to help align GPT-3 · 2020-10-08T13:00:12.020Z · EA · GW

Ok, the post is still labelled as 'front page' in that case, which seems like it should be changed