Posts

Should we buy coal mines? 2022-05-04T07:28:33.057Z
A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas 2022-04-15T13:43:19.026Z
Are we going to run out of phosphorous? 2021-12-07T14:09:54.454Z
Good news on climate change 2021-10-28T14:04:00.848Z
Economic policy in poor countries 2021-08-07T15:16:59.850Z
How well did EA-funded biorisk organisations do on Covid? 2021-06-02T17:25:41.175Z
Deference for Bayesians 2021-02-13T12:33:05.556Z
[Link post] Are we approaching the singularity? 2021-02-13T11:04:02.579Z
How modest should you be? 2020-12-28T17:47:10.799Z
Instructions on potential insomnia cure 2020-10-12T13:56:53.111Z
High stakes instrumentalism and billionaire philanthropy 2020-07-19T19:19:41.206Z
What is a good donor advised fund for small UK donors? 2020-04-29T09:56:50.097Z
How hot will it get? 2020-04-18T20:29:59.579Z
Pangea: The Worst of Times 2020-04-05T15:13:23.612Z
Covid-19 Response Fund 2020-03-31T17:22:05.999Z
Growth and the case against randomista development 2020-01-16T10:11:51.136Z
Is mindfulness good for you? 2019-12-29T20:01:28.762Z
The ITN framework, cost-effectiveness, and cause prioritisation 2019-10-06T05:26:24.879Z
What should Founders Pledge research? 2019-09-09T17:41:04.073Z
[Link] New Founders Pledge report on existential risk 2019-03-28T11:46:17.623Z
The case for delaying solar geoengineering research 2019-03-23T15:26:13.119Z
Insomnia: a promising cure 2018-11-16T18:33:28.060Z
Concerns with ACE research 2018-09-07T14:56:25.737Z
New research on effective climate charities 2018-07-11T13:51:23.354Z
The counterfactual impact of agents acting in concert 2018-05-27T10:54:03.677Z
Climate change, geoengineering, and existential risk 2018-03-20T10:48:01.316Z
Economics, prioritisation, and pro-rich bias   2018-01-02T22:33:36.355Z
We're hiring! Founders Pledge is seeking a new researcher 2017-12-18T12:30:02.429Z
Capitalism and Selfishness 2017-09-15T08:30:54.508Z
How should we assess very uncertain and non-testable stuff? 2017-08-17T13:24:44.537Z
Where should anti-paternalists donate? 2017-05-04T09:36:53.654Z
The asymmetry and the far future 2017-03-09T22:05:26.700Z

Comments

Comment by John G. Halstead (Halstead) on New 80k problem profile - Climate change · 2022-05-18T22:17:18.311Z · EA · GW

I'm not sure I understand why you don't think the in/direct distinction is useful. 

I have worked on climate risk for many years and I genuinely don't understand how one could think it is in the same ballpark as AI, biorisk or nuclear risk. This is especially true now that the risk of >6 degrees seems to be negligible. If I read about biorisk, I can immediately see the argument for how it could kill more than 50% of the population in the next 10-20 years. With climate change, for all the literature I have read, I just don't understand how one could think that. 

You seem to think the world is extremely sensitive to what the evidence suggests will be agricultural disturbances that we live through all the time: the shocks are well within the normal range of shocks that we might expect to see in any decade, for instance. This chart shows the variation in the food price index. Between 2004 and 2011, it increased by about 200%. This is much, much bigger than any posited effects of climate change that I have seen. One could also draw lots of causal arrows from this to various GCRs. Yet, I don't see many EAs argue for working on whatever were the drivers of these changes in food prices. 

Comment by John G. Halstead (Halstead) on New 80k problem profile - Climate change · 2022-05-18T21:59:49.643Z · EA · GW

I agree it is not where the action is, but given that large sections of the public think we are going to die in the next few decades from climate change, it makes lots of sense to discuss it. And the piece makes a novel contribution on that question, which is an update from previous EA wisdom. 

I took it that the claim in the discussed footnote is that working on climate is not the best way to tackle pandemics, which I think we agree is true. 

I agree that it is a risk factor in the sense that it is socially costly. But so are many things. Inadequate pricing of water is a risk factor. Sri Lanka's decision to ban chemical fertiliser is a risk factor. Indian nationalism is a risk factor, etc. In general, bad economic policies are risk factors. The question is: is the risk factor big enough to change the priority cause ranking for EAs? I really struggle to see how it is. Like, it is true that perceived climate injustice in South Asia could matter for bioterrorism, but this is very, very far down the list of levers on biorisk. 

Comment by John G. Halstead (Halstead) on New 80k problem profile - Climate change · 2022-05-18T21:48:50.577Z · EA · GW

(In that case, he said that the post ignores indirect risks, which isn't true.)

On your first point, my claim was "I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change". The papers you shared also do not make this argument. I'm not saying that it is conceptually impossible for working on one risk to be the best way to work on another risk. Obviously, it is possible. I am just saying it is not substantively true about climate on the one hand, and AI and bio on the other. To me, it is clearly absurd to hold that the best way to work on these problems is by working on climate change. 

On your second point, I agree that climate change could be a stressor of some conflict risks in the same way that anything that is socially bad can be a stressor of conflict risks. For example, inadequate pricing of water is also a stressor of India-Pakistan conflict risk for the same reason. But this still does not show that it is literally the best possible way to reduce the risk of that conflict. It would be very surprising if it were since there is no evidence in the literature of climate change causing interstate warfare. Also, even the path from India-Pakistan conflict to long-run disaster seems extremely indirect, and permanent collapse or something like that seems extremely unlikely. 

Comment by John G. Halstead (Halstead) on New 80k problem profile - Climate change · 2022-05-18T20:35:52.166Z · EA · GW

I don't think the post ignores indirect risks. It says "For more, including the importance of indirect impacts of climate change, and our climate change career recommendations, see the full profile."

As I understand the argument from indirect risk, the claim is that climate change is a very large and important stressor of great power war, nuclear war, biorisk and AI. Firstly, I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change. 

Secondly, climate change is not an important determinant of Great Power War, according to all theories of Great Power War. The Great Power Wars that EAs most worry about are between the US and China and the US and Russia. The main posited drivers of these conflicts are one power surpassing the other in geopolitical status (the Thucydides trap); defence agreements made over contested territories like Ukraine and Taiwan; and accidental launches of nuclear weapons due to a wrongly perceived first strike. It's hard to see how climate change is an important driver of any of these mechanisms. 

Comment by John G. Halstead (Halstead) on New 80k problem profile - Climate change · 2022-05-18T19:45:34.230Z · EA · GW

I think there is good reason to focus on direct extinction given their audience. As they say at the top of their piece, "Across the world, over half of young people believe that, as a result of climate change, humanity is doomed".

What is your response to the argument that because the direct effects of AI, bio and nuclear war are much larger than the effects of climate change, the indirect effects are also likely much larger? To think that climate change has a bigger scale than eg bio, you would have to think that even though climate's direct effects are smaller, its indirect effects are large enough to outweigh the direct effects. But the direct effects of biorisk seem huge. If there is genuine democratisation of bio WMDs, then you get regular cessation of trade and travel, there would need to be lots of surveillance, everyone might have to live in a biobubble, etc. The indirect effects of climate change that people talk about in the literature stem from agricultural disruption in low income countries leading to increased intrastate conflict in low income countries (though the strength/existence of the causal connection is disputed). While these indirect effects are bad, they are orders of magnitude less severe than the indirect effects of biorisk. I think similar comments apply to nuclear war and to AI. 

The papers you have linked to suggest that the main pathway through which climate change might destabilise society is via damaging agriculture. All of the studies I have ever read suggest that the effects of climate change on food production will be outpaced by technological change and that food production will increase. For example, the chart below shows per capita food consumption on different socioeconomic assumptions and on different emissions pathways for 2.5 degrees of warming by 2050 (for reference, 2.5 degrees by 2100 is now widely thought to be business as usual). Average per capita food consumption increases relative to today on all socioeconomic pathways considered.

Source: Michiel van Dijk et al., ‘A Meta-Analysis of Projected Global Food Demand and Population at Risk of Hunger for the Period 2010–2050’, Nature Food 2, no. 7 (July 2021): 494–501, https://doi.org/10.1038/s43016-021-00322-9 .

Comment by John G. Halstead (Halstead) on Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures · 2022-05-16T09:16:45.034Z · EA · GW

I think the shift in temperature focus is almost entirely because of the Paris Agreement. It's pretty natural that they would mention 2 degrees and 1.5 degrees a lot given Paris. Indeed, they had a special report on 1.5 degrees for that reason. I don't think it implies a change in research focus in the main reports since, as we have seen, almost all of the impacts literature assesses the effects of RCP8.5. 

Given that the RCP mentions have been pretty constant (barring RCP6 being mentioned less), I don't really see that there has been any change in research focus. I especially don't think it is true to say that the climate science literature is ignoring impacts of more than 3 degrees: that is just very clear if you dig into the impacts literature on any particular impact. In fact, the impacts literature focuses a lot on 4.3 degrees and assumes that we will have little adaptive capacity to deal with that. 

Comment by John G. Halstead (Halstead) on Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures · 2022-05-13T17:39:08.403Z · EA · GW

Hiya, I think the latest IPCC report reflects the literature in that it also focuses on RCP8.5 (i.e. 4 degrees). You have sampled temperature mentions, but I think if you had sampled RCP mentions, your main finding would no longer stand.

For example, in the latest IPCC report, pretty much every graph includes the impact of RCP8.5: agriculture, ocean ecosystems, coral reefs, shoreline change, phytoplankton phenology, marine species richness, marine biomass, etc.

Comment by John G. Halstead (Halstead) on Deferring · 2022-05-13T09:54:26.344Z · EA · GW

I thought this post by Huemer was a nice discussion of deference - https://fakenous.net/?p=550

Comment by John G. Halstead (Halstead) on Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures · 2022-05-12T12:51:39.940Z · EA · GW

As I mentioned in my comment on your earlier post, I don't think the headline claim here is correct. The majority of the impacts literature focuses on the impacts of RCP8.5, the highest emissions pathway, which implies 4.3 degrees of warming. Moreover, papers often use RCP8.5 in combination with Shared Socioeconomic Pathway 3 (SSP3), a socioeconomic future which has low economic growth, especially for the poorest countries. SSP3 is not actually compatible with RCP8.5. For this reason, the impacts literature has been criticised, in my view correctly, for being excessively pessimistic. So, I think the reverse of what you say is correct.

Comment by John G. Halstead (Halstead) on New substack on utilitarian ethics: Good Thoughts · 2022-05-09T16:19:29.251Z · EA · GW

These posts are very good. I do feel there are simple and effective arguments for utilitarianism that get missed even by professional philosophers. Most glaringly, there are clear and, to my eyes, fatal problems for most stated deontological theories which people just ignore when talking about utilitarianism. Deontology seems much less well-developed than utilitarianism on so many fronts.

Comment by John G. Halstead (Halstead) on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-09T16:16:41.273Z · EA · GW

Is there anything wrong just with 'effective altruism' as the name?

Comment by John G. Halstead (Halstead) on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-09T14:02:44.758Z · EA · GW

Welfarism would be a natural one, but that is already taken. 

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-09T12:27:48.108Z · EA · GW

It's tricky to see what happened in that debate because I have Twitter and that blog blocked on weekdays!

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-09T11:08:56.151Z · EA · GW

All the argument shows is that it is logically possible for AGI not to be aligned. Since Bryan Caplan is a sane human being, it's improbable that he would ever have denied that claim. So, it's unclear why Yudkowsky would have presented it to him as an important argument about AGI alignment. 

Comment by John G. Halstead (Halstead) on Should we buy coal mines? · 2022-05-09T11:05:36.055Z · EA · GW

Hello,

  1. The problem with buying up uneconomic mines is that there is a greater risk of having no additionality. If the mine is indeed uneconomic, you're probably just taking on a huge reclamation liability without producing any climate benefits. There are surely better ways to use the money. 
  • On the comment about land acreage: I'd have to look into this more, but probably don't have the time right now.

2. Legal barriers matter because they affect the chance that the project will be possible. Again, this leads back to the additionality/barrier trade-off I mentioned in the post. 

3. The political risks matter because they make it extremely unlikely that the project will even be possible. For example, I would be happy to be proven wrong, but I would be very surprised if a state in Australia ever let an environmentalist buy a mine. States have discretion over who wins mining bids, and they would likely foresee that someone is buying the mine not to run it: if the mine is economic, they would know after the first year that the lease owner wasn't actually going to mine it. The Wyoming government has been shown to be willing to take a financial hit in order to keep mines open. 

Indeed, this has been the experience in every case I know of environmentalists trying to buy fossil fuel resources - Greenpeace in Germany, Tempest Williams in the US, etc. People have been trying to buy fossil fuel resources to retire them for many years. I know of literally no cases in which governments have let them do it. 

"Let me tell it to you straight, commodity traders will shut in coal capacity if they get paid to do it. There’s no romance in the industry. If the mine is more valuable to the EA community as sequestered carbon than it is to coal mine owners as cash producing assets, they will sell it, shut it in, and politically lobby to make sure you can keep it shut in." I am aware of that, but I don't see how it refutes anything I said in the post. 

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-09T10:51:48.498Z · EA · GW

Hi Greg, I don't think anyone would ever have held that it is logically impossible for AGI not to be aligned. That is clearly a crazy view. All that orthogonality argument proves is that it is logically possible for AGI not to be aligned, which is almost trivial. 

Comment by John G. Halstead (Halstead) on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-05-06T09:54:50.249Z · EA · GW

This post is great. I have wondered whether there should be more emphasis in EA on a 'warrior mindset' - working really hard to get important stuff done (discussed a bit here). A lot of highly effective people do seem to work very hard, and I think that is an important norm to spread in EA. 

Comment by John G. Halstead (Halstead) on Should we buy coal mines? · 2022-05-05T16:05:51.158Z · EA · GW

There's some endogeneity in the policy though - policymakers probably respond to that kind of activity, especially if it happens at scale. 

Comment by John G. Halstead (Halstead) on Should we buy coal mines? · 2022-05-04T10:34:45.325Z · EA · GW

Thanks for this. Yeah, I haven't crunched the numbers on cost per microdoom and probs don't have time to go through your calcs.

Comment by John G. Halstead (Halstead) on Should we buy coal mines? · 2022-05-04T09:56:17.222Z · EA · GW

The buying up plants idea is a somewhat different idea from the buying mines idea. The main barrier there is that you are dealing with a highly regulated utility which has to ensure security of supply. The market to buy up coal plants is not an open one.

Comment by John G. Halstead (Halstead) on Should we buy coal mines? · 2022-05-04T08:45:52.142Z · EA · GW

That's one option, but it would also be more expensive because then you have to cover all the costs of mining. 

Comment by John G. Halstead (Halstead) on P(utopia) is more important than P(doom) and this could have important strategic implications · 2022-05-04T07:38:30.185Z · EA · GW

I agree that there might be reasons of moral cooperation and trade to compromise with other value systems. But when deciding how to cooperate, we should at least be explicitly guided by optimising for our own values, subject to constraints. I think it is far from obvious that aligning with the intent of the programmer is the best way to optimise for utilitarian values. Perhaps we should aim for utilitarian alignment first.

Comment by John G. Halstead (Halstead) on P(utopia) is more important than P(doom) and this could have important strategic implications · 2022-05-04T07:34:46.558Z · EA · GW

Thanks for this, I think it is an important and under-discussed point. In their AI alignment work, EAs seem to be aiming for intent-alignment rather than social welfare production, which I think is plausibly a very large mistake, or at least one that hasn't received very much scrutiny.

Incidentally, I also don't know what it means to say that we have aligned AIs with 'our values'. Since there is disagreement, 'our' has no referent here.

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-03T12:32:25.774Z · EA · GW

Ah I see, I hadn't seen that.

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-03T10:21:56.746Z · EA · GW

Even if you think a uniform prior has zero information, which is a disputed position in philosophy, we have lots of information to update with here. eg that programmers will want AI systems to have certain motivations, that they won't want to be killed etc. 

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-03T10:19:49.164Z · EA · GW

See the example from Yudkowsky above. As I understand it, he is the main person who has encouraged rationalists to focus on AI. In trying to explain why AI is important to a smart person (Bryan Caplan), he appeals to the orthogonality argument, which has zero bearing on whether AI alignment will be hard or worth working on. 

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-03T10:17:10.243Z · EA · GW

Agreed! Some evidence of that in my comment.

Comment by John G. Halstead (Halstead) on A tale of 2.75 orthogonality theses · 2022-05-03T10:14:46.327Z · EA · GW

I'm a bit surprised not to see this post get more attention. My impression is that for a long time a lot of people put significant weight on the orthogonality argument. As Sasha argues, it is difficult to see why it should update us towards the view that AI alignment is difficult or an important problem to work on, let alone by far the most important problem in human history. I would be curious to hear its proponents explain what they think of Sasha's argument. 

For example, in his response to Bryan Caplan on the scale of AI risk, Eliezer Yudkowsky makes the following argument: 

"1. Orthogonality thesis – intelligence can be directed toward any compact goal; consequentialist means-end reasoning can be deployed to find means corresponding to a free choice of end; AIs are not
automatically nice; moral internalism is false.

2. Instrumental convergence – an AI doesn’t need to specifically hate you to hurt you; a
paperclip maximizer doesn’t hate you but you’re made out of atoms that it can use to make paperclips, so leaving you alive represents an opportunity cost and a number of foregone paperclips. Similarly,
paperclip maximizers want to self-improve, to perfect material technology, to gain control of resources, to persuade their programmers that they’re actually quite friendly, to hide their real thoughts from their programmers via cognitive steganography or similar strategies, to give no sign of value disalignment until they’ve achieved near-certainty of victory from the moment of their first overt strike, etcetera.

3. Rapid capability gain and large capability differences – under scenarios seeming more plausible than not, there’s the possibility of AIs gaining in capability very rapidly, achieving large absolute
differences of capability, or some mixture of the two. (We could try to keep that possibility non-actualized by a deliberate effort, and that effort might even be successful, but that’s not the same as the avenue
not existing.)

4. 1-3 in combination imply that Unfriendly AI is a critical Problem-to-be-solved, because AGI is not automatically nice, by default does things we regard as harmful, and will have avenues
leading up to great intelligence and power."

This argument does not work. It shows that AI systems will not necessarily be aligned, but it tells us nothing about whether they are likely to be aligned or whether it will be easy to align AI systems. All of the above is completely compatible with the view that AI will be easy to align and that we will obviously try to align it, eg since we don't all want to die. To borrow the example from Ben Garfinkel, cars could in principle be given any goals. This has next to no bearing on what goals they will have in the real world or in the set of likely possible futures. 

Comment by John G. Halstead (Halstead) on Longtermist EA needs more Phase 2 work · 2022-04-22T16:30:27.540Z · EA · GW

Thanks for writing this. I had had similar thoughts. I have some scattered observations:

  1. I find those bullshitty personality tests one sometimes does on work retreats quite instructive on this. On personality tests of analytic/driver/affable/expressive, EAs probably cluster hard in the analytic section. But really great executors like Bezos, Zuckerberg, Musk and Cummings are strongly in the driver section. I have heard that successful businesses are often led by combo teams of drivers and analytical people who can moderate the excesses of each type. The driver personality type can seem anathema to EA/analytic thinking because it can be quite personality-led and so is a bit like following a guru: their plans can often be obscure and can sometimes seem unrealistic, and they're often not super analytical and careful - that can be why they take these really risky bets. It would have been difficult to believe that Musk would create the world's most valuable car company, and it would have been difficult for him to explain to you how he was going to do it. Nevertheless, he did it, and it would have been sensible to follow him. I think we need to recognise the cultural barriers to execution in EA. 
  2. This suggests that we need to work really hard to cultivate and support the executors in the community. Alternatively, we could pull in executors from outside EA and point them at valuable projects. 
  3. Now is a good time for people to test out being executors to see whether they are a good fit. 
  4. I think this counts in favour of working on valuable projects even if they are probably not what's best from a longtermist point of view. Eg from what I have seen, I don't think EAs did much to reduce the damage from COVID; others like Cowen's Fast Grants had much more impact. It is true that much bigger bio-disasters comprise a bigger fraction of the (large) risk this century. But (a) at the very least, getting some practice in doing something useful in a crisis seems like a good idea for career capital-type reasons, (b) it builds credibility in the relevant fields, (c) we can test who is good at acting in a crisis and back them in the future.

Comment by John G. Halstead (Halstead) on A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas · 2022-04-21T14:29:30.073Z · EA · GW

He responded to my initial email, though he hasn't responded to my follow-ups. I have invited him to comment if he wants. I'm not sharing his response because my understanding is that the norm is not to share private email correspondence, or to give the gist of it, without his permission.

Comment by John G. Halstead (Halstead) on Pre-announcing a contest for critiques and red teaming · 2022-04-14T17:48:05.090Z · EA · GW

I think one especially valuable way to do this would be to commission/pay non-EA people with good epistemics to critique essays on core EA ideas. There are various philosophers/others I can think of who would be well placed to do this - people like Caplan, Michael Huemer, David Enoch, Richard Arneson etc. 

I think it would also be good to have short online essay colloquia following the model of Cato Unbound.

Comment by John G. Halstead (Halstead) on Against the "smarts fetish" · 2022-04-12T11:14:50.270Z · EA · GW

"my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice"

The claim that EA overrates IQ is the same as the claim that other traits deserve more attention.

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-12T09:40:39.745Z · EA · GW

It's table 3 I think you want to look at. For fatigue and other long covid symptoms, belief that you had covid has a higher odds ratio than does confirmed covid (but no belief that you had covid). 

I think there is good reason to be sceptical of long covid. It groups together multiple different symptoms that are strongly psychologically influenced and already prevalent in the population, such as brain fog and fatigue. In a pandemic where anxiety about disease is high and social interaction low, we should expect people to attribute these symptoms to covid. 

Another point I find useful when thinking about this is that if some of the more dire predictions of the effects of long covid were true, such as a 1% chance of having your life ruined, the effects would be visible in plain sight - lots of sports stars and celebs would have to retire. I have checked, and I know of zero cases of professional English footballers retiring due to covid, but we would expect several to have done so if long covid risk were really that high. 

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-12T08:18:45.862Z · EA · GW

This also seems like a reason to encourage people to lobby their governments to get rid of testing requirements for travel.

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-12T08:15:57.290Z · EA · GW

On long covid - what is your view on the paper I linked to which suggests that long covid is psychosomatic? 

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-12T08:13:06.496Z · EA · GW

The OP didn't make any arguments about the health risks, so one cannot infer his stance on long covid from the post. As I intimated in my comment, I don't think omicron is as bad as flu; I think it is as bad as a cold. Yes, I would attend if there were a 20% chance of getting a cold. The additional covid risk makes almost no difference to the background risk of attending the conference and is swamped by other risks, such as the risk of a nuclear strike on London or being in a traffic accident on the way to the venue. 

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-12T07:58:54.571Z · EA · GW

Thanks Pablo. Yeah, sorry, I couldn't access the Financial Times page directly.

Comment by John G. Halstead (Halstead) on The Vultures Are Circling · 2022-04-11T18:20:50.106Z · EA · GW

I also thought that the post provided no support for its main claim, which is that people think that EAs are giving money away in a reckless fashion. 

 Even if people are new, we should not encourage poor epistemic norms. 

Comment by John G. Halstead (Halstead) on Against the "smarts fetish" · 2022-04-11T18:18:27.206Z · EA · GW

Like Linch, I do not see how you present any arguments for your main conclusion in the post. You argue that EA overrates IQ but present no arguments that this is the case. Your response also doesn't present any arguments for that conclusion.

Comment by John G. Halstead (Halstead) on How about we don't all get COVID in London? · 2022-04-11T17:28:36.158Z · EA · GW

At the start you say you are going to argue that "the median EAG London attendee will be less COVID-cautious than they would be under ideal epistemic conditions". So, I was expecting you to discuss the health risks of getting covid for EAG attendees (who will predominantly be between 20 and 40 and will ~all have been triple vaccinated). Since you don't do that, your post shouldn't update us at all towards your conclusion.

The IFR for covid across all ages is now below that of seasonal flu. The risk of death for people attending EAG is extremely small given the likely age and vaccination status of attendees. 

It is difficult to work out the effects of long covid, but the most reasonable estimates I have seen put the health cost of long covid as equivalent to 0.02 DALYs, or about a week. (I'm actually pretty sceptical that long covid is real (see eg here))

For people aged 20-40 who are triple jabbed, the risks of attending EAG are extremely small - I think on the order of getting a cold. They do not justify "the usual spate of NPIs".

There's also the point that covid seems likely to be endemic, so there is little value in a "wait and see" approach.

Comment by John G. Halstead (Halstead) on The case for delaying solar geoengineering research · 2022-03-23T20:16:11.173Z · EA · GW

Hello! thanks for this

As I argue in the piece, I don't think deployment could happen now, at least for stratospheric aerosol injection. I don't think it will happen until there is significant within-country demand for SAI, at least among all major powers. We are a long, long way away from that. The governance challenges for things like marine cloud brightening are lower, so I agree that could plausibly be used much sooner.

The information/attention hazards depend not only on the idea of solar geoengineering but also on how much it is discussed. This is widely accepted in eg biorisk, where many researchers will not mention published papers on gain of function research. It is clear that further scientific discussion and attention would increase the info/attention hazard. 

My main concern with SAI research is that it is a waste of money. The case is less clear for more regional solar geoengineering.

Comment by John G. Halstead (Halstead) on Why randomized controlled trials matter · 2022-03-23T20:10:55.975Z · EA · GW

Most of his blogs for the Center for Global Development are relevant. His recent paper "Randomizing Development: Method or Madness?" contains most of his main arguments. He also has a blog called Lantrant where he frequently criticises the use of RCTs in economics. In my view, almost all of his critiques are correct.

Comment by John G. Halstead (Halstead) on Why randomized controlled trials matter · 2022-03-22T21:57:31.091Z · EA · GW

I enjoyed your post a lot. Lant Pritchett is a prominent critic of using RCTs for large scale social interventions - he might be worth reading. 

Comment by John G. Halstead (Halstead) on We're announcing a $100,000 blog prize · 2022-03-08T20:04:32.009Z · EA · GW

I agree that this would be very valuable. I work at an EA org and even I miss out on a lot of discussions that happen between top people, on Google Docs or over lunch. Things must be much worse for people not at EA orgs. 

It would be useful if some of the top people could share why they prefer not to make these discussions public. I would guess that one reason is that people don't want arguments which they haven't backed up in formal ways to be classed as "the official view of EA leaders". Creating a forum for posts with shakier epistemic status seems valuable.

Comment by John G. Halstead (Halstead) on Retrospective on Shall the Religious Inherit The Earth · 2022-02-23T01:46:37.334Z · EA · GW

You say "While there is a clear relationship between religiosity and the role of religion in public life, views on other political issues are less determined by religion." I wasn't sure that this was borne out by the cites. I can't access the Foreign Policy piece, but the Pew survey seemed to suggest quite strong political differences between haredim and others. This was in part confirmed by the chart that you cited but also other charts. Kaufman argues that Jerusalem has become politically extreme due to ultra-orthodox dominance - examples I recall were adverts with women being taken down and cars being stoned on the saturdays. 

 

Comment by John G. Halstead (Halstead) on Retrospective on Shall the Religious Inherit The Earth · 2022-02-23T01:34:29.521Z · EA · GW

Thanks for writing this, it's great. 

Comment by John G. Halstead (Halstead) on Future-proof ethics · 2022-02-03T15:30:36.890Z · EA · GW

A quick point on the track record of utilitarianism. One counter-example is James Fitzjames Stephen, a judge and utilitarian philosopher who wrote a trenchant critique of J.S. Mill's arguments in On Liberty and his defence of the rights of women. This is in Stephen's Liberty, Equality and Fraternity. 

It does seem that the most famous utilitarians were ahead of the curve, but I do wonder whether they are famous in part because their ideas won the day. There may have been other utilitarians arguing for different opinions. 

Comment by John G. Halstead (Halstead) on Dismantling Hedonism-inspired Moral Realism · 2022-01-28T09:53:38.473Z · EA · GW

I agree that this is the next stage of the dialectic. But then the situation is: sentient experience is a necessary condition on there being value in the world. No other putative intrinsically valuable thing (preference satisfaction, authenticity, friendship etc) is a necessary condition on there being value in the world - eg even proponents of the view that authenticity is good don't think it is necessary for there being value in the world, as illustrated by the example of a torture experience machine. If you are assessing whether something is intrinsically good, I think a reasonable test is: imagine if that thing existed alone - would it matter? If the thing were good by virtue of its intrinsic or necessary properties, then it would be valuable all on its own. But that only seems to be true of sentient experience. Eg if authenticity were really intrinsically valuable, then it would be valuable by virtue of its intrinsic properties. So, it should be the case that the world is better by virtue of the fact that agents have accurate beliefs about the world they interact with. But one can imagine worlds where this is true but that have zero value, namely worlds in which no agents are sentient. So, authenticity is not intrinsically valuable. 

One possible view is to say that things like authenticity and friendship have conditional intrinsic value. I however don't have this concept. 

The fact that other putative intrinsically valuable things only become valuable when there is sentient experience in the world is also a debunking argument in favour of hedonism. The argument is that people confuse things that are merely connected in some way to sentient experience with what is intrinsically valuable. 

Comment by John G. Halstead (Halstead) on Dismantling Hedonism-inspired Moral Realism · 2022-01-27T18:25:58.978Z · EA · GW

As well as intuitive critiques of hedonism like the experience machine, there are also strong intuitive arguments for hedonism. Imagine a world with no sentience, no conscious experience. No-one feels anything. It is full of things like tables and rocks. I fail to see why this world would matter. 

Another point is that all theories of morality have a hedonistic component. Any plausible moral theory should say that inflicting pain on someone is bad in and of itself, independent of its effects on anything else, such as their resources, capabilities, or ability to achieve their projects. It also seems like any plausible moral theory should say that it is better, other things equal, for people to have pleasure. If we have a choice between a festival in which people have loads of fun and one in which people have moderate amounts of fun, we should choose the former.

Experience machine-type arguments don't work so well when we try them with suffering. According to hedonism, only bad conscious experiences are bad. Imagine experience machines that produced extreme, unbearable torture for prolonged periods of time. It seems like a world in which as many sentient creatures as possible were in these negative experience machines would be as bad as it is possible to be. Other theories of value just don't seem to do very well here. Imagine having your strongest preference violated (what if the torture victims are moral patients incapable of agency or preference). Imagine if you had no capabilities or no resources (still doesn't seem as bad as the negative torture machine). Hedonism at least seems like the correct account of negative wellbeing, or a central component of it. By symmetry, we should also expect it to be a central component of positive wellbeing. 

Comment by John G. Halstead (Halstead) on Dismantling Hedonism-inspired Moral Realism · 2022-01-27T18:13:03.517Z · EA · GW

Thanks for these posts, they are very interesting

Related to the experience machine, you say "hedonism arguably commits us to a somewhat narcissistic view of our loved ones." I don't think this is correct. The experience machine is meant to show us that hedonism is false as a theory of personal wellbeing. Hedonism says that what makes my life go well for me is positive conscious experiences. 

Hedonistic versions of utilitarianism of course say that our own personal wellbeing is not all that matters. From a utilitarian point of view, we care about our loved ones both because of our own happiness and because of their happiness. So, this isn't narcissistic. Indeed, this is one debunking account of the experience machine: insofar as people have moral motivations, they would no longer be able to live a moral life once they clambered into the experience machine.