Posts

16 Recent Publications on Existential Risk (Nov & Dec 2019 update) 2020-01-15T12:07:42.000Z · score: 20 (7 votes)
The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) 2020-01-12T21:53:25.644Z · score: 9 (25 votes)
21 Recent Publications on Existential Risk (Sep 2019 update) 2019-11-05T14:26:31.698Z · score: 31 (16 votes)
Centre for the Study of Existential Risk Six Month Report April - September 2019 2019-09-30T19:20:24.798Z · score: 14 (5 votes)
Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 2019-05-01T15:34:20.425Z · score: 10 (13 votes)
Lecture Videos from Cambridge Conference on Catastrophic Risk 2019-04-23T16:03:21.275Z · score: 15 (9 votes)
CSER Advice to EU High-Level Expert Group on AI 2019-03-08T20:42:10.796Z · score: 13 (4 votes)
CSER and FHI advice to UN High-level Panel on Digital Cooperation 2019-03-08T20:39:29.657Z · score: 22 (7 votes)
Centre for the Study of Existential Risk: Six Month Report May-October 2018 2018-11-30T20:32:01.600Z · score: 26 (15 votes)
CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk' 2018-10-02T17:18:48.449Z · score: 9 (9 votes)
New Vacancy: Policy & AI at Cambridge University 2017-02-13T19:32:23.538Z · score: 6 (6 votes)
President Trump as a Global Catastrophic Risk 2016-11-18T18:02:46.526Z · score: 14 (18 votes)

Comments

Comment by haydnbelfield on Hayden Wilkinson: Doing Good in an Infinite, Chaotic World · 2020-02-19T16:22:30.227Z · score: 3 (3 votes) · EA · GW

Good job Hayden, nice talk.

-Haydn

Comment by haydnbelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T19:37:13.957Z · score: 9 (6 votes) · EA · GW

Have included a paragraph up at the top that hopefully addresses (some of?) your concerns. As it says in the paragraph, thanks for your comments!

"Edit: This argument applies across the political spectrum. One of the best arguments for political party participation is similar to voting i.e. getting a say in the handful of leading political figures. We recommend that effective altruists consider this as a reason to join the party they are politically sympathetic towards in expectation of voting in future leadership contests. We're involved in the Labour Party - and Labour currently has a leadership election with only a week left to register to participate. So this post focuses on that as an example, and with a hope that if you're Labour-sympathetic you consider registering to participate. We definitely do not suggest registering to participate if you're not Labour-sympathetic. Don't be a 'hit and run entryist' (Thanks Greg for the comments!)."

Comment by haydnbelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T20:27:17.804Z · score: 8 (9 votes) · EA · GW

For the avoidance of any doubt: don't be a "hit and run entryist" - this post is not suggesting any such "scheme". If you're "indifferent or hostile to Labour Party politics" then I don't really know why you'd want to take part in the selection, and I don't recommend you try to join as a member.

The post says "You can always cancel your membership (though of course I'd rather you'd stay a member)." That's not advocating joining just to cancel - it's saying you're not bound in if you change your mind.


Comment by haydnbelfield on EA Organization Updates: November 2019 · 2019-12-19T00:52:39.732Z · score: 1 (1 votes) · EA · GW

Thanks for this. "Haydn Belfield published a report on global catastrophic risk (GCR) preparedness on CSER's GCR policy blog." - don't want to claim credit.

Should be "CSER published a report on how governments can better understand global catastrophic risk (GCR)."

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-28T22:20:38.438Z · score: 5 (4 votes) · EA · GW

Nice! Thanks

Comment by haydnbelfield on Are comment "disclaimers" necessary? · 2019-11-27T22:59:45.189Z · score: 13 (8 votes) · EA · GW

Oh Greg your words bounce like sunbeams and drip like honey

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-27T22:55:25.441Z · score: 9 (6 votes) · EA · GW

It would be really great if these were hyperlinks...

Would take some time, but might be useful for people gathering EA resources?

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-27T22:54:13.531Z · score: 6 (5 votes) · EA · GW

Naked Scientists (BBC radio show and podcast) have done a bunch of interviews with CSER researchers:

https://www.cser.ac.uk/news/naked-scientists-planet-b/

https://www.cser.ac.uk/news/haydn-belfield-interviewed-naked-scientists/

https://www.cser.ac.uk/news/workshop-featured-on-the-naked-scientists-podcast/

https://www.cser.ac.uk/news/podcast-countdown-artificial-intelligence/

https://www.cser.ac.uk/news/podcast-interviews-martin-rees/

Comment by haydnbelfield on Institutions for Future Generations · 2019-11-19T19:03:04.870Z · score: 9 (4 votes) · EA · GW

I was surprised not to see a reference to the main (only?) paper examining this question from an EA/'longtermist' perspective:

Jones, N., O'Brien, M., & Ryan, T. (2018). Representation of future generations in United Kingdom policy-making. Futures.

That paper led directly to the creation of the UK All-Party Parliamentary Group for Future Generations (an effort led by Natalie Jones and Tildy Stokes). The APPG is exploring precisely the questions you've raised. If you haven't reached out yet, here's the email: secretariat@appgfuturegenerations.com

Comment by haydnbelfield on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T17:17:33.749Z · score: 26 (16 votes) · EA · GW

Really, really good to see CEA engaging with and accepting criticism, and showing how it's trying to improve and is changing its policies.

Comment by haydnbelfield on 21 Recent Publications on Existential Risk (Sep 2019 update) · 2019-11-06T13:29:38.894Z · score: 2 (2 votes) · EA · GW

Similar but fewer, because Seán is a better academic than me. I was aware of the upper bound and vulnerable world papers.

Comment by haydnbelfield on What analysis has been done of space colonization as a cause area? · 2019-10-10T12:24:58.825Z · score: 4 (2 votes) · EA · GW

Overview piece:

https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex

Comment by haydnbelfield on A bunch of new GPI papers · 2019-09-25T23:27:09.021Z · score: 9 (4 votes) · EA · GW

These look super interesting! Looking forward to reading them.

What's the status of these papers? Some of them look like they're forthcoming, some don't - is the plan for all of them to be published? I'd find it helpful to know which have been peer-reviewed and which haven't.

Comment by haydnbelfield on An update on Operations Camp 2019 · 2019-09-19T10:36:59.536Z · score: 9 (5 votes) · EA · GW

Great stuff! Looking forward to more

Comment by haydnbelfield on Funding chains in the x-risk/AI safety ecosystem · 2019-09-11T01:36:36.615Z · score: 1 (1 votes) · EA · GW

Jaan has given to CSER

Comment by haydnbelfield on 'Longtermism' · 2019-07-26T12:53:54.200Z · score: 14 (15 votes) · EA · GW

Similar to Ollie and Larks, I'm slightly uncomfortable with

"(i) Those who live at future times matter just as much, morally, as those who live today;"

I'm pretty longtermist (I work on existential risk) but I'm not sure whether I think that those who live at future times matter "just as much, morally". I have some sympathy with the view that people nearer to us in space or time can matter more morally than those very distant - separately from the question of how much we can do to affect those people.

I also don't think it's necessary for the definition. A weaker definition would work just as well. Something like:

"(i) Those who live at future times matter morally".

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-02T17:28:04.497Z · score: 2 (2 votes) · EA · GW

Hi John, thanks for the very detailed response. My claim was that ecosystem shift is a "contributor" to existential risk: it should be examined to assess the extent to which it is a "risk factor" that increases other risks, one of a set of causes that may overwhelm societal resilience, and a mechanism by which other risks cause damage.

As I said in the first link, "humanity relies on ecosystems to provide ecosystem services, such as food, water, and energy. Sudden catastrophic ecosystem shifts could pose equally catastrophic consequences to human societies. Indeed environmental changes are associated with many historical cases of societal ‘collapses’; though the likelihood of occurrence of such events and the extent of their socioeconomic consequences remains uncertain."

I can't respond to your comment at the length it deserves, but we will be publishing papers on the potential link between ecosystem shifts and existential risk in the future, and I hope that they will address some of your points.

I'll email you with some related stuff.

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-01T21:49:41.887Z · score: -4 (10 votes) · EA · GW

Thanks for the question. Climate change is a contributor to existential risk. Changing what business schools teach (specifically to include sustainability) might change the behaviour of the next generation of business leaders.

See:

We also have further publications forthcoming on the link between climate change and existential risk.

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-01T21:46:54.942Z · score: 6 (11 votes) · EA · GW

Thanks for the question. Biodiversity loss and associated catastrophic ecosystem shifts are a contributor to existential risk. Partha's review may influence UK and international policy.

See:

We also have further publications forthcoming on the link between biodiversity and existential risk.

Comment by haydnbelfield on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-24T10:18:27.784Z · score: 1 (1 votes) · EA · GW

Cool, cheers.

Comment by haydnbelfield on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-23T16:07:46.793Z · score: 3 (3 votes) · EA · GW

Does anyone have any idea when we'll be able to embed YouTube videos on the forum?

Comment by haydnbelfield on Candidate Scoring System, First Release · 2019-03-08T22:55:56.591Z · score: 4 (3 votes) · EA · GW

Warren introduced the No First Use Act (“It is the policy of the United States to not use nuclear weapons first.”) and Gillibrand is a co-sponsor.

https://www.congress.gov/bill/116th-congress/senate-bill/272/cosponsors?q=%7B%22search%22%3A%5B%22warren+nuclear%22%5D%7D&r=1&s=1

https://www.vox.com/2019/2/11/18216686/elizabeth-warren-ban-nuclear-weapons-no-first-use

Comment by haydnbelfield on How can we influence the long-term future? · 2019-03-08T20:31:19.545Z · score: 5 (4 votes) · EA · GW

I don't really understand the conclusion this post is arguing for (or if indeed there is one). In particular, I didn't spot an answer to "how can we influence the long-term future?".

Comment by haydnbelfield on CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk' · 2018-10-02T17:20:10.221Z · score: 3 (3 votes) · EA · GW

If this research seems interesting to you, CSER is currently hiring! https://www.cser.ac.uk/news/hiring-APM/

Comment by haydnbelfield on CEA on community building, representativeness, and the EA Summit · 2018-08-21T20:25:27.494Z · score: 1 (1 votes) · EA · GW

"Cause areas shouldn't be tribes" "We shouldn't entrench existing cause areas" "Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder."

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

Comment by haydnbelfield on Open Thread #39 · 2017-11-02T21:08:55.661Z · score: 2 (2 votes) · EA · GW

The recent quality of posts has been absolutely stellar*. Keep it up everyone!

*interesting, varied, informative, written to be helpful/useful, rigorous, etc

Comment by haydnbelfield on Effective Altruism Grants project update · 2017-10-06T18:18:20.928Z · score: 2 (2 votes) · EA · GW

Really glad to see you taking conflicts of interest so seriously!

Comment by haydnbelfield on An Effective Altruist Message Test · 2017-04-03T19:06:22.237Z · score: 1 (3 votes) · EA · GW

This is incredibly valuable (and even groundbreaking) work. Well done for doing it, and for writing it up so clearly and informatively!

Comment by haydnbelfield on A Third Take on Trump · 2017-04-03T19:03:29.256Z · score: 1 (3 votes) · EA · GW

Thanks for this!

I personally agree that Democratic control of Congress, or even of Congress and the Presidency, would be great. But I'm not sure how likely that is, or how certain I should be about that likelihood.

Even if I were highly certain of a high likelihood, I probably still wouldn't take that option - the increased risk for four years is just too high. As Michael_S says, you get higher nuclear risk and higher pandemic risk. As I said in my post, I think Trump also raises the risks of increased global instability, increased international authoritarianism, climate change, and emerging technologies. Take climate change - we really don't have long to fix it! We need to make significant progress by 2030 - we can't afford to go backwards for four years.

[Writing in a personal capacity, my views are not my employer's]

Comment by haydnbelfield on What Should the Average EA Do About AI Alignment? · 2017-03-02T18:39:30.411Z · score: 2 (2 votes) · EA · GW

Whatever happened to EA Ventures?

Comment by haydnbelfield on EA Funds Beta Launch · 2017-02-28T18:30:30.869Z · score: 11 (11 votes) · EA · GW

This is a great idea and you've presented it fairly, clearly and persuasively. I've donated.

Comment by haydnbelfield on EA Funds Beta Launch · 2017-02-28T18:12:54.661Z · score: 4 (4 votes) · EA · GW

Peter's question was one I asked in the previous post as well. I'm pleased with this answer, thanks Tara.

Comment by haydnbelfield on EA Global 2017 Update · 2017-02-28T18:03:50.345Z · score: 0 (0 votes) · EA · GW

Excellent!

Comment by haydnbelfield on Some Thoughts on Public Discourse · 2017-02-24T13:27:49.429Z · score: 24 (24 votes) · EA · GW

Thanks for this! It's mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse - especially as this article seems to focus mostly on the costs/benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.

The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation, and clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.

As an organisation/individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of their work increases. It might be argued that when one is starting out the benefits of public discourse go mostly to oneself, and when one is established the benefits go mostly to others.

So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.

For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.

Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).

Comment by haydnbelfield on Introducing the EA Funds · 2017-02-09T18:22:44.473Z · score: 12 (8 votes) · EA · GW

Very interesting idea, and potentially really useful for the community (and me personally!).

What's the timeline for this?

I'm presuming that the Funds would be transparent about how much money is in them, how much has been given and why - is that the case? Also, as a starter, has Nick written about how much is/was in his Fund and how it's been spent?

Comment by haydnbelfield on Should effective altruism have a norm against donating to employers? · 2016-12-05T12:13:05.273Z · score: 2 (2 votes) · EA · GW

Were I working for an EA org this would be the decisive factor that would swing me, so it would be really good if we could work this out. Giving to another org adds Gift Aid to your donation (+20%). Forgoing salary saves you and your employer National Insurance (+29%).

So if you're a basic-rate taxpayer, is giving to your employer better value?
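
To make the arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. All the rates and function names are my own illustrative assumptions (roughly the UK figures of the time: basic income tax 20%, employee NI 12%, employer NI 13.8%, and a Gift Aid gross-up of 25% on a net donation), and the forgone-salary route assumes the employer passes on its NI saving - so treat this as a sketch of the calculation, not tax advice:

```python
# Back-of-the-envelope comparison of two ways to give, for a UK
# basic-rate taxpayer. All rates are illustrative assumptions
# (roughly the figures at the time), not statements of current tax law.

BASIC_RATE_TAX = 0.20   # income tax on gross salary (assumed)
EMPLOYEE_NI = 0.12      # employee National Insurance (assumed)
EMPLOYER_NI = 0.138     # employer National Insurance (assumed)
GIFT_AID_UPLIFT = 0.25  # charity reclaims basic-rate tax: £1 net -> £1.25

def via_gift_aid(net_donation: float) -> float:
    """Charity receipt when you donate take-home pay and claim Gift Aid."""
    return net_donation * (1 + GIFT_AID_UPLIFT)

def via_forgone_salary(net_cost: float) -> float:
    """Charity receipt when you forgo gross salary instead.

    Assumes the employer passes on both the salary and its own NI
    saving. net_cost is the take-home pay you give up.
    """
    gross = net_cost / (1 - BASIC_RATE_TAX - EMPLOYEE_NI)
    return gross * (1 + EMPLOYER_NI)

if __name__ == "__main__":
    net = 100.0
    print(f"£{net:.0f} net via Gift Aid:       £{via_gift_aid(net):.2f}")
    print(f"£{net:.0f} net via forgone salary: £{via_forgone_salary(net):.2f}")
```

Under these assumptions, forgoing salary delivers noticeably more per pound of net cost than donating with Gift Aid - but the answer turns on whether the employer really passes on its NI saving.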

Comment by haydnbelfield on President Trump as a Global Catastrophic Risk · 2016-11-21T12:16:02.321Z · score: 2 (2 votes) · EA · GW

Thanks for commenting! I'll try to answer your points in turn.

  1. Nuclear weapons: I was using the Cuban Missile Crisis as an example of a nuclear stand-off. I'm not saying a very similar crisis will occur, but that other stand-offs are possible in the future. Other examples of stand-offs include Nixon and the Yom Kippur War, or Reagan and Able Archer. There have been many 'close calls' and stand-offs over the years, and there could be one in the future, e.g. over the Baltics. Trump's character seems particularly ill-suited to nuclear stand-offs, and so increases risk.

  2. Pandemics: Many countries have had biological weapons programs - for example the US, UK, USSR, Japan, Germany, Iraq and South Africa. I agree that they're difficult to control and would likely hurt the country that used them as well as the target - but that hasn't stopped those countries. The development and use of biological weapons has been constrained by the Convention and surrounding norms. I think Trump threatens those norms, and so increases risk.

  3. Liberal global order: Very interesting fact about trade and war there, although she is looking at the period 1870-1938 and I'm talking about post-1945. And yes, I agree with you about democratic peace theory. My point is more general: the liberal global order has kept us safe - to take one example, we haven't had a serious great power war. Trump threatens that order, and so increases risk.

Comment by haydnbelfield on We are Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project, AUsA! · 2015-03-17T18:55:32.160Z · score: 5 (5 votes) · EA · GW

A link to the (very good!) 2015 Strategy might be helpful: http://globalprioritiesproject.org/wp-content/uploads/2015/03/GPP-Strategy-Overview-February-2015.pdf

Comment by haydnbelfield on We are Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project, AUsA! · 2015-03-17T18:49:09.343Z · score: 5 (5 votes) · EA · GW

How many people work full-time and part-time on GPP? What are your predictions for sustainable growth?

Do you model yourself as a think-tank?

What think-tanks have you looked at, spoken to, or modelled yourself upon?

Have you reached out to e.g. RUSI, BASIC, etc? Do you plan to?

What are your plans for the next a) 6 months b) year c) 5 years?

In what ways are you experimenting and iterating?

How many people have read your most popular content?

What are your next few marginal hires?

If a reader wants to work for GPP, what should they do/study/write/etc?

If a reader wants to help GPP, what should they do?

What would you do with a) £2,000 b) £10,000 c) £20,000?

What do you think your room-for-more-funding is?

You're based in the UK - there's about to be an election, then five years of a new government. How does that affect your plans?

When do you aim to influence debate, and policy - i.e. over what timescale? Are you trying to influence policy in 10 years, 20?

Who are the key decision-makers/stakeholders in your area? Have you mapped them out - how they relate, what their responsibilities are?

What Government Departments are you mainly interested in? Which are you monitoring? Are there any consultations open at the moment that you are submitting to? Same question for Parliamentary Committees.

Comment by haydnbelfield on One month in - it's time for more introductions · 2014-10-15T10:54:57.307Z · score: 7 (7 votes) · EA · GW

Hi everyone, I'm Haydn. I used to work at the Centre for Effective Altruism, now I work for a Member of the UK Parliament. Message me if you're interested in politics and EA.