Posts

Response to Phil Torres’ ‘The Case Against Longtermism’ 2021-03-08T18:09:57.419Z
Assessing Climate Change’s Contribution to Global Catastrophic Risk 2021-02-19T16:26:41.595Z
Alternatives to donor lotteries 2021-02-14T18:02:13.887Z
13 Recent Publications on Existential Risk (Jan 2021 update) 2021-02-08T12:42:17.694Z
Centre for the Study of Existential Risk Four Month Report June - September 2020 2020-12-02T18:33:42.374Z
4 Years Later: President Trump and Global Catastrophic Risk 2020-10-25T16:28:00.115Z
Centre for the Study of Existential Risk Newsletter June 2020 2020-07-02T14:03:07.303Z
11 Recent Publications on Existential Risk (June 2020 update) 2020-07-02T13:09:12.935Z
5 Recent Publications on Existential Risk (April 2020 update) 2020-04-29T09:37:40.792Z
Centre for the Study of Existential Risk Four Month Report October 2019 - January 2020 2020-04-08T13:28:13.479Z
19 Recent Publications on Existential Risk (Jan, Feb & Mar 2020 update) 2020-04-08T13:19:55.687Z
16 Recent Publications on Existential Risk (Nov & Dec 2019 update) 2020-01-15T12:07:42.000Z
The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) 2020-01-12T21:53:25.644Z
21 Recent Publications on Existential Risk (Sep 2019 update) 2019-11-05T14:26:31.698Z
Centre for the Study of Existential Risk Six Month Report April - September 2019 2019-09-30T19:20:24.798Z
Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 2019-05-01T15:34:20.425Z
Lecture Videos from Cambridge Conference on Catastrophic Risk 2019-04-23T16:03:21.275Z
CSER Advice to EU High-Level Expert Group on AI 2019-03-08T20:42:10.796Z
CSER and FHI advice to UN High-level Panel on Digital Cooperation 2019-03-08T20:39:29.657Z
Centre for the Study of Existential Risk: Six Month Report May-October 2018 2018-11-30T20:32:01.600Z
CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk' 2018-10-02T17:18:48.449Z
New Vacancy: Policy & AI at Cambridge University 2017-02-13T19:32:23.538Z
President Trump as a Global Catastrophic Risk 2016-11-18T18:02:46.526Z

Comments

Comment by HaydnBelfield on Pros and cons of working on near-term technical AI safety and assurance · 2021-06-18T13:05:34.025Z · EA · GW

There's been quite a bit written on the "pro" side:

https://www.cser.ac.uk/resources/bridging-concerns-about-ai/

https://www.cser.ac.uk/resources/bridging-gap-case-incompletely-theorized-agreement-ai-policy/ 

https://www.cser.ac.uk/resources/beyond-near-long-term/ 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444 

https://arxiv.org/abs/2012.08630 

https://forum.effectivealtruism.org/posts/oR9tLNRSAep293rr5/why-those-who-care-about-catastrophic-and-existential-risk-2 

Also ARCHES, Concrete Problems in AI Safety, etc.

But not so much on the "con" side - people have generally just thought about opportunity cost. Your point that it might speed up harmful applications (due to safety, misuse or structural risks) is a really useful and important one! It would be hard to weigh things up - we'd be getting into tricky differential technological development territory. Would love for there to be more thinking on this topic.

Comment by HaydnBelfield on Working in Parliament: How to get a job & have an impact · 2021-05-24T16:37:43.510Z · EA · GW

On the other hand, this isn't as much of a constraint in opposition. Political Advisors are like senior parliamentary researchers - everyone's part of one (tiny!) team.

Comment by HaydnBelfield on Working in Parliament: How to get a job & have an impact · 2021-05-24T16:35:57.481Z · EA · GW

This is a great overview, thanks for writing it up - more people should work for MPs!

Some other useful resources from 80,000 Hours on this topic: 
https://80000hours.org/career-reviews/party-politics-uk/ 
https://80000hours.org/2014/02/an-estimate-of-the-expected-influence-of-becoming-a-politician/ 
https://80000hours.org/2012/02/how-hard-is-it-to-become-prime-minister-of-the-united-kingdom/ 

Comment by HaydnBelfield on Draft report on existential risk from power-seeking AI · 2021-04-30T10:45:19.695Z · EA · GW

Oh and:

4. Cotra aims to predict when it will be possible for "a single computer program [to] perform a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution" - that is, a "growth rate [of the world economy of] 20%-30% per year if used everywhere it would be profitable to use".
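For scale (my own back-of-the-envelope arithmetic, not a figure from either report): growth at 20%-30% a year would shrink the world economy's doubling time from roughly two decades to about three years, since

$$t_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \approx \begin{cases} 23 \text{ years}, & g = 3\% \\ 3 \text{ years}, & g = 25\% \end{cases}$$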

Your scenario is premise 4 "Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives" (italics added).

Your bar is (much?) lower, so we should expect your scenario to come (much?) earlier.

Comment by HaydnBelfield on Draft report on existential risk from power-seeking AI · 2021-04-29T22:35:22.199Z · EA · GW

Hey Joe!

Great report, really fascinating stuff. Draws together lots of different writing on the subject, and I really like how you identify concerns that speak to different perspectives (eg to Drexler's CAIS and classic Bostrom superintelligence).

Three quick bits of feedback:

  1. I feel like some of Jess Whittlestone and collaborators' recent research would be helpful in your initial framing, eg 
    1. Prunkl, C. and Whittlestone, J. (2020). Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society. - on capability vs impact
    2.  Gruetzemacher, R. and Whittlestone, J. (2019). The Transformative Potential of Artificial Intelligence. - on different scales of impact 
    3. Cremer, C. Z., & Whittlestone, J. (2021). Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI. - on milestones and limitations
  2. I don't feel like you do quite enough to argue for premise 5: "Some of this power-seeking will scale (in aggregate) to the point of permanently disempowering ~all of humanity | (1)-(4)."
    This is, unfortunately, a pretty key premise and the one I have the most questions about! My impression is that section 6.3 is where that argumentation is intended to occur, but I didn't leave it with a sense of how you thought this would scale, disempower everyone, and be permanent. Would love for you to say more on this.
  3. On a related but distinct point, one thing I kept thinking was "does it matter that much if it's an AI system that takes over the world and disempowers most people?". Eg you set out in 6.3.1 a number of mechanisms by which an AI system could gain power - but 10 out of the 11 you give (all except Destructive capacity) seem relevant to a small group of humans in control of advanced capabilities too.
    Presumably we should be worried about a small group doing this as well? For example, consider a scenario in which a power-hungry small group, or several competing groups, use aligned AI systems with advanced capabilities (perhaps APS, perhaps not) to the point of permanently disempowering ~all of humanity.
    If I went through and find-replaced all the "PS-misaligned AI system" with "power-hungry small group", would it read that differently? To borrow Tegmark's terms, does it matter if it's the Omega Team or Prometheus?
    I'd be interested in seeing some more from you about whether you're also concerned about that scenario, whether you're more/less concerned, and how you think it's different from the AI system scenario.

Again, really loved the report, it is truly excellent work.

Comment by HaydnBelfield on What do you make of the doomsday argument? · 2021-03-19T20:28:04.938Z · EA · GW

Indeed. Seems supported by a quantum suicide argument - no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.

https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

Comment by HaydnBelfield on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-19T12:47:31.353Z · EA · GW

It's worth adding that Stephen Bush and Jeremy Cliffe at the New Statesman both do prediction posts and review them at the end of each year. The meme is spreading! They're also two of the best journalists to follow on UK Labour politics (Bush) and EU politics (Cliffe) - if you're interested in those topics, as I am.

https://www.newstatesman.com/politics/staggers/2020/12/what-i-got-right-and-wrong-2020

https://www.newstatesman.com/international/places/2020/12/january-i-made-ten-predictions-2020-how-did-they-turn-out

Comment by HaydnBelfield on Is Democracy a Fad? · 2021-03-16T11:17:48.742Z · EA · GW

I think the closest things we've got that are similar to this are:

Luke Muehlhauser's work on 'amateur macrohistory' https://lukemuehlhauser.com/industrial-revolution/ 

Peter Turchin's (more academic) Seshat database: http://seshatdatabank.info/ 

Comment by HaydnBelfield on Is Democracy a Fad? · 2021-03-15T11:50:35.281Z · EA · GW

I would say more optimistic. I think there's a pretty big difference between emergence (a shift from authoritarianism to democracy) and democratic backsliding, that is autocratisation (a shift from democracy to authoritarianism). Once the shift to democracy has consolidated, there are lots of changes that make it self-reinforcing/path-dependent: norms and identities shift, economic and political power shifts, political institutions shift, the role of the military shifts. Some factors are the same for emergence and persistence, like wealth/growth, but some (which I would say are pretty key) aren't, like getting authoritarian elites to accept democratisation.

Two books on emergence that I've found particularly interesting are 

  • The international dimensions of democratization: Europe and the Americas; edited by Laurence Whitehead 2001 (on underplayed international factors)
  • Conservative parties and the birth of democracy; Daniel Ziblatt 2017  (on buying off elites to accept this permanent change)

However as I said, the impact of AI systems does raise uncertainty, and is super fascinating.

Something I'm very concerned about, which I don't believe you touched on, is the fate of democracies after a civilizational collapse. I've got a book chapter coming out on this later this year, and I hope to be able to share a preprint of it.

Comment by HaydnBelfield on Is Democracy a Fad? · 2021-03-14T17:45:16.894Z · EA · GW

Interesting post! If you wanted to read into the comparative political science literature a little more, you might be interested in diving into the subfield of democratic backsliding (as opposed to emergence):

  • A third wave of autocratization is here: what is new about it? Lührmann & Lindberg  2019
  • How Democracies Die. Steven Levitsky and Daniel Ziblatt 2018
  • On Democratic Backsliding  Bermeo, Nancy 2016
  • Two Modes of Democratic Breakdown: A Competing Risks Analysis of Democratic Durability; Maeda, K. 2010
  • Authoritarian Reversals and Democratic Consolidation in American Political Science Review; Milan Svolik; 2008
  • Institutional Design and Democratic Consolidation in the Third World Timothy J. Power; Mark J. Gasiorowski; 04/1997
  • What Makes Democracies Endure? Jose Antonio Cheibub; Adam Przeworski; Fernando Papaterra Limongi Neto; Michael M. Alvarez 1996
  • The breakdown of democratic regimes: crisis, breakdown, and reequilibration Book  by Juan J. Linz 1978

One of the common threads in this subfield is that once a democracy has 'consolidated', it seems to be fairly resilient to coups and perhaps incumbent takeover.

I certainly agree that how this interacts with new AI systems - automation, surveillance and targeting/profiling, and autonomous weapons systems - is absolutely fascinating. For one early stab, you might be interested in my colleagues':

Comment by HaydnBelfield on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T16:44:59.256Z · EA · GW

That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon. 

Comment by HaydnBelfield on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-08T19:12:01.254Z · EA · GW

I'm really sorry to hear that from both of you, I agree it's a serious accusation. 

For longtermism as a whole, as I argued in the post, I don't understand describing it as white supremacy - like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.

Comment by HaydnBelfield on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-03-04T17:07:39.301Z · EA · GW

Sorry it's taking a while to get back to you!

In the meantime, you might be interested in this from our Catherine Richards: https://www.cser.ac.uk/resources/reframing-threat-global-warming/ 

Comment by HaydnBelfield on Assessing Climate Change’s Contribution to Global Catastrophic Risk · 2021-02-20T00:42:11.162Z · EA · GW

Thanks for the comment and these very useful links - will check with our food expert colleague and get back to you, especially on the probability question.

Just personally, however, let me note that we say that those four factors you mention are current 'sources of significant stress' for systems for the production and allocation of food - and we note that while 'global food productivity and production has increased dramatically' we are concerned about the 'vulnerability of our global food supply to rapid and global disruptions' and shocks. The three ways we describe climate change further reducing food security are growing conditions, agricultural pests and diseases, and the occurrence of extreme weather events.

Note also that the global catastrophe is the shock (hazard) plus how it cascades through interconnected systems with feedback. We're explicitly suggesting that the field move beyond 'is x a catastrophe?' to 'how does x affect critical systems, which can feed into one another, and may act more on our vulnerability and exposure than as a direct, single hazard?'.

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-19T11:36:51.335Z · EA · GW

Interesting! I would feel I had been quasi-randomly selected to allocate our shared pool of donations - and would definitely feel some obligation/responsibility.

As evidence that other people feel the same way, I would point to the extensive research and write-ups that previously selected allocators have done. A key explanation for why they've done that is a sense of obligation/responsibility for the group.

Comment by HaydnBelfield on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2021-02-17T19:11:52.061Z · EA · GW

As others have said, great piece! Well argued and evidenced, and on an important and neglected topic. I broadly agree with your point estimates for the three cases. 

I think it might be worth saying a bit more (perhaps in a separate section near the top) about why your estimates of survival are not higher. What explains the remaining 0.01-0.3 uncertainty? How could it lead 'directly' to extinction? In different sections you talk about WMD, food availability etc, but I would have found it useful to have all that together. That would allow you to address general reasons for uncertainty too. The most compelling single reason for me, for example, is the unprecedented nature of a global, post-industrial collapse.

On your suggestions for other research directions:

I'd be super interested in someone going through the old Cold War RAND reports from the 1950s and 1960s looking at collapse/recovery after nuclear war, and the wider literature on civil defence. Did the Soviets produce anything similar? I don't know! Going through the 'prepper' literature might also be useful? Perhaps as useful as sci-fi.

"For example, I think I’ve heard somewhere that places with higher levels of social trust have lower levels of looting, hoarding, and other antisocial disaster behavior." You're thinking of Aldrich, D. P. (2012). Building Resilience. University of Chicago Press. The wider field is disaster risk reduction (DRR).

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-17T18:53:00.707Z · EA · GW

Your policy seems reasonable, although I wonder if the analogy with a regular lottery might risk confusing people. When one thinks of "entering a regular lottery for charitable giving", one might think of additional money - money that counterfactually wouldn't have gone to charity. But that's not true of donor lotteries - there is no additional money.

On your second point: "making requests to pool money in a way that rich donors expect to lose control" describes the EA Funds, which I don't think are a scam. In fact, the EA funds pool money in such a way that donors are certain to lose control.

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-17T18:43:10.117Z · EA · GW

Hey thanks for the comment!

As mentioned, I'm offering a bunch of alternatives - not all of which I support - to help us examine our current system. 'Reverse-donation-weighted' in particular is more of a prompt to "why do we think donation-weighting is normal or unproblematic - what might we be missing out on or reinforcing with donation-weighting?" 

Note that the current 'donor lottery' is a form of random donor pooling - but with donation-weighting. I see donation-weighting as a weird halfway house between EA Funds and (threshold) Random Pooling. With donation-weighting you don't get the hiring process or expertise of EA Funds, and you get far fewer of the benefits of randomisation than with (threshold) Random Pooling.

The alternative I'm most sympathetic to (threshold random donor pooling in a cause-area) isn't affected by your second and third points. The allocator wouldn't be some rural-museums-obsessive, it would be a "typical well-informed EA" - and because it's within a cause area we could be even more sure it won't be spent on e.g. a rural museum. Threshold random donor pooling in a cause-area would expand the search space within global health, or within animal rights, etc. And finally, the threshold would prevent raids.

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-15T16:29:43.038Z · EA · GW

I'm sure you would be just as happy entering a regular lottery - you're one of the few people who could approach the ideal I mentioned of the "perfect rational maximising Homo economicus"!

For us lesser mortals though, there are two reasons we might be queasy about entering a regular lottery. First, if we're cautious/risk-sensitive - if we have a bias towards our donations being likely to do good. We might not feel comfortable being risk-neutral and just doing the expected value calculation. Second, if we're impatient/time-sensitive - for example, if we believe there's a particular window for donations open now that would not be open if we waited several years to win the lottery.

That's about approaching it as a regular lottery. But again, I really don't think we should be approaching these systems as matters just for individual donors. We've moved so far away from the "just maximise the impact of your own particular donation" perspective in other parts of EA! It's not just a matter for individuals - we as a community, through institutions like CEA, are supporting (logistically and through approval/sanction) some particular donor pooling systems and not others. It's worth considering what dynamics we could be reinforcing, and whether alternatives might be better.

-

On the benefits of pooling, I quite agree about the time:donation size ratio.

As I said: "Donor pooling has several advantages. First, it saves everyone’s time. There are also gains from specialisation – 1 allocator spending 50 hours researching the best opportunity will likely produce better results than 50 donors spending 1 hour. Second, there are opportunities that are only available to an allocator with a large pool. Charities are more willing to provide information and spend time on discussions."

If you've got a $5k donation, it's not worth spending as much time on - so maybe you should just donate to a pool with a predetermined allocator(s), e.g. the EA Funds. If you've pooled your donations with others and have $100k, it is worth spending more time on the allocator and allocation decision. But then why not 1) have an internal discussion/consensus/vote on who should be allocator or 2) randomise who gets to take the "delegate or allocate" decision? Why adopt this weird halfway house where people who have donated more to the pot have a greater chance of being selected - thereby sacrificing many of the benefits of discussion on the one hand, or randomisation on the other?
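To make the contrast concrete, here's a minimal sketch of the two selection rules (illustrative Python; the donors and amounts are made up, and I'm assuming the threshold means a minimum donation size for eligibility):

```python
import random

# Hypothetical pool: donor -> donation size ($)
pool = {"Alice": 50_000, "Bob": 30_000, "Carol": 15_000, "Dan": 5_000}

def donation_weighted_allocator(pool):
    # Current donor lottery: chance of selection is proportional to donation,
    # so Alice is picked 50% of the time (50k out of 100k total).
    donors = list(pool)
    return random.choices(donors, weights=[pool[d] for d in donors], k=1)[0]

def threshold_random_allocator(pool, threshold=10_000):
    # Threshold random pooling: every donor at or above the threshold has an
    # equal chance, so Alice, Bob and Carol are each picked 1/3 of the time.
    eligible = [donor for donor, amount in pool.items() if amount >= threshold]
    return random.choice(eligible)

print(donation_weighted_allocator(pool))
print(threshold_random_allocator(pool))
```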

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-15T11:29:08.769Z · EA · GW

Thanks for your comment. I'm not entirely sure I understand what you mean by dominant action, so if you don't mind saying more about that I'd appreciate it.

My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one. Even if people are taking individual actions comparing 'donate to x directly' or 'donate to a lottery, then to x', those individual decisions create a collective institution, for which the question of group optimality is relevant. Also, the EA community (+CEA) is not just endorsing this system, it's providing a lot of logistical support. So the questions of what its effects are and how we should be structuring it are key ones.

On another note, I don't know enough about game theory to phrase this intuition correctly, but something seems off about the suggestion that it's dominant for each of the donors. E.g. if there are 10 donors in a pool, only one of them is going to be selected. They can't all 'win'. Feels a bit like defect being dominant in a prisoner's dilemma. But again, I could be misunderstanding.

My understanding is that people selected in the past to allocate the pool haven't tended to delegate that allocation power. And indeed, if you're strongly expecting to do so, why not just give the allocation power to that person beforehand, either over your individual donation (e.g. through an EA Fund) or over a pool? Why go through the lottery stage?

Comment by HaydnBelfield on Alternatives to donor lotteries · 2021-02-15T11:06:49.437Z · EA · GW

Thank you, very kind!

Comment by HaydnBelfield on 13 Recent Publications on Existential Risk (Jan 2021 update) · 2021-02-14T17:57:41.266Z · EA · GW

Great catch, thanks - fixed!

Comment by HaydnBelfield on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T12:42:30.920Z · EA · GW

[Disclosure: I work for CSER]

I completely agree that BERI is a great organisation and a good choice. However, I will also just briefly note that FHI, CHAI and CSER (like any academic groups) are always open to receiving donations:

FHI: https://www.fhi.ox.ac.uk/support-fhi/

CSER: https://www.philanthropy.cam.ac.uk/give-to-cambridge/centre-for-the-study-of-existential-risk?table=departmentprojects&id=452 

CHAI: If you wanted to donate to them, here is the relevant web page. Unfortunately it is apparently broken at the time of writing - they tell me any donation via credit card can be made by calling the Gift Services Department on 510-643-9789. 

Comment by HaydnBelfield on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T20:59:49.471Z · EA · GW

FYI, if you dig into AI researchers' attitudes in surveys, they hate lethal autonomous weapons and really don't want to work on them. Will dig up reports, but for now check out: https://futureoflife.org/laws-pledge/ 

Comment by HaydnBelfield on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T20:57:39.188Z · EA · GW

Strongly upvoted, definitely agree!

Comment by HaydnBelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:56:21.010Z · EA · GW

Thanks Pablo, yes it's my view too that Trump was miscalibrated and showed poor decision-making on Ebola and COVID-19, because of his populism and disregard for science and international cooperation.

Comment by HaydnBelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:53:37.608Z · EA · GW

Thanks Stefan, yes this is my view too: "default view would be that it says little about global trends in levels of authoritarianism". I simply gave a few illustrative examples to underline the wider statistical point, and highlight a few causal mechanisms (e.g. demonstration effect, Bannon's transnational campaigning).

Comment by HaydnBelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:51:15.158Z · EA · GW

Hi Dale,

Thanks for reading and responding. I certainly tried to review the ways Trump had been better than the worst case scenario: e.g. on nuclear use or bioweapons. Let me respond to a few points you raised (though I think we might continue to disagree!)

Authoritarianism and pandemic response - I'll respond under Pablo and Stefan's comments. However, just on social progress, my point was just 'one of the reasons authoritarianism around the world is bad is that it limits social progress' - I didn't make a prediction about how social progress would fare under Trump.

Nuclear use and bioweapons - as I say in the post, there has been no bioweapons development (that we know of) or nuclear use. However, I don't think it's accurate to say this is a 'worry that didn't happen'. My point throughout this post and the last one was that Trump has raised/will raise risk. An increase from a 10% to a 20% chance is a big deal if what we're talking about is a catastrophe, and the fact that an event did not occur does not show that the risk did not increase.

On nuclear proliferation, you said "I am not aware of any of these countries acquiring any nuclear weapons, or even making significant progress", but as I said in this post, North Korea has advanced their nuclear capabilities and Iran resumed uranium enrichment after Trump pulled out of the Iran Deal.

Thanks again, Haydn

Comment by HaydnBelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:37:11.780Z · EA · GW

Hi Ian, 

Thanks for the update on your predictions! Really interesting points about the political landscape.

On your point 1 + authoritarianism, I agree with lots of your points. I think four years ago a lot of us (including me!) were worried about Trump and personal/presidential undermining of the rule of law/norms/democracy, enabled by the Republicans, when we should have been just as worried about a general minoritarian push from McConnell and the rest of the Republicans, enabled by Trump.

On climate change, my intention wasn't to imply stasis/inaction rather than active rollback - I do agree things have gotten worse, and your examples of the EPA and the Dept of the Interior make that case well.

Comment by HaydnBelfield on EA Organization Updates: September 2020 · 2020-10-22T08:54:18.654Z · EA · GW

Reading this was so inspiring and cool!

I think we could probably add a $25m pro-Biden ad buy from Dustin Moskovitz & Cari Tuna, and Sam Bankman-Fried.

https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley

Comment by HaydnBelfield on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:33:27.089Z · EA · GW

[minor, petty, focussing directly on the proposed subject point]

In this discussion, many people have described the subject of the talk as "tort law reform". This risks sounding technocratic or minor.

The actual subject (see video) is a libertarian proposal to replace the entirety of the criminal law system with a private, corporate system with far fewer limits on torture and constitutional rights. While neglected, this proposal is unimportant (and worse, actively harmful) and completely intractable.

The 17 people who were interested in attending didn't miss out on hearing about the next great cause X.

Comment by HaydnBelfield on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:17:43.597Z · EA · GW

I think I have a different view on the purpose of local group events than Larks. They're not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.

They're primarily about attracting people to effective altruism. They're about recruitment, persuasion, raising awareness and interest, starting people on the funnel, deepening engagement etc etc.

So it's good not to have a speaker at your event who is going to repel the people you want to attract.

Comment by HaydnBelfield on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T12:52:41.232Z · EA · GW

New paper: Personality and moral judgment: Curious consequentialists and polite deontologists https://psyarxiv.com/73bfv/

"We have provided the first examination of how the domains and aspects of the Big Five traits are linked with moral judgment.

In both of our studies, the intellect aspect of openness/intellect was the strongest predictor of consequentialist inclinations after holding constant other personality traits. Thus, intellectually curious people—those who are motivated to explore and reflect upon abstract ideas—are more inclined to judge the morality of behaviors according to the consequences they produce.

Our other main finding, which emerged very consistently across both studies and our different indices of moral judgment, was a unique association between politeness and stronger deontological inclinations. This means that individuals who are more courteous, respectful, and adherent to salient social norms, tend to judge the morality of an action not by its consequences, but rather by its alignment with particular moral rules, duties, or rights."

Comment by HaydnBelfield on AI Governance: Opportunity and Theory of Impact · 2020-09-25T19:24:27.231Z · EA · GW

Thanks for this, I found this really useful! Will be referring back to it quite a bit I imagine.

I would say researchers working on AI governance at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, University of Cambridge (where I work) would agree with a lot of your framing of the risks, pathways, and theory of impact.

Personally, I find it helpful to think about our strategy under four main points (which I think has a lot in common with the 'field-building model'):

1. Understand - study and better understand risks and impacts.

2. Solutions - develop ideas for solutions, interventions, strategies and policies in collaboration with policy-makers and technologists.

3. Impact - implement those strategies through extensive engagement.

4. Field-build - foster a global community of academics, technologists and policy-makers working on these issues.

Comment by HaydnBelfield on Quantifying the probability of existential catastrophe: A reply to Beard et al. · 2020-08-13T12:58:57.346Z · EA · GW

Going further down the rabbit-hole, Simon Beard, Thomas Rowe, and James Fox replied to Seth's reply!

https://www.cser.ac.uk/resources/existential-risk-assessment-reply-baum/

Highlights

  • Seth Baum’s reply to our paper “An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards” makes a very valuable contribution to this literature.
  • We raise some concerns about the definitions of terms like ‘existential catastrophe’ and how they can be both normative and non-normative.
  • While accepting Baum’s contention that there is a trade-off between rigour and accessibility of methods, we show how the community of existential risk studies could easily improve in relation to both these desiderata.
  • Finally we discuss the importance of context within which quantification of the likelihood of existential hazards takes place, and how this impacts on the appropriateness of different kinds of claim.

Abstract

We welcome Seth Baum's reply to our paper. While we are in broad agreement with him on the range of topics covered, this particular field of research remains very young and undeveloped and we think that there are many points on which further reflection is needed. We briefly discuss three: the normative aspects of terms like 'existential catastrophe,' the opportunities for low hanging fruit in method selection and application and the importance of context when making probability claims.

Comment by HaydnBelfield on EA Meta Fund Grants – July 2020 · 2020-08-13T12:54:14.839Z · EA · GW

I really appreciate your recognition of this - really positive!

"it's hard to publish critiques of organizations or the work of particular people without harming someone's reputation or otherwise posing a risk to the careers of the people involved. I also agree with you that it's useful to find ways to talk about risks and reservations. One potential solution is to talk about the issues in an anonymized, aggregate manner."

Comment by HaydnBelfield on Are there superforecasts for existential risk? · 2020-07-07T22:28:45.087Z · EA · GW

You might be interested in these two papers:

Identifying and Assessing the Drivers of Global Catastrophic Risk by Simon Beard & Phil Torres.

An Analysis and Evaluation of Methods Currently Used to Quantify the Likelihood of Existential Hazards by Simon Beard, Thomas Rowe & James Fox.

Comment by HaydnBelfield on Gordon Irlam: an effective altruist ahead of his time · 2020-06-12T10:13:44.851Z · EA · GW

Completely agree! I'd also emphasise some really important early donations to Giving What We Can and GCRI. From https://www.gricf.org/annual-report.html

"Summarizing the funding provided by the foundation for 2000-2019:

RESULTS Educational Fund - $682,603 (39%)

Global Catastrophic Risk Institute (c/o Social & Environmental Entrepreneurs) - $326,043 (19%)

Keep Antibiotics Working (c/o Food Animal Concerns Trust) - $135,000 (8%)

Institute for One World Health - $123,100 (7%)

Future of Humanity Institute (c/o Americans for Oxford Inc) - $120,000 (7%)

Knowledge Ecology International - $100,000 (6%)

Health GAP - $66,000 (4%)

Machine Intelligence Research Institute - $55,000 (3%)

Giving What We Can (c/o Centre for Effective Altruism USA Inc) - $50,000 (3%)

Kids International Dental Services - $24,000 (1%)

Total - $1,735,558.04 (100%) "

Comment by HaydnBelfield on Hayden Wilkinson: Doing good in an infinite, chaotic world · 2020-02-19T16:22:30.227Z · EA · GW

Good job Hayden, nice talk.

-Haydn

Comment by HaydnBelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T19:37:13.957Z · EA · GW

Have included a paragraph up at the top that hopefully addresses (some of?) your concerns. As it says in the paragraph, thanks for your comments!

"Edit: This argument applies across the political spectrum. One of the best arguments for political party participation is similar to voting i.e. getting a say in the handful of leading political figures. We recommend that effective altruists consider this as a reason to join the party they are politically sympathetic towards in expectation of voting in future leadership contests. We're involved in the Labour Party - and Labour currently has a leadership election with only a week left to register to participate. So this post focuses on that as an example, and with a hope that if you're Labour-sympathetic you consider registering to participate. We definitely do not suggest registering to participate if you're not Labour-sympathetic. Don't be a 'hit and run entryist' (Thanks Greg for the comments!)."

Comment by HaydnBelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T20:27:17.804Z · EA · GW

For the avoidance of any doubt: don't be a "hit and run entryist", this post is not suggesting such a "scheme". If you're "indifferent or hostile to Labour Party politics" then I don't really know why you'd want to be part of the selection, and don't recommend you try and join as a member.

The post says "You can always cancel your membership (though of course I'd rather you'd stay a member)." That's not advocating joining just to cancel - it's saying you're not bound in if you change your mind.

Comment by HaydnBelfield on EA Organization Updates: November 2019 · 2019-12-19T00:52:39.732Z · EA · GW

Thanks for this. "Haydn Belfield published a report on global catastrophic risk (GCR) preparedness on CSER's GCR policy blog." - don't want to claim credit.

Should be "CSER published a report on how governments can better understand global catastrophic risk (GCR)."

Comment by HaydnBelfield on A list of EA-related podcasts · 2019-11-28T22:20:38.438Z · EA · GW

Nice! Thanks

Comment by HaydnBelfield on Are comment "disclaimers" necessary? · 2019-11-27T22:59:45.189Z · EA · GW

Oh Greg your words bounce like sunbeams and drip like honey

Comment by HaydnBelfield on A list of EA-related podcasts · 2019-11-27T22:55:25.441Z · EA · GW

It would be really great if these were hyperlinks...

Would take some time, but might be useful for people gathering EA resources?

Comment by HaydnBelfield on A list of EA-related podcasts · 2019-11-27T22:54:13.531Z · EA · GW

Naked Scientists (BBC radio show and podcast) have done a bunch of interviews with CSER researchers:

https://www.cser.ac.uk/news/naked-scientists-planet-b/

https://www.cser.ac.uk/news/haydn-belfield-interviewed-naked-scientists/

https://www.cser.ac.uk/news/workshop-featured-on-the-naked-scientists-podcast/

https://www.cser.ac.uk/news/podcast-countdown-artificial-intelligence/

https://www.cser.ac.uk/news/podcast-interviews-martin-rees/

Comment by HaydnBelfield on Institutions for Future Generations · 2019-11-19T19:03:04.870Z · EA · GW

I was surprised not to see a reference to the main (only?) paper examining this question from an EA/'longtermist' perspective:

Natalie Jones, Mark O'Brien, Thomas Ryan. (2018). Representation of future generations in United Kingdom policy-making. Futures.

Which led directly to the creation of the UK All-Party Parliamentary Group for Future Generations (an effort led by Natalie Jones and Tildy Stokes). The APPG is exploring precisely the questions you've raised. If you haven't reached out yet, here's the email: secretariat@appgfuturegenerations.com

Comment by HaydnBelfield on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T17:17:33.749Z · EA · GW

Really really good to see CEA engaging with and accepting criticism, and showing how it's trying and is changing policies.

Comment by HaydnBelfield on 21 Recent Publications on Existential Risk (Sep 2019 update) · 2019-11-06T13:29:38.894Z · EA · GW

Similar but fewer, cos Seán is a better academic than me. I was aware of upper bound and vulnerable world.

Comment by HaydnBelfield on What analysis has been done of space colonization as a cause area? · 2019-10-10T12:24:58.825Z · EA · GW

https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex

Overview piece