Posts

4 Years Later: President Trump and Global Catastrophic Risk 2020-10-25T16:28:00.115Z · score: 23 (21 votes)
Centre for the Study of Existential Risk Newsletter June 2020 2020-07-02T14:03:07.303Z · score: 17 (4 votes)
11 Recent Publications on Existential Risk (June 2020 update) 2020-07-02T13:09:12.935Z · score: 14 (5 votes)
5 Recent Publications on Existential Risk (April 2020 update) 2020-04-29T09:37:40.792Z · score: 23 (9 votes)
Centre for the Study of Existential Risk Four Month Report October 2019 - January 2020 2020-04-08T13:28:13.479Z · score: 8 (3 votes)
19 Recent Publications on Existential Risk (Jan, Feb & Mar 2020 update) 2020-04-08T13:19:55.687Z · score: 13 (6 votes)
16 Recent Publications on Existential Risk (Nov & Dec 2019 update) 2020-01-15T12:07:42.000Z · score: 20 (7 votes)
The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) 2020-01-12T21:53:25.644Z · score: 9 (25 votes)
21 Recent Publications on Existential Risk (Sep 2019 update) 2019-11-05T14:26:31.698Z · score: 31 (16 votes)
Centre for the Study of Existential Risk Six Month Report April - September 2019 2019-09-30T19:20:24.798Z · score: 14 (5 votes)
Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 2019-05-01T15:34:20.425Z · score: 10 (13 votes)
Lecture Videos from Cambridge Conference on Catastrophic Risk 2019-04-23T16:03:21.275Z · score: 15 (9 votes)
CSER Advice to EU High-Level Expert Group on AI 2019-03-08T20:42:10.796Z · score: 13 (4 votes)
CSER and FHI advice to UN High-level Panel on Digital Cooperation 2019-03-08T20:39:29.657Z · score: 22 (7 votes)
Centre for the Study of Existential Risk: Six Month Report May-October 2018 2018-11-30T20:32:01.600Z · score: 26 (15 votes)
CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk' 2018-10-02T17:18:48.449Z · score: 9 (9 votes)
New Vacancy: Policy & AI at Cambridge University 2017-02-13T19:32:23.538Z · score: 6 (6 votes)
President Trump as a Global Catastrophic Risk 2016-11-18T18:02:46.526Z · score: 22 (20 votes)

Comments

Comment by haydnbelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:56:21.010Z · score: 3 (2 votes) · EA · GW

Thanks Pablo, yes it's my view too that Trump was miscalibrated and showed poor decision-making on Ebola and COVID-19, because of his populism and disregard for science and international cooperation.

Comment by haydnbelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:53:37.608Z · score: 1 (1 votes) · EA · GW

Thanks Stefan, yes this is my view too: "default view would be that it says little about global trends in levels of authoritarianism". I simply gave a few illustrative examples to underline the wider statistical point, and highlight a few causal mechanisms (e.g. demonstration effect, Bannon's transnational campaigning).

Comment by haydnbelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:51:15.158Z · score: 3 (2 votes) · EA · GW

Hi Dale,

Thanks for reading and responding. I certainly tried to review the ways Trump had been better than the worst-case scenario: e.g. on nuclear use or bioweapons. Let me respond to a few points you raised (though I think we might continue to disagree!).

Authoritarianism and pandemic response - I'll comment on Pablo and Stefan's comments. However, just on social progress, my point was just that 'one of the reasons authoritarianism around the world is bad is that it limits social progress' - I didn't make a prediction about how social progress would fare under Trump.

Nuclear use and bioweapons - as I say in the post, there hasn't been bioweapons development (that we know of) or nuclear use. However, I don't think it's accurate to say this is a 'worry that didn't happen'. My point throughout this post and the last one was that Trump has raised, and will continue to raise, risk. An increase from a 10% to a 20% chance is a big deal if what we're talking about is a catastrophe, and the fact that an event did not occur does not show that the risk did not increase.

On nuclear proliferation, you said "I am not aware of any of these countries acquiring any nuclear weapons, or even making significant progress", but as I said in this post, North Korea has advanced their nuclear capabilities and Iran resumed uranium enrichment after Trump pulled out of the Iran Deal.

Thanks again, Haydn

Comment by haydnbelfield on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T14:37:11.780Z · score: 2 (2 votes) · EA · GW

Hi Ian, 

Thanks for the update on your predictions! Really interesting points about the political landscape.

On your point 1 + authoritarianism, I agree with lots of your points. I think four years ago a lot of us (including me!) were worried about Trump and personal/presidential undermining of the rule of law/norms/democracy, enabled by the Republicans, when we should have been just as worried about a general minoritarian push from McConnell and the rest of the Republicans, enabled by Trump.

On climate change, my intention wasn't to imply stasis/inaction rather than active rollback - I do agree things have gotten worse, and your examples of the EPA and the Dept of the Interior make that case well.

Comment by haydnbelfield on EA Organization Updates: September 2020 · 2020-10-22T08:54:18.654Z · score: 6 (4 votes) · EA · GW

Reading this was so inspiring and cool!

I think we could probably add a $25m pro-Biden ad buy from Dustin Moskovitz & Cari Tuna, and Sam Bankman-Fried.

https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley

Comment by haydnbelfield on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:33:27.089Z · score: 6 (14 votes) · EA · GW

[minor, petty, focussing directly on the proposed subject point]

In this discussion, many people have described the subject of the talk as "tort law reform". This risks sounding technocratic or minor.

The actual subject (see video) is a libertarian proposal to replace the entirety of the criminal law system with a private, corporate system with far fewer limits on torture and constitutional rights. While neglected, this proposal is unimportant (and worse, actively harmful) and completely intractable.

The 17 people who were interested in attending didn't miss out on hearing about the next great cause X.

Comment by haydnbelfield on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:17:43.597Z · score: 31 (18 votes) · EA · GW

I think I have a different view on the purpose of local group events than Larks. They're not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.

They're primarily about attracting people to effective altruism. They're about recruitment, persuasion, raising awareness and interest, starting people down the funnel, deepening engagement, and so on.

So it's good not to have a speaker at your event who is going to repel the people you want to attract.

Comment by haydnbelfield on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T12:52:41.232Z · score: 8 (5 votes) · EA · GW

New paper: Personality and moral judgment: Curious consequentialists and polite deontologists https://psyarxiv.com/73bfv/

"We have provided the first examination of how the domains and aspects of the Big Five traits are linked with moral judgment.

In both of our studies, the intellect aspect of openness/intellect was the strongest predictor of consequentialist inclinations after holding constant other personality traits. Thus, intellectually curious people—those who are motivated to explore and reflect upon abstract ideas—are more inclined to judge the morality of behaviors according to the consequences they produce.

Our other main finding, which emerged very consistently across both studies and our different indices of moral judgment, was a unique association between politeness and stronger deontological inclinations. This means that individuals who are more courteous, respectful, and adherent to salient social norms, tend to judge the morality of an action not by its consequences, but rather by its alignment with particular moral rules, duties, or rights."

Comment by haydnbelfield on AI Governance: Opportunity and Theory of Impact · 2020-09-25T19:24:27.231Z · score: 5 (4 votes) · EA · GW

Thanks for this, I found this really useful! Will be referring back to it quite a bit I imagine.

I would say researchers working on AI governance at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, University of Cambridge (where I work) would agree with a lot of your framing of the risks, pathways, and theory of impact.

Personally, I find it helpful to think about our strategy under four main points (which I think has a lot in common with the 'field-building model'):

1. Understand - study and better understand risks and impacts.

2. Solutions - develop ideas for solutions, interventions, strategies and policies in collaboration with policy-makers and technologists.

3. Impact - implement those strategies through extensive engagement.

4. Field-build - foster a global community of academics, technologists and policy-makers working on these issues.

Comment by haydnbelfield on Quantifying the probability of existential catastrophe: A reply to Beard et al. · 2020-08-13T12:58:57.346Z · score: 6 (4 votes) · EA · GW

Going further down the rabbit-hole, Simon Beard, Thomas Rowe, and James Fox replied to Seth's reply!

https://www.cser.ac.uk/resources/existential-risk-assessment-reply-baum/

Highlights

  • Seth Baum’s reply to our paper “An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards” makes a very valuable contribution to this literature.
  • We raise some concerns about the definitions of terms like ‘existential catastrophe’ and how they can be both normative and non-normative.
  • While accepting Baum’s contention that there is a trade-off between rigour and accessibility of methods, we show how the community of existential risk studies could easily improve in relation to both these desiderata.
  • Finally we discuss the importance of context within which quantification of the likelihood of existential hazards takes place, and how this impacts on the appropriateness of different kinds of claim.

Abstract

We welcome Seth Baum's reply to our paper. While we are in broad agreement with him on the range of topics covered, this particular field of research remains very young and undeveloped and we think that there are many points on which further reflection is needed. We briefly discuss three: the normative aspects of terms like 'existential catastrophe,' the opportunities for low hanging fruit in method selection and application and the importance of context when making probability claims.

Comment by haydnbelfield on EA Meta Fund Grants – July 2020 · 2020-08-13T12:54:14.839Z · score: 6 (4 votes) · EA · GW

I really appreciate your recognition of this - really positive!

"it's hard to publish critiques of organizations or the work of particular people without harming someone's reputation or otherwise posing a risk to the careers of the people involved. I also agree with you that it's useful to find ways to talk about risks and reservations. One potential solution is to talk about the issues in an anonymized, aggregate manner."

Comment by haydnbelfield on Are there superforecasts for existential risk? · 2020-07-07T22:28:45.087Z · score: 4 (3 votes) · EA · GW

You might be interested in these two papers:

Identifying and Assessing the Drivers of Global Catastrophic Risk by Simon Beard & Phil Torres.

An Analysis and Evaluation of Methods Currently Used to Quantify the Likelihood of Existential Hazards by Simon Beard, Thomas Rowe & James Fox.

Comment by haydnbelfield on Gordon Irlam: an effective altruist ahead of his time · 2020-06-12T10:13:44.851Z · score: 18 (9 votes) · EA · GW

Completely agree! I'd also emphasise some really important early donations to Giving What We Can and GCRI. From https://www.gricf.org/annual-report.html

"Summarizing the funding provided by the foundation for 2000-2019:

RESULTS Educational Fund - $682,603 (39%)

Global Catastrophic Risk Institute (c/o Social & Environmental Entrepreneurs) - $326,043 (19%)

Keep Antibiotics Working (c/o Food Animal Concerns Trust) - $135,000 (8%)

Institute for One World Health - $123,100 (7%)

Future of Humanity Institute (c/o Americans for Oxford Inc) - $120,000 (7%)

Knowledge Ecology International - $100,000 (6%)

Health GAP - $66,000 (4%)

Machine Intelligence Research Institute - $55,000 (3%)

Giving What We Can (c/o Centre for Effective Altruism USA Inc) - $50,000 (3%)

Kids International Dental Services - $24,000 (1%)

Total - $1,735,558.04 (100%) "

Comment by haydnbelfield on Hayden Wilkinson: Doing Good in an Infinite, Chaotic World · 2020-02-19T16:22:30.227Z · score: 6 (4 votes) · EA · GW

Good job Hayden, nice talk.

-Haydn

Comment by haydnbelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T19:37:13.957Z · score: 9 (6 votes) · EA · GW

Have included a paragraph up at the top that hopefully addresses (some of?) your concerns. As it says in the paragraph, thanks for your comments!

"Edit: This argument applies across the political spectrum. One of the best arguments for political party participation is similar to voting i.e. getting a say in the handful of leading political figures. We recommend that effective altruists consider this as a reason to join the party they are politically sympathetic towards in expectation of voting in future leadership contests. We're involved in the Labour Party - and Labour currently has a leadership election with only a week left to register to participate. So this post focuses on that as an example, and with a hope that if you're Labour-sympathetic you consider registering to participate. We definitely do not suggest registering to participate if you're not Labour-sympathetic. Don't be a 'hit and run entryist' (Thanks Greg for the comments!)."

Comment by haydnbelfield on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T20:27:17.804Z · score: 8 (9 votes) · EA · GW

For the avoidance of any doubt: don't be a "hit and run entryist", this post is not suggesting such a "scheme". If you're "indifferent or hostile to Labour Party politics" then I don't really know why you'd want to be part of the selection, and don't recommend you try and join as a member.

The post says "You can always cancel your membership (though of course I'd rather you'd stay a member)." That's not advocating joining just to cancel - it's saying you're not bound in if you change your mind.


Comment by haydnbelfield on EA Organization Updates: November 2019 · 2019-12-19T00:52:39.732Z · score: 1 (1 votes) · EA · GW

Thanks for this. "Haydn Belfield published a report on global catastrophic risk (GCR) preparedness on CSER's GCR policy blog." - don't want to claim credit.

Should be "CSER published a report on how governments can better understand global catastrophic risk (GCR)."

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-28T22:20:38.438Z · score: 5 (4 votes) · EA · GW

Nice! Thanks

Comment by haydnbelfield on Are comment "disclaimers" necessary? · 2019-11-27T22:59:45.189Z · score: 13 (8 votes) · EA · GW

Oh Greg your words bounce like sunbeams and drip like honey

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-27T22:55:25.441Z · score: 9 (6 votes) · EA · GW

It would be real great if these were hyperlinks...

Would take some time, but might be useful for people gathering EA resources?

Comment by haydnbelfield on A list of EA-related podcasts · 2019-11-27T22:54:13.531Z · score: 7 (6 votes) · EA · GW

Naked Scientists (BBC radio show and podcast) have done a bunch of interviews with CSER researchers:

https://www.cser.ac.uk/news/naked-scientists-planet-b/

https://www.cser.ac.uk/news/haydn-belfield-interviewed-naked-scientists/

https://www.cser.ac.uk/news/workshop-featured-on-the-naked-scientists-podcast/

https://www.cser.ac.uk/news/podcast-countdown-artificial-intelligence/

https://www.cser.ac.uk/news/podcast-interviews-martin-rees/

Comment by haydnbelfield on Institutions for Future Generations · 2019-11-19T19:03:04.870Z · score: 10 (5 votes) · EA · GW

I was surprised not to see a reference to the main (only?) paper examining this question from an EA/'longtermist' perspective:

Natalie Jones, Mark O'Brien, Thomas Ryan. (2018). Representation of future generations in United Kingdom policy-making. Futures.

Which led directly to the creation of the UK All-Party Parliamentary Group for Future Generations (an effort led by Natalie Jones and Tildy Stokes). The APPG is exploring precisely the questions you've raised. If you haven't reached out yet, here's the email: secretariat@appgfuturegenerations.com

Comment by haydnbelfield on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T17:17:33.749Z · score: 26 (16 votes) · EA · GW

Really, really good to see CEA engaging with and accepting criticism, and showing that it's trying and is changing policies.

Comment by haydnbelfield on 21 Recent Publications on Existential Risk (Sep 2019 update) · 2019-11-06T13:29:38.894Z · score: 2 (2 votes) · EA · GW

Similar but fewer, cos Seán is a better academic than me. I was aware of upper bound and vulnerable world.

Comment by haydnbelfield on What analysis has been done of space colonization as a cause area? · 2019-10-10T12:24:58.825Z · score: 4 (2 votes) · EA · GW

Overview piece: https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex

Comment by haydnbelfield on A bunch of new GPI papers · 2019-09-25T23:27:09.021Z · score: 9 (4 votes) · EA · GW

These look super interesting! Looking forward to reading them.

What's the status of these papers? Some of them look like they're forthcoming, some don't - is the plan for all of them to be published? I'd find it helpful to know which have been peer-reviewed and which haven't.

Comment by haydnbelfield on An update on Operations Camp 2019 · 2019-09-19T10:36:59.536Z · score: 9 (5 votes) · EA · GW

Great stuff! Looking forward to more

Comment by haydnbelfield on Funding chains in the x-risk/AI safety ecosystem · 2019-09-11T01:36:36.615Z · score: 1 (1 votes) · EA · GW

Jaan has given to CSER

Comment by haydnbelfield on 'Longtermism' · 2019-07-26T12:53:54.200Z · score: 14 (15 votes) · EA · GW

Similar to Ollie and Larks, I'm slightly uncomfortable with

"(i) Those who live at future times matter just as much, morally, as those who live today;"

I'm pretty longtermist (I work on existential risk) but I'm not sure whether I think that those who live at future times matter "just as much, morally". I have some sympathy with the view that people nearer to us in space or time can matter more morally than those very distant - separately from the question of how much we can do to affect those people.

I also don't think it's necessary for the definition. A less strong definition would work as well. Something like:

"(i) Those who live at future times matter morally".

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-02T17:28:04.497Z · score: 2 (2 votes) · EA · GW

Hi John, thanks for the very detailed response. My claim was that ecosystem shift is a "contributor" to existential risk - that it should be examined to assess the extent to which it is a "risk factor" that increases other risks, one of a set of causes that may overwhelm societal resilience, and a mechanism by which other risks cause damage.

As I said in the first link, "humanity relies on ecosystems to provide ecosystem services, such as food, water, and energy. Sudden catastrophic ecosystem shifts could pose equally catastrophic consequences to human societies. Indeed environmental changes are associated with many historical cases of societal ‘collapses’; though the likelihood of occurrence of such events and the extent of their socioeconomic consequences remains uncertain."

I can't respond to your comment at the length it deserves, but we will be publishing papers on the potential link between ecosystem shifts and existential risk in the future, and I hope that they will address some of your points.

I'll email you with some related stuff.

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-01T21:49:41.887Z · score: -4 (10 votes) · EA · GW

Thanks for the question. Climate change is a contributor to existential risk. Changing what business schools teach (specifically to include sustainability) might change the behaviour of the next generation of business leaders.

See:

We also have further publications forthcoming on the link between climate change and existential risk.

Comment by haydnbelfield on Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019 · 2019-05-01T21:46:54.942Z · score: 6 (11 votes) · EA · GW

Thanks for the question. Biodiversity loss and associated catastrophic ecosystem shifts are a contributor to existential risk. Partha's review may influence UK and international policy.

See:

We also have further publications forthcoming on the link between biodiversity and existential risk.

Comment by haydnbelfield on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-24T10:18:27.784Z · score: 1 (1 votes) · EA · GW

Cool, cheers.

Comment by haydnbelfield on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-23T16:07:46.793Z · score: 3 (3 votes) · EA · GW

Does anyone have any idea when we'll be able to embed YouTube videos on the forum?

Comment by haydnbelfield on Candidate Scoring System, First Release · 2019-03-08T22:55:56.591Z · score: 4 (3 votes) · EA · GW

Warren introduced the No First Use Act (“It is the policy of the United States to not use nuclear weapons first.”) and Gillibrand is a co-sponsor.

https://www.congress.gov/bill/116th-congress/senate-bill/272/cosponsors?q=%7B%22search%22%3A%5B%22warren+nuclear%22%5D%7D&r=1&s=1

https://www.vox.com/2019/2/11/18216686/elizabeth-warren-ban-nuclear-weapons-no-first-use

Comment by haydnbelfield on How can we influence the long-term future? · 2019-03-08T20:31:19.545Z · score: 7 (5 votes) · EA · GW

I don't really understand the conclusion this post is arguing for (or if indeed there is one). In particular, I didn't spot an answer to "how can we influence the long-term future?".

Comment by haydnbelfield on CSER Special Issue: 'Futures of Research in Catastrophic and Existential Risk' · 2018-10-02T17:20:10.221Z · score: 3 (3 votes) · EA · GW

If this research seems interesting to you, CSER is currently hiring! https://www.cser.ac.uk/news/hiring-APM/

Comment by haydnbelfield on CEA on community building, representativeness, and the EA Summit · 2018-08-21T20:25:27.494Z · score: 1 (1 votes) · EA · GW

"Cause areas shouldn't be tribes" "We shouldn't entrench existing cause areas" "Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder."

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

Comment by haydnbelfield on Open Thread #39 · 2017-11-02T21:08:55.661Z · score: 2 (2 votes) · EA · GW

The recent quality of posts has been absolutely stellar*. Keep it up everyone!

*interesting, varied, informative, written to be helpful/useful, rigorous, etc

Comment by haydnbelfield on Effective Altruism Grants project update · 2017-10-06T18:18:20.928Z · score: 2 (2 votes) · EA · GW

Really glad to see you taking conflicts of interest so seriously!

Comment by haydnbelfield on An Effective Altruist Message Test · 2017-04-03T19:06:22.237Z · score: 1 (3 votes) · EA · GW

This is incredibly valuable (and even groundbreaking) work. Well done for doing it, and for writing it up so clearly and informatively!

Comment by haydnbelfield on A Third Take on Trump · 2017-04-03T19:03:29.256Z · score: 1 (3 votes) · EA · GW

Thanks for this!

I personally agree that Democratic control of Congress, or even Congress and the Presidency, would be great. But I'm not sure how likely that is, or how certain I should be about that likelihood.

Even if there were high certainty and high likelihood, I probably still wouldn't take that option - the increased risk for four years is just too high. As Michael_S says, you get higher nuclear risk and higher pandemic risk. As I said in my post, I think Trump also raises the risks of increased global instability, increased international authoritarianism, climate change, and emerging technologies. Take climate change - we really don't have long to fix it! We need to make significant progress by 2030 - we can't afford to go backwards for four years.

[Writing in a personal capacity, my views are not my employer's]

Comment by haydnbelfield on What Should the Average EA Do About AI Alignment? · 2017-03-02T18:39:30.411Z · score: 2 (2 votes) · EA · GW

Whatever happened to EA Ventures?

Comment by haydnbelfield on EA Funds Beta Launch · 2017-02-28T18:30:30.869Z · score: 11 (11 votes) · EA · GW

This is a great idea and you've presented it fairly, clearly and persuasively. I've donated.

Comment by haydnbelfield on EA Funds Beta Launch · 2017-02-28T18:12:54.661Z · score: 4 (4 votes) · EA · GW

Peter's question was one I asked in the previous post as well. I'm pleased with this answer, thanks Tara.

Comment by haydnbelfield on EA Global 2017 Update · 2017-02-28T18:03:50.345Z · score: 0 (0 votes) · EA · GW

Excellent!

Comment by haydnbelfield on Some Thoughts on Public Discourse · 2017-02-24T13:27:49.429Z · score: 26 (25 votes) · EA · GW

Thanks for this! It's mentioned in the post and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse. Especially as this article seems to focus mostly on the costs/benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.

The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation and to clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.

As an organisation/individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of their work increases. It might be argued that when one is starting out the benefits of public discourse go mostly to oneself, and when one is established the benefits go mostly to others.

So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.

For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.

Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).

Comment by haydnbelfield on Introducing the EA Funds · 2017-02-09T18:22:44.473Z · score: 12 (8 votes) · EA · GW

Very interesting idea, and potentially really useful for the community (and me personally!).

What's the timeline for this?

I'm presuming that the Funds would be transparent about how much money is in them, how much has been given and why - is that the case? Also, as a starter, has Nick written about how much is/was in his Fund and how it's been spent?

Comment by haydnbelfield on Should effective altruism have a norm against donating to employers? · 2016-12-05T12:13:05.273Z · score: 2 (2 votes) · EA · GW

Were I working for an EA org this would be the decisive factor that would swing me, so it would be really good if we could work this out. Giving to another org adds Gift Aid to your donation (+20%). Forgoing salary saves you and your employer National Insurance (+29%).

So if you're a basic-rate taxpayer, is giving to your employer better value?
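A minimal back-of-the-envelope sketch of that comparison, taking the +20% and +29% figures above as assumptions (not verified tax rates) and applying both to the same base amount, ignoring interactions between Gift Aid, National Insurance and income tax:

```python
# Rough comparison of the two routes described above, using the comment's own
# figures as illustrative assumptions (NOT verified UK tax rates).
GIFT_AID_UPLIFT = 0.20  # assumed uplift from Gift Aid when donating to another org
NI_SAVING = 0.29        # assumed employee + employer NI saved by forgoing salary


def value_to_other_org(base: float) -> float:
    """Amount another charity receives: your donation plus the assumed Gift Aid uplift."""
    return base * (1 + GIFT_AID_UPLIFT)


def value_to_employer(base: float) -> float:
    """Amount effectively kept by your employer if you forgo the equivalent salary,
    so the assumed NI is saved as well."""
    return base * (1 + NI_SAVING)


if __name__ == "__main__":
    base = 100.0
    print(f"Give £{base:.0f} to another org: £{value_to_other_org(base):.2f}")
    print(f"Forgo £{base:.0f} of salary:     £{value_to_employer(base):.2f}")
```

On these assumptions the employer route comes out ahead per pound, but the real answer depends on how Gift Aid, NI and income tax actually interact at your marginal rate, which this sketch deliberately ignores.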

Comment by haydnbelfield on President Trump as a Global Catastrophic Risk · 2016-11-21T12:16:02.321Z · score: 2 (2 votes) · EA · GW

Thanks for commenting! I'll try to answer your points in turn.

  1. Nuclear weapons: I was using the Cuban Missile Crisis as an example of a nuclear stand-off. I'm not saying a very similar crisis will occur, but that other stand-offs are possible in the future. Other examples of stand-offs include Nixon and the Yom Kippur War, or Reagan and Able Archer. There have been many 'close calls' and stand-offs over the years, and there could be one in the future e.g. over the Baltics. Trump's character seems particularly ill-suited to nuclear stand-offs, which increases risk.

  2. Pandemics: Many countries have had biological weapons programs, for example the US, UK, USSR, Japan, Germany, Iraq and South Africa. I agree that they're difficult to control and would likely hurt the country that used them as well as the target - but that hasn't stopped those countries. The development and use of biological weapons has been constrained by the Convention and surrounding norms. I think Trump threatens those norms, and so increases risk.

  3. Liberal global order: Very interesting fact about trade and war there, although she is looking at the period 1870-1938 and I'm talking about post-1945. And yes, I agree with you about democratic peace theory. My point is more general: the liberal global order has kept us safe - to take one example, we haven't had a serious great power war. Trump threatens that order, and so increases risk.