Seth Baum AMA next Tuesday on the EA Forum

post by RyanCarey · 2015-02-23T12:37:51.817Z · EA · GW · Legacy · 9 comments

Just announcing for those interested that Seth Baum from the Global Catastrophic Risks Institute (GCRI) will be coming to the Effective Altruism Forum to answer a wide range of questions next week, at 7pm ET on March 3.

Seth is an interesting case - more of a 'mere mortal' than Bostrom and Yudkowsky. (Clarification: his background is more standard, and he's probably more emulate-able!) He has a PhD in geography, and came to a maximising consequentialist view in which GCR-reduction is overwhelmingly important. So three years ago, with risk analyst Tony Barrett, he cofounded the Global Catastrophic Risks Institute - one of the handful of places working on these particularly important problems. Since then, it has done some academic outreach and covered issues like double catastrophe/recovery from catastrophe, bioengineering, food security and AI.

Just last week, they updated their strategy, making the following announcement:

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been successful, but our research has simply gone farther. Our research has been published in leading academic journals. It has taken us around the world for important talks. And it has helped us publish in the popular media. GCRI will increasingly focus on in-house research.

Our research will also be increasingly focused, as will our other activities. The single most important GCR research question is: What are the best ways to reduce the risk of global catastrophe? To that end, GCRI is launching a GCR Integrated Assessment as our new flagship project. The Integrated Assessment puts all the GCRs into one integrated study in order to assess the best ways of reducing the risk. And we are changing our mission statement accordingly, to develop the best ways to confront humanity’s gravest threats.

So 7pm ET on Tuesday, March 3 is the time to come online to the EA Forum and post your questions about any topic you like, and Seth will remain online until at least 9pm to answer as many questions as he can.

On the topic of risk organisations, I'll also mention that i) video is available from CSER's recent seminar, in which Mark Lipsitch and Derek Smith discussed potentially pandemic pathogens, and ii) I'm helping Sean to write up a more detailed update for LessWrong and effective altruists, which will go online soon.


Comments sorted by top scores.

comment by Randomized, Controlled (LKor) · 2015-03-01T01:36:40.233Z · EA(p) · GW(p)

I just signed up in order to take part in the AMA. Really looking forward to it! Is it happening on this forum or on Reddit?

Replies from: RyanCarey
comment by RyanCarey · 2015-03-01T09:39:27.921Z · EA(p) · GW(p)

Great! It's on this forum on Tuesday.

comment by Sean_o_h · 2015-02-25T14:25:21.778Z · EA(p) · GW(p)

Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that this is a bad road to go down, and also presents the campaign as a test case for existential risk - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

comment by Niel_Bowerman · 2015-02-25T14:10:17.866Z · EA(p) · GW(p)

What is your assessment of the recent report by FHI and the Global Challenges Foundation?
How will your integrated assessment differ from this?

Replies from: Sean_o_h
comment by Sean_o_h · 2015-02-25T18:17:01.067Z · EA(p) · GW(p)

I am interested in the answer to this question. However, I would point out that Seth is listed as a major contributor to the FHI-GCF report.

comment by Niel_Bowerman · 2015-02-25T14:08:51.582Z · EA(p) · GW(p)

How many man-hours per week are currently going into GCRI? How many paid staff do you have, and who are they?

comment by Niel_Bowerman · 2015-02-25T14:07:59.847Z · EA(p) · GW(p)

I can't make it for the AMA, but I'm going to load up some questions here if that's OK...

  • What would you say is the single most impressive thing GCRI has achieved to date? (I'll put other questions in other threads)
comment by Bitton · 2015-02-25T02:03:07.395Z · EA(p) · GW(p)

Well, since nobody has asked anything...

Of all the arguments you've heard for de-prioritizing GCR reduction, which do you find most convincing?

Replies from: AlexMennen
comment by AlexMennen · 2015-02-25T04:49:47.934Z · EA(p) · GW(p)

The AMA is next week.