New US Senate Bill on X-Risk Mitigation [Linkpost]

post by Evan R. Murphy · 2022-07-04T01:28:32.056Z · EA · GW · 13 comments

This is a link post for https://www.hsgac.senate.gov/media/majority-media/peters-introduces-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-

Two US Senators have introduced a bipartisan bill specifically focused on x-risk mitigation, including from AI. From the post on Senate.gov (bold mine):

WASHINGTON, D.C. – U.S. Senator Gary Peters (MI), Chairman of the Homeland Security and Governmental Affairs Committee, introduced a bipartisan bill to ensure our nation is better prepared for high-consequence events, regardless of the low probability, such as new strains of disease, biotechnology accidents, or naturally occurring risks such as super volcanoes or solar flares that though unlikely, would be exceptionally lethal if they occurred.

“Making sure our country is able to function during catastrophic events will improve national security, and help make sure people in Michigan and across the country who are affected by these incidents get the help they need from the federal government,” said Senator Peters. “Though these threats may be unlikely, they are also hard to foresee, and this bipartisan bill will help ensure our nation is prepared to address cataclysmic incidents before it’s too late.”

[The legislation] will establish an interagency committee for risk assessment that would report on the adequacy of continuity of operations (COOP) and continuity of government (COG) plans for the risks identified. The bipartisan legislation would also help counter the risk of artificial intelligence (AI), and other emerging technologies from being abused in ways that may pose a catastrophic risk.

[...]

It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.

I haven't been able to locate the text of the bill yet. If someone finds it, please share in the comments.

Cross-posted to LessWrong [LW · GW]. Credit to Jacques Thibodeau for posting a link on Slack that made me aware of this.

13 comments


comment by Catherine Low (cafelow) · 2022-07-05T02:55:16.258Z · EA(p) · GW(p)

This bill does seem very important. It is hard to know what will help or hinder the political process, so I recommend that folks in the EA community not attempt a coordinated public effort to influence the content or outcome of this proposed bill - at least for now.

My understanding is that the people involved in drafting this bill are aware of the EA community, so they know they can reach out when and if they think that would be helpful.

comment by Zach Stein-Perlman (zsp) · 2022-07-04T04:18:28.943Z · EA(p) · GW(p)

I'm curious why some people strong-downvoted this and the LessWrong linkpost. Feel free to PM me if relevant.

comment by Zach Stein-Perlman (zsp) · 2022-07-08T15:41:01.383Z · EA(p) · GW(p)

The text is now public. It's short and easily skimmable. The bill wouldn't do much directly, but it would create a committee tasked with producing a report on catastrophic risk and what to do about it. This is plausibly an important part of the possible futures where the US government responds well to risks from emerging technology.

comment by Zach Stein-Perlman (zsp) · 2022-07-04T03:15:47.786Z · EA(p) · GW(p)

We should reserve judgment until we see what the bill really does and whether some version of it successfully becomes law, but this is exciting.

comment by Guy Raveh · 2022-07-04T11:09:46.995Z · EA(p) · GW(p)

In a sense, yes; but even the fact that catastrophic (non-climate) risk mitigation is a politically viable topic for U.S. senators to talk about is a significant change, I think.

comment by Daniel_Eth · 2022-07-06T02:02:46.017Z · EA(p) · GW(p)

It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.


A separate press release says, "It is important that the federal government prepare for unlikely, yet catastrophic events like AI systems gone awry" (emphasis added), so my sense is they have misalignment risks in mind.

comment by quinn · 2022-07-04T02:19:56.904Z · EA(p) · GW(p)

Seems like a win; curious to hear about the involvement of people in our networks in making this happen.

comment by MarcelE · 2022-07-04T13:17:12.610Z · EA(p) · GW(p)

It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the bill has misalignment risks in mind at all or only misuse risks.

If so, it would be interesting to find out whether misalignment was left out intentionally due to complicated Senate politics or simply from a lack of awareness.

comment by Harrison Durland (Harrison D) · 2022-07-04T03:00:29.871Z · EA(p) · GW(p)

Two US Senators have introduced a bipartisan bill specifically focused on catastrophic risk mitigation

Are you sure this specific bill was bipartisan / introduced by two senators, or might you be conflating it with the other bill mentioned at the end of the HSGAC post? I only saw Peters listed for the first bill…

[Update/Edit: I see now that the linked article does describe the bill as bipartisan, it's just strange that Portman wasn't mentioned in the June 30th article ]

comment by Zach Stein-Perlman (zsp) · 2022-07-04T03:10:27.606Z · EA(p) · GW(p)

The bill is sponsored by Republican Senator Rob Portman and cosponsored by Democratic Senator Gary Peters.

comment by BrownHairedEevee (evelynciara) · 2022-07-04T02:02:10.956Z · EA(p) · GW(p)

A link to the bill itself would be helpful, though I haven't been able to find one by googling.

comment by Zach Stein-Perlman (zsp) · 2022-07-04T03:09:39.156Z · EA(p) · GW(p)

Here it is, although the text isn't there yet (I expect it'll be added there soon but I'm not an expert on congress.gov).