List of AI safety courses and resources

post by Daniel del Castillo, casebash, Kat Woods (katherinesavoie) · 2021-09-06T14:26:42.397Z · 2 comments

By: Daniel del Castillo, Chris Leong, and Kat Woods

We made a spreadsheet of resources for learning about AI safety. It was for internal purposes here at Nonlinear, but we thought it might be helpful to those interested in becoming safety researchers. 

Please let us know if you notice anything that we’re missing or that we need to update by commenting below. We’ll update the sheet in response to comments.


There are a lot of courses and reading lists out there. If you’re new to the field, of the ones we investigated, we recommend Richard Ngo’s curriculum for the AGI Safety Fundamentals program. It is shorter, more structured, and broader than most alternatives. You can register interest for the next round of the program, or simply work through the reading list on your own.

We’d also like to highlight that there is a remote AI safety reading group that might be worth looking into if you’re feeling isolated during the pandemic.

About us: Nonlinear is a new AI alignment organization founded by Kat Woods and Emerson Spartz. We are a means-neutral organization, so we are open to a wide variety of interventions that reduce existential and suffering risks. Our current top two research priorities are multipliers for existing talent and prizes for technical problems.

PS - Our autumn Research Analyst Internship is open for applications. The deadline is September 7th, midnight EDT. The application should take around ten minutes if your CV is already written.


Comments sorted by top scores.

comment by Gyrodiot · 2021-09-07T20:30:35.225Z

Nice initiative, thanks!

Plugging my own list of resources (last updated April 2020, next update before the end of the year).

comment by Question Mark · 2021-09-06T21:29:37.115Z

These aren't entirely about AI, but Brian Tomasik's Essays on Reducing Suffering and Tobias Baumann's articles on s-risks are also worth reading. Both contain many articles on futurism and on scenarios that could result in astronomical suffering. On the topic of AI alignment, Tomasik wrote this article on the risks of a "near miss" in AI alignment, and how a slightly misaligned AI may create far more suffering than a completely unaligned AI.