Introducing the ML Safety Scholars Program
post by ThomasW (ThomasWoodside), Dan Hendrycks, Mantas Mazeika, Oliver Zhang, Sidney Hough (Sidney), Kevin Liu (kliu128) · 2022-05-04
Program Overview
The Machine Learning Safety Scholars program is a paid, 9-week summer program designed to help undergraduate students gain skills in machine learning with the aim of using those skills for empirical AI safety research in the future. Apply for the program here by May 31st.
The course will have three main parts:
- Machine learning, with lectures and assignments from MIT
- Deep learning, with lectures and assignments from the University of Michigan, NYU, and Hugging Face
- ML safety, with lectures and assignments produced by Dan Hendrycks at UC Berkeley
The first two sections are based on public materials, and we plan to make the ML safety course publicly available soon as well. The purpose of this program is not to provide proprietary lessons but to better facilitate learning:
- The program will have a Slack, regular office hours, and active support available for all Scholars. We hope that this will provide useful feedback over and above what’s possible with self-studying.
- The program will have designated “work hours” where students will cowork and meet each other. We hope this will provide motivation and accountability, which can be hard to get while self-studying.
- We will pay Scholars a $4,500 stipend upon completion of the program. This is comparable to undergraduate research roles and will hopefully provide more people with the opportunity to study ML.
MLSS will be fully remote, so participants will be able to do it from wherever they’re located.
Why have this program?
Much of AI safety research currently focuses on existing machine learning systems, so understanding the fundamentals of machine learning is necessary to make contributions. While many students learn these fundamentals in their university courses, some might be interested in learning them on their own, perhaps because they have time over the summer or their university courses are badly timed. In addition, we don’t think any university currently devotes multiple weeks of coursework to AI safety.
There are already sources of funding for upskilling within EA, such as the Long Term Future Fund. Because our program focuses specifically on ML, we are able to provide a curriculum and support to Scholars in addition to funding, so they can focus on learning the content.
Our hope is that this program can contribute to producing knowledgeable and motivated undergraduates who can then use their skills to contribute to the most pressing research problems within AI safety.
Time Commitment
The program will last 9 weeks, beginning on Monday, June 20th, and ending on August 19th. We expect each week of the program to cover the equivalent of about 3 weeks of the university lectures we are drawing our curriculum from. As a result, the program will likely take roughly 30-40 hours per week, depending on speed and prior knowledge.
Preliminary Content & Schedule
Machine Learning (content from the MIT open course)
Week 1 - Basics, Perceptrons, Features
Week 2 - Features continued, Margin Maximization (logistic regression and gradient descent), Regression
Deep Learning (content from the University of Michigan, NYU, and Hugging Face)
Week 3 - Introduction, Image Classification, Linear Classifiers, Optimization, Neural Networks. ML Assignments due.
Week 4 - Backpropagation, CNNs, CNN Architectures, Hardware and Software, Training Neural Nets I & II. DL Assignment 1 due.
Week 5 - RL overview. DL Assignment 2 due.
ML Safety (content produced by Dan Hendrycks at UC Berkeley)
Week 6 - Risk Management Background (e.g., accident models), Robustness (e.g., optimization pressure). DL Assignment 3 due.
Week 7 - Monitoring (e.g., emergent capabilities), Alignment (e.g., honesty). Project proposal due.
Week 8 - Systemic Safety (e.g., improved epistemics), Additional X-Risk Discussion (e.g., deceptive alignment). All ML Safety assignments due.
Week 9 - Final Project (edit May 5th: If students have a conflict in the last week of the program, they can choose not to complete the final project. Students who do this will receive a stipend of $4000 rather than $4500.)
Who is eligible?
The program is designed for motivated undergraduates who are interested in doing empirical AI safety research in the future. We will accept Scholars who will be enrolled as undergraduate students after the conclusion of the program (this includes graduated or soon-to-graduate high school students about to enroll in their first year of undergrad). Scholars should also have the following prerequisites:
- Differential calculus
- At least one of linear algebra or introductory statistics (e.g., AP Statistics). Note that if you only have one of these, you may need to make a conscious effort to pick up material from the other during the program.
- Programming. You will be using Python in this course, so ideally you should be able to code in that language (or at least be able to pick it up quickly). The courses will not teach Python or programming.
We don’t assume any ML knowledge, though we expect that the course could be helpful even for people who have some knowledge of ML already (e.g., fast.ai or Andrew Ng’s Coursera course).
Questions
Questions about the program should be posted as comments on this post. If a question is only relevant to you, it can be addressed to Thomas Woodside ([firstname].[lastname]@gmail.com).
Acknowledgement
We would like to thank the FTX Future Fund regranting program for providing the funding for this program.
Application
You can apply for the program here. Admission is rolling, but you must apply by May 31st to be considered for the program. All decisions will be released by June 7th.