Is the time crunch for AI Safety Movement Building now?

post by Chris Leong (casebash) · 2022-06-08T12:19:33.146Z · EA · GW · 10 comments

Update: A lot of people disagree with this post on the basis that we might already be past the crunch time. I think this is a plausible view[1], but this post is mainly designed to encourage readers with longer timelines to keep movement-building lag times in mind when thinking about how urgent things are.


An increasing number of people are starting to believe that timelines may be short[2].

I find this worrying as a movement builder, since movement-building projects are much less impactful when timelines are short.

Let's consider what has to happen for an outreach program to hit its full potential. It could be a fellowship, a local meetup group or online outreach. You plan the program, then run it. You then iterate on it so that it is effective, after which you scale it up. You then need time for a decent number of people to pass through the program - naively, five cohorts can produce five times as many safety researchers as one. These people have to then advance their careers to the point where they are capable of doing useful research, then they have to actually do this research.

Let's consider a hypothetical example. You start a local AI Safety group. Your initial activities have some impact, but it takes you two years to develop a really solid set of programs. It takes you another two years to scale up the group. You (or your successor) run the group for 5 years. Group members take five years to become capable researchers, and then spend five years doing effective research. Notice that the total time is nineteen years, and it could be longer if we thought that people didn't tend to be effective until ten years in.

This is worrying. The time crunch for technical research might be coming soon, but the time crunch for movement building might be now. Obviously, it would be possible to focus outreach on more established researchers, but there are reasons why outreach has tended to focus on people who are earlier in their careers: more junior people tend to have much more time to engage, and young people tend to be most open to new ideas. I'm not saying this to discourage people from engaging in outreach[3] to experienced researchers, but rather to point out that even though it is possible to focus on forms of outreach with shorter lag times, being forced to rely on them would make things more challenging than they would otherwise be.

My main intent is to encourage people who think they might be good AI Safety movement builders to pursue this sooner rather than later. I would be especially excited to see people who are engaged in general EA movement building pass that on to a successor (if someone competent is available) and transition towards AI Safety specific movement building.

I'd also love to see funders make an effort to encourage more (high quality[4]) projects in this space. Even though the EA Infrastructure Fund exists and is relatively generous in giving out grants, I still think that creating more specific programs would draw in more applications. I may write a post on this in the future, but briefly: a) it is better to have more brands for marketing reasons, and b) it would be possible to craft the programs to meet the specific needs[5] of these kinds of projects.

  1. ^

    It's complicated by a) the fact that each person will have a range of possible timelines, rather than just a single one b) the argument that if we can't win in short-timelines, maybe we should ignore them.

  2. ^

    A more demanding operationalization has a longer timeline.

  3. ^

    Though of course, if you choose to do so, it's important to do so carefully given the greater cost.

  4. ^

    I don't want to downplay the importance of high-quality projects, but I also suspect that many people who wouldn't think of themselves as particularly good movement builders might be able to do better than they think at running local meetups, especially if they're willing to pass it on to someone more competent when the time is right.

  5. ^

    One key difficulty with applying for grants as a movement builder is uncertainty about the number of people who may participate in a program which makes it very difficult to estimate budgets.

10 comments


comment by Abby Hoskin (AbbyBabby) · 2022-06-08T12:50:27.485Z · EA(p) · GW(p)

I'm a little surprised by your perspective. My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety stuff. Who else are you imagining funding this space who isn't already? 

Also, a bunch of EA community organizers are pushing AI risks substantially harder as a cause area now than they did 5 years ago (e.g. 80k, many university groups).

If you're worried about short timelines, shouldn't the push be to transition people from meta work on community building to object level work directly on alignment?

Thanks for sharing your thoughts! Let me know if I misunderstood something. 

Replies from: casebash
comment by Chris Leong (casebash) · 2022-06-08T14:46:49.446Z · EA(p) · GW(p)
> My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety stuff. Who else are you imagining funding this space who isn't already?

There's a reason why companies often have multiple brands instead of one: it lets you reach more people. If you created the AI Safety Movement Building Fund and there was literally no difference between it and the EA Infrastructure Fund (same people evaluating and everything), you would still get more applications because lots of people make snap judgments based on a name. (Though in retrospect I'm feeling much less confident about this idea because movement builders are disproportionately likely to know about what opportunities are available.)

> If you're worried about short timelines, shouldn't the push be to transition people from meta work on community building to object level work directly on alignment?

If I were more confident in short timelines then I would be more supportive of this. I would say I'm more worried about short timelines (25-70% chance) than confident in them. Another reason for being wary of this strategy is that most of our survival probability might be in timelines that are longer.

comment by ThomasW (ThomasWoodside) · 2022-06-08T15:26:16.161Z · EA(p) · GW(p)

I think I disagree with this.

To me, short timelines would mean the crunch in movement building was in the past.

It's also really not obvious when exactly "crunch time" would be. 10 years before AGI? 30 years?

If AGI is in five years I expect movement building among undergrads to not matter at all. If it's in ten years maybe you could say "movement building has almost run its course" but I still think "crunch time" would probably still be in the past.

Edit: I'm referring to undergrad movement building here. Talking to tech executives, policymakers, existing ML researchers etc. would have a different timeline.

Replies from: Aaron_Scher
comment by Aaron_Scher · 2022-06-08T17:09:48.489Z · EA(p) · GW(p)

The edit is key here. I would consider running an AI-safety arguments competition in order to do better outreach to graduate-and-above level researchers to be a form of movement building and one for which crunch time could be in the last 5 years before AGI (although probably earlier is better for norm changes). 

One value add from compiling good arguments is that if there is a period of panic following advanced capabilities (some form of fire alarm), then it will be really helpful to have existing, high-quality arguments and resources on hand to help direct this panic into positive actions.

This all said, I don't think Chris's advice applies here: 

> I would be especially excited to see people who are engaged in general EA movement building pass that on to a successor (if someone competent is available) and transition towards AI Safety specific movement building.

I think this advice likely doesn't apply because the models/strategies for this sort of AI Safety field building are very different from those of general EA community building (e.g., university groups), the background knowledge is quite different, the target population is different, the end goal is different, etc. If you are a community builder reading this and you want to transition to AI Safety community building but don't know much about it, probably the best thing you can do is learn about AI Safety for >20 hours. The AGISF curricula are pretty great.

Replies from: ThomasWoodside
comment by ThomasW (ThomasWoodside) · 2022-06-08T17:20:15.179Z · EA(p) · GW(p)

Aaron didn't link it, so if people aren't aware,  we are running that competition [EA · GW] (judging in progress).

comment by Justis · 2022-06-08T20:01:02.986Z · EA(p) · GW(p)

Some other considerations I think might be relevant:

  • Are there top labs/research outfits that are eager for top technical talent, and don't care that much how up to speed that talent is on AI safety in particular? If so, seems like you could just attract e.g. math Olympiad finalists or something and give them a small amount of field-specific info to get them started. But if lots of AI safety-specific onboarding is required, that's pretty bad for movement building.
  • How deep is the well of untapped potential talent in various ways/various places? Seems like there's lots and lots of outreach at top US universities right now, arguably even too much for image reasons. There's probably not enough in e.g. India or something - it might be really fruitful to make a concerted effort to find AI Safety Ramanujan. But maybe he ends up at Harvard anyway.
  • Looking at current top safety researchers, were they as a rule at it for several years before producing anything useful? My impression is that a lot of them came on to the field pretty strong almost right away, or after just a year or so of spinning up. It wouldn't surprise me if many sufficiently smart people don't need long at all. But maybe I'm wrong!
  • The 'scaling up' step interests me. How much does this happen? How big of a scale is necessary?
  • Retention seems maybe relevant too. Very hard to predict how many group participants will stick with the field, and for how long. Introduces a lot of risk, though maybe not relevant for timelines per se.
comment by Konstantin (Konstantin Pilz) · 2022-06-08T13:02:56.062Z · EA(p) · GW(p)

I agree that AGI timelines may be very short; even Holden Karnofsky assigns a 10% probability to AGI in the next 15 years. I think at this time everyone should at least think about what they would do if they knew for certain that AGI was coming in the next 15 years and then do at least 10% of that (if not more, since in a world where AGI comes soon you have a lot more impact, as there are fewer EAs around). However, I don't really see what to do about it yet. I think focusing outreach on groups that are more likely to start working on AI safety makes sense. Focusing outreach in circles of ML researchers makes sense. Encouraging EAs currently working in other areas to go work in alignment or AI governance makes sense. Curious what others think.

Replies from: casebash, calebp
comment by Chris Leong (casebash) · 2022-06-08T14:36:38.634Z · EA(p) · GW(p)

I don't suppose you could clarify your comment?

You write: "I don't really see what to do about it yet", but then you provide a bunch of suggestions: "I think focusing outreach on groups that are more likely to start working on AI safety makes sense. Focusing outreach in circles of ML researchers makes sense. Encouraging EAs currently working in other areas to go work in alignment or AI governance makes sense."

Do you mean that you don't think these are very likely to work, but they're the best plan you've got? Or do you mean something else?

Replies from: Konstantin Pilz
comment by Konstantin (Konstantin Pilz) · 2022-06-08T21:10:18.079Z · EA(p) · GW(p)

I think these are all valuable, but not much more valuable in a world with short timelines. I wanted to express that I am not sure how we should change our approach in a world with short timelines. So I think these ideas are net positive, but I'm uncertain whether they are much of an update.

comment by calebp · 2022-06-08T13:58:41.460Z · EA(p) · GW(p)

I think that Holden assigns more than a 10% chance to AGI in the next 15 years, the post that you linked to says 'more than a 10% chance we'll see transformative AI within 15 years'.