AI Governance Career Paths for Europeans

post by careersthrowaway · 2020-05-16

Contents

  Paths
    Research careers
    Industry governance careers
      Policy teams in AI companies & industry bodies
      Global technological standards
    Policy careers
      US (national security) policy
      European foreign policy & international security
      EU commercial regulation
      UK science & tech (& AI) policy
    Support & field-building careers
  How to decide between different paths
    Expected impact of the average career
    Personal fit and comparative advantage
    Starting point
    Flexibility considerations

Paths

The distinctions between these paths can be blurry at times and it’s possible to switch between them to some extent (see the section on flexibility). This is a categorization that makes sense to me. Hopefully, it’s also helpful for you, but I don’t claim that this is the only way to carve up this space. There might also be roles that don't fit neatly into this schema.

Research careers

What this path looks like:

I won’t focus on this path a lot here since it’s fairly widely discussed in EA. See Guide to working in AI policy and strategy (old) for some details on this path, and this guide for academic careers more generally (old). For these roles, your nationality does not matter much. The research community is global, and connections and credentials transfer across countries.

Why this path matters:

We still don’t know very much about the field of AI governance. There are many questions still to be answered. Making progress here is crucial, and arguably even required for policy roles to be impactful.

Industry governance careers

Policy teams in AI companies & industry bodies

What this path looks like:

The most impactful roles on this path are arguably on the policy teams of OpenAI, Partnership on AI, and DeepMind. Roles on the public affairs teams of big tech firms like Facebook, Amazon, Apple, Microsoft, IBM, and Google seem somewhat less impactful, though I don’t have a strong view on this. I would welcome more people testing these out.

These positions are very competitive at this point. They often require significant technical understanding, combined with good communication skills and, ideally, policy experience (and/or a relevant academic background). Doing well in any of the paths outlined in this post for a few years will likely make you a good fit for such roles. It’s also possible to apply for them directly, of course.

For these roles, your nationality technically does not matter, but immigration to the US can be very hard unless you are worth the hassle to the company. The big tech firms, however, have offices in Europe.

Why this path matters:

Leading companies (early on) will likely be involved in determining the governance of AI technologies. (I haven’t thought about this claim a lot but it seems very plausible to me on the face of it.)

Shaping the governance structures of leading companies might also be very important.

Global technological standards

What this path looks like:

I don’t know this path well at all.

It seems like there are relevant roles at (1) national standardization bodies as well as (2) international bodies like ISO, IEEE, IEC, ITU, and CEN/CENELEC. I don’t have a good sense of how these organizations work internally, e.g., how much they rely on permanent staff compared to national members or experts who are recruited from companies for temporary/voluntary committee work.

I don’t have a good sense of career progression in this field.

Why this path matters:

See Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.

Policy careers

US (national security) policy

It’s possible for European nationals to attempt an AI governance career in the US. This path, however, is very uncertain and will involve many more obstacles than a comparable path in Europe. The biggest obstacle seems to be initial immigration (beyond university education) since you need to find an employer who is willing to sponsor you for an H-1B visa or a green card, a process that is very resource intensive due to countless bureaucratic hurdles and legal fees. Marrying a US citizen allows you to apply for a green card immediately and also speeds up the wait time for naturalization (from five to three years).

To take full advantage of this path, you should aim to get naturalized (after ~6-10 years). This would allow you to take on relevant roles in the US government. Otherwise, you will be limited to roles outside the government. It is my understanding that it’s very common for people to switch back and forth between roles outside and inside of government, so this restriction would be a significant disadvantage.

For many relevant government roles, you will also need to obtain a security clearance. You should consider whether you foresee any significant problems in this regard (e.g., contacts with nationals of, or travel to, countries like Russia, China, North Korea, or Iran; drug use; criminal behavior; questions of general integrity, risk-seeking, or discretion; possibly promiscuity; in general, anything you could be blackmailed or pressured with; maintaining dual citizenship or having worked for another government; financial problems; emotional, mental, or personality disorders). From this perspective, this path seems particularly attractive for UK nationals (Five Eyes) and, to a lesser extent, nationals of Western European allies of the US.

I have not thought a lot about US AI policy careers focused on commercial regulation and/or science & tech policy. For what it’s worth, my impression is that these things are, to a large extent, driven by national security considerations (at the moment).

What this path looks like:

See AI governance career in the US. Getting a STEM degree in the US probably makes subsequent immigration somewhat easier.

Why this path matters:

See AI governance career in the US.

European foreign policy & international security

What this path looks like:

Foreign policy & international security are determined primarily by national governments, not by the staff of the EU, NATO, or other supranational/international organizations.

Not all European countries (and national governments) are equally influential globally. For a comparative assessment, CINC is a good start and USNews also has a power ranking. The countries that stand out to me are Germany, UK, and France. Other countries that also seem influential but perhaps significantly less so: Italy, Spain, Switzerland (especially per capita), Netherlands, Norway, Sweden.

I would almost always recommend such careers only to nationals of these countries. For instance, I would generally not recommend entering German foreign policy as a Dane.

I’m not an expert on how to build a career in national foreign policy. It’s noteworthy that in some European countries civil service careers are very distinct from think tank and political careers (i.e., few people switch back and forth), and civil service careers are intended to be for life (this is the case in Germany, for instance). This implies much less flexibility and makes testing your fit early on much more important. At the same time, think tanks appear to have significantly less influence than in the US (a weak impression). The best paths will also differ from country to country, so I recommend talking to experts in your country to learn more if possible. If you are an expert, consider writing up your advice.

Even if you’re set on entering an international organization (like NATO), it still seems more robust to start one’s career in the national policy arena (e.g., civil service, think tank): First, one can postpone the decision of which international organization to focus on to some extent while building relevant career capital. Second, international organizations mainly facilitate coordination among national governments. While staff at these organizations have some influence with regard to agenda-setting, planning, and foresight, decision-making remains in the hands of national actors (e.g., EU positions within the Common Foreign and Security Policy require unanimous approval by the member states, the North Atlantic Council consists of representatives of the member states, decisions on LAWS in the context of the Convention on Certain Conventional Weapons would be made by national governments, and the Wassenaar Arrangement is between national governments). Third, to the extent that the staff of such organizations are important, the transition from the national arena to the international one is often easier than vice versa.

I’d expect NATO to be more important than the EU for arriving at a shared doctrine/position on the use of AI in the military (conditional on NATO still existing when the technology becomes more mature). NATO is the forum where European states coordinate with the US on defense-related matters (e.g., nuclear sharing). A European stance would likely matter little without such coordination, given the US advantage when it comes to AI and military matters. This would imply a focus on the transatlantic relationship for one’s career. Expertise on China probably also helps.

Why this path matters:

I expect that European nations will influence international regimes & norms around the development and deployment of AI, especially in military contexts. These could have implications for TAI outcomes due to path dependencies. Some potential levers:

European nations will likely play less of a role than the US or China in any such efforts. Their contribution might still be important.

EU commercial regulation

What this path looks like:

AI regulatory policy will be decided at the EU level rather than the national level (see the Commission's plans here). It seems to me that working directly in the EU ecosystem is the best path for this, but a start in the national policy sphere might work equally well (especially for the most important EU countries like Germany and France). Personal considerations (e.g., network) might well be decisive. See AI policy careers in the EU for far more details.

Why this path matters:

UK science & tech (& AI) policy

It is possible for (non-UK) European nationals to pursue careers in the UK civil service and UK policy more generally (e.g., think tanks). Some posts in the UK civil service are reserved for UK nationals (the security and intelligence services, the Diplomatic Service, the Foreign and Commonwealth Office, and some other posts where a special allegiance to the UK is deemed to be required). Naturalization in the UK seems to be possible after ~6 years. (However, I'd probably recommend that most people try to get naturalized in the US if they're willing to commit to naturalization somewhere in the first place.)

I don’t consider other European countries’ science & tech policy because they just don’t seem well-positioned to make a difference: they don’t incubate cutting-edge AI labs one could influence, and regulation-wise they’re dominated by the EU. The UK is not in the EU and has the strongest AI ecosystem in Europe (incl. DeepMind).

What this path looks like:

Why this path matters:

One can potentially influence cutting-edge AI labs to some degree. UK regulation might also be influential globally, though I expect it to be less influential than that of the US or the EU.

Support & field-building careers

What this path looks like:

This is not so much a clear path as a collection of (1) support roles in AI governance organizations and (2) AI governance roles in effective altruist organizations. Category (1) includes (research/project) management roles and operations roles. Category (2) is very idiosyncratic. Examples include some research analyst or grantmaker roles at the Open Philanthropy Project, project management roles at GovAI, and some roles at 80,000 Hours. Since there is no clear career progression in this path, you will have to make it up as you go along. For these roles, your nationality does not matter.

Why this path matters:

You can leverage the impact of others in the community by bringing in more people or making people more effective.

How to decide between different paths

Broadly speaking, three factors matter: (1) how impactful you expect the average career in a particular path to be (independent of any personal considerations); (2) what your personal fit and comparative advantage for a particular path is; and (3) what your starting point for a particular path would be. Flexibility considerations might also affect your choice (see below).

Expected impact of the average career

Judgments about this will probably depend on a wide range of background beliefs, so different people will probably arrive at different views. Relevant factors include: (1) the potential impact of different roles in the career; (2) the tractability of career progression; and (3) the neglectedness of the path. I don’t feel comfortable making a lot of strong claims here. Below I sketch the ones I do feel somewhat confident in:

Field-building is probably the most impactful thing to do right now. The field is still really small and young at this point, and bringing in more of the right people is crucial. Some roles are directly focused on building up the field or specific organizations in it. Founding the right organizations is among the best things to be doing in this regard: through his research and work, Allan Dafoe was able to set up GovAI, which has been crucial for building the field; through his work, Jason Matheny was able to set up CSET, which has been crucial for building the field of AI policy in the US. Field-building, however, need not only be done “directly.” All roles and careers have field-building effects: publishing research, convincing other policy-makers of the importance of longtermism, etc. My impression is that, generally speaking, research has bigger field-building effects than policy work (with some exceptions).

Careers in US (national security) policy (currently) seem to be more impactful than careers in European foreign policy & international security or careers in UK science & tech (& AI) policy.

More weakly: Careers in US (national security) policy (currently) seem to be more impactful than careers in EU commercial regulation of AI.

Personal fit and comparative advantage

80,000 Hours defines personal fit roughly as “your chances of excelling in the job.” You can read more about comparative advantage in this context here.

Personal fit for the specific paths:

Starting point

Your starting point on a particular path is a function of (1) your existing career capital for that particular path, and (2) the career capital requirements for that particular path. You can learn more about career capital from 80,000 Hours.

Career capital “requirements” for specific paths:

Flexibility considerations

1 comment


comment by Rowan_Stanley · 2020-05-17

Thanks for digging into non-American career options-- there's not a lot geared to people outside the States.

Not being in Europe, this isn't strictly relevant to me either, but still cool to see!