AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements.

post by weeatquince · 2019-08-27T11:04:10.439Z · score: 22 (10 votes) · EA · GW · 3 comments

Contents

  Introduction
    The question: What policies will impact AI risks?
    Scope of post
    Summary
  1. Knowing the long-term impacts of today’s policies
    Ensuring long term impacts of any policy
    Applying this to TAI
  2. General vs specific policies
  3. Delaying and urgency
    Current state of global AI policy
    What this means for TAI
  Conclusions: policies to implement
      Optimising the policy decision-making processes that could have an effect on TAI development
    Examples of the kinds of policies this implies
    Further considerations 
  Conclusions: How certain can we be that this will help?
    Why you might think this is incorrect
    Epistemic status of conclusion
  Next steps
    Further research

Introduction

The question: What policies will impact AI risks?

Given that we are still at the early stages of AI development and the implications for Transformative AI are highly uncertain, this post looks to answer: what kinds of policies might we want to focus on now? And how certain can we be that these policies will impact the development of Transformative AI?

Scope of post

This document is focused on domestic (not international) government (not corporate) policy that could positively affect the introduction of Transformative AI (TAI, as explained here). This should be globally applicable, although it is based on my understanding of and research into UK policy, and all the examples are from the UK.

Summary

If we want to ensure a world where transitions to TAI go well, we need to focus now on suggesting policies that build robust institutions and systems of checks and balances, so that governments can make good, flexible decisions about long-term issues and AI issues. This is because such institutional design policies:

Examples of such policies are given in the conclusions section below.


1. Knowing the long-term impacts of today’s policies

Any details of TAI development are highly uncertain. If we want to affect TAI development with policy now, we should look at how policy makers develop policies that deal with uncertain future scenarios.


Ensuring long term impacts of any policy

There are a range of solutions that policy makers apply to develop policy for uncertain futures:

In general the process for developing good future focused policy is to do one or both of:

  1. Hold constant the things that should remain constant and build flexibility into the system so future policy makers can address uncertainty.
  2. Create incentive structures so future policy makers are likely to make good decisions (including ensuring the future policy makers will be equipped with useful resources and information).

Further research: More in-depth research could be done on best practice for long-term decision making in policy or in business.


Applying this to TAI

Ideally we want to develop policies to impact TAI with a reasonable level of certainty that the expected value of such policies on TAI development is positive and non-negligible. The methods above give some guidance on how to do this. It is unclear at this stage exactly what we might want to hold constant about TAI policy (although if people have ideas, let me know). So it is likely that for policy makers today (e.g. if you were currently a head of state) the best way to ensure good outcomes for future TAI scenarios would be to put in place the incentive structures and resources to guide future decision makers.

Conclusion 1: Focus on building flexible policies, institutions and tools and systems of checks and balances to support decision makers on AI issues further down the line.

2. General vs specific policies

I have found the following model useful for considering policies that could affect TAI development. It is a scale ranging from general policies that improve altruistic incentives to very specific TAI focused policies.

General policies. If we lived in a world with global prosperity, perfect decision systems and coordinated value-aligned actors, then the risks from TAI and other future technologies would be reduced. Creating such a world is a difficult task, yet there are numerous ways to improve policy decision-making processes and spread good institutional design. However, not everything that leads to this end goal is clearly positive. For example, state actors having better access to up-to-date science might lead to actors building more dangerous weapons.


Specific policies. It does not seem implausible that there are small policy changes that a single key government individual could implement that would have a clear effect on TAI development. However, currently, there are:


It is hard to draw conclusions from this. Some considerations:

Further research: It could be useful to map out example cases where high-level general policies influenced specific policies, especially where the high-level changes were pushed by groups external to government.


3. Delaying and urgency

Unless we think TAI is imminent, why not delay and take more time to develop policy suggestions?


Current state of global AI policy

The rising interest in AI is leading states from China to Tunisia to adopt AI policies or national AI strategies. These strategies cover topics such as supporting the AI industry, developing or capturing AI skills, improving digital infrastructure, funding R&D, data use, safeguarding public data, AI use by government and AI regulation. In each case these strategies sit alongside work on related topics such as supporting the tech industry, software regulation, privacy and data use. States are also developing more targeted AI policies looking at how AI could be used in the military, transport, healthcare and so on.

Within this process states have recognised the need to understand and prepare for the sociological, economic, ethical and legal implications that may stem from the widespread adoption of AI technology. However, there has been minimal action at the state level to address the implications of TAI.

Further research: It could be useful to map out in more detail exactly what policies are in development or likely to be implemented soon. It would also be useful to map out which policies are not working or are poorly designed, and what might be going wrong.


What this means for TAI

For many potential policies relating to TAI there is no hurry to suggest and develop policies, because the topics are so far from the eyes of policy makers that decisions are not being made. Additionally, for risk-related reasons you may wish to delay pushing for a policy. For example, it would be prudent to delay pushing for any policy where the sign of that policy's impact depends on an uncertain crucial consideration. (I will consider risk in a separate paper, draft here.)

However, as set out above, on some topics policy makers are already setting policy. It would be prudent to consider the TAI implications of the policies being made, and where current decisions might impact future decisions on TAI issues it would be good to work with policy makers to ensure that good policy is developed.

Conclusion 2: Focus on the AI policies that are being implemented that could impact decisions on TAI. (Which is largely high level strategies on AI and some regulation of digital technology and data use.)


Conclusions: policies to implement

So far we have discussed the connection between high-level general policies and specific policies and drawn the following two conclusions:

Conclusion 1: Focus on building flexible policies, institutions and tools and systems of checks and balances to support decision makers on AI issues further down the line.

Conclusion 2: Focus on the AI policies that are being implemented that could impact decisions on TAI. (Which is largely high level strategies on AI and some regulation of digital technology and data use.)

Overall, from this, I conclude that it would be useful to focus efforts on:

Optimising the policy decision-making processes that could have an effect on TAI development

This should include the adoption of best-practice decision-making processes in institutions such as governments, tech regulators and militaries. (Although this should not be to the exclusion of pushing for any clear specific policies that are identifiable and beneficial.)

The rest of this section explores this idea in more detail.


Examples of the kinds of policies this implies

Ensuring that any AI regulators have a reasonable amount of autonomy from government, expert staff, public support and the flexibility to adapt to new technological developments, and that they use regulatory best practice (e.g. outcome-focused regulation) and so forth. (Like the Office for Nuclear Regulation or the Human Fertilisation and Embryology Authority.)

Policies that set expected standards for good governance in the tech industry, including the expertise and ethical behaviour of senior officials at large firms and the need for clear lines of responsibility. (Like the corporate governance rules for the banking industry).

Future generations policies that encourage concern for future wellbeing (Like the Wellbeing of Future Generations (Wales) Act 2015).

Having a check and balance on the use and development of AI by the military. (Like the UK's Defence Nuclear Safety Regulator, or the UK's commitment to international nuclear treaties, which provide checks on the safety of the UK military's development of nuclear capabilities.)

Further research: I will be writing in more detail about the kinds of decision-making reforms I would like to see, in particular on technology regulation and improving long-term planning, with a UK focus. I would be interested in others doing similar research, especially on other areas or other countries.


Further considerations

Based on my experience I would add the following further qualifiers to consider when improving institutional policy decision-making processes:


Conclusions: How certain can we be that this will help?

There is still a high level of uncertainty (as with any work relating to TAI or the long-run future). My anecdotal experience suggests that well-designed institutions make good decisions and that a lack of checks and balances leads to very bad decisions. This seems to be a widely supported view.

It is also possible to look at evidence from international development, where developing-world governance reforms and anti-corruption measures have been a focus of interventions for the last few decades. The evidence I have come across seems to suggest that:

It is also worth considering that, on the whole, humanity knows how to do this. Systems design, organisational design, regulation, policy making, etc. are well-researched and often-applied disciplines. There is a host of best practice and relevant examples to draw from. (That said, as above, solutions need to be adapted to circumstances, and creating new processes can present a challenge.)

Overall I think we can be reasonably confident that such policies are robustly positive and have a non-negligible expected impact on TAI development.

Further research: The above is a qualitative argument for the value of this policy work. It would be good to see a quantitative cost-effectiveness estimate.

Further research: I would love to see research on exactly how important good institutions are and research from experts in international development governance reforms on what we can learn about how to shape domestic institutions.


Why you might think this is incorrect

One might argue that:


Epistemic status of conclusion

Medium.

The points in this document look sensible to me. My main concern is that this document appears to be me making the case that the areas of domestic policy I understand best (regulation and institutional decision making) are the most relevant to TAI. I am concerned that this has been driven by motivated reasoning (when you have a hammer, everything looks like a nail).


Next steps

People looking to influence TAI policy, or considering careers in AI policy, should also consider shifting into policy areas to do with institutional design or regulation.

Institutions and individuals who are concerned about TAI (or more generally about existential risks) and who have experience of policy development and working with governments should be writing, publishing and talking to government on matters of institutional design.

Further research should be done on the areas listed above and on the areas out of scope (such as international AI policy and corporate AI governance).

Everyone should feel free to provide feedback and comments and questions or get in touch with me at policy@ealondon.com .

I will be writing something on the specific policy suggestions I would like to see.

Further research

I hope to write some stuff on:

Let me know if there is anything you think you would find particularly useful or not useful.


With thanks to Seb and Julia for comments. All views here are my own and are not representative of anyone else.

3 comments

Comments sorted by top scores.

comment by Khorton · 2019-08-27T14:02:19.331Z · score: 9 (5 votes) · EA(p) · GW(p)

I'm not used to the acronym TAI. If the title had included 'Transformative AI' rather than 'TAI', it would have been easier to know if it's relevant to me.

comment by weeatquince · 2019-08-27T14:23:00.552Z · score: 6 (3 votes) · EA(p) · GW(p)

Thank you for the useful feedback: Corrected!

comment by MichaelA · 2019-10-02T05:21:57.211Z · score: 1 (1 votes) · EA(p) · GW(p)

I think all the four topics you highlight for potential research at the end are important, but that I'd be particularly interested in discussions of how, concretely, long-termism should and could be promoted in policy.

Also, did you mean to say "qualitative" instead of the first "quantitative" in this sentence?

Further research: The above is a quantitative argument for the value of this policy work. It would be good to see a quantitative cost-effectiveness estimate.