CSER and FHI advice to UN High-level Panel on Digital Cooperation

post by HaydnBelfield · 2019-03-08T20:39:29.657Z · 7 comments

This is a link post for https://www.cser.ac.uk/news/advice-un-high-level-panel-digital-cooperation/


  UN High-level Panel on Digital Cooperation: A Proposal for International AI Governance
    Issues in Digital Cooperation
    What Values and Principles Should Underpin Cooperation?
    Improving Cooperation on AI: Options for Global Governance

Researchers from Cambridge University's Centre for the Study of Existential Risk and Oxford University's Centre for the Governance of AI at the Future of Humanity Institute submitted advice to the UN Secretary-General's High-level Panel on Digital Cooperation.

The High-level Panel on Digital Cooperation was established by the UN Secretary-General in July 2018 to identify good examples and propose modalities for working cooperatively across sectors, disciplines and borders to address challenges in the digital age. It is co-chaired by Melinda Gates and Jack Ma.

The full submission is below.


UN High-level Panel on Digital Cooperation: A Proposal for International AI Governance

Authors: Dr Luke Kemp [1], Peter Cihon [2], Matthijs Michiel Maas [2], Haydn Belfield [1], Dr Seán Ó hÉigeartaigh [1], Jade Leung [2] and Zoe Cremer [1]. ([1] = CSER, [2] = FHI)


International Digital Cooperation must be underpinned by the effective international governance of artificial intelligence (AI). AI systems pose numerous transboundary policy problems in both the short and the long term. The international governance of AI should be anchored to a regime under the UN which is inclusive (of multiple stakeholders), anticipatory (of fast-progressing AI technologies and impacts), responsive (to the rapidly evolving technology and its uses) and reflexive (critically reviewing and updating its policy principles). We propose some options for the international governance of AI which could help coordinate existing international law on AI, forecast future developments, risks and opportunities, and fill critical gaps in international governance.

1. Issues in Digital Cooperation

Digital cooperation will rise or fall with the use or misuse of rapidly developing artificial intelligence (AI) technologies. AI will transform international social, economic, and legal relations in ways that spill over far beyond the digital realm. Digital cooperation on AI is essential to help stakeholders build capacity for the ongoing digital transformation and to support a safe and inclusive digital future. Accordingly, this submission will focus on the international governance of AI systems.

AI technologies are dual-use. They present opportunities to advance transport and medicine, support the transition to renewable energy, and raise standards of living. Some systems may even be used to strengthen the monitoring and enforcement of international law and improve governance. Yet they also have the potential to create significant harms, including labour displacement, unpredictable weapons systems, strengthened totalitarianism and destabilizing strategic shifts in the international order (Dafoe 2018; Payne 2018). The challenges of AI stem both from capabilities that already exist or will be reached in the near term (within five years), and from longer-term prospective capabilities. The two are intricately intertwined: how we address the near-term challenges of AI will shape longer-term policy and technology pathways (Cave and Ó hÉigeartaigh 2019). Yet the long-term disruptive impacts could dwarf other concerns. Both need to be governed in tandem.

Challenges from Existing and Near-Term Capabilities

Challenges from Long-Term Capabilities

While most of these challenges have not received sufficient attention, several have been mapped in The Malicious Use of Artificial Intelligence report (Brundage and Avin et al. 2018), AI Governance: A Research Agenda (Dafoe 2018), and the Future of Life Institute's (2019) 14 policy challenges. Greater attention is needed to forecasting these potential challenges. Both the foresight of potential policy problems and the magnitude of existing issues underline the need for international AI governance.

2. What Values and Principles Should Underpin Cooperation?

There are already over a dozen sets of principles on AI composed by governments, researchers, standard-setting bodies and technology corporations (cf. Zeng et al. 2019). Most of these coalesce around key principles of ensuring that AI is used for the common good, does not cause harm or impinge on human rights, and respects values such as fairness, privacy, and autonomy (Whittlestone et al. 2019). We suggest that the High-level Panel on Digital Cooperation compile and categorise these principles in its synthesis report. Importantly, we need to examine trade-offs and tensions between the principles to refine rules for how they can work in practice. This can inform future negotiations on codifying AI principles.

The international governance of AI should also draw from legal precedents under the UN. In addition to general principles of international law, principles such as the polluter pays principle (those who create externalities should pay for the damages and management of those externalities) could be retrofitted from the realm of environmental protection to AI policy. Values from bioethics, such as autonomy, beneficence (use for the common good), non-maleficence (ensuring AI systems do not cause harm or violate human rights), and justice are also applicable to AI (Beauchamp and Childress 2001; Taddeo and Floridi 2018). Governance should also be responsive to existing instruments of international law, and cognizant of recent steps by international regulators on the broader range of global security challenges created by AI (Kunz and Ó hÉigeartaigh 2019). Finally, while some specialization of AI governance regimes for distinct domains is unavoidable, steps should be taken to ensure these distinct standards or regimes reinforce rather than clash with each other.

3. Improving Cooperation on AI: Options for Global Governance

International governance of AI should be centred around a dedicated, legitimate and well-resourced regime. This could take numerous forms, including a UN specialised agency (such as the World Health Organisation), a Related Organisation to the UN (such as the World Trade Organisation) or a subsidiary body to the UN General Assembly (such as the UN Environment Programme). Any regime on AI should fulfil the following four objectives:

The Panel should consider the following options as components for an international regime:

The outlined options for a regime should be anticipatory, reflexive, responsive and inclusive, adhering to the key tenets of Responsible Research and Innovation (Stilgoe et al. 2013). To be inclusive, we suggest following the ILO's innovative model of multipartite representation and voting: voting rights could be distributed to nation states as well as to representatives of other critical stakeholder groups. The IPAI would provide the ability to anticipate emerging challenges and respond to the quickly evolving technological landscape. Responsiveness could be built into the body by having principles on AI reviewed and updated every three years, ensuring that policies reflect the latest developments and in-country experiences.

With prudent action and foresight, the UN can help ensure that AI technologies are developed cooperatively for the global good.


Beauchamp, T. and Childress, J. (2001). Principles of Biomedical Ethics. Oxford University Press, USA.
Brundage, M., Avin, S. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute and the Centre for the Study of Existential Risk.
Cave, S. and Ó hÉigeartaigh, S. (2019). An AI Race for Strategic Advantage: Rhetoric and Risks. In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
Cave, S. and Ó hÉigeartaigh, S. (2019). Bridging near- and long-term concerns about AI. Nature Machine Intelligence, 1: 5-6.
Dafoe, A. (2018). AI Governance: A Research Agenda. Future of Humanity Institute, Oxford University.
Guterres, A. (2018). UN Secretary-General's Strategy on New Technologies. United Nations, September 2018. http://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf
Kunz, M. and Ó hÉigeartaigh, S. (forthcoming). Artificial Intelligence and Robotization. In R. Geiss and N. Melzer (eds.), Oxford Handbook on the International Law of Global Security. Oxford University Press.
Payne, K. (2018). Artificial Intelligence: A Revolution in Strategic Affairs? IISS.
Stilgoe, J., Owen, R. and Macnaghten, P. (2013). Developing a Framework for Responsible Innovation. Research Policy, 42(9): 1568-1580.
Taddeo, M. and Floridi, L. (2018). How AI Can Be a Force for Good. Science, 361(6404): 751-752. https://doi.org/10.1126/science.aat5991
Whittlestone, J., Nyrup, R., Alexandrova, A. and Cave, S. (2019). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the 2nd AAAI/ACM Conference on AI, Ethics, and Society.
Zeng, Y., Lu, E. and Huangfu, C. (2019). Linking Artificial Intelligence Principles. In Proceedings of the AAAI Workshop on Artificial Intelligence Safety (SafeAI 2019). http://arxiv.org/abs/1812.04814


Comments sorted by top scores.

comment by Aaron Gertler (aarongertler) · 2019-03-09T01:32:36.500Z

This post and CSER's other advice post made me wonder how well one can gauge the effect of providing guidance to large governmental bodies.

For these or any past submissions, have you been able to gather evidence for how much CSER's advice mattered to an entire panel (or even just one member of a panel who took it especially seriously)?

Another question: Are any organizations providing advice to these panels that directly contradicts CSER's advice, or that seems to push in a bad or unimportant direction? It's hard to tell how much of this is "commonsense things everyone agrees on that just need more attention" vs. "controversial measures to address problems some people either don't believe in or think should be handled differently".

comment by Sean_o_h · 2019-03-10T17:12:48.029Z

(Edit: Disclosure: I am executive director of CSER)

Re: your second question, I don't personally have a good answer re: bad advice - as these calls get hundreds of submissions, I haven't read all or even most. (I do recall seeing some that dismissed or ridiculed AI xrisk as conceptually nonsensical.)

Submissions I've been involved in tend towards (a) summarising already published work, (b) making sensible, noncontroversial recommendations, and (c) occasionally gently keeping the Overton window open (e.g. 'many AI experts think AGI is plausible at some point in the future, but on a very uncertain timeline; we should take safe development seriously, and there is good work that can be done, and is being done, at present on technical AI safety', as opposed to 'AI xrisk is real, scary and imminent'). The aim of (c) is typically to counterweigh the 'AI safety/alignment is nonsense and everyone working on it is deluded' view rather than to promote action.

There are a few reasons for this. These open calls for evidence are noisy processes, and not the best way to influence policy on controversial topics or in very concrete ways. However, producing reputable input for them is a good way to get established as a reputable, trustworthy expertise source and partner. In particular, my impression is that it allows people in government, including those already concerned with these issues, greater scope to engage with orgs like ours in more in-depth conversation and analysis (more appropriate for the 'controversial/concrete action-relevant' engagement). It's easier to justify investing time and resources in an org that's been favourably featured in these processes as opposed to 'random centre somewhere working on slightly unusual topics'. But it can be hard to disentangle exactly how much these submissions play a role, versus Cambridge/Oxford 'brand', track record of academic success and publications, 1-1 meetings with policymakers that would have happened anyway, etc.

comment by Sean_o_h · 2019-03-10T16:50:44.203Z

(Edit: Disclosure: I am executive director of CSER)

Thanks for the good questions. These two submissions are very recent, so there has been little time for them to demonstrate follow-on influence/impact. Some evidence from these and previous submissions indicates the work was likely well-received/influential:

  • The CSER/GovAI researchers' input to the UN was one of a small subset chosen to present at a 'virtual town hall' organised by the UN Panel (108 submissions; 6 presented).
  • House of Lords AI call (2017/2018): CSER/CFI submissions to the House of Lords AI call for evidence were favourably received. We were subsequently contacted for more input on specific questions (including existential risk, AI safety, and horizon-scanning). The committee requested a visit to Cambridge to hear presentations and discuss further. They organised three such visits; the other two were to DeepMind and the BBC. Again, this represents visits to a small subset of the groups/individuals who participated; there were 223 submissions (although there were also an additional 22 oral presentations to this committee, including one from Nick Bostrom). We received informal feedback that the submissions were influential, including material being prominently displayed in presentations during committee meetings. Work from CSER and partners, including the Malicious Use of AI report, is referenced in the subsequent House of Lords report.
  • House of Commons AI call (2016): There was a joint CSER/FHI submission, as well as an individual submission from a senior CSER/CFI scholar. Both resulted in invites to present evidence in Parliament (again, only extended to a small subset, though I don't have the numbers to hand). The individual submission, from then-CSER Academic Director Huw Price, made one principal recommendation: "What the UK government can most usefully add to this mix, in my view, is a standing body of some kind, to play a monitoring, consultative and coordinating role for the foreseeable future... I recommend that the Committee propose the creation of a standing body under the purview of the Government Chief Scientific Adviser, charged with the task of ensuring continuing collaboration between technologists, academic groups including the Academies, and policy-makers, to monitor and advise on the longterm future of AI." While it's hard to prove influence definitively, the Committee followed up with the specific recommendation: "We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression" https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/896/89602.htm. This was subsequently followed by the establishment of the Centre for Data Ethics and Innovation, which has a senior CSER/CFI member on the board, and has a not-dissimilar structure and remit: "The Centre for Data Ethics and Innovation (CDEI) is an advisory body set up by Government and led by an independent board of expert members to investigate and advise on how we maximise the benefits of data-enabled technologies, including artificial intelligence (AI)."
https://www.gov.uk/government/groups/centre-for-data-ethics-and-innovation-cdei
  • There have been various other followups and engagement with government that I'm less able to write openly about; these include meetings with policymakers and civil servants; a series of joint workshops with a relevant government department on topics relating to the Malicious Use report and other CSER work; and a planned workshop with CDEI.
comment by Aaron Gertler (aarongertler) · 2019-03-11T23:37:37.894Z

Thanks for both of these answers! I'm pleasantly surprised by the strength and clarity of the positive feedback (even if some of it may result from the Cambridge name, as you speculated). I'm also surprised at the sheer number of submissions to these groups, and glad to see that CSER's material stands out.

comment by Sean_o_h · 2019-03-12T10:45:15.618Z

Thanks Aaron!

> glad to see that CSER's material stands out.

Most of our submissions are in collaboration with other leading scholars/organisations, e.g. FHI/GovAI and CFI, so credit should rightly be shared. (We tend to coordinate with other leading orgs/scholars when considering a submission, which often naturally leads to joint submissions.)

comment by Sean_o_h · 2019-03-09T13:57:04.007Z

These are good questions, thanks Aaron. A quick placeholder to say that I'll give an answer (from my personal perspective) tomorrow. (Haydn may also have comments on, and evidence relating to, this).

comment by Michael_S · 2019-03-12T23:46:02.418Z

I'd be curious which initiatives CSER staff think would have the largest impact in expectation. The UNAIRO proposal in particular looks useful to me for making AI research less of an arms race and spreading values between countries, while being potentially tractable in the near term.