Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators

post by stefan.torges (storges) · 2019-11-01T14:41:09.961Z · score: 18 (8 votes) · EA · GW

Contents

  Summary
  Methodology and limitations
  Scientific publications
  Patents

How does the current state of artificial intelligence (AI) research and development (R&D) compare among the US, China, and Europe? I will publish my findings on this question in two posts. This one focuses on R&D output (scientific publications and patents), and the second one will focus on R&D input. I might also publish a third article that covers conclusions from the first two.

The AI governance community concerned with existential risks has so far mainly focused on US–China relations while neglecting Europe. Part of the reason for this might be that Europe significantly lags behind the two countries in terms of relevant AI R&D. However, my impression is that this has not been substantiated since Europe is simply missing from most analyses.

I also aim to contribute to a more clear-eyed comparison of the state of AI R&D in the US and China. This may improve the general conversation about an “AI arms race,” which seems to be especially prominent in the US, where many believe that the country is “losing” to China.

Since many reports and articles on this comparison tend to focus on a single analysis or study, my goal was to review the available literature for the most important indicators of AI R&D, allowing a more holistic picture. I think additional work on constructing a solid index for AI R&D would be valuable. The Center for Data Innovation, Jeffrey Ding, and a German think tank, Stiftung Neue Verantwortung (only available in German), have done some preliminary work on this.

Summary

Methodology and limitations

Scientific publications and patents are the most common quantitative indicators of R&D output.[1] One could also use qualitative assessment tools like expert evaluation, but this is outside the scope of this post. My impression is also that these two indicators are widely used in the field of AI (e.g., China AI Development Report 2018 and AI Index 2018).

My focus was on creating a snapshot of the current landscape, as opposed to a projection into the future. However, where available and clear, I do point out trends that one could cautiously use to extrapolate into the future.

I looked for relevant sources by conducting keyword searches for the various indicators on Google and Google News. I was also already aware of several reports and studies on this topic. I looked up any cited references. I welcome pointers to additional sources.

My general impression was that the quality of studies in this field is not particularly high. There is no peer-reviewed work available as far as I can tell. Reports do not compare their findings, and methodologies are often vague. Thus, I would treat individual findings with the appropriate care. I have tried to provide methodological details where relevant and available. I did not apply a strict definition of “Europe,” since the sources differed in their definitions. The reader may assume that “Europe,” unless otherwise stated, refers roughly to the EU member states plus the European Free Trade Association (EFTA) countries: Iceland, Liechtenstein, Norway, and Switzerland.

I did not attempt to arrive at numerical estimates for the indicators. Instead, I decided to share my conclusions in qualitative terms to avoid conveying a level of rigor and precision that the underlying data do not permit.

The indicators I examined are on a very high level and fail to capture a lot of nuance. For instance, AI technology has many different applications (e.g., natural language processing, computer vision), is used in many different sectors (e.g., online retail, defense), and has different layers (e.g., development platforms like TensorFlow compared to concrete AI algorithms). The state of R&D with respect to these might differ significantly across countries or regions. However, I believe a more coarse-grained investigation is still informative rather than misleading.

Almost all studies and analyses I found do not focus only on machine learning. They include all publications related to AI, which also includes work on symbolic AI (e.g., expert systems). This might severely limit the relevance of these indicators, since the most notable advances in AI capabilities since 2012 have been in machine learning (deep learning and reinforcement learning in particular). The severity of this limitation depends on the relevance of symbolic AI for future progress. There is considerable disagreement on this point amongst AI experts.

Scientific publications

Numerous sources have reported data on the total number of publications by country or region. Some also included an analysis of “highly cited” publications, but definitions of this were not always clear or consistent. I also aggregated numerous analyses on the distribution of contributions to top AI conferences as a separate indicator. Lastly, I included my own analysis of AI benchmark papers as reported by the Electronic Frontier Foundation, since I am not aware of any other analysis of this dataset but consider it a good and unique representation of cutting-edge AI research.

Number of publications. In light of all the evidence I could find, I tend to believe that in terms of the overall number of scientific publications on AI, Europe is slightly ahead of the US and China, with China gaining ground. According to the only source that restricted the analysis to deep learning, China seems to have a significant lead on the US, which, in turn, is significantly ahead of Europe. The trend lines do not suggest that this ranking will change any time soon.

Number of highly cited publications. There is contradictory evidence for this indicator. However, all sources agree that Europe is not in the lead. I would expect the US and China to be in the same ballpark, with trend lines favoring China. The sources for which I can best verify that they actually analyzed highly cited work seem to favor the US (AI Index 2018, Allen Institute analysis). I could not find evidence for machine learning specifically, since the US National Artificial Intelligence Research and Development Strategic Plan does not include an analysis of highly cited work.

Number of publications at top AI conferences. For this section, I examined contributions to the most relevant AI conferences as measured by their h5-index on Google Scholar. I included conferences in the top 10 publications of the categories Artificial Intelligence and Computer Vision & Pattern Recognition. In terms of accepted papers at such conferences, US institutions enjoy a clear lead ahead of European and Chinese ones. The evidence also shows an edge for European institutions over Chinese ones, but not as decisive as the US lead.

This trend is broadly echoed in aggregative data from the 2019 Global Artificial Intelligence Industry Data Report. They studied accepted papers from the Conference on Computer Vision and Pattern Recognition (CVPR), the International Conference on Computer Vision (ICCV), NeurIPS, and the International Conference on Robotics and Automation (ICRA) and found American authors to account for 52% of contributions and Chinese authors[5] to account for 18% of contributions. They do not provide numbers for European authors. Out of the top 15 institutions, they found eight to be based in the US, four in Europe (ETH, CNRS, INRIA, Max-Planck-Gesellschaft), and three in China.

The only exception to this general pattern is the AAAI Conference on Artificial Intelligence, at which Chinese and US institutions are on equal footing and European ones are lagging significantly behind (see below). I do not know what accounts for this. I would expect most papers at these conferences to relate to machine learning, as opposed to symbolic AI.

Number of top benchmark publications. I analyzed the AI Progress Measurement database of the Electronic Frontier Foundation. I excluded duplicate entries (i.e., papers that were included on multiple benchmarks) and only studied the five best-performing algorithms per benchmark.[6] Across all benchmarks, US institutions clearly dominate. If one counts DeepMind as a US institution, then US institutions were involved in 91 unique papers of the database, clearly ahead of Canada (14) and China (14). If all European countries (including the UK, but excluding DeepMind) are aggregated, their institutions were involved in 17 unique papers, which would put them ahead of China. These benchmarks only capture advances in machine learning.
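The counting procedure described above can be sketched in a few lines of Python. This is a minimal illustration of the method (top five entries per benchmark, de-duplication across benchmarks, counts per country), not the actual analysis; the column layout and the toy data are my own assumptions, not the real schema of the EFF database.

```python
# Sketch of the counting procedure: keep the top-N entries per
# benchmark, de-duplicate papers listed on multiple benchmarks,
# then count unique papers per country. Toy data for illustration.
from collections import Counter

# Hypothetical rows: (benchmark, paper, country, score)
rows = [
    ("ImageNet", "Paper A", "US",     0.81),
    ("ImageNet", "Paper B", "China",  0.80),
    ("ImageNet", "Paper C", "US",     0.79),
    ("SQuAD",    "Paper A", "US",     0.88),  # same paper, second benchmark
    ("SQuAD",    "Paper D", "Canada", 0.85),
]

TOP_N = 5  # only the five best-performing entries per benchmark

# 1. Group entries by benchmark.
by_benchmark = {}
for bench, paper, country, score in rows:
    by_benchmark.setdefault(bench, []).append((score, paper, country))

# 2. Keep the top-N entries per benchmark, ranked by score.
kept = []
for entries in by_benchmark.values():
    kept.extend(sorted(entries, reverse=True)[:TOP_N])

# 3. De-duplicate papers that appear on multiple benchmarks.
unique_papers = {paper: country for _, paper, country in kept}

# 4. Count unique papers per country.
counts = Counter(unique_papers.values())
print(counts)  # Counter({'US': 2, 'China': 1, 'Canada': 1})
```

With the real export, one would also need a rule for multi-country author lists (the figures above suggest papers are credited to every involved institution's country), which the toy dictionary glosses over.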

Patents

I aggregated the numerous sources reporting the total number of patent families by country or region. I also tried to find reliable data on highly cited or highly relevant patents, but I could only find a single graph in a report by the World Intellectual Property Organization.

Number of patent families. Based on the available evidence, it seems to me that the US is the global leader in terms of AI patent output. I have considerable uncertainty regarding the comparison of Europe and China. Tentatively, I would put Europe slightly in front for now based on the AI Index 2018 and the report by the UK Intellectual Property Office (see below). There also appears to be some evidence that the patent count in China is inflated (via Jeffrey Ding’s US congressional testimony).

Number of highly cited patent families. The only source on this I could find is the WIPO Technology Trends 2019 – Artificial Intelligence report (p. 88). They find that US institutions have filed ~28,000 highly cited patent families, followed by Japan (~6,000), Germany (~1,000), South Korea (~1,000), and China (~1,000). I could not find their definition of “highly cited” or the time period from which the data are sourced. I would not give this a lot of weight, but the margin of the US lead is still noteworthy.


  1. People in the effective altruism community might be particularly interested in indicators that track AI R&D as it relates to the development of transformative AI (TAI). However, this is outside the scope of this post. I should note that out of the indicators included in this post, scientific publications are probably more relevant than patents because the former are more likely to include foundational breakthroughs, which seem to be a better indicator for R&D relevant for TAI. Similarly, highly cited publications are likely more informative than the total number of publications. ↩︎

  2. This report is authored by the China Institute for Science and Technology Policy at Tsinghua University. I am not in a position to assess the extent to which this report is shaped by the interests of the Chinese government. Tsinghua University is one of the most reputable universities in China and a leading institution in terms of AI research in China. ↩︎

  3. This report is authored by the China Academy of Information and Communications Technology (CAICT). While it is an organ of the Chinese government and “subordinate to the powerful Ministry of Industry and Information Technology (MIIT)” (New America), I am not in a position to assess the extent to which this report is shaped by the interests of the Chinese government. It is notable that even though they used a very similar methodology to the China AI Development Report 2018, they seem to include far fewer papers, and the UK does not seem to make it into the top 5 (contrary to the China AI Development Report 2018). All of this makes me somewhat skeptical about this source. ↩︎

  4. From Wikipedia: “The Allen Institute for Artificial Intelligence (abbreviated AI2) is a research institute founded by late Microsoft co-founder Paul Allen. The institute seeks to achieve scientific breakthroughs by constructing AI systems with reasoning, learning, and reading capabilities.” ↩︎

  5. Since I do not know their methodology, it is not clear if this includes Chinese-Americans with Chinese names or only authors from Chinese institutions, etc. ↩︎

  6. I did this to save time while including the most cutting-edge research. This, however, might exclude papers that were breakthroughs at the time but have since been superseded. Analyzing all entries would solve this issue. ↩︎
