Posts

Our forthcoming AI Safety book 2019-08-30T18:32:33.243Z · score: 10 (11 votes)
Quantum computing concerns? 2019-05-06T23:04:26.395Z · score: 9 (14 votes)

Comments

Comment by len-hoang-lnh on Our forthcoming AI Safety book · 2019-09-05T11:24:07.628Z · score: 2 (2 votes) · EA · GW

Well, you were more than right to do so! You (and others) have convinced us. We changed the title of the book :)

Comment by len-hoang-lnh on Our forthcoming AI Safety book · 2019-08-31T09:07:09.131Z · score: 1 (1 votes) · EA · GW

This is a fair point. We do not discuss the global improvement of the world much. I guess we were trying to avoid upsetting those who so far have a negative vision of AI.

However, Chapter 5 does strongly emphasize the opportunities of (aligned) AIs across a very large number of fields. In fact, we argue that there is a compelling case that fighting AI progress is morally wrong (though, of course, there is an equally compelling flip side to that argument if one is concerned about powerful AIs...).

We should probably add something about the personification of AI. This does indeed have negative side effects. But when handled carefully, especially for reinforcement learning AIs, it is a very useful way to think about AIs and to anticipate their actions.

Thanks for the comment, Paul!

Comment by len-hoang-lnh on Our forthcoming AI Safety book · 2019-08-31T08:59:07.685Z · score: 1 (4 votes) · EA · GW

This is a good point. The book does indeed focus a lot on research questions.

We do see value in many corporations discussing AI ethics. In particular, there seems to be a rise in ethical discussions within the big tech companies, which we hope to encourage. In fact, in Chapter 7, we urge AI companies like Google and Facebook not only to take part in AI ethics discussion and research, but to actively motivate, organize and coordinate it, typically by sharing their AI ethics dilemmas and perhaps parts of their AI codes. In a sense, they have already started to do so.

Another point is that, given our perceived urgency of AI Safety, it seems useful to reach out to academic talent in many different ways. Targeted discussions do improve the quality of the conversation, but we fear that they may not "scale" sufficiently. We feel that some academics might be quite receptive to reflecting on the public discussion. But we may be underestimating the difficulty of making this discussion productive...

(I have given a large number of public talks and found it quite easy to raise the concerns of the book with all sorts of audiences, including start-ups and tech companies, but I do greatly fear what could happen with the media...)

I should add that the book goes to great lengths to encourage calm thinking and fruitful discussions on the topic. We even added a section in Chapter 1 where we apologize for the title and clarify the purpose of the book. We also ask readers to be pedagogical and charitable themselves when criticizing or defending the book's theses. But clearly, this content will only have an impact on those who actually read the book.

Anyway, thanks for your comment. We're definitely pondering it!

Comment by len-hoang-lnh on Our forthcoming AI Safety book · 2019-08-31T08:13:41.903Z · score: 3 (2 votes) · EA · GW

The book will be published by EDP Sciences. They focus a lot on textbooks, but they also publish outreach books. I published my first book, on Bayesianism, with them.

We hope to reach all sorts of people who are intrigued by AI but have no background in the topic. We also hope that more technical readers will find the book useful as an overview of AI Safety.

I should point out that I run a YouTube channel, whose audience will likely form the core readership of the book as well.

Comment by len-hoang-lnh on Quantum computing concerns? · 2019-06-13T19:39:02.041Z · score: 2 (2 votes) · EA · GW

Thanks! This is reassuring. Last week I met someone doing their PhD in post-quantum cryptography, and they told me about an ongoing competition to set the standards for such cryptography. The transition seems to be on its way!

Comment by len-hoang-lnh on Aligning Recommender Systems as Cause Area · 2019-06-13T19:30:17.635Z · score: 2 (2 votes) · EA · GW

Great post! It's very nice to see this problem being put forward. Here are a few remarks.

It seems to me that the post may underestimate the scale of the problem. Two statistics suggesting this are that there are now more views on YouTube than searches on Google, and that 70% of those views come from YouTube's recommendations. Meanwhile, psychology stresses biases like the availability heuristic and the mere-exposure effect, which suggest that YouTube strongly influences what people think, want and do. Here are a few links about this:

https://www.visualcapitalist.com/what-happens-in-an-internet-minute-in-2019/

https://www.cnet.com/news/youtube-ces-2018-neal-mohan/

https://www.youtube.com/watch?v=cebFWOlx848

I would also argue that the post may underestimate the neglectedness of the problem. I have personally talked to many people from different areas: social sciences, healthcare, education, environmental activism, the media, YouTube and AI Safety research. After ~30-minute discussions, essentially all of them acknowledged that they had overlooked the importance of aligning recommender systems. For instance, one problem is known as "mute news", i.e. the fact that important problems are overshadowed by what recommender systems put forward. I'd argue that the problem of mute news is neglected.

Having said this, it seems to me that the tractability of the problem may be overestimated. For one thing, aligning recommender systems is particularly hard because they act in so-called "Byzantine" environments: any small modification to a recommender system is systematically met with SEO-like optimization strategies from content creators. This is discussed in the following excellent series of videos featuring interviews with Facebook and Twitter employees:

https://www.youtube.com/watch?v=MUiYglgGbos&list=PLtzmb84AoqRRFF4rD1Bq7jqsKObbfaJIX

I would argue that aligning recommender systems may even be harder than aligning AGI, because we need to get the objective function right without an AGI to help us do so. As such, I'd argue that this is a perfect practice playground for alignment research, advocacy and policy. In particular, I'd argue that we too often view AGI as a system that *we* get to design. But what seems just as hard is getting leading AI companies to agree to align it.

I discussed this at a bit more length in this talk (https://www.youtube.com/watch?v=sivsXJ1L1pg) and in this paper: https://arxiv.org/abs/1809.01036.