HLAI 2018 Field Report

post by G Gordon Worley III (gworley3) · 2018-08-29T00:13:22.489Z · score: 10 (10 votes) · EA · GW · Legacy · 6 comments

Cross-posted from LW [LW · GW]

I spent the past week in Prague (natively called Praha, but known widely as Prague through the usual process by which others decide your names for you) at the Human Level AI multi-conference, which comprised the AGI, BICA, and NeSy conferences. It also featured the Future of AI track, which was my proximate reason for attending: I spoke on the Solving the AI Race panel, thanks to my submission to GoodAI's Solving the AI Race challenge being selected as one of the finalists. I enjoyed the conference tremendously and the people I met there even more, and throughout the four days I kept noticing things worth sharing more widely, hence this field report.

 

I arrived on Tuesday and stayed at the home of a fellow EA (they can out themselves in the comments if they like, but I won't do it here since I couldn't get in touch with them in time to get their permission to disclose their identity). If you didn't already know, Prague is rapidly becoming a hub for effective altruists in Europe, and having visited, it's easy to see why: the city is beautiful, most people speak enough English that communication is easy for everyone, the food is delicious (there's lots of vegan food even though traditional Czech cuisine is literally meat and potatoes), and it's easily accessible from everywhere on the continent. Some of the recent events held in Prague include the Human-aligned AI Summer School and the upcoming Prague AI safety camp.

 

Wednesday was the first day of the conference, and I honestly wasn't quite sure what to expect, but Ben Goertzel did a great job of setting the tone with his introductory keynote, which made clear the focus was on exploring ideas related to building AGI. We then dove into talk after talk for the next four days, each one considering an idea that might lie on the path to enabling general intelligence for machines, and we doubled up on the third day with a Future of AI program that considered AI policy and ethics issues. I took a lot of notes, but being a safety researcher I'm hesitant to share them because that feels like a hazard that might accelerate AGI development. Instead I'll share a few high-level insights that I think summarize what I learned, without saying anything that has more than a very small (let's say < 5%) chance of giving someone a dangerous inspiration they wouldn't otherwise have had, or wouldn't have had as soon.

 

 

To summarize, I think the main takeaway is that we, being the sort of people who read LessWrong, live in a bubble where we know AGI is dangerous, while outside that bubble people still don't know, or have confused ideas about how it's dangerous, even among the group of people weird enough to work on AGI instead of more academically respectable narrow AI. That's really scary, because the people outside the bubble also include those affecting public policy and making business decisions, and they lack the perspective we share about the dangers of both narrow and general AI. Luckily, this points toward two opportunities we can work on now to mitigate the risks of AGI:

 

  1. Normalize thinking about AI safety. Right now it would be a major improvement if we could move the field of AI research to be on par with the way biomedical researchers think about risks for near-term, narrow applications of their research, let alone getting everyone thinking about the existential risks of AGI. I think most of this work needs to happen at a 1-to-1, human level right now, pushing individual researchers toward a greater focus on safety and reshaping the culture of the AI research community. For this reason I think it's extremely important that I and others make an effort to
    1. attend AI capabilities conferences,
    2. form personal relationships with AI researchers,
    3. and encourage them to take safety seriously.
  2. Establish and publicize a "sink" for dangerous AI research. When people have an idea they think is dangerous (which assumes to some extent that we succeed at the previous objective, though as I mentioned, it already comes up now), they need a default script for what to do. Cybersecurity and biomedical research have standard approaches, and although I don't think their approaches will work the same for AGI, we can use them as models for designing a standard. The sink should then be owned by a team seen as extremely responsible, reliable, and committed to safety above all else. I recommend FHI or MIRI (or both!) take on that role. The sub-actions of this work are to
    1. design and establish a process,
    2. find it a home,
    3. publicize its use,
    4. and continually demonstrate its effectiveness—especially to capabilities researchers who might have dangerous ideas—so it remains salient.

 

These interventions are more than I can take on myself, and I don't believe I have a comparative advantage in executing on them, so I need your help. If you've been thinking about doing more in AI safety, know that there is concrete work you can do now besides technical work on alignment. For my part, I've set an intention that by the time I attend HLAI 2020 we'll have moved at least halfway toward achieving these goals, so I'll be working on them as best I can, but we won't get there if I have to do it alone. If you'd like to coordinate on these objectives, start in the comments below or reach out to me personally and we can talk more.

 

I feel like I've only just scratched the surface of my time at HLAI 2018 in this report, and I think it will take a while to process everything I learned and follow up with everyone I talked to there. But if I had to give my impression of the conference in a tweet, it would be this: we've come a long way since 2014 (Superintelligence, Puerto Rico, Asilomar), and I'm very pleased with the progress, but we have even further to go, so let's get to work!

6 comments

Comments sorted by top scores.

comment by Sean_o_h · 2018-08-29T10:02:13.979Z · score: 4 (4 votes) · EA(p) · GW(p)

Great summary, thanks.

The sink source should then be owned by a team seen as extremely responsible, reliable, and committed to safety above all else. I recommend FHI or MIRI (or both!) take on that role.

Were this to happen, these orgs would not be seen as the appropriate 'owners' by most folk in mainstream AI (I say this as a fan of both). Their work is not really well-known outside of EA/Bay Area circles (other than people having heard of Bostrom as the 'superintelligence guy').

One possible path would be for a high-reputation network to take on this role. E.g. something like the Partnership on AI's safety-critical AI group (which has a number of long-term safety folk on it as well as near-term safety), or something similar. The process might be normalised by focusing on reviewing/advising on risky/dual-use AI research in the near term - e.g. research that highlights new ways of doing adversarial attacks on current systems, or enables new surveillance capabilities (e.g. https://arxiv.org/abs/1808.07301). This could help set the precedents for, and establish the institutions needed for, safety review of AGI-relevant research (right now I think it would be too hard to say in most cases what would constitute a 'risky' piece of research from an AGI perspective, given that most of it for now would look like building blocks of fundamental research).

comment by Peter_Hurford · 2018-08-29T04:02:23.137Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks for sharing these reflections. I really appreciate them and it's exciting to see all this progress. I think some additional context about what the Human Level AI multi-conference is would be helpful. It sounds like it was a mix of non-EA and EA AI researchers meeting together?

comment by Sean_o_h · 2018-08-29T09:50:03.140Z · score: 2 (2 votes) · EA(p) · GW(p)

It sounds like it was a mix of non-EA and EA AI researchers meeting together?

Mostly the former; maybe 95% / 5% or higher. Probably best to describe it as a slightly non-mainstream AI conference (in that it was focused on AGI more so than narrow AI; but had high-quality speakers from DeepMind, Facebook, MIT, DARPA etc) which some EA folk participated in.

https://www.hlai-conf.org/

comment by zdgroff · 2018-09-03T07:25:09.917Z · score: 1 (1 votes) · EA(p) · GW(p)

I talked to two people who said things that indicated they lean EA, asked them whether they identified that way, and they told me they didn't because they associate EA with Singer-style act utilitarianism and self-imposed poverty through maximizing donated income.

This is interesting. What about them seemed EA-aligned? When I came across EA I was attracted to it because of the Singer-style act utilitarianism, and I've had worries that it's drifting too far from that and losing touch with the moral urgency that I felt in the early days. That said, I do think that actually trying to practice act utilitarianism leads to more mature views that suggest being careful about pushing ourselves too far.

comment by G Gordon Worley III (gworley3) · 2018-09-03T21:52:46.652Z · score: 0 (0 votes) · EA(p) · GW(p)

Probably that they expressed interest in doing the most good possible for the world with their work.

comment by G Gordon Worley III (gworley3) · 2018-08-30T17:20:45.284Z · score: 0 (0 votes) · EA(p) · GW(p)

Additional reflections from Marek, CEO of GoodAI, along with links to additional media coverage, including one about whether or not to publish dangerous AI research.