Reflections on EA Global London 2019 (Mrinank Sharma)

post by aarongertler · 2019-10-29

This is a link post for https://mrinanksharma.github.io/post/eag_reflect/

Contents

  Making the Most of EA Global
  AI Policy & Governance
    Do you need technical expertise in AI?
    Industry vs Government
  AI Research
    Applying for Internships at OpenAI
    Labs vs Academia
    Choosing a PhD Topic
    Keeping Up with Papers
    What’s Stopping Advanced Applications of AI?
  Closing Comments

Aaron's note: I'm posting this because I really enjoyed reading a participant's reflections on EA Global. Since every experience is different, I'd love to see more posts like this, especially from other first-timers!


I attended EA Global for the first time in October, and I absolutely loved the experience. I thought it would be useful to go over all the notes that I made — mostly for myself, but also on the off-chance that they might be helpful for other people.

I’ve summarised some of the notes that I made on general topics below. Please note that there may be errors, and that I may have misrepresented people’s views, though this is certainly not my intention!

Making the Most of EA Global

The primary advice that I read before going was to maximise the time spent in one-on-one meetings and workshops rather than talks, most of which are uploaded online later. I only ended up filling in my Whova (conference application) profile fairly late, but I would strongly recommend doing this (early!), as well as reaching out to people who share your interests. I think the advice that I received was mostly spot on, and the most useful experiences that I had were certainly these one-on-one meetings.

I’d also like to echo the advice to write down, or at least consider, your goals for the conference.

Additionally, bring a notepad and make notes! You’ll inevitably forget something really useful.

This EA forum post [EA · GW] by Risto Uuk is great.

AI Policy & Governance

Broadly, we can divide the roles in this field into researchers and those implementing policies (either within industry or within government). If you are interested in AI policy, it is best to focus primarily on your fit within the role rather than shoehorning yourself into a role which you may think is “more important”.

You are a good fit for a researcher if:

You are a good fit for a role in government or industry if:

This is not to say that it isn’t useful to be a researcher with excellent social skills!

Considering long-termism, you ought to try to figure out what needs to happen for a positive outcome post-AGI. The decisions made today will affect the future landscape, but when attempting to convince other people, beware that a long-termist standpoint may alienate them.

Do not underestimate the importance of institutional work; trying to improve institutional capacity and establish norms can be useful.

Do you need technical expertise in AI?

The advice that I heard was that it is mostly not necessary; unless you are already doing a PhD in ML/AI, it is probably not worth pursuing one. However, whilst most questions that you will be trying to answer will not benefit from this knowledge, there is a unique set of questions which you will be better equipped to answer, such as understanding the strategic importance of new developments.

A good target of technical expertise would be to be able to make sense of the Import AI Newsletter.

So if you shouldn’t do an ML PhD, what should you do? The advice seems to be that a degree in International Relations would be very useful.

Industry vs Government

It seems that government work is more neglected than working in an industry lab. Industry experience is still useful, but perhaps more as an intermediate step: it is easier to learn skills and build your reputation in industry, and people in such labs often end up advising government anyway. It tends to be slower to build up credibility in government positions.

AI Research

Applying for Internships at OpenAI

Many internships at OpenAI are organised on an ad hoc basis; if there is somebody you specifically want to work with, it’s best to send them an email with a few ideas, suggesting a collaboration.

Labs vs Academia

There is less pressure to publish at OpenAI than in traditional academia, and there is a significantly higher focus on impact, i.e. the choice of project depends on the OpenAI mission. This is typically not the case in academia.

At OpenAI, the research leads suggest project proposals and roadmaps, which are then iterated upon.

OpenAI seems to focus more on current techniques and less on neuroscience than DeepMind (I’m not entirely sure how accurate this is).

Choosing a PhD Topic

If you are unable to work on safety directly, bear in mind that the transition from normal ML research to safety research seems to be doable and commonly done. You could then choose your topic by considering the following factors:

Keeping Up with Papers

There is a vast number of papers to read, especially in AI! Keeping up with research is hard, but a simple way of prioritising is to ask senior people which papers to read. Slowly, you will develop intuition about which papers to prioritise.

Also follow the Import AI Newsletter, as well as the AI Alignment Newsletter.

What’s Stopping Advanced Applications of AI?

In many cases, there are cultural issues (within an industry) around using algorithms to make crucial decisions. Whilst the interpretability of systems would increase buy-in, there are also key issues with the quality of data, and with the infrastructure to collect high-quality data.

It is worth noting that the barriers here seem not to be technical, so it is unclear how much of an impact technical research would have here.

Closing Comments

I absolutely loved attending EA Global 2019, and one of the most beneficial aspects of going was starting to build up a network of people who share my interests. I learnt a huge deal from other people, and strongly recommend going if you are on the fence!

If you’ve spotted any errors in this post, please do contact me and I’ll do my best to respond and to fix them.

1 comment


comment by ofer · 2019-10-30
What’s Stopping Advanced Applications of AI?
In many cases, there are cultural issues (within an industry) about the application of algorithms to make crucial decisions. Whilst interpretability of systems would increase the buy in, there are also key issues with the quality of data, and the infrastructure to collect high quality data.
It is worth nothing that the barriers here seem to not be technical, so it is unclear how much of an impact technical research would have here.

Perhaps this model was proposed for certain domains? Maybe ones in which laws restrict applications, like driverless cars?

It doesn't seem plausible to me for all domains (for example, it doesn't seem plausible for language models and quantitative trading).