Donating against Short Term AI risks

post by Jan-WillemvanPutten · 2020-11-16T12:23:10.469Z · EA · GW · 1 comment

This is a question post.


I have a question regarding possible donation opportunities in AI. From my understanding, AI research is not underfunded in general, and AI safety research mostly focuses on the long-term risks of AI. In that light, I am very curious what you think about the following.

I received a question from someone who is worried about the short-term risks coming from AI. His arguments are along the lines of: We currently observe serious destabilization of society and democracy caused by social media algorithms. Over the past months a lot has been written about this, e.g. that it contributes to a further rise of populist parties. These parties are often against extra climate change measures, against effective global cooperation on other pressing problems, and more aggressive on international security. In this way, polarization through social media algorithms could increase short-term X-risks like climate change, nuclear war, and even biorisks and AI.

Could you answer the following questions?

Thank you all for the response!

Answers

answer by kokotajlod · 2020-11-16T15:24:29.720Z · EA(p) · GW(p)

My off-the-cuff answers:
--Yes, the EA community neglects these things in the sense that it prioritizes other things. However, I think it is right to do so. It's definitely a very important, tractable, and neglected issue, but not as important or neglected as AI alignment, for example. I am not super confident in this judgment and would be happy to see more discussion/analysis. In fact, I'm currently drafting a post on a related topic (persuasion tools).
--I don't know, but I'd be interested to see research into this question. I've heard of a few charities and activist groups working on this stuff but don't have a good sense of how effective they are.
--I don't know much about them; I saw their film The Social Dilemma and liked it.

comment by Jan-WillemvanPutten · 2020-11-16T15:28:31.387Z · EA(p) · GW(p)

Thanks! I would love to see more opinions on your first argument: 

  • Do we believe that there is no significant increase in X-risk? (no scale)
  • Do we believe there is nothing we can do about it? (not solvable)
  • Do we believe there are many well-funded parties working on this issue? (not neglected)
comment by kokotajlod · 2020-11-16T21:37:04.581Z · EA(p) · GW(p)

I can't speak for anyone else, but for me:
--Short term AI risks like you mention definitely increase X-risk, because they make it harder to solve AI risk (and other x-risks too, though I think those are less probable)
--I currently think there are things we can do about it, but they seem difficult: Figuring out what regulations would be good and then successfully getting them passed, probably against opposition, and definitely against competition from other interest groups with other issues.
--It's certainly a neglected issue compared to many hot-button political topics. I would love to see more attention paid to it and more smart people working on it. I just think it's probably not more neglected than AI risk reduction.

Basically, I think this stuff is currently at the "There should be a couple of EAs seriously investigating this, to see how probable and large the danger is and try to brainstorm tractable solutions" stage. If you want to be such an EA, I encourage you to do so, and would be happy to read and give comments on drafts, video chat to discuss, etc. If no one else was doing it, I might even do it myself. (Like I said, I am working on a post about persuasion tools, motivated by the feeling that someone should be talking about this...)

I think such an investigation would probably only confirm my current opinions (yup, we should focus on AI risk reduction directly rather than on raising the sanity waterline via reducing short-term risk), but there's a decent chance it would change my mind and make me recommend that more people switch from AI risk work to this.

comment by Jan-WillemvanPutten · 2020-11-17T09:27:08.532Z · EA(p) · GW(p)

Thanks, great response kokotajlod. Do we know whether there are already other EAs seriously investigating this, to see how probable and large the danger is and to brainstorm tractable solutions?

At the moment I am quite busy with community building work for EA Netherlands, but I would love to be part of a smaller group to discuss this. I am relatively new to this forum; what would be the best way to find collaborators for this?

comment by kokotajlod · 2020-11-17T15:41:49.294Z · EA(p) · GW(p)

Here are some people you could reach out to:
Stefan Schubert (IIRC he is skeptical of this sort of thing, so maybe he'll be a good addition to the conversation)
Mojmir Stehlik (He's been thinking about polarization)
David Althaus (He's been thinking about forecasting platforms as a potentially tractable and scalable intervention to raise the sanity waterline)

There are probably a bunch of people who are also worth talking to but these are the ones I know of off the top of my head.

1 comment

Comments sorted by top scores.

comment by Sean_o_h · 2020-11-17T10:56:08.248Z · EA(p) · GW(p)

A couple of resources that may be of interest here:

- The work of Aviv Ovadya of the Thoughtful Technology Project; don't think he's an EA (he may be, but it hasn't come up in my discussions with him): https://aviv.me/

- CSER's recent report with the Alan Turing Institute and DSTL, which isn't specific to AI and social media algorithms only, but addresses these and other issues in crisis response:
"Tackling threats to informed decisionmaking in democratic societies"
https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

- Recommendations for reducing malicious use of machine learning in synthetic media (Thoughtful Technology Project's Aviv Ovadya and CFI's Jess Whittlestone)
https://arxiv.org/pdf/1907.11274.pdf

- And a short review of some recent research on online targeting harms by CFI researchers

https://www.repository.cam.ac.uk/bitstream/handle/1810/296167/CDEI%20Submission%20on%20Targeting%202019.pdf?sequence=1&isAllowed=y