EA/X-risk/AI Alignment Coverage in Journalism

post by Jeremy (captainjc) · 2019-03-09T21:17:28.479Z · score: 9 (5 votes) · EA · GW · 2 comments

A couple of ideas from the recent 80k podcast with Kelsey Piper.

1) Piper suggests that outlets are looking for alternative revenue models, and that Vox (or others) might accept a $100k donation to hire a writer to cover a non-profit issue.

2) Later in the roundtable, Keiran Harris raises his concern about political polarization of long-term causes, arguing that nuclear weapons reduction is already somewhat polarized (and mentioning a few other EA cause areas that are generally supported by the left), and worrying that A.I. alignment and other causes that are not yet polarized could become so. Rob then says "I guess I think having a first, kind of conservative, outspoken person talking about existential risk could be really quite valuable."

So, combining them, what about an EA org or foundation funding another journalistic take on EA/X-risk/AI Alignment in a conservative publication? Even if it were not cost-effective from a coverage perspective alone, helping causes that are not yet polarized gain or retain bipartisan support could be of immense benefit.

I'm curious what others think about this idea. It strikes me that even a weekly column could make a big difference.

Any ideas for publications where something like this might fit? Places like Reason or National Review come to mind. I have to admit I'm not very up on journalism in general or conservative journalism in particular. My intuition is that somewhere like Breitbart could be a very bad idea, but I haven't explored that idea in much depth and would be open to arguments to the contrary.

Any ideas for foundations or orgs that might be willing to fund it?

I'll quote the relevant parts of the interview transcript:

On Funding Journalism

"Robert Wiblin: Yeah that’s really interesting. Would it be possible in principle for a listener to give Vox like $100,000, $200,000 and say, “Go away and try to find someone who can write a lot of articles about factory farming?”

Kelsey Piper: I suspect that if a listener wanted to give Vox $100,000 Vox would figure out how to make this happen.

Robert Wiblin: Okay, it’s interesting. I’m not quite sure what I think about the cost effectiveness of that as a donation. If you look at the number of views per dollar that you would get from that it could be potentially good, at least if you’re providing advice that people can actually act on to make the world better.

Kelsey Piper: Yeah, I don’t know if I’m allowed to share details of the Rockefeller grant but doing some quick math in my head, I think you’re certainly buying hundreds of views for a dollar of content that you want to exist by doing something like what the Rockefellers did.

Robert Wiblin: Do you know if other media outlets, other publications, also potentially take donations in order to hire someone to cover a nonprofit issue?

Kelsey Piper: I think if you’re a big dollar donor willing to fund someone’s salary for a year I would expect lots of outlets to have some interest in that. I think you want to be careful about not exerting too much editorial control. Like, “Hey I want you to be able to cover factory farming as a beat,” seems fine, but “Hey I want you to report on how factory farming is evil and bad,” you know, then you’re asking for sponsored content, maybe without clarity to the readers about what is getting paid for. So you’d want it to be something where you want the beat to exist rather than you want a particular angle on coverage.

Kelsey Piper: I don’t know as much about other outlets, but I get the sense that many outlets are interested right now in alternative revenue models. It seems to me like something that somebody who’s interested should certainly reach out and we could talk about what that might look like."

On Polarization

"Keiran Harris: Okay. So, my number one point this week dealt with a section that I wrote about political polarisation. So basically, my principal objection is that we didn’t actually discuss the importance of preventing political polarisation of long-term causes. We mostly discussed animal welfare and global health. I’ve got a few Rob Wiblin quotes here.

Robert Wiblin: Go for it, Keiran.

Keiran Harris: So, Rob said, “It feels like global catastrophic risks just aren’t really that partisan at the moment, or at least in principle, I don’t think there are Republicans who are in favor of nuclear war.” That’s one quote.

Robert Wiblin: Very generous. I like to steel-man the other side.

Keiran Harris: And another is, “I guess I would think it was quite foolish if someone was trying to portray global catastrophic risk as a Left or Liberal issue, but I guess I haven’t seen that.”

Keiran Harris: So, my response is yes, there are no Republicans who are openly hoping for a nuclear war, but there are subdivisions of the issue that are partisan, where obviously they shouldn’t be. So, for example, there appears to be a political divide on the question of reducing nuclear stockpiles or eliminating land-based missiles, things like that.

Keiran Harris: So, I found a 2013 YouGov poll that said that on the question of whether the U.S. should unilaterally reduce the number of nuclear weapons, support is 55% among Democrats versus 18% for Republicans. But, presumably, this is just an empirical question. Would reducing stockpiles actually make us safer? Or would it not? Would reducing from what the U.S. has at the moment (what is it, about 5,000?) down to something in the hundreds? The consensus seems to be that we would lose very little in terms of deterrence.

Keiran Harris: And yet, there is this political divide, so when I talk about my concerns about other long-termist issues facing the same fate, I’m thinking along these lines: that the divides wouldn’t be rational. So you and Kelsey often just agreed, saying, “Well, it would be completely irrational for people to disagree on this.” But we see it anyway, and we see a similar thing with climate change, which is how I kind of structured my version of these questions.

Keiran Harris: The idea that we can look at someone’s position on climate change and then, in the U.S., reliably predict their opinions on abortion or gun control, that seems completely insane. But that’s where we are, and so I’m concerned that decades from now we can make a similar prediction based on their opinions around artificial intelligence safety, which, to me, would be kind of a disaster.

Robert Wiblin: I mean, another example is, I think, Trump helped to shut down, or maybe it was Republicans in Congress that helped to shut down, the global health security agenda, which was spoken about on the episode with Tom Inglesby about how valuable that is. I think, mostly, because they viewed that as like a foreign aid thing. It’s like something that benefits poor countries rather than something that benefits America, when I guess, in our view, it would do both.

Robert Wiblin: So, do we all agree that it’s bad for things like nuclear policy to become partisan issues when it doesn’t seem like they divide across?

Robert Wiblin: Yeah. You think it’s likely that they will?

Keiran Harris: No, I think it’s likely that that would be bad, and that we ought to explore whether or not that’s actually likely, and look at these previous examples of nuclear stockpile reduction and climate change, and try and investigate what actually happened there. How did they become partisan issues, where it’s not obvious that they ever should’ve become partisan, and how can we try and avoid that for questions moving forward?

Keiran Harris: I mean, at the moment, people don’t seem that passionate about it on political lines, so we’re making some progress on, say, lethal autonomous weapons. There isn’t this coalition of people on either side that are arguing against it. You know, people who actually do really care about this can make progress, but assuming that this couldn’t happen, I think is a mistake because you can imagine a story being spun similarly to, let’s say, Cold War thinking of, okay, we need to stay ahead in the arms race. We need to maintain secrecy around our technology, that would be terrible for global coordination for A.I. I can imagine this happening. It hasn’t happened yet, so, you and Kelsey rightfully say, “I haven’t really seen this,” but that doesn’t mean it can’t happen.

Michelle Hutchinson: I wonder how robust we should expect that to be, even if it caused a partisan surge, given that, as pointed out in the podcast, these kinds of issues seem much less partisan in the UK. You might expect that this would be a temporary thing that would last as long as Trump’s presidency but maybe not much longer.

Keiran Harris: Yeah, I wonder though because once you have this perception, particularly in the U.S., it seems to stick. So, the anti-war protests in the ’50s and ’60s, people who were very against nuclear weapons seemed to have a strong Liberal bias. And then today, I think it still has that Left-leaning association. And we have the same thing, as Rob and Kelsey talked about in the podcast, for animal welfare in the States. Not necessarily in other countries, but if it was just a one-president or one-administration issue, wouldn’t we have seen people returning to the middle on these issues in the United States as well?

Robert Wiblin: Yeah, it seems like when these issues come up, people just try to find the nearest thing that’s a partisan issue and then try to map it on that. So, with the global health security agenda, you’ve got, “Oh, I don’t like foreign aid, so I don’t like this thing,” which involves sending money to Africa. With climate change, you’ve got like pro- versus anti-capitalism, or like pro- versus anti-fossil fuels. With nuclear stuff, I guess people map it onto, like, strong on defense or not strong on defense.

Robert Wiblin: And I guess, for example, with the Ebola issue a couple years ago, I think the Republicans were more in favor of closing off the borders to people from Africa, or some parts of Africa. I think as it happened, that was a pretty stupid policy, but it could’ve been a sensible policy, and I think in that case, Democrats probably would have rejected it on internationalist, globalist grounds. Even if it would’ve made sense. So there’s this instinctive desire to kind of map it onto some, like, existing dispute. So, perhaps that’s kind of the point you’re making.

Michelle Hutchinson: I think the fact that the mapping does seem slightly arbitrary makes me feel a bit better about the possibility of it being something that you’d be able to swing back if you were trying. Because it makes it feel a bit more like a reason this might not have happened with factory farming is that it just wasn’t that important to that many people, apart from people who are actually raising chickens and trying to make that their livelihood.

Michelle Hutchinson: And, so, you wouldn’t necessarily expect an organic swing backwards, but you might expect that if there was a concerted effort to make a particular issue something that people cared about on both sides of the aisle using the kinds of things that they already cared about, that might be viable.

Robert Wiblin: I suppose that most people assume that it’s very bad for effective activism to be seen as having any political lean. I think that may be unrealistic, and I think that there are some benefits that people don’t really talk about that much that you could get. Where if you do get involved with one political party, then you can potentially have a lot more influence over that party than you would just as an outsider who’s just trying to stay apart from the whole political scene. So, it could be that even though there’s significant downsides to picking sides, it’s very hard to get things done without doing that to some degree.

Michelle Hutchinson: Yeah. I wonder whether there are ways of getting the best of both worlds by having different parts of this linked with different sides, so that the overall worrying about existential risks ends up being not polarised because different issues within that are polarized in different ways. So you might think that nuclear disarmament is a Left kind of issue. And then you were saying that biosecurity could even be seen as an issue on the Right, because Conservatives will be more willing to act as national groups rather than internationally.

Keiran Harris: Yeah, so I mean I would be excited to see long-termist causes being treated like foreign policy is in newspapers. So, at the moment if people wanna read about A.I. safety, if the only place they can go is Vox, which I think one reasonable critique might be of this episode is that Rob framed Vox as being center-left.

Keiran Harris: I think if you looked at analysis of media outlets you would put the Washington Post center-left. You would put the Atlantic at center-left. I think Vox is beyond that. And, so if they are the only outlet who are talking about these issues, then it would be reasonable to be concerned about a perception of a Liberal bias there.

Robert Wiblin: I guess as Kelsey said it’s probably easier to persuade people who have other political views to go and be outspoken advocates for these issues than it is to convince everyone who happens to be liberal or progressive to just stop talking about animal welfare, or stop talking about existential risks. It’s a lot to ask for someone to gag themselves like that just because they happen to have the most common politics that’s associated with that view.

Robert Wiblin: I guess I think having a first, kind of conservative, outspoken person talking about existential risk could be really quite valuable. So, if someone had that as an option, I’d be surprised if it wasn’t among the top handful of things that they could do.

Michelle Hutchinson: Yeah. I think this discussion has been fairly U.S. centric so far in terms of describing where various news outlets seem to be on the U.S. political spectrum. There’s a pretty exciting new project that’s recently started in the BBC called BBC Future, which seems to be trying to do a somewhat similar kind of thing to Future Perfect, and the BBC doing it, from a UK perspective, seems really good because the BBC is seen as pretty neutral by both the Conservatives and Labour in the UK.

Michelle Hutchinson: On the other hand, I think Americans would typically see the BBC as pretty Left-leaning, so it’s not clear how this would stand for Americans.

Keiran Harris: Do we think that, potentially, it’s justified to have a quite U.S. centric view on this, given the outsized influence that the U.S. government might have moving forward in terms of regulating A.I.?

Robert Wiblin: Yeah, I guess the U.S. is gonna be the dominant player, or China, which I guess is a little bit outside our area. I suppose the UK could have an influence. Like, DeepMind is based there. I think the UK, at least in the past, had some influence over the EU. I guess we’ll see where that ends up going forward. But, I guess if you could convince bureaucrats in the UK, they might be able to pass the message on to the U.S. There’s, like, at least some potential influence there, and likewise in Australia, but perhaps a bit tenuous.

Robert Wiblin: I guess we’ve ended up talking about the U.S. mostly because Vox is a U.S. based organization that, I guess, has most of its audience here and covers American politics a great deal. Perhaps, also, it seems like there’s a greater risk of effective activism being viewed as on one side here. Because in other countries, I think, people who are involved in X-risk work exist across the political spectrum more than you get in the U.S. at the moment."


Comments sorted by top scores.

comment by Larks · 2019-03-10T17:15:40.625Z · score: 8 (4 votes) · EA · GW

Thanks for highlighting this, I thought it was interesting. It does seem that, if you thought getting Vox to write about AI was good, it would be good to have an offsetting right-wing spokesman on the issue.

One related point would be that we can try to avoid excessively associating AI risk with left wing causes; discrimination is the obvious one. The alternative would be to try to come up with right-wing causes to associate it with as well; I have one idea, but I think this strategy may be a bad idea so am loath to share it.

comment by Jeremy (captainjc) · 2019-03-11T14:30:54.458Z · score: 2 (2 votes) · EA · GW

It seems like there are 3 possible outcomes.

1) AI risk is associated with and covered primarily by one side of the political spectrum.

2) AI risk is associated with and covered by both sides more or less evenly.

3) AI risk is associated with and covered in a neutral way.

Intuitively, 3 seems like the best-case scenario, but that horse may already have left the barn (as it seems to have with most causes).

1 probably seems bad - though I believe Rob did point out that if it becomes a part of one party's platform, then maybe it's easier to implement policy when that party is in power. Obviously, that's a bit of a gamble.

2 seems like the best remaining option then, but obviously with some risks - perhaps along the lines of what you are hinting at. I don't see why there would need to be a right wing cause to associate it with though. I mean, if both sides are covering it, the coverage could turn into a back and forth, pro/con on the most controversial aspects of it (a less collegial version of the discussion you linked to), which also seems not that great. Perhaps having both sides be philanthropy-funded and not dependent on generating controversy for advertising/clicks could help with that.