80k hrs #88 - Response to criticism

post by mark_ledwich · 2020-12-11T08:53:53.337Z · EA · GW · 21 comments

Contents

  Were Tristan and the tech media successful in improving YouTube's recommendation algorithm?
  The anonymous user limitation of YouTube studies
  What are the most effective ways we can address problems from social media
21 comments

I'm a regular listener of the 80k hrs podcast. Our paper came up on the episode with Tristan Harris, and Rob encouraged responses on this forum so here I go.

Update: apologies, but this post will only make sense if you listen to this episode of the 80k hrs podcast.  


Conflict can be an effective tactic for good

I have a mini Nassim Taleb inside me that I let out for special occasions 😠. I'm sometimes rude to Tristan, Kevin Roose and others. It's not just because Tristan is worried about the possible negative impacts of social media (I'm not against that at all). It is because he has been one of the most influential people in building a white hot moral panic, and frequently bends truth for the cause.

One thing he gets right is that triggering a high-reach person into conflict with you gives your message more attention. Even if they don't reply, you are more likely to be boosted by their detractors as well. This underdog advantage isn't "hate", and the small advantage is massively outweighed by institutional status, finances and social proof. To play by gentleman's rules is to their advantage - curtailing the tools at my disposal to make bullshit as costly as possible.

I acknowledge there are some negative costs to this (e.g. polluting the information commons with avoidable conflict), and good people can disagree about whether the tradeoff is worth it. But I believe it is.

Were Tristan and the tech media successful in improving YouTube's recommendation algorithm?

I'll give this win to Tristan and Roose. I believe YouTube did respond to this pressure when, in early 2019, they reduced recommendations to conspiracies and borderline content. The result was better overall, but not great.

But YouTube was probably never as they described - a recommendation rabbit hole to radicalization. Even if it was, there was never strong evidence to support it.

The YouTube recommendation system has always boosted recent, highly watched videos, and it has been through three main phases (a toy sketch contrasting them follows the list):

Clickbait Phase: Favoured high click-through rates on video thumbnails. This meant thumbnails were very "tabloidy" and edgy, and frequently misrepresented the content of the video. But no one ever showed that this pushed users down an extremist rabbit hole - they just asserted it, or offered very weak evidence.

View-Neutral Phase: Favoured videos that people watched more of and rated highly after watching. This was a big improvement in recommendation quality. They hadn't started putting their thumb on the scales, so recommendations largely matched the proportion of views a video received.

Authoritative Phase: Favours traditional media, especially highly partisan cable news, with very few recommendations to conspiracies and borderline content. This was announced in early 2019 and deployed in April 2019.
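
To make the contrast concrete, here is a toy sketch of how the ranking emphasis shifted across these phases. The signal names and weights are purely illustrative assumptions on my part, not YouTube's actual scoring:

```python
# Toy illustration only: hypothetical signals and weights, not YouTube's real scoring.
# Each phase emphasises a different signal when ranking a candidate video.

PHASE_WEIGHTS = {
    # phase:         (click_through, watch_and_rating, authoritativeness)
    "clickbait":      (1.0, 0.1, 0.0),  # thumbnail click-through rate dominates
    "view_neutral":   (0.2, 1.0, 0.0),  # watch time and post-watch rating dominate
    "authoritative":  (0.2, 0.6, 1.0),  # traditional/"authoritative" sources boosted
}

def rank_score(video: dict, phase: str) -> float:
    """Score a candidate video under one of the hypothetical phases above."""
    w_ctr, w_watch, w_auth = PHASE_WEIGHTS[phase]
    return (w_ctr * video["click_through_rate"]
            + w_watch * video["watch_fraction"] * video["post_watch_rating"]
            + w_auth * video["authoritativeness"])

# The same clickbaity, low-authority video ranks very differently in each phase.
video = {"click_through_rate": 0.9, "watch_fraction": 0.3,
         "post_watch_rating": 0.4, "authoritativeness": 0.1}
for phase in PHASE_WEIGHTS:
    print(phase, round(rank_score(video, phase), 2))
```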
 

Tristan regularly represents today's algorithm as a radicalization rabbit hole. His defence, that critics are unfair because the algorithm changed after he made the critique, doesn't hold up. He made no effort to clarify in The Social Dilemma (released January 2020), or in his appearances about it, and he hasn't updated his talking points. For example, speaking on the Joe Rogan podcast in October 2020 he said: "no matter what I start with, what is it going to recommend next. So if you start with a WW2 video, YouTube recommends a bunch of holocaust denial videos".

What's the problem with scapegoating the algorithm and encouraging draconian platform moderation?

Tristan's hyperbole sets the stage for drastic action. Draconian solutions to misdiagnosed problems will probably have unintended consequences that are worse than doing nothing. I wrote about this with regard to the QAnon crackdown.

The anonymous user limitation of YouTube studies

It's technically quite difficult to analyse the YouTube algorithm in a way that includes personalization. Our study was the most rigorous and comprehensive look at the recommendation system's political influence at the time, despite the limitation of collecting non-personalized recommendations. To take the results at face value, you need to assume that this will "average out" to about the same influence once aggregated. I think it's an open question, but it's reasonable to assume the results will be in the same ballpark.

In my experience, critics who point to this as a flaw, or as a reason to ignore the results, are inconsistent in their skepticism. The metrics that Tristan uses in this podcast (e.g. "recommended flat Earth videos hundreds of millions of times") are based on Guillaume Chaslot's data, which is also based on anonymous recommendations. I am also skeptical of those figures:
- These figures are much higher than what we see, and Chaslot is not transparent about how they were calculated
- Chaslot's data is based on the API, which gives distorted recommendations compared to our method of scraping the website (much closer to real-world behaviour).

The quality of research in this space is quickly improving. This most recent study uses real-world user traffic to estimate how people follow recommendations from videos. It's a very promising approach once they fix some issues.

We have been collecting personalized recommendations since November. We are analysing the results and will present them on transparency.tube and in a paper in the coming months. I hope Tristan and other prominent people will start updating the way they talk about YouTube based on the best and latest research. If they continue to misdiagnose problems, the fervour for solutions they whip up will be misdirected.

What are the most effective ways we can address problems from social media

I have a narrow focus on the mechanics of YouTube's platform, but I'll give my intuitive grab bag of the ideas that seem most promising for reducing the bad parts of social media:

Rob did some really good background research and gently pushed back in the right places. It's the best interview with Tristan I have listened to.

21 comments

Comments sorted by top scores.

comment by MichaelPlant · 2020-12-11T12:23:56.744Z · EA(p) · GW(p)

Thanks for writing this. I haven't (yet) listened to the podcast and that's perhaps why your post here felt like I was joining in the middle of a discussion. Could I suggest that at the top of your post you very briefly say who you are and what your main claim is, just so these are clear? I take it the claim is that YouTube's recommendations engine does not (contrary to recent popular opinion) push people towards polarisation and conspiracy theories. If that is your main claim, I'd like you to say why YouTube doesn't have that feature and why people who claim it does are mistaken.

(FWIW, I'm an old forum hand and I've learnt you can't expect people to read papers you link to. If you want people to discuss them, you need to make your main claims in the post here itself.)

Replies from: Tsunayoshi
comment by Tsunayoshi · 2020-12-11T15:25:48.496Z · EA(p) · GW(p)

In general I agree, but the forum guidelines do state "Polish: We'd rather see an idea presented imperfectly than not see it at all.", and this is a post explicitly billed as a "response" that was invited by Rob. So if this is all the time Mark wants to spend on it, I feel it is perfectly fine to have a post that is only for people who have listened to the podcast/are aware of the debate.

Replies from: MichaelPlant
comment by MichaelPlant · 2020-12-11T17:37:12.038Z · EA(p) · GW(p)

Oh, what I said wasn't a criticism, so much as a suggestion to how more people might get up to speed on what's under debate!

comment by Ben_West · 2020-12-11T23:56:31.818Z · EA(p) · GW(p)

Thanks for posting this! I thought it was interesting, and I would support more people writing up responses to 80K podcasts.

Minor: you have a typo in your link to transparency.tube

Replies from: mark_ledwich
comment by mark_ledwich · 2020-12-13T06:34:16.604Z · EA(p) · GW(p)

Thank you 🙏 Fixed.

comment by jsteinhardt · 2020-12-13T17:58:48.977Z · EA(p) · GW(p)

Thanks for writing this and for your research in this area. Based on my own read of the literature, it seems broadly correct to me, and I wish that more people had an accurate impression of polarization on social media vs mainstream news and their relative effects.

While I think your position is much more correct than the conventional one, I did want to point to an interesting paper by Ro'ee Levy, which has some very good descriptive and causal statistics on polarization on Facebook: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3653388. It suggests (among many other interesting findings) that Facebook probably is somewhat more slanted than mainstream news and that this may drive a small but meaningful increase in affective polarization. That being said, it's unlikely to be the primary driver of US trends.

comment by Tsunayoshi · 2020-12-12T22:11:32.293Z · EA(p) · GW(p)

Hi Mark, thanks for writing this post. I only had a cursory reading of your linked paper and the 80k episode transcript, but my impression is that Tristan's main worry (as I understand it)  and your analysis are not incompatible:  

Tristan and parts of broader society fear that through the recommendation algorithm, users discover radicalizing content. According to your paper, the algorithm does not favour, and might even be actively biased against, e.g. conspiracy content.

Again, I am not terribly familiar with the whole discussion, but so far I have not seen the point made clearly enough that both these claims can be true: the algorithm could show less "radicalizing" content than an unbiased algorithm would, but even these fewer recommendations could be enough to radicalize viewers compared to a baseline where the algorithm recommended no such content. Thus, YouTube could be accused of not "doing enough".

Your own paper cites this paper arguing that there is a clear pattern of viewership migration from moderate "Intellectual Dark Web" channels to alt-right content, based on an analysis of user comments. Despite the limitation of using only user comments that your paper mentions, I think that commenting users are still a valid subset of all users, that their movement towards more radical content needs to be explained, and that the recommendation algorithm is certainly a plausible explanation. Since you have doubts about this hypothesis, may I ask if you think there are likelier ways these users were radicalized?

A way to test the role of the recommendation algorithm could be to redo the analysis of the user movement data for comments left after the change of the recommendation algorithm. If the movement is basically the same despite fewer recommendations for radical content, that is evidence that the recommendations never played a role, as you argue in this post. If however the movement towards alt-right or radical content is lessened, it is reasonable to conclude that recommendations have played a role in the past, and by extension could still play a (smaller) role now.
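
To make the comparison concrete, here is a rough sketch of the kind of before/after check I have in mind (entirely made-up numbers, and assuming the comment analysis can be reduced to per-user sequences of channel categories ordered by time):

```python
# Made-up trajectories: for each commenting user, the channel categories they
# commented on, ordered by time. "IDW" = Intellectual Dark Web.
before_change = [["IDW", "alt-right"], ["IDW", "IDW"], ["IDW", "alt-right"], ["IDW", "centre"]]
after_change  = [["IDW", "IDW"], ["IDW", "alt-right"], ["IDW", "centre"], ["IDW", "IDW"]]

def migration_rate(trajectories, src="IDW", dst="alt-right"):
    """Fraction of users who start out commenting on `src` and later comment on `dst`."""
    starters = [t for t in trajectories if t and t[0] == src]
    movers = [t for t in starters if dst in t[1:]]
    return len(movers) / len(starters)

print("migration before algorithm change:", migration_rate(before_change))  # 0.50 in this toy data
print("migration after algorithm change: ", migration_rate(after_change))   # 0.25 in this toy data
```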

Replies from: mark_ledwich
comment by mark_ledwich · 2020-12-13T02:23:39.899Z · EA(p) · GW(p)

I agree you can still criticize YouTube, even if they are recommending conspiracy content less than "view-neutral". My main disagreement is with the facts - Tristan is representing YouTube as a radicalization pipeline caused by the influence of recommendations. Let's say that YouTube is more radicalizing than a no-recommendation system, all things considered, because users were sure to click on radical content whenever it appeared. In that case you would describe radicalization as demand from users, rather than a radicalization rabbit hole caused by a manipulative algorithm. I'm open to this possibility, and I wouldn't give this much pushback if that were what was being described.

The "Auditing Radicalization Pathways on YouTube" paper is clever in the way it uses comments to get at real-world movement. But that paper doesn't tell us much, given that a) they didn't analyse movement from right to left (one-way movement tells you about churn, but nothing directional) and b) they didn't share their data.

The best I have seen is this study, which uses real-world web usage from a representative sample of users to get the real behaviour of users who are clicking on recommendations. They are currently re-doing the analysis with better classifications, so we will see what happens.

That still doesn't fully answer your question, though. To get at the real influence of recommendations you would need to do actual experiments, something only YouTube can really do right now - or a third party, if it were somehow allowed to provide a YouTube recsys.

My suspicions about the radicalization that leads to real-world violence mainly concern things outside the influence of algorithms: disillusionment, experience of malevolence, and grooming by terrorist or ideologically violent religious/political groups.

comment by Akash · 2020-12-11T23:58:15.630Z · EA(p) · GW(p)

Thank you for this post, Mark! I appreciate that you included the graph, though I'm not sure how to interpret it. Do you mind explaining what the "recommendation impression advantage" is? (I'm sure you explain this in great detail in your paper, so feel free to ignore me or say "go read the paper" :D).

The main question that pops out for me is "advantage relative to what?" I imagine a lot of people would say "even if YouTube's algorithm is less likely to recommend [conspiracy videos/propaganda/fake news] than [traditional media/videos about cats],  then it's still a problem! Any amount of recommending [bad stuff that is  harmful/dangerous/inaccurate] should not be tolerated!"

What would you say to those people?

Replies from: mark_ledwich
comment by mark_ledwich · 2020-12-12T10:53:38.508Z · EA(p) · GW(p)

Recommendation advantage is the ratio of impressions sent vs received. https://github.com/markledwich2/recfluence#calculations-and-considerations
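
As a minimal sketch of the idea with toy numbers (not the actual recfluence pipeline; the precise definition and normalisation are in the linked README), you can think of it as comparing the recommendation impressions flowing into a channel against those flowing out of it:

```python
from collections import defaultdict

# Hypothetical recommendation impressions: (from_channel, to_channel, count).
# Toy data and a toy calculation; the exact definition is in the recfluence README.
impressions = [
    ("ChannelA", "ChannelB", 120),
    ("ChannelB", "ChannelA", 40),
    ("ChannelC", "ChannelB", 60),
    ("ChannelB", "ChannelC", 20),
]

sent = defaultdict(int)      # impressions a channel sends out via its recommendations
received = defaultdict(int)  # impressions a channel receives from other channels

for src, dst, count in impressions:
    sent[src] += count
    received[dst] += count

for channel in sorted(set(sent) | set(received)):
    # In this sketch, advantage > 1 means a channel receives more impressions than it sends.
    advantage = received[channel] / max(sent[channel], 1)  # max() just avoids division by zero
    print(f"{channel}: sent={sent[channel]}, received={received[channel]}, advantage={advantage:.2f}")
```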

Yes, I agree with that. Definitely a lot of room for criticism and different points of view about what should be removed, or sans-recommended. My main effort here is to make sure people know what is happening.

comment by Denise_Melchin · 2020-12-11T10:30:41.103Z · EA(p) · GW(p)

I downvoted this post. Some of our writing guidelines here are to approach disagreements with curiosity as well as trying to be kind. You are clearly deciding against both of these.

Replies from: mark_ledwich
comment by mark_ledwich · 2020-12-11T10:59:32.462Z · EA(p) · GW(p)

I wasn't aware of the writing guidelines. But also, I think this was kind and curious given the context, so I downvoted your comment.

Replies from: Denise_Melchin
comment by Denise_Melchin · 2020-12-11T12:08:50.865Z · EA(p) · GW(p)

I did not realise you are a new user and probably would have framed my comment differently if I had, I am sorry about that!

To familiarise yourself with our writing guidelines, you can find them on the left bar under 'About the Forum', or just click [? · GW].

In the past, other users have stated they prefer when people who downvote give explanations for their downvotes. This does seem particularly helpful if you are new and don't know the ins and outs of our forum guidelines and norms yet.

It is great to see you engage with your expertise, and I think it would be a shame if users are put off from engaging with your writing because your content is framed antagonistically.

Replies from: Akash
comment by Akash · 2020-12-11T23:46:52.677Z · EA(p) · GW(p)

I read this post before I encountered this comment. I didn't recall seeing anything unkind or uncivil. I then re-read the post to see if I missed anything.

I still haven't been able to find anything problematic. In fact, I notice a few things that I really appreciate from Mark. Some of these include:

  • Acknowledging explicitly that he's sometimes rude to his opponents (and explaining why)
  • Acknowledging certain successes of those he disagrees with (e.g., "I'll give this win to Tristan and Roose.")
  • Citing specific actions/quotes when criticizing others (e.g., the quote from the Joe Rogan podcast)
  • Acknowledging criticisms of his own work 

Overall, I found the piece to be thoughtfully written & in alignment with the community guidelines. I'm also relatively new to the forum, though, so please point out if I'm misinterpreting the guidelines.

I'll also add that I appreciate/support the guideline of "approaching disagreements with curiosity" and "aim to explain, not persuade." But I also think that it would be a mistake to overapply these. In some contexts, it makes sense for a writer to "aim to persuade" and approach a disagreement from the standpoint of expertise rather than curiosity. 

Like any post, I'm sure this post could have been written in a way that was more kind/curious/community-normsy. But I'm struggling to see any areas in which this post falls short. I also think "over-correcting" could have harms (e.g., causing people to worry excessively about how to phrase things, deterring people from posting, reducing the clarity of posts, making writers feel like they have to pretend to be super curious when they're actually trying to persuade).

Denise, do you mind pointing out some parts of the post that violate the writing guidelines? (It's not your responsibility, of course, and I fully understand if you don't have time to articulate it. If you do, though, I think I'd find it helpful & it might help me understand the guidelines better.)

Replies from: Denise_Melchin
comment by Denise_Melchin · 2020-12-12T09:00:40.985Z · EA(p) · GW(p)

Sure. I am pretty baffled by the response to my comments. I agree the first was insufficiently careful about the fact that Mark is a new user, but even the second got downvotes.

In the past, users of the forum have said many times that posting on the EA Forum gives them anxiety as they are afraid of hostile criticism. So I think it is good to be on the lookout for posts and comments that might have this effect. Being 'kind' and 'approaching disagreements with curiosity' should protect against this risk. But I ask the question: Is Tristan going to feel comfortable engaging in the Forum, in particular as a response to this post? I don't think so.

Quotes I thought were problematic in that I think they would upset Tristan or put him off responding (or others who might work with him or agree with him):

I have a mini Nassim Taleb inside me that I let out for special occasions 😠. I'm sometimes rude to Tristan, Kevin Roose and others.

I read this as Mark proudly announcing that he likes to violate good discourse norms.

Others which I think will make Tristan feel accused and unwelcome (not 'kind' and not 'approaching disagreements with curiosity'):

It is because he has been one of the most influential people in building a white hot moral panic, and frequently bends truth for the cause.

Tristan's hyperbole sets the stage for drastic action.

Generally hostile:

To play by gentleman's rules is to their advantage - curtailing the tools at my disposal to make bullshit as costly as possible.

If the 'Conflict can be an effective tactic for good' section had not been written, I would not have downvoted, as it seems to add little to the content while likely making Tristan feel very unwelcome.

There was a post which was similar in style to Mark's post arguing against Will here [EA · GW] and the response to that was pretty negative, so I am surprised that Mark's post is being perceived so differently.

I only rarely downvote. There have been frequent requests in the past that it would be good if users generally explained why they downvoted. This has not come up before, but I took from that that the next time I downvote, it would be good if I explained why. So I did. And then got heavily downvoted myself for it. I am not sure what to make of this - are the people requesting that downvoters generally explain themselves just different people from the ones who downvoted my comment (apparently so, otherwise they would have explained themselves)? Whatever the reason, I doubt I will explain my downvotes again in the future.

Replies from: Akash, aarongertler, richard_ngo, MaxRa, Denise_Melchin
comment by Akash · 2020-12-12T18:00:12.163Z · EA(p) · GW(p)

Thank you, Denise! I think this gives me a much better sense of some specific parts of the post that may be problematic.  I still don't think this post, on balance, is particularly "bad" discourse (my judgment might be too affected by what I see on other online discussion platforms-- and maybe as I spend more time on the EA forum, I'll raise my standards!). Nonetheless, your comment helped me see where you're coming from.

I'll add that I appreciated that you explained why you downvoted, and it seems like a good norm to me. I think some of the downvotes might just be people who disagree with you. However, I also think some people may be reacting to the way you articulated your explanation. I'll explain what I mean below:

In the first comment, it seemed to me (and others) like you assumed Mark intentionally violated the norms. You also accused him of being unkind and uncurious without offering additional details. 

In the second comment, you linked to the guidelines, but you didn't engage with Mark's claim ("I think this was kind and curious given the context."). This seemed a bit dismissive to me (akin to when people assume that a genuine disagreement is simply due to a lack of information/education on the part of the person they disagree with).

In the third comment (which I upvoted), you explained some specific parts of the post that you found excessively unkind/uncivil. This was the first comment where I started to understand why you downvoted this post.

To me, this might explain why your most recent comment has received a lot of upvotes. In terms of "what to make of this," I hope you don't conclude "users should not explain why they downvote." Rather, I wonder if a conclusion like "users should explain why they downvote comments, and they should do so in ways that are kind & curious, ideally supported by specific examples when possible" would be accurate. Of course, the higher the bar to justify a downvote, the fewer people will do it, and I don't think we should always expect downvote-explainers to write up a thorough essay on why they're downvoting.

Finally, I'll briefly add that upvotes/downvotes are useful metrics, but I wouldn't place too much value in them. I'm guessing that upvotes/downvotes often correspond to "do I agree with this?" rather than "do I think this is a valuable contribution?"  Even if your most recent comment had 99 downvotes, I would still find it helpful and appreciate it!

comment by Aaron Gertler (aarongertler) · 2020-12-14T11:27:42.902Z · EA(p) · GW(p)

My reaction was similar to Akash's. 

I wished that the initial comment had been more specific given the user's status and the tone of the criticism (when I put myself in the author's shoes, I could imagine being baffled, since the tone of "my" post was relatively tame by the standards of most online discussion spaces). 

I downvoted that comment, because I didn't see the explanation as helpful to the author and I want to discourage comments that attack an author's motivations without evidence ("you are clearly deciding against both of these" -- I wouldn't call the post "kind", but it seemed reasonably curious to me in that it closely engaged with Tristan's work and acknowledged that he had achieved some of his aims, with plausibly good results). 

I thought the third comment was really helpful, and is exactly what I hoped to see from the first comment. I upvoted it. Highlighting specific passages is great; it was also nice to see language like "I read X as the author intending Y" rather than "by X, the author intended Y".

As for the post itself, I chose not to vote, as I was caught between upvoting and downvoting. I also objected to elements of the author's tone, but I thought the content was a useful counterpoint to a widely-experienced piece of EA content and provided enough specific arguments for commentators to engage productively.

comment by richard_ngo · 2020-12-12T17:37:04.868Z · EA(p) · GW(p)

I expect that people interpreted the "You are clearly deciding against both of these" as an unkind/uncharitable phrase, since it reads like an accusation of deliberate wrongdoing. I expect that, if you'd instead said something like "Parts of your post seem unnecessarily inflammatory", then it wouldn't have received such a negative response.

I also personally tend to interpret the kindness guidelines as being primarily about how to engage with people who are on the forum, or who are likely to read forum posts. Of course we shouldn't be rude in general, but it seems significantly less bad to critique external literature harshly than to directly critique people harshly.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2020-12-14T11:30:18.648Z · EA(p) · GW(p)

I agree that the kindness guidelines are largely related to community management. I also think they apply more weakly to public figures than to other people who aren't active on the Forum. When someone who has a Netflix special and influence over millions of listeners is making ostensibly bad/deceptive arguments, the stakes are higher than usual, and I'm more likely to think that criticism is valuable enough that even "unkind" responses are net-valuable.

That said, all of this is contextual; if people began to violate the norm more often, moderation would crack down more to arrest the slide. I haven't seen this happening.

comment by MaxRa · 2020-12-12T21:32:09.439Z · EA(p) · GW(p)

I also really appreciate your comments. I didn't downvote your initial comment, but my first reaction upon seeing it was something like "Hey, I felt really positive about a researcher coming to the forum and explaining why he disagrees with Tristan. I don't want someone to discourage this from happening!" I initially read the parts you cited partly as tongue-in-cheek and maybe as a little unnecessary, but far from wanting to signal that the overall contribution was not welcome.

I appreciate that you explained your negative reaction a lot, especially given how rarely people do it. I read over the parts you cited without even wondering much how Tristan would react to them, and I think it's great someone brought it up, as I now think that new users of our forum should strive to communicate disagreements less confrontationally than is common on other platforms. So I think it'd be unfortunate if you feel discouraged by this experience.

comment by Denise_Melchin · 2020-12-14T14:54:12.194Z · EA(p) · GW(p)

Thank you all for your responses, I really appreciated them. Your perspectives make more sense to me now, though I have to say I still feel really confused.

[Following comment not exhaustively responding to everything you said.]

I hadn't intended to communicate in my first comment that Mark deliberately intended to violate the forum guidelines, but that he deliberately decided against being kind and curious. (Thank you for pointing that out, I did not think of the alternative reading.) I didn't provide any evidence for this because I thought Mark said this very explicitly at the start of his post:

To play by gentleman's rules is to their advantage - curtailing the tools at my disposal to make bullshit as costly as possible.

I acknowledge there are some negative costs to this (e.g. polluting the information commons with avoidable conflict), and good people can disagree about whether the tradeoff is worth it. But I believe it is.

Gentleman's rules usually include things like being kind and curious, I would guess, and Mark says explicitly that he ignores them because the tradeoff is worth it to him. I don't understand how these lines can be interpreted in any other way; this seems like the literal reading to me.

I have to admit that even after all your kind, elaborate explanations I struggle to understand how anything in the section 'Conflict can be an effective tactic for good' could be read as tongue-in-cheek, as it reads as very openly hostile to me (...it's right there in the title?).

I don't think it is that unlikely that interviewees on the 80k podcast would respond to a kind thoughtful critique on the EA Forum. That said, this is not just about Tristan, but everyone who might disagree with Mark, as the 'Conflict can be an effective tactic for good' section made me doubt they would be treated with curiosity and kindness.

I will take from this that people can have very different interpretations of the same content, even if I think the content is very explicit and straightforward.