Posts

Aligning Recommender Systems as Cause Area 2019-05-08T08:56:14.686Z · score: 96 (39 votes)

Comments

Comment by ivanvendrov on Aligning Recommender Systems as Cause Area · 2019-05-22T03:33:35.037Z · score: 2 (2 votes) · EA · GW

Agreed, that's an important distinction. I had just assumed that if you build an aligned system, users will come to trust it, but that's not at all obvious.

Comment by ivanvendrov on Aligning Recommender Systems as Cause Area · 2019-05-18T23:27:54.621Z · score: 6 (5 votes) · EA · GW

My mental model of why Facebook doesn't offer a "turn off inflammatory political news" switch (or similar) is that 99% of their users would never toggle it, so the feature wouldn't move any of the metrics they track, and so no engineer or product manager has an incentive to add it. Why wouldn't users toggle the switches? Part of it is laziness, but mostly I think users don't trust that the system will faithfully give them what they want based on a single short description like "inflammatory political news". What if they miss out on an important national story? What if a close friend shares a story with them and they don't see it? What if their favorite comedian gets classified as inflammatory and filtered out?

As additional evidence that we're more bottlenecked by research than by incentives, consider Twitter's call for research on measuring the "health" of Twitter conversations, and Facebook's decision to demote news content. I believe that if you gave most companies a robust and well-validated metric (analogous to differential privacy) for alignment with user value, they would start optimizing for it, even at the cost of some short-term growth and revenue.

The monopoly point is interesting. I don't think existing recommender systems are well modelled as monopolies; they certainly behave as if they are in a life-and-death struggle with each other, probably because their fundamental product is "ways to occupy your time" and that market is extremely competitive. But a monopoly might actually be better because it wouldn't have the current race to the bottom in pursuit of monetisable eyeballs.

Comment by ivanvendrov on Aligning Recommender Systems as Cause Area · 2019-05-09T03:37:20.056Z · score: 3 (2 votes) · EA · GW

The first two links are identical; was that your intention?

Thanks for the catch - fixed.

Comment by ivanvendrov on Aligning Recommender Systems as Cause Area · 2019-05-08T18:40:47.344Z · score: 3 (2 votes) · EA · GW

Definitely the latter. Though I would frame it more optimistically as "better alignment of recommender systems seems important, there are a lot of plausible solutions out there, so let's prioritize them and try out the few most promising ones". Actually doing that prioritization was out of scope for this post, but it's definitely something we want to do, and we're looking for collaborators on it.

Comment by ivanvendrov on Aligning Recommender Systems as Cause Area · 2019-05-08T18:01:42.673Z · score: 1 (1 votes) · EA · GW

To my mind they are fully complementary: Iterated Amplification is a general scheme for AI alignment, whereas this post describes an application area where we could apply, and learn more about, various alignment schemes. I personally think using amplification to align recommender systems is very much worth trying. It would have great direct positive effects if it worked, and the experiment would shed light on the viability of the scheme as a whole.