Comments

Comment by william_saunders on Aligning Recommender Systems as Cause Area · 2019-05-21T00:49:29.304Z · score: 2 (2 votes) · EA · GW

I appreciate the point that they are competing for time (I was only thinking of monopolies over content).

If the reason it isn't used is that users don't "trust that the system will give what they want given a single short description", then part of the research agenda for aligned recommender systems is not just producing systems that are aligned, but producing systems whose users have a greater degree of justified trust that they are aligned (placing more emphasis on the user's experience of interacting with the system). Some of this research could potentially take place with existing classification-based filters.

Comment by william_saunders on Aligning Recommender Systems as Cause Area · 2019-05-18T21:35:15.602Z · score: 3 (3 votes) · EA · GW

While fully understanding a user's preferences and values requires more research, it seems like there are simpler things that existing recommender systems could do that would be a win for users, e.g. Facebook offering a "turn off inflammatory political news" switch (or a list of 5-10 similar switches), where current knowledge would suffice to train a classification system.
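To make this concrete, here is a minimal, purely illustrative sketch of how such a switch might be backed by an ordinary text classifier. The toy training data, the `filter_feed` function, the preference key, and the threshold are all hypothetical stand-ins, not any platform's actual interface; the point is only that nothing here goes beyond standard supervised text classification.

```python
# Illustrative sketch only: a user-facing "hide inflammatory political news" toggle
# backed by an off-the-shelf text classifier. The training data, threshold, and
# feed-filtering interface are hypothetical placeholders, not a real platform's API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset standing in for a real labeled corpus (1 = inflammatory political news).
texts = [
    "OUTRAGE: politician X destroys the country, share before it's deleted!",
    "You won't believe what this party is secretly plotting against you",
    "City council approves new budget for road maintenance",
    "Local bakery wins regional award for sourdough bread",
]
labels = [1, 1, 0, 0]

# Standard bag-of-words classifier; nothing beyond current, well-understood techniques.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def filter_feed(items, user_prefs, threshold=0.5):
    """Drop items the classifier flags, but only if the user turned the switch on."""
    if not user_prefs.get("hide_inflammatory_political_news", False):
        return items
    scores = classifier.predict_proba(items)[:, 1]
    return [item for item, score in zip(items, scores) if score < threshold]

# Example: this switch would be one of several per-category preferences the user controls.
feed = ["Politician Y slammed in furious online backlash", "Recipe: weeknight lentil soup"]
print(filter_feed(feed, {"hide_inflammatory_political_news": True}))
```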

It could be the case that this is bottlenecked by the incentives of current companies, in that there isn't a good revenue model for recommender systems other than advertising, and advertising creates the perverse incentive to keep users on your system as long as possible. Or it might be the case that most recommender systems are effectively monopolies over their respective content: users would choose an aligned system over an unaligned one if options were available, but otherwise a monopoly faces no pressure to align its system.

In these cases, the bottleneck might be "start and scale one or more new organizations that build aligned recommender systems using current knowledge" rather than "do more research on how to produce more aligned recommender systems".

Comment by william_saunders on Aligning Recommender Systems as Cause Area · 2019-05-12T21:58:24.693Z · score: 23 (8 votes) · EA · GW

If we want to maximize flow-through effects to AI Alignment, we might want to deliberately steer the approach adopted for aligned recommender systems towards one that is also designed to scale to more difficult problems/more advanced AI systems (like Iterated Amplification). Having an idea become standard in the world of recommender systems could significantly increase the amount of non-safety researcher effort put towards that idea. Solving the problem a bit earlier with a less scalable approach could close off this opportunity.

Comment by william_saunders on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T16:41:27.133Z · score: 41 (22 votes) · EA · GW

I wonder how much of the interview/work-trial material is duplicated between positions - if there's a lot of overlap, then maybe it would be useful for someone to create the EA equivalent of TripleByte: run initial interviews/work projects through a third-party organization to evaluate quality, then pass candidates along to the most relevant EA jobs.

Comment by william_saunders on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-12-26T22:52:02.741Z · score: 2 (2 votes) · EA · GW

I agree with this. It seems like the world where Moral Circle Expansion is useful is the world where:

1. The creators of AI are philosophically sophisticated (or persuadable) enough to expand their moral circle if they are exposed to the right arguments or if work is put into persuading them.
2. They are not philosophically sophisticated enough to realize the arguments for expanding the moral circle on their own (which seems plausible).
3. They are not philosophically sophisticated enough to realize that they might want to consider the distribution of arguments they could have faced that could have persuaded them about what is morally right, and to design AI with this in mind (i.e. CEV), or with the goal of achieving a period of reflection during which they can sort out which arguments they would want to consider.

I think I'd prefer pushing on point 3, as it also encompasses a bunch of other potential philosophical mistakes that AI creators could make.