Denise_Melchin's Shortform

post by Denise_Melchin · 2020-09-03T06:11:42.046Z · score: 6 (1 votes) · EA · GW · 22 comments

Comments sorted by top scores.

comment by Denise_Melchin · 2020-10-01T21:17:43.635Z · score: 28 (13 votes) · EA(p) · GW(p)

Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.) Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.

For example, I wonder whether I should write more comments pointing out what I liked in a post, even if I don't have anything to criticise, instead of just silently upvoting. This would clutter the comment section more, but it might be worth it if people feel more connected to the community when they receive more specific positive feedback.

I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should; it is valuable to have a space dedicated to 'serious discussions'. On the other hand, having an online community space might be more important than usual while we are all stuck at home.

comment by alexrjl · 2020-10-02T12:28:50.035Z · score: 8 (3 votes) · EA(p) · GW(p)

One positive secondary effect of this is that great but uncontroversial posts will be seen by lots of people. Currently, posts which are good but don't generate any disagreement get a few upvotes and then fall off the front page pretty quickly because nobody has much to say.

comment by Linch · 2020-10-02T00:07:25.330Z · score: 8 (4 votes) · EA(p) · GW(p)

I think specific/precise positive feedback is almost as good as (and in some cases better than) specific criticism, especially if you (implicitly) point to features that other posts don't have. This allows onlookers to learn and improve, in addition to giving a positive signal to the author. For a close reference class, the LessWrong team often has comments explaining why they like a certain post [LW(p) · GW(p)].

The type of social/"fluffy" content that some readers may be worried about is lots of our threads having non-substantive comments like this one [EA(p) · GW(p)], especially if they're bloated and/or repeated often. I don't have a strong sense of where our balance should be on this.

comment by Thomas Kwa (tkwa) · 2020-10-02T01:27:44.977Z · score: 3 (2 votes) · EA(p) · GW(p)

I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.

comment by Linch · 2020-10-02T07:58:42.533Z · score: 3 (2 votes) · EA(p) · GW(p)

I would personally feel bad downvoting low-information comments of encouragement, even if they're currently ranked higher than (what I perceive to be) more substantive neutral or negative comments.

comment by Harrison D · 2020-10-03T18:54:06.957Z · score: 1 (1 votes) · EA(p) · GW(p)

Perhaps comments/posts should have more than just one "like or dislike" metric? For example, there could be upvoting or downvoting in categories such as "significant/interesting," "accurate," and "novel." This also need not replace the simple voting metric if you prefer that.

(People may have already discussed this somewhere else, but I figured: why not comment, especially on a post that asks if we should engage more?)

comment by MichaelDickens · 2020-10-02T20:13:01.750Z · score: 5 (3 votes) · EA(p) · GW(p)

IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.

comment by Neel Nanda · 2020-10-02T08:42:26.115Z · score: 5 (3 votes) · EA(p) · GW(p)

Fairly strongly agreed. I think it's much easier to express disagreement than agreement, that on the margin people find posting to the EA Forum too intimidating, and that it would be better if the Forum were perceived as friendlier. (I have a somewhat adjacent blog post about going out of your way to be a nicer person.)

I strongly feel this way about specific positive feedback, since I think it's often more neglected and can be as useful as negative feedback (at least, useful to the person making the post). I feel less strongly about "I really enjoyed this post"-esque comments, though I think more of those on the margin would be good.

An alternative approach would be to PM people the positive feedback. I think this provides comparable value to the author, but removes the "changing people's perceptions of how scary posting on the EA Forum is" part.

comment by Aaron Gertler (aarongertler) · 2020-10-06T17:31:02.002Z · score: 3 (2 votes) · EA(p) · GW(p)

I wrote a quick post [EA · GW] in response to this comment (though I've also been thinking about this issue for a while).

I think people should just share their reactions to things most of the time, unless there's a good reason not to, without worrying about how substantive their reactions are. If praise tends to be silent and criticism tends to be loud, I worry that authors will end up with a very skewed view of how people perceive their work. (And that's even before considering that criticism tends to occupy more space in our minds than praise.)

comment by Alexxxxxxx · 2020-10-02T09:03:29.197Z · score: 2 (2 votes) · EA(p) · GW(p)

I agree, positive feedback can be a great motivator.

comment by Denise_Melchin · 2020-09-19T11:42:50.772Z · score: 19 (12 votes) · EA(p) · GW(p)

[status: mostly sharing long-held feelings & intuitions, but I have not exposed them to scrutiny before]

I feel disappointed by the focus on longtermism in the EA community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (that people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.

The way I see the potential of the EA community is in helping people understand their values and then actually optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.

If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.

This has some limits: there are some views I consider morally atrocious, and I would prefer not to give the people who hold them the tools to pursue their goals more effectively.

But overall, I would much prefer for more people to have access to cause prioritisation tools, not just people who find longtermism appealing. What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).

I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.

comment by Larks · 2020-09-19T16:51:42.321Z · score: 7 (4 votes) · EA(p) · GW(p)

I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist, it is plausible that any non-longtermist view is atrocious!

comment by Denise_Melchin · 2020-09-21T21:08:00.774Z · score: 8 (2 votes) · EA(p) · GW(p)

Yes, completely agree. I was also thinking of non-utilitarian views when I said non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be will find the EA community less valuable for that path than people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious, however, what a good community for that kind of person would be, and what good tools for that path look like.

I agree that distinguishing between the desirability of different moral views is hardly doable in a principled manner, but even just looking at longtermism we have disagreements about whether it should be suffering-focused or not, so there is already no one simple truth.

I'd be really curious what others think about whether humanity collectively would be better off, according to most people's values, if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.

comment by Thomas Kwa (tkwa) · 2020-09-23T22:00:55.340Z · score: 3 (2 votes) · EA(p) · GW(p)

I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be taking beliefs too seriously when those beliefs only work in an environment of epistemic learned helplessness.

One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another could be checking that they're consistent with facts, e.g. the lack of evidence for supernatural entities, or the best knowledge on the conscious experience of animals.

comment by brb243 · 2020-09-19T17:28:28.811Z · score: 1 (1 votes) · EA(p) · GW(p)

I think that thinking about longtermism enables people to feel empowered to solve problems somewhat beyond present reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good'. Also, this may be a viewpoint mainly applicable to those who do not really have to worry about finances, though that is relative. This links to my second point: some affluent people enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to build a community of people who feel the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?

comment by Denise_Melchin · 2020-09-17T16:35:21.710Z · score: 15 (8 votes) · EA(p) · GW(p)

[epistemic status: musing]

When I frame one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal', I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').

I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.

Some critics of market economies argue that this is exactly the problem with them: markets should maximize what people want, but they maximize profit instead, and these two goals are not as aligned as one might hope. You could just call it the market economy alignment problem.

A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies that glue people to Netflix, whether they endorse this or not, in order to maximize profit for Netflix. Some random company invents a product and uses marketing to make having the product socially desirable, even though people would not actually have wanted it on reflection.

These problems seem very alike to me. I am not sure where I am going with this, it does kind of feel to me like there is something interesting hiding here, but I don't know what. EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.

Some 'late-stage capitalism' memes seem very similar to Paul's 'What Failure Looks Like' to me.

Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.

comment by David_Moss · 2020-09-17T18:18:01.388Z · score: 8 (4 votes) · EA(p) · GW(p)

A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here:

let’s say [we treat] the fossil fuel industry as if it were an AI system. I think this is an interesting line of thought, because what he’s saying basically and — other people have said similar things — is that you should think of a corporation as if it’s an algorithm and it’s maximizing a poorly designed objective, which you might say is some discounted stream of quarterly profits or whatever. And it really is doing it in a way that’s oblivious to lots of other concerns of the human race. And it has outwitted the rest of the human race.

It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning, or of utopianism in general.

comment by Larks · 2020-09-17T17:18:30.048Z · score: 7 (4 votes) · EA(p) · GW(p)

People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here [EA · GW].

I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for criteria that require continual human input from a broad range of people, while keeping humans in the loop of decision making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).

comment by Denise_Melchin · 2020-09-17T20:54:29.842Z · score: 3 (2 votes) · EA(p) · GW(p)

Thank you so much for the links! Possibly I was just being a bit blind. I was pretty excited about the Aligning Recommender systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post.

I'm not sure whether they quite get to the bottom of the issue though (though I am not sure whether there is a bottom of the issue, we are back to 'I feel like there is something more important here but I don't know what').

The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. It is up for debate, though, whether aligning 'recommender systems' to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I doubt a bit.

Your second paragraph points to something interesting in the capitalism critiques: we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want. Are there important lessons we can learn from this?

comment by HaukeHillebrandt · 2020-09-18T15:33:33.730Z · score: 2 (1 votes) · EA(p) · GW(p)

I mused about something similar here, about corporations as dangerous optimization demons which will cause GCRs if left unchecked:

https://forum.effectivealtruism.org/posts/vy2QCTXfWhdiaGWTu/corporate-global-catastrophic-risks-c-gcrs-1 [EA · GW]

Not sure how fruitful it was.

For capitalism more generally, GPI also has "Alternatives to GDP" in their research agenda, presumably because the GDP measure is what the whole world is pretty much optimizing for, and creating a new measure might be really high value.

comment by Denise_Melchin · 2020-09-03T06:11:42.449Z · score: 9 (6 votes) · EA(p) · GW(p)

There is now a Send to Kindle Chrome browser extension, powered by Amazon. I have been finding it very valuable for actually reading long EA Forum posts as well as 80,000 Hours podcast transcripts.