Denis Drescher's Shortform

post by Denis Drescher (Telofy) · 2020-04-23T15:44:50.620Z · score: 6 (1 votes) · EA · GW · 3 comments


comment by Denis Drescher (Telofy) · 2020-06-06T11:10:49.384Z · score: 3 (2 votes) · EA(p) · GW(p)

“Studies on Slack” by Scott Alexander: Personal takeaways

There have been studies on how software teams use Slack. Scott Alexander’s article “Studies on Slack” is not about that. Rather, it describes the world as a garlic-like nesting of abstraction layers, each with its own degree of competition vs. cooperation between actors; how those layers emerged (in some cases); and what their benefit is.

The idea, put simply, at least in my mind, is that under fierce competition, innovations need to prove beneficial immediately in logical time or the innovator will be outcompeted. But limiting innovations to only those that either consist of a single step or whose every step is individually beneficial is, well, limiting. The result is innovators stuck in local optima, unable to reach more global optima.

Enter slack. Somehow you create a higher-order mechanism that alleviates the competition a bit. The result is that innovators now have the slack to try multi-step innovations despite any neutral or detrimental intermediate steps. The mechanisms differ between areas: Scott describes mechanisms from human biology, society, ecology, business management, fictional history, etc. Hence the garlic-like nesting: It seems to me that these systems are nested within each other, and while Scott only ever describes two levels at a time, it’s clear enough that higher levels such as business management depend on lower levels such as those that enable human bodies to function.

This essay made a lot of things clearer to me that I had half intuited but never quite understood. In particular it made me update downward a bit on how much I expect AGI to outperform humans. One of my reasons for thinking that human intelligence is vastly inferior to a theoretical optimum was that I thought evolution could almost only ever improve one step at a time – that it would take an extremely long time for a multi-step mutation with detrimental intermediate steps to happen through sheer luck. Since slack seems to be built into biological evolution to some extent, maybe it is not as inferior to the “intelligent design” we’re attempting now as I had thought.
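To make the local-optimum point concrete, here is a toy sketch of my own (not from Scott’s post): a hill-climber on a tiny one-dimensional fitness landscape. With zero slack it only accepts strictly improving steps and gets stuck on the first peak; with a small budget of tolerated neutral or detrimental steps it can cross the valley to a higher peak.

```python
# Toy illustration (my own, not from the article): a 1-D "fitness landscape"
# with a local peak at index 2 and a higher peak at index 6.
fitness = [0, 2, 5, 3, 1, 4, 10, 9]

def climb(slack: int) -> int:
    """Walk right from index 0 and return the best fitness reached.

    slack = 0: accept only strictly improving steps (every intermediate
    step must pay off immediately).
    slack > 0: tolerate up to that many consecutive neutral or
    detrimental steps before giving up.
    """
    pos, budget, best = 0, slack, fitness[0]
    while pos + 1 < len(fitness):
        if fitness[pos + 1] > fitness[pos]:
            pos, budget = pos + 1, slack        # improving step: reset the budget
        elif budget > 0:
            pos, budget = pos + 1, budget - 1   # tolerated neutral/worse step
        else:
            break                               # no slack left: stuck
        best = max(best, fitness[pos])
    return best

print(climb(slack=0))  # 5  -> stuck on the local peak
print(climb(slack=3))  # 10 -> slack lets it cross the valley to the higher peak
```

It is only a caricature, but it captures the mechanism: slack changes the requirement from “every step must pay off” to “the plan as a whole must pay off.”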

It would also be interesting to think about how slack affects zero-sum board games – simulations of fierce competition. In the only board game I know, Othello, you can thwart any plans the opponent might have with your next move in, like, 90+% of cases. Hence, I made a (small but noticeable) leap forward in my performance when I switched from analyzing my position through the lens of “What is a nice move I can play?” to “What is a nice move my opponent could now play if it were their turn, and how can I prevent it?” A lot of perfect moves, especially early in the game, switched from looking surprising and grotesque to looking good once I viewed them through that lens. So it seems that in Othello there is rarely any slack. (I’m not saying that you don’t plan multi-step strategies in Othello, but it’s rare that you can plan them such that you actually get to carry them out. Robust strategies play a much greater role in my experience. Then again, this may be different at higher levels of gameplay than mine.)

Perhaps that’s related to why I’ve seen people who are not particularly smart turn out to be shockingly effective social manipulators, and why such people are usually found in low-slack fields. If your situation is so competitive that your opponent can never plan more than one step ahead anyway, you only need to do the equivalent of thinking “What is a nice move my opponent could now play if it were their turn, and how can I prevent it?” to beat, like, 80% of them. No need for baroque and brittle stratagems like in Skyfall.

I wonder whether Go is different? The board is so big that I’d expect there to be room to do whatever you like for a few moves from time to time. Very vague, surface-level heuristic idea! I have no idea of Go strategy.

I’m a bit surprised that Scott didn’t draw parallels to his interest in cost disease, though. Not that I see any clear ones, but there have got to be some that are at least worth checking and debunking – innovation slowing down so that you need more slack to innovate at the same rate, or increasing wealth creating more slack and thereby decreasing the competition that would otherwise have kept prices down, etc.

The article was very elucidating, but I’m not yet able to look at a system and tell whether it needs more or less slack, or how to establish a mechanism that could produce that slack. That would be important, since I have a number of EA friends who could use some more slack to figure out psychological issues or to skill up in some areas. The EA funds try to help a bit here, but I feel like we need more of that.

comment by Denis Drescher (Telofy) · 2020-04-23T15:44:50.839Z · score: 3 (2 votes) · EA(p) · GW(p)

“If you value future people, why do you consider near term effects?” by Alex HT: Personal takeaways

I find it disconcerting that there are a lot of very smart people in the EA community who focus more on near-term effects than I currently find reasonable.

“If you value future people, why do you consider near term effects?” by Alex HT makes the case that a lot of reasons to focus on near-term effects fall short of being persuasive. The case is based centrally on complex cluelessness. It closes with a series of possible objections and why they are not persuasive. (Alex also cites the amazing article “Growth and the case against randomista development.”)

The article invites a discussion, and Michael St. Jules responded by explaining the shape of a utility function (bounded above and below) that would lead to a near-term focus and why it is a sensible utility function to have. Judging by the number of upvotes, this seems to be a common reason to prefer near-term interventions.

There are also hints in the discussion that a focus on near-term effects may serve as a Schelling point in a coordination problem with future generations. But that point is not fully developed, and I don’t think I could steelman it.

I’ve heard smart people argue for the merits of bounded utility functions before. They have a number of merits – avoiding Pascal’s mugging, the St. Petersburg game, and more. (Are there maybe even some benefits for dealing with infinite ethics?) But they’re also awfully unintuitive to me.

Besides, I wouldn’t know how to select the right parameters for it. With some parameters, it’ll still be nearly linear over, say, a third-degree-polynomial increase in aggregate positive or negative valence over the coming millennium, and that may be enough to prefer current longtermist approaches over current near-termist ones.
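To make that concrete with a toy example (the functional form and the symbols are my own, not from the thread): take a utility function that is bounded above and below, for instance

```latex
u(W) = \bar{u}\,\tanh\!\left(\frac{W}{s}\right),
\qquad\text{so}\qquad
u(W) \approx \frac{\bar{u}}{s}\,W
\quad\text{for } |W| \ll s,
```

where $W$ is aggregate valence, $\bar{u}$ is the bound, and $s$ is a scale parameter. If $s$ is chosen large relative to anything a millennium of polynomial growth in $W$ can produce, the bounded function is effectively linear over the relevant range and the near-termist conclusion doesn’t follow; if $s$ is small, it does. So the choice of parameters carries most of the argument.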

Related: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

comment by Denis Drescher (Telofy) · 2020-06-05T06:52:44.619Z · score: 2 (1 votes) · EA(p) · GW(p)

“Effective Altruism and Free Riding” by Scott Behmer: Personal takeaways

Coordination is an oft-discussed topic within EA, and people generally try hard to behave cooperatively toward other EA researchers, entrepreneurs, and donors, present and future. But “Effective Altruism and Free Riding” makes the case that standard EA advice favors defection over cooperation in prisoner’s dilemmas (and stag hunts) with non-EAs. It poses the question of whether this is good or bad and what can be done about it.

I’ve had a few thoughts while reading the article but found that most of them were already covered in the most upvoted comment thread. I’ll still outline them in the following as a reference for myself, to add some references that weren’t mentioned, and to frame them a bit differently.

The project of maximizing gains from moral trade is one that I find very interesting and promising, and want to investigate further to better understand its relative importance and strategic implications.

Still, Scott’s perspective was a somewhat new one for me. He points out that the neglectedness criterion in particular encourages free-riding: Climate change is a terrible risk, but neglectedness considerations tend to convince us that additional work on it is not maximally pressing. In effect, we’re free-riding on the efforts of activists working on climate change mitigation.

What was new to me about that is that I had conceived of neglectedness as a cheap coordination heuristic: cheap in that it doesn’t require a lot of communication with other cooperators; coordination in the sense that everyone is working toward a bunch of similar goals but needs to distribute the work among themselves optimally; and heuristic in that it falls short insofar as values are not perfectly aligned, momentum in capacity building is hard to anticipate, and the tradeoffs with tractability and importance are usually highly imprecise.

So in essence, my simplification was to conceive of the world as filled with agents whose values are like mine and who use neglectedness to coordinate their cooperative work, while Scott conceives of it as filled with agents whose values are very much unlike mine and who use neglectedness to free-ride off each other’s work.

Obviously, neither is exactly true, but I don’t see an easy way to home in on which model is better: (1) I suppose most people are not centrally motivated by consequentialism in their work, and it may be impossible for us to benefit the motivations that are central to them. But then again, there are probably consequentialist aspects to most people’s motivations. (2) Insofar as there are aspects of people’s motivations for their work that we can benefit, how would these people wish for their preferences to be idealized (if that is even the framing in which they’d prefer to think about their behavior)? Caspar Oesterheld discusses the ins and outs of different forms of idealization in the eponymous section 3.3.1 of “Multiverse-wide Cooperation via Correlated Decision Making.” The upshot is, very roughly, that idealization through additional information seems less dubious than idealization through moral arguments (Scott’s article mentions advocacy, for example). So would exposing non-EAs to information about the importance of EA causes lead them to agree that people should focus on those causes even at the expense of the cause they chose? (3) Which consequentialist preferences should we even take into account – only altruistic ones, or also personal ones, since personal ones may be particularly strong? A lot of people have personal preferences not to die or suffer, and for their children not to die or suffer, which may be (imperfectly) aligned with catastrophe prevention.

But the framing of the article and the comments also differed from the way I conceive of the world in that it framed the issue as a game between altruistic agents with different goals. I’ve so far seen all sorts of nonagents as being part of the game by dint of being moral patients. If instead we have a game between altruists who are stewards of the interests of other, nonagent moral patients, it becomes clearer why everyone is part of the game and what their power is, but a few other aspects elude me. Is there a risk of double-counting the interests of the nonagent moral patients if they have many altruist stewards – and does that make a difference if everyone does it? And should a bargaining solution take only the stewards’ power into account (perhaps the natural default, for better or worse) or also the number of moral patients they stand up for? The first falls short of my moral intuitions in this case. It may also cause Ben Todd and many others to leave the coalition because the gains from trade are not worth the sacrifice for them. Maybe we can do better. But the second option seems gameable (by pretending to see moral patienthood where one in fact does not see it) and may cause powerful cooperators to leave the coalition if they have a particularly narrow concept of moral patienthood. (Whatever the result, it seems likely that this is the portfolio that commenters mentioned, probably akin to the compromise utility function that you maximize in evidential cooperation – see Caspar Oesterheld’s paper.)
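To state the two options a bit more precisely (this formalization is my own, not from the article or from Caspar’s paper): write the compromise utility function as a weighted sum of the stewards’ utility functions,

```latex
U_{\text{compromise}} = \sum_i w_i\, u_i ,
\qquad
w_i \propto \text{power}_i
\quad\text{or}\quad
w_i \propto n_i ,
```

where $u_i$ is steward $i$’s utility function, $\text{power}_i$ their bargaining power, and $n_i$ the number of moral patients they stand up for. In this notation, the double-counting worry is that the same patients’ welfare can enter several $u_i$, so their effective weight grows with the number of their stewards unless the $w_i$ correct for the overlap.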

Personally, I can learn a lot more about these questions by just reading up on more game theory research. More specifically, it’s probably smart to investigate what gains from trade we could realize in the best case, to see whether all of this is even worth the coordination overhead.

But there are probably also a few ways forward for the community. Causal (as opposed to acausal) cooperation requires some trust, so maybe the signal that there is a community of altruists that cooperates particularly well internally can be good if paired with the option for others to join that community by proving themselves sufficiently trustworthy. (That community may be wider than EA and go by a different name.) That would probably take the shape of newcomers making the case for new cause areas based not necessarily on their appeal to utilitarian values but on their appeal to the newcomer’s own values – alongside an argument that those values wouldn’t just turn into some form of utilitarianism upon idealization. That way, more value systems could gradually join the coalition, and we’d promote cooperation the way Scott recommends in the article. It’ll probably make sense to have different nested spheres of trust, though, with EA orgs at the center, the wider community around that, new aligned cooperators further out, occasional mainstream cooperators further out yet, etc. That way, the more high-trust spheres remain even if spheres further toward the outside fail.

Finally, a lot of these things are easier in the acausal case that evidential cooperation in large worlds (ECL) is based on (once again, see Caspar Oesterheld’s paper). Perhaps ECL will turn out to make sufficiently strong recommendations that we’ll want to cooperate causally anyway despite any risk of causal defection against us. This strikes me as somewhat unlikely (e.g., many environmentalists may find ECL weird, so there may never be many evidential cooperators among them), but I still feel sufficiently confused about the implications of ECL that I find it at least worth mentioning.