Posts

How do you balance reading and thinking? 2021-01-17T13:47:57.526Z
How do you approach hard problems? 2021-01-04T14:00:25.588Z
How might better collective decision-making backfire? 2020-12-13T11:44:43.758Z
Summary of Evidence, Decision, and Causality 2020-09-05T20:23:04.019Z
Self-Similarity Experiment 2020-09-05T17:04:14.619Z
Modelers and Indexers 2020-05-12T12:01:14.768Z
Denis Drescher's Shortform 2020-04-23T15:44:50.620Z
Current Thinking on Prioritization 2018 2018-03-13T19:22:20.654Z
Cause Area: Human Rights in North Korea 2017-11-26T14:58:10.490Z
The Attribution Moloch 2016-04-28T06:43:10.413Z
Even More Reasons for Donor Coordination 2015-10-27T05:30:37.899Z
The Redundancy of Quantity 2015-09-03T17:47:20.230Z
My Cause Selection: Denis Drescher 2015-09-02T11:28:51.383Z
Results of the Effective Altruism Outreach Survey 2015-07-26T11:41:48.500Z
Dissociation for Altruists 2015-05-14T11:27:21.834Z
Meetup : Effective Altruism Berlin Meetup #3 2015-05-10T19:40:40.990Z
Incentivizing Charity Cooperation 2015-05-10T11:02:46.433Z
Expected Utility Auctions 2015-05-02T16:22:28.948Z
Telofy’s Effective Altruism 101 2015-03-29T18:50:56.188Z
Meetup : EA Berlin #2 2015-03-26T16:55:04.882Z
Common Misconceptions about Effective Altruism 2015-03-23T09:25:36.304Z
Precise Altruism 2015-03-21T20:55:14.834Z
Telofy’s Introduction to Effective Altruism 2015-01-21T16:46:18.527Z

Comments

Comment by telofy on How do you balance reading and thinking? · 2021-01-17T18:15:55.995Z · EA · GW

Thank you! Also for the answer on the first question! (And thanks for encouraging me to go for this format.)

Comment by telofy on Can I have impact if I’m average? · 2021-01-05T02:31:35.780Z · EA · GW

I think the most important point in your message is the one about doing the most with the resources that one has. I think there is a form of contentment that should be highly rewarded socially. I for one am very impressed when I see someone who is not in the top ~ 1% and yet is motivated to do their best. (I want to be like that too.)

Yet I want to add a less important note to the second part: Very impactful roles (I’m thinking of the world at large here, not EA) tend to filter for people with a certain recklessness. A certain president comes to mind, but I even think that someone as skilled as Elon Musk has so far probably had a vastly net-negative impact. So there is a separate skill of thoughtfulness that might have a huge effect on one’s impact too.

Comment by telofy on Can I have impact if I’m average? · 2021-01-05T01:54:01.024Z · EA · GW

Yeah, I’m super noncompetitive, and yet, when I fear that I might replace someone who would’ve done a better job – be it because there is no interview process or I don’t trust the process to be good enough – I get into this compare-y mindset and shy away from it completely.

Comment by telofy on Donating to EA funds from Germany · 2021-01-02T11:48:24.824Z · EA · GW

If I remember correctly, German law requires that the charity that issues the donation certificate either controls or at least documents how the donations are used. (Not sure which of the two requirements it was.) I think that is why the forwarding to the EA Funds had to be discontinued. The EA Funds pool all the donations and then allocate them at their discretion, so it’s not possible for the German charity to control how the donations are used, and it’s probably difficult or impossible to know what project was supported to what degree by which donation.

Comment by telofy on What are some potential coordination failures in our community? · 2021-01-02T11:38:21.485Z · EA · GW

Thanks! :-D

Comment by telofy on How might better collective decision-making backfire? · 2020-12-29T18:01:48.786Z · EA · GW

  1. Huh, yeah. I wonder whether this isn’t more of an “inadequate equilibria” type of thing where we use all the right tools that our goals incentivize us to use – and so do all the other groups, except their incentives are weird and different. Then there could easily be groups with uncooperative values but incentives that lead them to use the same tools.

    A counterargument could be that a lot of these tools require some expertise, and people who have that expertise are probably not usually desperate enough to have to take some evil job, so most of these people will choose a good/neutral job over an evil job even if the salary is a bit lower.

    But I suppose some socially skilled narcissist can just exploit any random modern surrogate religion to recruit good people for something evil by appealing to their morality in twisted ways. So I think it’s a pretty neat mechanism but also one that fails frequently.

  2. Yeah, one of many, many benefits! :-) I don’t think the effect is going to be huge (so that we could rely on it) or tiny. But I’m also hoping that someone will use my system to help me clarify my values. ^^

Deferring to future versions of us: Yep!

Comment by telofy on How might better collective decision-making backfire? · 2020-12-29T17:30:41.268Z · EA · GW

Okay, I think I understand what you mean. What I meant by “X is value neutral” is something like “The platform FTX is value neutral even if the company FTX is not.” That’s probably not 100% true, but it’s a pretty good example, especially since I’m quite enamoured of FTX at the moment. OpenAI is all murky and fuzzy and opaque to me, so I don’t know what to think about that.

I think your suggestions go in similar directions as some of mine in various answers, e.g., marketing the product mostly to altruistic actors.

Intentional use of jargon is also something I’ve considered, but it comes at heavy costs, so it’s not my first choice.

References to previous EA materials can work, but I find it hard to think of ways to apply that to Squiggle. But certainly some demo models can be EA-related to make it differentially easier and more exciting for EA-like people to learn how to use it.

Lineage, implicit knowledge, and privacy: High costs again. Making a collaborative system secret would have it miss out on many of the benefits. And enforced openness may also help against bad stuff. But the lineage one is a fun idea I hadn’t thought of! :-D

My conclusion mostly hinges on whether runaway growth is unlikely or extremely unlikely. I’m assuming that it is extremely unlikely, so that we’ll always have time to react when things happen that we don’t want.

So the first thing I’m thinking about now is how to notice when things happen that we don’t want – say, through monitoring the referrers of website views, Google alerts, bounties, or somehow creating value in the form of a community so that everyone who uses the software has a strong incentive to engage with that community.

All in all, the measures I can think of are weak, but if the threat is also fairly unlikely, maybe those weak measures are proportional.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-27T11:17:46.142Z · EA · GW

Whee! Thank you too!

Yeah, I think that perspective on self-consciousness is helpful!

Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference.

“Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: When talking to a lot of people, always also ask what those big questions and proposed answers are. More nonobvious obvious advice! :-D

I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, but that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful!

I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga and are a bit cheaper than the Athletic Greens.

“Default mode network”: Interesting! I didn’t know about that.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-27T10:31:03.414Z · EA · GW

Indeed. :-/ Or would you disagree with my impression that, for example, Squiggle or work on prediction markets is value neutral?

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-20T12:09:48.561Z · EA · GW

Do you use the guided lessons of Keybr or a custom text? I think the guided lessons are geared toward your weaknesses, which probably leads to a lower speed than what you’d achieve with the average text.

my current typing speed is below average for programmers

That’s something where I’ve never felt bottlenecked by my typing speed. Learning to touch type was very useful, though, because it gave me a lot more freedom with screen configurations. (And switching to a keyboard layout other than German, where most brackets are super hard to reach. I use a customized Colemak.)

Have you tried calibration practice?

Yeah, it’s on my list of things I want to practice more, but the few times I did some tests I was mostly well-calibrated already (with the exception of one probability level, or whatever they’re called). There’s surely room for improvement, though. Maybe I’ll do worse if the questions are from an area that I think I know something about. ^^

Maybe I’m also too easily impressed by people who speak with an air of confidence. I might be falling for some sort of typical mind fallacy and assume that when someone doesn’t use a lot of hedges, they must be so sure that they’re almost certain to be right, and then update strongly on that. But I’m not quite convinced by that theory either. That probably happens sometimes, but at other times I also overupdate on my own new ideas. I’m pretty sure I overupdate whenever people use guilt-inducing language, though.

I filled in Brian Tomasik’s list of beliefs and values on big questions at one point. :-D

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-20T11:40:44.000Z · EA · GW

Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T21:24:00.341Z · EA · GW

Awesome! For me, the size of an area plays a role in how long I can sustain a high level of motivation for it. When you’re studying a board game, there are only a few activities, and they are quite similar, so if you try out all of them you might run out of motivation within a year. This happened to me with Othello. But computer science or EA are so broad that if you lose motivation for some subfield of decision theory, you move on to another subfield of decision theory, or to something else entirely, like history. And there are probably a lot of such subareas where there are potentially impactful investigations waiting to be done. So it makes sense to me to be optimistic about having long sustained motivation for such a big field.

My motivation did shift a few times, though. I think before 2012 it was more a “This is probably hopeless, but I have to at least try on the off-chance that I’m in a world where it’s not hopeless.” From 2012 to 2014 it was more “Someone has to do it and no one else will.” After March 28, 2014, it was carried a lot by the sudden enormous amount of hope I got from EA. On October 28, 2015, I suddenly lost an overpowering feeling of urgency and became able to consider strategies on timescales longer than a decade or two. Even later, I became increasingly concerned with coordination and risk from regression to the (lower) mean.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T18:00:25.015Z · EA · GW

In a private conversation we figured out that I may tend too much toward setting specific goals and then only counting the achievement of those goals as success, ignoring all the little things that I learn along the way. If the goal is hard to achieve, I have to learn a lot of little things on the way, and that takes time, but if I don’t count these little things as little successes, my feedback gets too sparse, and I lose motivation. So noticing little successes seems valuable.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T17:51:28.061Z · EA · GW

Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post a while back. :-D

Much of the bulk of the iceberg is research, which has the interesting property that negative results – if they are the result of a high-quality, sufficiently powered study – can often be useful. If the 100 EAs from the introduction (under 1.a.) are researchers who know that one of the plausible ideas has to be right, and 99 of those ideas have already been shown not to be useful, then the final EA researcher can eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it’s a particularly strong investment.

It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T17:43:24.352Z · EA · GW

Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):

For example, when I've decided to take a calculated risk, knowing that I might well fail but that it's still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, "Don't worry! This is going to work!" so that I can be relaxed and motivated enough to push forward.

But instead, in those situations I like to use a framework CFAR sometimes calls "Worker-me versus CEO-me." I remind myself that CEO-me has thought carefully about this decision, and for now I'm in worker mode, with the goal of executing CEO-me's decision. Now is not the time to second-guess the CEO or worry about failure.

Your approach to gathering feedback and iterating on the output – refining it with every iteration while also deciding whether it’s worth another iteration – sounds great!

I think a lot of people aim for such a process, or want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They worry the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or think badly of them for a particular wording, or think badly of them because they think you should’ve anticipated a negative effect of writing about the topic and not done so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to you) – or they just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T15:44:48.674Z · EA · GW

Yeah, I agree about how well or not well those concepts line up. But I think insofar as I still struggle with a probably disproportionate survival mindset, it’s about questions of being accepted socially and surviving financially rather than anything linked to beliefs (maybe indirectly in a few edge cases, but that feels almost irrelevant).

If this is not just my problem, it could mean that a universal basic income could unlock more genius researchers. :-)

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T15:39:25.852Z · EA · GW

Typing speed: Interesting! What is your typing speed?

Obvious questions: Thanks, I’ll keep that in mind. It seems unlikely to be the case for me, but I haven’t tried to observe such a connection either. I’ve observed the opposite tendency in myself: I’m worried about being wrong and so spend a lot of time probing all the ways in which I may be wrong, which has had the unintended negative effect that I’m too likely to abandon old approaches in favor of ones I’ve heard of very freshly, because for the latter I haven’t yet come up with as many counterarguments. I also find rehearsing stuff that I already believe yucky and boring in ways that rehearsing counterarguments is not. But of course I might be falling for both traps in different contexts.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T15:06:51.436Z · EA · GW

Your advice to talk to people is probably most important to me! I haven’t tried that a lot, but when I did, it was very successful. One hurdle is not wanting to come off as too stupid to the other person (but there are also people who make me feel sufficiently at ease that I don’t mind coming off as stupid) and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~ 10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time gets in the way of the chat both of us want. (Maybe they have the same reluctance, and both of us would be happier if neither of us had it. Can we have a Reciprocity.io for talking about research, please? ^^)

Typing speed: Haha! You can test it here for example: https://10fastfingers.com/typing-test/english. I’ve been stagnating at ~ 60 WPM for years now. Maybe there’s some sort of distinction where some brains are more optimized toward (e.g., worse memory) or incentivized to optimize toward (e.g., through positive feedback) fewer low-level concepts, and others more toward more high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding from first principles, which might make them less innovative. Just random speculation.

Obvious questions: Yeah, I’ve been wondering how it can be that nowadays a lot of people independently come up with cases for nonhuman rights and altruism regardless of distance, but a century ago seemingly almost no one did. Maybe such cases existed and most are lost to history, and the ones that aren’t, I just don’t know about (though I can think of some examples). Or maybe culture was so different that a lot of the frameworks weren’t there that these ideas attach to. So if moral genius is, say, normally distributed, then values-spreading could have the benefit that it increases the number of people who use relevant frameworks and thereby also increases the absolute number of moral geniuses who work within those frameworks. The values would have to be sufficiently cooperative not to risk zero-sum competition between values. I suppose that’s similar to Bostrom’s Megaearth scenario, except with people who share certain frameworks in their thinking rather than the pure number of people.

Getting work done when tired: Well, to some degree I noticed that I over-update on tiredness, and then get into a negative feedback loop where I give up on things too quickly because I think I’m too tired to do them. At that point I’m usually not actually particularly tired.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T13:32:20.314Z · EA · GW

Thank you! Using the thinking vs. reading balance as a feedback mechanism is an interesting take, and in my experience it’s also most fruitful in philosophy, though I can’t compare with those branches of economics.

Survival mindset: I suppose it serves its purpose when you’re in a very low-trust environment, but it’s probably not necessary most of the time for most aspiring EA researchers.

Thanks for linking that list of textbooks! It’s also been helpful for me in the past. :-D

Planning the next day the evening before also seems like a good thing to try for me. Thanks!

I wonder whether you all have such fairly high typing speeds simply because you all type a lot or whether 80+ WPM is a speed threshold that is necessary to achieve before one ceases to perceive typing speed as a limiting factor. (Mine is around 60 WPM.)

I hope you can get your work hours down to a manageable level!

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T12:49:12.818Z · EA · GW

Thanks! This is something I sometimes struggle with, I think. Is the culture just all about sharing early and often and helping each other, or are there also other aspects to the culture that I may not anticipate that help you overcome this self-consciousness? :-)

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-19T12:40:26.694Z · EA · GW

Wow! Thanks for all the insightful answers, everyone!

Would anyone mind if I transferred these into a post on my blog (or a separate post in the EA Forum) that is linear in the sense that there is one question and then all answers to it, then the next question and all answers to it, and so on? That may also generate more attention for these answers. :-)

Comment by telofy on How might better collective decision-making backfire? · 2020-12-16T11:52:27.535Z · EA · GW

But you perhaps seem to be thinking of tools to generate progress related to better values

No, I think that unfortunately, the tools I envision are pretty value neutral. I’m thinking of Squiggle, of Ozzie’s ideas for improving prediction markets, and of such things as using better metrics – e.g., SWB instead of QALYs, or expected value of the future instead of probability of extinction.

capabilities (predictive accuracy, coordination ability, precommitment mechanisms)

Hmm, in my case: yes, noish, no. I think I’m really only thinking of making the decisions better, so more predictive accuracy, better Brier scores, or something like that.

In the end I’m of course highly agnostic about how this will be achieved. So this only reflects how I envision this project might turn out to be characterized. Ozzie wants to work that out in more detail, so I’ll leave that to him. :-)

Especially coordination ability may turn out to be affected. More de facto than de jure, I imagine, but when people wanted to collaborate on open source software, their goal was (presumably) to create better software faster and not to improve humanity’s coordination ability. But to do that, they developed version control systems and bug tracking systems, so in the end, they did improve coordination ability. So improving coordination ability is a likely externality of this sort of project, you could say.

For precommitment mechanisms, I can’t think of a way they might be affected, either on purpose or accidentally.

Maybe it’ll be helpful to collect a lot of attributes like these and discuss whether we think we’ll need to directly, intentionally affect them, or whether we think we might accidentally affect them, or whether we don’t think they’ll be affected at all. I could easily be overlooking many ways in which they are interconnected.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-15T06:17:47.456Z · EA · GW

Assorted Risks

I had a chat with Justin Shovelain yesterday in which we discussed a few more ways in which improved collaborative truth-seeking can backfire. These are, for me, rather early directions for further thought, so I’ll just combine them into one answer for now.

They fall into two categories: negative effects of better decision-making and negative effects of collaborative methods.

Negative effects of better decision-making:

  1. Valuable ambiguity. It might be that ambiguity plays an important role in social interactions. There is the stylized example where it is necessary to keep the number of rounds of an iterated game secret, or else that knowledge will distort the game. I’ve also read somewhere that there’s a theory that conflicts between two countries can be exacerbated if the countries have too low-quality intelligence about each other but also if they have too high-quality intelligence about each other. But I can’t find the source, so I’m likely misremembering something. Charity evaluators also benefit from ambiguity: Fewer charities would be willing to undergo their evaluation process if the only reasons why a charity would decline it or block the results from being published were reasons that reflect badly on the charity. But there are also good and neutral reasons, so charities will always have plausible deniability.

Negative effects of collaborative methods:

  1. Centralized control. My earlier answer titled “Legibility” argued that collaborative methods will make it necessary to make considerations and values more legible than they are now so they can be communicated and quantified. This may also make them more transparent and thus susceptible to surveillance. That, in turn, may enable more powerful authoritarian governments, which may steer the world into a dystopian lock-in state.
  2. Averaging effect. Maybe there are people who are particularly inclined toward outré opinions. These people will be either unusually right or unusually wrong for their time. Maybe there’s more benefit in being unusually right than there is harm in being unusually wrong (e.g., thanks to the law). And maybe innovation toward most of what we care about is carried by unusually right people. (I’m thinking of Newton here, whose bad ideas didn’t seem to have much of an effect compared to his good ideas.) Collaborative systems likely harness – explicitly or implicitly – some sort of wisdom of the crowds type of effect. But such an effect is likely to average away the unusually wrong and the unusually right opinions. So such systems might slow progress.
  3. More power to the group. It might be that the behavior of groups (e.g., companies) is generally worse (e.g., more often antisocial) than that of individuals. Collaborative systems would shift more power from individuals to groups. So that may be undesirable.
  4. Legible values. It may be very hard to model the full complexity of people’s moral intuitions. In practice, people tend toward systems that greatly reduce the dimensionality of what people typically care about. The results are utils, DALYs, SWB, consumption, life years, probability of any existential catastrophe, etc. Collaborative systems would incentivize such low-dimensional measures of value, and through training, people may actually come to care about them more. It’s contentious and a bit circular to ask whether this is good or bad. But at least it’s not clearly neutral or good.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-14T17:53:07.290Z · EA · GW

Interesting. That’s a risk when pushing for greater coordination (as you said). If you keep the ability to coordinate the same and build better tools for collective decision-making, would that backfire in such a way?

I imagine collaborative tools would have to make values legible to some extent if they are to be used to analyze anything not-value-neutral. That may push toward legible values, so more like utilitarianism and less like virtue ethics or the mixed bag of moral intuitions that we usually have? But that’s perhaps a separate effect.

But I’m also very interested in improving coordination, so this risk is good to bear in mind.

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-14T14:16:52.224Z · EA · GW

Hi Michael! Huh, true, those terms seem to be vastly less commonly used than I had thought.

By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc.

By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc.

Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-14T13:31:00.206Z · EA · GW

Thank you! This seems importantly distinct from the “Benefiting Unscrupulous People” failure mode in that you put the emphasis not on intentional exploitation but on cooperation failures even among well-intentioned groups.

I’ll reuse this comment to bring up something related. The paper “Open Problems in Cooperative AI” has a section titled “The Potential Downsides of Cooperative AI.”

The paper focuses on the cooperation aspect of the collaborative/cooperative truth-seeking, so the section on potential downsides focuses on downsides of cooperation and downsides of promoting cooperation rather than downsides of truth-seeking. That said, it might be the case that any tool that enables collaborative truth-seeking also promotes capacities that are needed for cooperation.

I currently think that promoting cooperation is one of the most robustly good things we can do, so challenges to that view are very interesting!

The paper lists three potential downsides:

  1. Cooperation can enable collusion, cartels, and bribes – various antisocial behaviors that wouldn’t be as easy if there were no way to trust the other party.
  2. Promoting cooperation will go through promoting capacities that enable cooperation. Such capacities include understanding, recognizing honesty and deception, and commitment. Those same capacities can also be used to understand others’ vulnerabilities, deceive them, and commit to threats. Those, however, can also be prosocial again, e.g., in the case of laws that are enforced through a threat of a fine.
  3. Finally, the authors note that competition can facilitate learning or training.

They also suggest mitigating factors when it comes to such cooperation failures:

Secondly, mutual gains in coercive capabilities tend to cancel each other out, like how mutual training in chess will tend to not induce a large shift in the balance of skill. To the extent, then, that research on Cooperative AI unavoidably also increases coercive skill, the hope is that those adverse impacts will largely cancel, whereas the increases in cooperative competence will be additive, if not positive complements. This argument is most true for coercive capabilities that are not destructive but merely lead to transfers of wealth between agents. Nevertheless, mutual increases in destructive coercive capabilities will also often cancel each other out through deterrence. The world has not experienced more destruction with the advent of nuclear weapons, because leaders possessing nuclear weapons have greatly moderated their aggression against each other. By contrast, cooperation and cooperative capabilities lead to positive feedback and are reinforcing; it is in one’s interests to help others learn to be better cooperators.

My thoughts, in the same order, on how these apply to collaborative truth-seeking:

  1. This seems to depend on whether collaborative truth-seeking changes how easy it is to cooperate with others. While I’m very enthusiastic about promoting cooperation and so take this concern seriously, it also doesn’t seem to fit this post, or only very tenuously. It might be that tools that depend on cooperation enhance cooperation, but it might also not be the case. Or the effect might be very small.
  2. A similar response applies here, but there is the wrinkle that enhanced truth-seeking may be used to improve what the authors call “understanding.” I’m not very concerned about this, though, because our evolved intuitions for Theory of Mind are probably so good that a technical, collaborative tool can barely improve upon them. (Unlike AI, which the paper is about.) This applies less to groups of people than to individuals, so maybe this is again a danger once we’re dealing with companies or states.
  3. This probably doesn’t apply. At least I can’t see a way in which collaborative tools would affect the level of competition. In fact it seems that it is hard to set the incentives of prediction markets such that people are not discouraged from information sharing (which is probably bad on balance).

Comment by telofy on Ask Rethink Priorities Anything (AMA) · 2020-12-14T10:21:12.033Z · EA · GW

I’ve been very impressed with your work, and I’m looking forward to you hopefully making similarly impressive contributions to probing longtermism!

But when it comes to questions: You did say “anything,” so may I ask some questions about productivity when it comes to research in particular? Please pick and choose from these to answer any that seem interesting to you.

  1. Thinking vs. reading. If you want to research a particular topic, how do you balance reading the relevant literature against thinking yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. Would you agree, or do you have a different approach?
  2. Self-consciousness. I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
  3. Is there something interesting here? I often have some (for me) novel ideas, but then it turns out that whether true or false, the idea doesn’t seem to have any important implications. Conversely, I’ve dismissed ideas as unimportant, and years later someone developed them – through a lot of work I didn’t do because I thought it wasn’t important – into something that did connect to important topics in unanticipated ways. Do you have rules of thumb that help you assess early on whether a particular idea is worth pursuing?
  4. Survival vs. exploratory mindset. I’ve heard of the distinction between survival mindset and exploratory mindset, which makes intuitive sense to me. (I don’t remember where I learned of these terms, but I tried to clarify how I use them in a comment below.) I imagine that for most novel research, exploratory mindset is the more useful one. (Or would you disagree?) If it doesn’t come naturally to you, how do you cultivate it?
  5. Optimal hours of work per day. Have you found that a particular number of hours of concentrated work per day works best for you? By this I mean time you spend focused on your research project, excluding time spent answering emails, AMAs, and such. (If hours per day doesn’t seem like an informative unit to you, imagine I asked “hours per week” or whatever seems best to you.)
  6. Learning a new field. I don’t know what I mean by “field,” but probably something smaller than “biology” and bigger than “how to use Pipedrive.” If you need to get up to speed on such a field for research that you’re doing, how do you approach it? Do you read textbooks (if so, linearly or more creatively?) or pay grad students to answer your questions? Does your approach vary depending on whether it’s a subfield of your field of expertise or something completely new?
  7. Hard problems. I imagine that you’ll sometimes have to grapple with problems that are sufficiently hard that it feels like you haven’t made any tangible progress on them (or on how to approach them) for a week or more. How do you stay optimistic and motivated? How and when do you “escalate” in some fashion – say, discuss hiring a freelance expert in some other field?
  8. Emotional motivators. It’s easy to be motivated on a System 2 basis by the importance of the work, but sometimes that fails to carry over to System 1 when dealing with some very removed or specific work – say, understanding some obscure proof that is relevant to AI safety along a long chain of tenuous probabilistic implications. Do you have tricks for how to stay System 1 motivated in such cases – or when do you decide that a lack of motivation may actually mean that something is wrong with the topic and you should question whether it is sufficiently important?
  9. Typing speed. I have this pet theory that a high typing speed is important for some forms of research that involve a lot of verbal thinking (e.g., maybe not maths). The idea is that our memory is limited, so we want to take notes of our thoughts. But handwriting is slow, and typing is only mildly faster, so unless one thinks slowly or types very fast, there is a disconnect that causes continual stalling, impatience, and forgotten ideas, and prevents the process from flowing. Does that make any intuitive sense to you? Do you have any tricks (e.g., dictation software)?
  10. Obvious questions. Nate Soares has an essay on “obvious advice.” Michael Aird mentioned that in many cases he just wanted to follow up on some obvious ideas. They were obvious in hindsight, but evidently they hadn’t been obvious to anyone else for years. Is there a distinct skill of “noticing the obvious ideas” or “noticing the obvious open questions”? And can it be trained or turned into a repeatable process?
  11. Tiredness, focus, etc. We sometimes get tired or have trouble focusing. Sometimes this happens even when we’ve had enough sleep (just to get an obvious solution out of the way: sleep/napping). What are your favorite things to do when focusing seems hard or you feel tired? Do you use any particular nootropics, supplements, air quality monitor, music, or exercise routine?
  12. Meta. Which of these questions would you like to see answered by more people because you are interested in the answers too?

Thank you kindly! And of course just pick out the questions you think are interesting for you or other readers to answer. :-)

Comment by telofy on How might better collective decision-making backfire? · 2020-12-13T11:47:47.877Z · EA · GW

Social Effects

Beliefs are often entangled with social signals. This can pose difficulties for what I’ll call in the following a “truth-seeking community.”

When people want to disassociate from a disreputable group – say, because they’ve really never had anything to do with the group and don’t want that to change – they can do this in two ways: They can steer clear of anything that is associated with the disreputable group or they can actively signal their difference from the disreputable group.

Things that are associated with the disreputable group are, pretty much necessarily, things that are either sufficiently specific that they rarely come up randomly or things that are common but on which the group has an unusual, distinctive stance. Otherwise these things could not serve as distinguishing markers of the group.

If the disreputable group is small, is distinguished by an unusual focus on a specific topic, and a person wants to disassociate from them, it’s usually enough to steer clear of the specific topic, and no one will assume any association. Others will start out with a prior that the person is < 1% likely to be part of the group, and absent signals to the contrary, will maintain that credence.

But if the disreputable group is larger, at least in one’s social vicinity, or the group’s focal topic is a common one, then one needs to countersignal more actively. Others may start out with a prior that the person is ~ 30% likely to be part of the group and may avoid contact with them unless they see strong signals to the contrary. This is where people will find it necessary to countersignal strongly. Moreover, once there is a norm to countersignal strongly, the absence of such a signal or a cheaper signal will be doubly noticeable.

I see two, sometimes coinciding, ways in which that can become a problem. First, the group may be disreputable because of their values, which may be extreme or uncooperative, and it is just historical contingency that they endorse some distinctive belief. Or second, the group may be disreputable because they have a distinctive belief that is so unusual as to reflect badly on their intelligence or sanity.

The first of these is particularly problematic because the belief can be any random one with any random level of likelihood, quite divorced from the extreme, uncooperative values. It might also not be so divorced, e.g., if it is one that the group can exploit to their advantage if they convince the right people of it. But the second is problematic too.

If a community of people who want to optimize their collective decision-making (let’s call it a “truth-seeking community”) builds sufficiently complex models, e.g., to determine the likelihood of intelligent life re-evolving, then maybe at some point they’ll find that one node in their model (a Squiggle program, a Bayesian network, vel sim.) would be informed by more in-depth research of a question that is usually associated with a disreputable group. They can use sensitivity analysis to estimate the cost of leaving the node as it is, but maybe it turns out that their estimate is quite sensitive to that node.

In the first case, in the case of a group that is disreputable by dint of their values, that is clearly a bad catch-22.

But it can also go wrong in the second case, the case of the group that is disreputable because of their unusual beliefs, because people in the truth-seeking community will usually find it impossible to assign a probability of 0 to any statement. It might be that their model is very sensitive to whether they assign 0.1% or 1% likelihood to a disreputable belief. Then there’s a social cost also in the second case: Even though their credence is low either way, the truth-seeking community will risk being associated with a disreputable group (which may assign > 90% credence to the belief), because they engage with the belief.
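To make the sensitivity question concrete, here is a minimal sketch of such a check (in Python rather than Squiggle; the stand-in model, node names, and numbers are all hypothetical, purely for illustration):

```python
# Toy sensitivity check: how much does a model's output change when one
# awkward node is varied over its plausible range? Everything here
# (the stand-in model and the numbers) is made up for illustration.

def model_output(p_controversial_node: float) -> float:
    """Stand-in for a larger model, e.g. the probability that intelligent
    life re-evolves, which depends on one controversial node."""
    p_other_factors = 0.4  # aggregate of the uncontroversial nodes
    return p_other_factors * (0.2 + 0.8 * p_controversial_node)

low, high = 0.001, 0.01  # 0.1% vs. 1% credence in the disreputable belief
outputs = {p: model_output(p) for p in (low, high)}
spread = abs(outputs[high] - outputs[low]) / outputs[low]

for p, out in outputs.items():
    print(f"node = {p:.3f} -> output = {out:.4f}")
print(f"relative spread: {spread:.1%}")

# If the spread is negligible, the node can stay a rough guess and the
# controversial question can be ignored; if it is large, the catch-22
# described above applies.
```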

I see six ways in which this is problematic:

  1. Exploitation of the community by bad actors: The truth-seeking community may be socially adroit, and people will actually grant them some sort of fool’s licence because they trust their intentions. But that may turn out to be exploitable: People with bad intentions may use the guise of being truth-seeking to garner attention and support while subtly manipulating their congregation toward their uncooperative values. (Others may only be interested in the attention.) Hence such a selective fool’s licence may erode societal defenses against extreme, uncooperative values and the polarization and fragmentation of society that they entail. Meanwhile the previously truth-seeking community may be taken over by such people, who’ll be particularly drawn to its influential positions while being unintimidated by the responsibility that comes with these positions.
  2. Exploitation of the results of the research by bad actors: The same can be exploitable in that the truth-seeking community may find that some value-neutral belief is likely to be true. Regardless of how value-neutral the belief is, the disreputable group may well be able to cunningly reframe it to exploit and weaponize it for their purposes.
  3. Isolation of and attacks on the community: Conversely, the truth-seeking community may also not be sufficiently socially adroit and still conduct their research. Other powerful actors – potential cooperation partners – will consider the above two risks or will not trust the intentions of the truth-seeking community in the first place, and so will withhold their support from the community or even attack it. This may also make it hard to attract new contributors to the community.
  4. Internal fragmentation through different opinions: The question whether the sensitivity of the model to the controversial belief is high enough to warrant any attention may be a narrow one, one that is not stated and analyzed very explicitly, or one that is analyzed explicitly but through models that make contradictory predictions. In such a case it seems very likely that people will arrive at very different predictions as to whether it’s worse to ignore the belief or to risk the previous failure modes. This can lead to fragmentation, which often leads to the demise of a community.
  5. Internal fragmentation through lack of trust: The same internal fragmentation can also be the result of decreasing trust within the community because the community is being exploited or may be exploited by bad actors along the lines of failure mode 1.
  6. Collapse of the community due to stalled recruiting: This applies when the controversial belief is treated as a serious infohazard. It’s very hard to recruit people for research without being able to tell them what research you would like them to do. This can make recruiting very or even prohibitively expensive. Meanwhile there is usually some outflow of people from any community, so if the recruitment is too slow or fully stalled, the community may eventually vanish. This would be a huge waste especially if the bulk of the research is perfectly uncontroversial.

I have only very tentative ideas of how these risks can be alleviated:

  1. The community will need to conduct an appraisal, as comprehensive and unbiased as possible, of all the expected costs/harms that come with engaging with controversial beliefs.
  2. It will need to conduct an appraisal of the sensitivity of its models to the controversial beliefs and what costs/harms can be averted, say, through more precise prioritization, if the truth value of the beliefs is better known.
  3. Usually, I think, any specific controversial belief will likely be close to irrelevant for a model so that it can be safely ignored. But when this is not the case, further safeguards can be installed:
  4. Engagement with the belief can be treated as an infohazard, so those who research it don’t do so publicly, and new people are onboarded to the research only after they’ve won the trust of the existing researchers.
  5. External communication may take the structure of a hierarchy of tests, at least in particularly hazardous cases. The researchers need to gauge the trustworthiness of a new recruit with questions that, if they backfire, afford plausible deniability and can’t do much harm. Then they only gradually increase the concreteness of the questions if they learn that the recruit is well-intentioned and sufficiently open-minded. But this can be uncooperative if some codes become known, and then people who don’t know them use them inadvertently.
  6. If the risks are mild, there may be some external communication. In it, frequent explicit acknowledgements of the risks and reassurances of the intentions of the researchers can be used to cushion the message. But these signals are cheap, so they don’t help if the risks are grave or others are already exploiting these cheap signals.
  7. Internal communication needs to frequently reinforce the intentions of the participants, especially if there are some among them who haven’t known the others for a long time, to dispel worries that some of them may harbor intentions other than prosocial, truth-seeking ones.
  8. Agreed-upon procedures such as voting may avert some risk of internal fragmentation.

An example that comes to mind is a situation where a friend of mine complained about the lack of internal organization of certain apolitical (or maybe left-wing) groups and contrasted it with a political party that was very well organized internally. It was a right-wing party that is highly disreputable in our circles. His statement was purely about the quality of the internal organization of the party, but I only knew that because I knew him. Strangers at that meetup might’ve increased their credence that he agrees with the policies of that party. Cushioning such a mildly hazardous statement would’ve gone a long way to reduce that risk and keep the discussion focused on value-neutral organizational practices.

Another disreputable opinion is that of Dean Radin, who seems to be fairly confident that there is extrasensory perception, in particular (I think) presentiment on the timescale of 3–5 s. He is part of a community that, from my cursory engagement with it, seems to not only assign a nonzero probability to these effects and study them for expected value reasons but seems to actually be substantially certain of them. This entails an air of disreputability, either because of the belief by itself or because of the particular confidence in it. If someone were to create a model to predict how likely it is that we’re in a simulation, specifically in a stored world history, they may wonder whether cross-temporal fuzziness like this presentiment may be a sign of motion compensation, a technique used in video compression, which may also serve to lossily compress world histories. This sounds wild because we’re dealing with unlikely possibilities, but the simulation hypothesis, if true, may have vast effects on the distribution of impacts from interventions in the long term. These effects may plausibly even magnify small probabilities to a point where they become relevant. Most likely, though, the findings stem from whatever diverse causes are behind the experimenter effect.

I imagine that history can also be a guide here as these problems are not new. I don’t know much about religion or history, so I may be mangling the facts, but Wikipedia tells me that the First Council of Nicaea in 325 CE addressed the question of whether God created Jesus from nothing (Arianism) or whether Jesus was “begotten of God,” so that there was no time when there was no Jesus because he was part of God. It culminated as follows:

The Emperor carried out his earlier statement: everybody who refused to endorse the Creed would be exiled. Arius, Theonas, and Secundus refused to adhere to the creed, and were thus exiled to Illyria, in addition to being excommunicated. The works of Arius were ordered to be confiscated and consigned to the flames, while his supporters were considered as "enemies of Christianity." Nevertheless, the controversy continued in various parts of the empire.

This also seems like a time when, at least in most parts of the empire, a truth-seeking bible scholar would’ve been well advised to consider whether the question had sufficiently vast implications to be worth the reputational damage and threat of exile that came with engaging with it open-mindedly. But maybe there were monasteries where everyone shared a sufficiently strong bond of trust in one another’s intentions that some people had the leeway to engage with such questions.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-13T11:47:18.411Z · EA · GW

Psychological Effects

Luke Muehlhauser warns that overconfidence and sunk cost fallacy may be necessary for many people to generate and sustain motivation for a project. (But note that the post is almost nine years old.) Entrepreneurs are said to be overconfident that their startup ideas will succeed. Maybe increased rationality (individual or collective) will stifle innovation.

I feel that. When I do calibration exercises, I’m only sometimes mildly overconfident in some credence intervals, and indeed, my motivation usually feels like, “Well, this is a long shot, and why am I even trying it? Oh yeah, because everything else is even less promising.” That could be better.

On a community level it may mean that any community that develops sufficiently good calibration becomes demotivated and falls apart.

Maybe there is a way of managing expectations. If you grow up in an environment where you’re exposed to greatly selection-biased news about successes, your expectations may be so high that any well-calibrated 90th percentile successes that you project may seem disappointing. But if you’re in an environment where you constantly see all the failures around you, the same level of 90th percentile success may seem motivating.

Maybe that’s also a way in which the EA community backfires. When I didn’t know about EA, I saw around me countless people who failed completely to achieve my moral goals because they didn’t care about them. The occasional exceptions seemed easy to emulate or exceed. Now I’m surrounded by people who’ve achieved things much greater than my 90th percentile hopes. So my excitement is lower even though my 90th percentile hopes are higher than they used to be.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-13T11:46:46.359Z · EA · GW

Benefiting Unscrupulous People

A system that improves collective decision making is likely value-neutral, so it can also be used by unscrupulous agents for their nefarious ends.

Moreover, unscrupulous people may benefit from it more because they have fewer moral side-constraints. If set A is the set of all ethical, legal, and cooperative methods of attaining a goal, and set B is the set of all methods of attaining the same goal, then A ⊆ B. So it should always be as easy or easier to attain a goal by any means necessary than by ethical, legal, and cooperative means only.
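The same point in symbols, with V(m) as a made-up value function over methods, introduced purely for illustration:

```latex
% Let V(m) denote the value an agent attains by using method m.
% Since A \subseteq B, optimizing over B can never do worse than over A:
\[
A \subseteq B \;\Longrightarrow\; \max_{m \in B} V(m) \;\ge\; \max_{m \in A} V(m).
\]
```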

Three silver linings:

  1. Unscrupulous people probably also have different goals from ours. Law enforcement will block them from attaining those goals, and better decision-making will hopefully not get them very far.
  2. These systems are collaborative, so you can benefit from them more the more people collaborate on them (I’m not saying monotonically, just as a rough tendency). When you invite more people into some nefarious conspiracy, then the risk that one of them blows the whistle increases rapidly. (Though it may depend on the structure of the group. There are maybe some terrorist cells who don’t worry much about whistleblowing.)
  3. If a group is headed by a narcissistic leader, the person may see a threat to their authority in a collaborative decision-making system, so that they won’t adopt it to begin with. (Though it might be that they like that collaborative systems can make it infeasible for individuals to use them to put their individual opinions to the test, so that they can silence individual dissenters. This will depend a lot on implementation details of the system.)

More speculatively, we can also promote and teach the system such that everyone who learns to use it also learns about multiverse-wide superrationality alias evidential cooperation in large worlds (ECL). Altruistic people with uncooperative agent-neutral goals will reason that they can now realize great gains from trade by being more cooperative or else lose out on them by continuing to defect.

We can alleviate the risk further by marketing the system mostly to people who run charities, social enterprises, prosocial research institutes, and democratic governments. Other people will still learn about the tools, and there are also a number of malevolent actors in those generally prosocial groups, but it may shift the power a bit toward more benevolent people. (The Benevolence, Intelligence, and Power framework may be helpful in this context.)

Finally, there is the option to make it hard to make models nonpublic. But that would have other downsides, and it’s also unlikely to be a stable equilibrium as others will just run a copy of the software on their private servers.

Comment by telofy on How might better collective decision-making backfire? · 2020-12-13T11:46:05.072Z · EA · GW

Modified Ultimatum Game

A very good example of the sort of risks that I’m referring to is based on a modified version of the ultimatum game and comes from the Soares and Fallenstein paper “Toward Idealized Decision Theory”:

Consider a simple two-player game, described by Slepnev (2011), played by a human and an agent which is capable of fully simulating the human and which acts according to the prescriptions of [Updateless Decision Theory (UDT)]. The game works as follows: each player must write down an integer between 0 and 10. If both numbers sum to 10 or less, then each player is paid according to the number that they wrote down. Otherwise, they are paid nothing. For example, if one player writes down 4 and the other 3, then the former gets paid $4 while the latter gets paid $3. But if both players write down 6, then neither player gets paid. Say the human player reasons as follows:

I don’t quite know how UDT works, but I remember hearing that it’s a very powerful predictor. So if I decide to write down 9, then it will predict this, and it will decide to write 1. Therefore, I can write down 9 without fear.

The human writes down 9, and UDT, predicting this, prescribes writing down 1.

This result is uncomfortable, in that the agent with superior predictive power “loses” to the “dumber” agent. In this scenario, it is almost as if the human’s lack of ability to predict UDT (while using correct abstract reasoning about the UDT algorithm) gives the human an “epistemic high ground” or “first mover advantage.” It seems unsatisfactory that increased predictive power can harm an agent.

A solution to this problem would have to come from the area of decision theory; it probably can’t be part of the sort of collaborative decision-making system that we envision here. Maybe there is a way to make such a problem statement inconsistent because the smarter agent would’ve committed to writing down 5 and signaled that commitment sufficiently far in advance of the game. Ozzie also suggests that introducing randomness along the lines of the madman theory may be a solution concept.
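
Here is a minimal sketch of the game’s payoff structure. The “predictor” below is reduced to a naive best response against a known move – that is of course not a faithful model of UDT, but it reproduces the uncomfortable outcome described in the quote:

```python
# Modified ultimatum game (Slepnev 2011, via Soares & Fallenstein):
# each player names an integer in 0..10; if the two numbers sum to 10 or
# less, each player is paid their own number, otherwise both get nothing.

def payoffs(a: int, b: int) -> tuple[int, int]:
    return (a, b) if a + b <= 10 else (0, 0)

def best_response(opponent_number: int) -> int:
    """The payoff-maximizing reply to a perfectly predicted opponent move."""
    return max(range(11), key=lambda n: payoffs(n, opponent_number)[0])

human = 9                          # the human stubbornly commits to 9
predictor = best_response(human)   # the predictor "caves" and picks 1
print(payoffs(predictor, human))   # -> (1, 9)
```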

Comment by telofy on How might better collective decision-making backfire? · 2020-12-13T11:45:32.516Z · EA · GW

Legibility

This is a less interesting failure mode as it is one where the systems that we create to improve our decision-making actually fail to achieve that goal. It’s not one where successfully achieving that goal backfires.

I also think that while this may be a limitation of some collaborative modeling efforts, it’s probably no problem for prediction markets.

The idea is that collaborative systems will always, at some stage, require communication, and specifically communication between brains rather than within brains. To make ideas communicable, they have to be made legible. (Or maybe literature, music, and art are counterexamples.) By legible, I’m referring to the concept from Seeing Like A State.

In my experience, this can be very limiting. Take for example what I’ll call the Cialdini puzzle:

Robert Cialdini’s Wikipedia page says “He is best known for his book Influence”. Since its publication, he seems to have spent his time directing an institute to spread awareness of techniques for success and persuasion. At the risk of being a little too cynical – a guy knows the secrets of success, so he uses them to… write a book about the secrets of success? If I knew the secrets of success, you could bet I’d be doing much more interesting things with them. All the best people recommend Cialdini, and his research credentials are impeccable, but I can’t help wondering: If he’s so smart, why isn’t he Emperor?

It seems to me like a common pattern that for certain activities the ability to do them well is uncorrelated or even anticorrelated with the ability to explain them. Some of that may be just because people want to keep their secrets, but I don’t think that explains much of it.

Hence Robert Cialdini may be > 99th percentile at understanding and explaining social influence, but in terms of doing social influence, that might’ve boosted him from the 40th to the 50th percentile or so. (He says his interest in the topic stems from his being particularly gullible.) Meanwhile, all the people he interviews because they have a knack for social influence are probably 40th to 50th percentile at explaining what they do. I don’t mean that they are average at explaining in general but that what they do is too complex, nuanced, unconscious, intertwined with self-deception, etc. for them to grasp it in a fashion that would allow for anything other than execution.

Likewise, a lot of amazing, famous writers have written books on how to write. And almost invariably these books are… unhelpful. If these writers followed the advice they set down in their own books, they’d be lousy writers. (This is based on a number of Language Log posts on such books.) Meanwhile, some of the most helpful books on writing that I’ve read were written by relatively unknown writers. (E.g., Style: Toward Clarity and Grace.)

My learning of Othello followed a similar trajectory. I got from a Kurnik rating of 1200 up to 1600 quite quickly by reading every explanatory book and text on game strategy that I could find and memorizing hundreds of openings. Beyond that, the skill necessary to progress further becomes so complex, nuanced, and unconscious that, it seems to me, it can only be attained through long practice, not taught. (Except, of course, if the teaching is all about practice.) And I didn’t like practice because it often meant playing against other people. (That is just my experience. If someone is an Othello savant, they may rather feel like some basic visualization practice unlocked the game for them, so that they’d still have increasing marginal utility from training around the area where it started dropping for me.)

Orthography is maybe the most legible illegible skill that I can think of. It can be taught in books, but few people read dictionaries in full. For me it sort of just happened rather suddenly that from one year to the next, I made vastly fewer orthographic mistakes (in German). It seems that my practice through reading must’ve reached some critical (soft) threshold where all the bigrams, trigrams, and exceptions of the language became sufficiently natural and intuitive that my error rate dropped noticeably.

For this to become a problem there’d have to be highly skilled practitioners, like the sort of people Cialdini likes to interview, who are brought together by a team of researchers to help them construct a model of some long-term future trajectory.

These skilled practitioners will do exactly the strategically optimal thing when put in a concrete situation, but in the abstract environment of such a probabilistic model, their predictions may be no better than anyone’s. It’ll take well-honed elicitation methods to get high-quality judgments out of these people, and then a lot of nuance may still be lost because what is elicited and how it fits into the model is probably again something that the researchers will determine, and that may be too low-fidelity.

Prediction markets, on the other hand, tend to be about concrete events in the near future, so skilled practitioners can probably visualize the circumstances that would lead to any outcome in sufficient detail to contribute a high-quality judgment.

Comment by telofy on What are some potential coordination failures in our community? · 2020-12-12T15:51:16.873Z · EA · GW

Keeping people up-to-date.

I observe that a lot of people are still active and motivated in their problem area but seem to still associate EA with the sort of ideas that were associated with it around 2014, some of which have long since come to be regarded as mistaken or oversimplified. (I’m not talking about people who drifted away from EA but rather ones for whom new EA-related insights are still very action-relevant.)

There are surely a lot of things like the EA newsletter already going on that keep people up-to-date, but maybe there are more ideas that can be tried.

Comment by telofy on What are some potential coordination failures in our community? · 2020-12-12T15:44:49.059Z · EA · GW

Mentoring.

I remember Chi mentioned how well mentoring worked at Oxford. I’ve observed a number of EA efforts to get mentoring off the ground through better coordination, and I also replied to her comment with some ideas.

Comment by telofy on Where are you donating in 2020 and why? · 2020-12-08T11:10:46.812Z · EA · GW

I’ve documented my donation plans in the blog post “Donations 2020.”

I’m currently planning to put CHF 6,000 into the $500,000 donor lottery and to donate some odds and ends to the Center on Long-Term Risk.

Comment by telofy on My mistakes on the path to impact · 2020-12-07T15:42:59.213Z · EA · GW

Yeah, and besides the training effect there is also the benefit that while one person who disagrees with hundreds is unlikely to be correct, if they are correct, it’s super important that those hundreds of others get to learn from them.

So it may be very important in expectation to notice such disagreements, do a lot of research to understand one’s own and the others’ position as well as possible, and then let them know of the results.

(And yes, the moral uncertainty example doesn’t seem to fit very well, especially for antirealists.)

Comment by telofy on Interrogating Impact · 2020-11-24T21:48:47.510Z · EA · GW

Re 3: Yes and no. ^.^ I’m currently working on something whose robustness I have only very weak evidence for. I made a note to think about it, interview some people, and maybe write a post to ask for further input, but then I started working on it before I did any of these things. It’s like an optimal stopping problem. I’ll need to remedy that before my sunk cost starts to bias me too much… I suppose I’m not the only one in this situation. But then again I have friends who’ve thought for many years mostly just about the robustness of various approaches to their problem.

Hilary Greaves doesn’t seem to be so sure that robustness gets us very far, but the example she gives is unlike the situations that I usually find myself in.

Arden Koehler: Do you think that’s an appropriate reaction to these cluelessness worries or does that seem like a misguided reaction?

Hilary Greaves: Yeah, I don’t know. It’s definitely an interesting reaction. I mean, it feels like this is going to be another case where the discussion is going to go something like, “Well, I’ve got one intervention that might be really, really, really good, but there’s an awful lot of uncertainty about it. It might just not work out at all. I’ve got another thing that’s more robustly good, and now how do we trade off the maybe smaller probability or very speculative possibility of a really good thing against a more robustly good thing that’s a bit more modest?”

Hilary Greaves: And then this feels like a conversation we’ve had many times over; is what we’re doing just something structurally, like expected utility theory, where it just depends on the numbers, or is there some more principled reason for discarding the extremely speculative things?

Arden Koehler: And you don’t think cluelessness adds anything to that conversation or pushes in favor of the less speculative thing?

Hilary Greaves: I think it might do. So again, it’s really unclear how to model cluelessness, and it’s plausible different models of it would say really different things about this kind of issue. So it feels to me just like a case where I would need to do a lot more thinking and modeling, and I wouldn’t be able to predict in advance how it’s all going to pan out. But I do think it’s a bit tempting to say too quickly, “Oh yeah, obviously cluelessness is going to favor more robust things.” I find it very non-obvious. Plausible, but very non-obvious.

She has thought about this a lot more than I have, so my objection probably doesn’t make sense, but the situation I find myself in is usually different from the one she describes in two ways: (1) There is no single really good but non-robust intervention; rather, everything is super murky (even whether the interventions have positive EV), and I can usually think of a dozen ways any particular intervention could backfire; and (2) this backfiring doesn’t mean that we have no impact but that we have enormous negative impact. In the midst of this murkiness, the very few interventions that seem much less murky than others – like priorities research or encouraging moral cooperation – stand out quite noticeably.

Re 4: I’ve so far only seen Shapley values as a way of attributing impact, something that seems relevant for impact certificates, thanking the right people, and noticing some relevant differences between situations, but by and large only for niche applications and none that are relevant for me at the moment. Nuno might disagree with that.

I usually ask myself not what impact I would have by doing something but which of my available actions will determine the world history with the maximal value. So I don’t break impact down to my own person at all. Doing so seems to me like a lot of wasted overhead. (And I don’t currently understand how to apply Shapley values to infinite sets of cooperators, and I don’t quite know who I am given that there are many people who are like me to various degrees.) But maybe using Shapley values or some other, similar algorithm would just make that reasoning a lot more principled and reliable. That’s quite possible.
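
For concreteness, here is a minimal sketch of the standard Shapley attribution on a toy cooperative game – the three parties and the 100 units of impact are made-up numbers purely for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    contrib = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            contrib[p] += value(coalition) - before
    return {p: c / len(orders) for p, c in contrib.items()}

def v(coalition):
    # Hypothetical: the project creates 100 units of impact, but only if
    # both the donor and charity_a participate; charity_b adds nothing.
    return 100.0 if {"donor", "charity_a"} <= coalition else 0.0

print(shapley_values(["donor", "charity_a", "charity_b"], v))
# -> {'donor': 50.0, 'charity_a': 50.0, 'charity_b': 0.0}
```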

Comment by telofy on Interrogating Impact · 2020-11-22T21:41:44.019Z · EA · GW

Sorry if you’re well aware of these, but points 3 and 4 sound like the following topics may be interesting for you: For 3, cluelessness and the recent 80k interview with Hilary Greaves that touches on the topic. For 4, Shapley values or cooperative game theory in general. You can find more discussions of it on the EA Forum (e.g., by Nuno), and I also have a post on it, but it’s a couple years old, so I don’t know anymore if it’s worth your time to read. ^.^'

Comment by telofy on Are there any other pro athlete aspiring EAs? · 2020-09-13T19:42:46.257Z · EA · GW

I’d like to keep up-to-date on what you’re doing. I don’t have a chance of getting anywhere close to an interesting level anymore in the sport that I do (climbing, mostly bouldering), but I might occasionally meet those who do. (No worries, I can be tactful. ^^)

Comment by telofy on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-05T21:33:31.601Z · EA · GW

I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it to be enlightening.

I see a tension between the following two arguments that I find plausible:

  1. Some people run into health issues due to a vegan diet despite correct supplementation. In most cases it’s probably because of incorrect or absent supplementation, but probably not in all. This could mean that, with some small probability, a highly productive EA doing highly important work will cease to be as productive. Since they’ve probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would’ve inflicted if they had [eaten some beef and had some milk](https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html). So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let’s assume for the moment that the person can make that tradeoff correctly for themselves.)
  2. There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. Each such increase in anomie (roughly, a lack of trust and cohesion) may be small in expectation, but because it touches all of society, its expected cost is vast. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.

I see a bit of a Laffer curve here (like an upside-down U) where upholding societal rules that are completely unheard of has little effect, and violating societal rules that are extremely well established has little effect again (except that you go to prison). The middle section is much more interesting, and this is where I generally advise to tread softly. (But I’m also against stealing.)

The way I resolve this tension for myself is to assess whether, in my immediate environment – among the people who are most likely to be directly influenced by me – a norm is potentially about to emerge. If that is the case, and I approve of the norm, I try to always uphold it to at least an above-average level.


Well, and then there are a few more random caveats:

  1. As the norm not to harm other animals for food becomes stronger, it’ll be less socially awkward for people (outside vegan circles) to eat vegan food. Social effects were (last time I checked) still the second most common reason for vegan recidivism.
  2. As the norm not to harm other animals for food becomes stronger, more effort will be put into providing properly fortified food to make supplementation automatic.
  3. Eroding a budding social norm because it comes at a cost to one’s own goals seems like the sort of freeriding that I think the EA community needs to be very careful about. In some cases the conflict is only due to insufficiently idealized preferences, or only between instrumental rather than terminal goals, or the others would defect against us in any case – but we don’t know any of this to be the case here. The first comes down to unanswered questions of population ethics, the second to the exact tradeoffs between animal suffering and health risks for a particular person, and the third to how likely animal rights activists are to badmouth AI safety, priorities research, etc. – probably only rarely.
  4. Being vegan among EAs, young, educated people, and other disproportionately antispeciesist groups may be more important than being vegan in a community of hunters.
  5. A possible, unusual conclusion to draw from this is to be a “private carnivore”: You only eat vegan food in public, and when people ask you whether you’re vegan, you tell them that you think eating meat is morally bad, a bad norm, and shameful, and so you only do it in private and as rarely as possible. No lies or pretense.
  6. There’s also the option of moral offsetting, which I find very appealing (despite these criticisms – I think I somewhat disagree with my five-year-old comment there now), but it doesn’t seem to quite address the core issue here. 
  7. Another argument you mentioned to me at an EAGx was something along the lines that it’ll be harder to attract top talent in field X (say, AI safety) if they not only have to subscribe to X being super important but have to subscribe to X being super important and be vegan. Friends of mine solve that by keeping those things separate: Yes, the catering may be vegan, but otherwise nothing indicates that there’s any need for them to be vegan themselves. (That conversation can happen, if at all, in a personal context separate from any ties to field X.)
Comment by telofy on The Case for Education · 2020-08-16T15:59:13.767Z · EA · GW

Interesting, thank you! Assuming there are enough people who can do the “normal good things EAs would also do,” that leaves the problem that it’ll be expensive to get enough people with the necessary difference in subject-matter expertise to devote time to tutoring.

I’m imagining a hierarchical system where the absolute experts on some topic (such as agent foundations or s-risks) set some time aside to tutor carefully selected junior researchers at their institute; those junior researchers tutor somewhat carefully selected amateur enthusiasts; and the amateur enthusiasts tutor people who’ve signed up for (self-selected into) a local reading club on the topic. These tutors may need to be paid for this work to be able to invest the necessary time.

This is difficult if the field of research is new, because then (1) there may be only a small number of experts with very little time to spare and no one else who comes close in expertise, or (2) there may not yet be enough knowledge in the area to sustain three layers of tutors while still having a difference in expertise between the layers that allows for this mode of tutoring.

But whenever problem 2 occurs, the hierarchical scheme is just unnecessary. So only problem 1 in isolation remains unsolved.

Do you think that could work? Maybe this is something that’d be interesting for charity entrepreneurs to solve. :-)

What would also be interesting: (1) How much time do these tutors devote to each student per week? (2) Does one have to have exceptional didactic skills to become a tutor or are these people only selected for their subject-matter expertise? (3) Was this particular tutor exceptional or are they all so good?

Maybe my whole idea is unrealistic because too few people could combine subject-matter expertise with didactic skill. Especially the skill of understanding a different, incomplete or inconsistent world model and then providing just the information that the person needs to improve it seems unusual.

Comment by telofy on The Case for Education · 2020-08-16T13:24:50.925Z · EA · GW

Hi Chi! I keep thinking about this:

My tutor pushed back and improved my thinking a lot and in a way that I frankly don't expect most of the people in my EA circle to do. I hope this also helps me evaluate the quality of discussion and arguments in EA a bit although I'm not sure if that's a real effect.

If you have a moment, I’d be very interested to understand what exactly this tutor did right and how. Maybe others (like me) can emulate what they did! :-D

Comment by telofy on Objections to Value-Alignment between Effective Altruists · 2020-07-16T10:44:57.980Z · EA · GW

I’ve come to think that evidential cooperation in large worlds and, in different ways, preference utilitarianism push even antirealists toward relatively specific moral compromises that require an impartial empirical investigation to determine. (That may not apply to various antirealists who have rather easy-to-realize moral goals or ones that others can’t help a lot with – say, protecting your child from some dangers or being very happy. But it does apply to my drive to reduce suffering.)

Comment by telofy on Objections to Value-Alignment between Effective Altruists · 2020-07-16T10:35:40.774Z · EA · GW

Thank you for writing this article! It’s interesting and important. My thoughts on the issue:

Long Reflection

I see a general tension between achieving existential security and putting sentient life on the best or an acceptable trajectory before we cease to be able to cooperate causally very well because of long delays in communication.

A focus on achieving existential security pushes toward investing less time into getting all basic assumptions just right, because all these investigations trade off against a terrible risk. I’ve read somewhere that homogeneity is good for early-stage startups because their main risk lies in not being fast enough, not in getting something wrong. So people who are mainly concerned with existential risk may accept being very wrong about a lot of things so long as they still achieve existential security in time. I might call this “emergency mindset.”

Personally – I’m worried I’m likely biased here – I would rather like to precipitate the Long Reflection to avoid getting some things terribly wrong in the futures where we achieve existential security, even if these investigations come at some risk of diverting resources from reducing existential risk. I might call this “reflection mindset.”

There is probably some impartially optimal trade-off here (plus comparative advantages of different people), and that trade-off would also imply how many resources it is best to invest into avoiding homogeneity.

I’ve also commented on this in a recent blog article where I mention more caveats.

Ideas for Solutions

I’ve seen a bit of a shift toward reflection over emergency mindset at least since 2019 and more gradually since 2015. So if it turns out that we’re right and EA should err more in the direction of reflection, then a few things may aid that development.

Time

I’ve found that I need to rely a lot on others’ judgments on issues when I don’t have much time. But now that I have more time, I can investigate a lot of interesting questions myself and so need to rely less on the people I perceive as experts. Moreover, I’m less afraid to question expert opinions when I know something beyond the Cliff’s Notes about a topic, because I’ll be less likely to come off as arrogantly stupid.

So maybe it would help if people who are involved in EA in nonresearch positions were generally encouraged, incentivized, and allowed to take more time off to also learn things for themselves.

Money

The EA Funds could explicitly incentivize the above efforts, but they could also incentivize broad literature reviews and summaries of topics that relate to foundational assumptions in EA projects, as well as interviews with experts on those topics.

“Growth and the Case Against Randomista Development” seems like a particularly impressive example of such an investigation.

Academic Research

I’ve actually seen a shift toward academic research over the past 3–4 years, and that seems valuable to continue (though my above reservations about my personal bias in the issue may apply). It is likely slower and maybe less focused. But academic environments are intellectually very different from EA, and professors in a field tend to be very widely read in that field. So being in that environment and becoming a person that widely read people are happy to collaborate with should be very helpful in avoiding the particular homogeneities that the EA community comes with. (Academia will have homogeneities of its own, of course.)

Comment by telofy on Denis Drescher's Shortform · 2020-06-06T11:10:49.384Z · EA · GW

“Studies on Slack” by Scott Alexander: Personal takeaways

There have been studies on how software teams use Slack. Scott Alexander’s article “Studies on Slack” is not about that. Rather it describes the world as a garlic-like nesting of abstraction layers on which there are different degrees of competition vs. cooperation between actors; how they emerged (in some cases); and what their benefit is.

The idea, put simply, at least in my mind, is that in fierce competition, innovations need to prove beneficial immediately in logical time or the innovator will be outcompeted. But limiting innovations to only those that either consist of a single step or whose every step is individually beneficial is, well, limiting. The result is innovators stuck in local optima, unable to reach more global optima.

Enter slack. Somehow you create a higher-order mechanism that alleviates the competition a bit. The result is that now innovators have the slack to try a lot of multi-step innovations despite any neutral or detrimental intermediate steps. The mechanisms are different ones in different areas. Scott describes mechanisms from human biology, society, ecology, business management, fictional history, etc. Hence the garlic-like nesting: It seems to me that these systems are nested within each other, and while Scott only ever describes two levels at a time, it’s clear enough that higher levels such as business management depend on lower levels such as those that enable human bodies to function.

This essay made a lot of things clearer to me that I had half intuited but never quite understood. In particular it made me update downward a bit on how much I expect AGI to outperform humans. One of my reasons for thinking that human intelligence is vastly inferior to a theoretical optimum was that I thought evolution could almost only ever improve one step at a time – that it would take an extremely long time for a multi-step mutation with detrimental intermediate steps to happen through sheer luck. Since slack seems to be built into biological evolution to some extent, maybe it is not as inferior as I thought to “intelligent design” like we’re attempting it now.

It would also be interesting to think about how slack affects zero-sum board games – simulations of fierce competition. In the only board game I know, Othello, you can thwart any plans the opponent might have with your next move in, like, 90+% of cases. Hence, I made a (small but noticeable) leap forward in my performance when I switched from analyzing my position through the lens of “What is a nice move I can play?” to “What is a nice move my opponent could now play if it were their turn, and how can I prevent it?” A lot of perfect moves, especially early in the game, switch from looking surprising and grotesque to looking good once I viewed them through that lens. So it seems that in Othello there is rarely any slack. (I’m not saying that you don’t plan multi-step strategies in Othello, but it’s rare that you can plan them such that you actually get to carry them out. Robust strategies play a much greater role in my experience. Then again, this may be different at higher levels of gameplay than mine.)
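
A small sketch of that shift in move selection, just to make it concrete – `legal_moves`, `apply_move`, and `evaluate` are hypothetical helpers that a real Othello engine would provide, not references to anything existing:

```python
def greedy_move(position, me, legal_moves, apply_move, evaluate):
    """'What is a nice move I can play?' -- pick the move with the best
    immediate evaluation, ignoring the opponent's reply."""
    return max(legal_moves(position, me),
               key=lambda m: evaluate(apply_move(position, m), me))

def threat_aware_move(position, me, opponent, legal_moves, apply_move, evaluate):
    """'What is a nice move my opponent could now play, and how can I
    prevent it?' -- pick the move whose best opponent reply hurts me least."""
    def after_best_reply(m):
        after = apply_move(position, m)
        replies = legal_moves(after, opponent)
        if not replies:                      # opponent has to pass
            return evaluate(after, me)
        return min(evaluate(apply_move(after, r), me) for r in replies)
    return max(legal_moves(position, me), key=after_best_reply)
```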

Perhaps that’s related to why I’ve seen people who are not particularly smart turn out to be shockingly effective social manipulators, and why these people are usually found in low-slack fields. If your situation is so competitive that your opponent can never plan more than one step ahead anyway, you only need to do the equivalent of thinking “What is a nice move my opponent could now play if it were their turn and how can I prevent it?” to beat, like, 80% of them. No need for baroque and brittle stratagems like in Skyfall.

I wonder if Go is different? The board is so big that I’d expect there to be room to do whatever for a few moves from time to time? Very vague surface-level heuristic idea! I have no idea of Go strategy.

I’m a bit surprised that Scott didn’t draw parallels to his interest in cost disease, though. Not that I see any clear ones, but there have got to be some that are worth at least checking and debunking – innovation slowing so that you need more slack to innovate at the same rate, or increasing wealth creating more slack and thereby decreasing the competition that would’ve otherwise kept prices down, etc.

The article was very elucidating, but I’m not yet quite able to look at a system and tell whether it needs more or less slack or how to establish a mechanism that could produce that slack. That would be important since I have a number of EA friends who could use some more slack to figure out psychological issues or skill up in some areas. The EA funds try to help a bit here, but I feel like we need more of that.

Comment by telofy on Denis Drescher's Shortform · 2020-06-05T06:52:44.619Z · EA · GW

“Effective Altruism and Free Riding” by Scott Behmer: Personal takeaways

Coordination is an oft-discussed topic within EA, and people generally try hard to behave cooperatively toward other EA researchers, entrepreneurs, and donors present and future. But “Effective Altruism and Free Riding” makes the case that standard EA advice favors defection over cooperation in prisoner’s dilemmas (and stag hunts) with non-EAs. It poses the question whether this is good or bad, and what can be done about it.

I’ve had a few thoughts while reading the article but found that most of them were already covered in the most upvoted comment thread. I’ll still outline them in the following as a reference for myself, to add some references that weren’t mentioned, and to frame them a bit differently.

The project of maximizing gains from moral trade is one that I find very interesting and promising, and want to investigate further to better understand its relative importance and strategic implications.

Still, Scott’s perspective was a somewhat new one for me. He points out that the neglectedness criterion in particular encourages freeriding: Climate change is a terrible risk, but we tend to be convinced by neglectedness considerations that additional work on it is not maximally pressing. In effect, we’re freeriding on the efforts of activists working on climate change mitigation.

What was new to me about that is that I’ve conceived of neglectedness as a cheap coordination heuristic. Cheap in that it doesn’t require a lot of communication with other cooperators; coordination in the sense that everyone is working toward a bunch of similar goals but needs to distribute the work among themselves optimally; and heuristic in that it falls short insofar as values are not perfectly aligned, momentum in capacity building is hard to anticipate, and the tradeoffs with tractability and importance are usually highly imprecise.

So in essence, my simplification was to conceive of the world as filled with agents with values like mine who use neglectedness to coordinate their cooperative work, whereas Scott conceives of the world as filled with agents with values very much unlike mine who use neglectedness to freeride off of each other’s work.

Obviously, neither is exactly true, but I don’t see an easy way to home in on which model is better: (1) I suppose most people are not centrally motivated by consequentialism in their work, and it may be impossible for us to benefit the motivations that are central to them. But then again there are probably consequentialist aspects to most people’s motivations. (2) Insofar as there are aspects to people’s motivations for their work that we can benefit, how would these people wish for their preferences to be idealized (if that is even the framing in which they’d prefer to think about their behavior)? Caspar Oesterheld discusses the ins and outs of different forms of idealization in the eponymous section 3.3.1 of “Multiverse-wide Cooperation via Correlated Decision Making.” The upshot is, very roughly, that idealization through additional information seems less dubious than idealization through moral arguments (Scott’s article mentions advocacy, for example). So would exposing non-EAs to information about the importance of EA causes lead them to agree that people should focus on them even at the expense of the cause that they chose? (3) Which consequentialist preferences should we even take into account – only altruistic ones or also personal ones, since personal ones may be particularly strong? A lot of people have personal preferences not to die or suffer and for their children not to die or suffer, which may be (imperfectly) aligned with catastrophe prevention.

But the framing of the article and the comments was also different from the way I conceive of the world in that it framed the issue as a game between altruistic agents with different goals. I’ve so far seen all sorts of nonagents as being part of the game by dint of being moral patients. If instead we have a game between altruists who are stewards of the interests of other, nonagent moral patients, it becomes clearer why everyone is part of the game and what power they have, but a few other aspects elude me. Is there a risk of double-counting the interests of the nonagent moral patients if they have many altruist stewards – and does that make a difference if everyone does it? And should a bargaining solution only take the stewards’ power into account (perhaps the natural default, for better or worse) or also the number of moral patients they stand up for? The first falls short of my moral intuitions in this case. It may also cause Ben Todd and many others to leave the coalition because the gains from trade are not worth the sacrifice for them. Maybe we can do better. But the second option seems gameable (by pretending to see moral patienthood where one in fact does not see it) and may cause powerful cooperators to leave the coalition if they have a particularly narrow concept of moral patienthood. (Whatever the result, it seems like this is the portfolio that commenters mentioned, probably akin to the compromise utility function that you maximize in evidential cooperation – see Caspar Oesterheld’s paper.)

Personally, I can learn a lot more about these questions by just reading up on more game theory research. More specifically, it’s probably smart to investigate what the gains from trade are that we could realize in the best case to see if all of this is even worth the coordination overhead.

But there are probably also a few ways forward for the community. Causal (as opposed to acausal) cooperation requires some trust, so maybe the signal that there is a community of altruists that cooperates particularly well internally can be good if paired with the option for others to join that community by proving themselves to be sufficiently trustworthy. (That community may be wider than EA and called something else.) That would probably take the shape of newcomers making the case for new cause areas not necessarily based on their appeal to utilitarian values but based on their appeal to the values of the newcomer – alongside an argument that those values wouldn’t just turn into some form of utilitarianism upon idealization. That way, more value systems could gradually join this coalition, and we’d promote cooperation the way Scott recommends in the article. It’ll probably make sense to have different nested spheres of trust, though, with EA orgs at the center, the wider community around that, new aligned cooperators further outside, occasional mainstream cooperators further outside yet, etc. That way, the more high-trust spheres remain even if spheres further on the outside fail.

Finally, a lot of these things are easier in the acausal case that evidential cooperation in large worlds (ECL) is based on (once again, see Caspar Oesterheld’s paper). Perhaps ECL will turn out to make sufficiently strong recommendations that we’ll want to cooperate causally anyway despite any risk of causal defection against us. This strikes me as somewhat unlikely (e.g., many environmentalists may find ECL weird, so there may never be many evidential cooperators among them), but I still feel sufficiently confused about the implications of ECL that I find it at least worth mentioning.

Comment by telofy on What are the leading critiques of "longtermism" and related concepts · 2020-05-30T21:57:12.292Z · EA · GW

“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.

Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking but has a more narrow focus.

Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.

There are a number of heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there's only a small chance that Nick Bostrom's basic simulation argument is correct.

My thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.

Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.

Comment by telofy on [Stats4EA] Expectations are not Outcomes · 2020-05-19T12:11:00.135Z · EA · GW

I’ve found Christian Tarsney’s “Exceeding Expectations” insightful when it comes to recognizing and maybe coping with the limits of expected value.

The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
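
A rough numerical illustration of the abstract’s central idea – not an example from the paper; the payoffs and the normal background distribution are assumptions of mine, and the paper treats the choice of background distribution (especially its tails) much more carefully:

```python
import numpy as np
from scipy.stats import norm

sigma = 100.0                      # assumed spread of background uncertainty
t = np.linspace(-400, 400, 801)    # range of total-value levels to check

# Option A: a gamble paying 0 or 10 with equal probability (EV 5).
# Option B: a safe payoff of 4. In a vacuum, A does not stochastically
# dominate B, since A can end up below 4. Now add the same broad background
# value X ~ Normal(0, sigma) to both and compare the CDFs of X + payoff.
cdf_a = 0.5 * norm.cdf(t, 0, sigma) + 0.5 * norm.cdf(t - 10, 0, sigma)
cdf_b = norm.cdf(t - 4, 0, sigma)

# First-order stochastic dominance of A over B on this grid: CDF_A <= CDF_B.
print(bool(np.all(cdf_a <= cdf_b)))  # True on this range
```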

See also the post/sequence by Daniel Kokotajlo, “Tiny Probabilities of Vast Utilities”. I’m linking to the post that was most valuable to me, but by default it might make sense to start with the first one in the sequence. ^^

Comment by telofy on Modelers and Indexers · 2020-05-16T10:50:21.672Z · EA · GW

Yeah, totally agree! The Birds and Frogs distinction sounds very similar! I’ve pocketed the original article for later reading.

And I also feel that the Adaptors–Innovators one is “may be slightly correlated but is a different thing.” :-)

Comment by telofy on Modelers and Indexers · 2020-05-16T10:36:26.113Z · EA · GW

Yes! I’ve been thinking about you a lot while I was writing that post because you yourself strike me as a potential counterexample to the usefulness of the distinction. I’ve seen you do exactly what you describe and generally display comfort in situations that indexers would normally be comfortable in, while at the same time you evidently have quite similar priorities to me. So either you break the model or you’re just really good at both! :-)