Posts

Summary of Evidence, Decision, and Causality 2020-09-05T20:23:04.019Z · score: 17 (7 votes)
Self-Similarity Experiment 2020-09-05T17:04:14.619Z · score: 7 (2 votes)
Modelers and Indexers 2020-05-12T12:01:14.768Z · score: 34 (10 votes)
Denis Drescher's Shortform 2020-04-23T15:44:50.620Z · score: 6 (1 votes)
Current Thinking on Prioritization 2018 2018-03-13T19:22:20.654Z · score: 10 (10 votes)
Cause Area: Human Rights in North Korea 2017-11-26T14:58:10.490Z · score: 21 (16 votes)
The Attribution Moloch 2016-04-28T06:43:10.413Z · score: 12 (10 votes)
Even More Reasons for Donor Coordination 2015-10-27T05:30:37.899Z · score: 4 (6 votes)
The Redundancy of Quantity 2015-09-03T17:47:20.230Z · score: 2 (4 votes)
My Cause Selection: Denis Drescher 2015-09-02T11:28:51.383Z · score: 6 (6 votes)
Results of the Effective Altruism Outreach Survey 2015-07-26T11:41:48.500Z · score: 3 (5 votes)
Dissociation for Altruists 2015-05-14T11:27:21.834Z · score: 5 (9 votes)
Meetup : Effective Altruism Berlin Meetup #3 2015-05-10T19:40:40.990Z · score: 0 (0 votes)
Incentivizing Charity Cooperation 2015-05-10T11:02:46.433Z · score: 6 (6 votes)
Expected Utility Auctions 2015-05-02T16:22:28.948Z · score: 4 (4 votes)
Telofy’s Effective Altruism 101 2015-03-29T18:50:56.188Z · score: 3 (3 votes)
Meetup : EA Berlin #2 2015-03-26T16:55:04.882Z · score: 0 (0 votes)
Common Misconceptions about Effective Altruism 2015-03-23T09:25:36.304Z · score: 9 (9 votes)
Precise Altruism 2015-03-21T20:55:14.834Z · score: 6 (6 votes)
Telofy’s Introduction to Effective Altruism 2015-01-21T16:46:18.527Z · score: 7 (9 votes)

Comments

Comment by telofy on Are there any other pro athlete aspiring EAs? · 2020-09-13T19:42:46.257Z · score: 2 (1 votes) · EA · GW

I’d like to keep up to date on what you’re doing. I don’t have a chance of getting anywhere close to an interesting level anymore in the sport that I do (climbing, mostly bouldering), but I might occasionally meet people who do. (No worries, I can be tactful. ^^)

Comment by telofy on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-05T21:33:31.601Z · score: 4 (2 votes) · EA · GW

I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it to be enlightening.

I see a tension between the following two arguments that I find plausible:

  1. Some people run into health issues on a vegan diet, in some cases apparently despite correct supplementation. In most cases it’s probably because of incorrect or absent supplementation, but probably not in all. This means there is a small probability that a highly productive EA doing highly important work ceases to be as productive. Since they’ve probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would’ve inflicted if they had [eaten some beef and had some milk](https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html). So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let’s assume for the moment that the person can make that tradeoff correctly for themselves.)
  2. There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. Each such increase in anomie (roughly, a lack of trust and cohesion) may be small in expectation, but anomie has vast societal costs. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.

I see a bit of a Laffer curve here (like an upside-down U) where upholding societal rules that are completely unheard of has little effect, and violating societal rules that are extremely well established has little effect again (except that you go to prison). The middle section is much more interesting, and that is where I generally advise treading softly. (But I’m also against stealing.)

The way I resolve this tension for myself is to assess whether, in my immediate environment – among the people who are most likely to be directly influenced by me – a norm is potentially about to emerge. If that is the case, and I approve of the norm, I try to always uphold that norm to at least an above-average level.

Well, and then there are a few more random caveats:

  1. As the norm not to harm other animals for food becomes stronger, it’ll be less socially awkward for people (outside vegan circles) to eat vegan food. Social effects were (last time I checked) still the second most common reason for vegan recidivism.
  2. As the norm not to harm other animals for food becomes stronger, more effort will be put into providing properly fortified food to make supplementation automatic.
  3. Eroding a budding social norm because it comes at a cost to one’s own goals seems like the sort of freeriding that I think the EA community needs to be very careful about. In some cases the conflict is only due to insufficiently idealized preferences, or only between instrumental rather than terminal goals, or the others would defect against us in any case – but we don’t know any of this to be the case here. The first comes down to unanswered questions of population ethics, the second to the exact tradeoffs between animal suffering and health risks for a particular person, and the third to how likely animal rights activists are to badmouth AI safety, priorities research, etc. – probably rarely.
  4. Being vegan among EAs, young, educated people, and other disproportionately antispeciesist groups may be more important than being vegan in a community of hunters.
  5. A possible, unusual conclusion to draw from this is to be a “private carnivore”: You only eat vegan food in public, and when people ask you whether you’re vegan, you tell them that you think eating meat is morally bad, a bad norm, and shameful, and so you only do it in private and as rarely as possible. No lies or pretense.
  6. There’s also the option of moral offsetting, which I find very appealing (despite these criticisms – I think I somewhat disagree with my five-year-old comment there now), but it doesn’t seem to quite address the core issue here. 
  7. Another argument you mentioned to me at an EAGx was something along the lines that it’ll be harder to attract top talent in field X (say, AI safety) if they not only have to subscribe to X being super important but also have to be vegan. Friends of mine solve that by keeping those things separate. Yes, the catering may be vegan, but otherwise nothing indicates that there’s any need for them to be vegan themselves. (That conversation can happen, if at all, in a personal context separate from any ties to field X.)

Comment by telofy on The Case for Education · 2020-08-16T15:59:13.767Z · score: 2 (1 votes) · EA · GW

Interesting, thank you! Assuming there are enough people who can do the “normal good things EAs would also do,” that still leaves the problem that it’ll be expensive to get enough people with the necessary edge in subject-matter expertise to devote time to tutoring.

I’m imagining a hierarchical system where the absolute experts on some topic (such as agent foundations or s-risks) set some time aside to tutor carefully selected junior researchers at their institute; those junior researchers tutor somewhat carefully selected amateur enthusiasts; and the amateur enthusiasts tutor people who’ve signed up for (self-selected into) a local reading club on the topic. These tutors may need to be paid for this work to be able to invest the necessary time.

This is difficult if the field of research is new because then (1) there may be only a small number of experts with very little time to spare and no one else who comes close in expertise, or (2) there may not yet be enough knowledge in the area to sustain three layers of tutors while still having a difference in expertise that allows for this mode of tutoring.

But whenever problem 2 occurs, the hierarchical scheme is just unnecessary. So only problem 1 in isolation remains unsolved.

Do you think that could work? Maybe this is something that’d be interesting for charity entrepreneurs to solve. :-)

What would also be interesting: (1) How much time do these tutors devote to each student per week? (2) Does one have to have exceptional didactic skills to become a tutor, or are these people selected only for their subject-matter expertise? (3) Was this particular tutor exceptional or are they all this good?

Maybe my whole idea is unrealistic because too few people could combine subject-matter expertise with didactic skill. Especially the skill of understanding a different, incomplete or inconsistent world model and then providing just the information that the person needs to improve it seems unusual.

Comment by telofy on The Case for Education · 2020-08-16T13:24:50.925Z · score: 3 (2 votes) · EA · GW

Hi Chi! I keep thinking about this:

My tutor pushed back and improved my thinking a lot and in a way that I frankly don't expect most of the people in my EA circle to do. I hope this also helps me evaluate the quality of discussion and arguments in EA a bit although I'm not sure if that's a real effect.

If you have a moment, I’d be very interested to understand what exactly this tutor did right and how. Maybe others (like me) can emulate what they did! :-D

Comment by telofy on Objections to Value-Alignment between Effective Altruists · 2020-07-16T10:44:57.980Z · score: 9 (3 votes) · EA · GW

I’ve come to think that evidential cooperation in large worlds and, in different ways, preference utilitarianism push even antirealists toward relatively specific moral compromises that require an impartial empirical investigation to determine. (That may not apply to various antirealists who have rather easy-to-realize moral goals or ones that others can’t help a lot with – say, protecting your child from some dangers or being very happy. But it does apply to my drive to reduce suffering.)

Comment by telofy on Objections to Value-Alignment between Effective Altruists · 2020-07-16T10:35:40.774Z · score: 11 (5 votes) · EA · GW

Thank you for writing this article! It’s interesting and important. My thoughts on the issue:

Long Reflection

I see a general tension between achieving existential security and putting sentient life on the best or an acceptable trajectory before we lose the ability to cooperate causally very well because of long delays in communication.

A focus on achieving existential security pushes toward investing less time into getting all basic assumptions just right, because all these investigations trade off against a terrible risk. I’ve read somewhere that homogeneity is good for early-stage startups because their main risk is in not being fast enough, not in getting something wrong. So people who are mainly concerned with existential risk may accept being very wrong about a lot of things so long as they still achieve existential security in time. I might call this “emergency mindset.”

Personally – I’m worried I’m likely biased here – I would rather like to precipitate the Long Reflection to avoid getting some things terribly wrong in the futures where we achieve existential security, even if these investigations come at some risk of diverting resources from reducing existential risk. I might call this “reflection mindset.”

There is probably some impartially optimal tradeoff here (plus comparative advantages of different people), and that tradeoff would also imply how many resources it is best to invest into avoiding homogeneity.

I’ve also commented on this on a recent blog article where I mention more caveats.

Ideas for Solutions

I’ve seen a bit of a shift toward reflection over emergency mindset at least since 2019 and more gradually since 2015. So if it turns out that we’re right and EA should err more in the direction of reflection, then a few things may aid that development.

Time

I’ve found that I need to rely a lot on others’ judgments on issues when I don’t have much time. But now that I have more time, I can investigate a lot of interesting questions myself and so need to rely less on the people I perceive as experts. Moreover, I’m less afraid to question expert opinions when I know something beyond the Cliff’s Notes about a topic, because I’ll be less likely to come off as arrogantly stupid.

So maybe it would help if people who are involved in EA in nonresearch positions were generally encouraged, incentivized, and allowed to take off more time to also learn things for themselves.

Money

The EA Funds could explicitly incentivize the above efforts, but they could also incentivize broad literature reviews, summaries, and interviews with experts on topics that relate to foundational assumptions in EA projects.

“Growth and the Case Against Randomista Development” seems like a particularly impressive example of such an investigation.

Academic Research

I’ve actually seen a shift toward academic research over the past 3–4 years. And that seems valuable to continue (though my above reservations about my personal bias in the issue may apply). It is likely slower and maybe less focused. But academic environments are intellectually very different from EA, and professors in some field are very widely read in that field. So being in that environment and becoming a person that widely read people are happy to collaborate with should be very helpful in avoiding the particular homogeneities that the EA community comes with. (They’ll have homogeneities of their own of course.)

Comment by telofy on Denis Drescher's Shortform · 2020-06-06T11:10:49.384Z · score: 3 (2 votes) · EA · GW

“Studies on Slack” by Scott Alexander: Personal takeaways

There have been studies on how software teams use Slack. Scott Alexander’s article “Studies on Slack” is not about that. Rather it describes the world as a garlic-like nesting of abstraction layers on which there are different degrees of competition vs. cooperation between actors; how they emerged (in some cases); and what their benefit is.

The idea, put simply, at least in my mind, is that in a fierce competition innovations need to prove beneficial immediately in logical time or the innovator will be outcompeted. But limiting innovations to only those that either consist of only one step or whose every step is individually beneficial is, well, limiting. The result is innovators stuck in local optima, unable to reach more global optima.

Enter slack. Somehow you create a higher-order mechanism that alleviates the competition a bit. The result is that now innovators have the slack to try a lot of multi-step innovations despite any neutral or detrimental intermediate steps. The mechanisms are different ones in different areas. Scott describes mechanisms from human biology, society, ecology, business management, fictional history, etc. Hence the garlic-like nesting: It seems to me that these systems are nested within each other, and while Scott only ever describes two levels at a time, it’s clear enough that higher levels such as business management depend on lower levels such as those that enable human bodies to function.
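
To make the mechanism concrete for myself, here is a toy sketch – entirely my own illustration, not anything from Scott’s post – of a one-dimensional fitness landscape with a local optimum and a better optimum behind a valley. An innovator who must never lose fitness stays stuck on the local peak; one with slack can cross the valley:

```python
import random

# Toy fitness landscape: a local optimum at x=2 (fitness 5) and a better
# optimum at x=6 (fitness 10), separated by a valley of lower fitness.
FITNESS = {0: 0, 1: 3, 2: 5, 3: 2, 4: 1, 5: 4, 6: 10}

def innovate(start, steps, slack):
    """Random-walk innovator on the landscape above.

    Without slack, a step is only taken if it does not lower fitness
    (every intermediate step must pay off immediately). With slack,
    temporarily detrimental intermediate steps are tolerated.
    Returns the best fitness reached, i.e. the best innovation found.
    """
    x = start
    best = FITNESS[x]
    for _ in range(steps):
        candidate = max(0, min(6, x + random.choice([-1, 1])))
        if slack or FITNESS[candidate] >= FITNESS[x]:
            x = candidate
            best = max(best, FITNESS[x])
    return best

random.seed(0)
for slack in (False, True):
    runs = [innovate(start=2, steps=200, slack=slack) for _ in range(1000)]
    print(f"slack={slack}: mean best fitness found = {sum(runs) / len(runs):.1f}")
```

The innovator without slack never leaves the local peak, while the one with slack usually finds the better peak within the 200 steps.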

This essay made a lot of things clearer to me that I had half intuited but never quite understood. In particular it made me update downward a bit on how much I expect AGI to outperform humans. One of my reasons for thinking that human intelligence is vastly inferior to a theoretical optimum was that I thought evolution could almost only ever improve one step at a time – that it would take an extremely long time for a multi-step mutation with detrimental intermediate steps to happen through sheer luck. Since slack seems to be built into biological evolution to some extent, maybe it is not as inferior as I thought to the “intelligent design” we’re attempting now.

It would also be interesting to think about how slack affects zero-sum board games – simulations of fierce competition. In the only board game I know, Othello, you can thwart any plans the opponent might have with your next move in, like, 90+% of cases. Hence, I made a (small but noticeable) leap forward in my performance when I switched from analyzing my position through the lens of “What is a nice move I can play?” to “What is a nice move my opponent could now play if it were their turn, and how can I prevent it?” A lot of perfect moves, especially early in the game, switched from looking surprising and grotesque to looking good once I viewed them through that lens. So it seems that in Othello there is rarely any slack. (I’m not saying that you don’t plan multi-step strategies in Othello, but it’s rare that you actually get to carry them out. Robust strategies play a much greater role in my experience. Then again this may be different at higher levels of gameplay than mine.)

Perhaps that’s related to why I’ve seen people who are not particularly smart turn out to be shockingly effective social manipulators, and why these people are usually found in low-slack fields. If your situation is so competitive that your opponent can never plan more than one step ahead anyway, you only need to do the equivalent of thinking “What is a nice move my opponent could now play if it were their turn, and how can I prevent it?” to beat, like, 80% of them. No need for baroque and brittle stratagems like in Skyfall.

I wonder if Go is different? The board is so big that I’d expect there to be room to do whatever for a few moves from time to time? Very vague surface-level heuristic idea! I have no idea of Go strategy.

I’m a bit surprised that Scott didn’t draw parallels to his interest in cost disease, though. Not that I see any clear ones, but there have got to be some that are worth at least checking and debunking – innovation slowing so that you need more slack to innovate at the same rate, or increasing wealth creating more slack, thereby decreasing competition that would’ve otherwise kept prices down, etc.

The article was very elucidating, but I’m still not quite able to look at a system and tell whether it needs more or less slack, or how to establish a mechanism that could produce that slack. That would be important, since I have a number of EA friends who could use some more slack to figure out psychological issues or to skill up in some areas. The EA Funds try to help a bit here, but I feel like we need more of that.

Comment by telofy on Denis Drescher's Shortform · 2020-06-05T06:52:44.619Z · score: 2 (1 votes) · EA · GW

“Effective Altruism and Free Riding” by Scott Behmer: Personal takeaways

Coordination is an oft-discussed topic within EA, and people generally try hard to behave cooperatively toward other EA researchers, entrepreneurs, and donors present and future. But “Effective Altruism and Free Riding” makes the case that standard EA advice favors defection over cooperation in prisoner’s dilemmas (and stag hunts) with non-EAs. It poses the question whether this is good or bad, and what can be done about it.

I’ve had a few thoughts while reading the article but found that most of them were already covered in the most upvoted comment thread. I’ll still outline them in the following as a reference for myself, to add some references that weren’t mentioned, and to frame them a bit differently.

The project of maximizing gains from moral trade is one that I find very interesting and promising, and want to investigate further to better understand its relative importance and strategic implications.

Still, Scott’s perspective was a somewhat new one for me. He points out that in particular the neglectedness criterion encourages freeriding: Climate change is a terrible risk but we tend to be convinced by neglectedness considerations that additional work on it is not maximally pressing. In effect, we’re freeriding on the efforts of activists working on climate change mitigation.

What was new to me about that is that I’ve conceived of neglectedness as a cheap coordination heuristic. Cheap in that it doesn’t require a lot of communication with other cooperators; coordination in the sense that everyone is working towards a bunch of similar goals but need to distribute work among themselves optimally; and heuristic in that it falls short insofar as values are not perfectly aligned, momentum in capacity building is hard to anticipate, and the tradeoffs with tractability and importance are usually highly imprecise.

So in essence, my simplification was to conceive of the world as filled with agents whose values are like mine and who use neglectedness to coordinate their cooperative work, whereas Scott conceives of the world as filled with agents whose values are very much unlike mine and who use neglectedness to freeride off of each other’s work.

Obviously, neither is exactly true, but I don’t see an easy way to home in on which model is better: (1) I suppose most people are not centrally motivated by consequentialism in their work, and it may be impossible for us to benefit the motivations that are central to them. But then again there are probably consequentialist aspects to most people’s motivations. (2) Insofar as there are aspects to people’s motivations for their work that we can benefit, how would these people wish for their preferences to be idealized (if that is even the framing in which they’d prefer to think about their behavior)? Caspar Oesterheld discusses the ins and outs of different forms of idealization in the eponymous section 3.3.1 of “Multiverse-wide Cooperation via Correlated Decision Making.” The upshot is, very roughly, that idealization through additional information seems less dubious than idealization through moral arguments (Scott’s article mentions advocacy, for example). So would exposing non-EAs to information about the importance of EA causes lead them to agree that people should focus on them even at the expense of the cause that they chose? (3) Which consequentialist preferences should we even take into account – only altruistic ones or also personal ones, since personal ones may be particularly strong? A lot of people have personal preferences not to die or suffer and for their children not to die or suffer, which may be (imperfectly) aligned with catastrophe prevention.

But the framing of the article and the comments was also different from the way I conceive of the world in that it framed the issue as a game between altruistic agents with different goals. I’ve so far seen all sorts of nonagents as being part of the game by dint of being moral patients. If instead we have a game between altruists who are stewards of the interests of other, nonagent moral patients, it becomes clearer why everyone is part of the game and what their power is, but there are a few other aspects that elude me. Is there a risk of double-counting the interests of the nonagent moral patients if they have many altruist stewards – and does that make a difference if everyone does it? And should a bargaining solution only take the stewards’ power into account (perhaps the natural default, for better or worse) or also the number of moral patients they stand up for? The first falls short of my moral intuitions in this case. It may also cause Ben Todd and many others to leave the coalition because the gains from trade are not worth the sacrifice for them. Maybe we can do better. But the second option seems gameable (by pretending to see moral patienthood where one in fact does not see it) and may cause powerful cooperators to leave the coalition if they have a particularly narrow concept of moral patienthood. (Whatever the result, it seems likely that this is the portfolio that commenters mentioned, probably akin to the compromise utility function that you maximize in evidential cooperation – see Caspar Oesterheld’s paper.)

Personally, I can learn a lot more about these questions by just reading up on more game theory research. More specifically, it’s probably smart to investigate what the gains from trade are that we could realize in the best case to see if all of this is even worth the coordination overhead.

But there are probably also a few ways forward for the community. Causal (as opposed to acausal) cooperation requires some trust, so maybe the signal that there is a community of altruists that cooperate particularly well internally can be good if paired with the option for others to join that community by proving themselves to be sufficiently trustworthy. (That community may be wider than EA and go by a different name.) That would probably take the shape of newcomers making the case for new cause areas not necessarily based on their appeal to utilitarian values but based on their appeal to the values of the newcomer – alongside an argument that those values wouldn’t just turn into some form of utilitarianism upon idealization. That way, more value systems could gradually join this coalition, and we’d promote cooperation the way Scott recommends in the article. It’ll probably make sense to have different nested spheres of trust, though, with EA orgs at the center, the wider community around that, new aligned cooperators further outside, occasional mainstream cooperators further outside yet, etc. That way, the more high-trust spheres remain even if spheres further on the outside fail.

Finally, a lot of these things are easier in the acausal case that evidential cooperation in large worlds (ECL) is based on (once again, see Caspar Oesterheld’s paper). Perhaps ECL will turn out to make sufficiently strong recommendations that we’ll want to cooperate causally anyway despite any risk of causal defection against us. This strikes me as somewhat unlikely (e.g., many environmentalists may find ECL weird, so there may never be many evidential cooperators among them), but I still feel sufficiently confused about the implications of ECL that I find it at least worth mentioning.

Comment by telofy on What are the leading critiques of "longtermism" and related concepts · 2020-05-30T21:57:12.292Z · score: 22 (10 votes) · EA · GW

“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.

Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking but has a more narrow focus.

Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.

There are a number of heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there's only a small chance that Nick Bostrom's basic simulation argument is correct.

My thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.

Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.

Comment by telofy on [Stats4EA] Expectations are not Outcomes · 2020-05-19T12:11:00.135Z · score: 13 (6 votes) · EA · GW

I’ve found Christian Tarsney’s “Exceeding Expectations” insightful when it comes to recognizing and maybe coping with the limits of expected value.

The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.

See also the post/sequence by Daniel Kokotajlo, “Tiny Probabilities of Vast Utilities”. I’m linking to the post that was most valuable to me, but by default it might make sense to start with the first one in the sequence. ^^

Comment by telofy on Modelers and Indexers · 2020-05-16T10:50:21.672Z · score: 3 (2 votes) · EA · GW

Yeah, totally agree! The Birds and Frogs distinction sounds very similar! I’ve pocketed the original article for later reading.

And I also feel that the Adaptors–Innovators one is “may be slightly correlated but is a different thing.” :-)

Comment by telofy on Modelers and Indexers · 2020-05-16T10:36:26.113Z · score: 3 (2 votes) · EA · GW

Yes! I’ve been thinking about you a lot while I was writing that post because you yourself strike me as a potential counterexample to the usefulness of the distinction. I’ve seen you do exactly what you describe and generally display comfort in situations that indexers would normally be comfortable in, while at the same time you evidently have quite similar priorities to me. So either you break the model or you’re just really good at both! :-)

Comment by telofy on Modelers and Indexers · 2020-05-16T10:31:22.541Z · score: 3 (2 votes) · EA · GW

Thank you!

Yeah, that feels fitting to me too. I found these two posts on the term:

https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality
https://www.lesswrong.com/posts/j2mcSRxhjRyhyLJEs/what-is-social-reality

A lot of social things appear arbitrary when deep down they must be deterministic. But bridging that gap is perhaps both computationally infeasible and doesn’t lend itself to particularly powerful abstractions (except for intentionality). At the same time, though, the subject is more inextricably integrated with the environment, so that it makes more sense to model the environment as falling into intentional units (agents) who are reactive. And then maybe certain bargaining procedures emerged (because they were adaptive) that are now integrated into our psyche as customs and moral intuitions.

For these bargaining procedures, I imagine, it’ll be important to abstract usefully from specific situations to more general games. Then you can classify a new situation as one that either requires going through the bargaining procedure again or is a near-replication of a situation whose bargaining outcome you already have stored. That would require exactly the indexer type of abilities – abstracting from situations to archetypes and storing the archetypes.

(E.g., if you sell books, there’s a stored bargaining solution for that where you declare a price, and if it’s right, hand over the book and get the money for it, and otherwise keep the book and don’t get the money. But if you were the first to create a search engine that indexes the full-text of books, there were no stored bargaining solutions for that and you had to go through the bargaining procedures.)

It also seems to me that there are people who, when in doubt, tend more toward running through the bargaining procedure, while others instead tend more toward observing and learning established bargaining solutions very well and maybe widening their reference classes for games. I associate the first a bit with entrepreneurs, low agreeableness, Australia, and the Pavlov strategy, and the second with me, agreeable friends of mine, Germany/Switzerland, and tit for tat.

Comment by telofy on Bored at home? Contribute to the EA Wiki! · 2020-05-01T13:35:15.557Z · score: 6 (3 votes) · EA · GW

I love the idea of wikis for EA knowledge, but is there an attempt underway yet to consolidate all the existing wikis, beyond the Wikia one? Maybe you can coordinate some data import with the other people who are running EA wikis.

When the Priority Wiki was launched, I and (much more so) John Maxwell compiled some of the existing wikis here.

I think for one of these wikis to take off, it’ll probably need to become the clear Schelling point for wiki activity – maybe an integration with the concepts platform or the forum and a consolidation of all the other wikis as a basis.

I imagine there’d also need to be a way for active wiki authors to gain reputation points, e.g., in this forum, so wiki editing can have added benefits for CV building. Less Wrong also has a forum and a wiki, and the forum runs very similar software, so maybe they already have plans for such a system.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-05-01T10:26:25.356Z · score: 3 (2 votes) · EA · GW

Oh yeah, I was also talking about it only from utilitarian perspectives. (Except for one aside: “Others again refuse it on deontological or lexical grounds that I also empathize with.”) It’s just that utilitarianism doesn’t prescribe an exchange rate between the intensity/energy expenditure/… of individual positive experiences and individual negative experiences.

It seems that reasonable people think the outcome of B might actually be worse than A, based on your response.

Yes, I hope they do. :-)

Sorry for responding so briefly! I’m falling behind on some reading.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-04-25T20:21:45.190Z · score: 3 (2 votes) · EA · GW

I think I’m not well placed to answer that at this point and would rather defer it to someone who has thought about this more than I have from the vantage points of many ethical theories rather than just from my (or their) own. (I try, but this issue has never been a priority for me.) Then again, this is a good exercise for me in moral perspective-taking, or whatever it’s called. ^^

It seems C > B > A, with the difference between A and B greater than the difference between B and C.

In the previous reply I tried to give broadly applicable reasons to be careful about it, but those were mostly just from Precipice. The reason is that if I ask myself, e.g., how long I would be willing to endure extreme torture to gain ten years of ultimate bliss (apparently a popular thought experiment), I might be ready to invest a few seconds if any, for a tradeoff ratio of 1e7 or 1e8 to 1. So from my vantage point, the r-strategist style “procreation” is very disvaluable. It seems like it may well be disvaluable in expectation, but either way, it seems like an enormous cost to bear for a highly uncertain payoff. I’m much more comfortable with careful, K-strategist “procreation” on a species level. (Magnus Vinding has a great book coming out soon that covers this problem in detail.)
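
For reference, the back-of-the-envelope arithmetic behind that ratio, taking “a few seconds” to mean something like 3–30 seconds:

$$
\frac{10 \text{ years of bliss}}{\text{a few seconds of torture}} \approx \frac{10 \times 365.25 \times 86{,}400\ \text{s}}{3\text{–}30\ \text{s}} \approx \frac{3.2 \times 10^8\ \text{s}}{3\text{–}30\ \text{s}} \approx 10^7\text{–}10^8.
$$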

But assuming the agnostic position again, for practice, I suppose A and C are clear cut: C is overwhelmingly good (assuming the Long Reflection works out well and we successfully maximize what we really terminally care about, but I suppose that’s your assumption) and A is sort of clear because we know roughly (though not very viscerally) how much disvalue our ancestors have paid forward over the past millions of years so that we can hopefully eventually create a utopia.

But B is wide open. It may go much more negative than A even considering all our past generations – suffering risks, dystopian-totalitarian lock-ins, permanent prehistoric lock-ins, etc. The less certain it is, the more of this disvalue we’d have to pay forward to get one utopia out of it. And it may also go positive of course, almost like C, just with lower probability and a delay.

People have probably thought about how to spread self-replicating probes to other planets so that they produce everything a species will need at the destination to rebuild a flourishing civilization. Maybe there’ll be some DNA but also computers with all sorts of knowledge, and child-rearing robots, etc. ^^ But a civilization needs so many interlocking parts to function well – all sorts of government-like institutions, trust, trade, resources, … – that it seems to me like the vast majority of these civilizations either won’t get off the ground in the first place and remain locked in a probably disvaluable Stone Age type of state, or will permanently fall short of the utopia we’re hoping for eventually.

I suppose a way forward may be to consider the greatest uncertainties about the project – the probabilities and magnitudes at the places where things can go most badly net negative or most awesomely net positive.

Maybe one could look into Great Filters (they may be less of a necessity than I had previously thought), because if we are now past the (or a) Great Filter, and the Great Filter is something about civilization rather than something about evolution, we should probably assign a very low probability to a civilization like ours emerging under very different conditions through the probably very narrow panspermia bottleneck. I suppose this could be tested on some remote islands? (Ethics committees may object to that, but then these objections also and even more apply to untested panspermia, so they should be taken very seriously. Then again they may not have read Bostrom or Ord. Or Pearce, Gloor, Tomasik, or Vinding for that matter.)

Oh, here’s an idea: The Drake Equation has the parameter f_i for the probability that existing life develops (probably roughly human-level?) intelligence, f_c that intelligent life becomes detectable, and L for the longevity of the civilization. The probability that intelligent life creates a civilization with similar values and potential is probably a bit less than f_c (these civilizations could have any moral values) but more than the product of the two fs. The paper above has a table that says “f_i: log-uniform from 0.001 to 1” and “f_c: log-uniform from 0.01 to 1.” So I suppose we have some 2–5 orders of magnitude uncertainty from this source.

The longevity of a civilization is “L: log-uniform from 100 to 10,000,000,000” in the paper. An advanced civilization that exists for 10–100k years may be likely to have passed the Precipice… Not sure at all about this because of the risk of lock-ins. And I’d have to put this distribution into Guesstimate to get a range of probabilities out of this. But it seems like a major source of uncertainty too.
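
If I do get around to that, a minimal Monte Carlo sketch along the following lines might be a starting point. The log-uniform ranges are the ones quoted from the paper above; using the product f_i · f_c as a crude stand-in for the probability in question, and the 10,000-year threshold, are my own simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(low, high, size):
    """Sample log-uniformly between low and high."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# Ranges as quoted from the paper's table above.
f_i = log_uniform(1e-3, 1.0, n)  # life -> (roughly human-level) intelligence
f_c = log_uniform(1e-2, 1.0, n)  # intelligence -> detectable civilization
L = log_uniform(1e2, 1e10, n)    # longevity of the civilization in years

# Crude stand-in for "intelligent life creates a civilization with
# similar values and potential" (see the caveats above).
p_similar = f_i * f_c
print("median:", np.median(p_similar))
print("90% interval:", np.percentile(p_similar, [5, 95]))

# Rough probability that a civilization lasts at least 10,000 years.
print("P(L >= 10,000 years):", np.mean(L >= 1e4))
```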

The ethical tradeoff question above feels almost okay to me with a 1e8 to 1 tradeoff but others are okay with a 1e3 or 1e4 to 1 tradeoff. Others again refuse it on deontological or lexical grounds that I also empathize with. It feels like there are easily five orders of magnitude uncertainty here, so maybe this is the bigger question. (I’m thinking more in terms of an optimal compromise utility function than in moral realist terms, but I suppose that doesn’t change much in this case.)

In the best case within B, there’s also the question whether it’ll be a delay compared to C of thousands or of tens of thousands of years, and how much that would shrink the cosmic endowment.

I don’t trust myself to be properly morally impartial about this after such a cursory investigation, but that said, I would suppose that most moral systems would put a great burden of proof on the intervention because it can be so extremely good and so extremely bad. But tackling these three to four sources of uncertainty and maybe others can perhaps shed more light on how desirable it really is.

I empathize with the notion that some things can’t wait until the Long Reflection, at least as part of a greater portfolio, because it seems to me that suffering risks (s-risks) are a great risk (in expectation) even or especially now, in the span until the Long Reflection. They can perhaps be addressed through different and more tractable avenues than other longterm risks and by researchers with different comparative advantages.

A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is does the difference between this case and C surpass that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.

Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?

So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?

There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risks and benefits are there. It feels like that came up in Precipice too, but I could be mixing something up.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-04-25T11:40:19.279Z · score: 9 (6 votes) · EA · GW

Such an effort would likely be irreversible, or at least very slow and costly to reverse. It would come at an immense cost in option value.

Directed panspermia also bears greater average-case risks than a controlled expansion of our civilization because we’ll have less control over the functioning and the welfare standards of the civilization (if any) and thus the welfare of the individuals at the destination.

Toby Ord’s Precipice more or less touches on this:

Other bold actions could pose similar risks, for instance spreading out beyond our Solar System into a federation of independent worlds, each drifting in its own cultural direction.

This is not to reject such changes to the human condition—they may well be essential to realizing humanity’s full potential. What I am saying is that these are the kind of bold changes that would need to come after the Long Reflection. Or at least after enough reflection to fully understand the consequences of that particular change. We need to take our time, and choose our path with great care. For once we have existential security we are almost assured success if we take things slowly and carefully: the game is ours to lose; there are only unforced errors.

Absent the breakthroughs that the Long Reflection will hopefully bring, we can’t even be sure that the moral value of a species spread out across many solar systems would be positive even if its expected aggregate welfare is positive. They may not be willing to trade suffering and happiness 1:1.

I could imagine benefits in scenarios where Earth gets locked into some small-scale, stable, but undesirable state. Then there’d still be a chance that another civilization emerges elsewhere and expands to reclaim the space around our solar system. (If they become causally disconnected from us before they reach that level of capability, they’re probably not so different from any independently evolved life elsewhere in the universe.) But that would come at a great cost.

The approach seems similar to that of r-strategist species that have hundreds of offspring of which on average only two survive. These are thought to be among the major sources of disvalue in nature. In the case of directed panspermia we could also be sure of the high degree of phenomenal consciousness of the quasi-offspring so that the expected disvalue would be even greater than in the case of the r-strategist species where many of the offspring die while they’re still eggs.

In most other scenarios, risks are either on a planetary scale or remain so long as there aren’t any clusters that are so far apart as to be causally isolated. So in those scenarios, an expansion beyond our solar system would buy minimal risk reduction. That can be achieved at a much lesser cost in terms of expected disvalue.

So I’d be more comfortable deferring such grave and near-irreversible decisions to future generations that have deliberated all their aspects and implications thoroughly and even-handedly for a long time and have reached a widely shared consensus.

Comment by telofy on Denis Drescher's Shortform · 2020-04-23T15:44:50.839Z · score: 3 (2 votes) · EA · GW

[“If you value future people, why do you consider near term effects?” by Alex HT: Personal takeaways.]

I find it disconcerting that there are a lot of very smart people in the EA community who focus more on near-term effects than I currently find reasonable.

“If you value future people, why do you consider near term effects?” by Alex HT makes the case that a lot of reasons to focus on near-term effects fall short of being persuasive. The case is based centrally on complex cluelessness. It closes with a series of possible objections and why they are not persuasive. (Alex also cites the amazing article “Growth and the case against randomista development.”)

The article invites a discussion, and Michael St. Jules responded by explaining the shape of a utility function (bounded above and below) that would lead to a near term focus and why it is a sensible utility function to have. This seems to be a common reason to prefer near-term interventions, judging by the number of upvotes.

There are also hints in the discussion of whether there may be a reason to focus on near-term effects as a Schelling point in a coordination problem with future generations. But that point is not fully developed, and I don’t think I could steelman it.

I’ve heard smart people argue for the merits of bounded utility functions before. They have a number of merits – avoiding Pascal’s mugging, the St. Petersburg game, and more. (Are there maybe even some benefits for dealing with infinite ethics?) But they’re also awfully unintuitive to me.

Besides, I wouldn’t know how to select the right parameters for it. With some parameters, it’ll be nearly linear in a third-degree-polynomial increase in aggregate positive or negative valence over the coming millennium, and that may be enough to prefer current longtermist over current near-termist approaches.
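
As a toy illustration of that last point – the tanh shape and the bound values below are arbitrary choices of mine, not anything from the discussion:

```python
import numpy as np

def bounded_utility(v, bound):
    """Utility bounded in (-bound, bound); nearly linear while |v| << bound."""
    return bound * np.tanh(v / bound)

years = np.arange(1, 1001)
valence = years**3  # a third-degree-polynomial increase in aggregate valence

for bound in (1e7, 1e9, 1e12):
    u = bounded_utility(valence, bound)
    # Ratio of the bounded utility to a purely linear utility after 1,000 years
    print(f"bound={bound:.0e}: final u / final valence = {u[-1] / valence[-1]:.3f}")
```

With a bound far above the total valence at stake, the function stays nearly linear over the whole millennium; with a small bound it saturates almost immediately – which is why the choice of parameters does so much work.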

Related: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

Comment by telofy on (How) Could an AI become an independent economic agent? · 2020-04-05T11:42:11.431Z · score: 2 (1 votes) · EA · GW

I’ve read some documents where the developers of a cryptocurrency were worried that it might become possible to restore a lot of lost crypto that no one currently has access to – presumably because it might lead to inflation? I don’t remember where I read it or what the concrete concerns were. Maybe someone with more blockchain knowledge can fill in the details.

Comment by telofy on Good Done Right conference · 2020-02-05T19:06:01.937Z · score: 4 (3 votes) · EA · GW

Great sleuthing! I remember my surprise in late 2014, months after the conference, when I found out that something with such a roster of speakers had happened, and I hadn’t noticed it. Luckily I also found that invaluable Soundcloud account. :-D

Comment by telofy on Coordinating Commitments Through an Online Service · 2020-01-12T14:06:37.798Z · score: 2 (1 votes) · EA · GW

I love this idea! Of course you’d want to test the demand for it cheaply. Maybe there is already a Kickstarter-like platform where you need to meet a minimum number of contributors rather than just a minimum total contribution (or a maximum contribution per contributor). Then you could just use that platform for a test run. If not among Kickstarter-like platforms, then maybe among petition platforms? Or you could repurpose a mere Mailchimp newsletter signup for this! You could style it to look like a solemn signing of a conditional pledge.

If there is such a platform, you could see if you can get a charity like Animal Equality on board with the experiment, one that has a substantial online audience. (They’ll also be happy about the newsletter signups, though that should require a separate opt-in.)

Finally, you could run quick anonymous surveys of the participants: What did they do, and what would they have done without the campaign? Perhaps one after a month and one after a year or so. (It would also be interesting to follow up again after several years because of vegan recidivism, which usually sets in after around 7 years afaik.)

Maybe you can even do all of that without any coding.

Comment by telofy on What things do you wish you discovered earlier? · 2019-09-18T23:00:16.165Z · score: 8 (6 votes) · EA · GW

My latest breakthrough: Text to speech readers set to a high reading rate. (Plus cheap and good-quality Mpow Bluetooth headphones.)

Now I effortlessly plow through my reading list at up to 400 words per minute during commutes. My micromorts/km are lower too since I don't need to keep my eyes glued to the Kindle.

I started out at lower rates and then gradually increased them over the course of a few weeks as the lower rates started to feel slow.

I mostly use @Voice Aloud Reader and Pocket for Android.

PS: Thanks for your recommendations, especially Otter!

Comment by telofy on Ask Me Anything! · 2019-08-20T18:47:11.421Z · score: 12 (7 votes) · EA · GW

Comment by telofy on Ask Me Anything! · 2019-08-20T18:28:05.075Z · score: 10 (5 votes) · EA · GW

Do I understand you correctly that you’re relatively less worried about existential risks because you think they are less likely to be existential (that civilization will rebound) and not because you think that the typical global catastrophes that we imagine are less likely?

Comment by telofy on Ask Me Anything! · 2019-08-20T18:25:53.008Z · score: 12 (6 votes) · EA · GW

I’d like to vote for more detail on:

I find (non-extinction) trajectory change more compelling as a way of influencing the long-run future than I used to.

Unless the change in importance is fully explained by the relative reprioritization after updating downward on existential risks.

Comment by telofy on Seeking feedback on Cause Prioritization Platform · 2019-08-14T16:03:45.150Z · score: 3 (2 votes) · EA · GW

Cool idea!

A few things I just noticed:

  1. It would be nice if the reordering happened only on reload and not immediately. I sometimes ticked an “importance” box fully intending to also tick another one, but then the block jumped away, and I’d have to go looking for it again.
  2. I second Saulius’s feedback: I also feel like it’d be easier for me to rate things on a three- or five-point scale or even with a slider. As it is, I’m tempted to tick almost all the boxes on all problems.
  3. I really like your initial selection. I had a hard time voting on animal welfare because my assessments of farmed and wild animal welfare are so different. Then I noticed that you had anticipated my problem and added the subsections!
  4. The term tractable seems more intuitive to me than solvable, since the second can be misunderstood as whether the problem can be fully solved as opposed to the definition that Saulius cites.
  5. I’d like to make a distinction (and maybe a granular one) between whether I think a problem is not important/neglected/tractable or whether I have no opinion on it. E.g., I currently don’t know whether the Pain Gap is tractably addressable, but not ticking the box feels as if I were saying that it’s not tractable.
  6. Not showing other people’s ratings could help avoid anchoring people.

Comment by telofy on What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) · 2019-08-05T10:12:46.265Z · score: 5 (3 votes) · EA · GW

A remote research organization for funding and coordinating ongoing individual remote research efforts.

I’d be interested in this! In terms of coordination, it could (1) pool information on what organizations and individuals are already working on to allow cooperation and mentorship, and avoid duplication unless duplication is useful; (2) give guidance on prioritization of research ideas; and (3) provide a curated, accessible, and well categorized outlet (e.g., blog) for researchers that is sufficiently high-quality that a lot of people read it.

Comment by telofy on A vision for anthropocentrism to supplant wild animal suffering · 2019-06-06T18:39:17.701Z · score: 3 (2 votes) · EA · GW

A major factor in my calculus is also that, given enough time, it’ll probably become much more feasible to send non-biological life – which needs no terraforming – to planets outside of our solar system than to send biological life, so that even in terraforming scenarios the terraforming will probably be limited to planets within the solar system.

Comment by telofy on Is visiting North Korea effective? · 2019-04-06T12:48:39.522Z · score: 11 (4 votes) · EA · GW

I’ve written a comparative article on plausible interventions for human rights in North Korea. The activists I interviewed had already considered running campaigns to discourage travel to North Korea because tourism is an important source of foreign currency for the government. (They can force their citizens to stage North Korean life for tourists while paying them in their worthless national currency, so that they make a large profit on tourism.)

To my knowledge, these activists never pursued that strategy because it may be an attention hazard and thus actually increase tourism, and because it might strain relationships with organizations that think that tourists may show North Koreans that other ways of life are possible. But I find that implausible because almost no one is allowed to travel within North Korea (and tourists are even more tightly controlled and restricted), so it’s always only the same most loyal North Koreans who come into contact with tourists.

But I discuss other more promising interventions in the article. For more detailed, reliable, and up-to-date information you can get in touch with, e.g., Saram as I’m not myself active in the space.

Comment by telofy on Tool recommendation: Polar personal knowledge repository · 2019-04-02T12:12:38.241Z · score: 3 (2 votes) · EA · GW

Very promising! They have plans to create a mobile client, and maybe the web version will also eventually support HTML and ebook formats. Looking forward to that!

Comment by telofy on Potential funding opportunity for woman-led EA organization · 2019-03-15T17:25:16.938Z · score: 2 (1 votes) · EA · GW

CFAR and Encompass (https://encompassmovement.org/) might also fit the bill? Maybe also some (other) EA meta-charities whose current team configuration I don't remember well enough.

Comment by telofy on Three Biases That Made Me Believe in AI Risk · 2019-02-22T12:17:40.725Z · score: 4 (3 votes) · EA · GW

I’d particularly appreciate an updated version of “Astronomical waste, astronomical schmaste” that disentangles the astronomical waste argument from arguments for the importance of AI safety. The current one is hard for me to engage with because I don’t go along with the astronomical waste argument at all but am still convinced that a lot of projects under the umbrella of AI safety are top priorities, because extinction is considered bad by a wide variety of moral systems irrespective of astronomical waste, and particularly in order to avert s-risks, which are also considered bad by all moral systems I have a grasp on.

Comment by telofy on Small animals have enormous brains for their size · 2019-02-22T12:07:06.020Z · score: 5 (2 votes) · EA · GW

This is fascinating! I’ve heard (though it may well be bunk) that intelligence in humans is somewhat correlated with brain size but that brain size is limited by the size of the birth canal. (Which made me think that C-sections should lead to smarter people in the long run.) But if there’s still so much room for optimization left without changing brain size, does that merely indicate that the changes would take too many mutations to be likely to happen (similar to why we still have our weird eye architecture while other animals have more straightforward eyes), or that a lot of human thinking happens at a lower level of abstraction than that of the neuron, so that, e.g., whole-brain emulation at the neuronal level would be destined to fail?

Comment by telofy on are values a potential obstacle in scaling the EA movement? · 2019-01-04T10:09:21.850Z · score: 2 (1 votes) · EA · GW

Seminal for me has been Owen Cotton-Barratt’s paper “How valuable is movement growth?” I therefore welcome the shift toward very careful growth, if any, that has happened over the past years. Today I think of the EA community like a startup of sorts that tries to hire slowly and selects staff carefully based on culture fit, character, commitment, etc.

Comment by telofy on Announcing the EA donation swap system · 2018-12-22T13:00:44.731Z · score: 3 (2 votes) · EA · GW

Hi! Thank you! That sounds good (“Charity X” would be a free text field?), but I don’t know whether there are other problems it doesn’t address. To guard against that, an FAQ entry explaining the problem would be best. Generally that’ll be needed because this is probably unintuitive for many people (like me), so even if they have the information about the swap counterfactual, they may not be able to use it optimally without an explanation.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-22T12:40:08.482Z · score: 5 (5 votes) · EA · GW

Hmm, yeah, curious as well. Maybe it’s because I link long essays without summarizing them, so people are left wondering whether the essays are relevant enough to be worth reading.

But apart from the link to Simon’s reply, Kaj’s comment is much better than mine anyway.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T15:00:08.955Z · score: 4 (17 votes) · EA · GW

Wow! Thank you again for another amazing overview! :-D

With regard to the FRI section: Here is a reply to Toby Ord by Simon Knutsson and another piece that seems related. (And by “suffering focus,” people are referring to something much broader than NU, which may be true of some CUs too.)

Comment by telofy on EA Global Lightning Talks (San Francisco 2018) · 2018-12-03T17:36:31.293Z · score: 2 (1 votes) · EA · GW

Very interesting talks – thank you! For me, especially Philip Trammell’s talk.

Comment by telofy on Announcing the EA donation swap system · 2018-12-02T12:18:15.217Z · score: 3 (2 votes) · EA · GW

Thank you for creating this! I want to understand some possible risks to my value system better. So here’s one scenario that I’ve been thinking about.

I realize that it’s a trust-based system, but if Donor A trusts Donor B on something that isn’t clear enough to Donor A to even ask about, and that’s so unremarkable to Donor B that they see no more reason to tell Donor A about it than their espresso preferences, then no one is really at fault if they miscommunicate.

Say Donor A:

  1. is neutral between Rethink Priorities (RP) and the Against Malaria Foundation (AMF) (but Donor B doesn’t know this),
  2. can get tax exemption for a donation to RP but not AMF, and
  3. wants to donate $2k.

And Donor B:

  1. values a dollar to RP more than 100 times as highly as one to AMF (but Donor A doesn’t know this),
  2. can get tax exemption for a donation to AMF but not RP, and
  3. wants to donate $1k.

Without donation swap:

  1. Donor A is perfectly happy and donates $2k to RP, with the tax exemption as tie breaker (but even if they split the donation 50:50 or donate with 50% probability, this case is still problematic).
  2. Donor B is a bit sad but donates, say, $850 to RP, which comes to the same cost for them because of the missing tax break.
  3. In effect: RP gains $2,850 and AMF gains $0. Both donors are reasonably happy with this result, the only wrinkle being the taxes.

But with donation swap:

  1. Donor A loves helping their fellow EAs and so offers a swap even though they don’t personally need it.
  2. Donor B enthusiastically takes them up on the offer to save the taxes, donates $1k to AMF, and Donor A donates $1k to RP. Later, Donor A donates their remaining $1k to RP.
  3. In effect: RP gains $2k and AMF gains $1k. Slightly positive for Donor A but big loss for Donor B.

This seems like a plausible scenario to me but there are other scenarios that are less extreme but still very detrimental to one side and possibly even harder to spot.
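To make the arithmetic explicit, here is a minimal sketch, assuming a hypothetical 15% marginal tax rate (the comment leaves the rate implicit; $850 is roughly what a $1,000 tax-deductible donation costs after a 15% deduction):

```python
# Minimal sketch of the scenario above, under the assumption of a
# hypothetical 15% marginal tax rate.

TAX_RATE = 0.15

def out_of_pocket(amount, deductible):
    """Cost to the donor of donating `amount`, given deductibility."""
    return amount * (1 - TAX_RATE) if deductible else amount

# Donor B's out-of-pocket cost is the same in both scenarios:
assert out_of_pocket(1000, deductible=True) == out_of_pocket(850, deductible=False)

# Without the swap: A gives $2k to RP (deductible for A),
# B gives $850 to RP (not deductible for B).
without_swap = {"RP": 2000 + 850, "AMF": 0}

# With the swap: B gives $1k to AMF (deductible for B), A gives $1k to RP
# in B's place and their remaining $1k to RP as well.
with_swap = {"RP": 1000 + 1000, "AMF": 1000}

print(without_swap)  # {'RP': 2850, 'AMF': 0}
print(with_swap)     # {'RP': 2000, 'AMF': 1000}
```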

So am I overlooking something that alleviates this worry or do donors have to know (commit) and be transparent about where they will donate if no swap happens, in order for the other party to know whether they should take them up on the offer?

Comment by telofy on Which World Gets Saved · 2018-11-11T13:05:10.047Z · score: 6 (5 votes) · EA · GW

Very interesting! This strikes me as a particular type of mission hedging, right?

Comment by telofy on Announcing the EA Angel Group - Seeking EAs with the Time and Money to Evaluate and Fund Early-Stage Grants · 2018-10-18T19:24:19.019Z · score: 2 (2 votes) · EA · GW

I'd love to subscribe to a blog where you publish what grants you've recommended. Are you planning to run something like that?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-09-02T16:27:50.387Z · score: 0 (0 votes) · EA · GW

Oh, cool! I'm reading that study at the moment. I'll be able to say more once I'm through. Then I'll turn to your article. Sounds interesting!

Comment by telofy on A Critical Perspective on Maximizing Happiness · 2018-08-02T06:05:37.754Z · score: 7 (7 votes) · EA · GW

Thank you for starting that discussion. Some resources that come to mind that should be relevant here are:

  • Lukas Gloor’s concept of Tranquilism,
  • different types of happiness (a talk by Michael Plant where I think I heard them explained), and
  • the case for the relatively greater moral urgency and robustness of suffering minimization over happiness maximization, i.e., a bit of a focus on suffering.
Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T11:02:22.218Z · score: 2 (2 votes) · EA · GW

I’m against it. ;-)

Just kidding. I think monopolies and competition are bundles of advantages and disadvantages that we can also combine differently. Competition comes with duplication of effort, sometimes with sabotaging the other rather than improving oneself, and with some other problems. A monopoly would come with the local-optima problem you mentioned. But we can also acknowledge (as we do in many other fields) that we don’t know how to run the best wiki, and have different projects that try out different plausible strategies while remaining cooperative rather than self-interested, because they care about the value of information from the experiment. So they can work together, automatically synchronize any content that can be synchronized, etc. We’ll first need meaningful differences between the projects that are worthwhile to test, e.g., restrictive access vs. open access.

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T10:52:50.979Z · score: 0 (0 votes) · EA · GW

That would be an immensely valuable meta problem to solve!

Then maybe we can have a wiki which reaches "meme status".

On a potentially less serious note, I wonder if one could make sure that a wiki remains popular by adding a closed section to it that documents particular achievements from OMFCT the way Know Your Meme does. xD

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-19T15:33:59.753Z · score: 16 (15 votes) · EA · GW

Sweet! I hope it’ll become a great resource! Are you planning to merge it with https://causeprioritization.org/? If there are too many wikis, we’d just run into the same problem with fragmented bits of information again.

Comment by telofy on Lessons for estimating cost-effectiveness (of vaccines) more effectively · 2018-06-08T09:59:21.506Z · score: 1 (1 votes) · EA · GW

Thank you! I suspect this is going to be very helpful for me.

Comment by telofy on Introducing Charity Entrepreneurship: an Incubation and Research Program for New Charities · 2018-06-08T08:00:30.167Z · score: 1 (1 votes) · EA · GW

Awesome! Do you also have plans to assist EA founders of for-profit social enterprises (e.g., Wave)?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-03-30T08:30:39.866Z · score: 0 (0 votes) · EA · GW

Awesome, thank you!

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-25T15:05:07.194Z · score: 1 (1 votes) · EA · GW

Hi Jeff!

To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.