Posts

Modelers and Indexers 2020-05-12T12:01:14.768Z · score: 34 (10 votes)
Denis Drescher's Shortform 2020-04-23T15:44:50.620Z · score: 6 (1 votes)
Current Thinking on Prioritization 2018 2018-03-13T19:22:20.654Z · score: 9 (9 votes)
Cause Area: Human Rights in North Korea 2017-11-26T14:58:10.490Z · score: 15 (15 votes)
The Attribution Moloch 2016-04-28T06:43:10.413Z · score: 12 (10 votes)
Even More Reasons for Donor Coordination 2015-10-27T05:30:37.899Z · score: 4 (6 votes)
The Redundancy of Quantity 2015-09-03T17:47:20.230Z · score: 2 (4 votes)
My Cause Selection: Denis Drescher 2015-09-02T11:28:51.383Z · score: 6 (6 votes)
Results of the Effective Altruism Outreach Survey 2015-07-26T11:41:48.500Z · score: 3 (5 votes)
Dissociation for Altruists 2015-05-14T11:27:21.834Z · score: 5 (9 votes)
Meetup : Effective Altruism Berlin Meetup #3 2015-05-10T19:40:40.990Z · score: 0 (0 votes)
Incentivizing Charity Cooperation 2015-05-10T11:02:46.433Z · score: 6 (6 votes)
Expected Utility Auctions 2015-05-02T16:22:28.948Z · score: 4 (4 votes)
Telofy’s Effective Altruism 101 2015-03-29T18:50:56.188Z · score: 3 (3 votes)
Meetup : EA Berlin #2 2015-03-26T16:55:04.882Z · score: 0 (0 votes)
Common Misconceptions about Effective Altruism 2015-03-23T09:25:36.304Z · score: 8 (8 votes)
Precise Altruism 2015-03-21T20:55:14.834Z · score: 6 (6 votes)
Telofy’s Introduction to Effective Altruism 2015-01-21T16:46:18.527Z · score: 7 (9 votes)

Comments

Comment by telofy on What are the leading critiques of "longtermism" and related concepts · 2020-05-30T21:57:12.292Z · score: 10 (6 votes) · EA · GW

“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.

Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking but has a narrower focus.

Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.

There are a number of heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there's only a small chance that Nick Bostrom's basic simulation argument is correct.

My thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.

Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.

Comment by telofy on [Stats4EA] Expectations are not Outcomes · 2020-05-19T12:11:00.135Z · score: 13 (6 votes) · EA · GW

I’ve found Christian Tarsney’s “Exceeding Expectations” insightful when it comes to recognizing and maybe coping with the limits of expected value.

The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.

See also the post/sequence by Daniel Kokotajlo, “Tiny Probabilities of Vast Utilities”. I’m linking to the post that was most valuable to me, but by default it might make sense to start with the first one in the sequence. ^^

Comment by telofy on Modelers and Indexers · 2020-05-16T10:50:21.672Z · score: 3 (2 votes) · EA · GW

Yeah, totally agree! The Birds and Frogs distinction sounds very similar! I’ve pocketed the original article for later reading.

And I also feel that the Adaptors–Innovators one is “may be slightly correlated but is a different thing.” :-)

Comment by telofy on Modelers and Indexers · 2020-05-16T10:36:26.113Z · score: 3 (2 votes) · EA · GW

Yes! I’ve been thinking about you a lot while I was writing that post because you yourself strike me as a potential counterexample to the usefulness of the distinction. I’ve seen you do exactly what you describe and generally display comfort in situations that indexers would normally be comfortable in, while at the same time you evidently have quite similar priorities to me. So either you break the model or you’re just really good at both! :-)

Comment by telofy on Modelers and Indexers · 2020-05-16T10:31:22.541Z · score: 3 (2 votes) · EA · GW

Thank you!

Yeah, that feels fitting to me too. I found these two posts on the term:

https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality
https://www.lesswrong.com/posts/j2mcSRxhjRyhyLJEs/what-is-social-reality

A lot of social things appear arbitrary even though deep down they must be deterministic. But bridging that gap is perhaps computationally infeasible, and it doesn’t lend itself to particularly powerful abstractions (except for intentionality). At the same time, though, the subject is much more inextricably integrated with the environment, so it makes more sense to model the environment as falling into intentional units (agents) who are reactive. And then maybe certain bargaining procedures emerged (because they were adaptive) that are now integrated into our psyche as customs and moral intuitions.

For these bargaining procedures, I imagine, it’ll be important to abstract usefully from specific situations to more general games. Then you can classify a new situation as one that either requires going through the bargaining procedure again or is a near-replication of a situation whose bargaining outcome you already have stored. That would require exactly the indexer type of abilities – abstracting from situations to archetypes and storing the archetypes.

(E.g., if you sell books, there’s a stored bargaining solution for that where you declare a price, and if it’s right, hand over the book and get the money for it, and otherwise keep the book and don’t get the money. But if you were the first to create a search engine that indexes the full text of books, there were no stored bargaining solutions for that and you had to go through the bargaining procedures.)

It also seems to me that there are people who, when in doubt, tend more toward running through the bargaining procedure, while others instead tend more toward observing and learning established bargaining solutions very well and maybe widening their reference classes for games. I associate the first a bit with entrepreneurs, low agreeableness, Australia, and the Pavlov strategy, and the second with me, agreeable friends of mine, Germany/Switzerland, and tit for tat.

Comment by telofy on Bored at home? Contribute to the EA Wiki! · 2020-05-01T13:35:15.557Z · score: 6 (3 votes) · EA · GW

I love the idea of wikis for EA knowledge, but is there an attempt underway yet to consolidate all the existing wikis, beyond the Wikia one? Maybe you can coordinate some data import with the other people who are running EA wikis.

When the Priority Wiki was launched, I and (much more so) John Maxwell compiled some of the existing wikis here.

I think for one of these wikis to take off, it’ll probably need to become the clear Schelling point for wiki activity – maybe an integration with the concepts platform or the forum and a consolidation of all the other wikis as a basis.

I imagine there’d also need to be a way for active wiki authors to gain reputation points, e.g., in this forum, so wiki editing can have added benefits for CV building. Less Wrong also has a forum and a wiki, and the forum runs very similar software, so maybe they already have plans for such a system.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-05-01T10:26:25.356Z · score: 3 (2 votes) · EA · GW

Oh yeah, I was also talking about it only from utilitarian perspectives. (Except for one aside, “Others again refuse it on deontological or lexical grounds that I also empathize with.”) Utilitarianism by itself just doesn’t prescribe an exchange rate of intensity/energy expenditure/… between individual positive experiences and individual negative experiences.

It seems that reasonable people think the outcome of B might actually be worse than A, based on your response.

Yes, I hope they do. :-)

Sorry for responding so briefly! I’m falling behind on some reading.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-04-25T20:21:45.190Z · score: 3 (2 votes) · EA · GW

I think I’m not well placed to answer that at this point and would rather defer that to someone who has thought about this more than I have from the vantage points of many ethical theories rather than just from my (or their) own. (I try, but this issue has never been a priority for me.) Then again this is a good exercise for me in moral perspective-taking or whatever it’s called. ^^

It seems C > B > A, with the difference between A and B greater than the difference between B and C.

In the previous reply I tried to give broadly applicable reasons to be careful about it, but those were mostly just from Precipice. My own reason is that if I ask myself, e.g., how long I would be willing to endure extreme torture to gain ten years of ultimate bliss (apparently a popular thought experiment), I might be ready to invest a few seconds, if any, for a tradeoff ratio of 1e7 or 1e8 to 1. So from my vantage point, the r-strategist style “procreation” is very disvaluable. It seems like it may well be disvaluable in expectation, but either way, it seems like an enormous cost to bear for a highly uncertain payoff. I’m much more comfortable with careful, K-strategist “procreation” on a species level. (Magnus Vinding has a great book coming out soon that covers this problem in detail.)

But assuming the agnostic position again, for practice, I suppose A and C are clear cut: C is overwhelmingly good (assuming the Long Reflection works out well and we successfully maximize what we really terminally care about, but I suppose that’s your assumption) and A is sort of clear because we know roughly (though not very viscerally) how much disvalue our ancestors have paid forward over the past millions of years so that we can hopefully eventually create a utopia.

But B is wide open. It may go much more negative than A even considering all our past generations – suffering risks, dystopian-totalitarian lock-ins, permanent prehistoric lock-ins, etc. The less certain it is, the more of this disvalue we’d have to pay forward to get one utopia out of it. And it may also go positive of course, almost like C, just with lower probability and a delay.

People have probably thought about how to spread self-replicating probes to other planets so that they produce everything a species will need at the destination to rebuild a flourishing civilization. Maybe there’ll be some DNA but also computers with all sorts of knowledge, and child-rearing robots, etc. ^^ But a civilization needs so many interlocking parts to function well – all sorts of government-like institutions, trust, trade, resources, … – that it seems to me like the vast majority of these civilizations either won’t get off the ground in the first place and remain locked in a probably disvaluable Stone Age type of state, or will permanently fall short of the utopia we’re hoping for eventually.

I suppose a way forward may be to consider the greatest uncertainties about the project – probabilities and magnitudes at the places where things can go most badly net negative or most awesomely net positive.

Maybe one could look into Great Filters (they may be less inevitable than I had previously thought), because if we are now past the (or a) Great Filter, and the Great Filter is something about civilization rather than something about evolution, we should probably assign a very low probability to a civilization like ours emerging under very different conditions through the probably very narrow panspermia bottleneck. I suppose this could be tested on some remote islands? (Ethics committees may object to that, but those objections apply even more strongly to untested panspermia, so they should be taken very seriously. Then again they may not have read Bostrom or Ord. Or Pearce, Gloor, Tomasik, or Vinding for that matter.)

Oh, here’s an idea: The Drake Equation has the parameter f_i for the probability that existing life develops (probably roughly human-level?) intelligence, f_c that intelligent life becomes detectable, and L for the longevity of the civilization. The probability that intelligent life creates a civilization with similar values and potential is probably a bit less than f_c (these civilizations could have any moral values) but more than the product of the two fs. The paper above has a table that says “f_i: log-uniform from 0.001 to 1” and “f_c: log-uniform from 0.01 to 1.” So I suppose we have some 2–5 orders of magnitude uncertainty from this source.

The longevity of a civilization is “L: log-uniform from 100 to 10,000,000,000” in the paper. An advanced civilization that exists for 10–100k years may be likely to have passed the Precipice… Not sure at all about this because of the risk of lock-ins. And I’d have to put this distribution into Guesstimate to get a range of probabilities out of this. But it seems like a major source of uncertainty too.
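Roughly the kind of calculation I have in mind – a minimal sketch under the paper’s log-uniform ranges, where the 10,000-year threshold for “having passed the Precipice” is just my illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(low, high, size):
    # Sample from a log-uniform distribution on [low, high].
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

f_i = log_uniform(0.001, 1, n)  # existing life develops intelligence
f_c = log_uniform(0.01, 1, n)   # intelligent life becomes detectable
L = log_uniform(100, 1e10, n)   # longevity of the civilization in years

# Bracket the "similar values and potential" factor between f_i * f_c and f_c,
# as in the reasoning above, and look at the spread.
print(np.percentile(f_i * f_c, [5, 50, 95]))
print(np.percentile(f_c, [5, 50, 95]))

# Fraction of sampled civilizations that last at least 10,000 years
# (the illustrative threshold above): about 0.75 under this prior.
print((L >= 1e4).mean())
```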

The ethical tradeoff question above feels almost okay to me with a 1e8 to 1 tradeoff but others are okay with a 1e3 or 1e4 to 1 tradeoff. Others again refuse it on deontological or lexical grounds that I also empathize with. It feels like there are easily five orders of magnitude uncertainty here, so maybe this is the bigger question. (I’m thinking more in terms of an optimal compromise utility function than in moral realist terms, but I suppose that doesn’t change much in this case.)

In the best case within B, there’s also the question of whether the delay compared to C will be thousands or tens of thousands of years, and how much that would shrink the cosmic endowment.

I don’t trust myself to be properly morally impartial about this after such a cursory investigation, but that said, I would suppose that most moral systems would put a great burden of proof on the intervention because it can be so extremely good and so extremely bad. But tackling these three to four sources of uncertainty and maybe others can perhaps shed more light on how desirable it really is.

I empathize with the notion that some things can’t wait until the Long Reflection, at least as part of a greater portfolio, because it seems to me that suffering risks (s-risks) are a great risk (in expectation) even or especially now in the span until the Long Reflection. They can perhaps be addressed through different and more tractable avenues than other longterm risks and by researchers with different comparative advantages.

A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is does the difference between this case and C surpass that between A and B. Call it D. Is D so much worse than C that a preferred loss is from B to A? I don’t think so.

Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?

So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?

There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risks and benefits are there. It feels like that came up in Precipice too, but I could be mixing something up.

Comment by telofy on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-04-25T11:40:19.279Z · score: 9 (6 votes) · EA · GW

Such an effort would likely be irreversible or at least very slow and costly to reverse. It would come at an immense cost in option value.

Directed panspermia also bears greater average-case risks than a controlled expansion of our civilization because we’ll have less control over the functioning and the welfare standards of the civilization (if any) and thus the welfare of the individuals at the destination.

Toby Ord’s Precipice more or less touches on this:

Other bold actions could pose similar risks, for instance spreading out beyond our Solar System into a federation of independent worlds, each drifting in its own cultural direction.

This is not to reject such changes to the human condition—they may well be essential to realizing humanity’s full potential. What I am saying is that these are the kind of bold changes that would need to come after the Long Reflection. Or at least after enough reflection to fully understand the consequences of that particular change. We need to take our time, and choose our path with great care. For once we have existential security we are almost assured success if we take things slowly and carefully: the game is ours to lose; there are only unforced errors.

Absent the breakthroughs the Long Reflection will hopefully bring, we can’t even be sure that the moral value of a species spread out across many solar systems will be positive even if its expected aggregate welfare is positive. They may not be willing to trade suffering and happiness 1:1.

I could imagine benefits in scenarios where Earth gets locked into some small-scale, stable, but undesirable state. Then there’d still be a chance that another civilization emerges elsewhere and expands to reclaim the space around our solar system. (If they become causally disconnected from us before they reach that level of capability, they’re probably not so different from any independently evolved life elsewhere in the universe.) But that would come at a great cost.

The approach seems similar to that of r-strategist species that have hundreds of offspring of which on average only two survive. These are thought to be among the major sources of disvalue in nature. In the case of directed panspermia we could also be sure of the high degree of phenomenal consciousness of the quasi-offspring so that the expected disvalue would be even greater than in the case of the r-strategist species where many of the offspring die while they’re still eggs.

In most other scenarios, risks are either on a planetary scale or they persist so long as there aren’t any clusters that are so far apart as to be causally isolated. So in those scenarios, an expansion beyond our solar system would buy minimal risk reduction. That can be achieved at a much lesser cost in terms of expected disvalue.

So I’d be more comfortable deferring such grave and near-irreversible decisions to future generations that have deliberated all their aspects and implications thoroughly and even-handedly for a long time and have reached a widely shared consensus.

Comment by telofy on Denis Drescher's Shortform · 2020-04-23T15:44:50.839Z · score: 3 (2 votes) · EA · GW

[“If you value future people, why do you consider near term effects?” by Alex HT: Personal takeaways.]

I find it disconcerting that there are a lot of very smart people in the EA community who focus more on near-term effects than I currently find reasonable.

“If you value future people, why do you consider near term effects?” by Alex HT makes the case that a lot of reasons to focus on near-term effects fall short of being persuasive. The case is based centrally on complex cluelessness. It closes with a series of possible objections and why they are not persuasive. (Alex also cites the amazing article “Growth and the case against randomista development.”)

The article invites a discussion, and Michael St. Jules responded by explaining the shape of a utility function (bounded above and below) that would lead to a near term focus and why it is a sensible utility function to have. This seems to be a common reason to prefer near-term interventions, judging by the number of upvotes.

There are also hints in the discussion of whether there may be a reason to focus on near-term effects as a Schelling point in a coordination problem with future generations. But that point is not fully developed, and I don’t think I could steelman it.

I’ve heard smart people argue for the merits of bounded utility functions before. They have a number of merits – avoiding Pascal’s mugging, the St. Petersburg game, and more. (Are there maybe even some benefits for dealing with infinite ethics?) But they’re also awfully unintuitive to me.

Besides, I wouldn’t know how to select the right parameters for it. With some parameters, it’ll be nearly linear in a third-degree-polynomial increase in aggregate positive or negative valence over the coming millennium, and that may be enough to prefer current longtermist over current near-termist approaches.
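To illustrate what I mean by the parameter choice, here’s a toy sketch with an arctan-shaped bounded utility function – the functional form and the scale parameters are purely illustrative assumptions of mine, not anyone’s actual proposal:

```python
import numpy as np

def bounded_utility(value, scale):
    # A toy utility function that is bounded above and below and
    # approximately linear while |value| is much smaller than scale.
    return np.arctan(value / scale)

years = np.arange(0, 1001)
valence = years ** 3  # aggregate valence growing as a third-degree polynomial

# With a large scale parameter, utility stays nearly linear in valence
# over the whole millennium ...
print(np.corrcoef(valence, bounded_utility(valence, 1e12))[0, 1])  # close to 1

# ... while with a small scale parameter it saturates early, which would
# blunt the case for options whose payoff lies mostly in that tail.
print(np.corrcoef(valence, bounded_utility(valence, 1e6))[0, 1])  # well below 1
```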

Related: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

Comment by telofy on (How) Could an AI become an independent economic agent? · 2020-04-05T11:42:11.431Z · score: 2 (1 votes) · EA · GW

I’ve read some documents where the developers of a cryptocurrency were worried that it might become possible to restore a lot of lost crypto that no one currently has access to – presumably because it might lead to inflation. I don’t remember where I read it or what the concrete concerns were. Maybe someone with more blockchain knowledge can fill in the details.

Comment by telofy on Good Done Right conference · 2020-02-05T19:06:01.937Z · score: 4 (3 votes) · EA · GW

Great sleuthing! I remember my surprise in late 2014, months after the conference, when I found out that something with such a roster of speakers had happened, and I hadn’t noticed it. Luckily I also found that invaluable Soundcloud account. :-D

Comment by telofy on Coordinating Commitments Through an Online Service · 2020-01-12T14:06:37.798Z · score: 2 (1 votes) · EA · GW

I love this idea! Of course you’d want to test the demand for it cheaply. Maybe there is already a Kickstarter-like platform where you need to meet a minimum number of contributors rather than just a minimum total contribution (or a maximum contribution per contributor). Then you could just use that platform for a test run. If not among Kickstarter-like platforms, then maybe among petition platforms? Or you could repurpose a mere Mailchimp newsletter signup for this! You could style it to look like a solemn signing of a conditional pledge.

If there is such a platform, you could see if you can get a charity like Animal Equality on board with the experiment, one that has a substantial online audience. (They’ll also be happy about the newsletter signups, though that should require a separate opt-in.)

Finally, you could run quick anonymous surveys of the participants: What did they do, and what would they have done without the campaign? Perhaps one after a month and one after a year or so. (It would also be interesting again to follow up after several years because of vegan recidivism, which usually sets in after around 7 years afaik.)

Maybe you can even do all of that without any coding.

Comment by telofy on What things do you wish you discovered earlier? · 2019-09-18T23:00:16.165Z · score: 8 (6 votes) · EA · GW

My latest breakthrough: Text-to-speech readers set to a high reading rate. (Plus cheap and good-quality Mpow Bluetooth headphones.)

Now I effortlessly plow through my reading list at up to 400 words per minute during commutes. My micromorts/km are lower too since I don't need to keep my eyes glued to the Kindle.

I started out at lower rates and then gradually increased them over the course of a few weeks as the lower rates started to feel slow.

I mostly use @Voice Aloud Reader and Pocket for Android.

PS: Thanks for your recommendations, especially Otter!

Comment by telofy on Ask Me Anything! · 2019-08-20T18:47:11.421Z · score: 12 (7 votes) · EA · GW

Comment by telofy on Ask Me Anything! · 2019-08-20T18:28:05.075Z · score: 10 (5 votes) · EA · GW

Do I understand you correctly that you’re relatively less worried about existential risks because you think they are less likely to be existential (that civilization will rebound) and not because you think that the typical global catastrophes that we imagine are less likely?

Comment by telofy on Ask Me Anything! · 2019-08-20T18:25:53.008Z · score: 12 (6 votes) · EA · GW

I’d like to vote for more detail on:

I find (non-extinction) trajectory change more compelling as a way of influencing the long-run future than I used to.

Unless the change in importance is fully explained by the relative reprioritization after updating downward on existential risks.

Comment by telofy on Seeking feedback on Cause Prioritization Platform · 2019-08-14T16:03:45.150Z · score: 3 (2 votes) · EA · GW

Cool idea!

A few things I just noticed:

  1. It would be nice if the reordering happened only on reload and not immediately. I sometimes ticked an “importance” box fully intending to also tick another one, but then the block jumped away, and I’d have to go looking for it again.
  2. I second Saulius’s feedback: I also feel like it’d be easier for me to rate things on a three- or five-point scale or even with a slider. As it is, I’m tempted to tick almost all the boxes on all problems.
  3. I really like your initial selection. I had a hard time voting on animal welfare because my assessments of farmed and wild animal welfare are so different. Then I noticed that you had anticipated my problem and added the subsections!
  4. The term tractable seems more intuitive to me than solvable, since the second can be misunderstood as whether the problem can be fully solved as opposed to the definition that Saulius cites.
  5. I’d like to make a distinction (and maybe a granular one) between whether I think a problem is not important/neglected/tractable or whether I have no opinion on it. E.g., I currently don’t know whether the Pain Gap is tractably addressable, but not ticking the box feels as if I were saying that it’s not tractable.
  6. Not showing other people’s ratings could help avoid anchoring people.

Comment by telofy on What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) · 2019-08-05T10:12:46.265Z · score: 5 (3 votes) · EA · GW

A remote research organization for funding and coordinating ongoing individual remote research efforts.

I’d be interested in this! In terms of coordination, it could (1) pool information on what organizations and individuals are already working on to allow cooperation and mentorship, and avoid duplication unless duplication is useful; (2) give guidance on prioritization of research ideas; and (3) provide a curated, accessible, and well categorized outlet (e.g., blog) for researchers that is sufficiently high-quality that a lot of people read it.

Comment by telofy on A vision for anthropocentrism to supplant wild animal suffering · 2019-06-06T18:39:17.701Z · score: 3 (2 votes) · EA · GW

A major factor in my calculus is also that, given enough time, it’ll probably become much more feasible to send non-biological life (which needs no terraforming) to planets outside of our solar system than to send biological life, so that even in terraforming scenarios the terraforming will probably be limited to just planets within the solar system.

Comment by telofy on Is visiting North Korea effective? · 2019-04-06T12:48:39.522Z · score: 11 (4 votes) · EA · GW

I’ve written a comparative article on plausible interventions for human rights in North Korea. The activists I interviewed had already considered running campaigns to discourage travel to North Korea because tourism is an important source of foreign currency for the government. (They can force their citizens to stage North Korean life for tourists while paying them in their worthless national currency, so that they make a large profit on tourism.)

To my knowledge, these activists never pursued that strategy because it may be an attention hazard and thus actually increase tourism, and because it might strain relationships with organizations that think that tourists may show North Koreans that other ways of life are possible. But I find that implausible because almost no one is allowed to travel within North Korea (and tourists are even more tightly controlled and restricted), so it’s always only the same most loyal North Koreans who come into contact with tourists.

But I discuss other more promising interventions in the article. For more detailed, reliable, and up-to-date information you can get in touch with, e.g., Saram as I’m not myself active in the space.

Comment by telofy on Tool recommendation: Polar personal knowledge repository · 2019-04-02T12:12:38.241Z · score: 3 (2 votes) · EA · GW

Very promising! They have plans to create a mobile client, and maybe the web version will also eventually support HTML and ebook formats. Looking forward to that!

Comment by telofy on Potential funding opportunity for woman-led EA organization · 2019-03-15T17:25:16.938Z · score: 2 (1 votes) · EA · GW

CFAR and Encompass (https://encompassmovement.org/) might also fit the bill? Maybe also some (other) EA meta-charities whose current team configuration I don't remember well enough.

Comment by telofy on Three Biases That Made Me Believe in AI Risk · 2019-02-22T12:17:40.725Z · score: 4 (3 votes) · EA · GW

I’d particularly appreciate an updated version of “Astronomical waste, astronomical schmaste” that disentangles the astronomical waste argument from arguments for the importance of AI safety. The current one makes it hard for me to engage with it because I don’t go along with the astronomical waste argument at all but am still convinced that a lot of projects under the umbrella of AI safety are top priorities, because extinction is considered bad by a wide variety of moral systems irrespective of astronomical waste, and particularly in order to avert s-risks, which are also considered bad by all moral systems I have a grasp on.

Comment by telofy on Small animals have enormous brains for their size · 2019-02-22T12:07:06.020Z · score: 5 (2 votes) · EA · GW

This is fascinating! I’ve heard (though it may well be bunk) that intelligence in humans is somewhat correlated with brain size but that the brain size is limited by the size of the birth canal. (Which made me think that c-section should lead to smarter people in the long run.) But if there’s still so much room for optimization left without changing the brain size, does that merely indicate that the changes would take too many mutations to be likely to happen (sort of why we still have our weird eye architecture when other animals have straightforward eyes) or that a lot of human thinking happens at a lower abstraction level than that of the neuron so that, e.g., whole brain emulation at a neuronal level would be destined to fail?

Comment by telofy on are values a potential obstacle in scaling the EA movement? · 2019-01-04T10:09:21.850Z · score: 2 (1 votes) · EA · GW

Seminal for me has been Owen Cotton-Barratt’s paper “How valuable is movement growth?” I therefore welcome the shift toward very careful (if any) growth that has happened over the past years. Today I think of the EA community as a startup of sorts that tries to hire slowly and selects staff carefully based on culture fit, character, commitment, etc.

Comment by telofy on Announcing the EA donation swap system · 2018-12-22T13:00:44.731Z · score: 3 (2 votes) · EA · GW

Hi! Thank you! That sounds good (“Charity X” would be a free text field?), but I don’t know whether there are other problems it doesn’t address. To guard against that, an FAQ entry explaining the problem would be best. Generally that’ll be needed because this is probably unintuitive for many people (like me), so even if they have the information about the swap counterfactual, they may not be able to use it optimally without an explanation.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-22T12:40:08.482Z · score: 5 (5 votes) · EA · GW

Hmm, yeah, curious as well. Maybe it’s because I link long essays without summarizing them, so people are left wondering whether the essays are relevant enough to be worth reading.

But apart from the link to Simon’s reply, Kaj’s comment is much better than mine anyway.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T15:00:08.955Z · score: 4 (17 votes) · EA · GW

Wow! Thank you again for another amazing overview! :-D

With regard to the FRI section: Here is a reply to Toby Ord by Simon Knutsson and another piece that seems related. (And by “suffering focus,” people are referring to something much broader than NU, which may be true of some CUs too.)

Comment by telofy on EA Global Lightning Talks (San Francisco 2018) · 2018-12-03T17:36:31.293Z · score: 2 (1 votes) · EA · GW

Very interesting talks – thank you! For me, especially Philip Trammell’s talk.

Comment by telofy on Announcing the EA donation swap system · 2018-12-02T12:18:15.217Z · score: 3 (2 votes) · EA · GW

Thank you for creating this! I want to understand some possible risks to my value system better. So here’s one scenario that I’ve been thinking about.

I realize that it’s a trust system, but if Donor A trusts Donor B on something that isn’t clear enough to Donor A to even ask about, and that’s so unremarkable to Donor B that they see no more reason to tell Donor A about it than about their espresso preferences, then no one is really at fault if they miscommunicate.

Say Donor A:

  1. is neutral between Rethink Priorities (RP) and the Against Malaria Foundation (AMF) (but Donor B doesn’t know this),
  2. can get tax exemption for a donation to RP but not AMF, and
  3. wants to donate $2k.

And Donor B:

  1. values a dollar to RP more than 100 times as highly as one to AMF (but Donor A doesn’t know this),
  2. can get tax exemption for a donation to AMF but not RP, and
  3. wants to donate $1k.

Without donation swap:

  1. Donor A is perfectly happy, and donates $2k to RP because of the tax exemption as tie breaker (but even if they split the donation 50:50 or donate with 50% probability, this case is still problematic).
  2. Donor B is a bit sad but donates, say, $850 to RP, which comes down to the same cost to them because of the missing tax break.
  3. In effect: RP gains $2,850 and AMF gains $0. Both donors are reasonably happy with this result, the only wrinkle being the taxes.

But with donation swap:

  1. Donor A loves helping their fellow EAs and so offers a swap even though they don’t personally need it.
  2. Donor B enthusiastically takes them up on the offer to save the taxes, donates $1k to AMF, and Donor A donates $1k to RP. Later, Donor A donates their remaining $1k to RP.
  3. In effect: RP gains $2k and AMF gains $1k. Slightly positive for Donor A but big loss for Donor B.
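To put the comparison in Donor B’s terms, here’s a minimal sketch of the arithmetic, using the 100:1 valuation and dollar amounts from the scenario above; the “AMF-dollar equivalents” unit is just my way of comparing the two outcomes:

```python
def value_to_donor_b(rp_total, amf_total, rp_weight=100):
    # Donor B values a dollar to RP rp_weight times as highly as one to AMF,
    # so express each outcome in AMF-dollar equivalents.
    return rp_weight * rp_total + amf_total

without_swap = value_to_donor_b(rp_total=2850, amf_total=0)
with_swap = value_to_donor_b(rp_total=2000, amf_total=1000)

print(without_swap)  # 285000
print(with_swap)     # 201000 – a big loss for Donor B despite their tax saving
```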

This seems like a plausible scenario to me but there are other scenarios that are less extreme but still very detrimental to one side and possibly even harder to spot.

So am I overlooking something that alleviates this worry or do donors have to know (commit) and be transparent about where they will donate if no swap happens, in order for the other party to know whether they should take them up on the offer?

Comment by telofy on Which World Gets Saved · 2018-11-11T13:05:10.047Z · score: 6 (5 votes) · EA · GW

Very interesting! This strikes me as a particular type of mission hedging, right?

Comment by telofy on Announcing the EA Angel Group - Seeking EAs with the Time and Money to Evaluate and Fund Early-Stage Grants · 2018-10-18T19:24:19.019Z · score: 2 (2 votes) · EA · GW

I'd love to subscribe to a blog where you publish what grants you've recommended. Are you planning to run something like that?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-09-02T16:27:50.387Z · score: 0 (0 votes) · EA · GW

Oh, cool! I'm reading that study at the moment. I'll be able to say more once I'm through. Then I'll turn to your article. Sounds interesting!

Comment by telofy on A Critical Perspective on Maximizing Happiness · 2018-08-02T06:05:37.754Z · score: 7 (7 votes) · EA · GW

Thank you for starting that discussion. Some resources that come to mind that should be relevant here are:

  • Lukas Gloor’s concept of Tranquilism,
  • different types of happiness (a talk by Michael Plant where I think I heard them explained), and
  • the case for the relatively greater moral urgency and robustness of suffering minimization over happiness maximization, i.e., a bit of a focus on suffering.

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T11:02:22.218Z · score: 2 (2 votes) · EA · GW

I’m against it. ;-)

Just kidding. I think monopolies and competition are bundles of advantages and disadvantages that we can also combine differently. Competition comes with duplication of effort, sometimes with sabotaging the other rather than improving oneself, and some other problems. A monopoly would come with the local optima problem you mentioned. But we can also acknowledge (as we do in many other fields) that we don’t know how to run the best wiki, and have different projects try out different plausible strategies, not out of self-interest but for the value of the information from the experiment. So they can work together, automatically synchronize any content that can be synchronized, etc. We’ll first need meaningful differences between the projects that will be worthwhile to test out, e.g., restrictive access vs. open access.

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T10:52:50.979Z · score: 0 (0 votes) · EA · GW

That would be an immensely valuable meta problem to solve!

Then maybe we can have a wiki which reaches "meme status".

On a potentially less serious note, I wonder if one could make sure that a wiki remains popular by adding a closed section to it that documents particular achievements from OMFCT the way Know Your Meme does. xD

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-19T15:33:59.753Z · score: 16 (15 votes) · EA · GW

Sweet! I hope it’ll become a great resource! Are you planning to merge it with https://causeprioritization.org/? If there are too many wikis, we’d just run into the same problem with fragmented bits of information again.

Comment by telofy on Lessons for estimating cost-effectiveness (of vaccines) more effectively · 2018-06-08T09:59:21.506Z · score: 1 (1 votes) · EA · GW

Thank you! I suspect this is going to be very helpful for me.

Comment by telofy on Introducing Charity Entrepreneurship: an Incubation and Research Program for New Charities · 2018-06-08T08:00:30.167Z · score: 1 (1 votes) · EA · GW

Awesome! Do you also have plans to assist EA founders of for-profit social enterprises (like e.g. Wave)?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-03-30T08:30:39.866Z · score: 0 (0 votes) · EA · GW

Awesome, thank you!

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-25T15:05:07.194Z · score: 1 (1 votes) · EA · GW

Hi Jeff!

To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-03-25T09:49:59.130Z · score: 0 (0 votes) · EA · GW

Cool, thank you! Have you written about direct chemical synthesis of food or can you recommend some resources to me?

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-16T22:43:32.380Z · score: 1 (1 votes) · EA · GW

Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the other one: not that there is something linking the experiences of the five people but that there is very little, and nothing that seems very morally relevant, linking the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones.

In your first reply to Michael, you indicate that the third one, memories, is important to you, but in themselves I don’t feel that they confer moral importance in this sense. What you mean, though, may be that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that, in my case especially with itches, and I think I’ve read that some estimates of DALY disability weights also take that into account.

But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.)

So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life but no connection and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-15T12:32:26.563Z · score: 0 (0 votes) · EA · GW

Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people.

It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, more collectivist culture leading to more strongly discounted aggregation and more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an eastern European country, hence that hypothesis.

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-13T19:20:08.838Z · score: 3 (3 votes) · EA · GW

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than one of them suffering, then you would also not care more about them, when they’re your own person moments, right? You’re extremely similar to your version a month or several months ago, probably more similar than you are to any other person in the whole world. So if you’re suffering for just a moment, it would be no better than being suffering for an hour, a day, a month, or any longer multiple of that moment. And if you’ve been happy for just a moment sufficiently recently, then close to nothing more can be done for you for a long time.

I imagine that fundamental things like that are up to the subjectivity of moral feelings – so close to the axioms, it’s hard to argue with even more fundamental axioms. But I for one have trouble empathizing with a nonaggregative axiology at least.

Comment by telofy on Cause prioritization for downside-focused value systems · 2018-02-07T14:02:59.976Z · score: 3 (3 votes) · EA · GW

Just FYI, Simon Knutsson has responded to Toby Ord.

Comment by Telofy on [deleted post] 2018-01-12T12:07:41.713Z

Thanks!

Comment by telofy on Four Organizations EAs Should Fully Fund for 2018 · 2018-01-12T12:01:32.089Z · score: 1 (1 votes) · EA · GW

Sophie and Meret will know more, but from what I’ve heard, they’re pretty much on board with it because it will shift demand toward them. I can point Sophie to this thread if you’d like a more detailed or reliable answer than mine. ;-)

Comment by Telofy on [deleted post] 2018-01-04T08:16:45.733Z

What happened to this post? Is there another place where it is being discussed? It sounds very interesting. Thanks!