Posts

The Most Important Century: The Animation 2022-07-24T22:34:47.897Z
Preventing a US-China war as a policy priority 2022-06-22T18:07:01.444Z
My current thoughts on the risks from SETI 2022-03-15T17:17:43.804Z
A proposal for a small inducement prize platform 2021-06-05T19:06:39.791Z
Matthew_Barnett's Shortform 2020-03-02T05:03:33.053Z
Effects of anti-aging research on the long-term future 2020-02-27T22:42:40.043Z
Concerning the Recent 2019-Novel Coronavirus Outbreak 2020-01-27T05:47:34.546Z

Comments

Comment by Matthew_Barnett on Hobbit Manifesto · 2022-08-30T20:54:17.768Z · EA · GW

One consideration is that, in the long run, uploading people onto computers would probably squeeze far more value out of each atom than making people into hobbits. In that case, the housing stock would be multiplied by orders of magnitude, since people can be stored in server rooms. Assuming uploaded humans aren't retired, economic productivity would be a lot higher too.

Comment by Matthew_Barnett on The Most Important Century: The Animation · 2022-07-25T21:10:39.254Z · EA · GW

Which is why I'm way more convinced by Gary Marcus' examples than by e.g. Scott Alexander. I don't think they need to be able to describe "true understanding" to demonstrate that current AI is far from human capabilities.

My impression is that this debate is mostly people talking past each other. Gary Marcus will often say something to the effect of, "Current systems are not able to do X". The other side will respond with, "But current systems will be able to do X relatively soon." People will act like these statements contradict, but they do not.

I recently asked Gary Marcus to name a set of concrete tasks he thinks deep learning systems won't be able to do in the near-term future. Along with Ernie Davis, he replied with a set of mostly vague and difficult-to-operationalize tasks, collectively constituting AGI, which he thought would not be accomplished by the end of 2029 (with no probability attached).

While I can forgive people for being a bit vague, I'm not impressed by the examples Gary Marcus offered. All of the tasks seem like the type of thing that could easily be conquered by deep learning given enough trial and error, even if the 2029 deadline is too aggressive. I have yet to see anyone -- either Gary Marcus, or anyone else -- name a credible, specific reason why deep learning will fail in the coming decades. Why exactly, for example, do we think that it will stop short of being able to write books (when it can already write essays), or that it will stop short of being able to write 10,000 lines of code (when it can already write 30 lines of code)?

Now, some critiques of deep learning seem right: it's currently too data-hungry, and large training runs are very costly, for example. But of course, these objections only tell us that there might be some even more efficient paradigm that brings us AGI sooner. They're not a good reason to expect AGI to be centuries away.

Comment by Matthew_Barnett on The Most Important Century: The Animation · 2022-07-25T20:08:31.438Z · EA · GW

Why the thought that AGI is theoretically possible should make us expect it from the current paradigm (my impression is that most researchers don't expect that, and that's why their survey answers are so volatile with slight changes in phrasing)

Holden Karnofsky does discuss this objection in his blog post sequence,

The argument I most commonly hear that it is "too aggressive" is along the lines of: "There's no reason to think that a modern-methods-based AI can learn everything a human does, using trial-and-error training - no matter how big the model is and how much training it does. Human brains can reason in unique ways, unmatched and unmatchable by any AI unless we come up with fundamentally new approaches to AI." This kind of argument is often accompanied by saying that AI systems don't "truly understand" what they're reasoning about, and/or that they are merely imitating human reasoning through pattern recognition.

I think this may turn out to be correct, but I wouldn't bet on it. A full discussion of why is outside the scope of this post, but in brief:

  • I am unconvinced that there is a deep or stable distinction between "pattern recognition" and "true understanding" (this Slate Star Codex piece makes this point). "True understanding" might just be what really good pattern recognition looks like. Part of my thinking here is an intuition that even when people (including myself) superficially appear to "understand" something, their reasoning often (I'd even say usually) breaks down when considering an unfamiliar context. In other words, I think what we think of as "true understanding" is more of an ideal than a reality.
  • I feel underwhelmed with the track record of those who have made this sort of argument - I don't feel they have been able to pinpoint what "true reasoning" looks like, such that they could make robust predictions about what would prove difficult for AI systems. (For example, see this discussion of Gary Marcus's latest critique of GPT3, and similar discussion on Astral Codex Ten).
  • "Some breakthroughs / fundamental advances are needed" might be true. But for Bio Anchors to be overly aggressive, it isn't enough that some breakthroughs are needed; the breakthroughs needed have to be more than what AI scientists are capable of in the coming decades, the time frame over which Bio Anchors forecasts transformative AI. It seems hard to be confident that things will play out this way - especially because:
    • Even moderate advances in AI systems could bring more talent and funding into the field (as is already happening).
    • If money, talent and processing power are plentiful, and progress toward PASTA is primarily held up by some particular weakness of how AI systems are designed and trained, a sustained attempt by researchers to fix this weakness could work. When we're talking about multi-decade timelines, that might be plenty of time for researchers to find whatever is missing from today's techniques.

More generally, even if AGI is not developed via the current paradigm, I think it is still a useful exercise to predict when we could, in principle, develop AGI via deep learning. That's because, if some more efficient paradigm takes over in the coming years, it could make AGI arrive even sooner than we expect, rather than later.

Comment by Matthew_Barnett on The Most Important Century: The Animation · 2022-07-25T19:47:54.457Z · EA · GW

What operative conclusion can be drawn from the "importance" of this century. If it turned out to be only the 17th most important century, would that affect our choices?

One major implication is that we should spend our altruistic and charitable money now, rather than putting it into a fund and investing it to be spent much later. The main alternative to this view is the one taken by the Patient Philanthropy Project, which invests money until such time as there is an unusually good opportunity.

Comment by Matthew_Barnett on Preventing a US-China war as a policy priority · 2022-06-23T09:41:33.730Z · EA · GW

My assessment is that actually the opposite is true.

The argument you presented appears excellent to me, and I've now changed my mind on this particular point.

Comment by Matthew_Barnett on Preventing a US-China war as a policy priority · 2022-06-23T09:07:25.365Z · EA · GW

Thanks. I don’t agree with your interpretation of the survey data. I'll quote another sentence from the essay that makes my statement on this clearer,

The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense.

The position "declare independence as soon as possible" is unpopular for an obvious reason that I explained in the post. Namely, if Taiwan made a formal declaration of independence, it would potentially trigger a Chinese invasion.

"Maintaining the status quo" is, for the most part, code for maintaining functional independence, which is popular, because as you said, "It means peace and prosperity, and it has been surprisingly stable over the last 70 years." This is what I meant by saying the Taiwanese "want to be their own nation instead, indefinitely" in the sentence you quoted, because I was talking about what's actually practically true, not just what's true on paper. 

I'll note that if you add up the percentage of people who want to maintain the status quo indefinitely, and those who want to maintain the status quo but move towards independence, it sums to 52.4%. It goes up to 58.4% if you include people who want to declare independence as soon as possible.

I admit my wording sucked, but I think what I said basically matches the facts on the ground, if not the literal survey data you quoted, in the sense that there is almost no political will right now to reunify with China (at least until they meet some hypothetical conditions, which they probably won't any time soon).

Comment by Matthew_Barnett on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-19T20:31:48.581Z · EA · GW

I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?

While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and the issue has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he had been the first person to say that we should take nuclear war seriously, and then, five years later, people started building nuclear bombs and academics realized that nuclear war was very plausible.

Comment by Matthew_Barnett on How much current animal suffering does longtermism let us ignore? · 2022-04-22T23:03:16.981Z · EA · GW

What I view as the Standard Model of Longtermism is something like the following:

  • At some point we will develop advanced AI capable of "running the show" for civilization on a high level
  • The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
  • One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
  • To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.

This model doesn't predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they'll make it look a bit different than it otherwise would.

Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.

Comment by Matthew_Barnett on How much current animal suffering does longtermism let us ignore? · 2022-04-22T22:45:11.560Z · EA · GW

I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I think this statement isn't actually true, though I agree if you just mean that, pragmatically, most longtermists aren't suffering-focused.

Hilary Greaves and William MacAskill loosely define strong longtermism as, "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist, or a traditional welfarist in line with Jeremy Bentham. It's entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Or put another way, there are no major axiological commitments involved in being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.

Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a standard negative utilitarian one. But it's still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself both.

Comment by Matthew_Barnett on How much current animal suffering does longtermism let us ignore? · 2022-04-22T18:27:25.731Z · EA · GW

There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.

I agree, there is already a lot of human suffering that longtermists de-prioritize. More concrete examples include,

  • The 0.57% of the US population that is imprisoned at any given time this year. (This might even be more analogous to battery cages than slavery).
  • The 25.78 million people who live under the totalitarian North Korean regime.
  • The estimated 27.2% of the adult US population who live with more than one of these chronic health conditions: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, and weak or failing kidneys.
  • The nearly 10% of the world population who lives in extreme poverty, which is defined as a level of consumption equivalent to less than $2 of spending per day, adjusting for price differences between nations.
  • The 7 million Americans who are currently having their brain rot away, bit by bit, due to Alzheimer's and other forms of dementia. Not to mention their loved ones who are forced to witness this.
  • The 6% of the US population who experienced at least one major depressive episode in the last year.
  • The estimated half a million people who are homeless in the United States.
  • The significant fraction of people who have profound difficulties with learning and performing work, and who disproportionately live in poverty and are isolated from friends and family.

Comment by Matthew_Barnett on Critique of OpenPhil's macroeconomic policy advocacy · 2022-03-25T03:56:00.629Z · EA · GW

I want to understand the main claims of this post better. My understanding is that you have made the following chain of reasoning:

  1. OpenPhil funded think tanks that advocated looser macroeconomic policy since 2014.
  2. This had some non-trivial effect on actual macroeconomic policy in 2020-2022.
  3. The result of this policy was to contribute to high inflation.
  4. High inflation is bad for two reasons: (1) real wages decline, especially among the poor, (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.
  5. Therefore, OpenPhil should not make similar grants in the future.

I'm with you on claims 1, 2, and 3. I'm not sure about 4 and 5. Let me focus on my confusions with claim 4.

In another comment, I pointed out that it wasn't clear to me that inflation hurts low-wage workers by a substantial margin. Maybe the sources I cited there were poor, but it doesn't seem like there's a consensus about this issue to my (untrained) eyes.

The fact that prediction markets currently indicate that Republicans have an edge in the midterm elections is not surprising. FiveThirtyEight says, "One of the most ironclad rules in American politics is that the president’s party loses ground in midterm elections." The only modern exception to this rule was the 2002 midterm election, in which Republicans gained seats because of 9/11.

If we look at ElectionBettingOdds, it appears that the main shock that pushed the markets in favor of a Republican win was the election last year. (see Senate, and House forecasts). It's harder to see Republicans gaining due to inflation in the data (though I agree they probably did). EDIT: OK I think it's more clear to me now that the spike in the House forecast in May 2021 was probably due to inflation concerns.

Comment by Matthew_Barnett on Critique of OpenPhil's macroeconomic policy advocacy · 2022-03-25T03:22:41.147Z · EA · GW

More voters have seen their real wages go down than up (mostly in the lower income brackets).

What is your source for this claim? By contrast, this article says,

Between roughly 56 and 57 percent of occupations, largely concentrated in the bottom half of the income distribution, are seeing real hourly wage increases.


And they show a chart illustrating this (not reproduced here).

Here's another article that cites economists saying the same thing.

Comment by Matthew_Barnett on How we failed · 2022-03-25T03:13:55.956Z · EA · GW

Here's a quote from Wei Dai, speaking on February 26th 2020,

Here's another example, which has actually happened 3 times to me already:

  1. The truly ignorant don't wear masks.
  2. Many people wear masks or encourage others to wear masks in part to signal their knowledge and conscientiousness.
  3. "Experts" counter-signal with "masks don't do much", "we should be evidence-based" and "WHO says 'If you are healthy, you only need to wear a mask if you are taking care of a person with suspected 2019-nCoV infection.'"
  4. I respond by citing actual evidence in the form of a meta-analysis: medical procedure masks combined with hand hygiene achieved RR of .73 while hand hygiene alone had a (not statistically significant) RR of .86.

After over a month of dragging their feet, and a whole bunch of experts saying misleading things, the CDC finally recommended people wear masks on April 3rd 2020.

Comment by Matthew_Barnett on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T02:47:31.925Z · EA · GW

Thanks for the continued discussion.

If I'm understanding correctly, the main point you're making is that I probably shouldn't have said this:

There is little room for improvement here...

I think I'm making two points. The first point was, yeah, I think there is substantial room for improvement here. But the second point is necessary: analyzing the situation with Taiwan is crucial if we seek to effectively reduce nuclear risk.

I do not think it was wrong to focus on the trade war. It depends on your goals. If you wanted to promote quick, actionable and robust advice, it made sense. If you wanted to stare straight into the abyss, and solve the problem directly, it made a little less sense. Sometimes the first thing is what we need. But, as I'm glad to hear, you seem to agree with me that we also sometimes need to do the second thing.

Comment by Matthew_Barnett on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-23T01:49:28.315Z · EA · GW

My reason for focusing on the trade war though is because trade deescalation would have very few downsides and would probably be a substantial positive all on its own before even considering the potential positive effects it could have on relations with China and possibly nuclear risk.

I agree. I think we're both on the same page about the merits of ending the trade war, as an issue by itself.

The optimal policy here is far from clear to me.

Right. From my perspective, this is what makes focusing on Taiwan precisely the right thing to do in a high-level analysis.

My understanding of your point here is something like, "The US-Taiwan policy is a super complicated issue so I decided not to even touch it." But, since US-Taiwan policy is also the most important question regarding US-China relations, not talking about it is basically just avoiding the hard part of the issue. It's going to be difficult to make any progress if we don't do the hard work of actually addressing the central problem.

(Maybe this is an unfair analogy, but I find what you're saying to be a bit similar to, "I have an essay due in 12 hours. It's on an extremely fraught topic, and I'm unsure whether my thesis is sound, or whether the supporting arguments make any sense. So, rather than deeply reconsider the points I make in my essay, I'll just focus on making sure the essay has the right formatting instead." I can sympathize with this sort of procrastination emotionally, but the clock is still ticking.)

I agree it's a significant issue that should be carefully considered, but it's also an issue that I'm sure international relations experts have spilled huge amounts of ink over so I'm not sure if there are any clearly superior policy improvements available in this area.

I expect experts to have spilled a huge amount of ink about basically every policy regarding US-China relations, so I don't see this as a uniquely asymmetric argument against thinking about Taiwan. Maybe your point is merely that these experts have not yet come to a conclusion, so it seems unlikely that you could come to a conclusion in the span of a short essay. This would be a fair reply, but I have two brief heuristic thoughts on that,

  1. Most international relations experts neither understand, nor are motivated by an EA mindset. To the extent that you buy EA philosophy, I think we are well-positioned to have interesting analyses on questions such as, "Is it worth risking nuclear war to save a vibrant democracy?" It's not clear to me at all that moral philosophers have adequately responded to this question already, in the way EAs would find appealing.
  2. I understand the mindset of "Don't try to make progress on a topic that experts have thought about for decades and yet have gone nowhere." That's probably true for things like string theory and the Collatz conjecture. But this is "philosophy with a deadline," to co-opt a phrase from Nick Bostrom. There's a real chance that World War 3 is coming in the next few decades; so, we had better look that possibility in the face, rather than turning away and caring about something comparatively minor instead.

Comment by Matthew_Barnett on Reducing Nuclear Risk Through Improved US-China Relations · 2022-03-22T23:23:10.409Z · EA · GW

You mention ending the trade war as the main mechanism by which we could ease US-China tensions. I agree that this policy change seems especially tractable, but it does not appear to me to be an effective means of avoiding a global conflict. As Stefan Schubert pointed out, the tariffs appear to have a very modest effect on either the American or Chinese economy.

The elephant in the room, as you alluded to, is Taiwan. A Chinese invasion of Taiwan, and subsequent intervention by the United States, is plausibly the most likely trigger for World War 3 in the near-term future. You write that,

There is little room for improvement here, as China-Taiwan relations have a long history, and the US must walk a fine line between supporting Taiwan but also not signaling to Taiwan that US support will enable Taiwan to declare full independence, which could raise the likelihood of hostilities from China.

However, we can just as easily say that because the US position on Taiwan is ambiguous, there is much more room for improvement here. More specifically, since it's unclear how and whether the US would intervene in a China-Taiwan conflict, US foreign policy is variable and easily subject to change.

In this situation, we can imagine that a mere change in attitude from the US president could be enough to dramatically influence the plausibility of a global conflict. For instance, suppose in the future, an anti-Taiwan president gets elected in America, and as a result, China decides to invade Taiwan, confident that the US will not respond. This election would then have profound implications for not only Taiwan, but the shape of global politics going forward.

We need to think very seriously about how the US should approach the China-Taiwan situation. Should we attempt to defend a vibrant democracy at the risk of starting a catastrophic nuclear war? This is a real question, with real stakes, and one where public opinion has a real chance of determining what ends up happening. In my opinion, the trade war is much less important.

Comment by Matthew_Barnett on My current thoughts on the risks from SETI · 2022-03-17T19:08:17.798Z · EA · GW

One question I have is whether this is possible and how difficult it is?

I think it would be very difficult without human assistance. I don't, for example, think that aliens could hijack the computer hardware we use to process potential signals (though, it would perhaps be wise not to underestimate billion-year-old aliens).

We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code to an AI with the note "This AI will solve all your problems: poverty, disease, world hunger etc.". We can't verify that the AI will actually do any of those things, but enough people think that the aliens aren't lying that we decide to try it. 

After running the AI, it immediately begins its plans for world domination. Soon afterwards, humanity is extinct; and in our place, an alien AI begins constructing a world more favorable to alien values than our own.

Comment by Matthew_Barnett on My current thoughts on the risks from SETI · 2022-03-15T18:04:55.289Z · EA · GW

I don't find the scenario plausible. I think the grabby aliens model (cited in the post) provides a strong reason to doubt that there will be many so-called "quiet" aliens that hide their existence. Moreover, I think malicious grabby (or loud) aliens would not wait for messages before striking, which the Dark Forest theory relies critically on. See also section 15 in the grabby aliens paper, under the heading "SETI Implications".

In general, I don't think there are significant risks associated with messaging aliens (a thesis that other EAs have argued for, along these lines).

Comment by Matthew_Barnett on [linkpost] Peter Singer: The Hinge of History · 2022-01-20T01:33:13.123Z · EA · GW

I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.

Comment by Matthew_Barnett on [linkpost] Peter Singer: The Hinge of History · 2022-01-17T01:47:14.639Z · EA · GW

My interpretation of Peter Singer's thesis is that we should be extremely cautious about acting on a philosophy that claims that an issue is extremely important, since we should be mindful that such philosophies have been used to justify atrocities in the past. But I have two big objections to his thesis.

First, it actually matters whether the philosophy we are talking about is a good one. Singer provides a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But is either of these philosophies even theoretically valid, in the sense of being both truth-seeking and based on compassion? I'd argue no. And the fact that these philosophies are invalid was partly why people committed crimes in their name.

Second, this argument proves too much. We could have presented an identical argument to a young Peter Singer in the context of animal farming. "But Peter, if people realize just how many billions of animals are suffering, then this philosophy could be used to justify genocide!" Yet my guess is that Singer would not have been persuaded by that argument at the time, for an obvious reason.

Any moral philosophy which permits ranking issues by importance (and are there any which do not?) can be used to justify atrocities. The important thing is whether the practitioners of the philosophy strongly disavow anti-social or violent actions themselves. And there's abundant evidence that they do in this case, as I have not seen even a single prominent x-risk researcher publicly recommend that anyone commit violent acts of any kind.

Comment by Matthew_Barnett on Democratising Risk - or how EA deals with critics · 2021-12-30T03:28:30.307Z · EA · GW

I'm happy with more critiques of total utilitarianism here. :) 

For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.

I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.").

I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk translate into arguments for reducing s-risks.

I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people's preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.

Comment by Matthew_Barnett on Against Negative Utilitarianism · 2021-12-17T22:41:52.966Z · EA · GW

Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.

I view this implication as merely the consequence of two facts: (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, and (2) negative preference utilitarians don't find value in creating new beings just to satisfy their preferences.

The first fact is shared by all non-lexical varieties of consequentialism, so it doesn't appear to be a unique critique of negative preference utilitarianism. 

The second fact doesn't seem counterintuitive to me, personally. When I try to visualize why other people find it counterintuitive, I end up imagining that it would be sad/shameful/disappointing if we never created a utopia. But under negative preference utilitarianism, existing preferences to create and live in a utopia are already taken into account. So, it's not optimal to ignore these people's wishes.

On the other hand, I find it unintuitive that we should build preferenceonium (homogeneous matter optimized to have very strong preferences that are immediately satisfied). So, this objection doesn't move me by much.

A final implication is that for a world of Budhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.  

If someone genuinely rid themselves of all desire then, yes, I think it would be acceptable to lower their well-being to zero (note that we should also take into account their preferences not to be exploited in such a manner). But this thought experiment seems hollow to me, because of the well-known difficulty of detaching oneself completely from material wants, or of empathizing with those who have truly done so.

The force of the thought experiment seems to rest almost entirely on the intuition that the monks have not actually succeeded -- as you say, they "merely take in the joys of life without having desires". But if they really have no desires, then why are they taking joy in life? Indeed, why would they take any action whatsoever?

Comment by Matthew_Barnett on Against Negative Utilitarianism · 2021-12-17T05:33:29.180Z · EA · GW

Moving from our current world to utopia + pinprick would be a strong moral improvement under NPU. But you're right that if the universe was devoid of all preference-having beings, then creating a utopia with a pinprick would not be recommended.

Comment by Matthew_Barnett on Against Negative Utilitarianism · 2021-12-16T01:33:55.626Z · EA · GW

World destruction would violate a ton of people's preferences. Many people who live in the world want it to keep existing. Minimizing preference frustration would presumably give people what they want, rather than killing them (something they don't want).

Comment by Matthew_Barnett on Against Negative Utilitarianism · 2021-12-15T23:51:01.301Z · EA · GW

I'm curious whether you think your arguments apply to negative preference utilitarianism (NPU): the view that we ought to minimize aggregate preference frustration. It shares many features with ordinary negative hedonistic utilitarianism (NHU), such as,

But NPU also has several desirable properties that are not shared with NHU:

  • Utopia, rather than world-destruction, is the globally optimal solution that maximizes utility.
  • It's compatible with the thesis that value is highly complex. More specifically, the complexity of value under NPU is a consequence of the complexity of individual preferences. People generally prefer to live in a diverse, fun, interesting, and free world rather than a homogeneous world filled with hedonium.

Moreover,

  • As Brian Tomasik argued, preference utilitarianism can be seen as a generalization of the golden rule.
  • Preference utilitarianism puts primacy on consent, mostly because actions are wrong insofar as they violate someone's consent. This puts it on a firm foundation as an ethical theory of freedom and autonomy.

That said, there are a number of problems with the theory, including the problem of how to define preference frustration, identify agents across time and space, perform interpersonal utility comparisons, idealize individual preferences, and cope with infinite preferences.

Comment by Matthew_Barnett on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2021-12-14T23:31:59.575Z · EA · GW

For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"

So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable forums (Travis Fisher, on Metaculus, being an exception), so I stayed silent.

At some point, the evidence became overwhelming. It seemed very clear that this virus wasn't going to be contained, and it was going to go global. I credit Dony Christie and Louis Francini with rousing me from my dogmatic slumber. They were able to convince me—in the vein of Eliezer Yudkowsky's Inadequate Equilibria—that the reason no one was talking about this probably had nothing whatsoever to do with the actual evidence. It wasn't that people had a model and used that model to predict "no doom" with high confidence: it was a case of people not having models at all.

I thought at the time—and continue to think—that the starting place of all our forecasting should be the outside view. But—and this was something Dony Christie was quite keen to argue—sometimes people just use the "outside view" as a rationalization; to many people, it means no more than, "I don't want to predict something weird, even if that weird thing is overwhelmingly determined by the actual evidence."

And that was definitely true here: pandemics are not a rare occurrence in human history. They happen quite frequently. I am most thankful for belonging to a community that opened my mind long ago, by having abundant material written about natural pandemics, the Spanish flu, and future bio-risks. That allowed me to enter the mindset of thinking "OK, maybe this is real" as opposed to rejecting all the smoke under the door until the social atmosphere became right.

My intuitions, I'm happy to say, paid off. People are still messaging me about this post. Nearly two years later, I wear a mask when I enter a supermarket. 

There are many doomsayers who always get things wrong. A smaller number of doomsayers are occasionally correct—often enough that it might be worth listening to them, while still rejecting what they say most of the time.

Yet, I am now entitled to a distinction that I did not think I would ever earn, and one that I perhaps do not deserve (as the real credit goes to Louis and Dony): the only time I've ever put out a PSA asking people to take some impending doom very seriously was when I correctly warned about the most significant pandemic in one hundred years. And I'm pretty sure I did it earlier than any other effective altruist in the community (though I'm happy to be proven wrong, and would congratulate them fully).

That said, there are some parts of this post I am not happy with. These include,

  • I only had one concrete prediction in the whole post, and it wasn't very well-specified. I said that there was a >2% probability that 50 million people would die within one year. That didn't happen.
  • I overestimated the mortality rate. At the time, I didn't understand which was likely to be a greater factor in biasing the case fatality rate: the selection effect of missed cases, or the time-delay of deaths (see the sketch below). It is now safe to say that the former was a greater issue. The infection fatality rate of Covid-19 is less than 1%, putting it into a less dangerous category of disease than I had pictured at the time.
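
To make those two biases concrete (a sketch in my own notation, not part of the original post), compare the naive case fatality rate computed at time t with the true infection fatality rate:

\[
\mathrm{CFR}(t) = \frac{\text{deaths confirmed by } t}{\text{cases confirmed by } t},
\qquad
\mathrm{IFR} = \frac{\text{eventual deaths}}{\text{all infections}}.
\]

Missed cases shrink the denominator of CFR(t), biasing it above the IFR; the delay between confirmation and death shrinks the numerator, biasing it below. Early in an outbreak it is unclear which effect dominates; for Covid-19, the missed-case effect turned out to be larger.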

Interestingly, one part I didn't regret writing was the vaccine timeline I implicitly predicted in the post. I said, "we should expect that it will take about a year before a vaccine comes out." Later, health authorities claimed that it would take much longer, with some outlets "fact-checking" the claim that a vaccine could arrive by the end of 2020. I'm pleased to say I outlasted the pessimists on this point, as vaccines started going into people's arms on a wide scale almost exactly one year after I wrote this post.

Overall, I'm happy I wrote this post. I'm even happier to have friends who could trigger me to write it. And I hope, when the next real disaster comes, effective altruists will correctly anticipate it, as they did for Covid-19.

Comment by Matthew_Barnett on Rowing, Steering, Anchoring, Equity, Mutiny · 2021-12-03T22:37:50.277Z · EA · GW

It was much less disruptive than revolutions like in France, Russia or China, which attempted to radically re-order their governments, economies and societies. In a sense I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.

I agree with the weaker claim here that the US revolution didn't radically re-order "government, economy and society." But I think you might be exaggerating how conservative the US revolution was. 

The United States is widely considered to be one of the first modern constitutional democracies, following literally thousands of years of near-universal despotism throughout the world. Note that while many of its democratic institutions were inherited from the United Kingdom, sources such as Boix et al.'s "A complete data set of political regimes, 1800–2007" (which Our World In Data cites on their page for democracy) tend to say that democracy in the United States is older than democracy in the United Kingdom, or Western Europe more generally.

One of the major disruptive revolutions you mention, the French Revolution, was quite directly inspired by the American revolution. Thomas Jefferson even helped Marquis de Lafayette draft the Declaration of the Rights of Man and of the Citizen. More generally, the intellectual ideals of the two revolutions are regularly compared with each other, and held up as prototypical examples of Enlightenment values.

However, I do agree with what is perhaps the main claim, which is that the US constitution, by design, did not try to impose the perfect social order: its primary principle was precisely that of limited government and non-intervention, i.e., the government deliberately not trying to change as much as it could.

Comment by Matthew_Barnett on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T22:01:43.169Z · EA · GW

The main way I could see an AGI taking over the world without being exceedingly superhuman would be if it hid its intentions well enough so that it could become trusted enough to be deployed widely and have control of lots of important infrastructure.

My understanding is that Eliezer's main argument is that the first superintelligence will have access to advanced molecular nanotechnology, an argument that he touches on in this dialogue. 

I could see breaking his thesis up into a few potential steps,

  1. At some point, an AGI will FOOM to radically superhuman levels, via recursive self-improvement or some other mechanism.
  2. The first radically superhuman AGI will have the unique ability to deploy advanced molecular nanomachines, capable of constructing arbitrary weapons, devices, and nanobot swarms.
  3. If some radically smarter-than-human agent has the unique ability to deploy advanced molecular nanotechnology, then it will be able to unilaterally cause an existential catastrophe.

I am unsure which premise you disagree with most. My guess is premise (1), but it sounds a little bit like you're also skeptical of (2) or (3), given your reply.

It's also not clear to me whether the AGI would be consequentialist?

One argument is that broadly consequentialist AI systems will be more useful, since they allow us to more easily specify our wishes (we only need to tell them what we want, not how to get it). This doesn't imply that GPT-type AGI will become consequentialist on its own, but it does imply the existence of a selection pressure for consequentialist systems.

Comment by Matthew_Barnett on A proposal for a small inducement prize platform · 2021-06-10T22:28:11.009Z · EA · GW

Potential ways around this that come to mind:

Good ideas. I have a few more,

  • Have a feature that allows people to charge fees to people who submit work. This would potentially compensate the arbitrator who would have to review the work, and would discourage people from submitting bad work in the hopes that they can fool people into awarding them the bounty.
  • Instead of awarding the bounty to whoever gives a summary/investigation, award the bounty to the person who provides the best summary/investigation, at the end of some time period. That way, if someone thinks that the current submissions are omitting important information, or are badly written, then they can take the prize for themselves by submitting a better one.
  • Similar to your first suggestion: have a feature that restricts people from submitting answers unless they pass certain basic criteria. E.g. "You aren't eligible unless you are verified to have at least 50 karma on the Effective Altruist Forum or Lesswrong." This would ensure that only people from within the community can contribute to certain questions.
  • Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind on the decision they have made.

Comment by Matthew_Barnett on A proposal for a small inducement prize platform · 2021-06-07T22:31:19.320Z · EA · GW

What is the likely market size for this platform?

I'm not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days. 

Comment by Matthew_Barnett on How should longtermists think about eating meat? · 2020-05-17T21:34:10.558Z · EA · GW

Eliezer Yudkowsky wrote a sequence on ethical injunctions where he argued why things like this were wrong (from his own, longtermist perspective).

Comment by Matthew_Barnett on How should longtermists think about eating meat? · 2020-05-17T21:32:19.623Z · EA · GW

And it feels terribly convenient for the longtermist to argue they are in the moral right while making no effort to counteract or at least not participate in what they recognize as moral wrongs.

This is only convenient for the longtermist if they do not have equivalently demanding obligations to the long term. Otherwise we could turn it around and say that it's "terribly convenient" for a short-termist to ignore the long-term future too.

Comment by Matthew_Barnett on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-12T07:56:32.541Z · EA · GW

Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios where AI ends up being used. If we imagine a highly detailed scenario in which a single artificial intelligence goes rogue, then of course these types of things will seem unlikely.

However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won't be a discrete point where we can say "now the AI was invented." Over the broad course of history, we have witnessed numerous instances of populations displacing other populations, e.g. species displacements in ecosystems, and human populations displacing other humans. If we think about AI as displacing humanity's seat of power in this abstract way, then an AI takeover doesn't seem implausible anymore, and indeed I find it quite likely in the long run.

Comment by Matthew_Barnett on Matthew_Barnett's Shortform · 2020-05-01T06:31:38.099Z · EA · GW

A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.

From Carl Sagan in 1973, "Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage - a Martian plague, the twist in the plot of H. G. Wells' War of the Worlds, but in reverse."

Note that the microbes would not need to have independently arisen on Mars. It could be that they were transported to Mars from Earth billions of years ago (or the reverse occurred). While this issue has been studied by some, my impression is that effective altruists have not looked into this issue as a potential source of existential risk.

A line of inquiry to launch could be to determine whether there are any historical parallels on Earth that could give us insight into whether a Mars-to-Earth contamination would be harmful. The introduction of an invasive species into some region loosely mirrors this scenario, but much tighter parallels might still exist.

Since Mars missions are planned for the 2030s, this risk could arrive earlier than essentially all the other existential risks that EAs normally talk about.

See this Wikipedia page for more information: https://en.wikipedia.org/wiki/Planetary_protection

Comment by Matthew_Barnett on If you value future people, why do you consider near term effects? · 2020-04-23T05:44:37.242Z · EA · GW

I recommend the paper The Case for Strong Longtermism, as it covers and responds to many of these arguments in a precise philosophical framework.

Comment by Matthew_Barnett on Growth and the case against randomista development · 2020-03-26T00:51:33.114Z · EA · GW

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

If this is true, is there a post that expands on this argument, or is it something left implicit?

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are very related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to find which areas to speed up the most.

Comment by Matthew_Barnett on Growth and the case against randomista development · 2020-03-26T00:34:10.533Z · EA · GW

Growth will have flowthrough effects on existential risk.

This makes sense as an assumption, but the post itself didn't argue for this thesis at all.

If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.

This is something very close to my personal view on what I'm working on.

Can you go into more detail? I'm also very interested in how increased economic growth impacts existential risk. This is a very important question, because it could determine the impact of accelerating growth-inducing technologies such as AI and anti-aging.

Comment by Matthew_Barnett on Growth and the case against randomista development · 2020-03-26T00:20:07.186Z · EA · GW

I'm confused about what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.

The reason presented for why we should care about economic growth seemed to be a longtermist one. That is, economic growth has large payoffs in the long run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than increasing economic growth. Therefore, accepting this post requires you both to be a longtermist and to simultaneously reject Bostrom's argument. Am I correct in that assumption? If it's true, then what arguments are there for rejecting his thesis?
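
For readers unfamiliar with Bostrom's 2003 argument, here is a rough, illustrative sketch of the comparison it makes (symbolic only, treating value as roughly uniform over the future's duration; the simplifications are mine, not Bostrom's exact figures):

\[
\underbrace{\tfrac{1}{T} \cdot V}_{\text{gain from one year of faster growth}}
\quad\text{vs.}\quad
\underbrace{\Delta p \cdot V}_{\text{gain from reducing existential risk by } \Delta p},
\]

where V is the total value of the long-term future and T is its duration in years. Risk reduction wins whenever Δp > 1/T, and with T plausibly in the millions or billions of years, even a tiny reduction in existential risk outweighs a year's worth of extra growth. This is the thesis the post would need to reject.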

Comment by Matthew_Barnett on Matthew_Barnett's Shortform · 2020-03-13T22:01:18.629Z · EA · GW

I have now posted as a comment on Lesswrong my summary of some recent economic forecasts and whether they are underestimating the impact of the coronavirus. You can help me by critiquing my analysis.

Comment by Matthew_Barnett on What are the key ongoing debates in EA? · 2020-03-13T08:10:11.015Z · EA · GW

I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory.

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

Comment by Matthew_Barnett on Matthew_Barnett's Shortform · 2020-03-02T05:03:33.348Z · EA · GW

I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death are the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.

Given the unpopularity of these ideas, you might be tempted to think that the reason they are unpopular is that they are exceptionally counterintuitive. But is that the case? Do you really need a modern education and philosophical training to understand them? Perhaps I shouldn't blame people for not taking seriously things they lack the background to understand.

Yet, I claim that these ideas are not actually counterintuitive: they are the type of things you would come up with on your own if you had not been conditioned by society to treat them as abnormal. A thoughtful 15-year-old who was somehow educated without human culture would have no issue taking them seriously. Do you disagree? Let's put my theory to the test.

In order to test my theory -- that caring about wild animal suffering, aging, and animal mistreatment is what you would care about if you were uncorrupted by our culture -- we need look no further than the Bible.

It is known that the book of Genesis was written in ancient times, before anyone knew anything of modern philosophy, contemporary norms of debate, science, or advanced mathematics. The writers of Genesis wrote of a perfect paradise, the one that we fell from after we were corrupted. They didn't know what really happened, of course, so they made stuff up. What is the perfect paradise that they made up?

From Answers in Genesis, a creationist website,

Death is a sad reality that is ever present in our world, leaving behind tremendous pain and suffering. Tragically, many people shake a fist at God when faced with the loss of a loved one and are left without adequate answers from the church as to death’s existence. Unfortunately, an assumption has crept into the church which sees death as a natural part of our existence and as something that we have to put up with as opposed to it being an enemy

Since creationists believe that humans are responsible for all the evil in the world, they do not make the usual excuse for evil that it is natural and therefore necessary. They openly call death an enemy, something to be destroyed.

Later,

Both humans and animals were originally vegetarian, then death could not have been a part of God’s Creation. Even after the Fall the diet of Adam and Eve was vegetarian (Genesis 3:17–19). It was not until after the Flood that man was permitted to eat animals for food (Genesis 9:3). The Fall in Genesis 3 would best explain the origin of carnivorous animal behavior.

So in the garden, animals did not hurt one another. Humans did not hurt animals. But this article even goes further, and debunks the infamous "plants tho" objection to vegetarianism,

Plants neither feel pain nor die in the sense that animals and humans do as “Plants are never the subject of חָיָה ” (Gerleman 1997, p. 414). Plants are not described as “living creatures” as humans, land animals, and sea creature are (Genesis 1:20–21, 24 and 30; Genesis 2:7; Genesis 6:19–20 and Genesis 9:10–17), and the words that are used to describe their termination are more descriptive such as “wither” or “fade” (Psalm 37:2; 102:11; Isaiah 64:6).

In God's perfect creation, the one invented by uneducated folks thousands of years ago, we can see that wild animal suffering did not exist, nor did death from old age, or mistreatment of animals.

In this article, I find something so close to my own morality that it is striking a creationist, of all people, would write something so elegant,

Most animal rights groups start with an evolutionary view of mankind. They view us as the last to evolve (so far), as a blight on the earth, and the destroyers of pristine nature. Nature, they believe, is much better off without us, and we have no right to interfere with it. This is nature worship, which is a further fulfillment of the prophecy in Romans 1 in which the hearts of sinful man have traded worship of God for the worship of God’s creation.
But as people have noted for years, nature is “red in tooth and claw.” Nature is not some kind of perfect, pristine place.

Unfortunately, it continues

And why is this? Because mankind chose to sin against a holy God.

I contend it doesn't really take a modern education to invent these ethical notions. The truly hard step is accepting that evil is bad even if you aren't personally responsible for it.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:53:49.301Z · EA · GW

Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people had towards anti-aging. Imagine if people dismissed AI safety research because, "It would be fruitless to ban AI research. We shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering the idea that there are other things we could do.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:29:52.059Z · EA · GW
Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration make the impact we can have on them seem no more than a rounding error if compared to the impact we can have due to LEV (each year you bring LEV closer by saves 36,500,000 lives of 1000QALYS. This is a conservative estimate I made here.)

This isn't clear to me. In Hilary Greaves and William MacAskill's paper on strong longtermism, they argue that unless what we do now affects a critical lock-in period, most of the stuff we do now will "wash out" and have a low impact on the future.

If a lock-in period never comes, then there's no compelling reason to focus on the indirect effects of anti-aging, and I'd agree with you that these effects are small. However, if there is a lock-in period, then the actual lives saved from ending aging could be tiny compared to the lasting billion-year impact that shifting to a post-aging society could lead to.

What a strong longtermist should mainly care about are these indirect effects, not merely the lives saved.
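
(As an aside on the quoted 36,500,000 figure: it appears to follow from the commonly cited estimate of roughly 100,000 age-related deaths per day, though that derivation is my assumption rather than something stated in the quote:

$$100{,}000 \ \text{deaths/day} \times 365 \ \text{days/year} = 36{,}500{,}000 \ \text{lives/year},$$

with each life weighted at the quoted estimate's assumed 1,000 QALYs.)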

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:21:28.775Z · EA · GW

Thanks for the bullet points and thoughtful inquiry!

I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on.

I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be laid.

My guess is that most people who think about the effects of anti-aging research don't think very seriously about it, because they are either trying to come up with reasons to instantly dismiss it or trying to come up with reasons to instantly dismiss objections to it. As a result, most of the "results" we have about what would happen in a post-aging world come from two sides of a very polarized arena. This is not healthy epistemologically.

In wild animal suffering research, most people assume that there are only two possible interventions: destroy nature, or preserve nature. This sort of binary thinking infects discussions about wild animal suffering, as it prevents people from thinking seriously about the vast array of possible interventions that could make wild animal lives better. I think the same is true for anti-aging research.

Most people I've talked to seem to think that there are only two positions you can take on anti-aging: we should throw our whole support behind medical biogerontology, or we should abandon it entirely and focus on other cause areas. This is crazy.

In reality, there are many ways that we can make a post-aging society better. If we correctly forecast the impacts on global inequality or whatever, and we'd prefer to have inequality go down in a post-aging world, then we can start talking about ways to mitigate such effects in the future. The idea that not talking about the issue, or dismissing anti-aging, is the best way to make these problems go away is a super common reaction that I cannot understand.

Apart from technological stagnation, the other common worry people raise about life extension is cultural stagnation: entrenchment of inequality, extension of authoritarian regimes, aborted social/moral progress, et cetera.

I'm currently writing a post about this, because I see it as one of the most important variables affecting our evaluation of the long-term impact of anti-aging. I'll bring forward arguments both for and against what I see as the slowing of "value drift" by ending aging.

Overall, I see no decisive arguments for either side, but I currently think that the "slower moral progress isn't that bad" position is more promising than it first appears. I'm actually really skeptical of many of the arguments that philosophers and laypeople have brought forward about generational death serving a necessary function in moral progress.

And as you mention, it's unclear why we should expect better value drift when we have an aging population, given that there is evidence that the aging process itself makes people more prejudiced and closed-minded in a number of ways.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T03:39:44.074Z · EA · GW
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason.

Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who shared the values of the current administration).

I expect future generations, compared to people alive today, to be less religious

I agree with that.

less speciesist

This is also likely. However, I'm very worried that caring about farm animals doesn't imply an anti-speciesist mindset. Most vegans aren't concerned about wild animal suffering, and the primary justification that most vegans give for their veganism comes from an exploitation framework (or an environmentalist one) rather than a harm-reduction framework. That concern might not robustly transfer to future sentient beings.

less prejudiced generally, more impartial

This isn't clear to me. From this BBC article, "Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e. aging] to the brain in late adulthood can lead to greater prejudice among older adults." Furthermore, "prejudice" is pretty vague, and I think there are many ways that young people are prejudiced without even realizing it (though of course this applies to old people too).

more consequentialist, more welfarist

Personally, I don't really see why we should expect this. Could you point to some trends showing that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.

because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views)

The second reason is a good one (I agree that when people stop eating meat they'll care more about animals). The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don't seem to be adopted by the general population. Why would we expect this to change?

I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me.

It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I'm less sold on the idea that moral progress is driven by reason and reflection.

I also have a strong prior against positive moral progress relative to any individual parochial moral view, given what looks like clear historical evidence against it (the communists of the early 20th century probably thought that everyone would adopt their perspective by now; the same goes for Hitler, alcohol prohibitionists, and many other movements).

Overall, I think there are no easy answers here and I could easily be wrong.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T01:33:30.619Z · EA · GW
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense

Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).

In the same way, I think the views of future generations can end up better than my views will ever be.

Again, that makes sense. I personally don't really share the same optimism as you.

So I don't expect such views to be very common over the very long-term

One of the frameworks I propose in the essay I'm writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than ways they can get better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case when deferring to future generations.

You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process. There are many ways that I would be OK with deferring my values to someone else, but I don't really understand why generational death should be one of them.

By contrast, there are a multitude of human biases that give people rosier views about future generations than seems (to me) warranted by the evidence:

  • Status quo bias. People dying and leaving stuff to the next generations has been the natural process for millions of years. Why should we stop it now?
  • The relative values fallacy. This goes something like, "We can see that the historical trend is for values to get more normal over time. Each generation has gotten more like us. Therefore future generations will be even more like us, and they'll care about all the things I care about."
  • Failure to appreciate the diversity of future outcomes. Robin Hanson talks about how people use a far view when talking about the future, which means they ignore small details and tend to focus on one really broad, abstract element that they expect to show up. In practice, this means people will assume that, because future generations will likely share our values along one axis (in your case, care for farm animals), they will also share our values along all axes.
  • Belief in the moral arc of the universe. Moral arcs play a large role in human psychology. Religions display them prominently in the idea of apocalypses where evil is defeated in the end. Philosophers have believed in a moral arc too, and since many of the supposed moral arcs contradict each other, it's probably not a real thing. This is related to the just-world fallacy: you imagine how awful it would be if future generations actually turned out to be so horrible, so you just sort of pretend that bad outcomes aren't possible.

I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study it. I am very worried that people assume moral progress will just happen automatically, almost like a spiritual force, because of, well, the biases I gave above.

Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come

This makes sense if you are referring to the current generation, but I don't see how you can possibly know you're aligned with future generations that don't exist yet.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-28T00:10:54.938Z · EA · GW
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them.

This view assumes that moral progress is a real thing rather than just an illusion. I can personally understand this view if the younger generations shared the same terminal values and merely refined instrumental values, or became better at discovering logical inconsistencies, or something like that. However, it also seems likely that what we call moral progress could just as well be described as moral drift.

Personally, I'm a moral anti-realist. Morals are more like preferences and desires than science. Each generation has preferences, and the next generation has slightly different preferences. When you put it that way, the idea of fundamentally better preferences doesn't quite make sense to me.

More concretely, we could imagine several ways that future generations might disagree with us (I'm assuming a suffering-reduction perspective here, since I take you to be part of that crowd):

  • Future generations could see more value in deep ecology and preserving nature.
  • They could see more value in making nature simulations.
  • They could see less value in ensuring that robots have legally protected rights, since that's a staple of early 21st century fiction and future generations who grew up with robot servants might not really see it as valuable.

I'm not trying to say that these are particularly likely outcomes, but it would seem strange to put full faith in a consistent direction of moral progress when nearly every generation before us has experienced the opposite, i.e. take any generation from prior centuries and they would hate what we value these days. The same will probably be true for you too.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:57:25.090Z · EA · GW
I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging.

I'm not convinced there is actually that much of a difference between the long-term crystallization of habits and natural aging, though I'm not qualified to say this with much confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:44:18.601Z · EA · GW
Do Long-Lived Scientists Hold Back Their Disciplines? It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".

In addition to what I wrote here, I'm also just skeptical that scientific progress decelerating in a few respects is actually that big of a deal. The biggest case where it would probably matter is if medical doctors themselves had incorrect theories, or engineers (such as AI developers) were using outdated ideas. In the first case, it would be ironic to avoid curing aging to prevent medical doctors from using bad theories. In the second, I would have to do more research, but I'm still leaning skeptical.

Similarly, a lot of moral progress is made because of people with wrong views dying. People living longer will slow this trend, and, in the worst case, could lead to suboptimal value lock-in from advanced AI or other decisions that affect the long-term future.

I have another post in the works right now, and in it I actually take the opposite perspective. I won't argue it fully here, but I don't actually believe the thesis that humanity makes consistent moral progress due to the natural cycle of birth and death. There are many cognitive biases that make us think we do, though (such as the fact that most people who say this are young and disagree with their elders, but when you are old you will disagree with the young. Who's correct?)

Comment by Matthew_Barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:39:31.403Z · EA · GW
Eliminating aging also has the potential for strong negative long-term effects.

Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).

Even if you didn't want aging to be cured, it still seems worth thinking about, because if a cure were inevitable, then preparing for a future where aging is cured would be better than not preparing.

Another potentially major downside is the stagnation of research. If Kuhn is to be believed, a large part of scientific progress comes not from individuals changing their minds, but from outdated paradigms being displaced by more effective ones.

I think this is real, and my understanding is that empirical research supports it. But the theories I have read also assume a normal aging process. It is quite probable that bad ideas stay alive mostly because their proponents are too old to change their minds. I know for a fact that researchers in their early 20s change their minds quite a lot, so a cure for aging, insofar as it keeps minds cognitively young, could also mean more of that.