Posts

Matthew_Barnett's Shortform 2020-03-02T05:03:33.053Z · score: 4 (1 votes)
Effects of anti-aging research on the long-term future 2020-02-27T22:42:40.043Z · score: 36 (16 votes)
Concerning the Recent 2019-Novel Coronavirus Outbreak 2020-01-27T05:47:34.546Z · score: 82 (56 votes)

Comments

Comment by matthew_barnett on Growth and the case against randomista development · 2020-03-26T00:51:33.114Z · score: 2 (2 votes) · EA · GW
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

If this is true, is there a post that expands on this argument, or is it something left implicit?

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are very related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to find which areas to speed up the most.

Comment by matthew_barnett on Growth and the case against randomista development · 2020-03-26T00:34:10.533Z · score: 1 (1 votes) · EA · GW
Growth will have flowthrough effects on existential risk.

This makes sense as an assumption, but the post itself didn't argue for this thesis at all.

If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.

This is something very close to my personal view on what I'm working on.

Can you go into more detail? I'm also very interested in how increased economic growth impacts existential risk. This is a very important question because it could determine the value of accelerating growth-inducing technologies such as AI and anti-aging.

Comment by matthew_barnett on Growth and the case against randomista development · 2020-03-26T00:20:07.186Z · score: 1 (1 votes) · EA · GW

I'm confused what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.

The reason presented for why we should care about economic growth seemed to be a longtermist one. That is, economic growth has large payoffs in the long run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than increasing economic growth. Therefore, accepting this post requires you to be a longtermist while simultaneously rejecting Bostrom's argument. Am I correct in that assumption? If so, what arguments are there for rejecting his thesis?

Comment by matthew_barnett on Matthew_Barnett's Shortform · 2020-03-13T22:01:18.629Z · score: 5 (3 votes) · EA · GW

I have now posted a comment on LessWrong summarizing some recent economic forecasts and whether they are underestimating the impact of the coronavirus. You can help me by critiquing my analysis.

Comment by matthew_barnett on What are the key ongoing debates in EA? · 2020-03-13T08:10:11.015Z · score: 3 (2 votes) · EA · GW
I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory.

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

Comment by matthew_barnett on Matthew_Barnett's Shortform · 2020-03-02T05:03:33.348Z · score: 5 (3 votes) · EA · GW

I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death are the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.

Given the unpopularity of these ideas, you might be tempted to think that they are unpopular because they are exceptionally counterintuitive. But is that the case? Do you really need a modern education and philosophical training to understand them? Perhaps I shouldn't blame people for not taking seriously things they lack the background to understand.

Yet I claim that these ideas are not actually counterintuitive: they are the type of thing you would come up with on your own if you had not been conditioned by society to treat them as abnormal. A thoughtful 15-year-old somehow educated outside human culture would have no trouble taking these ideas seriously. Do you disagree? Let's put my theory to the test.

In order to test my theory -- that wild animal suffering, aging, and animal mistreatment are the things you would care about if you were uncorrupted by our culture -- we need look no further than the Bible.

It is known that the book of Genesis was written in ancient times, before anyone knew anything of modern philosophy, contemporary norms of debate, science, or advanced mathematics. The writers of Genesis wrote of a perfect paradise, the one we fell from after we were corrupted. They didn't know what really happened, of course, so they made stuff up. What is the perfect paradise that they made up?

From Answers in Genesis, a creationist website,

Death is a sad reality that is ever present in our world, leaving behind tremendous pain and suffering. Tragically, many people shake a fist at God when faced with the loss of a loved one and are left without adequate answers from the church as to death’s existence. Unfortunately, an assumption has crept into the church which sees death as a natural part of our existence and as something that we have to put up with as opposed to it being an enemy

Since creationists believe that humans are responsible for all the evil in the world, they do not make the usual excuse for evil that it is natural and therefore necessary. They openly call death an enemy, something to be destroyed.

Later,

Both humans and animals were originally vegetarian, then death could not have been a part of God’s Creation. Even after the Fall the diet of Adam and Eve was vegetarian (Genesis 3:17–19). It was not until after the Flood that man was permitted to eat animals for food (Genesis 9:3). The Fall in Genesis 3 would best explain the origin of carnivorous animal behavior.

So in the garden, animals did not hurt one another. Humans did not hurt animals. But this article even goes further, and debunks the infamous "plants tho" objection to vegetarianism,

Plants neither feel pain nor die in the sense that animals and humans do as “Plants are never the subject of חָיָה ” (Gerleman 1997, p. 414). Plants are not described as “living creatures” as humans, land animals, and sea creature are (Genesis 1:20–21, 24 and 30; Genesis 2:7; Genesis 6:19–20 and Genesis 9:10–17), and the words that are used to describe their termination are more descriptive such as “wither” or “fade” (Psalm 37:2; 102:11; Isaiah 64:6).

In God's perfect creation, the one invented by uneducated folks thousands of years ago, we can see that wild animal suffering did not exist, nor did death from old age, or mistreatment of animals.

In this article, I find something so close to my own morality that it strikes me that a creationist, of all people, would write something so elegant,

Most animal rights groups start with an evolutionary view of mankind. They view us as the last to evolve (so far), as a blight on the earth, and the destroyers of pristine nature. Nature, they believe, is much better off without us, and we have no right to interfere with it. This is nature worship, which is a further fulfillment of the prophecy in Romans 1 in which the hearts of sinful man have traded worship of God for the worship of God’s creation.
But as people have noted for years, nature is “red in tooth and claw.” Nature is not some kind of perfect, pristine place.

Unfortunately, it continues

And why is this? Because mankind chose to sin against a holy God.

I contend it doesn't really take a modern education to invent these ethical notions. The truly hard step is accepting that evil is bad even if you aren't personally responsible.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:53:49.301Z · score: 1 (1 votes) · EA · GW

Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people had towards anti-aging. Imagine if people dismissed AI safety research because, "It would be fruitless to ban AI research. We shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering the idea that there are other things we could do.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:29:52.059Z · score: 3 (3 votes) · EA · GW
Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration make the impact we can have on them seem no more than a rounding error if compared to the impact we can have due to LEV (each year you bring LEV closer by saves 36,500,000 lives of 1000QALYS. This is a conservative estimate I made here.)

This isn't clear to me. In Hilary Greaves and William MacAskill's paper on strong longtermism, they argue that unless what we do now affects a critical lock-in period, most of what we do will "wash out" and have little impact on the far future.

If a lock-in period never comes, then there's no compelling reason to focus on the indirect effects of anti-aging, and I'd agree with you that these effects are small. However, if there is a lock-in period, then the lives saved directly by ending aging could be tiny compared to the lasting, billion-year impact of shifting to a post-aging society.

What a strong long-termist should mainly care about are these indirect effects, not merely the lives saved.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T21:21:28.775Z · score: 7 (4 votes) · EA · GW

Thanks for the bullet points and thoughtful inquiry!

I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on.

I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.

My guess is that most people who think about the effects of anti-aging research don't think very seriously about it because they are either trying to come up with reasons to instantly dismiss it, or come up with reasons to instantly dismiss objections to it. As a result, most of the "results" we have about what would happen in a post-aging world come from two sides of a very polarized arena. This is not healthy epistemologically.

In wild animal suffering research, most people assume that there are only two possible interventions: destroy nature, or preserve nature. This sort of binary thinking infects discussions about wild animal suffering, as it prevents people from thinking seriously about the vast array of possible interventions that could make wild animal lives better. I think the same is true for anti-aging research.

Most people I've talked to seem to think that there are only two positions you can take on anti-aging: we should throw our whole support behind medical biogerontology, or we should abandon it entirely and focus on other cause areas. This is crazy.

In reality, there are many ways that we can make a post-aging society better. If we correctly forecast the impact on global inequality, say, and we'd prefer inequality to go down in a post-aging world, then we can start talking about ways to mitigate such effects in the future. The idea that dismissing anti-aging, or simply not talking about the issue, is the best way to make these problems go away is a very common reaction that I cannot understand.

Apart from technological stagnation, the other common worry people raise about life extension is cultural stagnation: entrenchment of inequality, extension of authoritarian regimes, aborted social/moral progress, et cetera.

I'm currently writing a post about this, because I see it as one of the most important variables affecting our evaluation of the long-term impact of anti-aging. I'll bring forward arguments both for and against the idea that ending aging would slow what I see as "value drift".

Overall, I see no clear arguments for either side, but I currently think that the "slower moral progress isn't that bad" position is more promising than it first appears. I'm actually quite skeptical of many of the arguments philosophers and laypeople have brought forward claiming that generational death serves a necessary function for moral progress.

And as you mention, it's unclear why we should expect better value drift when we have an aging population, given that there is evidence that the aging process itself makes people more prejudiced and closed-minded in a number of ways.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T03:39:44.074Z · score: 2 (2 votes) · EA · GW
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason.

Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who shared the values of the current administration.)

I expect future generations, compared to people alive today, to be less religious

I agree with that.

less speciesist

This is also likely. However, I'm very worried about the idea that caring about farm animals doesn't imply an anti-speciesist mindset. Most vegans aren't concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework (or environmentalist one) rather than a harm-reduction framework. This might not robustly transfer to future sentience.

less prejudiced generally, more impartial

This isn't clear to me. From this BBC article, "Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [ie. aging] to the brain in late adulthood can lead to greater prejudice among older adults." Furthermore, "prejudice" is pretty vague, and I think there are many ways that young people are prejudiced without even realizing it (though of course this applies to old people too).

more consequentialist, more welfarist

I don't really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.

because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views)

The second reason is a good one (I agree that when people stop eating meat they'll care more about animals). The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don't seem to be adopted by the general population. Why would we expect this to change?

I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me.

It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I'm less sold on the idea that moral progress is driven by reason and reflection.

I also have a strong prior against moral progress toward any individual parochial moral view, given what looks like strong historical evidence against such convergence (the communists of the early 20th century probably thought that everyone would adopt their perspective by now; the same goes for Hitler, alcohol prohibitionists, and many other movements).

Overall, I think there are no easy answers here and I could easily be wrong.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T01:33:30.619Z · score: 2 (2 votes) · EA · GW
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense

Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).

In the same way, I think the views of future generations can end up better than my views will ever be.

Again, that makes sense. I personally don't really share the same optimism as you.

So I don't expect such views to be very common over the very long-term

One of the frameworks I propose in my essay that I'm writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case in deferring to future generations.

You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process. There are many ways that I am OK with deferring my values to someone else, but I don't really understand how generational death is one of those.

By contrast, there are a multitude of human biases that give people rosier views about future generations than seems (to me) warranted by the evidence:

  • Status quo bias. People dying and leaving stuff to the next generations has been the natural process for millions of years. Why should we stop it now?
  • The relative values fallacy. This goes something like, "We can see that the historical trend is for values to become more like ours over time. Each generation has gotten more like us. Therefore future generations will be even more like us, and they'll care about all the things I care about."
  • Failure to appreciate diversity of future outcomes. Robin Hanson talks about how people use a far-view when talking about the future, which means that they ignore small details and tend to focus on one really broad abstract element that they expect to show up. In practice this means that people will assume that because future generations will likely share our values across one axis (in your case, care for farm animals) that they will also share our values across all axes.
  • Belief in the moral arc of the universe. Moral arcs play a large role in human psychology. Religions display them prominently in the idea of apocalypses where evil is defeated in the end. Philosophers have believed in a moral arc, and since many of the supposed moral arcs contradict each other, it's probably not a real thing. This is related to the just-world fallacy: you imagine how awful it would be if future generations actually turned out to be so horrible, so you just sort of pretend that bad outcomes aren't possible.

I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study this. I am very worried that people assume that moral progress will just happen automatically, almost like a spiritual force, because well... the biases I gave above.

Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come

This makes sense if you are referring to the current generation, but I don't see how you can possibly be aligned with future generations that don't exist yet?

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-28T00:10:54.938Z · score: 1 (1 votes) · EA · GW
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them.

This view assumes that moral progress is a real thing, rather than just an illusion. I could personally understand this point of view if the younger generations shared the same terminal values, and merely refined instrumental values or became better at discovering logical inconsistencies or something. However, it also seems likely that what we call moral progress can be described as moral drift.

Personally, I'm a moral anti-realist. Morals are more like preferences and desires than science. Each generation has preferences, and the next generation has slightly different preferences. When you put it that way, the idea of fundamentally better preferences doesn't quite make sense to me.

More concretely, we could imagine several ways that future generations disagree with us (and I'm assuming a suffering reduction perspective here, as I have identified you as among that crowd):

  • Future generations could see more value in deep ecology and preserving nature.
  • They could see more value in making nature simulations.
  • They could see less value in ensuring that robots have legally protected rights, since that's a staple of early 21st century fiction and future generations who grew up with robot servants might not really see it as valuable.

I'm not trying to say that these are particularly likely outcomes, but it would seem strange to put full faith in a consistent direction of moral progress when nearly every generation before us has experienced the opposite: take any generation from prior centuries and they would hate what we value these days. The same will probably be true for you too.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:57:25.090Z · score: 1 (1 votes) · EA · GW
I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging.

I'm not convinced there is actually that much of a difference between long-term crystallization of habits and natural aging. I'm not qualified to say this with any sort of confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:44:18.601Z · score: 2 (2 votes) · EA · GW
Do Long-Lived Scientists Hold Back Their Disciplines? It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".

In addition to what I wrote here, I'm also just skeptical that scientific progress decelerating in a few respects is actually that big of a deal. The biggest case where it would probably matter is if medical doctors themselves had incorrect theories, or engineers (such as AI developers) were using outdated ideas. In the first case, it would be ironic to avoid curing aging to prevent medical doctors from using bad theories. In the second, I would have to do more research, but I'm still leaning skeptical.

Similarly, a lot of moral progress is made because of people with wrong views dying. People living longer will slow this trend, and, in the worst case, could lead to suboptimal value lock-in from advanced AI or other decisions that affect the long-term future.

I have another post in the works right now, and I actually take the opposite perspective. I won't argue it fully here, but I don't believe the thesis that humanity makes consistent moral progress due to the natural cycle of birth and death. There are many cognitive biases that make us think we do, though (such as the fact that most people who say this are young and disagree with their elders; but when you are old, you will disagree with the young. Who's correct?).

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:39:31.403Z · score: 3 (3 votes) · EA · GW
Eliminating aging also has the potential for strong negative long-term effects.

Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).

Even if you didn't want aging to be cured, it still seems worth thinking about: if a cure is inevitable, then preparing for a future where aging is cured is better than not preparing.

Another potentially major downside is the stagnation of research. If Kuhn is to be believed, a large part of scientific progress comes not from individuals changing their minds, but from outdated paradigms being displaced by more effective ones.

I think this effect is real, and my understanding is that empirical research supports it. But the theories I have read also assume a normal aging process. It is quite probable that bad ideas stay alive mostly because their proponents are too old to change their minds. I know for a fact that researchers in their early 20s change their minds quite a lot, and a cure for aging would also mean more of that.

Comment by matthew_barnett on Effects of anti-aging research on the long-term future · 2020-02-27T23:23:00.601Z · score: 3 (3 votes) · EA · GW

If I had to predict, I would say that yes, ~70% chance that most suffering (or other disvalue you might think about) will exist in artificial systems rather than natural ones. It's not actually clear whether this particular fact is relevant. Like I said, the effects of curing aging extend beyond the direct effects on biological life. Studying anti-aging can be just like studying electoral reform, or climate change in this sense.

Comment by matthew_barnett on My personal cruxes for working on AI safety · 2020-02-26T17:56:21.567Z · score: 2 (2 votes) · EA · GW
I think to switch my position on crux 2 using only timeline arguments, you'd have to argue something like <10% chance of transformative AI in 50 years.

That makes sense. "Plausibly soonish" is pretty vague, so I pattern-matched it to something more like "by default it will come within a few decades."

It's reasonable that for people with different comparative advantages, their threshold for caring should be higher. If there were only a 2% chance of transformative AI in 50 years, and I was in charge of effective altruism resource allocation, I would still want some people (perhaps 20-30) to be looking into it.

Comment by matthew_barnett on Thoughts on electoral reform · 2020-02-26T00:54:59.091Z · score: 1 (1 votes) · EA · GW
In my utilitarian view, [democracy and utility maximizing procedures] are one in the same. An election is effectively just "a decision made by more than one person", thus the practical measure of democratic-ness is "expected utility of a voting procedure".

Doesn't this ignore the irrational tendencies of voters?

Comment by matthew_barnett on My personal cruxes for working on AI safety · 2020-02-25T20:12:53.116Z · score: 2 (2 votes) · EA · GW

I like this way of thinking about AI risk, though I would emphasize that my disagreement comes a lot from my skepticism of crux 2 and in turn crux 3. If AI is far away, then it seems pretty difficult to understand how it will end up being used, and I think even when timelines are 20-30 years from now, this remains an issue [ETA: Note that also, during a period of rapid economic growth, much more intellectual progress might happen in a relatively small period of physical time, as computers could automate some parts of human intellectual labor. This implies that short physical timelines could underestimate the conceptual timelines before systems are superhuman].

I have two intuitions that pull me in this direction.

The first is that it seems like if you asked someone from 10 years ago what AI would look like now, you'd mostly get responses that wouldn't really help us that much at aligning our current systems. If you agree with me here, but still think that we know better now, I think you need to believe that the conceptual distance between now and AGI is smaller than the conceptual distance between AI in 2010 and AI in 2020.

The second intuition is that it seems like safety engineering is usually very sensitive to small details of a system that are hard to get access to unless the design schematics are right in front of you.

Without concrete details, the major approach within AI safety (as Buck explicitly advocates here) is to define a relaxed version of the problem that abstracts low level details away. But if safety engineering mostly involves getting little details right rather than big ones, then this might not be very fruitful.

I haven't discovered any examples of real-world systems where doing extensive abstract reasoning beforehand was essential for making them safe. Computer security is probably the main example where abstract mathematics seems to help, but my understanding is that the math probably could have been developed alongside the computers in question, and that the way these systems are compromised is usually not due to some conceptual mistake.

Comment by matthew_barnett on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T18:17:39.762Z · score: 3 (3 votes) · EA · GW
I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.

This makes sense, but the things that tend to convince me to believe in an ethical theory depend a lot on how much I resonate with the theory's main claims. When I look at the premises of this theorem, none of them seem to be the type of thing I care about.

On the other hand, pointing out that utilitarians care about people and animals, and want them to be as happy as possible (and free, with agency and desire satisfaction), makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling. When I think about "total utilitarians are the only ones that satisfy these three assumptions," I don't get the same positive feeling.

When it comes to ethics, it's the emotional arguments that really win me over.

Comment by matthew_barnett on Why SENS makes sense · 2020-02-22T17:26:08.982Z · score: 22 (13 votes) · EA · GW

(Comment cross-posted from Lesswrong)

Under a total utilitarian view, it is probably second or third after existential risk mitigation.
[...]
I can count at least three times in which non-profits operating under the principles of Effective Altruism have acknowledged SENS and then dismissed it without good reasons.

I once read a comment on the effective altruism subreddit that tried to explain why aging didn't get much attention in EA despite being so important, and I thought it was quite enlightening. Supporting anti-aging research requires being weird across some axes, but not others. You have to be against something that most people think is normal, natural and inevitable while at the same time being short-termist and human-focused.

People who are weird across all axes will generally support existential risk mitigation or moral circle expansion, depending on their ethical perspective. If you're short-termist but weird in other regards, then you will generally help factory-farmed animals or wild animals. If you aren't weird on any of these axes, you will support global health interventions.

I want to note that I support anti-aging research, but I tend to take a different perspective than most EAs do. On a gut level, if something is going to kill me, my family, my friends, everyone I know, everyone on Earth if they don't get killed by something else first, and probably do so relatively soon and in a quite terrible way, I think it's worth investing in a way to defeat that. This gut-level reaction comes before any calm deliberation, but it still seems compelling to me.

My ethical perspective is not perfectly aligned with a long-termist utilitarian perspective, and being a moral anti-realist, I think it's OK to sometimes support moral causes that don't necessarily have a long-term impact. Using similar reasoning, I come to the conclusion that we should be nice to others and we should help our friends and those around us when possible, even when these things are not as valuable from a long-termist perspective.

Comment by matthew_barnett on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T01:01:42.391Z · score: 11 (7 votes) · EA · GW

+1

I am strongly biased against any attempt to ground normative theories in abstract mathematical theories, such as game theory and decision theory. The way I see it, the two central claims of utilitarianism are the axiological claim (well-being is what matters) and the maximizing claim (we should maximize what matters, ie. well-being). This argument provides no reason to ground our axiology in well-being, and no reason that we should be maximizers.

In general, there is a significant difference between normative claims, like total utilitarianism, and factual claims, like "As a group, VNM rational agents will do X."

Comment by matthew_barnett on Should Longtermists Mostly Think About Animals? · 2020-02-04T04:50:47.417Z · score: 8 (4 votes) · EA · GW
neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in these situations.

I took the argument to mean that artificial sentience will outweigh natural sentience (eg. animals). You seem to be implying that the relevant question is whether there will be more human sentience or more animal sentience, but I'm not quite sure why. I would predict that most of the sentience that will exist will be neither human nor animal.

Comment by matthew_barnett on Should Longtermists Mostly Think About Animals? · 2020-02-04T02:51:34.984Z · score: 8 (5 votes) · EA · GW

I also expect artificial sentience to vastly outweigh natural sentience in the long-run, though it's worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people's moral circles.

Comment by matthew_barnett on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-27T19:41:47.033Z · score: 3 (2 votes) · EA · GW

I think we were both confused. But based on what Greg Colbourn said, my point still stands, albeit to a weaker extent.

Comment by matthew_barnett on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-27T17:42:21.426Z · score: 10 (7 votes) · EA · GW

I don't think this is a good summary for an important reason: I think the Wuhan Coronavirus is a few orders of magnitude more deadly than a normal seasonal flu. The mortality estimates for the Wuhan Coronavirus are in the single digit percentages, whereas this source tells me that the seasonal flu mortality rate is about 0.014%. [ETA: Sorry, it's closer to 0.1%, see Greg Colbourn's comment].

Comment by matthew_barnett on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-27T07:16:44.092Z · score: 13 (8 votes) · EA · GW

Current death rates are likely to underestimate the total mortality rate, since the disease has likely not begun to affect most of the people who are infected.

I'll add information about incubation period to the post.

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T02:28:27.600Z · score: 12 (8 votes) · EA · GW

An s-risk could occur via a moral failure, which could happen even if we knew how to align our AIs.

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:53:50.884Z · score: 3 (2 votes) · EA · GW
But- you won't be able to copy our generator by doing that, the thing that created those novel predictions

I would think this might be our crux (other than perhaps the existence of qualia themselves). I imagine any predictions you produce can be adequately captured in a mathematical framework that makes no reference to qualia as ontologically primitive. And if I had such a framework, then I would have access to the generator, full stop. Adding qualia doesn't make the generator any better -- it just adds unnecessary mental stuff that isn't actually doing anything for the theory.

I am not super confident in anything I said here, although that's mostly because I have an outside view that tells me consciousness is hard to get right. My inside view tells me that I am probably correct, because I just don't see how positing mental stuff that's separate from mathematical law can add anything whatsoever to a physical theory.

I'm happy to talk more about this some day, perhaps in person. :)

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:02:05.059Z · score: 3 (2 votes) · EA · GW
Thanks Matthew! I agree issues of epistemology and metaphysics get very sticky very quickly when speaking of consciousness.

Agreed :).

My basic approach is 'never argue metaphysics when you can argue physics'

My main claim was that by only arguing physics, I will never come to agree with your theory, because your theory assumes the existence of elementary stuff that I don't believe in. Therefore, I don't understand how this approach really helps.

Would you be prepared to say the same about many-worlds vs. consciousness-causes-collapse theories? (Let's assume that we have no experimental data which distinguishes the two.)

One way to frame this is that at various points in time, it was completely reasonable to be a skeptic about modeling things like lightning, static, magnetic lodestones, and such, mathematically.

The problem with the analogy to magnetism and electricity is that it fails to match the pattern of my argument. In order to incorporate magnetism into our mathematical theory of physics, we merely added more mathematical parts. In this, I see a fundamental difference between the approach you take and the approach taken by physicists when they admit the existence of new forces or particles.

In particular, your theory of consciousness does not just do the equivalent of add a new force, or mathematical law that governs matter, or re-orient the geometry of the universe. It also posits that there is a dualism in physical stuff: that is, that matter can be identified as having both mathematical and mental properties.

Even if your theory did result in new predictions, I fail to see why I can't just leave out the mental interpretation of it, and keep the mathematical bits for myself.

To put it another way, if you are saying that symmetry can be shown to be the same as valence, then I feel I can always provide an alternative explanation that leaves out valence as a first-class object in our ontology. If you are merely saying that symmetry is definitionally equivalent to valence, then your theory is vacuous because I can just delete that interpretation from my mathematical theory and emerge with equivalent predictions about the world.

And in practice, I would probably do so, because symmetry is not the kind of thing I think about when I worry about suffering.

I think metaphysical arguments change distressingly few peoples' minds. Experiments and especially technology changes peoples' minds. So that's what our limited optimization energy is pointed at right now.

I agree that if you had made predictions that classical neuroscientists all agreed would never occur, and then proved them all wrong, then that would be striking evidence that I had made an error somewhere in my argument. But as it stands, I'm not convinced by your analogy to magnetism, or your strict approach towards talking about predictions rather than metaphysics.

(I may one day reply to your critique of FRI, as I see it as similarly flawed. But it is simply too long to get into right now.)

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T17:45:05.230Z · score: 11 (5 votes) · EA · GW

Mike, while I appreciate the empirical predictions of the symmetry theory of valence, I have a deeper problem with QRI philosophy, and it makes me skeptical even if the predictions bear out.

In physics, there are two distinctions we can make about our theories:

  • Disputes over what we predict will happen.
  • Disputes over the interpretation of experimental results.

The classic Many Worlds vs. Copenhagen is a dispute of the second kind, at least until someone can create an experiment which distinguishes the two. Another example of the second type of dispute is special relativity vs. Lorentz ether theory.

Typically, philosophers of science, and most people who follow LessWrong philosophy, will say that the way to resolve disputes of the second kind is to find out which interpretation is simplest. That's one reason why most people follow Einstein's special relativity over the Lorentz ether theory.

However, simplicity of an interpretation is often hard to measure. It's made more complicated for two reasons,

  • First, there's no formal way of measuring simplicity, even in principle, in a way that is language-independent.
  • Second, there are ontological disputes about what type of theories we are even allowing to be under consideration.

The first case is usually not a big deal because we mostly can agree on the right language to frame our theories. The second case, however, plays a deep role in why I consider QRI philosophy to be likely incorrect.

Take, for example, the old dispute over whether physics is discrete or continuous. If you apply standard Solomonoff induction, then you will axiomatically assign 0 probability to physics being continuous.
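To spell this out as a minimal sketch, assuming the standard prefix-free formulation of the Solomonoff prior: each hypothesis $h$ is weighted by the length $K(h)$ of the shortest program that computes it on a fixed universal machine,

$$P(h) \;\propto\; 2^{-K(h)}, \qquad \sum_{h\ \text{computable}} 2^{-K(h)} \;\le\; 1.$$

Since only countably many hypotheses correspond to programs, the entire prior mass sits on computable physics; anything outside that set, such as genuinely continuous dynamics that cannot be computed to arbitrary precision, receives probability 0 by construction.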

It is in this sense that QRI philosophy takes an ontological step that I consider unjustified. In particular, QRI assumes that there simply is an ontologically primitive consciousness-stuff that exists. That is, it takes it as elementary that qualia exist, and then reasons about them as if they are first class objects in our ontology.

I have already talked to you in person about why I reject this line of reasoning. I think that an illusionist perspective is adequate to explain why we believe in consciousness, without making any reference to consciousness as an ontological primitive. Furthermore, my basic ontological assumption is that physical entities, such as electrons, have mathematical properties, but not mental properties.

The idea that electrons can have both mathematical and mental properties (ie. panpsychism) is something I consider to be little more than property dualism, and has the same known issues as every property dualist theory that I have been acquainted with.

I hope that clears some things up about why I disagree with QRI philosophy. However, I definitely wouldn't describe you as practicing crank philosophy, as that term is both loaded, and empirically false. I know you care a lot about critical reflection, debate, and standard scientific virtues, which immediately makes you unable to be a "crank" in my opinion.

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-17T04:35:19.191Z · score: 3 (2 votes) · EA · GW

I see. I asked only because I was confused why you asked "before crunch time" rather than leaving that part out.

Comment by matthew_barnett on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-17T04:10:17.604Z · score: 3 (2 votes) · EA · GW

Apologies, aren't we already in crunch time?

Are you referring to this comment from Eliezer Yudkowsky,

This is crunch time.  This is crunch time for the entire human species. This is the hour before the final exam, we are trying to get as much studying done as possible, and it may be that you can’t make yourself feel that, for a decade, or 30 years on end or however long this crunch time lasts.

Comment by matthew_barnett on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T19:33:54.146Z · score: 7 (5 votes) · EA · GW
Presumably, people tend to avoid saying terrifying things.

I'm a bit skeptical of this statement, although I admit it could be true for some people. If anything I tend to think that people have a bias for exaggerating risk rather than the opposite, although I don't have anything concrete to say either way.

Comment by matthew_barnett on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T05:59:57.728Z · score: 10 (7 votes) · EA · GW
Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff

Could you explain more about why you think people who hold those views are more likely to be silent?

Comment by matthew_barnett on The evolutionary argument against cognitive enhancement research is weak · 2019-10-19T22:38:21.102Z · score: 4 (3 votes) · EA · GW
I find the argument fairly weak for a number of reasons. Iodine supplementation seems to have worked great

Iodine supplementation isn't enhancement; it's more like fixing a broken component. If a machine has a broken part, fixing it might dramatically raise the performance of the machine, but that doesn't tell you how easy it would be to improve a machine with no broken parts.

Comment by matthew_barnett on What are the best arguments for an exclusively hedonistic view of value? · 2019-10-19T04:57:18.055Z · score: 9 (5 votes) · EA · GW

I don't have a fully hedonistic view, but I'm sympathetic toward one. I prefer using different words than "hedonism", since hedonism has a bad connotation. I like to say that I care primarily about conscious experiences, where conscious experience refers to the common sense referent of "what it's like to be me" or similar (for intuition pumps, read how Chalmers defines consciousness).

To me it comes down to two related notions:

1. I don't see how non-conscious facts could possibly ever imply a tragedy. It's a tragedy if someone gets hurt, but in what sense is something ever a tragedy if no one's actually experiencing the badness?

2. Likewise, how could something ever be good if no one experiences it? What fact could I learn that would make me leap with joy, assuming the fact had no bearing on whether someone had a positive life or experience?

In practice, arguments against pure hedonism come down to pointing out a few things that are left out in the naive hedonistic view. These include: a lack of diversity of experiences, a lack of concern for truth, a lack of a coherent "adventure" that exists beyond the feeling of adventure. Fair enough, but I feel like these all could be bought at less than 1% of the price of regular hedonism. In other words, I think that I can still reasonably maintain that 99% of value comes from conscious experience and still keep these things.

Comment by matthew_barnett on Does improving animal rights now improve the far future? · 2019-09-16T18:52:03.587Z · score: 4 (4 votes) · EA · GW
If it's referring to making current humans more humane by changing their terminal values, then it's not clear to me how this would occur. My understanding is that animal rights activities tend to spend their time showing people how bad conditions are, and I see no mechanism by which this would change people's terminal values.

Plenty of people think that slavery is bad now that it has been abolished. It is intuitively clear to me that when people lose the ability to rationalize something, they will tend to be more careful before endorsing it as good (especially if it causes a bunch of suffering). Right now, few people explicitly care about animal suffering (besides maybe that of dogs and cats) -- but in a world where factory farming is remembered as a great crime of the past, I expect our attitudes to shift.

Comment by matthew_barnett on What are people's objections to earning-to-give? · 2019-04-14T19:08:24.216Z · score: 3 (5 votes) · EA · GW

I strongly empathize with this framing.

Comment by matthew_barnett on Altruistic action is dispassionate · 2019-03-31T02:08:04.434Z · score: 2 (2 votes) · EA · GW

Yeah, there are many possible ways to frame this. I like the idea of a coherent agent, but that might just be the part of me capable of putting verbal thoughts on a forum page. In any case, over time I've experienced a shift from viewing preferences as different types which compete, to viewing preferences as all existing together in one coherent thread. Of course, my introspection is not perfect, but this is how I feel when I look inward to find what I really want.

I do not claim that this is what other people feel. However, to the extent which I find the idea pleasing, I certainly would like if people shared my view.

Comment by matthew_barnett on Altruistic action is dispassionate · 2019-03-31T00:47:44.418Z · score: 3 (3 votes) · EA · GW

I understand that you aren't saying that altruism is completely unemotional, but I still want to emphasize the role that emotion plays. I do not distinguish too sharply between things that I want for personal reasons, and things that I want for altruistic concerns. Personally, when I learned about utility functions, it was a watershed moment for my understanding of ethics.

If you describe an agent as having a utility function, it means that all of its preferences are commensurable. To put it another way, the agent might want to have a cup of coffee and also want world peace. Importantly, the two preferences are of the same type -- I don't distinguish between moral wants and non-moral wants.

Therefore, when I say that I am altruistic, I am not saying that it is my duty to be so. If I put my biases aside and dispassionately calculate the action with the highest utility, it is because I truly believe that being dispassionate is the best way to get what I want. I would do the same for actions which concern my own life and feelings.

Splitting our motivation into two pieces, one personal, and one moral, seems like a remnant of our evolutionary past. It seems to me that people naturally believe in social norms, moral standards, duty, virtues and these don't always align with what they personally want. I seek to dissolve this whole dichotomy: there is simply a world that I want to be in, and I am trying to do whatever is necessary to make that world the real one.

Comment by matthew_barnett on Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations · 2019-03-28T21:57:23.654Z · score: 2 (2 votes) · EA · GW

Are you arguing against any particular finding? What specifically, do you disagree with? Which one of these articles would be rebuked by the academic community as a whole, and why? Until you can answer these questions, I don't really understand your critique.

Comment by matthew_barnett on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-23T05:56:55.537Z · score: 3 (3 votes) · EA · GW

Please, what AIA organizations? MIRI?

Yes, MIRI is one. FHI is another.

That being said, I wish you would've examined the actual claims I presented. I did not claim AI researchers are worried about a malevolent AI.

You did, however, say "The theoretical threat of a malevolent strong AI would be immense. But that does not mean one has cause or a valid reason to support CS grad students financially." I assumed you meant that you believed someone was giving an argument along the lines of "since malevolent AI is possible, then we should support CS grads." If that is not what you meant, then I don't see the relevance of mentioning malevolent AI.

Since you also stated that you had an issue with me not being charitable, I would reciprocate likewise. I agree that we should be charitable to each other's opinions.

Having truthful views is not about winning debates. It's about making sure that you hold good beliefs for good reasons, end of story. I encourage you to imagine this conversation not as a way to convince me that I'm wrong, but as a case study of what the current arguments are and whether they are valid. In the end, you don't get points for winning an argument. You get points for actually holding correct views.

Therefore, it's good to make sure that your beliefs actually hold up under scrutiny. Not in a "you can't find the flaw after 10 minutes of self-sabotaged thinking" sort of way, but in a very deep understanding sort of way.

It is donating my income, as an individual that I take offense. People can fund whatever they want: A new planetary wing at a museum, research in robotics, research in CS, research in CS philosophy.

I agree people can fund whatever they want. It's important to make a distinction between normative questions and factual ones. It's true that people can fund whatever project they like; however, it's also true that some projects have high value from an impersonal utilitarian perspective. It is this latter category that I care about, which is why I want to find projects with particularly high value. I believe that existential risk mitigation and AI alignment are among these projects, although I fully admit that I may be mistaken.

Although, Earning to Give does not follow. Thinking about and discussing the risks of strong AI does make sense, and we both seem to agree it is important.

If you agree that thinking about something is valuable, why not also agree that funding that thing is valuable? It seems you think that the field should just get a certain threshold of funding that allows certain people to think about the problem just enough -- but not too much. I don't see a reason to believe that the field of AI alignment has reached that critical threshold. On the contrary, I believe the field is far from it at the moment.

Following the money, there is not a clear answer on which CS grad students are receiving it. Low or zero transparency. MIRI or no? Am I missing some public information?

I suppose when you make a donation to MIRI, it's true that you can't be certain about how they spend that money (although I might be wrong about this, I haven't actually donated to MIRI). Generally though, funding an organization is about whether you think that their mission is neglected, and whether you think that further money would make a marginal impact in their cause area. This is no different than any other charity that EA aligned people endorse.

Second, what do you define as advanced AI? Before, I said strong AI. Is that what you mean? Is there some sort of AI in between? I'm not aware.

It might be confusing that there are all these terms for AI. To taboo the words "advanced AI", "strong AI", "AGI" or others -- what I am worried about is an information processing system that can achieve broad success in cognitive tasks in a way that rivals or surpasses humans. I hope that makes it clear.

This is crucially where I split with AI safety. The theory is an idea of a belief about the far future. To claim that we're close to developing strong AI is unfounded to me.

I'm not quite clear what you mean here. If you mean we are worried about AI in the far future, fine. But then in the next sentence you say that we're worried about being close to strong AI. How can we simultaneously believe both? If AI is near, then I care about the near-term future. If AI is not near, then I care about the long-term future. I do not claim either, however. I think it is an important consideration even if it's a long way off.

Neural networks do not seem to be (from my light research).

This is what I'm referring to when I talk about how important it is to really, truly understand something before forming an informed opinion about it. If you admit that you have only done light research, how can you be confident that you are right? Doing a bit of research might give you an edge for debate purposes, but we are talking about the future of life on Earth here. We really need to know the answers to these questions.

Perhaps a large rogue solar flair or the Yellowstone supervolcano. Or perhaps even a time travel analogy would suffice ~ time travel safety research. There is no tractability/solvability.

Lumping all existential risks into a single category and then asserting that there's no tractability is an oversimplification. The first thing we need is an estimate of the probability of any given existential risk occurring. For instance, if scientists discovered that the Yellowstone supervolcano was probably going to erupt sometime in the next few centuries, I'd definitely agree we should do research in that area, and we should fund that research as well. In fact, some research is being done in that area, and I'm happy that it's being done.

A belief in an idea about the future is a poor reason for claiming some sort of tractability for funding.

I'd agree with you if it were an idea asserted without evidence or reason. But there is a large body of arguments for why this is a tractable field, and for what we can do now, yes, right now, to make the future safer. Ignorance of these arguments does not mean they do not exist.

Remember, ask yourself first what is true. Then form your opinion. Do not go the other way.

Comment by matthew_barnett on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T05:07:43.057Z · score: 2 (2 votes) · EA · GW

I don't think anyone here is suggesting that we support random CS grads financially, although people might endorse something like that indirectly by funding AI alignment research, which tends to attract CS graduates.

I agree that simply because an asteroid collision would be devastating, it does not follow that we should necessarily focus on that work in particular. However, there are considerations I think you might be overlooking.

The reason people are concerned with AI alignment is not only the scope of the issue, but also the urgency and tractability of the problem. The urgency comes from the idea that advanced AI will probably be developed this century. The tractability comes from the idea that there exists a set of goals that we could, in theory, put into an AI, goals that are congruent with ours; you might want to read up on the Orthogonality Thesis.

Furthermore, it is dangerous to assume that we should judge the effectiveness of certain activities based merely on prior evidence or track records. Some activities are simply infeasible to judge post hoc, and this one is among them. The inherent nature of the problem is that we will probably only get one chance to develop superintelligence, because if we fail, we will all probably die or otherwise be permanently unable to alter its goals.

To give you an analogy, few would agree that because climate change is an unprecedented threat, it follows that we should wait until after the damage has been done to assess the best ways of mitigating it. Unfortunately, for issues of global scope, it doesn't look like we get a redo if things start going badly.

If you want to learn more about the research, I recommend reading Superintelligence by Nick Bostrom. Despite your statement, the vast majority of AI alignment researchers are not worried about malevolent AI. I mean this in the kindest way possible, but if you really want to be sure that you're on the right side of a debate, it's worth understanding the best arguments against your position, not the worst.

Comment by matthew_barnett on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T08:01:40.744Z · score: 5 (5 votes) · EA · GW

A very interesting and engaging article indeed.

I agree that people often underestimate the value of strategic value spreading. Oftentimes, proposed moral models that AI agents will follow have some lingering narrowness to them, even when they attempt to apply the broadest of moral principles. For instance, in Chapter 14 of Superintelligence, Bostrom highlights his common good principle:

Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

Clearly, even something as broad as that can be controversial. In particular, it says nothing about non-human interests except insofar as protecting them happens to be among humanity's widely shared ethical ideals.

One thing I would add is that AI alignment researchers who hold more traditional moral beliefs (as opposed to wide moral circles and transhumanist beliefs) are probably less likely to believe that moral value spreading is worth much. The reason is obvious: if everyone around you holds more or less the same values that you do, then why change anyone's mind? This may explain why many people dismiss the activity you proposed.

Comment by matthew_barnett on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T07:47:37.425Z · score: 2 (2 votes) · EA · GW

Just because an event is theoretical doesn't mean that it won't occur. An asteroid hitting the Earth is also theoretical, but I think you would find it quite real if it actually struck.

Some say that superintelligence has no precedent, but I think that overlooks a key fact. The rise of Homo sapiens has radically altered the world, and all signs point toward intelligence as the cause. Our current understanding is that intelligence is just a matter of information processing, and therefore there should be a way for our own computers to do it some day, if only we figure out the right algorithms to implement.

If we learn that superintelligence is impossible, that means our current best scientific theories are wrong, and we will have learned something new, because it would indicate that humans are somehow cosmically special, or have at least hit the ceiling for general intelligence. On the flip side, if we create superintelligence, none of our current theories about how the world works need to be wrong.

That's why it's important to take the possibility seriously: the best evidence we have available tells us that it's possible, not that it's impossible.

Comment by matthew_barnett on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T07:36:21.727Z · score: 3 (3 votes) · EA · GW

But it seems that it would be very bad if everyone took this advice literally.

Fortunately, not everyone does take this advice literally :).

This is very similar to the tragedy of the commons. If everyone acts out of their own self-interest, then everyone will be worse off. However, the situation as you described it does not fully reflect reality, because none of the groups you mentioned are actually trying to influence AI researchers at the moment. Therefore, MCE has a decisive advantage. Of course, this is always subject to change.

In contrast, preventing the extinction of humanity seems to occupy a privileged position

I find that people will often dismiss any specific moral recommendation for AI except this one. Personally, I don't see a reason to think that there are universal principles of minimal alignment. You may argue that human extinction is something that almost everyone agrees is bad, but then the principle of minimal alignment has shifted to "have the AI prevent things that almost everyone agrees are bad," which is another privileged moral judgement that I see no intrinsic reason to hold.

In truth, I see no neutral assumptions in which to ground AI alignment theory. The problem is made even more difficult because differences in moral theory that look relatively small, from the point of view of an information-theoretic description of moral values, can lead to drastically different outcomes. However, I do find hope in moral compromise.

Comment by matthew_barnett on Cosmic EA: How Cost Effective Is Informing ET? · 2018-01-01T20:49:06.073Z · score: 1 (1 votes) · EA · GW

There seems to be something of a consensus among effective altruists that the Rare Earth explanation is the most likely resolution of the Fermi Paradox. I tend to agree, but like you, I think that effective altruists generally underestimate the risk from aliens.

However, I would caution against a few assumptions you made in the article. The first is the assumption that aliens would be anything like what we see in the movies: rogue civilizations confined to particular quadrants of the galaxy. As many have pointed out, a civilization with artificial superintelligence would likely be able to colonize the entire galaxy within just a few million years, which means that if aliens with advanced artificial intelligence existed, we probably would have seen evidence of their existence already. Of course, maybe they're hiding, but now you're running up against Occam's razor.

The second assumption is that we can affect the affairs of civilizations at our own stage of development. Even granting the generous assumption that we have useful knowledge to share with aliens at our stage of development, it is unlikely that we would ever find aliens at exactly that stage. A civilization just decades behind us would be impossible to contact, since it would lack radio, and a civilization just centuries ahead would probably have artificial intelligence already.

Comment by matthew_barnett on We Could Move $80 Million to Effective Charities, Pineapples Included · 2017-12-14T04:50:10.487Z · score: 4 (4 votes) · EA · GW

Just a thought: if you think that earning to give is a good strategy, then this is one of the best things you can do as an effective altruist. To put things in perspective, donating $50,000 a year to an effective charity for 20 years comes to about $1 million, which is only around 1% of the $80 million being given away here, so a good comment in that thread that shifts even a small fraction of the fund toward effective charities could do as much good as decades of personal donations. I hope that helps to internalize just what's at stake.

Just make sure that the Pineapple Fund doesn't end up generating animosity towards EA. If it takes 100 good reasons to change someone's mind, it only takes one really bad one to turn them away. The person doing the giveaway said that they are interested in the SENS Foundation, which is pretty good evidence that they care about the long-term future. We might be able to do the most good if we focus our efforts on that cause area specifically.