Posts

How to assign numerical values to individual welfare? 2021-07-29T13:52:19.322Z

Comments

Comment by Frank_R on The problem of artificial suffering · 2021-09-25T07:54:05.339Z · EA · GW

I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks. 

Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it should be difficult to tweak the code of the emulation so that extreme suffering becomes impossible. Although this approach may work for AGI, you probably need a different strategy for emulated minds.

Comment by Frank_R on On the Expanded Implementation of Nuclear Energy: An evaluation of the present technology and its potential to reduce global CO2 Emissions · 2021-09-25T07:42:55.384Z · EA · GW

Thank you very much for sharing your paper. I have heard somewhere that thorium reactors could be a big deal in the fight against climate change. The advantages would be that thorium reserves are larger than uranium reserves and that you cannot use thorium to build nuclear weapons. Do you have an opinion on whether the technology can be developed fast enough and deployed worldwide?

Comment by Frank_R on Magnitude of uncertainty with longtermism · 2021-09-15T12:49:43.720Z · EA · GW

I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of the extinction of humankind. Suppose you suggest some policy that reduces this risk by two percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.

This kind of reasoning would probably be wrong if you instead had a 2% chance of a very good outcome, such as unlimited cheap energy, at the cost of an additional extinction risk of 1%.
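To make the arithmetic explicit (a minimal sketch, assuming the existing risk and the newly introduced risk simply add up):

P(extinction | status quo)  = 10%
P(extinction | with policy) = 10% − 2% + 1% = 9%

So the policy still lowers the total probability of an irreversible catastrophe. In the second case the total extinction risk rises from 10% to 11%, and the 2% chance of a very good outcome has to be weighed against that increase rather than cancelling it out.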

Moreover, you cannot argue that everything will be OK several thousand years in the future if humankind is eradicated instead of "just" reduced to a much smaller population size. 

Your forum and your blog post contain many interesting thoughts, and I think that the role of high variance in longtermist reasoning is indeed underexplored. Nevertheless, I think that even if everything you have written is correct, it would still be sensible to limit global warming and to address extinction risks.

Comment by Frank_R on Anti-Aging and EA (Recorded Talk) · 2021-08-18T18:58:13.781Z · EA · GW

Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.

Comment by Frank_R on Anti-Aging and EA (Recorded Talk) · 2021-08-18T18:53:00.847Z · EA · GW

Thank you for your answer and for the links to the other forum posts.

Comment by Frank_R on Anti-Aging and EA (Recorded Talk) · 2021-08-17T18:16:32.509Z · EA · GW

How would you answer the following arguments?

  1. Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.

  2. From a utilitarian perspective it does not matter whether there are ten generations of people living 70 years each or one generation of people living 700 years, as long as they are happy. Therefore, the moral value of life extension is neutral.

I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious to hear your answer.

Comment by Frank_R on Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model) · 2021-08-14T05:57:25.669Z · EA · GW

Maybe you are interested in the following paper, which deals with questions similar to yours:

Existential risk and growth - Leopold Aschenbrenner (Columbia University) - Global Priorities Institute

Comment by Frank_R on How to assign numerical values to individual welfare? · 2021-07-30T06:11:18.848Z · EA · GW

My question was mainly the first one. (Are 20 insects happier than one human?) Of course, similar problems arise when you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence happier than one millionaire?)

The reason why I have chosen interspecies comparison as an example is that it is much harder to compare the welfare of members of different species. At least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater.

The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much! 

Comment by Frank_R on Digital People Would Be An Even Bigger Deal · 2021-07-28T09:04:06.134Z · EA · GW

You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes could only be avoided if there were a world government that monitors what happens on every computer capable of running digital people. Of course, such a powerful government is a risk of its own.

Moreover, I think that a benevolent world government could be realised only several centuries in the future, while mind uploading could be possible by the end of this century. Therefore, I believe that bad outcomes are much more likely than good ones. I would be glad to hear any arguments for why this line of reasoning could be wrong.

Comment by Frank_R on ‘High-hanging Fruits’ and Coordination · 2021-07-26T13:07:52.124Z · EA · GW

I have had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered, and that you will need multi-generational projects to develop further technologies. Just to name an example, think of a Dyson sphere. If the sun were enclosed by a Dyson sphere, each individual would have much more energy available, or there would be enough room for many additional individuals. Obviously, you would need to invest a lot of money before getting the first non-zero payoff, and the potential payoff could be large.

Does this mean that effective altruists should prioritise building a Dyson sphere? There are at least three objections:

  1. According to some ethical theories (person-affecting views, certain brands of suffering-focused ethics) it may not be desirable to build a Dyson sphere.
  2. It is not clear whether it is possible to improve existing technologies incrementally such that you obtain a Dyson sphere in the end. Maybe you start with space tourism, then hotels in orbit, then giant solar plants in space, etc. It could even be the case that each intermediate step is profitable, so that market forces lead to a Dyson sphere without the EA movement spending resources.
  3. If effective altruism becomes too strongly associated with speculative ideas, it could be negative for the growth of the movement.

Please do not misunderstand me. I am very sympathetic towards your proposal, but the difficulties should not be underestimated, and much more research is necessary before one can say with sufficient certainty that the EA movement as a whole should prioritise some kind of high-hanging fruit.

Comment by Frank_R on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-06T20:15:54.376Z · EA · GW

Thank you for sharing your thoughts. What do you think of the following scenario?

In world A the risk of an existential catastrophe is fairly low and most currently existing people are happy.

In world B the existential risk is slightly lower. In expectation, 100 billion additional people (compared to A) will live in the far future, and their lives are better than those of people today. However, this reduction in risk is so costly that most of the currently existing people have miserable lives.

Your theory probably favours option B. Is this intended?

Comment by Frank_R on Open Thread: July 2021 · 2021-07-04T11:59:56.664Z · EA · GW

Hi,

maybe you will find this overview of longtermism interesting, if you have not already come across it:

Intro to Longtermism | Fin Moorhouse

Comment by Frank_R on Open Thread: May 2021 · 2021-05-07T08:39:45.799Z · EA · GW

Hello! For as long as I can remember, I have been interested in the long-term future and have asked myself whether there is any possibility to steer the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.

Since then, I have often taken a look at this forum and have read 'The Precipice' by Toby Ord. I am not quite sure whether I agree with every belief that is common among EAs. Nevertheless, I think that we can agree on many things.

My highest priorities are avoiding existential risks and improving decision making. Moreover, I think about the consequences of technological stagnation and the question of whether there are events far in the future that can only be influenced positively if we start working on them soon. At the moment my time is very constrained, but I hope that I will be able to participate in the discussion.