Posts

Curing past sufferings and preventing s-risks via indexical uncertainty 2018-09-27T10:48:26.411Z · score: -3 (9 votes)
Islands as refuges for surviving global catastrophes 2018-09-13T13:33:32.528Z · score: 3 (5 votes)
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T12:54:12.976Z · score: 0 (16 votes)
[Draft] Fighting Aging as an Effective Altruism Cause 2018-04-16T10:18:23.041Z · score: 1 (15 votes)
[Paper] Surviving global risks through the preservation of humanity's data on the Moon 2018-03-03T18:39:56.988Z · score: 10 (9 votes)
[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale 2018-01-14T10:07:26.123Z · score: 10 (12 votes)
[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence 2018-01-04T14:31:56.824Z · score: 2 (4 votes)
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” 2017-11-25T11:51:50.606Z · score: 1 (3 votes)
Military AI as a Convergent Goal of Self-Improving AI 2017-11-13T13:02:43.446Z · score: 5 (5 votes)
Surviving Global Catastrophe in Nuclear Submarines as Refuges 2017-04-05T08:06:31.780Z · score: 14 (14 votes)
The Map of Impact Risks and Asteroid Defense 2016-11-03T15:34:30.738Z · score: 7 (7 votes)
The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention) 2016-10-22T10:22:45.429Z · score: 11 (11 votes)
The map of organizations, sites and people involved in x-risks prevention 2016-10-07T12:17:15.954Z · score: 9 (11 votes)
The Map of Global Warming Prevention 2016-08-11T20:03:47.241Z · score: 8 (8 votes)
Plan of Action to Prevent Human Extinction Risks 2016-03-14T14:51:15.784Z · score: 11 (11 votes)

Comments

Comment by turchin on Thoughts on short timelines · 2018-10-27T19:02:27.695Z · score: 0 (0 votes) · EA · GW

Yes, it is clear. My question was: "Do we have any specific difference in mind between AI strategies for the 1-per-cent-in-10-years case and the 10-per-cent-in-10-years case?" If we are going to ignore the risk in both cases, it makes no difference whether it is 1 per cent or 10 per cent.

I don't know of any publicly available short-term strategy for the 10-year case, whatever the probability.

Comment by turchin on Thoughts on short timelines · 2018-10-26T11:12:04.176Z · score: 0 (0 votes) · EA · GW

What is the actionable difference between the "1-2 per cent" and "10 per cent" predictions? If we knew that an asteroid was coming toward Earth and would hit it with one of these probabilities, how would our attempts to divert it depend on the probability of impact?

Should we ignore a 1 per cent probability, but go all-in on preventing a 10 per cent one?

If there is no difference in actions, the difference in probability estimates is rather meaningless.

Comment by turchin on The Map of Impact Risks and Asteroid Defense · 2018-10-15T13:28:11.536Z · score: 0 (0 votes) · EA · GW

Thanks - I just saw this comment now. I didn't really miss the idea; I just decided not to include it here.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-30T15:20:34.918Z · score: 0 (0 votes) · EA · GW

If the connection between observer-moments is not regarded as important (as I understand, this is your point of view: there is no identity that chains observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past sufferings.

In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
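(To formalize the dilution claim minimally - assuming simple self-sampling over observer-moments, an assumption the argument implies but does not state: with n suffering observer-moments and N happy ones, P(suffering) = n / (n + N), which goes to 0 as N → ∞.)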

I addressed the problem of simulating evil AIs in the "Message to any future AI" text, which we already discussed. The main types of Evil AIs may be predicted by simulating the worlds where they are likely to appear. We could gain an advantage over Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don't need to discuss all the technical details of how we will do it, as a Benevolent AI will find better ways. (The idea comes from Rolf Nelson.)

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:35:04.683Z · score: -1 (1 votes) · EA · GW

See the patches in the comments below: there are ways to do the trick without increasing the total number of suffering observer-moments.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:32:46.796Z · score: 0 (0 votes) · EA · GW

It will also increase the number of happy observer-moments globally, because of the happiness of being saved from agony, plus the lowered number of Evil AIs, which will know that they will lose and be punished.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:29:22.286Z · score: 0 (0 votes) · EA · GW

I have just found how the whole trick will increase total welfare in the multiverse (copied from the comment below):

No copies of suffering observer-moments will be created - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he has been saved from hell.

It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a Benevolent AI is capable of saving observers from Evil AIs (and of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs - which can't win and know it - will increase the total positive utility in the universe.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:26:10.827Z · score: 0 (0 votes) · EA · GW

This is because you use a non-copy-friendly theory of personal identity, which is reasonable but has other consequences.

I patched the second problem in the comments above - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he has been saved from hell.

It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a Benevolent AI is capable of saving observers from Evil AIs (and of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs - which can't win and know it - will increase the total positive utility in the universe.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:50:16.838Z · score: 0 (0 votes) · EA · GW

See my patch to the argument in the comment to Lukas: we can simulate those moments which are not in intense pain, but are still very close to the initial suffering observer-moment, so that they can be regarded as its continuation.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:48:11.038Z · score: 0 (0 votes) · EA · GW

It is an algorithmic trick only if personal identity is strongly connected to this exact physical brain. But in the text it is assumed, without any discussion, that identity is not brain-bound. However, that doesn't mean I completely endorse this "copy-friendly" theory of identity.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:39:20.642Z · score: 0 (0 votes) · EA · GW

I can see three possible problems:

The method will create new suffering moments, possibly even suffering moments that would not otherwise exist. But there is a patch for this: see my comment to Lukas above.

The second possible problem is that the universe will be tiled with past simulations trying to resurrect every ant that ever lived on Earth – and thus there will be an opportunity cost, as many other good things could be done instead. This could be patched by what might be called a "cheating death in Damascus" approach, where some timelines choose not to play this game by using a random generator, or by capping the amount of resources they may spend on preventing past sufferings.

The third problem could be ontological, like a wrong theory of human personal identity. But if a (pseudo-)Benevolent AI has a wrong understanding of human identity, we will have many other problems, e.g. during uploading.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:30:33.097Z · score: 0 (0 votes) · EA · GW

Reading your comment, I came to the following patch of my argument: the benevolent AI starts not from S(t), but immediately from many copies of those S(t+1) which have much less intense suffering, yet still have enough similarity with S(t) to be regarded as its next moment of experience. It is not S(t) that is diluted, but the next moments of S(t). This removes the need to create many S(t)-moments, which seems morally wrong and computationally intensive.
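(To make the patch concrete, under the same illustrative self-sampling assumption as above: if the benevolent AI creates k low-suffering copies of S(t+1) for each original high-suffering continuation, the chance that the experienced next moment is the high-suffering one is 1 / (k + 1), which can be made arbitrarily small by increasing k.)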

In my plan, the FAI does not decrease the number of suffering moments; instead, it creates an immediate way out of each such moment. A total utilitarian will not see the difference - but total utilitarianism is just a theory that was not designed to account for the length of suffering, and for any particular observer this will be salvation.

Comment by turchin on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2018-07-08T14:09:55.150Z · score: 0 (0 votes) · EA · GW

What if an AI exploring moral uncertainty finds that there is provably no correct moral theory and no moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle just by aggregating humans' opinions about possible situations?

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-28T10:08:13.167Z · score: -1 (1 votes) · EA · GW

One more problem with the idea that I should consult my friends before publishing a text is "friend bias": people who are my friends tend to react more positively to the same text than people who are not. I personally had a situation where my friends told me my text was good and non-info-hazardous, but when I presented it to people who didn't know me, their reaction was the opposite.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-26T09:30:03.873Z · score: -2 (4 votes) · EA · GW

Sometimes, when I work on a complex problem, I feel as if I have become one of the best specialists in it. Sure, I know three other people who are able to understand my logic, but one of them is dead, another is not replying to my emails, and the third has his own vision, affected by some obvious flaw. So none of them can give me correct advice about the informational hazard.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T11:51:54.495Z · score: 3 (3 votes) · EA · GW

It would be great to have some kind of committee for info-hazard assessment - a group of trusted people who would: a) take responsibility for deciding whether an idea should be published or not; b) read all incoming suggestions in a timely manner; c) have publicly known contacts (though maybe not all their identities).

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T11:47:21.918Z · score: 0 (0 votes) · EA · GW

It was in fact a link to an article about how to kill everybody using multiple simultaneous pandemics. This idea may be regarded by some as an informational hazard, but it was already suggested by terrorists from the Voluntary Human Extinction movement. I also discussed it with some biologists and other x-risk researchers, and we concluded that it is not an infohazard. I can send you a draft.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-24T10:21:14.295Z · score: 0 (0 votes) · EA · GW

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by group think. So you might want to think of solutions that take that into consideration.

Yes, I have met the same problem. The best way to find people who are interested in, and able to understand, a specific problem is to publish the idea openly in a place like this forum - but in that case, hypothetical bad people will also be able to read the idea.

Also, info-hazard discussion applies only to "medium-level safety researchers", as top-level ones have enough authority to decide what counts as an info hazard, and (bio)scientists are not reading our discussions. As a result, the whole fight against info hazards applies only to a small and not very relevant group.

For example, I was advised not to repost a scientific study, as even reposting it would create an informational hazard by attracting attention to its dangerous applications. However, I see the main problem in the fact that such scientific research was done and openly published; our reluctance to discuss such events only lowers our strategic understanding of the different risks.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T19:15:00.631Z · score: 3 (3 votes) · EA · GW

That is absolutely right, and I always discuss ideas with friends and advanced specialists before discussing them publicly. But in doing this, I discovered two obstacles:

1) If an idea is really simple, it is likely not new; but in the case of a complex idea, not many people are able to evaluate it properly. Maybe if Bostrom spent a few days analysing it, he would say "yes" or "no", but typically the best thinkers are very busy with their own deadlines and will not have time to evaluate the ideas of random people. So you are limited to your closer friends, who may be biased in your favour and ignore the info hazard.

2) "False negatives". This is the situation when a person thinks that the idea X is not an informational hazard because it is false. However, the reasons why he thinks that the idea X is false are wrong. In that situation, the info hazard assessment is not happening.

Comment by turchin on Expected cost per life saved of the TAME trial · 2018-05-27T10:08:03.876Z · score: 0 (0 votes) · EA · GW

That is why I think we should divide the discussion into two lines: one is the potential impact of simple life-extension interventions, of which there are many; the other is whether metformin could be such a simple intervention.

In the case of metformin, there is a tendency to prescribe it to a larger share of the population as a first-line drug for type 2 diabetes, but I think its safety should be personalized via genetic tests and bloodwork for vitamin deficiencies.

Around 30 million people in the US, or 10 per cent of the population, already have type 2 diabetes (https://www.healthline.com/health/type-2-diabetes/statistics), and this share of the population is eligible for metformin prescriptions.

This means we could get large life-expectancy benefits by replacing prescription drugs not associated with longevity with longevity-associated drugs for the same condition: metformin for diabetes, losartan for hypertension, aspirin for blood thinning, etc.

Comment by turchin on Expected cost per life saved of the TAME trial · 2018-05-26T10:32:11.401Z · score: 1 (1 votes) · EA · GW

Thanks for this detailed analysis. I think the main difference in our estimations is the number of adopters, which is 1.3 percent in your average case. In my estimation, it was almost half of the world population.

This difference highlights an important problem: how to get a really good life-extending intervention widely adopted. This question relates not only to metformin but to any other intervention, including already-known ones such as exercise, a healthy diet and quitting smoking, all of which depend on a person's will.

Taking a pill requires less effort than quitting smoking, and around 70 percent of the US adult population takes some form of supplement. https://www.nutraceuticalsworld.com/contents/view_online-exclusives/2016-10-31/over-170-million-americans-take-dietary-supplements/

However, the supplements market depends on expensive advertising, not on the real benefits of the supplements.

Comment by turchin on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-12T15:52:43.445Z · score: 0 (0 votes) · EA · GW

I think an actually good step in the EA direction would be to find a relatively cheap combination of chemicals that provides longer-term fixation, or perhaps to preserve brain slices (as Lenin's brain was preserved).

I am interested in writing something about cryonics as a form of EA, but the main problem here is price. The starting price of a funeral is 4,000 pounds in the UK, and funerals are not much cheaper in poor countries. Cryonics should be cheaper than this to be successful and affordable.

Comment by turchin on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-11T15:09:57.063Z · score: 1 (1 votes) · EA · GW

To become part of EA, cryonics must become cheap; and to become cheap, it should, imho, be pure chemical fixation without cooling - something like aldehyde fixation without cryopreservation - which could cost only a few dollars per brain.

Comment by turchin on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-30T19:43:28.588Z · score: 0 (0 votes) · EA · GW

Were the chickens' preferences measured by EEG or by choice? See also my comment above.

Comment by turchin on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-30T19:35:45.208Z · score: 0 (0 votes) · EA · GW

How can we know that they are unhappy? Photos of overcrowded farms look terrible, but animals may have a different value structure, like:

  • warm
  • safe
  • many friends
  • longer life expectancy than in the forest
  • guaranteed access to unlimited amounts of food.

Technically, we have two ways to measure their preferences: do they feel constant pain according to their EEG, and do they want to escape at any price - or are they even happy to be slaughtered?

Comment by turchin on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-19T20:48:09.505Z · score: 2 (2 votes) · EA · GW

I am puzzled by the value of unborn animals in this case. OK, fewer chickens will be born and later culled, but that means some chickens will never be born at all. In the extreme case, the whole species of farm chicken could go extinct if there were no meat consumption.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-19T19:00:42.263Z · score: 0 (0 votes) · EA · GW

In the next version of the article, I will present a general equation that tries to answer all these concerns. It will be: (price of the experiment) × (probability of success) + (indirect benefits of the experiment) − (fixed price of metformin pills for life) × (number of people) × (share of adopters) × (probability of success of the experiment) − (unexpected side effects) − (growth of food consumption because of the higher population). Is anything missing?
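As a sketch only, here is the same equation in code. Every number below is a made-up placeholder (except the 500 USD lifetime pill price, which is the figure used elsewhere in this thread); only the term structure comes from the equation above:

```python
# Sketch of the proposed cost-benefit equation; terms and signs exactly
# as listed in the comment above. All inputs are hypothetical placeholders.

def net_value(price_of_experiment, p_success, indirect_benefits,
              lifetime_pill_price, n_people, share_adopters,
              side_effect_costs, extra_food_costs):
    return (price_of_experiment * p_success
            + indirect_benefits
            - lifetime_pill_price * n_people * share_adopters * p_success
            - side_effect_costs
            - extra_food_costs)

# Example call with made-up numbers:
print(net_value(price_of_experiment=65e6, p_success=0.5, indirect_benefits=1e8,
                lifetime_pill_price=500, n_people=5e9, share_adopters=0.5,
                side_effect_costs=1e9, extra_food_costs=1e9))
```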

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-19T14:22:03.136Z · score: -1 (1 votes) · EA · GW

OK. I just had two ideas at different moments in time; that is why there are two comments.

Again, I think the problem of expensive pills is not a problem of antiaging therapies, but a more general problem of expensive medicine and poverty. I should not try to solve all possible problems in one article, as it would immediately grow to the size of a book.

Most drugs we now consume are overpriced compared with bulk prices; food, too, is much more expensive at retail. I think this is an important problem, but it is a different one.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T17:13:13.879Z · score: 0 (0 votes) · EA · GW

How could it explain that diabetics lived longer than healthy people?

Anyway, we need a direct test on healthy people to know if it works or not.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T16:33:51.153Z · score: 0 (0 votes) · EA · GW

Also, Alibaba offers metformin at 5 USD per kg, which implies that a lifelong supply could be bought for something like 50 USD.

https://www.alibaba.com/product-detail/HOT-SALE--99-High-Purity_50033115776.html?spm=a2700.7724857.main07.53.2c7f20b6ktwrdq
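(A rough sanity check of this figure, assuming a typical dose of about 1 g per day over roughly 27 years of use - both assumptions mine, for illustration: 1 g/day × 365 × 27 ≈ 10 kg, and 10 kg × 5 USD/kg = 50 USD.)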

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T16:27:19.339Z · score: 0 (0 votes) · EA · GW

Also, the global market for snake-oil life extension is 300 bn USD a year, so spending 10 times less would provide everybody with an actually working drug.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T16:02:34.520Z · score: 0 (0 votes) · EA · GW

It should probably be analysed how the bulk price of metformin could be lowered. For example, the global supply of vitamin C costs around 1 billion USD a year for 150 kt of bulk powder (roughly 7 USD per kg).

Also, I am not suggesting buying metformin for people. In the case of food fortification, the price is probably included in the total price of the food, and the manufacturers pay the lowest bulk price.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T15:58:40.402Z · score: 0 (0 votes) · EA · GW

By the time metformin reaches these markets as a life-extending drug, maybe somewhere around 2040, these markets will have developed.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T15:56:16.213Z · score: 0 (0 votes) · EA · GW

Yes, my typo - but 0.015 USD is still around 2 cents, as is said in the article.

About persuasion: it is a marketing problem, which was successfully solved for vitamins.

The global market for vitamin C is around 1 bln USD, btw. https://globenewswire.com/news-release/2016/08/24/866422/0/en/Global-Ascorbic-Acid-Market-Poised-to-Surge-from-USD-820-4-Million-in-2015-to-USD-1083-8-Million-by-2021-MarketResearchStore-Com.html

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T15:42:14.682Z · score: 0 (0 votes) · EA · GW

I updated the section about unborn people, and I am going to read more and add more links on the topic. Currently it reads:

2) Life extension will take resources, and fewer new people will be born; thus unborn people will lose the opportunity to be alive. It is not easy to measure the value of unborn people without some ethical axioms. If this value is very high, we should try to increase the population as much as possible, which seems absurd, as it would decrease the quality of life.

While life extension seems to mean fewer new people born each century, the total number of new people is still infinitely large in a situation of constant space exploration (Bostrom, 2003b). Also, fewer newborns in the 21st century could be compensated for by many more people being born in the following centuries, in a much better world with a higher quality of medicine.

If the explorable universe is infinite, the total number of newborn people will not change, but these people will move to later epochs, where they will live even better lives. Tipler (Tipler, 1997) suggested that in the end all possible people will be created by an enormous superintelligence at the Omega Point, and thus all possible people will get a chance to be alive. However, we can't count on such remote events.

But we can compare the potential 21st and 22nd centuries in our model. In the 21st century, fewer people will be born because of life extension; but after superintelligent AI or another powerful technology appears, supposedly around 2100, many more new people could live on Earth in much better conditions.

Also, it is not obvious that life extension will affect reproduction negatively, because of the "grandmother effect": people typically make decisions about reproduction early in life, and if they have grandparents available to help with babysitting, this would increase their willingness to have children, while also putting less strain on the economy outside the family.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T15:03:24.117Z · score: 0 (0 votes) · EA · GW

It was in fact discussed in section 7.1, where we wrote:

The price of a lifetime supply of metformin, 500 USD, will pay for an additional 1-3 years of life expectancy and a proportional delay of age-related diseases.

However, the actual price of the therapy for a person could be negative, because medical insurance companies will have an interest in people taking age-slowing drugs, as this will delay payments on medical bills. Insurance companies could earn interest on this money. For example, if 100K USD of medical bills is delayed by three years and the interest rate is two percent, the insurance company will earn 6,000 USD on the later billing. Thus, insurance companies could provide incentives, such as discounts or free aging treatments, to those who use antiaging therapies.
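(The arithmetic behind the 6,000 USD figure is simple, non-compounded interest: 100,000 × 0.02 × 3 = 6,000 USD. With annual compounding it would be 100,000 × (1.02^3 − 1) ≈ 6,121 USD; the quoted passage does not specify which is meant.)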

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T13:59:57.411Z · score: 0 (0 votes) · EA · GW

For example, here https://www.medindia.net/drug-price/metformin/diamet.htm one tablet of 500 mg costs 1 rupee, which is 0.015 USD.

The model was deliberately oversimplified: in reality, these 5 billion people will be born over the whole duration of the 21st century and will start taking the drug at different ages.

I will add more links to previous studies of metformin, as it is probably unclear from the article that it is an already-tested drug for other conditions.

As for fortification of food with useful microelements like iodine, fluoride and some vitamins, it probably has very high reach in developed countries. Some life-extending drugs have been shown to work when taken in courses and still have an effect on life expectancy.

The problem of taking a drug constantly is not specific to metformin; it applies to any drug a person has to take constantly, like hypertension drugs, antidepressants, vitamins etc. This is a different important problem that should be solved to improve public health. One possible solution is an app (such apps already exist) that records what one has taken and reminds one to take the drug.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T13:00:17.341Z · score: 0 (0 votes) · EA · GW

Yes, I just suggested it as an example of the absurd consequences of the idea that one should value unborn people as much as already-existing ones.

Anyway, if humanity survives and starts space exploration, an enormous number of new people will be born, and they will be born into much better conditions, with no aging or involuntary death. Thus, postponing new lives until a better world is created may be morally good.

I also added the following section to the article, where I tried to answer your and other commenters' concerns:

4.6. Analysis of the opportunity costs and possible negative consequences of life extension

A proper cost-benefit analysis of an effective altruist intervention requires looking into the possible opportunity costs of the suggested intervention. Here we list some considerations:

  1. Life extension will increase the global population, which will increase food and other prices and lower the quality of life of the poorest people. The main driver of population growth is fertility, and if fertility becomes lower, we move to the next point, about the value of unborn people. The main model of the future on which we rely is based on the idea of indefinite technological progress, and if progress outperforms population growth, there will be no negative consequences. So overpopulation would be a problem only in a situation of low fertility, low technological progress and very large life extension. This outcome is unlikely, as the same biotech that will help extend human life could also be used to produce more food resources. Also, in our model of the effect of simple interventions, the total effect on the population is rather insignificant - on the order of several percent, which is smaller than the expected error in population projections.

  2. Life extension will take resources, and fewer new people will be born; thus unborn people will lose the opportunity to be alive. As we discussed above, fewer newborns now could be compensated for by many more people being born in the future, in a much better world.

  3. An older population will be less innovative and diverse. The population is aging anyway, and slowing the aging process will make people behave as if they were younger at the same calendar age.

  4. Effects on the pension system and employment. Life extension may put pressure on the labor market and pension funds, but the general principle is that we can't kill people to make the economy better. In reality, if powerful life-extension technologies become available, the same technological level will revolutionize other spheres of society.

  5. The optimizer's curse could affect our judgment. The optimizer's curse is a mathematical result showing that when one chooses between several options whose values are uncertain estimates, the apparently best option is likely to have the largest positive error - that is, its value is likely overestimated (Smith & Winkler, 2006). This means that our estimate of metformin's efficiency in saving lives is likely an overestimate. However, we have around 4 orders of magnitude of margin for it still to be the best possible solution for saving lives. (A minimal simulation of this effect is sketched at the end of this comment.)

We will also explore the relationship between life extension and existential-risk prioritization in section 8.
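As a sketch of point 5, here is a minimal simulation of the optimizer's curse; all numbers are arbitrary illustrations, not estimates from the article:

```python
# Minimal illustration of the optimizer's curse (Smith & Winkler, 2006):
# even when every individual estimate is unbiased, the option selected
# for having the highest estimate is systematically overestimated.
import random

random.seed(0)
n_options, n_trials = 20, 10_000
total_bias = 0.0
for _ in range(n_trials):
    true_values = [random.gauss(0, 1) for _ in range(n_options)]
    estimates = [v + random.gauss(0, 1) for v in true_values]  # unbiased noise
    best = max(range(n_options), key=lambda i: estimates[i])   # pick apparent best
    total_bias += estimates[best] - true_values[best]
print("average overestimate of the chosen option:", total_bias / n_trials)
```

With these settings, the chosen option is overestimated by more than one noise standard deviation on average, even though no single estimate is biased.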

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T12:23:16.790Z · score: 0 (0 votes) · EA · GW

Metformin is not 8 dollars a day, but 2 cents a day in Indian pharmacies. As the TAME study and adoption will take at least a decade, people will in general be even richer by then and able to take the drug.

Metformin has already passed Phase 1, 2 and 3 trials for many other conditions, so its safety profile is well known. It is even known to extend the life of diabetics, so that they live longer than healthy people.

I explored the problem that not everybody will take it in the article. First, I assume that only half of the people will take it, for whatever reason. Second, I explore ways of solving the administration problem via food fortification or insurance pressure. Third, metformin is just one example of a simple intervention; there are some that are even easier to administer via food fortification, first of all vitamin D.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T10:47:28.758Z · score: 0 (0 votes) · EA · GW

The first is an argument against fertility in general, not only against life extension: higher fertility will also increase the population and food prices.

The second argument is pro-fertility: by having fewer children, I deny my unborn children the opportunity to be alive.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T10:25:15.281Z · score: 0 (0 votes) · EA · GW

OK, I will try without the infinite universe. If we assume there will be no existential risks and there will be space exploration, there will be a lot of new people (trillions?) - many more than the people actually living now - so extending the lives of currently living people does not take away future people's opportunity to be born.

Moreover, if future people are born into a world where there is no death, suffering or ageing, they can enjoy a much better life, for as long as they want; so there is no negative opportunity cost for such people - rather, there is the positive opportunity of being born into a better world.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T10:14:07.583Z · score: 0 (0 votes) · EA · GW

Speaking more generally, an ethical theory should be based on some kind of axioms. If we take the axiom that "human life is the most important value", we easily come to the conclusion that death and aging are bad.

If we take the utilitarian axiom that "suffering is bad", we can come to the same conclusion again, but via more complex constructions, which include attempts to correctly define suffering.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T10:04:07.992Z · score: 0 (0 votes) · EA · GW

I know the surprising fact that older people report better life satisfaction despite having more chronic pain, fewer opportunities and more diseases. I addressed this in the article in the following paragraph:

"This relationship is not obvious, as we are culturally adapted to see age-related changes as normal, and economically based surveys show a u-shaped relation between satisfaction and age (T. C. Cheng, Powdthavee, & Oswald, 2017). However, if all objective and subjective data are taken into account, a plot of this relationship produces a convex form with peak of quality of life at 18, followed by decline (Easterlin, 2006)."

The opportunity cost of life extension is that another person will never be born. But if we take into account the infinite size of the universe, he will be born somewhere else. Such infinities are known to cause ethical difficulties, as Bostrom explored here: https://nickbostrom.com/ethics/infinite.pdf

My position is that we should first take care of actually existing people in our spatial neighbourhood, and later take care of all non-existent possible people and of all animals. Maybe we will do this via the resurrection of all possible beings near the Omega Point, as Tipler suggested. The reason to do good level by level is that it helps us escape "utility monsters" which we can't handle at our current level of resources.

I will try to incorporate replies to your comments into the article.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-17T10:52:41.987Z · score: 1 (1 votes) · EA · GW

May I share the next version with you when all these changes are done? I expect the next revision to appear in 2 months.

Comment by turchin on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-17T00:41:29.768Z · score: 1 (1 votes) · EA · GW

Thank you for the review.

Taking a median date of AI arrival like 2062 is not informative, as in half of the cases it will not have arrived by 2062. The date of 2100 is taken as a very conservative estimate of when AI (or another powerful life-extending technology) will almost surely appear. Maybe this should be justified more in the text.

Yes, it is assumed by Barzilai and gwern that metformin will extend human life by about 1 year, based on many human cohort studies; but to actually prove it we need the TAME study, and until this study is finished, metformin can't be used as a life-extending drug. So every year of delay of the experiment means a year of delay in global implementation. For now, it has already been delayed for 2 years by lack of funds.

Given all the uncertainty, the simplified model provides only an order-of-magnitude estimate of the effect, but a more detailed model that takes into account the actual age distribution is coming.

As the paper is already too long, we tried to outline only the main arguments, or to provide links to articles where a detailed refutation is presented, as in the case of Gavrilov, 2010, where the problem of overpopulation is analysed in detail. But it is obvious now that these points should be clarified.

The next round of professional grammar editing is scheduled.

Comment by turchin on [Paper] Surviving global risks through the preservation of humanity's data on the Moon · 2018-03-15T16:55:48.644Z · score: 1 (1 votes) · EA · GW

Surely, the 7-million-year estimate has big uncertainty, and the time could be shorter, but it is unlikely to be shorter than 1 million years, as chimps would have to undergo important anatomical changes to become human-like: larger heads, different walking and hanging anatomy, different vocal anatomy etc., and selection for such anatomical changes was slow in humans. Also, most catastrophes that would kill humans would probably kill chimps too, as they are already an endangered species in many locations, and orangutans are on the brink of extinction in their natural habitats.

However, there is another option for the quick evolution of intelligence after humans: domesticated animals, first of all dogs. They have been selected for many human-like traits, including understanding voice commands.

Chimps in zoos have also been trained to use rudimentary forms of gesture language, and have trained their children to do so. If they preserve these skills, they could evolve much more quickly.

Comment by turchin on [Paper] Surviving global risks through the preservation of humanity's data on the Moon · 2018-03-04T21:24:44.328Z · score: 1 (3 votes) · EA · GW

Basically, there are two constraints on the timing of the next civilization, which are explored in detail in the article:

1) As our closest relatives are chimps, separated from us by 7 million years of genetic divergence, human extinction means that for at least 7 million years there will be no other civilization - and likely longer, as most causes of human extinction would kill the great apes too.
2) Life on Earth will be possible for approximately the next 600 mln years, based on models of the Earth and Sun.

Thus the timing of the next civilization is between 7 and 600 mln years from now, with the probability peaking closer to 100 mln years, as that is the time needed for primates to evolve "again" from "rodents"; the probability then declines as conditions on the planet deteriorate.

We explored the difference between human extinction risks and l-risks - that is, life-extinction risks - in another article: http://effective-altruism.com/ea/1jm/paper_global_catastrophic_and_existential_risks/

In it, we show that life extinction is worse than human extinction, and universe destruction is worse still; this should be taken into account in risk-prevention prioritisation.

Comment by turchin on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale · 2018-01-14T22:05:30.487Z · score: 1 (1 votes) · EA · GW

Surely, there are two types of global warming.

I think the risks of runaway global warming are underestimated, but there is very little scientific literature to support the idea.

If we take the accumulated toll from the smaller effects of long-term global warming of 2-6 C, it can easily be calculated to be a very large number; but to be regarded as a global catastrophe, it should probably be more like a one-time event - otherwise many other things, like cancer, would also count as global catastrophes.

Comment by turchin on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale · 2018-01-14T14:20:48.114Z · score: 1 (1 votes) · EA · GW

In the article, AI destroys all life on Earth; but in the previous version of the image in this blog post, the image was somewhat redesigned for better visibility, and AI risk jumped to the "kill all humans" level. I have now corrected the image so that it is the same as in the article - so the previous comment was valid.

Whether the AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence-explosion wave from us arrives at them.

So the AI will kill only potential and young civilizations in the universe, not mature ones.

But this is not the case for a false-vacuum-decay wave, which would kill everything (according to our current understanding of AI and the vacuum).

Comment by turchin on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale · 2018-01-14T12:26:40.145Z · score: 1 (1 votes) · EA · GW

No - in the paper we clearly said that non-aligned AI is a risk to the whole universe in the worst-case scenario.