Posts

Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] 2022-06-19T09:01:29.009Z
My list of effective altruism ideas that seem to be underexplored 2022-05-31T12:33:17.524Z
The future of nuclear war 2022-05-21T08:00:29.798Z
Curing past sufferings and preventing s-risks via indexical uncertainty 2018-09-27T10:48:26.411Z
Islands as refuges for surviving global catastrophes 2018-09-13T13:33:32.528Z
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T12:54:12.976Z
[Draft] Fighting Aging as an Effective Altruism Cause 2018-04-16T10:18:23.041Z
[Paper] Surviving global risks through the preservation of humanity's data on the Moon 2018-03-03T18:39:56.988Z
[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale 2018-01-14T10:07:26.123Z
[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence 2018-01-04T14:31:56.824Z
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” 2017-11-25T11:51:50.606Z
Military AI as a Convergent Goal of Self-Improving AI 2017-11-13T13:02:43.446Z
Surviving Global Catastrophe in Nuclear Submarines as Refuges 2017-04-05T08:06:31.780Z
The Map of Impact Risks and Asteroid Defense 2016-11-03T15:34:30.738Z
The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention) 2016-10-22T10:22:45.429Z
The map of organizations, sites and people involved in x-risks prevention 2016-10-07T12:17:15.954Z
The Map of Global Warming Prevention 2016-08-11T20:03:47.241Z
Plan of Action to Prevent Human Extinction Risks 2016-03-14T14:51:15.784Z

Comments

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-23T10:59:15.060Z · EA · GW

Check my site about it: http://digital-immortality-now.com/

Or my paper: Digital Immortality: Theory and Protocol for Indirect Mind Uploading

And there is a Facebook group about life-logging as life extension in which a few EAs participate: https://www.facebook.com/groups/1271481189729828

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T16:10:18.892Z · EA · GW

To collect all that information we would need a superintelligent AI, and actually we don't need every vibration, only the most relevant pieces of data - the data capable of predicting human behaviour. Such data could be collected from texts, photos, DNA and historical simulations, but it is better to invest in personal life-logging to increase one's chances of being resurrected.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T16:05:35.227Z · EA · GW

I wrote two articles about resurrection: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection

and

Classification of Approaches to Technological Resurrection

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T16:01:03.724Z · EA · GW

Sure, it's a typo - thanks, I will correct it.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T16:00:18.138Z · EA · GW

I am serious about the resurrection of the dead. There are several possible ways, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise, which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-21T15:25:56.102Z · EA · GW

I need to clarify my views: I want to save humans first, and after that save all animals, from the closest to humans to the more remote. By "saving" I mean the resurrection of the dead, of course. I am for the resurrection of the mammoth and for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.

But "saving humans first" gives us a leverage, because we will have more powerful civilisation which will have higher capacity to do more good. If humans will extinct now, animals will eventually extinct too when Sun will become a little brighter, around 600 mln. years from now. 

But the claim that I want to save only my own life is factually false.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-20T18:43:14.995Z · EA · GW

The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose are these moments. Extrapolating, we stop caring about real humans, but start caring about possible animals. In other words, it opens the way to pure utilitarian-open-individualist bonanza, where value of human life and individuality are lost and badness of death is ignored.  The last point is most important for me, as I view irreversible mortality as the main human problem.

I wrote more about why death is bad in Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions, and decided not to repeat it in the main post, as the conditions of the contest require that only new material be published. However, I recently found that a similar problem was raised in another entry, in the section "Defending person-affecting views".

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-20T08:04:49.887Z · EA · GW

The problem with (1) is that it assumes the fuzzy set of well-being has a subset of "real goodness" inside it which we just don't know how to define correctly. But it could be that the real good lies outside well-being. In my view, reaching radical life extension and death-reversal is more important than well-being, if the latter is understood as a comfortable, healthy life.

The claim that an organisation is doing good presupposes that it holds some concept of good. And we can't do good effectively without measuring it, which requires an even stricter model of good. In other words, altruism can't be effective if it avoids defining good.

Moreover, some choices about what is good could be pivotal acts both for organisations and for AIs: for example, should we work more on biosafety, on nuclear war prevention, or on digital immortality (data preservation)? Here again we are ready to make such choices for organisations, but not for AI.

Of course I know that (2) is the main problem in AI alignment. But what I wanted to say here is that many problems we encounter in AI alignment also reappear in organisations, e.g. Goodharting. Without knowing how to solve them, we can't do good effectively.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-19T20:16:32.014Z · EA · GW

Two things I am speaking about are: 

(1) what is terminal moral value (good)  and 

(2) how we can increase it.

EA has some understanding of what (1) and (2) are, e.g. (1) = wellbeing and (2) = donations to effective charities.

But if we ask an AI safety researcher, he can't point to what the final goal of a friendly AI should be. The most he can say is that a future superintelligence will solve this task. Any attempt to define "good" will suffer from our incomplete understanding.

EA works both on AI safety, where good is undefined, and on non-AI-related issues, where good is defined. This looks contradictory: either we know what the real good is and could use this knowledge in AI safety, or we don't know, in which case we can't do anything useful.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-19T13:35:02.514Z · EA · GW

When I tell people that I have reminded a driver to look at a pedestrian ahead and probably saved that pedestrian, they generally react negatively, saying something like: the driver would have seen the pedestrian eventually anyway, but my overreaction could have distracted him.

Also, I once pulled a girl back to safety from a street where an SUV was about to hit her - and she doesn't even call me on my birthdays! So helping neighbours doesn't give status, in my experience.

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-19T12:37:12.180Z · EA · GW

Actually, I wanted to say something like this: "here is a list of half-baked critiques, let me know which ones intrigue you and I will elaborate on them", but I removed my introduction, as I thought it would be too personal. Below is what was cut:

"I consider myself an effective altruist: I strive for the benefit of the greatest number of people. I spent around 15 years of my life on topics which I consider EA.

At the same time, my understanding of EA differs somewhat from the "mainstream": my view is that the real good is the prevention of human extinction, victory over death, and the possibility of unlimited evolution for everyone. This understanding diverges in some respects from the one generally accepted in EA, where more importance is given to the number of happy moments in human and animal lives.

During my work, I have encountered several potential criticisms of EA. In the following, I will briefly characterize each of them."

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-19T11:10:45.546Z · EA · GW

Covid age error!  Corrected. 

Comment by turchin on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-19T11:09:16.304Z · EA · GW

If everyone in Homograd were an exact copy of each other, the city would have much less moral value for me.

If Homograd contained just one pair of exact copies (two people who are copies of each other), and all other people were different, it would mean for me that its real population is N-1, so I would choose to nuke Homograd.

But note: I am not valuing diversity here aesthetically, only as it affects the chance that there are more or fewer exact copies.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-06-03T07:50:49.164Z · EA · GW

I think there is a way to calculate relative probabilities even in the infinite case, and that they converge to a definite value. For example, the article "The watchers of the multiverse" suggests a plausible way to do so.
 

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-06-02T10:17:32.976Z · EA · GW

1. The identity problem is known to be difficult, but here I assume that continuity of consciousness is not needed; informational identity alone is enough.

2. The difference from quantum - or big-world - immortality is that we can select which minds to create and exclude those next (N+1) moments which are damaged or suffering.

Comment by turchin on My current thoughts on the risks from SETI · 2022-06-02T10:10:07.587Z · EA · GW

If aliens need only powerful computers to produce interesting qualia, this would be no different from other large-scale projects, and would boil down to some Dyson-sphere-like objects. But we don't know how qualia appear.

Also, the whole human tourism industry produces only pleasant qualia. Extrapolating, aliens would have mega-tourism: an almost pristine universe in which some beings interact with nature in very intimate ways. This starts to resemble some observations of UFOs.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T07:38:18.974Z · EA · GW

I support animal resurrection too, but only after all humans have been resurrected - again starting from the most complex animals closest to humans, like pets and primates. That said, it seems some animals will be resurrected before humans, such as mammoths, nematodes and some pets.

When I speak about human preferences, I mean current preferences: people do not want to die now, and many would prefer to be resuscitated if this could be done without damage.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T07:22:11.786Z · EA · GW

Thanks, I do a lot of lifelogging, but didn't know about this app.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T07:21:29.962Z · EA · GW

If we simulate all possible universes, we can do it. It is an enormous computational task, but it can be done via acausal cooperation between different branches of the multiverse, where each branch simulates only one history.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-05-31T18:45:51.517Z · EA · GW

Humans have a strong preference not to die, and many of them would like to be resurrected if it were possible and done with high quality. I am a supporter of preference utilitarianism, so I care not only about the number of happy observer-moments, but also about what people really want.

Anyway, resurrection is a limited task: only around 100 billion people have ever lived, and resurrecting them all would not preclude us from creating trillions of trillions of new happy people.

Also, a mortal being can't be truly happy, so new people need to be immortal or they will suffer from existential dread.

Comment by turchin on My list of effective altruism ideas that seem to be underexplored · 2022-05-31T18:39:33.625Z · EA · GW

If we start space colonisation, we may not be able to change the goal systems of the spaceships we send to the stars, as they will be moving away at near-light speed. So we need to specify what we will do with the universe before starting space colonisation: either we spend all resources on building as many simulations with happy minds as possible, or we reorganise matter in ways that will help survive the end of the universe, e.g. building Tipler's Omega Point or a wormhole into another universe.

---

Very high precision in brain details is not needed for resurrection, as we forget our mind state every second. Only a core of long-term memory is needed to preserve what I call "informational identity", which is the necessary condition for a person to regard himself as the same person, say, the next day. But the whole problem of identity is not solved yet, and solving it would be a strong EA cause: we want to help people in ways which do not destroy their personal identity, if that identity really matters.

Comment by turchin on Geoengineering to reduce global catastrophic risk? · 2022-05-30T05:18:49.775Z · EA · GW

Yes, I came here to say that building dams is a type of geoengineering, but it is net positive despite occasional catastrophic failures.

Comment by turchin on The future of nuclear war · 2022-05-28T15:57:39.396Z · EA · GW

Thanks for the correction - it is tons in that case, as he speaks about small-yield weapons.

Comment by turchin on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-24T08:39:22.579Z · EA · GW

Yes. Also, l-risks should be added to the list of letter-risks: the risk that all life will go extinct if humans continue to do what they are doing to the ecosystem - this is covered in section 5 of the post.

Comment by turchin on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-21T08:42:30.483Z · EA · GW

I don't endorse it, but a-risks could be added: the risk that future human space colonisation will kill alien civilizations or prevent their appearance.

Comment by turchin on Risks from Autonomous Weapon Systems and Military AI · 2022-05-21T08:13:03.989Z · EA · GW

I never know whether it is appropriate to post links to my own articles in the comments. Will it be seen as just self-advertising, or might they contribute to the discussion?

I looked at these problems in two articles:

Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons 

and

Military AI as a Convergent Goal of Self-Improving AI

Comment by turchin on Release of Existential Risk Research database · 2022-04-26T07:38:46.106Z · EA · GW

Thanks! BTW, I found that some of my x-risk-related articles are included while others are not. I don't think the excluded articles are more off-topic, so your search algorithm may be failing to find them.

Examples of my published relevant articles which were not included: 

The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI

Islands as refuges for surviving global catastrophes

Surviving global risks through the preservation of humanity's data on the Moon

Aquatic refuges for surviving a global catastrophe

Comment by turchin on Mitigating x-risk through modularity · 2021-11-30T19:14:44.035Z · EA · GW

I hope that posting links to my own work related to modular survival will be OK.

 

Use of submarines: Aquatic refuges for surviving a global catastrophe, article in Futures

Islands as refuges for surviving global catastrophes, article in Foresight

Surviving global risks through the preservation of humanity's data on the Moon,  Acta Astronautica

The Map of Shelters and Refuges from Global Risks, post on EA forum

Comment by turchin on [Paper] Surviving global risks through the preservation of humanity's data on the Moon · 2021-10-07T20:44:09.557Z · EA · GW

If they are advanced enough to reconstruct us, then most of the bad enslavement scenarios are likely not interesting to them. For example, we now try to reconstruct mammoths in order to improve the climate in Siberia, not for hunting or meat.

Comment by turchin on Thoughts on short timelines · 2018-10-27T19:02:27.695Z · EA · GW

Yes, it is clear. My question was: "Do we have any specific difference in mind about AI strategies for the 1 per cent in 10 years vs. the 10 per cent in 10 years case?" If we are going to ignore the risk in both cases, it makes no difference whether it is 1 per cent or 10 per cent.

I don't know of any publicly available short-term strategy for the 10-year case, no matter what the probability is.

Comment by turchin on Thoughts on short timelines · 2018-10-26T11:12:04.176Z · EA · GW

What is the actionable difference between the "1-2 per cent" and "10 per cent" predictions? If we knew that an asteroid was heading toward Earth and would hit it with one of these probabilities, how would our attempts to divert it depend on the probability of impact?

Should we ignore a 1 per cent probability, but go all-in on preventing a 10 per cent probability?

If there is no difference in actions, the difference in probability estimates is rather meaningless.
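One hedged way to formalize this point (my framing, not taken from the original exchange): under a simple expected-value rule, mitigation is worthwhile whenever its cost C is below the expected loss,

\[ C < p \cdot D , \]

where D is the damage from impact. For extinction-level D, both p = 0.01 and p = 0.10 satisfy the inequality by a wide margin, so the recommended action - mitigate - is the same in both cases.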

Comment by turchin on The Map of Impact Risks and Asteroid Defense · 2018-10-15T13:28:11.536Z · EA · GW

Thanks - I just saw this comment now. I didn't really miss the idea, but decided not to include it here.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-30T15:20:34.918Z · EA · GW

If the connection between observer-moments is not regarded as important (as I understand it, this is your point of view: there is no identity that chains observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past sufferings.

In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
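To make the dilution claim concrete (a minimal sketch under an assumption I am adding here: that an observer is located uniformly at random among all of that person's observer-moments), if N_s suffering moments are diluted by N_h happy continuations, then

\[ P(\text{suffering}) = \frac{N_s}{N_s + N_h} \xrightarrow[\;N_h \to \infty\;]{} 0 . \]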

I addressed the problem of the simulation of evil AIs in the "Message to any future AI" text, which we already discussed. The main types of Evil AIs may be predicted by simulating the worlds where they are likely to appear. We could gain an advantage over Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don't need to discuss all the technical details of how we will do this, as a benevolent AI will find better ways. (The idea comes from Rolf Nelson.)

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:35:04.683Z · EA · GW

See the patches in the comments below: there are ways to do the trick without increasing the total number of suffering observer-moments.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:32:46.796Z · EA · GW

It will also increase the number of happy observer-moments globally, because of the happiness of being saved from agony plus the lowering of the number of Evil AIs, as they will know that they will lose and be punished.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:29:22.286Z · EA · GW

I just found a way in which the whole trick will increase total welfare in the multiverse; copied from the comment below:

No copies of suffering observer-moments will be created - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he has been saved from hell.

It would be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test may know this feeling of relief.

Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and also of modelling Evil AIs in simulations and punishing them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI plus the lowering of the world-share of Evil AIs - as they can't win and know it - will increase the total positive utility in the universe.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-29T16:26:10.827Z · EA · GW

This is because you use a non-copy-friendly theory of personal identity, which is reasonable but has other consequences.

I patched the second problem in the comments above - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he has been saved from hell.

It would be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test may know this feeling of relief.

Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and also of modelling Evil AIs in simulations and punishing them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI plus the lowering of the world-share of Evil AIs - as they can't win and know it - will increase the total positive utility in the universe.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:50:16.838Z · EA · GW

See my patch to the argument in the comment to Lukas: we can simulate those moments which are not in intense pain but are still very close to the initial suffering observer-moment, so they could be regarded as its continuation.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:48:11.038Z · EA · GW

It is an algorithmic trick only if personal identity is strongly connected to exactly this physical brain. But in the text it is assumed, without any discussion, that identity is not brain-connected. However, this doesn't mean that I completely endorse this "copy-friendly" theory of identity.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:39:20.642Z · EA · GW

I could see three possible problems:

The method will create new suffering moments, perhaps even suffering moments which would not otherwise exist. But there is a patch for this: see my comment above to Lukas.

The second possible problem is that the universe will be tiled with past simulations trying to resurrect every ant that ever lived on Earth - and thus there will be an opportunity cost, as many other good things could be done instead. This could be patched by what could be called a "cheating death in Damascus" approach, where some timelines choose not to play this game by using a random generator, or by capping the amount of resources they may spend on the prevention of past sufferings.

The third problem could be ontological, like a wrong theory of human personal identity. But if a (pseudo-)Benevolent AI has a wrong understanding of human identity, we will have many other problems anyway, e.g. during uploading.

Comment by turchin on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T23:30:33.097Z · EA · GW

Reading your comment, I arrived at the following patch to my argument: the benevolent AI starts not from S(t), but immediately from many copies of those S(t+1) which have much less intense suffering but still have enough similarity with S(t) to be regarded as its next moment of experience. It is not S(t) that will be diluted, but the next moments of S(t). This removes the need to create many S(t)-moments, which seems morally wrong and computationally intensive.

In my plan, the FAI can't decrease the number of suffering moments; instead, the plan is to create an immediate way out of each such moment. A total utilitarian will not see the difference, but that is just a theory which was not designed to account for the length of suffering - for any particular observer, this will be salvation.

Comment by turchin on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2018-07-08T14:09:55.150Z · EA · GW

What if an AI exploring moral uncertainty finds that there is provably no correct moral theory and no true moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle just by aggregating humans' opinions about possible situations?

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-28T10:08:13.167Z · EA · GW

One more problem with the idea that I should consult my friends before publishing a text is "friend bias": people who are my friends tend to react more positively to the same text than people who are not. I have personally had a situation where my friends told me that my text was good and non-info-hazardous, but when I presented it to people who didn't know me, their reaction was the opposite.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-26T09:30:03.873Z · EA · GW

Sometimes, when I work on a complex problem, I feel as if I have become one of the best specialists in it. Sure, I know three other people who are able to understand my logic, but one of them is dead, another is not replying to my emails, and the third has his own vision, affected by some obvious flaw. So none of them could give me correct advice about the informational hazard.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T11:51:54.495Z · EA · GW

It would be great to have some kind of committee for info-hazard assessment: a group of trusted people who would a) take responsibility for deciding whether an idea should be published or not, b) read all incoming suggestions in a timely manner, and c) have their contact details (but maybe not all their identities) publicly known.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T11:47:21.918Z · EA · GW

It was in fact a link to an article about how to kill everybody using multiple simultaneous pandemics - this idea may be regarded by some as an informational hazard, but it had already been suggested by some terrorists from the Voluntary Human Extinction Movement. I also discussed it with some biologists and other x-risk researchers, and we concluded that it is not an infohazard. I can send you a draft.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-24T10:21:14.295Z · EA · GW

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by group think. So you might want to think of solutions that take that into consideration.

Yes, I have met the same problem. The best way to find people who are interested in and able to understand a specific problem is to publish the idea openly in a place like this forum, but then hypothetical bad actors will also be able to read it.

Also, info-hazard discussion applies only to "medium-level safety researchers", as top-level ones have enough authority to decide what counts as an info hazard, and (bio)scientists are not reading our discussions. As a result, all the fighting over info hazards applies to a small and not very relevant group.

For example, I was advised not to repost a scientific study, as even reposting it would create an informational hazard by attracting attention to its dangerous applications. However, I see the main problem in the fact that such scientific research was done and openly published at all, and our reluctance to discuss such events only lowers our strategic understanding of the different risks.

Comment by turchin on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T19:15:00.631Z · EA · GW

That is absolutely right, and I always discuss ideas with friends and advanced specialists before discussing them publicly. But in doing this, I have discovered two obstacles:

1) If the idea is really simple, it is likely not new; but in the case of a complex idea, few people are able to evaluate it properly. Maybe if Bostrom spent a few days analysing it he would say "yes" or "no", but typically the best thinkers are very busy with their own deadlines and will not have time to evaluate the ideas of random people. So you are limited to your closer friends, who may be biased in your favour and ignore the info-hazard.

2) "False negatives". This is the situation when a person thinks that the idea X is not an informational hazard because it is false. However, the reasons why he thinks that the idea X is false are wrong. In that situation, the info hazard assessment is not happening.

Comment by turchin on Expected cost per life saved of the TAME trial · 2018-05-27T10:08:03.876Z · EA · GW

That is why I think we should divide the discussion into two lines: one is the potential impact of simple interventions in life extension, of which there are many; the other is whether metformin could be such a simple intervention.

In the case of metformin, there is a tendency to prescribe it to a larger share of the population as a first-line drug for type 2 diabetes, but I think its safety should be personalized via genetic tests and bloodwork for vitamin deficiencies.

Around 30 million people in the US, or 10 per cent of the population, already have type 2 diabetes (https://www.healthline.com/health/type-2-diabetes/statistics), and this share of the population is eligible for metformin prescriptions.

This means that we could get large life-expectancy benefits by replacing prescription drugs not associated with longevity with longevity-associated drugs for the same condition: metformin for diabetes, losartan for hypertension, aspirin for blood thinning, etc.

Comment by turchin on Expected cost per life saved of the TAME trial · 2018-05-26T10:32:11.401Z · EA · GW

Thanks for this detailed analysis. I think the main difference between our estimates is the number of adopters, which is 1.3 percent in your average case; in my estimate it was almost half of the world population.
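As a rough illustration of how much the adopter share alone drives that gap (a sketch with hypothetical effect-size numbers, not figures taken from either analysis):

```python
# Hypothetical sensitivity sketch: under a simple linear model, the expected
# benefit scales directly with the adopter share, so a 1.3% vs ~50% adoption
# assumption by itself produces a ~38x difference between the two estimates.

WORLD_POPULATION = 8e9               # assumed, people
LIFE_YEARS_PER_ADOPTER = 1.0         # hypothetical average gain per adopter

def total_life_years(adoption_share: float) -> float:
    """Total life-years gained at a given adoption share (linear model)."""
    return WORLD_POPULATION * adoption_share * LIFE_YEARS_PER_ADOPTER

low = total_life_years(0.013)   # ~1.3% adopters (the analysed average case)
high = total_life_years(0.5)    # ~half of the world population (my assumption)

print(f"low-adoption estimate:  {low:.2e} life-years")
print(f"high-adoption estimate: {high:.2e} life-years")
print(f"ratio: {high / low:.0f}x")
```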

This difference highlights an important problem: how to make a really good life-extending intervention widely adopted. This question relates not only to metformin but to any other intervention, including currently known ones such as exercise, a healthy diet and quitting smoking, all of which depend on a person's will.

Taking a pill requires less effort than quitting smoking, and around 70 percent of the US adult population takes some form of supplement. https://www.nutraceuticalsworld.com/contents/view_online-exclusives/2016-10-31/over-170-million-americans-take-dietary-supplements/

However, the supplement market depends on expensive advertising, not on the real benefits of the supplements.