Check my site about it: http://digital-immortality-now.com/
Or my paper: Digital Immortality: Theory and Protocol for Indirect Mind Uploading
And there is a group on FB about life-logging as life extension where a few EAs participate: https://www.facebook.com/groups/1271481189729828
To collect all that information we would need a superintelligent AI, and actually we don't need all the vibrations, but only the most relevant pieces of data - the data capable of predicting human behaviour. Such data could be collected from texts, photos, DNA and historical simulations - but it is better to invest in personal life-logging to increase one's chances of being resurrected.
I wrote two articles about resurrection: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
and
Sure, it's a typo, thanks, will correct.
I am serious about the resurrection of the dead. There are several ways to do it, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
I need to clarify my views: I want to save humans first, and after that save all animals, from those closest to humans to the more remote ones. By "saving" I mean resurrection of the dead, of course. I am pro resurrection of the mammoth and I am for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.
But "saving humans first" gives us leverage, because we will have a more powerful civilisation with a higher capacity to do good. If humans go extinct now, animals will eventually go extinct too when the Sun becomes a little brighter, around 600 million years from now.
But the claim that I want to save only my life is factually false.
The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments they are. Extrapolating, we stop caring about real humans and start caring about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, where the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mortality as the main human problem.
I wrote more about why death is bad in Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions - and decided not to repeat it in the main post, as the conditions of the contest require that only new material be published, but I recently found that a similar problem was raised in another application, in the section "Defending person-affecting views".
The problem with (1) is that it assumes the fuzzy set of well-being has a subset of "real goodness" inside it - but we just don't know how to define that subset correctly. And it could be that real goodness lies outside well-being. In my view, reaching radical life extension and death-reversal is more important than well-being, if well-being is understood as a comfortable healthy life.
The fact that an organisation is doing good assumes that some concept of good exists within it. And we can't do good effectively without measuring it, which requires an even stricter model of good. In other words, altruism can't be effective if it avoids defining good.
Moreover, some choices about what is good could be pivotal acts both for organisations and for AIs: for example, should we work more on biosafety, on nuclear war prevention, or on digital immortality (data preservation)? Here again, we are ready to make such choices for organisations, but not for AIs.
Of course I know that (2) is the main problem in AI alignment. But what I wanted to say here is that many problems which we encounter in AI alignment also reappear in organisations, e.g. Goodharting. Without knowing how to solve them, we can't do good effectively.
Two things I am speaking about are:
(1) what is terminal moral value (good) and
(2) how we can increase it.
EA has some understanding of what 1 and 2 are, e.g. 1 = wellbeing and 2 = donations to effective charities.
But if we ask an AI safety researcher, he can't point to what the final goal of a friendly AI should be. The most he can say is that a future superintelligence will solve this task. Any attempt to define "good" will suffer from our incomplete understanding.
EA works both on AI safety, where good is undefined, and on non-AI-related issues, where good is defined. This looks contradictory: either we know what real good is and could use this knowledge in AI safety, or we don't know, and in that case we can't do anything useful.
When I tell people that I reminded a driver to look at a pedestrian ahead and probably saved that pedestrian, they generally react negatively, saying something like: the driver would have seen the pedestrian eventually anyway, but my crazy reaction could have distracted him.
Also, I once pulled a girl back to safety from a street where an SUV was about to hit her - and she doesn't even call me on my birthday! So helping neighbours doesn't give status, in my experience.
Actually, I wanted to say something like this: "here is a list of half-baked critiques, let me know which ones intrigue you and I will elaborate on them", but I removed my introduction, as I thought it was too personal. Below is what was cut:
"I consider myself an effective altruist: I strive for the benefit of the greatest number of people. I have spent around 15 years of my life on topics which I consider EA.
At the same time, there is some difference between my understanding of EA and the "mainstream" one: my view is that the real good is the prevention of human extinction, the victory over death and the possibility of unlimited evolution for everyone. This understanding diverges in some aspects from the one generally accepted in EA, where more importance is given to the number of happy moments in human and animal lives.
During my work, I have encountered several potential criticisms of EA. In the following, I will briefly characterize each of them."
Covid age error! Corrected.
If everyone in Homograd were absolute copies of each other, the city would have much less moral value for me.
If Homograd contained only two people who are exact copies of each other, and all other people were different, it would mean for me that its real population is N-1, so I would choose to nuke Homograd.
But note: I don't judge diversity here aesthetically, but only as the chance that there will be more or fewer exact copies.
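A minimal sketch of the counting rule I am using here (illustrative only; the mind labels are made up): the "real population" is the number of distinct minds, so exact copies add nothing.

```python
def real_population(minds: list[str]) -> int:
    """Count distinct minds; exact copies are counted only once."""
    return len(set(minds))

# N = 5 bodies, two of which are exact copies of each other:
homograd = ["mind_A", "mind_A", "mind_B", "mind_C", "mind_D"]
print(len(homograd))              # 5 bodies (N)
print(real_population(homograd))  # 4 distinct minds, i.e. N - 1
```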
I think that there is a way to calculate relative probabilities even in the infinite case, and it will converge to 1:. For example, there is an article, "The watchers of multiverse", which suggests a plausible way to do so.
1. The identity problem is known to be difficult, but here I assume that continuity of consciousness is not needed for it; informational identity is enough.
2. The difference from quantum - or big-world - immortality is that we can select which minds to create and exclude those N+1 moments which are damaged or suffering.
If aliens need only powerful computers to produce interesting qualia, this will be no different from other large-scale projects, and it boils down to some Dyson-sphere-like objects. But we don't know how qualia appear.
Also, the whole human tourism industry produces only pleasant qualia. Extrapolating, aliens will have mega-tourism: an almost pristine universe where some beings interact with nature in very intimate ways. Now this becomes similar to some observations of UFOs.
I support animal resurrection too, but only after all humans are resurrected - again starting from the animals that are most complex and closest to humans, like pets and primates. Also, it seems that some animals will be resurrected before humans, like the mammoth, nematodes and some pets.
When I speak about human preferences, I mean current preferences: people do not want to die now, and many would prefer to be resuscitated if it could be done without damage.
Thanks, I do a lot of lifelogging, but didn't know about this app.
If we simulate all possible universes, we can do it. It is an enormous computational task, but it can be done via acausal cooperation between different branches of the multiverse, where each of them simulates only one history.
Humans have a strong preference not to die, and many of them would like to be resurrected if it is possible and done with high quality. I am a supporter of preference utilitarianism, so I care not only about the number of happy observer-moments, but also about what people really want.
Anyway, resurrection is a limited task: only about 100 billion people have ever lived, and resurrecting them all will not preclude us from creating trillions of trillions of new happy people.
Also, a mortal being can't be really happy, so new people need to be immortal, or they will suffer from existential dread.
If we start space colonisation, we may not be able to change the goal-system of the spaceships that we send to the stars, as they will move away at near-light speed. So we need to specify what we will do with the universe before starting space colonisation: either we will spend all resources on building as many simulations with happy minds as possible, or we will reorganise matter in ways that will help us survive the end of the universe, e.g. building Tipler's Omega Point or a wormhole into another universe.
---
Very high precision of brain details is not needed for resurrection, as we forget our mind state every second. A core of long-term memory is sufficient to preserve what I call "informational identity", which is the necessary condition for a person to regard himself as the same person, say, the next day. But the whole problem of identity is not solved yet, and it would be a strong EA cause to solve it: we want to help people in ways which will not destroy their personal identity, if that identity really matters.
Yes, I came here to say that building dams is a type of geoengineering, but it is net positive despite occasional catastrophic failures.
Thanks for the correction - it is tons in that case, as he speaks about small-yield weapons.
Yes. Also, l-risks should be added to the list of letter-risks: the risk that all life will go extinct if humans continue to do what they are doing to the ecology - this is covered in section 5 of the post.
I don't endorse it, but a-risks could be added: the risk that future human space colonisation will kill alien civilisations or prevent their appearance.
I never know whether it is appropriate to put links to my own articles in the comments. Will it be seen as just self-advertising, or can they contribute to the discussion?
I looked at these problems in two articles:
and
Thanks! BTW, I found that some of my x-risk-related articles are included while others are not. I don't think it is because the not-included articles are more off-topic, so your search algorithm may be failing to find them.
Examples of my published relevant articles which were not included:
The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI
Islands as refuges for surviving global catastrophes
Surviving global risks through the preservation of humanity's data on the Moon
I hope that posting links to my own work related to modular survival will be OK.
Use of submarines: Aquatic refuges for surviving a global catastrophe, article in Futures
Islands as refuges for surviving global catastrophes, article in Foresight
Surviving global risks through the preservation of humanity's data on the Moon, Acta Astronautica
The Map of Shelters and Refuges from Global Risks, post on EA forum
If they are advanced enough to reconstruct us, then most bad forms of enslavement are likely not interesting to them. For example, we now try to reconstruct mammoths in order to improve the climate in Siberia, not for hunting or meat.
Yes, that is clear. My question was: "Do we have any specific difference in mind about AI strategies for the 1 per cent in 10 years vs. the 10 per cent in 10 years case?" If we are going to ignore the risk in both cases, there is no difference whether it is 1 per cent or 10 per cent.
I don't know of any short-term publicly available strategy for the 10-year case, no matter what the probability is.
What is the actionable difference between the "1-2 per cent" and "10 per cent" predictions? If we knew that an asteroid was coming towards Earth and would hit it with one of these probabilities, how would our attempts to divert it depend on the probability of impact?
Should we ignore a 1 per cent probability, but go all-in on preventing a 10 per cent one?
If there is no difference in actions, the difference in probability estimates is rather meaningless.
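As a toy illustration (the damage and cost numbers below are arbitrary, chosen only to show the structure of the decision, not taken from any real estimate): under a simple expected-value rule, both probabilities prescribe the same action whenever the stakes dwarf the cost of acting.

```latex
\text{act} \iff p \cdot D > C,
\qquad \text{e.g. } D = 10^{4},\ C = 10:
\quad 0.01 \cdot 10^{4} = 100 > 10,
\qquad 0.10 \cdot 10^{4} = 1000 > 10 .
```

On this picture, the estimate changes the decision only if it moves us across the threshold; otherwise it changes the expected value of acting, not the action itself.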
Thanks - I just saw this comment now. I didn't really miss the idea, but decided not to include it here.
If the connection between observer-moments is not regarded as important (as I understand it, this is your point of view: there is no identity that chains observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past sufferings.
In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
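One way to make this dilution claim explicit (my formalisation, not wording from the original comment), assuming an observer self-locates uniformly among N_s suffering and N_h happy observer-moments:

```latex
P(\text{suffering}) \;=\; \frac{N_s}{N_s + N_h} \;\longrightarrow\; 0
\quad \text{as } N_h \to \infty .
```

So for any fixed stock of past suffering, adding ever more happy observer-moments drives this self-location probability towards zero.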
I addressed the problem of simulating evil AIs in the "Message to any future AI" text, which we have already discussed. The main types of Evil AIs may be predicted by simulating the worlds where they are likely to appear. We could get an advantage over the Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don't need to discuss all the technical details of how we will do it, as a Benevolent AI will find better ways. (The idea comes from Rolf Nelson.)
See the patches in the comments below: there are ways to do the trick without increasing the total number of suffering observer-moments.
It will also increase the number of happy observer-moments globally, because of the happiness of being saved from agony, plus the lowering of the number of Evil AIs, as they will know that they will lose and be punished.
I just found a way in which the whole trick will increase total welfare in the multiverse; copied from the comment below:
No copies of suffering observer-moments will be created - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he is saved from hell.
It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test may know this feeling of relief.
Also, the fact that a Benevolent AI is capable of saving observers from an Evil AI (and can also model Evil AIs in simulations and punish them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.
Thus, the combination of the pleasure of being saved from an Evil AI, plus the lowering of the world-share of Evil AIs (as they can't win and know it), will increase the total positive utility in the universe.
This is because you use a non-copy-friendly theory of personal identity, which is reasonable but has other consequences.
I patched the second problem in the comments above: only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he is saved from hell.
It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test may know this feeling of relief.
Also, the fact that a Benevolent AI is capable of saving observers from an Evil AI (and can also model Evil AIs in simulations and punish them if they dare to torture anyone) will significantly reduce (I hope) the number of Evil AIs.
Thus, the combination of the pleasure of being saved from an Evil AI, plus the lowering of the world-share of Evil AIs (as they can't win and know it), will increase the total positive utility in the universe.
See my patch to the argument in the comment to Lukas: we can simulate those moments which are not in intense pain, but which are still very close to the initial suffering observer-moment, so they could be regarded as its continuation.
It is an algorithmic trick only if personal identity is strongly connected to exactly this physical brain. But in the text it is assumed, without any discussion, that identity is not brain-connected. However, this doesn't mean that I completely endorse this "copy-friendly" theory of identity.
I could see three possible problems:
The first is that the method will create new suffering moments, maybe even suffering moments which would not exist otherwise. But there is a patch for this: see my comment to Lukas above.
The second possible problem is that the universe will be tiled with past simulations which try to resurrect every ant that ever lived on Earth - and thus there will be an opportunity cost, as many other good things could be done instead. This could be patched by what might be called a "cheating death in Damascus" approach, where some timelines choose not to play this game by using a random generator, or by capping the amount of resources they may spend on the prevention of past suffering.
The third problem could be ontological, like a wrong theory of human personal identity. But if a (pseudo-)Benevolent AI has a wrong understanding of human identity, we will have many other problems, e.g. during uploading.
Reading your comment, I came to the following patch of my argument: the Benevolent AI starts not from S(t), but immediately from many copies of those S(t+1) which have much less intense suffering, but which still have enough similarity with S(t) to be regarded as its next moment of experience. It is not S(t) that is diluted, but the next moments of S(t). This removes the need to create many S(t)-moments, which seems morally wrong and computationally intensive.
In my plan, the FAI can't decrease the number of suffering moments; instead, it creates an immediate way out of each such moment. While a total utilitarian will not feel the difference - it is just a theory which was not designed to account for the length of suffering - for any particular observer this will be salvation.
What if an AI exploring moral uncertainty finds that there is provably no correct moral theory and no right moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle just by aggregating humans' opinions about possible situations?
One more problem with the idea that I should consult my friends before publishing a text is "friend bias": people who are my friends tend to react more positively to the same text than those who are not. I personally had a situation where my friends told me that my text was good and non-info-hazardous, but when I presented it to people who didn't know me, their reaction was the opposite.
Sometimes, when I work on a complex problem, I feel as if I have become one of the best specialists in it. Sure, I know three other people who are able to understand my logic, but one of them is dead, another is not replying to my emails, and the third has his own vision, affected by some obvious flaw. So none of them can give me correct advice about the informational hazard.
It would be great to have some kind of committee for info-hazard assessment: a group of trusted people who will a) take responsibility for deciding whether an idea should be published or not, b) read all incoming suggestions in a timely manner, and c) have publicly known contacts (though maybe not all their identities).
It was in fact a link to an article about how to kill everybody using multiple simultaneous pandemics - this idea may be regarded by some as an informational hazard, but it was already suggested by some terrorists from the Voluntary Human Extinction Movement. I also discussed it with some biologists and other x-risk researchers, and we concluded that it is not an infohazard. I can send you a draft.
I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by group think. So you might want to think of solutions that take that into consideration.
Yes, I have met the same problem. The best way to find people who are interested in and able to understand a specific problem is to publish the idea openly in a place like this forum, but then hypothetical bad people will also be able to read it.
Also, info-hazard discussion applies only to "medium-level safety researchers", as top-level ones have enough authority to decide what counts as an info hazard, and (bio)scientists are not reading our discussions. As a result, the whole fight against info hazards applies only to a small and not very relevant group.
For example, I was advised not to repost a scientific study, as even reposting it would create an informational hazard by attracting attention to its dangerous applications. However, I see the main problem in the fact that such scientific research was done and openly published, and our reluctance to discuss such events only lowers our strategic understanding of the different risks.
That is absolutely right, and I always discuss ideas with friends and advanced specialists before discussing them publicly. But in doing this, I have discovered two obstacles:
1) If the idea is really simple, it is likely not new, but in the case of a complex idea, not many people are able to properly evaluate it. Maybe if Bostrom spent a few days analysing it, he would say "yes" or "no", but typically the best thinkers are very busy with their own deadlines and will not have time to evaluate the ideas of random people. So you are limited to your closer friends, who could be biased in your favour and ignore the info-hazard.
2) "False negatives". This is the situation where a person thinks that idea X is not an informational hazard because it is false. However, the reasons why he thinks that idea X is false are themselves wrong. In that situation, the info-hazard assessment does not happen.
That is why I think we should divide the discussion into two lines: one is the potential impact of simple interventions in life extension, of which there are many, and the other is whether metformin could be such a simple intervention.
In the case of metformin, there is a tendency to prescribe it to a larger share of the population as a first-line drug for type 2 diabetes, but I think that its safety should be personalised via genetic tests and bloodwork for vitamin deficiencies.
Around 30 million people in the US, or 10 per cent of the population, already have type 2 diabetes (https://www.healthline.com/health/type-2-diabetes/statistics), and this share of the population is eligible for metformin prescriptions.
This means that we could get large life-expectancy benefits by replacing prescription drugs not associated with longevity with longevity-associated drugs for the same condition, like metformin for diabetes, losartan for hypertension, aspirin for blood thinning, etc.
Thanks for this detailed analysis. I think that the main difference in our estimations is the number of adopters, which is 1.3 per cent in your average case; in my estimation, it was almost half of the world population.
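A minimal back-of-the-envelope sketch of why this single assumption dominates the result (the per-adopter gain below is a hypothetical placeholder, not a number from either analysis):

```python
# Toy model: total benefit scales linearly with the adoption share,
# so a 1.3% vs 50% adoption assumption alone changes the estimate ~38x.

WORLD_POPULATION = 8e9            # people, rough figure
LIFE_YEARS_PER_ADOPTER = 1.0      # hypothetical placeholder value

def total_life_years(adoption_share: float) -> float:
    """Life-years gained worldwide under a given adoption share."""
    return WORLD_POPULATION * adoption_share * LIFE_YEARS_PER_ADOPTER

low, high = total_life_years(0.013), total_life_years(0.5)
print(f"1.3% adoption: {low:.2e} life-years")
print(f"50% adoption:  {high:.2e} life-years")
print(f"ratio: {high / low:.0f}x")
```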
This difference highlights an important problem: how to make a really good life-extending intervention widely adopted. This question relates not only to metformin, but to any other intervention, including already-known ones such as sport, a healthy diet and quitting smoking, which all depend on a person's will.
Taking a pill requires less effort than quitting smoking, and around 70 per cent of the US adult population takes some form of supplement. https://www.nutraceuticalsworld.com/contents/view_online-exclusives/2016-10-31/over-170-million-americans-take-dietary-supplements/
However, the supplements market depends on expensive advertising, not on the real benefits of the supplements.