Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T18:30:53.949Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T16:48:48.379Z
Russian x-risks newsletter winter 2019-2020 2020-03-01T12:51:29.032Z
Russian x-risks newsletter, fall 2019 2019-12-03T17:01:23.705Z
How to Survive the End of the Universe 2019-11-28T12:40:37.426Z
Russian x-risks newsletter, summer 2019 2019-09-07T09:55:05.076Z
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T19:48:45.759Z


Comment by avturchin on How to Survive the End of the Universe · 2021-05-30T12:17:08.189Z · EA · GW

Hi! Just saw your comment. I am the author, and I will write to you.

Comment by avturchin on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-04-06T19:19:30.324Z · EA · GW

Actually, I expected Gott's equation to be mentioned here, as his Doomsday argument is a contemporary version of Laplace's rule of succession.

Also, qualified observers are not distributed uniformly within this period of time, from the idea of AI to the creation of AI. If we assume that qualified observers are those who are interested in AI timing, then it looks like such people are much more numerous closer to the end of the period. As a result, a random qualified observer should find themselves closer to the end of the period. If the number of qualified observers is growing exponentially, the median observer is just one doubling time before the end. This moves the AI timing prior closer to current events.
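The "median observer is one doubling before the end" claim can be checked numerically. A minimal sketch, with an assumed 100-year period and a 10-year doubling time (both numbers are illustrative, not taken from the comment):

```python
import numpy as np

# Assumed, illustrative parameters: observers appear over [0, horizon] years,
# and their rate of appearance doubles every `doubling` years.
horizon = 100.0
doubling = 10.0

t = np.linspace(0.0, horizon, 1_000_001)
rate = 2.0 ** (t / doubling)   # exponentially growing number of observers
cdf = np.cumsum(rate)
cdf /= cdf[-1]                 # normalize to a cumulative distribution

# Time at which half of all observers have appeared
median_t = t[np.searchsorted(cdf, 0.5)]
print(horizon - median_t)      # close to `doubling`: one doubling before the end
```

Whatever the horizon, the gap between the median observer and the end stays close to one doubling time, which is the point of the argument.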

Comment by avturchin on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-26T20:08:09.349Z · EA · GW

Thanks for a great piece! One thing which may increase extinction risk is that after the collapse, the remaining economy will be based not on agriculture and manufacturing but on scavenging the remains of the previous civilisation. The problem with such an economy is that it constantly shrinks; it also does not help people learn useful skills, but instead helps local warlords arise and fight over the leftovers. (Example: the economy of the post-Soviet countries declined partly because it was more profitable to sell a factory for scrap metal than to use it for manufacturing goods.)

This problem will also be fuelled by "dangerous leftovers": at any moment, access to weapons will be easier than access to manufacturing capabilities. If most humans go extinct, enormous amounts of powerful weapons, mostly firearms, will still be available for decades. Dangerous leftovers will also include pieces of whatever caused the catastrophe, for example a biological agent or radioactive contamination. In such a situation, the remaining population will die more often than it raises children successfully.

An example of dangerous leftovers is... dogs. If 99 per cent of people went extinct but all dogs remained, these feral dogs would eventually start hunting the survivors.

Comment by avturchin on ALLFED 2020 Highlights · 2020-12-26T17:02:37.039Z · EA · GW

Toby Ord estimated in The Precipice a one in a thousand probability of existential risk this century due to climate change, largely due to locking in a moist greenhouse effect. We would estimate the feasibility of maintaining industrial civilization (with eventual colonization of space) in this scenario. The physical space on Antarctica is adequate for industrial civilization, but alternative foods produced on other continents would likely be required, such as foods grown in air-conditioned greenhouses, single-cell protein powered by renewable hydrogen, electrosynthesized vinegar, and foods created by chemical synthesis. Though there would be significant time to develop these solutions, the primary value of this research would be the value of information on how much to prioritize climate change as an existential risk.

It looks to me like survival of the moist greenhouse is possible at altitudes above 5000–6000 meters. But the main question is whether the temperature rise would be uniform. The town of La Rinconada is at 5100 meters and has a median temperature of 2°C. Meanwhile, the hottest city on Earth, Basra, reaches 50°C every day in summer. So people in Rinconada could survive.

Also, the median elevation of the Himalayas is around 6000 meters; they cover 595,000 sq km and have 52 million people.

Meanwhile, the ice of Antarctica would probably take hundreds of years to melt after the start of the moist greenhouse, and until that happens, the land there can't be used. Heavy rains and rivers would also prevent living on the ice surface (though there are regions of ice in Antarctica with a stable temperature of -60°C, and those could survive).

A more realistic moist greenhouse would result in stronger polar warming and weaker equatorial warming because of polar amplification of the greenhouse effect. This means the equatorial region would get an additional 20–30°C of warming and the poles would get more than that (something like +60°C). This again favours the Himalayas as a survival place.

What worries me is that there may be no stopping point between a moist greenhouse and runaway global warming, as some regions of Earth (like the Persian Gulf) could get close to +100°C in this scenario, and the water there may start to boil, creating a never-ending positive feedback loop.

Comment by avturchin on The Fermi Paradox has not been dissolved · 2020-12-13T12:06:50.503Z · EA · GW

I think that estimating fl should take into account the possibility of interstellar panspermia. Life, having appeared once, could be disseminated through the whole galaxy in a few billion years via interstellar comets.

This creates a strong observation selection effect: galaxies where panspermia is possible will create billions of times more observers than non-panspermia galaxies, and we are certainly in such a galaxy. So fl is likely to be close to 1.

Comment by avturchin on How to Survive the End of the Universe · 2020-09-26T18:06:49.042Z · EA · GW

Interestingly, if no God exists, then all possible things should exist, and thus there is no end for our universe. To limit the number of actually existing things, we need some supernatural force, which allows only some worlds to exist, but which is not part of any of these worlds.

Comment by avturchin on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-11T20:34:38.756Z · EA · GW

Easily available BCI may fuel a possible epidemic of wireheading, which may result in civilisational decline.

Comment by avturchin on Will Three Gorges Dam Collapse And Kill Millions? · 2020-07-27T17:37:21.081Z · EA · GW

I read on Twitter (so not a very good source) that one of the problems of the 3GD is cavitation inside the discharge tubes. Cavitation happens when the speed of the water flow exceeds 10 meters per second: the water creates "vacuum bubbles" which later collapse, producing shockwaves able to destroy even the strongest materials. The discharge channels run inside the body of the dam, as we can see in photos, so if there is a problem, it would affect the dam from the inside, without overtopping. Obviously, such channels could be closed, but this would slow water release and increase the chance of using the emergency spillway. The spillway itself could be fragile (as in the case of the Oroville dam) and could undermine the dam if damaged.

Comment by avturchin on X-risks to all life v. to humans · 2020-06-11T17:14:22.486Z · EA · GW

If they evolve, say, from cats, they will share the same type-values as all mammals: power, sex, love of children. But the token-values will be different, as they will love not human children but kittens, etc. An advanced non-human civilization may be more similar to ours than we-now are to Ancient Egypt, as it would have more rational world models.

Comment by avturchin on X-risks to all life v. to humans · 2020-06-10T17:56:50.438Z · EA · GW

The article may reflect my immortalist viewpoint that in almost all circumstances it is better to be alive than not.

Future torture is useless and thus unlikely. Let's look at humanity: as we mature, we tend to care more about other species that have lived on Earth and about minority cultures. Torture for fun or for experiment is only for those who don't know how to get information or pleasure in other ways. It is unlikely that an advanced civilization would deliberately torture humans. Even if resurrected humans don't have full agency, they may have much better lives than most people on Earth have now.

Reconstruction of the past is universally interesting. We have a mammoth resurrection project, a lot of archeological studies, the Sentinelese uncontacted tribe preservation program, etc., so we find a lot of value in studying the past, preserving it, and reconstructing it, and I think this is natural for advanced civilizations.

The x-risk information will be vital for them before they get superintelligence (though humans could be resurrected after it). Imagine that the Apollo program had found some data storage on the Moon: it would have been one of the biggest scientific discoveries of all time. Some information could be useful for an end-of-20th-century-level humanity, like an estimate of the probability of natural pandemics or nuclear wars.

Past data is useful. A future civilization on Earth would get a lot of scientific data from other fields of knowledge: biology, geology; even some math problems solved by us may still be unsolved by them. Moreover, they would get access to an enormous amount of art, which may have fun value (or not).

The resurrection (on good conditions) here is our side of an acausal deal, similar to Parfit's hitchhiker. They may not keep their side of the deal, so there is a risk. Or they may do it much later, after they advance to an interstellar civilization and know that the risk and cost for them are minimal. For example, if they give 0.0001 of all their resources to us but colonise a whole galaxy, that is still 10 million stars under human control, or billions of billions of human beings: much better than extinction.

TL;DR: if there is any value in human existence, it is reasonable to desire the resurrection of humanity (under no-torture conditions) + they will get useful x-risk information at an earlier stage (end-of-20th-century equivalent) than the stage at which they actually resurrect us (they may do it much later, and only if this information was useful, thus closing the deal).

Comment by avturchin on X-risks to all life v. to humans · 2020-06-10T01:04:58.440Z · EA · GW

We could survive by preserving data about humanity (on the Moon or in other places), which would be found by the next civilisation on Earth; they could then recreate humans (based on our DNA) and our culture.

Comment by avturchin on What's the big deal about hypersonic missiles? · 2020-05-18T13:51:08.992Z · EA · GW

Maybe they are also less detectable, so early warning systems will not catch them at early stages?

Comment by avturchin on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-17T16:26:06.661Z · EA · GW

I once created a map of crazy ideas about how biotech could cause human extinction. There are around 100 ways.

Comment by avturchin on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-12T13:23:09.346Z · EA · GW

There is an idea of a multipandemic, that is, several pandemics running simultaneously. This would significantly increase the probability of extinction.

Comment by avturchin on Database of existential risk estimates · 2020-04-17T13:01:34.845Z · EA · GW

Yes, natural catastrophe probabilities could be presented as frequentist probabilities, but some estimates are based on the logical uncertainty of claims like "AGI is possible".

Also, are these probabilities conditioned on "all possible prevention measures are taken"? If yes, they are final probabilities which can't be made lower.

Comment by avturchin on Database of existential risk estimates · 2020-04-16T21:09:22.651Z · EA · GW

Great database!

Your estimates are presented as numerical values similar to probabilities. Are they actually probabilities, and if so, are they frequentist or Bayesian? And more generally: how can we define the "probability of the end of the world"?

Comment by avturchin on Tips for overcoming low back pain · 2020-03-29T14:46:40.479Z · EA · GW

For me, the most important intervention is to sleep on a hard surface. I put 4 layers of yoga mats on my sofa, and it helps a lot.

Comment by avturchin on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-23T13:54:43.501Z · EA · GW

Some internal air cleaners exist, including ones with UV purification. My friend Denis Odinokov suggested making a system to clean external air, consisting of a tube with a HEPA filter, a fan, and a UV light source, which would create positive air pressure inside the apartment. I think it is too difficult to hand-make at home. But it is another business opportunity of our time.

Comment by avturchin on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-21T19:29:26.436Z · EA · GW

I heard about infection in HK via vent tubes.

If I were in a space with many people, I would want the windows to be open. At home, not.

Comment by avturchin on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-19T01:07:50.658Z · EA · GW

What are the chances that the virus will flow from the apartment beneath mine into mine during ventilation?

Comment by avturchin on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-18T19:38:07.084Z · EA · GW

I think it would be difficult for viruses to become completely radiation resistant, as it would require a complete overhaul of their makeup: thicker walls, stronger self-repair.

Comment by avturchin on More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them · 2020-03-13T19:25:10.874Z · EA · GW

There is a new animal in the room: private pay-to-play clinical trials in third countries. In one case, people have to pay 1 million USD to enrol in an anti-aging clinical trial. Some of them could be scams. But it is an option for customers to take the risk and get the vaccine earlier, and for the company to get volunteers.

EDITED: Andre Watson will now be live about private vaccine creation:

Comment by avturchin on Russian x-risks newsletter winter 2019-2020 · 2020-03-11T12:29:51.600Z · EA · GW

It has recently been renamed Porfirich, which is a joke name with some relation to a novel by Dostoevsky. It was created by just one programmer, Michael Grankin.

Comment by avturchin on Russian x-risks newsletter winter 2019-2020 · 2020-03-11T12:22:02.027Z · EA · GW

It is just part of life here. Even when I was at university 20 years ago, there was a student who hated one professor, and he called in bomb threats against the whole university every Thursday. They found him eventually. The current wave of bomb threats is either related to the war with Ukraine or to money blackmail.

Comment by avturchin on Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics · 2020-03-05T11:39:34.245Z · EA · GW


Comment by avturchin on Causal diagrams of the paths to existential catastrophe · 2020-03-02T10:51:30.267Z · EA · GW

I once created a causal map of all global risks, starting from the beginning of evolution and the accumulation of biases, and going up to the end. But it included too many highly connected elements, which makes reading the map difficult. Smaller causal maps with fewer than 10 elements are better adapted for human understanding.

Comment by avturchin on The Web of Prevention · 2020-02-05T12:00:13.881Z · EA · GW

Another good idea from the biosecurity literature is "distancing": any bio threat increases the tendency of people to distance themselves from each other via quarantine, masks, and less travel, so R0 will decline, hopefully below 1.

Comment by avturchin on Pandemics Caused by Accident: Biorisk as a Self-Fulfilling Prophecy · 2020-01-31T15:59:37.087Z · EA · GW

Some Chinese may think that it was a bioweapon used against them and may want retaliation. This is how a nuclear-biological war could start.

Comment by avturchin on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T12:30:08.304Z · EA · GW

Maybe because of the anchoring effect: everyone on Metaculus sees the median prediction before making a bet and doesn't want to be too different from the group.

Comment by avturchin on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-29T10:32:01.899Z · EA · GW

It could have a longer tail, but given the high R0, a large part of the human population could be simultaneously ill (or self-isolated) in March–April 2020.

What is your opinion, Dave: could this put food production at risk?

Comment by avturchin on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T14:54:02.564Z · EA · GW

It looks like it almost does not affect children; an older person should give himself a higher estimate of being affected.

Comment by avturchin on How to Survive the End of the Universe · 2020-01-14T09:23:41.034Z · EA · GW

Thanks. "a bible of new vacuum" is nice, but should be "bubble".

Comment by avturchin on How to Survive the End of the Universe · 2019-12-04T13:35:38.797Z · EA · GW

Thanks. I always try to create a full list of possible solutions, even if some seem very improbable.

Comment by avturchin on Russian x-risks newsletter, fall 2019 · 2019-12-03T23:38:35.672Z · EA · GW

I write it in English. 90 per cent of my Russian friends can read English, and they probably already know most of this news from Russian media.

Comment by avturchin on Eight high-level uncertainties about global catastrophic and existential risk · 2019-11-28T19:11:54.986Z · EA · GW

One such uncertainty is related to the conditional probability of x-risks and their relative order. Imagine that there is 90 per cent chance of biological x-risk before 2030, but if it doesn't happen, there is 90 per cent chance of AI-related x-risk event between 2030 and 2050.

In that case, the total probability of extinction is 99 per cent, of which 90 percentage points come from the biological risk and only 9 from AI. In other words, more remote risks are "reduced" in expected size by earlier risks which "overshadow" them.
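The overshadowing arithmetic can be written out directly, using the assumed 90 per cent figures from the example above:

```python
# Assumed, illustrative probabilities from the example above.
p_bio = 0.9                    # P(bio x-risk before 2030)
p_ai_given_no_bio = 0.9        # P(AI x-risk in 2030-2050, given no bio catastrophe)

# AI risk only matters on worlds that survive the earlier bio risk,
# so it is "reduced" by the factor (1 - p_bio).
p_ai = (1 - p_bio) * p_ai_given_no_bio
p_total = p_bio + p_ai

print(p_ai, p_total)   # 0.09 and 0.99
```

So a risk that is 90 per cent conditional on survival contributes only 9 percentage points to the total, exactly as in the text.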

Another point is that x-risks are by definition one-time events, so the frequentist probability is not applicable to them.

Comment by avturchin on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T11:05:03.860Z · EA · GW

What is EY doing now? Is he coding, writing fiction or a new book, working on math foundations, or providing general leadership?

Comment by avturchin on Expected cost per life saved of the TAME trial · 2019-09-16T13:07:47.956Z · EA · GW

I think there are other cost-effective interventions in life extension, including research into geroprotector combinations and brain plastination.

Comment by avturchin on Expected cost per life saved of the TAME trial · 2019-09-16T11:22:15.659Z · EA · GW

The TAME study got the needed funding from a private donor:

"After closing the final $40m of its required $75m budget with a donation from a private source, the first drug trial directly targeting aging is set to begin at the end of this year, lead researcher Dr Nir Barzilai has revealed."

Comment by avturchin on X-risks of SETI and METI? · 2019-07-04T13:32:03.744Z · EA · GW

If such a message is a description of a computer and a program for it, it is net bad. Think about a malevolent AI which anyone is able to download from the stars.

Such a viral message is aimed at self-replication and thus will eventually convert Earth into its next node, which will use all our resources to send copies of the message farther.

Simple Darwinian logic implies that such viral messages should numerically dominate among all alien messages, if any exist. I wrote an article, linked below, to discuss the idea in detail.

Comment by avturchin on X-risks of SETI and METI? · 2019-07-04T08:36:43.642Z · EA · GW

If we know that there are aliens and they are sending some information, everybody will try to download their message. It is an infohazard.

Comment by avturchin on X-risks of SETI and METI? · 2019-07-03T17:17:26.401Z · EA · GW

I also have an article which compares different ETI-related risks, now under review at JBIS.

Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence

Comment by avturchin on X-risks of SETI and METI? · 2019-07-03T17:00:13.828Z · EA · GW

The latest version was published as a proper article in 2018:

The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI

Alexey Turchin. Journal of the British Interplanetary Society 71 (2):71-79 (2018)

Comment by avturchin on Corporate Global Catastrophic Risks (C-GCRs) · 2019-07-03T09:20:39.389Z · EA · GW

Great post. Also, I expected that Meditations on Moloch would be mentioned.

Comment by avturchin on The case for delaying solar geoengineering research · 2019-03-31T11:40:02.508Z · EA · GW

There is a small probability that we are very wrong about climate sensitivity, and only in this case is climate change an existential risk. The reason for this lies not in climate science but in the anthropic principle: if our climate is very fragile to runaway global warming, we can't observe it, as we find ourselves only on planets where it didn't happen.

To fight runaway global warming we need a different type of geoengineering than for ordinary climate management: it should be able to provide quicker results for larger climate changes, require less research time, and be implementable unilaterally.

I call this type of geoengineering "Plan C". It could be something like an artificial nuclear-winter effect, perhaps started by nuclear explosions in dormant volcanoes.

Comment by avturchin on Cost-Effectiveness of Aging Research · 2019-02-09T13:26:25.644Z · EA · GW

Surely, there are larger effect sizes there, but they need much more testing to prove safety, and such testing is the most expensive part of any trial. There are a few already-safe interventions which could help extend life: besides metformin, green tea and vitamin D.

Even as a trillion-dollar project, fighting aging could still be cost-effective, once we divide the benefit over 10 billion people.

If we are speaking of de novo therapies, the current price of developing just one drug is close to 10 billion dollars, and a comprehensive aging therapy like SENS would include many new interventions; it may be reasonable to estimate it as equal to 100 new interventions, so a trillion-dollar price is realistic. The sum is large but affordable for humanity as a whole: total space funding over all of history is around this price.

However, it is impossible to get such trillion-dollar funding via donations. But EA efforts could be used to attract larger funders, like pension funds, pharma, governments, billionaires, and insurance companies, as they would eventually benefit from a cure for aging.

Comment by avturchin on Cost-Effectiveness of Aging Research · 2019-02-02T10:30:02.597Z · EA · GW

The main question as I see it: is the current spending of 1 billion a year on aging enough to delay aging by 10 years? Aging is a problem of (hyper)exponentially increasing complexity with time. There are probably a few interventions which could give 1–3 years of expected life extension (and aging delay): metformin, vitamin D, and green tea, and proper testing of them could cost as little as tens of millions of dollars, as in the proposed TAME study of metformin. This (plus the chance to survive until other life-extending technologies arrive) means much higher cost-effectiveness for such small experiments, as I described in the post. There are several other ways to donate more cost-effectively than directly funding aging research, like lobbying the WHO to classify aging as a disease.

On the other hand, as the complexity of aging grows so quickly with time, adding up small interventions will not give us a 10-year delay of aging. So when we speak about a 10-year aging delay, costs become much higher, as there is no more low-hanging fruit.

I have read the opinion that current aging research could benefit from a 10-times increase in spending. But it is still not clear how much should be spent in this mode to find "a cure for aging". I guesstimate at least a trillion dollars for a 10-year delay of aging, above the level which we could get via simple (but under-tested) interventions, which is 3–5 years.

Now, spending a trillion dollars would give 10 billion people 10 QALYs each, which is only 10 dollars per QALY (assuming that we should not count the price of therapy, as people will pay for it themselves; they only need the opportunity for life extension and are not constrained in health spending).
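The cost-per-QALY figure follows from the assumed numbers above; a one-line check, with the research cost, population, and years of delay all taken as the illustrative values from the comment:

```python
# Assumed figures from the estimate above.
research_cost = 1e12          # total research cost, USD
people = 10e9                 # beneficiaries (10 billion people)
qalys_per_person = 10         # years of aging delay per person

cost_per_qaly = research_cost / (people * qalys_per_person)
print(cost_per_qaly)          # 10.0 USD per QALY
```

Note the therapy-delivery cost is deliberately excluded, per the assumption in the text that people pay for the therapy themselves.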

Comment by avturchin on Combination Existential Risks · 2019-01-15T21:30:12.007Z · EA · GW

In fact, I also tried to explore this idea (which I find crucial) in my Russian book "Structure of the Global Catastrophe", but my attempts to translate it into English didn't work well, so I am now slowly converting its content into articles.

I would add an important link to "A Singular Chain of Events" by Tonn and MacGregor, as well as the work of Seth Baum on double catastrophes. The idea of "peak everything", about simultaneous depletion of all natural resources, also belongs here, but it should be combined with the idea of the Singularity as the acceleration of everything; combined, they create a very unstable situation.

Comment by avturchin on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T12:40:31.482Z · EA · GW

The theoretical basis for a Doomsday weapon was laid by Herman Kahn in "On Thermonuclear War". I scanned the related chapter here:

The main idea is that it is an ideal defence weapon, as nobody will ever attack a country owning such a device.

The idea of attacking Yellowstone is discussed very often in the Russian blogosphere (like here), and interest in geophysical weapons was strong in the Soviet Union (details here); this interest ended up in the creation of the Poseidon artificial-tsunami system, which is now under testing.

I've read that the US has an instrument to attack hardened underground facilities with multiple heavy nuclear strikes in one place, which allows creating a much deeper crater than a single nuclear explosion and destroying targets around 1 km deep. In the same way, a volcanic caldera cover could be attacked, and such multiple strikes could weaken it until it blows up from internal pressure, so you don't need to go through the whole caldera cover. No new weapons are needed for this, just special targeting of already existing ones.

The Russian Poseidon system carries 100–200 Mt bombs delivered by a very large torpedo and is in the final stages of construction.

Comment by avturchin on Climate Change Is, In General, Not An Existential Risk · 2019-01-13T11:23:41.969Z · EA · GW

A "normal" nuclear war could be only the first stage of a multistage collapse. However, there are some ideas of how to use existing nuclear stockpiles to cause more damage and trigger a larger global catastrophe; the most discussed is nuking a supervolcano, but there are others. In Russian sources it is a commonplace that a retaliation attack on the US may include an attack on Yellowstone, but I don't know if it is part of the official doctrine.

A future nuclear war could use even more destructive weapons (which may exist secretly now). Teller worked on a 10-gigaton bomb. Russia is now making the Poseidon large torpedo system, which will probably be equipped with 100 Mt cobalt bombs.

Comment by avturchin on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T10:28:40.643Z · EA · GW

"Normal" global warming is not an x-risk, but a possible heavy tail connected with something unknown could be. For example, the observed stability of our climate may be just an "anthropic shadow", and, in fact, a climate transition to the next, hotter meta-stable state may be long overdue and could be triggered by small human actions.

The next meta-stable state may have a median temperature of 57°C, according to the article ("The climate instability is caused by a positive cloud feedback and leads to a new steady state with global-mean sea-surface temperatures above 330 K").

Because of rising solar luminosity, extinction-level global warming is a question of "when", not "if", but typically it is estimated to happen hundreds of millions of years from now.