Posts

Update on civilizational collapse research 2020-02-10T23:40:39.529Z · score: 47 (22 votes)
Does the US nuclear policy still target cities? 2019-10-02T17:46:44.439Z · score: 32 (12 votes)

Comments

Comment by landfish on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T09:47:48.971Z · score: 4 (3 votes) · EA · GW

Some quick answers to your questions based on my current beliefs:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?

I think the answer in the short term is no, if "completely collapses" means something like "is unable to get back to at least 1950s-level technology in 500 years". I do think there are a number of things that could reduce humanity's "technological carrying capacity". I'm currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.

  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)

I think we can. I'm not sure we can get very confident about exactly which potential bottlenecks will prove most significant, but I think we can narrow the search space and put forth some good hypotheses, both by reasoning from the best reference class examples we have and by thinking through the economics of potential scenarios.

  • Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it's quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact.

I'm not sure about this one. I can think of some scenarios that would wipe out 90%+ of the population, but none of them seem very likely. Engineered pandemics seem like one candidate (I agree with Denkenberger here), and the worst-case nuclear winter scenarios might also do it, though I haven't read the nuclear winter papers in a while, and there have been several new papers and comments in the last year, including real disagreement in the field (yay, finally!).

  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?

Population seems like one important variable in our technological carrying capacity, but I expect some of the others are as important. The one I mentioned in my other comment, which I think is a huge one, is state planning & coordination capacity. I think post-WWII Germany and Japan illustrate this quite well. However, I don't have a very good sense of what might cause most states to fail without also destroying a large part of the population at the same time. My point is just that the population factor might not be the most important one in those scenarios.

  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I'm very uncertain about this. I do think there is a good case that interventions aimed at improving the existential risk profile of a post-disaster civilization are competitive with interventions aimed at improving the existential risk profile of our current civilization. The gist is that there is far less competition for the former interventions. Of course, given the huge uncertainties about both the circumstances of global catastrophes and the potential intervention points, it's hard to say whether it would be possible to actually alter the post-disaster civilization's risk profile at all. However, it's also hard to say whether we can alter the current civilization's profile, and it's not obvious to me that this latter task is easier.

Comment by landfish on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T06:07:16.887Z · score: 5 (5 votes) · EA · GW

I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.

My conclusions after investigating potential collapse scenarios:

1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a "civilizational collapse", where an unprecedented number of people die and key technologies are (temporarily) lost.

2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.

3) The highest-leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack the infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impact than state efforts.

Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn't have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills / resources, I would strongly consider offering my assistance.

Comment by landfish on Information security careers for GCR reduction · 2020-01-31T08:34:40.854Z · score: 2 (2 votes) · EA · GW

I do know of a project here that is pretty promising, related to improving secure communication between nuclear weapons states. If you know people with significant expertise who might be interested, PM me.

Comment by landfish on Moloch and the Pareto optimal frontier · 2020-01-14T20:11:58.709Z · score: 2 (2 votes) · EA · GW

This seems approximately right. I have some questions around how competitive pressures relate to common-good pressures. It's sometimes the case that they are aligned (e.g. in many markets).

Also, there may be a landscape of coalitions (which are formed via competitive pressures), and some of these may be more aligned with the common good and some may be less. Their alignment with the common good may also be orthogonal to their competitiveness / fitness.

It would be weird if it were completely orthogonal, but I would expect it to naturally be somewhat orthogonal.
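
To make "somewhat orthogonal" a bit more concrete, here is a minimal toy simulation (my own illustration, with made-up parameters, not anything from the original post): each coalition gets a competitiveness score and a common-good score drawn with a tunable correlation, and selection on competitiveness only lifts the common good to the extent the two are correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_common_good_of_winners(correlation, n_coalitions=10_000, survivors=100):
    # Draw (competitiveness, common_good) pairs with the given correlation.
    cov = [[1.0, correlation], [correlation, 1.0]]
    competitiveness, common_good = rng.multivariate_normal(
        [0.0, 0.0], cov, n_coalitions).T
    # Competitive pressure: only the most competitive coalitions persist.
    winners = np.argsort(competitiveness)[-survivors:]
    return common_good[winners].mean()

for rho in [0.0, 0.3, 0.9]:
    print(f"correlation {rho:.1f} -> winners' mean common good: "
          f"{mean_common_good_of_winners(rho):+.2f}")
```

At correlation 0.0 the surviving coalitions are no better for the common good than average; that is the fully orthogonal case, and the intermediate values are the "somewhat orthogonal" regime I have in mind.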

Comment by landfish on Information security careers for GCR reduction · 2019-07-08T05:21:41.473Z · score: 9 (4 votes) · EA · GW

An additional point is that "relevant roles in government" should probably include contracting work as well. So it's possible to go work for Raytheon, get a security clearance, and do cybersecurity work for the government (and that pays significantly better!).

Comment by landfish on Information security careers for GCR reduction · 2019-07-05T07:59:48.866Z · score: 13 (4 votes) · EA · GW

I think working at a top security company could be a way to gain a lot of otherwise hard-to-get experience. Trail of Bits, NCC Group, and FireEye are a few that come to mind.

Comment by landfish on Information security careers for GCR reduction · 2019-07-05T07:53:11.418Z · score: 19 (8 votes) · EA · GW

Our current best guess is that people who are interested should consider seeking security training in a top team in industry, such as by working on security at Google or another major tech company, or maybe in relevant roles in government (such as in the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.

This is a big area of uncertainty for me. I agree that Google & other top companies would be quite valuable, but I'm much less convinced that government work will be as good. At high levels of the NSA, CIA, military intelligence, etc., I expect it to be, but for someone getting early experience, it's less obvious. Government positions are probably going to be less flexible / more constrained in the types of problems to work on, and they have fewer high-quality mentorship opportunities at the lower levels. Startups can be good if they value security (Reserve was great for me because I got to actually be in charge of security for the whole company & learn how to get people to use good practices), but most startups do not, so I wouldn't recommend working for a startup unless it showed strong signs of valuing security.

My guess is that the important factors are roughly:

  • Good technical mentorship - While I expect this to be better than average at the big tech companies, it isn't guaranteed.
  • Experience responding to real threats (i.e., a company that has enough attack surface and active threats to get a good sense of what real attacks look like)
  • Red team experience, as there is no substitute for actually learning how to attack a system
  • Working with non-security & non-technical people to implement security controls. I think most of the opportunities described in this post will require this kind of experience. Some technical security roles in big companies do not require this, since there is enough specialization that vulnerability remediation can happen via other companies.

Comment by landfish on Information security careers for GCR reduction · 2019-07-05T07:45:53.743Z · score: 7 (2 votes) · EA · GW

One potential area of biorisk + infosec work would be improving the biotech industry's ability to secure synthesis & lab automation technology against misuse in creating dangerous pathogens / organisms.

Such misuse could happen via circumventing existing controls (e.g. ordering a virus whose sequence is on a banned-sequence list) or by hijacking synthesis equipment itself. So protecting this type of infrastructure may be super important. I could see this being a more policy-oriented role, but one that would require infosec skills.
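As a rough sketch of what the "existing controls" look like (my own toy illustration, not a real screening pipeline; the blocklist contents and k-mer size are made up): synthesis providers typically screen orders by matching windows of the ordered sequence against a database of sequences of concern.

```python
def build_banned_kmers(banned_sequences, k=20):
    """Index every length-k window of each banned sequence."""
    kmers = set()
    for seq in banned_sequences:
        for i in range(len(seq) - k + 1):
            kmers.add(seq[i:i + k])
    return kmers

def order_is_flagged(order_seq, banned_kmers, k=20):
    """Flag an order if any length-k window matches a banned k-mer."""
    return any(order_seq[i:i + k] in banned_kmers
               for i in range(len(order_seq) - k + 1))

# Toy example with a hypothetical banned sequence.
banned = ["ATGCGTACGTTAGCCGATCGATTACGGCATGC"]
kmers = build_banned_kmers(banned)
print(order_is_flagged("AA" + banned[0][:25], kmers))  # True: contains a banned window
```

Real screening (e.g. under the IGSC's Harmonized Screening Protocol) relies on fuzzier homology search rather than exact matching, and the infosec angle is precisely that these screening pipelines and the synthesis equipment behind them need to be hard to tamper with or bypass.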

I expect this work to be valuable if someone possessed both the political acumen to convince the relevant policy-makers / companies that it was worthwhile and the technical / organizational skill to put solid controls in place. I don't expect this kind of work to be done by default unless something bad happens [i.e. a company is hacked and a dangerous organism is produced]. So having someone driving preventative measures before any disaster happens could be valuable.