Posts

Donation offsets for ChatGPT Plus subscriptions 2023-03-16T23:11:18.163Z
Thoughts on the OpenAI alignment plan: will AI research assistants be net-positive for AI existential risk? 2023-03-10T08:20:45.898Z
When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️ 2023-01-05T01:55:21.812Z
Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments 2022-07-11T19:38:48.955Z
My vision of a good future, part I 2022-07-06T01:23:10.588Z
US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk 2022-05-05T23:04:26.501Z
Information security considerations for AI and the long term future 2022-05-02T20:53:56.389Z
EA Hangout Prisoners' Dilemma 2021-09-27T23:17:03.016Z
Retrospective on Catalyst, a 100-person biosecurity summit 2021-05-26T13:10:22.942Z
Nuclear war is unlikely to cause human extinction 2020-11-07T05:39:27.126Z
Update on civilizational collapse research 2020-02-10T23:40:39.529Z
Does the US nuclear policy still target cities? 2019-10-02T17:46:44.439Z

Comments

Comment by Jeffrey Ladish (landfish) on Donation offsets for ChatGPT Plus subscriptions · 2023-03-17T00:05:10.170Z · EA · GW

@Daniel_Eth asked me why I chose 1:1 offsets. The answer is that I didn't have a principled reason for doing so, and I don't think there's anything special about 1:1 offsets except that they're a decent Schelling point. I think any offsets are better than no offsets here. I don't feel like BOTECs of harm caused are likely to be a particularly useful way to calculate offsets here, but I'd be interested in arguments to that effect if people have them.

Comment by Jeffrey Ladish (landfish) on Thank you so much to everyone who helps with our community's health and forum. · 2023-02-06T07:57:07.688Z · EA · GW

Really appreciate you! It's felt stressful at times just being someone in the community, and it's hard to imagine how much more stressful it would feel in your shoes. Really appreciate your hard work; I think the EA movement is significantly improved by your efforts maintaining, improving, and moderating the forum, and by all the mostly-unseen-but-important work mitigating conflicts & potential harm in the community.

Comment by Jeffrey Ladish (landfish) on Overreacting to current events can be very costly · 2022-10-05T00:41:43.069Z · EA · GW

I think it's worth noting that I'd expect you to gain a significant relative advantage if you get out of cities before other people, such that acting later would be a lot less effective at furthering your survival & rebuilding goals.

I expect the bulk of the risk of an all-out nuclear war to fall in the couple of weeks after the first nuclear use. If I'm right, then the way to avoid the failure mode you're identifying is to return after a few weeks if no further nuclear weapons have been used, or something similar.

Comment by Jeffrey Ladish (landfish) on Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments · 2022-07-12T00:35:29.911Z · EA · GW

I think the problem is the vagueness of the type of commitment the GWWC Pledge represents. If it's an ironclad commitment, people should lose a lot of trust in you. If it was a "best of intentions" type of commitment, people should only lose a modest amount of trust in you. I think the difference matters!

Comment by Jeffrey Ladish (landfish) on Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments · 2022-07-12T00:32:00.194Z · EA · GW

I super agree it's important not to conflate "do you keep actually-thoughtful promises you think people expected you to interpret as real commitments" and "do you take all superficially-promise-like things as serious promises"! And while I generally want people to think harder about what they're asking for with respect to commitments, I don't think going overboard on strict-promise interpretations is good. Good promises rest on a shared understanding between both parties. I think a big part of building trust with people is figuring out a good shared language and context for what you mean, including when making strong and weak commitments.

I wrote something related in my first draft but removed it since it seemed a little tangential; I'll paste it here:

"It’s interesting that there are special kinds of ways of saying things that hold more weight than other ways of saying things. If I say “I absolutely promise I will come to your party”, you will probably have a much higher expectation that I’ll attend then if I say “yeah I’ll be there”. Humans have fallible memory, they sometimes set intentions and then can’t carry through. I think some of this is a bit bad and some is okay. I don’t think everyone would be better off if every time they said they would do something they treated this as an ironclad commitment and always followed through. But I do think it would be better if we could move at least somewhat in this direction."

Based on your comment, I now think the thing to move toward is not just "interpreting commitments as stronger" but rather "more clarity in communication about what type of commitment is being made."
 

Comment by Jeffrey Ladish (landfish) on My vision of a good future, part I · 2022-07-06T02:00:55.902Z · EA · GW

I think it will require us to reshape / redesign most ecosystems & probably pretty large parts of many or most animals. This seems difficult but well within the bounds of a superintelligence's capabilities. I think that, at most a few decades after greater-than-human AGI, we'll have superintelligence, so in the good future I think we can solve this problem.

Comment by Jeffrey Ladish (landfish) on Information security considerations for AI and the long term future · 2022-05-05T05:37:58.420Z · EA · GW

I don't think an ordinary small/medium tech company can succeed at this. I think it's possible with significant (extraordinary) effort, but that sort of remains to be seen.

As I said in another thread:

>> I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis ( that we can secure X kind of system for Y application of resources ), and to try to get more information to test the hypothesis. If AGI development is impossible to secure, that cuts off a lot of potential alignment strategies. So it seems really worth trying to find out if it's possible.
 

Comment by Jeffrey Ladish (landfish) on Information security considerations for AI and the long term future · 2022-05-05T05:33:31.612Z · EA · GW

I agree that a lot of the research today by leading labs is being published. I think the norms are slowly changing, at least for some labs. Deciding not to (initially) release the model weights of GPT-2 was a big change in norms iirc, and I think the trend towards being cautious with large language models has continued. I expect that as these systems get more powerful, and the ways they can be misused gets more obvious, norms will naturally shift towards less open publishing. That being said, I'm not super happy with where we're at now, and I think a lot of labs are being pretty irresponsible with their publishing.

The dual-use question is a good one, I think. Offensive security knowledge is pretty dual-use, yes. Pen testers can use their knowledge to illegally hack if they want to. But the incentives in the US are pretty good regarding legal vs. illegal hacking, less so in other countries. I'm not super worried about people learning hacking skills to protect AGI systems only to use those skills to cause harm -- mostly because the offensive security area is already very big / well resourced. In terms of using AI systems to create hacking tools, that's an area where I think dual-use concerns can definitely come into play, and people should be thoughtful & careful there.

I liked your shortform post. I'd be happy to see people apply infosec skills toward securing nuclear weapons (and in the biodefense area as well). I'm not very convinced this would mitigate risk from superintelligent AI, since nuclear weapons would greatly damage infrastructure without killing everyone, and thus wouldn't be very helpful for eliminating humans imo. You'd still need some kind of manufacturing capability in order to create more compute, and if you have the robotics capability to do that, then wiping out humans probably doesn't take nukes - you could do it with drones or bioweapons or whatever. But this is all highly speculative, of course, and I think there is a case for securing nuclear weapons without looking at risks from superintelligence. Improving the security of nuclear weapons may increase the stability of nuclear weapons states, and that seems good for their ability to negotiate with one another, so I could see there being some route to AI existential risk reduction via that avenue.

Comment by Jeffrey Ladish (landfish) on Information security considerations for AI and the long term future · 2022-05-05T05:20:26.632Z · EA · GW

I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis ( that we can secure X kind of system for Y application of resources ), and to try to get more information to test the hypothesis. If AGI development is impossible to secure, that cuts off a lot of potential alignment strategies. So it seems really worth trying to find out if it's possible.

Comment by Jeffrey Ladish (landfish) on EA Hangout Prisoners' Dilemma · 2021-09-28T00:43:29.023Z · EA · GW

I expect most people to think that either AMF or MIRI is much more likely to do good. So from most agents' perspectives, the unilateral defection is only better if their chosen org wins. If someone has more of a portfolio approach that weights longtermist and global poverty efforts similarly, then your point holds. I expect that's a minority position though.

Comment by Jeffrey Ladish (landfish) on The $100trn opportunity: ESG investing should be a top priority for EA careers · 2021-04-22T18:26:54.188Z · EA · GW

Thanks!

Comment by Jeffrey Ladish (landfish) on The $100trn opportunity: ESG investing should be a top priority for EA careers · 2021-03-23T19:31:38.676Z · EA · GW

I see you define it a few paragraphs down, but defining it at the top would be helpful, I think.

Comment by Jeffrey Ladish (landfish) on The $100trn opportunity: ESG investing should be a top priority for EA careers · 2021-03-23T19:29:05.714Z · EA · GW

Could you define ESG investing at the beginning of your post?

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-12-26T00:53:54.573Z · EA · GW

Yeah, I would agree with that! I think radiological weapons are some of the most relevant nuclear capabilities / risks to consider from a long-term perspective, given the risk that they could be developed in the future.

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-12-11T01:40:12.355Z · EA · GW

The part I added was:

"By a full-scale war, I mean a nuclear exchange between major world powers, such as the US, Russia, and China, using the complete arsenals of each country. The total number of warheads today (14,000) is significantly smaller than during the height of the cold war (70,000). While extinction from nuclear war is unlikely today, it may become more likely if significantly more warheads are deployed or if designs of weapons change significantly."

I also think indirect extinction from nuclear war is unlikely, but I would like to address this more in a future post. I disagree that additional clarifications are needed. I think people made these points clearly in the comments, and anyone motivated to investigate this area seriously can read those. If you want to try to double-crux on why we disagree here I'd be up for that, though a call might be preferable to save time.

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-12-11T01:31:11.470Z · EA · GW

Thanks for this perspective!

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-12-11T01:30:19.589Z · EA · GW

Strong agree!

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-12-11T01:28:51.534Z · EA · GW

I mean that the amount required to cover every part of the Earth's surface would serve no military purpose. Or rather, it might enhance one's deterrent a little bit, but it would
1) kill all of one's own people, which is the opposite of a defense objective, and
2) not be a very cost-effective way to improve one's deterrent. In nearly all cases it would make more sense to expand second-strike capabilities by adding more submarines, mobile missile launchers, or other stealth second-strike weapons.

Which isn't to say this couldn't happen! Military research teams have proposed crazy plans like this before. I'm just arguing, as have many others at RAND and elsewhere, that a doomsday machine isn't a good deterrent, compared to the other options that exist (and given the extraordinary downside risks).

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-11-09T22:58:16.350Z · EA · GW

> FWIW, my guess is that you're already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it'd be good to describe in detail "here is how this combination of different hazards could kill everyone"]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I'd be happy to review a post prior to publication, though I'm not sure if I'm particularly qualified.)

Yes, I was planning to get review prior to publishing this. In general when it comes to risks from biotechnology, I'm trying to follow the principles we developed here: https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards I'd be excited to see, or help workshop, better guidance for navigating information hazards in this space in the future. 

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-11-09T03:33:48.076Z · EA · GW

Thanks, fixed!

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-11-09T03:33:30.993Z · EA · GW

Thanks, fixed!

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-11-08T00:04:19.465Z · EA · GW

This may be in the Brookings estimate, which I haven't read yet, but I wonder how much cost disease + the reduction in nuclear forces have affected the cost per warhead / missile. My understanding is that many military weapon systems get much more expensive over time, for reasons I don't understand well.

Warheads could be altered to increase the duration of radiation effects from fallout, but this would also reduce their yield, and would represent a pretty large change in strategy. We've gone 70 years without such weapons, with the recent Russian submersible system as a possible exception. It seems unlikely such a shift in strategy will occur in the next 70 years, but like 3% unlikely rather than really unlikely.

It's a good point that risks of extinction could get significantly worse if different/more nuclear weapons were built & deployed, and combined with other WMDs. And the existence of 70k+ weapons during the Cold War presents a decent outside-view argument that we might see that many in the future. I'll edit the post to clarify that I mean present and not future risks from nuclear war.

Comment by Jeffrey Ladish (landfish) on Nuclear war is unlikely to cause human extinction · 2020-11-07T23:44:41.278Z · EA · GW

I think I gave the impression that I'm making a more expansive claim than I actually mean to make, and will edit the post to clarify this. The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically. I know most people who've examined it know this is wrong, but I wanted that information to be laid out pretty clearly, so someone could get a summary of the argument. I think that's just the beginning in assessing existential risk from nuclear war, and I really wouldn't want people to read my post and walk away thinking "nuclear war is nothing to worry about from a longtermist perspective."

I agree that "We know that one type of existential risk from nuclear war is very small, but we don't really have a good idea for how large total existential risk from nuclear war". I'm planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.

It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity's potential / eventual extinction. I currently think humans would retain the most significant basic survival technologies following a collapse and then reacquire lost technological capacities relatively quickly (I discussed this investigation here, though not in depth). I'm planning to write this up as part of my compounding risks post or as a separate one.

Agreed that it's very hard to know the sign on a huge history-altering event, whether it's a nuclear war or covid.

Comment by Jeffrey Ladish (landfish) on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T09:47:48.971Z · EA · GW

Some quick answers to your questions based on my current beliefs:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?

I think the answer in the short term is no, if "completely collapses" means something like "is unable to get back to at least 1950s-level technology in 500 years". I do think there are a number of things that could reduce humanity's "technological carrying capacity". I'm currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.

  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)

I think we can. I'm not sure we can get very confident about exactly which potential bottlenecks will prove most significant, but I think we can narrow the search space and put forth some good hypotheses, both by reasoning from the best reference class examples we have and by thinking through the economics of potential scenarios.

  • Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggests it's quite hard to get to 90% mortality, I think most nuclear winter scenarios also have less than a 90% food reduction impact

I'm not sure about this one. I can think of some scenarios that would wipe out 90%+ of the population, but none of them seem very likely. Engineered pandemics seem like one candidate (I agree with Denkenberger here), and the worst-case nuclear winter scenarios might also do it, though I haven't read the nuclear winter papers in a while, and there have been several new papers and comments in the last year, including real disagreement in the field (yay, finally!).

  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?

Population seems like one important variable in our technological carrying capacity, but I expect some of the others are just as important. A huge one, which I mentioned in my other post, is state planning & coordination capacity. I think post-WWII Germany and Japan illustrate this quite well. However, I don't have a very good sense of what might cause most states to fail without also destroying a large part of the population at the same time. What I'm saying is that the population factor might not be the most important one in those scenarios.

  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I'm very uncertain about this. I do think there is a good case for interventions aimed at improving the existential risk profile of a post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization. The gist is that there is far less competition for the former interventions. Of course, given the huge uncertainties about both the circumstances of global catastrophes and the potential intervention points, it's hard to say whether it would be possible to actually alter the post-disaster civilization's risk profile at all. However, it's also hard to say whether we can alter the current civilization's profile at all, and it's not obvious to me that this latter task is easier.

Comment by Jeffrey Ladish (landfish) on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T06:07:16.887Z · EA · GW

I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.

My conclusions after investigating potential collapse scenarios:

1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a "civilizational collapse", where an unprecedented number of people die and key technologies are (temporarily) lost.

2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.

3) The highest-leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack the infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impacts compared to state efforts.

Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn't have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills / resources, I would strongly consider offering my assistance.

Comment by Jeffrey Ladish (landfish) on Information security careers for GCR reduction · 2020-01-31T08:34:40.854Z · EA · GW

I do know of a project here that is pretty promising, related to improving secure communication between nuclear weapons states. If you know people with significant expertise who might be interested, PM me.

Comment by Jeffrey Ladish (landfish) on Moloch and the Pareto optimal frontier · 2020-01-14T20:11:58.709Z · EA · GW

This seems approximately right. I have some questions around how competitive pressures relate to common-good pressures. It's sometimes the case that they are aligned (e.g. in many markets).

Also, there may be a landscape of coalitions (which are formed via competitive pressures), and some of these may be more aligned with the common good and some may be less. And their alignment with the public good may be orthogonal to their competitiveness / fitness.

It would be weird if it were completely orthogonal, but I would expect it to naturally be somewhat orthogonal.

Comment by Jeffrey Ladish (landfish) on Information security careers for GCR reduction · 2019-07-08T05:21:41.473Z · EA · GW

An additional point is that "relevant roles in government" should probably include contracting work as well. So it's possible to go work for Raytheon, get a security clearance, and do cybersecurity work for the government (and that pays significantly better!).

Comment by Jeffrey Ladish (landfish) on Information security careers for GCR reduction · 2019-07-05T07:59:48.866Z · EA · GW

I think working at a top security company could be a way to gain a lot of otherwise hard-to-get experience. Trail of Bits, NCC Group, and FireEye are a few that come to mind.

Comment by Jeffrey Ladish (landfish) on Information security careers for GCR reduction · 2019-07-05T07:53:11.418Z · EA · GW
> Our current best guess is that people who are interested should consider seeking security training in a top team in industry, such as by working on security at Google or another major tech company, or maybe in relevant roles in government (such as in the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.

This is a big area of uncertainty for me. I agree that Google & other top companies would be quite valuable, but I'm much less convinced that government work will be as good. At high levels of the NSA, CIA, military intelligence, etc. I expect it to be, but for someone getting early experience, it's less obvious. Government positions are probably going to be less flexible / more constrained in the types of problems you can work on, with lower-quality mentorship opportunities at the lower levels. Startups can be good if they value security (Reserve was great for me because I got to actually be in charge of security for the whole company & learn how to get people to use good practices), but most startups do not value security, so I wouldn't recommend working for a startup unless it showed strong signs of valuing security.

My guess is that the important factors are roughly:

  • Good technical mentorship - While I expect this to be better than average at the big tech companies, it isn't guaranteed.
  • Experience responding to real threats (i.e., a company that has enough attack surface and active threats to get a good sense of what real attacks look like)
  • Red team experience, as there is no substitute for actually learning how to attack a system
  • Working with non-security & non-technical people to implement security controls. I think most of the opportunities described in this post will require this kind of experience. Some technical security roles in big companies do not require this, since there is enough specialization that vulnerability remediation can happen via other companies.

Comment by Jeffrey Ladish (landfish) on Information security careers for GCR reduction · 2019-07-05T07:45:53.743Z · EA · GW

One potential area of biorisk + infosec work would be improving the biotech industry's ability to secure synthesis & lab automation technology against use in creating dangerous pathogens / organisms.

Such misuse could happen via circumventing existing controls (e.g. ordering a virus which is on a banned-sequence list), or by hijacking synthesis equipment itself. So protecting this type of infrastructure may be super important. I could see this being a more policy-oriented role, but one that would require infosec skills.

I expect this work would be valuable if someone possessed both the political acumen to convince the relevant policy-makers / companies that it was worthwhile and the technical / organizational skill to put solid controls in place. I don't expect this kind of work to be done by default unless something bad happens [i.e. a company is hacked and a dangerous organism is produced]. So having someone drive preventative measures before any disaster happens could be valuable.