Bioinfohazards

post by Fin · 2019-09-17T02:41:29.760Z · score: 56 (27 votes) · EA · GW · 8 comments

Contents

  Biorisk
  Risks of Information Sharing
    Bad conceptual ideas to bad actors
      Examples
    Bad conceptual ideas to careless actors
      Examples
    Implementation details to bad actors
      Examples
    Implementation details to careless actors
      Examples
    Information vulnerable to future advances
      Examples
    Risk of Idea Inoculation
      Examples
    Some Other Risk Categories
  Risks from Secrecy
    Risk of Lost Progress
      Examples
    Dangerous work is not stopped
      Examples
    Risk of Information Siloing
      Catalyst Biosummit
      Examples
    Barriers to Funding and New Talent
      Examples
    Streisand Effect
      Examples
  Conclusion
  Sources

Authors: Megan Crawford, Finan Adamson, Jeffrey Ladish

Special Thanks to Georgia Ray for Editing

Biorisk

Most people in the effective altruism community are aware that biological technology poses a possible existential threat, but know little beyond that. The form biological threats could take is unclear. Is the primary threat from state bioweapon programs? Or superorganisms accidentally released from synthetic biology labs? Or something else entirely?

If you’re not already an expert, you’re encouraged to stay away from this topic. You’re told that speculating about powerful biological weapons might inspire terrorists or rogue states, and that simply articulating these threats won’t make us any safer. The cry of “Info hazard!” shuts down discussion by fiat, and the reasons cannot be explained, since these might also be info hazards. If concerned, intelligent people cannot articulate their reasons for censorship and cannot coordinate around principles of information management, that is itself a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well-intentioned ignorance.

We believe that well reasoned principles and heuristics can help solve this coordination problem. The goal of this post is to carve up the information landscape into areas of relative danger and safety; to illuminate some of the islands in the mire that contain more treasures than traps, and to help you judge where you’re likely to find discussion more destructive than constructive.

Useful things to know already if you’re reading this post:

Much of the material in this post also overlaps with Gregory Lewis’ Information Hazards in Biotechnology article, which we recommend.

Risks of Information Sharing

We’ve divided this paper into two broad categories: risks from information sharing, and risks from secrecy. First we will go over the ways in which sharing information can cause harm, and then how keeping information secret can cause harm.

We believe considering both is important for determining whether or not to share a particular thought or paper. To keep things relatively targeted and concrete, we provide illustrative toy examples, or sometimes even real examples.

This section categorizes ways that sharing information in the biological sciences can be risky.

A topic covered in other Information Hazard posts that we chose not to focus on here is that different audiences can present substantially different risk profiles for the same idea.

With some ideas, almost all of the benefits and de-risking associated with sharing can be achieved by only mentioning your idea to one key researcher, or sharing findings in a journal associated with some obscure subfield, while simultaneously dodging most of the risk of these ideas finding their way to a foolish or bad actor.

If you’re interested in that topic, Gregory Lewis’ paper Information Hazards in Biotechnology is a good place to read about it.

Bad conceptual ideas to bad actors

A bad actor gets an idea they did not previously have

Some ways this could manifest:

Why might this be important?

State or non-state actors may have trouble developing ideas on their own. Model generation can be quite difficult, so generating or sharing clever new models can be risky. In particular, we are concerned about the possibility of ideas moving from biology researchers to bioterrorists or state actors. Biosecurity researchers are often better-educated and/or more creative than most bad actors, and there are probably many more researchers than people interested in bioterrorism. Between these two effects, researchers are likely to come up with many more dangerous ideas than bad actors would on their own.

Examples

Bad conceptual ideas to careless actors

A careless actor gets an idea they did not previously have

Some ways this could manifest:

Why might this be important?

Careless actors may be unlikely to have a given interesting idea on their own, but might have the inclination and ability to implement an idea if they hear about it from someone else. One reason this might be true is that biosecurity researchers could specifically be looking for interesting possible threats, so the “interesting idea” space they explore will focus more heavily on risky ideas.

Examples

Implementation details to bad actors

A bad actor gains access to details (but not an original idea) on how to create a harmful biological agent

Some ways this could manifest:

Why might this be important?

The bad actor would not have been able to easily generate the instructions to create the harmful agent without the new source of information. As DNA synthesis & lab automation technology improves, the bottleneck to the creation of a harmful agent is increasingly knowledge & information rather than applied skill. Technical knowledge and precise implementation details have historically been a bottleneck for bioweapons programs, particularly terrorist or poorly-funded programs (see Barriers to Bioweapons by Sonia Ben Ouagrham-Gormley).

Examples

Implementation details to careless actors

A little knowledge is a dangerous thing

Some ways this could manifest:

Why might this be important?

Many new technologies (especially in biology) may have unintended side effects. Microscopic organisms can proliferate, and that may get out of hand if procedures are not followed carefully. Sometimes a tentative plan, which might or might not be a good idea, is perceived as a great plan by someone less familiar with its risks. The more careless actor may then take steps to implement a plan without considering the externalities.

As advanced lab equipment becomes cheaper and more accessible, and as more non-academic labs open up without the highly-cautious pro-safety incentives of academia, we might expect to see more experimenters who neglect to practice appropriate safety procedures. We might even see more experimenters who fell through the cracks, and never learned these procedures in the first place. How bad a development this is depends on precisely what those labs are working on, and the quality of their self-supervision.

Second-degree variant: Dangerous implementation knowledge is given to someone who is likely to distribute it, which might later result in a convergence of intent and means in a single individual, either a careless or malicious actor, who produces a dangerous biological product. Some examples of possible distributors might be a person whose job rewards the dissemination of information, or a person who chronically underestimates risks.

This risk means it is important to keep in mind what incentives people have to share information, and whether that might incline them to share information hazards.

Examples

Information vulnerable to future advances

Information that is not currently dangerous becomes dangerous

Some ways this could manifest:

Why might this be important?

Technological progress can be difficult to predict. Sometimes there are major advances in technology that allow for new capabilities, such as rapidly sequencing and copying genomes. Could the information you share be dangerous in 5 years? 10? 100? How does this weigh against how useful the information is, or how likely it is to become public soon anyway?

Examples

Risk of Idea Inoculation

Presenting an idea causes people to dismiss risks

Some ways this could manifest:

Why might this be important?

Trying to change norms can backfire. If the first people presenting a measure to reduce the publication of risky research are too low-prestige to be taken seriously, no effect might actually be the best-case scenario. An idea that is associated with disreputable people or hard-to-swallow arguments may itself start being treated as disreputable, and face much higher skepticism and hostility than if better, proven arguments had been presented first.

This is almost the inverse of the Streisand effect, which appears to derive from similar psychological principles. In the case of the Streisand Effect, attempts to remove information are what catapult it into public consciousness. In the case of idea inoculation, attempts to publicize an idea ensure that the concept is ignored or dismissed out-of-hand, with no further consideration given to it.

It also connects in interesting ways with Bostrom’s Schema.[1]

Examples

Some Other Risk Categories

This list is not exhaustive, and we chose to lean concrete rather than abstract.

There were a few important-but-abstract risk categories that we didn’t think we could easily do justice to while keeping them succinct and concrete. We felt that several were already implied in a more concrete way by the categories we did keep, but that they encompass some edge-cases our schemas don’t capture. They at least warrant a mention and description.

One is the “Risk of Increased Attention,” what Bostrom calls “Attention Hazard.” This is naturally implied by the four “ideas/actors” categories, but in fact covers a broader set of cases. A zone we focused on less is the set of circumstances in which even useful ideas, combined with smart actors, can eventually lead to unintuitive but catastrophic consequences if given enough attention and funding. This is best exemplified in the fears about the rate of development and investment in AI. It’s also partially exemplified in “Information vulnerable to future advances.”

The other is “Information Several Inferential Distances Out Is Hazardous.” This is a superset of “Information vulnerable to future advances,” but it also encompasses cases where it’s merely a matter of extending an idea out a few further logical steps, not just technological ones.

For both, we felt they partially overlapped with the examples already given, and leaned a bit too abstract and hard to model for this post’s focus on concrete examples. However, we think there’s still a lot of value in these important, abstract, and complete (but harder-to-use) schemas.

Risks from Secrecy

We’ve talked above about many of the risks involved in information hazards. We take the risks of sharing information hazards seriously, and think others should as well. But in the effective altruism community, we have observed that people often neglect the flipside.

Conversations about risks from biology get shut down and turn into discussions of infohazards, even when the information being shared is already available. There is something to be said for not spreading information further, but shutting down the discussion of people looking for solutions also has downsides.

Leaving it to the experts is not enough when there may not be a group of experts thinking about the problem and coming up with solutions. We encourage people who want to work on biorisks to think about both the value and the risks of sharing potentially dangerous information. Below we go through the risks, and losses of value, from not sharing information.

A holistic model of information sharing will weigh both the risks and benefits of sharing. A decision should be made only after considering how the information might be used by bad or careless actors AND how valuable it is for good actors seeking to further research or coordinate to solve a problem.
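The weighing described here can be sketched as a toy expected-value calculation. This is purely illustrative, not a method the authors propose; every number and parameter name below is a made-up placeholder.

```python
# Toy sketch of weighing disclosure risks against benefits.
# All probabilities and magnitudes are illustrative placeholders.

def disclosure_value(p_misuse, harm_if_misused,
                     p_useful, benefit_if_useful,
                     p_public_anyway):
    """Expected net value of sharing a piece of information.

    p_public_anyway discounts both terms: if the information will
    soon become public regardless, sharing it now changes little
    counterfactually.
    """
    counterfactual_weight = 1 - p_public_anyway
    expected_harm = p_misuse * harm_if_misused * counterfactual_weight
    expected_benefit = p_useful * benefit_if_useful * counterfactual_weight
    return expected_benefit - expected_harm

# Example: small chance of misuse but large harm; good odds of
# helping defenders; a 50% chance the idea surfaces soon anyway.
net = disclosure_value(p_misuse=0.01, harm_if_misused=1000,
                       p_useful=0.6, benefit_if_useful=50,
                       p_public_anyway=0.5)
print(net)  # positive on these toy numbers, leaning toward sharing
```

In practice the hard part is estimating these inputs at all; the point of the sketch is only that both sides of the ledger, and the counterfactual, belong in the same calculation.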

Risk of Lost Progress

Closed research culture stifles innovation

Some ways this could manifest:

Why might this be important?

Good actors need information to develop useful countermeasures. A world where researchers cannot communicate their ideas to each other makes model generation more difficult and reduces the field’s ability to build up good defensive systems.

Examples

Dangerous work is not stopped

Information is not shared, so risky work is not stopped

Some ways this could manifest:

Why might this be important?

Some fields of research are dangerous, or may eventually become dangerous. It is much harder to prevent a class of research if the dangers posed by that research cannot be discussed publicly.

Informal social checks on the standards or behavior of others seem to serve an important, and often underestimated, function as a monitoring and reporting system against unethical or unsafe behaviors. It can be easy to underestimate how much the objections of a friend can shift the way you view the safety of your research, as they may bring up a concern you didn’t even think to ask about.

There are also entities with a mandate to do formal checks, and it is dangerous if they are left in the dark. Work environments, labs, or even entire fields can develop their own unusual work cultures. Sometimes, these cultures systematically undervalue a type of risk because of its disproportionate benefits to them, even if the general populace would have objections. Law enforcement, lawmakers, public discussion, reporting, and entities like ethical review boards are intended to intervene in these sorts of cases, but have no way to do so if they never hear about a problem.

Each of these entities has its strengths and weaknesses, but a world without whistleblowers, or one where no one can reach anyone capable of changing these environments, is likely to be a more dangerous world.

Examples

Risk of Information Siloing

Siloing information leaves individual workers blind to the overall goal being accomplished

Some ways this could manifest:

Why might this be important?

Lab work seems to be increasingly getting automated, or outsourced piecemeal. At the same time, the biotechnology industry has an incentive to be secretive with any pre-patent information they uncover. Without additional precautions being taken, secretive assembly-line-esque offerings increase the likelihood that someone could order a series of steps that look harmless in isolation, but create something dangerous when combined.

Catalyst Biosummit

By the way, the authors are part of the organizing team for the Catalyst Biosecurity Summit. It will bring together synthetic biologists and policymakers, academics and biohackers, and a broad range of professionals invested in biosecurity for a day of collaborative problem-solving in February 2020. We haven’t locked down a specific date yet, but you can sign up for updates here.

Examples

Barriers to Funding and New Talent

Talented people don’t go into seemingly empty or underfunded fields

Some ways this could manifest:

Why might this be important?

While many researchers and policy makers work in biosecurity, there is a shortage of talent applied to longer-term and more extreme biosecurity problems. There have been only limited efforts to attract top talent to this nascent field.

This may be changing. The Open Philanthropy Project has begun funding projects focused on Global Catastrophic Biorisk, and has provided funding for many individuals beginning their careers in the field of biosecurity.

Policies that require heavy oversight, or that add procedures increasing the cost of doing research, leave fewer opportunities for people who want to make a positive difference.

Examples

Streisand Effect

Suppressing information can cause it to spread

Some ways this could manifest:

Why might this be important?

The Streisand effect is named after an incident in which attempts to have photographs taken down led to a media spotlight and widespread discussion of those same photos. The photos had previously been posted in a context where only one or two people had taken enough of an interest to access them.

Something analogous could very easily happen with a paper outlining something hazardous in a research journal, or with an online discussion. The audience may have originally been quite targeted simply due to the nicheness or the obscurity of its original context. But an attempt at calling for intervention leads to a public discussion, which spreads the original information. This could be viewed as one of the possible negative outcomes of poorly-targeted whistleblowing.

As mentioned in the section on idea inoculation, this effect is functionally idea inoculation’s inverse and is based on similar principles.

Examples

Conclusion

Overall, we think biosecurity in the context of catastrophic risks has been underfunded and underdiscussed. There has been positive development in the time since we started on this paper; the Open Philanthropy Project is aware of funding problems in the realm of biosecurity and has been funding a variety of projects to make progress on biosecurity.

It can be difficult to know where to start helping in biosecurity. In the EA community, we have the desire to weigh the costs and benefits of philanthropic actions, but that is made more difficult in biosecurity by the need for secrecy.

We hope we’ve given you a place to start and factors to weigh when deciding to share or not share a particular piece of information in the realm of biosecurity. We think the EA community has sometimes erred too much on the side of shutting down discussions of biology by turning them into discussions about infohazards. It’s possible EA is being left out of conversations and decision making processes that could benefit from an EA perspective. We’d like to see collaborative discussion aimed towards possible actions or improvements in biosecurity with risks and benefits of the information considered, but not the central point of the conversation.

It’s a big world with many problems to focus on. If you prefer to focus your efforts elsewhere, feel free to do so. But if you do choose to engage with biosecurity, we hope you can weigh risks appropriately and choose the conversations that will lead to many talented collaborators and a world safer from biological risks.

Sources


  1. Connecting “Risk of Idea Inoculation” with Bostrom’s Schema: this could be seen as a subset of Attention Hazard and a distant cousin of Knowing-Too-Much Hazard. Attention Hazard encompasses any situation where drawing too much attention to a set of known facts increases risk, and the link is obvious. In Knowing-Too-Much Hazard, the presence of knowledge makes certain people a target of dislike. In Idea Inoculation, however, people’s dislike for your incomplete version of the idea rubs off onto the idea itself. ↩︎

8 comments

Comments sorted by top scores.

comment by Gregory_Lewis · 2019-09-22T12:28:57.494Z · score: 44 (16 votes) · EA · GW

Thanks for writing the post. I essentially agree with the steers on which areas are more or less ‘risky’. Another point worth highlighting is that, given these issues tend to be difficult to judge and humans are error-prone, it can be worth running things by someone else. Folks are always welcome to contact me if I can be helpful for this purpose.

But I disagree with the remarks in the post along the lines of ‘There’s lots of valuable discussion that is being missed out on in EA spaces on biosecurity, due to concerns over infohazards’. Often - perhaps usually - the main motivation for discretion isn’t ‘infohazards!’.

Whilst (as I understand it) the ‘EA’ perspective on AI safety covers distinct issues from mainstream discussion on AI ethics (e.g. autonomous weapons, algorithmic bias), the main distinction between ‘EA’ biosecurity and ‘mainstream’ biosecurity is one of scale. Thus similar topics are shared between both, and many possible interventions/policy improvements have dual benefit: things that help mitigate the risk of smaller outbreaks tend to help mitigate the risk of catastrophic ones.

These topics are generally very mature fields of study. To put it in perspective, with ~5 years in medicine and public health and 3 degrees, I am roughly par for credentials and substantially below-par for experience at most expert meetings I attend - I know people who have worked on (say) global health security longer than I have been alive. I’d guess some of this could be put down to unnecessary credentialism and hierarchalism, and it doesn’t mean there’s nothing to do as all the good ideas have already been thought, but it does make low hanging fruit likely to be plucked, and that useful contributions are hard to make without substantial background knowledge.

These are also areas which tend to have powerful stakeholders, entrenched interests, and in many cases (especially security-adjacent issues) great political sensitivity. Thus even areas which are pretty ‘safe’ from an information hazard perspective (e.g. better governance of dual-use research of concern) can nonetheless be delicate to talk about publicly. Missteps are easy to make (especially without the relevant tacit knowledge), and the consequences can be (as you note in the write-up) to inoculate the idea, but also to alienate powerful interests and potentially discredit the wider EA community.

The latter is something I’m particularly sensitive to. This is partly due to my impression that the ‘growing pains’ in other EA cause areas tended to incur unnecessary risk. It is also because the reactions of folks in the pre-existing community, when contemplating EA involvement, tend not to be unalloyed enthusiasm. They tend to be very impressed with my colleagues who are starting to work in the area, have an appetite for new ideas and ‘fresh eyes’, and are reassured that EAs in this area tend to be cautious and responsible. Yet despite this they tend to remain cautious about the potential to have a lot of inexperienced people bouncing around delicate areas, both in general but also for their exposure to this community in particular, as they are often going somewhat ‘out on a limb’ to support ‘EA biosecurity’ objectives in the first place.

Another feature of this landscape is that the general path to impact of a ‘good biosecurity idea’ is to socialize it in the relevant expert community and build up a coalition of support. (One could argue how efficient this is from the point of view of the universe, but it is the case regardless.) In consequence, my usual advice for people seeking to work in this area is that career capital is particularly valuable, not just for developing knowledge and skills, but also for gaining the network and credibility to engage with the relevant groups.

comment by Spiracular · 2019-09-26T06:49:24.000Z · score: 16 (5 votes) · EA · GW

Thanks for the thoughtful response!

I want to start with the recognition that everything I remember hearing from you in particular around this topic, here and elsewhere, has been extremely reasonable. I also very much liked your paper.

My experience has been that I have had multiple discussions around disease shut down prematurely in some in-person EA spaces, or else turned into extended discussions of infohazards, even if I'm careful. At some point, it started to feel more like a meme than anything. There are some cases where "infohazards" were brought up as a good, genuine, relevant concern, but I also think there are a lot of EAs and rationalists who seem to have a better grasp of the infohazard meme than they do of anything topical in this space. Some of the sentiment you're pointing to is largely a response to that, and it was one of the motivations for writing a post focused on clear heuristics and guidelines. I suspect this sort of thing happening repeatedly comes with its own kind of reputational risk, which could stand to see some level of critical examination.

I think there are good reasons for the apparent consensus you present that particularly effective EA Biorisk work requires extraordinarily credentialed people.* You did a good job of presenting that here. The extent to which political sensitivity and the delicate art of reputation-management plays into this, is something I was partially aware of, but had perhaps under-weighted. I appreciate you spelling it out.

The military seems to have every reason to adopt discretion as a default. There's also a certain tendency of the media and general public to freak out in actively damaging directions around topics like epidemiology, which might feed somewhat into a need for reputation-management-related discretion in those areas as well. The response to an epidemic seems to have a huge, and sometimes negative, impact on how a disease progresses, so a certain level of caution in these fields seems pretty warranted.

I want to quickly note that I tend to be relatively-unconvinced that mature and bureaucratic hierarchies are evidence of a field being covered competently. But I would update considerably in your direction if your experience agrees with something like the following:

Is it your impression that whenever you -or talented friends in this area- come up with a reasonably-implementable good idea, that after searching around, you tend to discover that someone else has already found it and tried it?

And if not, what typically seems to have gone wrong? Is there a step that usually falls apart?

(Here are some possible bottlenecks I could think of, and I'm curious if one of them sounds more right to you than the others: Is it hard to search for what's already been done, to the point that there are dozens of redundant projects? Is it a case of there being too much to do, and each project is a rather large undertaking? (a million good ideas, each of which would take 10 years to test) Does it seem to be too challenging for people to find some particular kind of collaborator? A resource inadequacy? Is the field riddled with untrustworthy contributions, just waiting for a replication crisis? (that would certainly do a lot to justify the unease and skepticism about newcomers that you described above) Does it mostly look like good ideas tend to die a bureaucratic death? Or does it seem as if structurally, it's almost impossible for people to remain motivated by the right things? Or is the field just... noisy, for lack of a better word for it. Hard to measure for real effect or success.)

*It does alienate me, personally. I try very hard to stand as a counterargument to "credentialism-required"; someone who tries to get mileage out of engaging with conversations and small biorisk-related interventions as a high-time-investment hobby on the side of an analysis career. Officially, all I'm backed up with on this is a biology-related BS degree, a lot of thought, enthusiasm, and a tiny dash of motivating spite. If there wasn't at least a piece of me fighting against some of the strong-interpretation implications of this conclusion, this post would never have been written. But I do recognize some level of validity to the reasoning.

comment by Gregory_Lewis · 2019-09-30T08:28:22.026Z · score: 8 (4 votes) · EA · GW

Hello Spiracular,

Is it your impression that whenever you -or talented friends in this area- come up with a reasonably-implementable good idea, that after searching around, you tend to discover that someone else has already found it and tried it?

I think this is somewhat true, although I don't think this (or the suggestions for bottlenecks in the paragraph below) quite hits the mark. The mix of considerations is something like this:

1) I generally think the existing community covers the area fairly competently (from an EA perspective). I think the main reason for this is because the 'wish list' of what you'd want to see for (say) a disease surveillance system from an EA perspective will have a lot of common elements with what those with more conventional priorities would want. Combined with the billions of dollars and lots of able professionals, even areas which are neglected in relative terms still tend to have well-explored margins.

1.1) So there are a fair few cases where I come across something in the literature that anticipates an idea I had, or of colleagues/collaborators reporting back, "It turns out people are already trying to do all the things I'd want them to do re. X".

1.2) Naturally, given I'm working on this, I don't think there's no more good ideas to have. But it also means I foresee quite a lot of the value is rebalancing/pushing on the envelope of the existing portfolio rather than 'EA biosecurity' striking out on its own.

2) A lot turns on 'reasonably-implementable'. There's a generally treacherous terrain that usually lies between idea and implementation, and propelling the former to the latter through this generally needs a fair amount of capital (of various types). I think this is the typical story for why many fairly obvious improvements haven't happened.

2.1) For policy contributions, perhaps the main challenge is buy-in. Usually one can't 'implement yourself', and rely instead on influencing the relevant stakeholders (e.g. science, industry, government(s)) to have an impact. Bandwidth is generally limited in the best case, and typical cases tend to be fraught with well-worn conflicts arising from differing priorities etc. Hence the delicateness mentioned above.

2.2) For technical contributions, there are 'up-front' challenges common to doing any sort of bio-science research (e.g. wet-labs are very expensive). However, pushing one of these up the technology readiness levels to implementation also runs into similar policy challenges (as, again, you can seldom 'implement yourself').

3) This doesn't mean there are no opportunities to contribute. Even if there's a big bottleneck further down the policy funnel, new ideas upstream still have value (although knowing what the bottleneck looks like can help one target these to have easier passage - and not backfire), and in many cases there will be more incremental work which can lay the foundation for further development. There could also be a synergistic relationship in which folks who are more heavily enmeshed in the existing community help translate initiatives/ideas from those less so.


comment by mike_mclaren · 2019-10-05T17:04:12.889Z · score: 5 (3 votes) · EA · GW

Just wanted to say thanks to both Gregory and Spiracular for their detailed and thoughtful back and forth in this thread. As someone coming from a place somewhere in the middle but having spent less time thinking through these considerations, I found getting to hear your personal perspectives very helpful.

comment by Spiracular · 2019-10-03T00:42:55.585Z · score: 1 (1 votes) · EA · GW

Thanks! For me, this does a bit to clear up why buy-in is perceived as such a key bottleneck.

(And secondarily, supporting the idea that other areas of fairly-high ROI are likely to be centered around facilitating collaboration and consolidation of resources among people with a lot of pre-existing experience/expertise/buy-in.)

comment by Spiracular · 2019-09-17T03:00:13.091Z · score: 19 (13 votes) · EA · GW

Now that we've gone over some of the considerations, here are some of the concrete topics I see as generally high or low hazard for open discussion.

Good for Open Discussion

  • Broad-application antiviral developments and methods
    • Vaccines
    • Antivirals proper
    • T-cell therapy
    • Virus detection and monitoring
  • How to report lab hazards
    • ...and how to normalize and encourage this
  • Broadly-applicable protective measures
    • Sanitation
    • Bunkers?
  • The state of funding
  • The state of talent
    • What broad skills to develop
    • How to appeal to talent
    • Who talent should talk to

Bad for Open Discussion

These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.

  • Disease delivery methods
  • Specific Threats
  • Specific Exploitable Flaws in Defense Systems
    • Ex: immune systems, hospital monitoring systems
    • It is especially bad to mention flaws that are reliably exploitable
    • If you are simultaneously providing a comprehensive solution to the problem, this can become more of a gray-area. Partial-solutions, or challenging-to-implement solutions, are likely to fall on the bad side of this equation.
  • Much of the synthetic biology surrounding this topic
  • Arguments for and against various agents using disease as an M.O.