Posts

ofer's Shortform 2020-02-19T06:53:16.647Z · score: 3 (1 votes)

Comments

Comment by ofer on Why not give 90%? · 2020-03-25T12:17:13.136Z · score: 10 (8 votes) · EA · GW

Thanks for writing this!

I worry that people who are new to EA might read this post and get the impression that people in EA are expected to have some form of utilitarianism as their only intrinsic goal. So I'd like to flag that EA is a community of humans :). Humans are the result of human evolution—a messy process that roughly optimizes for inclusive fitness. It's unlikely that any human can be perfectly modeled as a utilitarian (one with limited willpower etc., but without any selfish intrinsic goals).

Of course, this does not imply we shouldn't have important discussions about burnout in EA. (In the case of the OP I would just pose the question a bit differently, maybe: "Should a utilitarian give 90%?").

Comment by ofer on How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? · 2020-02-27T05:19:23.742Z · score: 5 (4 votes) · EA · GW

Strongly discourage handshakes. Encourage the elbow bump or bows instead.

Is the elbow bump recommended even if people are sneezing/coughing into their elbows?

[EDIT: maybe people should only cough into their left elbow?]

Comment by ofer on ofer's Shortform · 2020-02-19T06:53:18.423Z · score: 4 (3 votes) · EA · GW

The 2020 annual letter of Bill and Melinda Gates is titled "Why we swing for the fences", and it seems to spotlight an approach that resembles Open Phil's hits-based giving approach.

From the 2020 annual letter:

At its best, philanthropy takes risks that governments can’t and corporations won’t. Governments need to focus most of their resources on scaling proven solutions.

[...]

As always, Warren Buffett—a dear friend and longtime source of great advice—put it a little more colorfully. When he donated the bulk of his fortune to our foundation and joined us as a partner in its work, he urged us to “swing for the fences.”

That’s a phrase many Americans will recognize from baseball. When you swing for the fences, you’re putting every ounce of strength into hitting the ball as far as possible. You know that your bat might miss the ball entirely—but that if you succeed in making contact, the rewards can be huge.

That’s how we think about our philanthropy, too. The goal isn’t just incremental progress. It’s to put the full force of our efforts and resources behind the big bets that, if successful, will save and improve lives.

[...]

When Warren urged Melinda and me to swing for the fences all those years ago, he was talking about the areas our foundation worked on at the time, not climate change. But his advice applies here, too. The world can’t solve a problem like climate change without making big bets.

Comment by ofer on Conversation on AI risk with Adam Gleave · 2019-12-27T23:49:13.230Z · score: 11 (4 votes) · EA · GW
Gleave thinks discontinuous progress in AI is extremely unlikely:

I'm confused about this point. Did Adam Gleave explicitly say that he thinks discontinuous progress is "extremely unlikely" (or something to this effect)?

From the transcript I get a sense of a less confident estimate being made:

Adam Gleave: [...] I don’t see much reason for AI progress to be discontinuous in particular.

Adam Gleave: [...] I don’t expect there to be a discontinuity, in the sense of, we just see this sudden jump.
Comment by ofer on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-21T09:01:24.118Z · score: 25 (11 votes) · EA · GW
Financial Reserves

You listed important considerations; here are some additional points to consider:

1. As suggested in SethBaum's comment, a short runway may deter people from joining the org (especially people with larger personal financial responsibilities and opportunity cost).

  2. It seems likely that—all other things being equal—orgs with a longer runway are "less vulnerable to Goodhart's law" and generally less prone to optimizing for short-term impressiveness in costly ways. Selection effects alone seem sufficient to justify this belief: orgs with a short runway that don't optimize for short-term impressiveness seem less likely to keep existing.

Comment by ofer on But exactly how complex and fragile? · 2019-12-13T13:31:18.200Z · score: 1 (1 votes) · EA · GW
The traditional argument for AI alignment being hard is that human value is ‘complex’ and ‘fragile’.

Presumably, many actors will be investing a lot of resources into building the most capable and competitive ML models in many domains (e.g. models for predicting stock prices). It seems to me that the purpose of the field of AI alignment is to make it easier for actors to build such models in a way that is both safe and competitive. AI alignment seems hard to me because using arbitrarily-scaled-up versions of contemporary ML methods—in a safe and competitive way—seems hard.

Comment by ofer on What metrics may be useful to measure the health of the EA community? · 2019-11-14T16:51:48.216Z · score: 1 (1 votes) · EA · GW

Some more ideas for metrics that might be useful for tracking 'the health of the EA community' (not sure whether they fit in the first category):

How much runway do EA orgs have?

How diverse is the 'EA funding portfolio'? [EDIT: I'm referring here to the diversity of donors rather than the diversity of funding recipients.]

Comment by ofer on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-08T11:24:10.889Z · score: 4 (3 votes) · EA · GW

Thanks for this helpful explanation!

To clarify my view, I do think there is a large variance in risk among 'long-term future interventions' (such as donating to FHI, or donating to fund an independent researcher with a short track record).

Comment by ofer on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T22:41:28.918Z · score: 2 (2 votes) · EA · GW

Thanks for publishing this!

Respondents mentioned two broad concerns about EA Funds:

[...]

  1. Funds was targeted to meet the needs of a small set of donors, but was advertised to the entire EA community.

[...]

Many donors may not want their donations going towards “unusual, risky, or time-sensitive projects”, and respondents were concerned that the Funds were advertised to too broad a set of donors, including those for whom the Funds may not have been a good fit.

[...]

we do not currently proactively advertise EA Funds.

I'd be happy to learn more about these considerations/concerns. It seems to me that many of the interventions that are a good idea from a 'long-term future perspective' are unusual, risky, or time-sensitive. Is this an unusual view in the EA sphere?

Comment by ofer on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T19:58:39.927Z · score: 4 (3 votes) · EA · GW

Is this the case in the AI safety community?

I have no idea how influential the above factor is within the AI safety community (i.e. the set of all current and aspiring AI safety researchers?).

If the reasoning for their views isn't obviously bad, I would guess that it's "cool" to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on this.

(As an aside, I'm not sure what the definition/boundary of the "rationality community" is, but obviously not all AI safety researchers are part of it.)

Comment by ofer on [deleted post] 2019-11-03T19:56:45.943Z

.

Comment by ofer on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T10:15:09.319Z · score: 30 (10 votes) · EA · GW

Thanks for asking.

One factor that seems important is that even a small probability of "very short timelines and a sharp discontinuity" is probably a terrifying prospect for most people. Presumably, people tend to avoid saying terrifying things. Saying terrifying things can be costly, both socially and reputationally (and there's also the possible side effect of, well, making people terrified).

I hope to write a more thorough answer to this soon (I'll update this comment accordingly by 2019-11-20).

[EDIT (2019-11-18): adding the content below]

(I should note that I haven't yet discussed some of the following with anyone else. Also, so far I have had very little one-on-one interaction with established AI safety researchers, so consider the following to be mere intuitions and wild speculations.)

Suppose that some AI safety researcher thinks that 'short timelines and a sharp discontinuity' is likely. Here are some potential reasons that might cause them to not discuss their estimate publicly:

  1. Extending the point above ("people tend to avoid saying terrifying things"):

    • Presumably, most people don't want to come across as extremists.
    • People might be concerned that the most extreme/weird part of their estimate would end up getting quoted a lot in an adversarial manner, perhaps in a somewhat misleading way, for the purpose of dismissing their thoughts and making them look like a crackpot.
    • Making someone update towards such an estimate might put them under a lot of stress, which might have a negative impact on their productivity.
  2. Voicing such estimates publicly might make the field of AI safety more fringe.

    • When the topic of 'x-risks from AI' is presented to a random person, presenting a more severe account of the risks might make it more likely that the person would rationalize away the risks due to motivated reasoning.
    • Being more optimistic probably correlates with others being more willing to collaborate with you. People are probably generally attracted to optimism, and working with someone who is more optimistic is probably a more attractive experience.
    • Therefore, the potential implications of voicing such estimates publicly include:
      • making talented people less likely to join the field of AI safety;
      • making established AI researchers (and other key figures) more hesitant to be associated with the field; and
      • making donors less likely to donate to this cause area.
  3. Some researchers might be concerned that discussing such estimates publicly would make them appear to be fear-mongering crooks who are just trying to get funding or better job security.

    • Generally, I suspect that most researchers who work on x-risk reduction would strongly avoid saying anything that could be pattern-matched to "I have this terrifying estimate about the prospect of the world getting destroyed soon in some weird way; and also, if you give me money I'll do some research that will make the catastrophe less likely to happen."
    • Some supporting evidence that those who work on x-risk reduction indeed face the risk of appearing to be fear-mongering crooks:
      • Oren Etzioni, a professor of computer science at the University of Washington and the CEO of the Allen Institute for Artificial Intelligence (not to be confused with the Alan Turing Institute), wrote an article for the MIT Technology Review in 2016 (which was summarized by an AI Impacts post in November 2019). In that article, which is titled "No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity", Etzioni cited the following comment, which is attributed to an anonymous AAAI Fellow:

        Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.

        Note: at the end of that article there's an update from November 2016 that includes the following:

        I’m delighted that Professors Dafoe & Russell, who responded to my article here, and I seem to be in agreement on three critical matters. One, we should refrain from ad hominem attacks. Here, I have to offer an apology: I should not have quoted the anonymous AAAI Fellow who likened Dr. Bostrom to Donald Trump. I didn’t mean to lend my voice to that comparison; I sincerely apologized to Bostrom for this misstep via e-mail, an apology that he graciously accepted. [...]

      • See also this post by Jessica Taylor from July 2019, titled "The AI Timelines Scam" (a link post for it was posted on the EA Forum), which seems to argue for the (very reasonable) hypothesis that financial incentives have caused some people to voice short timelines estimates (it's unclear to me what fraction of that post is about AI safety orgs/people, as opposed to AI orgs/people in general).

  4. Some researchers might be concerned that in order to explain why they have short timelines, they would need to publicly point at some approaches that they think might lead to short timelines, and that drawing more attention to those approaches could itself shorten timelines, in a net-negative way.

  5. If voicing such estimates would make some key people in industry/governments update towards shorter timelines, it might contribute to 'race dynamics'.

  6. If a researcher with such an estimate does not see any of their peers publicly sharing such estimates, they might reason that sharing their estimate publicly is subject to the unilateralist’s curse. If the researcher has limited time or a limited network, they might opt to "play it safe", i.e. decide to not share their estimate publicly (instead of properly resolving the unilateralist’s curse by privately discussing the topic with others).

Comment by ofer on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T05:46:10.280Z · score: 30 (9 votes) · EA · GW

There seems to be a large variance in researchers' estimates about timelines and takeoff-speed. Pointing to specific writeups that lean one way or another can't give much insight about the distribution of estimates. Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff, which subjects the public discourse on the topic to a selection bias.

So I'm skeptical about the claim that "Most researchers seem to be moving away from a fast takeoff view of AI safety, and are now opting for a softer takeoff view".

Top AI safety researchers are now saying that they expect AI to be safe by default, without further intervention from EA. See here and here.

Again, there seems to be a large variance in researchers' views about this. Pointing to specific writeups can't give much insight about the distribution of views.

Comment by ofer on Reflections on EA Global London 2019 (Mrinank Sharma) · 2019-10-30T20:36:53.472Z · score: 1 (1 votes) · EA · GW
What’s Stopping Advanced Applications of AI?
In many cases, there are cultural issues (within an industry) about the application of algorithms to make crucial decisions. Whilst interpretability of systems would increase the buy in, there are also key issues with the quality of data, and the infrastructure to collect high quality data.
It is worth noting that the barriers here seem to not be technical, so it is unclear how much of an impact technical research would have here.

Perhaps this model was proposed for certain domains? Maybe ones in which laws restrict applications, like driverless cars?

It doesn't seem plausible to me for all domains (for example, it doesn't seem plausible for language models or quantitative trading).

Comment by ofer on Only a few people decide about funding for community builders world-wide · 2019-10-25T17:05:57.965Z · score: 2 (2 votes) · EA · GW

Thanks for this helpful explanation!

Comment by ofer on Only a few people decide about funding for community builders world-wide · 2019-10-24T18:47:22.778Z · score: 1 (1 votes) · EA · GW

The latter (not MIRI in particular).

Comment by ofer on Only a few people decide about funding for community builders world-wide · 2019-10-24T10:34:06.915Z · score: 1 (3 votes) · EA · GW

(unrelated to the OP)

You might well think that eg MIRI's agenda should be more widely worked on, or that it would be better if MIRI had more sources of funding. But it doesn't seem worrying that that isn't the case.

This consideration seems important, but I couldn't understand it (I'm talking about the general consideration, not its specific application to MIRI's agenda). I'd be happy to read more about it.

Comment by ofer on Conditional interests, asymmetries and EA priorities · 2019-10-22T06:41:39.244Z · score: 1 (3 votes) · EA · GW

My very tentative view is that we're sufficiently clueless about the probability distribution of possible outcomes from "Risks posed by artificial intelligence" and other x-risks, that the ratio between [the value one places on creating a happy person] and [the value one places on helping a person who is created without intervention] should have little influence on the prioritization of avoiding existential catastrophes.

Comment by ofer on The Future of Earning to Give · 2019-10-14T05:59:00.663Z · score: 10 (6 votes) · EA · GW

Interesting post!

Today, there's almost enough money going into far future causes, so that vetting and talent constraints have become at least as important as funding.

This seems to rely on the assumption that existing prestigious orgs are asking for all the funding they can effectively use. My best guess is that these orgs tend to not ask for a lot more funding than what they predict they can get. One potential reason for this is that orgs/grant-seekers regard such requests as a reputational risk.

Here's some supporting evidence for this, from this Open Phil blog post by Michael Levine (August 2019):

After conversations with many funders and many nonprofits, some of whom are our grantees and some of whom are not, our best model is that many grantees are constantly trying to guess what they can get funded, won’t ask for as much money as they should ask for, and, in some cases, will not even consider what they would do with some large amount because they haven’t seriously considered the possibility that they might be able to raise it.
Comment by ofer on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-10T08:33:33.351Z · score: 16 (5 votes) · EA · GW

Thank you!

This suggests that an additional counterfactually valid donation of $10,000 to the fund, donated prior to this grant round, would have had (if not saved for future rounds) about 60% of the cost-effectiveness of the $439,197 that was distributed.

It might be useful to understand how much more money the fund could have distributed before reaching a very low marginal cost-effectiveness. For example, if the fund had to distribute in this grant round a counterfactually valid donation of $5MM, how would the cost-effectiveness of that donation compare to that of the $439,197 that was distributed?
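To spell out how I read the quoted 60% figure (my own illustrative arithmetic, not the fund's):

$$
\text{EV of a marginal } \$10{,}000 \;\approx\; 0.6 \times \frac{\text{EV of the } \$439{,}197 \text{ distributed}}{439{,}197} \times 10{,}000 .
$$

My question is essentially about how quickly that 0.6 factor would fall as the hypothetical additional amount grows from $10,000 towards $5MM.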

Comment by ofer on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-09T13:45:05.668Z · score: 25 (9 votes) · EA · GW

It might be useful to get some opinions/intuitions from fund managers on the following question:

How promising is the most promising application that you ended up not recommending a grant for? How would a counterfactually valid grant for that application compare to the $439,197 that was distributed in this round, in terms of EV per dollar?

Comment by ofer on Are we living at the most influential time in history? · 2019-09-19T15:33:33.055Z · score: 2 (2 votes) · EA · GW
So your argument doesn't seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on an additional observation "we're extremely early". As I argued here, that's somewhat sketchy to do.

As you wrote, the future being short "doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class".

Another thought that comes to mind is that there may exist many evolved civilizations whose behavior is correlated with our behavior. If so, our deciding to work hard on reducing x-risks makes it more likely that those other civilizations would also decide—during their early centuries—to work hard on reducing x-risks.

Comment by ofer on Ask Me Anything! · 2019-09-18T16:02:19.665Z · score: 2 (2 votes) · EA · GW
(ii) trying to map the Yudkowsky/Bostrom arguments, which were made before the deep learning paradigm, onto actual progress in machine learning, and finding them hard to fit well. Going into this properly would require a lot more discussion though!)

I'd be happy to read more about this point.

If we end up with powerful deep learning models that optimize a given objective extremely well, the main arguments in Superintelligence seem to go through.

(If we end up with powerful deep learning models that do NOT optimize a given objective, it seems to me plausible that x-risks from AI are more severe, rather than less.)

[EDIT: replaced "a specified objective function" with "a given objective"]

Comment by ofer on Are we living at the most influential time in history? · 2019-09-04T15:36:19.496Z · score: 10 (8 votes) · EA · GW

Interesting post!

But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

Why should we use a uniform distribution as a prior? If I had to bet on which century would be the most influential for a random alien civilization, my prior distribution for "most influential century" would be a monotonically decreasing function.
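As a purely illustrative toy example of such a prior (my own construction, not something from the post): if one assumes each century has some constant probability $q$ of containing the decisive events that make it the most influential one, conditional on no earlier century having contained them, then

$$
P(\text{century } n \text{ is the most influential}) = (1-q)^{\,n-1} q ,
$$

which is monotonically decreasing in $n$, unlike the uniform prior.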

Comment by ofer on The Case for the EA Hotel · 2019-04-10T10:31:57.799Z · score: 1 (1 votes) · EA · GW

Yes, thanks.

Comment by ofer on The Case for the EA Hotel · 2019-04-01T05:09:56.761Z · score: 14 (12 votes) · EA · GW

There's an additional argument in favor of the EA Hotel idea which I find very compelling (I've read it on this forum in a comment that I can't find; EDIT: it was this comment by the user Agrippa - the following is not at all a precise description of the original comment and contains extra things that Agrippa might not agree with):

A lot of people are optimizing to get money as an instrumental goal, and funders don't always have a great way to evaluate how "EA-aligned" (for any reasonable definition of that term) a person who is asking for money is.

The willingness to travel and live for a while in a building with people who are excited about EA probably correlates with "being EA-aligned".

So supporting people by funding their residency in a place like the EA Hotel seems to allow for an implicit weak vetting mechanism that doesn't exist when funding people directly.

Comment by ofer on Severe Depression and Effective Altruism · 2019-03-30T14:51:23.138Z · score: 3 (2 votes) · EA · GW

Just an additional point to consider:

If you (and therefore other people similar to you) decide to act in a way that causes a lot of harm/suffering to yourself or your family, and you wouldn't have acted in that way had you never heard about EA, then that would create a causal link between "Alice learns about EA" and "Alice or her family suffer". From a utilitarian perspective, such a causal link seems extremely harmful (e.g. making it less likely that a random talented/rich person would end up being involved in EA related efforts).

So this is an argument in favor of NOT making such decisions.

Comment by ofer on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-29T10:54:13.322Z · score: 1 (1 votes) · EA · GW
To verify I'm a real person that will in fact award $100, find me on FB here.

The link appears to be broken.

(my interest here is in finding/popularizing ways for users of this forum to easily prove their identity to other users in case they wish to).

Comment by ofer on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T15:26:58.098Z · score: 1 (1 votes) · EA · GW
For sure, forecasters who devoted more effort to it tended to make more accurate predictions. It would be surprising if that wasn't true!

I agree. But I am not referring to extra effort that makes a person provide a better forecast (e.g. by spending more time looking for arguments), but rather to extra effort that allows one to improve their average daily Brier score simply by using new public information that was not available when the question was first presented (e.g. new poll results).
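For concreteness, here is the scoring rule as I understand it (treat the exact conventions as my assumption about the GJP setup): for a question with mutually exclusive outcomes $i = 1, \dots, K$ that is open for $D$ days, let $f_{i,d}$ be the forecaster's most recent probability for outcome $i$ as of day $d$ (carried forward until revised), and let $o_i \in \{0, 1\}$ indicate which outcome occurred. Then

$$
\text{Brier}_d = \sum_{i=1}^{K} (f_{i,d} - o_i)^2, \qquad \text{mean daily Brier} = \frac{1}{D} \sum_{d=1}^{D} \text{Brier}_d .
$$

Because every remaining day is scored with the latest forecast, a forecaster who revises whenever new public information arrives lowers the later terms of the average, without necessarily having been a better judge when the question was first posed.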

Comment by ofer on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T10:38:01.285Z · score: 2 (2 votes) · EA · GW

Thank you for writing this.

Is the one-hour training module publicly available?

One might worry that training improves accuracy by motivating the trainees to take their jobs more seriously. Indeed it seems that the trained forecasters made more predictions per question than the control group, though they didn’t make more predictions overall. Nevertheless it seems that the training also had a direct effect on accuracy as well as this indirect effect.

I could not find results like the ones in Table 4 in which the Brier scores are based only on the first answer that forecasters provide. Allowing forecasters to update their forecasts as frequently as they want (while reporting average daily Brier scores) plausibly gives an advantage to the forecasters who are willing to invest more time in their task.

The paper that Table 4 is taken from stated that "Training was a significant predictor of average number of forecasts per question for year 1 and the number of forecasts per question was also significant predictor of accuracy (measured as mean standardized Brier score)". Consider Table 10 in the paper, which shows "Forecasts per question per user by year". Notice that in year 3 the forecasters who got training made 4.27 forecasts per question, while forecasters who did not get training made only 1.90 forecasts per question. The paper includes additional statistical analyses related to this issue (unfortunately I don't have the combination of time and background in statistics to understand them all).

Comment by ofer on Three Biases That Made Me Believe in AI Risk · 2019-02-14T06:15:08.234Z · score: 38 (30 votes) · EA · GW
If people here would appreciate it, I would be happy to write one or more posts on object-level arguments as to why I am now sceptical of AI risk. Let me know in the comments.

I would like to read about these arguments.

Comment by ofer on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T11:21:15.595Z · score: 4 (4 votes) · EA · GW

When planning how to donate, it seems very important to consider the impact of market returns increasing due to progress in AI. But I think more considerations should be taken into account before drawing the conclusion in the OP.

For each specific cause, we should estimate the curve, over time, of the EV per additional dollar that is invested in 2019 and then spent at a given future time (given an estimate of market returns over time). As Richard pointed out, for reducing AI x-risk it is not obvious that we will have time to effectively use the money we invest today if we wait too long (so "the curve" for AI safety might be sharply decreasing).
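A minimal way to write down the comparison I have in mind (my own notation, and obviously a big simplification): per dollar, donating now is worth $u(0)$, while investing until year $t$ and then donating is worth

$$
\Big( \prod_{s=1}^{t} (1+r_s) \Big)\, u(t) ,
$$

where $r_s$ is the (possibly AI-boosted) market return in year $s$ and $u(t)$ is the EV per additional dollar spent on the cause in year $t$. For AI safety, $u(t)$ might fall sharply with $t$, which can outweigh even very large values of $r_s$.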

Here is another consideration I find relevant for AI x-risk: in slow takeoff worlds more people are likely to become worried about x-risk from AI (e.g. after they see that the economy has doubled in the past 4 years and that lots of weird things are happening). In such worlds, it might be the case that a very small fraction of the money that will be allocated for reducing AI x-risk would be donated by people who are currently worried about AI x-risk. This consideration might make us increase the weight of fast takeoff worlds.

On the other hand, maybe in slow takeoff worlds there is generally a lot more that could be done for reducing x-risk from AI (especially if slow takeoff correlates with longer timelines), which suggests we increase the weight of slow takeoff worlds.

If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks' 2018 review (a) is a good starting point here).

I just wanted to note that some of the research directions for reducing AI x-risk, including ones that seem relevant in fast takeoff worlds, are outside of the technical AI alignment field (for example, governance/policy/strategy research).

Comment by ofer on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T15:29:17.183Z · score: 6 (6 votes) · EA · GW

In this FLI podcast episode, Andrew Critch suggested handling a potentially dangerous idea like a software update rollout procedure, in which the update is distributed gradually rather than to all customers at once:

... I would tell you the same thing I would tell anyone who discovers a potentially dangerous idea, which is not to write a blog post about it right away.

I would say, find three close, trusted individuals that you think reason well about human extinction risk, and ask them to think about the consequences and who to tell next. Make sure you’re fair-minded about it. Make sure that you don’t underestimate the intelligence of other people and assume that they’ll never make this prediction

...

Then do a rollout procedure. In software engineering, you developed a new feature for your software, but it could crash the whole network. It could wreck a bunch of user experiences, so you just give it to a few users and see what they think, and you slowly roll it out. I think a slow rollout procedure is the same thing you should do with any dangerous idea, any potentially dangerous idea. You might not even know the idea is dangerous. You may have developed something that only seems plausibly likely to be a civilizational scale threat, but if you zoom out and look at the world, and you imagine all the humans coming up with ideas that could be civilizational scale threats.

...

If you just think you’ve got a small chance of causing human extinction, go ahead, be a little bit worried. Tell your friends to be a little bit worried with you for like a day or three. Then expand your circle a little bit. See if they can see problems with the idea, see dangers with the idea, and slowly expand, roll out the idea into an expanding circle of responsible people until such time as it becomes clear that the idea is not dangerous, or you manage to figure out in what way it’s dangerous and what to do about it, because it’s quite hard to figure out something as complicated as how to manage a human extinction risk all by yourself or even by a team of three or maybe even ten people. You have to expand your circle of trust, but, at the same time, you can do it methodically like a software rollout, until you come up with a good plan for managing it. As for what the plan will be, I don’t know. That’s why I need you guys to do your slow rollout and figure it out.