Announcing the 2023 CLR Summer Research Fellowship 2023-03-17T12:11:15.771Z
Center on Long-Term Risk: 2023 Fundraiser 2022-12-09T18:03:56.067Z
[Open position] S-Risk Community Manager at CLR 2022-09-22T13:17:08.628Z
CLR's Annual Report 2021 2022-02-26T12:47:23.123Z
S-risk Intro Fellowship 2021-12-20T17:26:49.615Z
[Link post] Coordination challenges for preventing AI conflict 2021-03-09T09:39:53.764Z
Center on Long-Term Risk: 2021 Plans & 2020 Review 2020-12-08T13:39:30.476Z
First S-Risk Intro Seminar 2020-12-08T09:23:56.356Z
The case for building more and better epistemic institutions in the effective altruism community 2020-03-29T17:01:35.941Z
[Link] EAF Research agenda: "Cooperation, Conflict, and Transformative Artificial Intelligence" 2020-01-17T13:28:08.380Z
Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators 2019-11-01T14:41:09.961Z
How Europe might matter for AI governance 2019-07-12T23:42:25.351Z
First application round of the EAF Fund 2019-07-06T02:14:29.330Z
Review of Fundraising Activities of EAF in 2018 2019-06-04T17:34:52.644Z
Ingredients for creating disruptive research teams 2019-05-16T16:23:41.047Z
Launching the EAF Fund 2018-11-28T17:13:42.285Z
Takeaways from EAF's Hiring Round 2018-11-19T20:50:23.729Z


Comment by stefan.torges (storges) on Center on Long-Term Risk: 2023 Fundraiser · 2022-12-13T13:37:26.589Z · EA · GW

I'm not sure I understand your question correctly, so please respond if I didn't get it.

You ask: Could your donation be for nothing if we don't meet our fundraising goals? I don't think so. If we don't meet even our minimal goal, we will possibly have to downsize, or do so sooner than we otherwise would. Your donation would still help in those cases. The only scenario I see where your donation "would have been for nothing" is short-term insolvency, which is very unlikely.

Even if there were some scenarios in which your donation "will have been for nothing" in hindsight, I am not sure this is the right way to think about it. Your donation would still have made a difference ex ante, in expectation.

To answer your broader question about "hingey"-ness: I think now is a particularly good and important time to donate to CLR, compared to the past and also likely compared to the future. That would make this time particularly "hingey".

Comment by stefan.torges (storges) on Who's hiring? (May-September 2022) [closed] · 2022-09-22T13:04:39.657Z · EA · GW

The Center on Long-Term Risk is looking for a Community Manager to work with Chi Nguyen and me on growing and supporting the community around our mission of reducing risks of astronomical suffering. The application deadline is October 16th. Details and the application form are on our website.

The work in this role will span areas like event & project management, 1:1 outreach & advising calls, setting up & improving IT infrastructure, writing, giving talks, and attending in-person networking events, depending on the skill set of the successful candidate. Since we are a small team, each person can meaningfully shape our strategy, propose new ideas, and take ownership of projects early. They will also have the chance to engage with our research team.

Previous community-building experience is a good demonstration of the relevant skills, but no specific experience or qualifications are required.

Comment by stefan.torges (storges) on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-23T14:50:44.672Z · EA · GW

This question has been considered to some extent by people in the community already. Consider the following posts:

It would also be worth pointing out that most people in this community who hold views that can be categorized as negative utilitarian or suffering-focused don't endorse bringing about human extinction, e.g.:

I am not claiming that these posts/articles have settled the debate, but I think any post on a sensitive topic like this would benefit from including such content.

Comment by stefan.torges (storges) on List of EA funding opportunities · 2021-11-04T11:32:48.417Z · EA · GW

Yes, the CLR Fund is still accepting applications. I will see that we clarify this in the appropriate places.

Comment by stefan.torges (storges) on First S-Risk Intro Seminar · 2020-12-11T13:06:06.517Z · EA · GW


Comment by stefan.torges (storges) on Net value of saving a child's life from a negative utilitarian perspective? · 2020-10-30T02:13:04.771Z · EA · GW

Answering this or similar questions will be challenging for any worldview that takes into account second-order and long-run consequences of actions, not just negative utilitarianism.

Saving a child has many such effects that will be very difficult to account for: not just effects on loved ones but also effects on the ecosystem, climate change, demand for meat, the economy more generally, etc. So assessing the grief experienced by loved ones is probably only a small piece of the answer to your overall question. At the same time, it might be particularly salient or important because the bond is personal and irreplaceable. If this life is not saved, we can do little to offset that harm.

For what it’s worth, a negative utilitarian theory might also include the frustration of preferences in the evaluation of an action. To the extent that the child wants to continue living, this would provide a reason to save them, even by negative utilitarian lights. Whether it is a decisive reason is another matter, of course.

If you do find negative utilitarianism or other suffering-focused views compelling, I think it makes more sense to ask the question: according to this view, what could be the very best thing I could be doing with my time and money? Most people who have asked this question have come up with interventions that seem much more impactful than saving lives directly -- regardless of whether the latter would overall be a good thing. Here is one person's attempt to answer this very difficult question:

Comment by stefan.torges (storges) on Forecasting Newsletter: April 2020 · 2020-05-01T09:00:17.237Z · EA · GW
and 10% for Nicolás Maduro.

The time horizon for this is "before 1 June 2020." That seems reasonable.

Comment by stefan.torges (storges) on Focusing on Career & Cause Movement Building · 2020-04-20T09:15:35.727Z · EA · GW

Thanks for writing this! This seems to be very important if we want the community to tap increasingly into professional networks.

Comment by stefan.torges (storges) on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T20:49:12.246Z · EA · GW

I agree with all of what you say here. Building things for others can often go badly wrong. Thanks for sharing this perspective!

Comment by stefan.torges (storges) on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:11:49.126Z · EA · GW

I was referring to the option "Building the EA and related communities." If building such institutions is a form of community-building, then this gives some indication of its importance compared to other areas. Now, it might be that respondents didn't have this in mind when answering, and that if they had, they would have given it a much lower score.

Comment by stefan.torges (storges) on EA Handbook 3.0: What content should I include? · 2019-12-04T15:32:57.189Z · EA · GW

This introduction might in some ways be more accessible: S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017)

Comment by stefan.torges (storges) on How Europe might matter for AI governance · 2019-07-26T08:30:49.127Z · EA · GW

Do you think these points make Europe/the EU more important than the US or China? Otherwise, they don't give a reason for focusing on Europe/the EU over these countries, to the extent that this focus is mutually exclusive, which it is to some extent (e.g., you either set up your think tank in Washington DC or in Brussels; you either analyze the EU policy-making process or the US one).

Reasons to focus on the EU/Europe over these countries are in my opinion:

  • personal fit/comparative advantage
  • diminishing returns for additional people to focus on the US/China (should have noted this in the OP)
  • threshold effects
Comment by stefan.torges (storges) on How Europe might matter for AI governance · 2019-07-15T13:29:21.151Z · EA · GW

Maybe I misunderstood. What's the point of highlighting only this statistic? It does not seem very representative of the report you're linking to, or of the overall claim this statistic might support if looked at in isolation.

EDIT: I didn't mean to imply intent on your part. Apologies for the unclear language. Edited original comment as well.

Comment by stefan.torges (storges) on How Europe might matter for AI governance · 2019-07-14T17:08:55.204Z · EA · GW

This strikes me as an isolated example of Europe leading on one metric. I plan to write something more comprehensive, but I think just seeing this statistic could create a wrong impression for some people.

(edited to remove accusatory tone)

Comment by stefan.torges (storges) on First application round of the EAF Fund · 2019-07-09T23:48:45.647Z · EA · GW

Thanks! Clarified.

Comment by stefan.torges (storges) on Review of Fundraising Activities of EAF in 2018 · 2019-06-05T08:48:01.044Z · EA · GW

Edited. (Initial draft was made in April and I didn't update afterwards.)

Comment by stefan.torges (storges) on Ingredients for creating disruptive research teams · 2019-06-04T17:55:49.021Z · EA · GW

This issue is something I am still somewhat confused about. Feynman makes a similar point about the IAS. I also know about a few more anecdotes in line with the "constraints breed creativity" point.

I think the 'constraints breed creativity' point applies more to the tools people work with, whereas other constraints like teaching, administrative tasks, and grant applications mostly waste time.

There might be something to this, but I distinctly recall reading somewhere that having state-of-the-art tools is also crucial for being able to work at the frontier. Without an electron microscope, some research is simply unavailable. (It might also create an incentive to develop an alternative, and this is the kind of disruption we're actually looking for.) More powerful computers also seem like a good thing in general. So I'm not sure how to resolve this.

Edit: Also consider the anecdote mentioned by John Maxwell about PARC of course.

Another thing I remember him once mentioning to me is that PARC bought its researchers very expensive, cutting-edge equipment to do research with, on the assumption that Moore's Law would eventually drive down the price of such equipment to the point where it was affordable to the mainstream.

Comment by stefan.torges (storges) on Ingredients for creating disruptive research teams · 2019-06-04T17:47:09.810Z · EA · GW

How long do you estimate that you spent looking at each of the case studies?

Good question. I'd say on average about 10 hours; some more, some less.

It seems that most are based on a small number of sources. Did you find that reading additional sources changed your views about a particular research team compared to the first source or two that you read? Do you expect steeply diminishing returns from investing more time into digging further into particular case studies?

In my experience, most of the material went back to one or two authoritative accounts of these teams. So there appeared to be little value beyond finding and reading these. I'm not sure how well this generalizes to other case studies though.

Comment by stefan.torges (storges) on Which scientific discovery was most ahead of its time? · 2019-05-17T07:37:35.127Z · EA · GW

This post might be relevant to your question:

On Einstein:

What about great geniuses like Einstein? Doesn’t he disprove the notion of inevitability? The conventional wisdom is that Einstein’s wildly creative ideas about the nature of the universe, first announced to the world in 1905, were so out of the ordinary, so far ahead of his time, and so unique that if he had not been born we might not have his theories of relativity even today, a century later. Einstein was a unique genius, no doubt. But as always, others were working on the same problems. Hendrik Lorentz, a theoretical physicist who studied light waves, introduced a mathematical structure of space-time in July 1905, the same year as Einstein. In 1904 the French mathematician Henri Poincare pointed out that observers in different frames will have clocks which will “… mark what one may call the local time. … as demanded by the relativity principle the observer cannot know whether he is at rest or in absolute motion.” And the 1911 winner of the Nobel prize in physics, Wilhelm Wien, proposed to the Swedish committee that Lorentz and Einstein be jointly awarded a Nobel prize in 1912 for their work on special relativity. He told the committee “…While Lorentz must be considered as the first to have found the mathematical content of the relativity principle, Einstein succeeded in reducing it to a simple principle. One should therefore assess the merits of both investigators as being comparable…” (Neither won that year.) However, according to Walter Isaacson, who wrote a wonderful biography of Einstein’s ideas in “Einstein: His Life and Universe”, “Lorentz and Poincare never were able to make Einstein’s leap even after they read his paper. Lorentz still clung to the existence of the ether and its ‘at rest’ frame of reference. Until his death in 1912, Poincare never fully gave up the concept of the ether or the notion of absolute rest. In other words, Einstein made a conceptual leap that Poincare and Lorentz could not make even after Einstein explained it.” But Isaacson, a celebrator of Einstein’s special genius for the improbable insights of relativity, admits that “someone else would have come up with it, but not for at least ten years or more.” So the greatest icon genius of the human race was able to leap ahead of the inevitable by maybe 10 years. For the rest of humanity, the inevitable happens on schedule.

Comment by stefan.torges (storges) on Takeaways from EAF's Hiring Round · 2018-11-21T09:13:26.483Z · EA · GW

Thanks for your response, Denise! That's a helpful perspective, and we'll take it into account next time.

Comment by stefan.torges (storges) on Takeaways from EAF's Hiring Round · 2018-11-20T10:00:38.738Z · EA · GW

Usually, we gave applicants the benefit of the doubt in such cases, especially early on. Later in the process we discussed strengths and weaknesses, compared candidates directly, and asked ourselves whether somebody could turn out to be the strongest candidate if we learned more about them. A single low score was usually not decisive in these cases.

Comment by stefan.torges (storges) on Takeaways from EAF's Hiring Round · 2018-11-20T09:56:42.562Z · EA · GW

I just ran the numbers. These are the GMA correlations with an equally-weighted combination of all other instruments from the first three stages (form, CV, work test(s), two interviews). Note that this makes the sample sizes very small:

  • Research Analyst: 0.19 (N=6)
  • Operations Analyst: 0.79 (N=4)

First two stages only (CV, form, work test(s)):

  • Research Analyst: 0.13 (N=9)
  • Operations Analyst: 0.70 (N=7)

I think the strongest case for them is their cost-effectiveness in terms of time invested on both sides.
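For anyone curious how figures like these are derived: a minimal sketch of the computation, with a hypothetical helper and made-up column layout for illustration (this is not the actual analysis code or data).

```python
from statistics import mean


def gma_correlation(gma_scores, other_scores):
    """Pearson correlation between candidates' GMA scores and an
    equally-weighted combination of the other assessment instruments.

    other_scores: one row per candidate, one column per instrument
    (e.g. form, CV, work test, interview ratings), each already
    normalized to a comparable scale.
    """
    # Equal weighting = simple mean across instruments per candidate.
    combined = [mean(row) for row in other_scores]

    # Pearson correlation between the two score vectors.
    mx, my = mean(gma_scores), mean(combined)
    cov = sum((x - mx) * (y - my) for x, y in zip(gma_scores, combined))
    sx = sum((x - mx) ** 2 for x in gma_scores) ** 0.5
    sy = sum((y - my) ** 2 for y in combined) ** 0.5
    return cov / (sx * sy)
```

With N as small as 4–9, a single candidate can swing the resulting coefficient substantially, which is why the numbers above should be read with caution.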

Comment by stefan.torges (storges) on Takeaways from EAF's Hiring Round · 2018-11-20T09:40:56.624Z · EA · GW

Reference checks can mimic a longer trial in that they allow you to learn much more about somebody's behavior and performance in a regular work context. This depends on references being honest and willing to share candidates' potential weaknesses as well. We thought the EA community was exemplary in this regard.

No reference check was decisive. I'd imagine one would only be decisive in the case of major red flags. Still, they informed our understanding of candidates' relative strengths and weaknesses.

We think they're great because they're very cost-effective, and can highlight potential areas of improvement and issues to further investigate in a trial.