'Existential Risk and Growth' Deep Dive #3 - Extensions and Variations 2020-12-20T12:39:11.984Z
Urgency vs. Patience - a Toy Model 2020-08-19T14:13:32.802Z
Expected Value 2020-07-31T13:59:54.861Z
Poor meat eater problem 2020-07-10T08:13:11.628Z
Are there superforecasts for existential risk? 2020-07-07T07:39:24.271Z
AI Governance Reading Group Guide 2020-06-25T10:16:25.029Z
'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper 2020-06-21T09:22:06.735Z
If you value future people, why do you consider near term effects? 2020-04-08T15:21:13.500Z


Comment by alex-ht on Lessons from my time in Effective Altruism · 2021-01-18T16:40:16.715Z · EA · GW

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)

Comment by alex-ht on Can people be persuaded by anything other than an appeal to emotion? · 2021-01-02T19:58:30.024Z · EA · GW

My guess is that few EAs care emotionally about cost effectiveness, and that they care a lot emotionally about helping others. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to ration it.

I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs' identities. I think those can be developed naturally to some extent, and don't seem like complete prerequisites to being an EA.

Comment by alex-ht on Should Effective Altruists Focus More on Movement Building? · 2020-12-30T13:13:35.019Z · EA · GW

Thanks for writing this and contributing to the conversation :)

Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.

I do think the salience of movement building has been raised elsewhere eg:

Having said that, I share the feeling that movement building seems underrated. Given how impactful it seems, I would expect more EAs to want to use their careers to work on movement building.

One resolution to this apparent conflict is that the fraction of people who can be good at movement building long-term might be smaller than it first seems. For lots of the interventions that you suggest, strong social skills and a strong understanding of EA concepts seem important, as well as some general executional or project management ability. Though movement builders don’t necessarily have to be excellent in any of these domains, they have to be at least pretty good at all of them. They also have to be interested enough in all of them to do movement building. This narrows down the pool of people who can work in movement building. 

Another possible reason is that within the EA community, movement building careers are generally seen as less prestigious than more 'direct' kinds of work, and social incentives play a large role in career choice. For example, some people would be more impressed by someone doing technical AI safety research than by someone building talent pipelines into AI safety, even if the second one has more impact.

Also, as Aaron says, a lot of direct work has helpful movement building effects. 

I also agree with Aaron that looking at funding is a bit complicated with movement building, partly because movement building is probably cheaper than other things, but also that it can be hard to tease apart what's movement building and what's not. 

Comment by alex-ht on A case against strong longtermism · 2020-12-18T12:08:34.413Z · EA · GW

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Comment by alex-ht on Introducing High Impact Athletes · 2020-12-01T21:25:43.786Z · EA · GW

Thanks! I appreciate it :)

It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.

Just to clarify, when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network" I think I agree, but that this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end up with a fairly homogeneous network eg. because your profession or university is homogeneous. Sounds like Marcus is in this category himself (if tennis is mainly white, and his network is mainly tennis players).

Comment by alex-ht on Introducing High Impact Athletes · 2020-12-01T09:28:37.757Z · EA · GW

Was this meant as a reply to my comment or a reply to Ben's comment?

I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:55:50.920Z · EA · GW

Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:55:02.081Z · EA · GW

I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:09:39.307Z · EA · GW

It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).

Some discussion of why this might matter here:

Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the combination of them both is pretty worrying and I'd personally be in favour of changing it.

Edited to address downvotes: Obviously, it is not bad in itself that the team is all white, and I'm not implying that any deliberate filtering for white people has gone on. I just think it's something to be aware of - both for PR reasons (avoiding looking like white saviours) and for more substantive reasons (eg. building a movement and sub-movements that can draw on a range of experiences)

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:08:55.385Z · EA · GW

Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.

Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:08:35.190Z · EA · GW

I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?

(I'm not saying you should mention your take on longtermism on the website.)

Comment by alex-ht on Introducing High Impact Athletes · 2020-11-30T14:08:18.146Z · EA · GW

This is really cool! Thanks for doing this :)

Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)

Comment by alex-ht on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T13:46:09.095Z · EA · GW

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He makes a very similar estimate there.

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

Comment by alex-ht on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T13:38:19.293Z · EA · GW

"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists."

Comment by alex-ht on Why we should grow One for the World chapters alongside EA student groups · 2020-11-04T14:18:07.547Z · EA · GW

Thanks for writing this! I and an EA community builder I know found it interesting and helpful.

I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:

  • OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)

  • OFTW groups may crowd out EA groups. If there's an OFTW group at a university, the EA group may have to compete, even if the groups are officially collaborating. In any case, the groups will be competing for the attention of the altruistically motivated people at the university

  • Because OFTW isn't cause neutral, it might not be a great introduction to EA. For some people, having lots of exposure to OFTW might even make them less receptive to EA, because of anchoring on a specific cause. As you say "Since it is a cause-specific organization working to alleviate extreme global poverty, that essentially erases EA’s central work of evaluating which causes are the most important." I agree with you that trying to impartially work out which cause is best to work on is core to EA

  • OFTW's direct effects (donations to end extreme poverty) may not be as uncontroversially good as they seem. See this talk by Hilary Greaves from the Student Summit:

  • OFTW outreach could be so broad and shallow that it doesn't actually select that strongly for future dedicated EAs. In a comment below, Jack says "OFTW on average engages a donor for ~10-60 mins before they pledge (and pre-COVID this was sometimes as little as 2 mins when our volunteers were tabling)". Of course, people who take that pledge will be more likely to become dedicated EAs than the average student, but there are many other ways to select at that level

Comment by alex-ht on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T09:28:00.685Z · EA · GW

Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!) 

Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this),  but one economic model alone is very unlikely to resolve a big question.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-08-29T14:22:39.092Z · EA · GW

Thank you :) I've corrected it

Comment by alex-ht on Urgency vs. Patience - a Toy Model · 2020-08-20T09:09:24.168Z · EA · GW
  1. I think I've conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient. 
    1. (Side note: There are so many possible longtermist strategies! Any combination of {urgent or patient} × {broad or narrow} × {XRR or trajectory change} is a distinct strategy. This is interesting as often people conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR but there are actually at least six other strategies)
  2. This model completely neglects meta strategic work along the lines of 'are we at the hinge of history?' and 'should we work on XRR or something else?'. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out as either increasing the probability of technological maturity, or in improving the quality of the future. So I'm not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
  3. I had s-risks in mind when I caveated it as 'safely' reaching technological maturity, and was including s-risk reduction in XRR. But I'm not sure if that's the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality term is large and negative. So it seems that s-risks are more like 'quality increasing' than 'probability increasing'. The argument for them being 'probability increasing' is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience)
Comment by alex-ht on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T07:41:00.657Z · EA · GW

Thanks for writing this, I like that it's short and has a section on subjective probability estimates. 

  1. What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically very fast institutional reform could be nearterm, and doing AI safety field building research in academia could hypothetically be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
  2. Is the main crux for 'Long-term x-risk matters more than short-term risk' around how transformative the next two centuries will be? If we start getting technologically mature, then x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
  3. What do you think about the assumption that 'efforts can reduce x-risk by an amount proportional to the current risk'? That seems maybe appropriate for medium levels of risk eg. 1-10%, but if risk is small, like 0.01-1%, it might get very difficult to halve the risk. 
Comment by alex-ht on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-20T07:16:47.058Z · EA · GW

This is really interesting and I'd like to hear more. Feel free to just answer the easiest questions:

Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia? 

What kinds of specialisation do you think we'd want - subject knowledge? Along different subject lines to academia? 

Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?

What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?

Comment by alex-ht on What (other) posts are you planning on writing? · 2020-08-20T06:52:02.823Z · EA · GW

I'd really like to see "If causes differ astronomically in EV, then personal fit in career choice is unimportant"

Comment by Alex HT on [deleted post] 2020-08-19T14:02:33.660Z

Toy Model

Let \(V\) be the value of the longterm future. Let \(p\) be the probability that our descendants safely reach technological maturity. Let \(Q\) be the expected quality of the longterm future, given that we safely reach technological maturity. Then the value of the longterm future is:

\[V = p \cdot Q\]

This ignores all the value in the longterm future that occurs when our descendants don't safely reach technological maturity. 

Assume that we can choose between doing some urgent longtermist work, say existential risk reduction (\(x\)), or some patient longtermist work, let's call this global priorities research (\(g\)). Assume that the existential risk reduction work increases the probability that our descendants safely reach technological maturity, but has no other effect on the quality of the future. Assume that the global priorities research increases the quality of the longterm future conditional on it occurring, but has no effect on existential risk.

Consider some small change in either existential risk reduction work or global priorities research. You can imagine this as $10 trillion, or 'what the EA community focuses on for the next 50 years', or something like that. Then for some small finite change in risk reduction, \(\Delta x\), or in global priorities research, \(\Delta g\), the change in the value of the longterm future will be:

\[\Delta V_x = \Delta p_x \cdot Q \qquad \text{or} \qquad \Delta V_g = p \cdot \Delta Q_g\]

Dropping the subscripts and dividing the first equation by the other:

\[\frac{\Delta V_x}{\Delta V_g} = \frac{\Delta p \cdot Q}{p \cdot \Delta Q} = \frac{\Delta p / p}{\Delta Q / Q}\]

Rewriting in more intuitive terms:

\[\frac{\text{value of XRR}}{\text{value of GPR}} = \frac{\text{fractional increase in probability of survival}}{\text{fractional increase in quality of the future}}\]

Critiquing the Model

I've made the assumption that x-risk reduction work doesn't otherwise affect the quality of the future, and patient longtermist work doesn't affect the probability of existential risk. Obviously, this isn't true. However, I don't think that reduces the value of the model much, as I'm just trying to get a rough estimate of which produces more value - increasing the probability of space colonisation, or increasing the quality of the civilisation that colonises space.

I have the suspicion that most of the value of broad, patient longtermist work (such as much of the philosophy being done at GPI, or moral circle expansion) actually comes via its effects on existential risk.

I've made the assumption that we can ignore all value other than worlds where we safely reach technological maturity.  This seems pretty intuitive to me, given the likely quality, size, and duration of a technologically mature society, and my ethical views. 

Putting some numbers in 

Let's put some numbers in. Toby Ord thinks that with a big effort, humanity can reduce the probability of existential risk this century from 1/3 to 1/6. That would make the fractional increase in probability of survival 25% (it goes from 2/3 to 5/6). Assume for simplicity that x-risk after this century is zero.

For GPR to be cost competitive with XRR given these numbers (so the ratio above equals 1), the fractional increase in the value of the future for a comparable amount of work would have to be 25%.

Toby's numbers are really quite favourable to XRR, though, so putting in your own seems good.

Eg. if you think x-risk is 10%, and we could reduce it to 5% with some amount of effort, then the fractional increase in probability of survival is about 5.6% (it goes from 90% to 95%). So for GPR to be cost competitive, we'd have to be able to increase the value of the future by 5.6% with a similar amount of work to what the XRR would have taken.
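That break-even arithmetic is easy to check mechanically. Here's a minimal sketch of the toy model's comparison (the function name is mine, and the 10%-to-5% figures are just the illustrative numbers from above, not anything from the literature):

```python
def xrr_vs_gpr(survival_before, survival_after, quality_gain):
    """Ratio of the value of XRR work to comparable GPR work,
    assuming value of the future = P(survival) * quality.

    survival_before/after: probability of safely reaching technological
    maturity without/with the XRR work.
    quality_gain: fractional increase in quality from the GPR work.
    A ratio above 1 favours XRR; below 1 favours GPR.
    """
    survival_gain = survival_after / survival_before - 1
    return survival_gain / quality_gain

# X-risk falling from 10% to 5% takes survival from 90% to 95%,
# a ~5.6% fractional gain, so GPR roughly breaks even if it can make
# the future ~5.6% better.
print(xrr_vs_gpr(0.90, 0.95, quality_gain=0.056))  # ~1.0
```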


Would it take a similar amount of effort to reduce the probability of existential risk this century from 1/3 to 1/6 and to increase the fractional value of the future conditional on it occurring by 25%? My intuition is that the latter is actually much harder than the former. Remember, you've got to make the whole future 25% better for all time. What do you think?

Some things going into this are:

  • I think it's pretty likely that there will be highly transformative events over the next two centuries. It seems really hard to make detailed plans with steps that happen after these highly transformative events.
  • I'm not sure if research about how the world works now actually helps much for people understanding how the world works after these highly transformative events. If we're all digital minds, or in space, or highly genetically modified then understanding how today's poverty, ecosystems, or governments worked might not be very helpful.
  • The minds doing research after the transition might be much more powerful than current researchers. A lower bound seems like 200+IQ humans (and lots more of them than are researchers now), a reasonable expectation seems like a group of superhuman narrow AIs, an upper bound seems like a superintelligent general AI. I think these could do much better research, much faster than current humans working in our current institutions. Of course, building the field means these future researchers have more to work with when they get started. But I guess this is negligible compared to increasing the probability that these future researchers exist, given how much faster they would be.

Having said that, I don't have a great understanding of the route to value of longtermist research that doesn't contribute to reducing or understanding existential risk (though I think it's probably valuable, for epistemic modesty reasons).

I should also say that lots of actual 'global priorities research' does a lot to understand and reduce x-risk, and could be understood as XRR work. I wonder how useful a concept 'global priorities research' is, and whether it's too broad.


  • Do you think this model is right enough to be at all useful? 
  • What numbers do you think are appropriate to put into this model? If a given unit of XRR work increases the probability of survival by some given fraction, how much value could it have created via trajectory change? Any vague/half-baked considerations here are appreciated.
  • What's the best way to conceptualise the value of non-XRR longtermist work? Is it 'make the future go better for the rest of time'? Does it rely on a lock-in event, like transformative technologies, to make the benefits semi-permanent? 
Comment by alex-ht on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-11T11:04:58.744Z · EA · GW

Thanks for writing this. I'd love to see your napkin math

Comment by alex-ht on Are there superforecasts for existential risk? · 2020-07-08T08:33:10.839Z · EA · GW

Thanks for the answer.

Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'

You're a good forecaster right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?

Comment by alex-ht on Are there superforecasts for existential risk? · 2020-07-08T08:32:52.868Z · EA · GW

Thanks for the answer.

Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'

You're a good forecaster right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?

Comment by alex-ht on Are there superforecasts for existential risk? · 2020-07-08T08:15:25.157Z · EA · GW

Thanks, those look good and I wasn't aware of them

Comment by alex-ht on The Moral Value of Information - edited transcript · 2020-07-03T19:16:20.072Z · EA · GW

Yep - the author can click on the image and then drag from the corner to enlarge them (found this difficult to find myself)

Comment by alex-ht on AI Governance Reading Group Guide · 2020-06-30T12:26:45.987Z · EA · GW

It's pretty blank - something like this

Comment by alex-ht on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-22T08:13:27.866Z · EA · GW

Yeah, that seems right to me.

On doubling consumption though, if you can suggest a policy that increases growth consistently, eventually you might cause consumption to be doubled (at some later time consumption under the faster growth will be twice as much as it would have been with the slower growth). Do you mean you don't think you could suggest a policy change that would increase the growth rate by much?

Comment by alex-ht on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-16T09:28:37.966Z · EA · GW

Great to hear this has been useful!

I think if the elasticity of marginal utility of consumption is around 1 then yes, spreading longtermism probably looks better than accelerating growth. Though I don't know how expensive it is to double someone's consumption in the long run.

Doubling someone's consumption by just giving them extra money might cost $30,000 for 50 years ≈ $0.5 million. It seems right to me that there are ways to reduce the discount rate that are much cheaper than half a million dollars for 13 basis points. Eg. some community building probably takes a person's discount rate from around 2% to around 0% for less than half a million dollars.

I don't know how much cheaper it might be to double someone's consumption by increasing growth, but I suspect that spreading longtermism still looks better for this value of the elasticity.

How confident are you that the elasticity is around 1? I haven't looked into it and don't know how much consensus there is.

Comment by alex-ht on Existential Risk and Economic Growth · 2020-06-14T21:40:49.598Z · EA · GW

I've written a summary here in case you haven't seen it:

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-05-25T10:06:10.556Z · EA · GW

What do you think absorbers might be in cases of complex cluelessness? I see that delaying someone on the street might just cause them to spend 30 seconds less procrastinating, but how might this work for distributing bednets, or increasing economic growth?

Maybe there's a line of argument around nothing being counterfactual in the long term - because every time you solve a problem, someone else was going to solve it eventually. Eg. if you didn't increase growth in some region, someone else would have 50 years later. And now that you did it, they won't. But this just sounds like a weirdly stable system and I guess this isn't what you have in mind

Comment by alex-ht on [Stats4EA] Expectations are not Outcomes · 2020-05-20T10:57:31.107Z · EA · GW

Thanks for writing this. I hadn't thought about this explicitly before and think it's useful. The bite-sized format is great. A series of posts would be great too.

Comment by alex-ht on Existential Risk and Economic Growth · 2020-05-08T10:42:46.221Z · EA · GW

So you think the hazard rate might go from around 20% to around 1%? That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

I don't have any specific stories tbh, I haven't thought about it (and maybe it's just pretty implausible idk).

Comment by alex-ht on Existential Risk and Economic Growth · 2020-05-05T11:31:14.162Z · EA · GW

Not the author but I think I understand the model so can offer my thoughts:

1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

The model is looking at general dynamics of risk from the production of new goods, and isn’t trying to look at AI in any kind of granular way. The timescales on which we see the inverted U-shape depend on what values you pick for different parameters, so there are different values for which the time axes would span decades instead of centuries. I guess that picking a different growth rate would be one clear way to squash everything into a shorter time. (Maybe this is pretty consistent with short/medium AI timelines, as they probably correlate strongly with really fast growth).

I think your point about AI messing up the results is a good one -- the model says that a boom in growth has a net effect to reduce x-risk because, while risk is increased in the short-term, the long-term effects cancel that out. But if AI comes in the next 50-100 years, then the long-term benefits never materialise.

2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

Sure, maybe there’s a lock-in event coming in the next 20-200 years which we can either

  • Delay (by decreasing growth) so that we have more time to develop safety features, or
  • Make more safety-focussed (by increasing growth) so it is more likely to lock in a good state

I’d think that what matters is resources (say coordination-adjusted-IQ-person-hours or whatever) spent on safety rather than time that could available to be spent on safety if we wanted. So if we’re poor and reckless, then more time isn’t necessarily good. And this time spent being less rich also might make other x-risks more likely. But that’s a very high level abstraction, and doesn’t really engage with the specific claim too closely so keen to hear what you think.

3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

The model doesn’t say anything about this kind of granular consideration (and I don’t have strong thoughts of my own).

4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

In the model, risk depends on production of consumption goods, rather than the level of consumption technology. The intuition behind this is that technological ideas themselves aren’t dangerous, it’s all the stuff people do with the ideas that’s dangerous. Eg. synthetic biology understanding isn’t itself dangerous, but a bunch of synthetic biology labs producing loads of exotic organisms could be dangerous.

But I think it might make sense to instead model risk as partially depending on technology (as well as production). Eg. once we know how to make some level of AI, the damage might be done, and it doesn’t matter whether there are 100 of them or just one.

And the reason growth isn’t neutral in the model is that there are also safety technologies (which might be analogous to making the world more robust to black balls). Growth means people value life more so they spend more on safety.

5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

Sounds right to me.

6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

The hazard rate does increase during the period when there is more production of consumption goods, but this also means people become richer earlier than they otherwise would have, so they value safety more than they otherwise would.
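To make this dynamic concrete, here's a minimal toy sketch — emphatically not the paper's actual equations. The functional forms and constants (the 50 in the safety share, the 0.001 hazard scale, the output plateau at 100) are all arbitrary assumptions, chosen only to illustrate that faster growth can raise the hazard rate for a while yet still lower the total hazard accumulated on the way to the wealthy, safety-heavy regime:

```python
# Toy model (made-up functional forms, not the paper's): per-period hazard
# rises with output of consumption goods and falls with safety spending,
# and richer societies devote a larger share of output to safety.

def cumulative_hazard(growth_rate, periods=300):
    """Sum of per-period hazard while output grows toward a plateau."""
    y, y_max = 1.0, 100.0               # output level; plateau = "end of growth"
    total = 0.0
    for _ in range(periods):
        safety_share = y / (y + 50.0)   # assumed: richer -> higher safety share
        consumption = y * (1.0 - safety_share)
        safety = y * safety_share
        hazard = 0.001 * consumption / (1.0 + safety)  # assumed functional form
        total += hazard
        y = min(y * (1.0 + growth_rate), y_max)
    return total

slow = cumulative_hazard(0.01)   # never reaches the plateau in 300 periods
fast = cumulative_hazard(0.04)   # hits the plateau after roughly 120 periods
print(f"cumulative hazard - slow growth: {slow:.3f}, fast growth: {fast:.3f}")
```

With these assumed forms, the fast-growth path shows a higher hazard rate early on, but spends far less time in the poor, consumption-heavy regime, so its cumulative hazard comes out lower: the compression along the time axis outweighs the stretch along the vertical axis.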

As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

Hmm yeah, maybe the risk does depend in part on the rate of change of consumption technologies - because if no new techs are being discovered, it seems like we might be safe from anthropogenic x-risk.

But, even if you believe that the hazard rate would decay in this situation, maybe what's doing the work is that you're imagining that we're still doing a lot of safety research, and thinking about how to mitigate risks. So that the consumption sector is not growing, but the safety sector continues to grow. In the existing model, the hazard rate could decay to zero in this case.

I guess I'm also not sure if I share the intuition that the hazard rate would decay to zero. Sure, we don't have the technology right now to produce AGI that would constitute an existential risk but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways? It seems plausible to me that if we kept our current level of technology and production then we'd have a non-trivial chance each year of killing ourselves off.

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

Comment by alex-ht on What posts do you want someone to write? · 2020-04-30T15:06:21.376Z · EA · GW

Now done here. It's a ~10 page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T14:15:31.376Z · EA · GW

Ah yeah that makes sense. I think they seemed distinct to me because one seems like 'buy some QALYS now before the singularity' and the other seems like 'make the singularity happen sooner' (obviously these are big caricatures). And the second one seems like it has a lot more value than the first if you can do it (of course I'm not saying you can). But yeah they are the same in that they are adding value before a set time. I can imagine that post being really useful to send to people I talk to - looking forward to reading it.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:34:53.230Z · EA · GW

Pleased you liked it and thanks for the question. Here are my quick thoughts:

That kind of flourishing-education sounds a bit like Bostrom's evaluation function described here:

Or steering capacity described here:

Unfortunately he doesn't talk about how to construct the evaluation function, and steering capacity is only motivated by an analogy. I agree with you/Bostrom/Milan that there are probably some things that look more robustly good than others. It's a bit unclear how to find these, but something like 'Build models of how the world works by looking to the past and then updating based on inside-view arguments about the present/future, then take actions that look good on most of your models' seems vaguely right to me. Some things that look good to me are: investing, building the EA community, reducing the chance of catastrophic risks, spreading good values, getting better at forecasting, and building models of how the world works.

Adjusting our values based on them being difficult to achieve seems a bit backward to me, but I'm motivated by subjective preferences, and maybe it would make more sense if you were taking a more ethical/realist approach (eg. because you expect the correct moral theory to actually be feasible to implement).

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:23:58.067Z · EA · GW

Following that paper, I think growth might increase x-risk in the near-term (say ~100-200 years), and might decrease x-risk in the long-term (if the growth doesn't come at the cost of later growth). I meant (1), but was thinking about the effect of x-risk in the near-term.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:19:15.644Z · EA · GW

Again, nice clarification.

I didn't want to make any strong claims about which interventions people should end up prioritising, only about which effects they should consider to choose interventions.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:17:44.903Z · EA · GW

Yep I meant (1) - thanks for checking. Also, that post sounds great - let me know if you want me to look over a draft :)

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:16:23.994Z · EA · GW

Yep I agree (I frame this as 'beware updating on epistemic clones' - people who have your beliefs for the same reason as you). My point in bringing this up was just that the common-sense view isn't obviously near-termist.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:14:17.981Z · EA · GW

Nice, thanks

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:13:45.544Z · EA · GW

I also found this to be a great framing of absorbers and hadn't really got this before. It's an argument against 'all actions we take have huge effects on the future', and I'm not sure how to weigh the two pictures against each other empirically. Like, how would I know whether the world is more absorber-y or more sensitive to small changes?

I think conception events are just one example, and there are a bunch of other examples of this, the general idea being that the world has systems which are complex, hard to predict, and very sensitive to initial conditions. Eg. the weather and climate system (a butterfly flapping its wings in China causing a hurricane in Texas). But these are cases of simple cluelessness where we have evidential symmetry.

My claim is that we are faced with complex cluelessness, where there are some kind of systematic effects going on. To apply this to conception events - imagine we changed conception events so that girls were much more likely to be conceived than boys (say because in the near-term that had some good effects eg. say women tended to be happier at the time). My intuition here is that there could be long-term effects of indeterminate sign (eg. from increased/decreased population growth) which might dominate the near-term effects. Does that match your intuition?

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T09:02:12.458Z · EA · GW

Ah ok. Can you say a bit more about why long-term-focused interventions don't meet your standards for rigour? I guess you take speculation about long-term effects as Bayesian evidence, but only as extremely weak evidence compared to evidence about near-term effects. Is that right?

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T08:59:10.154Z · EA · GW

Nice, that's well put. Do you think we can get any idea of longterm effects eg. (somewhere between -10,000 and +10,000, but tending towards the higher/lower end)?

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T08:57:31.362Z · EA · GW

Yeah that sounds like simple cluelessness. I still don't get this point (whereas I like other points you've made). Why would we think the distributions are identical or the probabilities are exactly 50% when we don't have evidential symmetry?

I see why you would not be sure of the long-term effects (not have an EV estimate), but not why you would have an estimate of exactly zero. And if you're not sure, I think it makes sense to try to get more sure. But I think you guys think this is harder than I do (another useful answer you've given).

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T08:49:22.730Z · EA · GW

Is the idea that most of the opportunities to do good will be soon (say in the next 100-200 years)? Eg. because we expect less poverty, fewer factory farms, etc.? Or because the AI is gonna come and make us all happy, so we should just make the bit before that good?

Distinct from that seems 'make us get to that point faster' (I'm imagining this could mean things like increasing growth/creating friendly AI/spreading good values) - that seems very much like looking to long-term effects.

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T08:43:53.766Z · EA · GW

Do you think any of these things have positive compounding effects or avoid lock-in:

-Investing to donate later,

-Narrow x-risk reduction,

-Building the EA community?

Comment by alex-ht on If you value future people, why do you consider near term effects? · 2020-04-15T08:41:10.146Z · EA · GW

Thanks for this. (I should say I don't completely understand it). My intuitions are much more sympathetic to additivity over prioritarianism but I see where you're coming from and it does help to answer my question (and updates me a bit).

I wonder if you've seen this. I didn't take the time to understand it fully but it looks like the kind of thing you might be interested in. (Also curious to hear whether you agree with the conclusions).