Posts

'Existential Risk and Growth' Deep Dive #3 - Extensions and Variations 2020-12-20T12:39:11.984Z
Urgency vs. Patience - a Toy Model 2020-08-19T14:13:32.802Z
Expected Value 2020-07-31T13:59:54.861Z
Poor meat eater problem 2020-07-10T08:13:11.628Z
Are there superforecasts for existential risk? 2020-07-07T07:39:24.271Z
AI Governance Reading Group Guide 2020-06-25T10:16:25.029Z
'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper 2020-06-21T09:22:06.735Z
If you value future people, why do you consider near term effects? 2020-04-08T15:21:13.500Z

Comments

Comment by Alex HT on Non-consequentialist longtermism · 2021-06-05T10:19:52.228Z · EA · GW

https://globalprioritiesinstitute.org/andreas-mogensen-staking-our-future-deontic-long-termism-and-the-non-identity-problem/ 

Comment by Alex HT on Non-consequentialist longtermism · 2021-06-05T10:08:06.161Z · EA · GW

I haven't read it, but the name of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"

Comment by Alex HT on Is there evidence that recommender systems are changing users' preferences? · 2021-04-13T10:21:07.835Z · EA · GW

 Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

Comment by Alex HT on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-11T18:28:32.635Z · EA · GW

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

Comment by Alex HT on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-12T11:16:29.383Z · EA · GW

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is more “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?


Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

Comment by Alex HT on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-10T11:00:13.526Z · EA · GW

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

Comment by Alex HT on Should I transition from economics to AI research? · 2021-02-28T19:42:59.334Z · EA · GW

There are also more applied AI/tech focused economics questions that seem important for longtermists (eg. if GPI stuff seems too abstract for you)

Comment by Alex HT on Running an AMA on the EA Forum · 2021-02-18T22:01:35.604Z · EA · GW

Agree with Marisa that you'd be well suited to do an AMA

Comment by Alex HT on How can non-biologists contribute to wild animal welfare? · 2021-02-18T08:32:03.896Z · EA · GW

Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.

Comment by Alex HT on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T19:56:48.288Z · EA · GW

Thanks for your comment, it makes a good point. My comment was hastily written and I think the argument of mine that you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. 

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists). 

I'm also not sure that lots of longtermists (even of the Bostrom/hinge of history type) would agree that the quoted claim accurately represents their views:

 our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.”

But, I do agree that some longtermists do think 

  • there are likely to be very transformative events soon eg. within 50 years
  • in the long run, if they go well, these events will massively improve the human condition 

And there's some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes. 

Comment by Alex HT on Ecosystems vs Projects in EA Movement Building · 2021-02-10T15:33:54.794Z · EA · GW

from 'Things CEA is not doing' forum post https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing 

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
Comment by Alex HT on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-09T13:36:10.536Z · EA · GW

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted.  Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas so as not to put off potential sympathisers.

Clarity

In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism. 

Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!

I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else. 

The thesis of the book (for people reading this comment, and to check my understanding)

“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”

“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”

Utilitarianism (Edit: I think Tyle has added a better reading of this section below)

  • This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermism. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism
  • In particular, there seems to be a sense of derision at any philosophy where 'the ends justify the means'. I didn't really feel like this was argued for (please correct me if I'm wrong!)
  • I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that longtermism overweights consequences in the longterm future compared to other consequences, but is right to focus on consequences generally
  • I would have preferred if these parts of the book were clear about exactly what the argument was
  • I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)

Millennialism

  • “A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (pg.24 of the book)
  • Longtermism does not say our current world is replete with suffering and death
  • Longtermism does not say the world will be transformed soon
  • Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
  • Therefore, longtermism does not meet the stated definition of a millennialist movement
  • Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don't know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

Mere Ripples

  • Some things are bigger than other things
  • That doesn’t mean that the smaller things aren’t bad or good or important- they are just smaller than the bigger things
  • If you can make a good big thing happen or make a good small thing happen, you can do more good by making the big thing happen
  • That doesn't mean the small thing is not important, but it is smaller than the big thing
  • I feel confused

White Supremacy

  • The book quotes this section from Beckstead’s Thesis:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

The book goes on to say:

In a phrase, they support white supremacist ideology. To be clear, I am using this term in a technical scholarly sense. It denotes actions or policies that reinforce “racial subordination and maintaining a normalized White privilege.” As the legal scholar Frances Lee Ansley wrote in 1997, the concept encompasses “a political, economic and cultural system in which whites overwhelmingly control power and material resources,” in which “conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.”

On this definition, the claims of Mogensen and Beckstead are clearly white supremacist: African nations, for example, are poorer than Sweden, so according to the reasoning above we should transfer resources from the former to the latter. You can fill in the blanks. Furthermore, since these claims derive from the central tenets of Bostromian longtermism itself, the very same accusation applies to longtermism as well. Once again, our top four global priorities, according to Bostrom, must be to reduce existential risk, with the fifth being to minimize “astronomical waste” by colonizing space as soon as possible. Since poor people are the least well-positioned to achieve these aims, it makes perfect sense that longtermists should ignore them. Hence, the more longtermists there are, the worse we might expect the plight of the poor to become.

  • I'm pretty sure the book isn't using 'white supremacist' in the normal sense of the phrase. For that reason, I'm confused about this, and would appreciate answers to these questions
    • The Beckstead quote ends ‘other things being equal’. Doesn't that imply that the claim is not 'overall, it's better to save lives in rich countries than poor countries' but 'here is an argument that pushes in favour of saving lives in rich countries over poor countries'?
    • Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
      • What if the poor people are white and the rich people are not white?
      • Why do  rich-nation government health services not meet this definition of white supremacy?
  • I'd also have preferred if it was clear how this version of white supremacy interfaces with the normal usage of the phrase

Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)

  • The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
  • The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would decrease existential risk
  • The book says that this does not avoid the issue though and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
  • It seems to me that this argument is basically saying that because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (or at least would be terrible from a common-sense perspective). Therefore, consequentialism is dangerous. I think I must be misunderstanding this argument as it seems obviously wrong as stated here. I would have preferred if the argument here was clearer
Comment by Alex HT on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T18:51:55.214Z · EA · GW

I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:

  • Do you think the field is progressing ‘well’, however you define ‘well’? 
  • What skills/types of people do you think AI forecasting needs?
  • What does progress look like in the field? Eg. does it mean producing a more detailed report, getting a narrower credible interval, getting better at making near-term AI predictions...(relatedly, how do we know if we're making progress?)
  • Can you make any super rough predictions like ‘by this date I expect we’ll be this good at AI forecasting’? 
Comment by Alex HT on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T18:42:59.640Z · EA · GW

I'd be keen to hear your thoughts on AI forecasting-forecasting. It seems like progress is being made on forecasting AI timelines. Can you say a bit about how quick that progress is and what progress looks like?

Comment by Alex HT on Lessons from my time in Effective Altruism · 2021-01-18T16:40:16.715Z · EA · GW

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)

Comment by Alex HT on Can people be persuaded by anything other than an appeal to emotion? · 2021-01-02T19:58:30.024Z · EA · GW

My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to do the rationing.

I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs identities. I think those can be developed naturally to some extent, and don’t seem like complete prerequisites to being an EA

Comment by Alex HT on Should Effective Altruists Focus More on Movement Building? · 2020-12-30T13:13:35.019Z · EA · GW

Thanks for writing this and contributing to the conversation :)

Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.

I do think the salience of movement building has been raised elsewhere eg:

Having said that, I share the feeling that movement building seems underrated. Given how impactful it seems, I would expect more EAs to want to use their careers to work on movement building.

One resolution to this apparent conflict is that the fraction of people who can be good at movement building long-term might be smaller than it first seems. For lots of the interventions that you suggest, strong social skills and a strong understanding of EA concepts seem important, as well as some general executional or project management ability. Though movement builders don’t necessarily have to be excellent in any of these domains, they have to be at least pretty good at all of them. They also have to be interested enough in all of them to do movement building. This narrows down the pool of people who can work in movement building. 

Another possible reason is that within the EA community, movement building careers are generally seen as less prestigious than more 'direct' kinds of work, and social incentives play a large role in career choice. For example, some people would be more impressed by someone doing technical AI safety research than by someone building talent pipelines into AI safety, even if the second one has more impact.

Also, as Aaron says, a lot of direct work has helpful movement building effects. 

I also agree with Aaron that looking at funding is a bit complicated with movement building, partly because movement building is probably cheaper than other things, but also that it can be hard to tease apart what's movement building and what's not. 

Comment by Alex HT on A case against strong longtermism · 2020-12-18T12:08:34.413Z · EA · GW

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Comment by Alex HT on Introducing High Impact Athletes · 2020-12-01T21:25:43.786Z · EA · GW

Thanks! I appreciate it :)

It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.

Just to clarify when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network" I think I agree, but that this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end up with a fairly homogeneous network eg. because your profession or university is homogenous. Sounds like Marcus is in this category himself (if tennis is mainly white, and his network is mainly tennis players).

Comment by Alex HT on Introducing High Impact Athletes · 2020-12-01T09:28:37.757Z · EA · GW

Was this meant as a reply to my comment or a reply to Ben's comment?

I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:55:50.920Z · EA · GW

Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:55:02.081Z · EA · GW

I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:09:39.307Z · EA · GW

It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).

Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in

Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the combination of them both is pretty worrying and I'd personally be in favour of changing it.

Edited to address downvotes: Obviously, it is not bad in itself that the team is all white, and I'm not implying that any deliberate filtering for white people has gone on. I just think it's something to be aware of - both for PR reasons (avoiding looking like white saviours) and for more substantive reasons (eg. building a movement and sub-movements that can draw on a range of experiences)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:55.385Z · EA · GW

Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.

Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:35.190Z · EA · GW

I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?

(I'm not saying you should mention your take on longtermism on the website.)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:18.146Z · EA · GW

This is really cool! Thanks for doing this :)

Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)

Comment by Alex HT on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T13:46:09.095Z · EA · GW

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

Comment by Alex HT on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T13:38:19.293Z · EA · GW

"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists."

Comment by Alex HT on Why we should grow One for the World chapters alongside EA student groups · 2020-11-04T14:18:07.547Z · EA · GW

Thanks for writing this! I and an EA community builder I know found it interesting and helpful.

I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:

  • OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)

  • OFTW groups may crowd out EA groups. If there's an OFTW group at a university, the EA group may have to compete, even if the groups are officially collaborating. In any case, the groups will be competing for the attention of the altruistically motivated people at the university

  • Because OFTW isn't cause neutral, it might not be a great introduction to EA. For some people, having lots of exposure to OFTW might even make them less receptive to EA, because of anchoring on a specific cause. As you say "Since it is a cause-specific organization working to alleviate extreme global poverty, that essentially erases EA’s central work of evaluating which causes are the most important." I agree with you that trying to impartially work out which cause is best to work on is core to EA

  • OFTW's direct effects (donations to end extreme poverty) may not be as uncontroversially good as they seem. See this talk by Hilary Greaves from the Student Summit: https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism

  • OFTW outreach could be so broad and shallow that it doesn't actually select that strongly for future dedicated EAs. In a comment below, Jack says "OFTW on average engages a donor for ~10-60 mins before they pledge (and pre-COVID this was sometimes as little as 2 mins when our volunteers were tabling)". Of course, people who take that pledge will be more likely to become dedicated EAs than the average student, but there are many other ways to select at that level

Comment by Alex HT on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T09:28:00.685Z · EA · GW

Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!) 

Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this),  but one economic model alone is very unlikely to resolve a big question.

Comment by Alex HT on If you value future people, why do you consider near term effects? · 2020-08-29T14:22:39.092Z · EA · GW

Thank you :) I've corrected it

Comment by Alex HT on Urgency vs. Patience - a Toy Model · 2020-08-20T09:09:24.168Z · EA · GW
  1. I think I've conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient. 
    1. (Side note: There are so many possible longtermist strategies! Any combination of urgent or patient, broad or narrow, and trajectory change or XRR is a distinct strategy. This is interesting as often people conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR but there's actually at least six other strategies)
  2. This model completely neglects meta strategic work along the lines of 'are we at the hinge of history?' and 'should we work on XRR or something else?'. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out either as increasing the probability of technological maturity or as improving the quality of the future. So I'm not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
  3. I had s-risks in mind when I caveated it as 'safely' reaching technological maturity, and was including s-risk reduction in XRR. But I'm not sure if that's the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality of that future is hugely negative. So it seems that s-risks are more like 'quality increasing' than 'probability increasing'. The argument for them being 'probability increasing' is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience)
Comment by Alex HT on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T07:41:00.657Z · EA · GW

Thanks for writing this, I like that it's short and has a section on subjective probability estimates. 

  1. What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically very fast institutional reform could be nearterm, and doing AI safety field building research in academia could hypothetically be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
  2. Is the main crux for 'Long-term x-risk matters more than short-term risk' around how transformative the next two centuries will be? If we start getting technologically mature, then x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
  3. What do you think about the assumption that 'efforts can reduce x-risk by an amount proportional to the current risk'? That seems maybe appropriate for medium levels of risk eg. 1-10%, but if risk is small, like 0.01-1%, it might get very difficult to halve the risk. 
Comment by Alex HT on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-20T07:16:47.058Z · EA · GW

This is really interesting and I'd like to hear more. Feel free to just answer the easiest questions:

Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia? 

What kinds of specialisation do you think we'd want - subject knowledge? Along different subject lines to academia? 

Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?

What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?

Comment by Alex HT on What (other) posts are you planning on writing? · 2020-08-20T06:52:02.823Z · EA · GW

I'd really like to see "If causes differ astronomically in EV, then personal fit in career choice is unimportant"

Comment by Alex HT on [deleted post] 2020-08-19T14:02:33.660Z

Toy Model

Let \(V\) be the value of the longterm future. Let \(p\) be the probability that our descendants safely reach technological maturity. Let \(Q\) be the expected quality of the longterm future, given that we safely reach technological maturity. Then the value of the longterm future is:

\(V = p \cdot Q\)

This ignores all the value in the longterm future that occurs when our descendants don't safely reach technological maturity. 

Assume that we can choose between doing some urgent longtermist work, say existential risk reduction (XRR), or some patient longtermist work, let's call this global priorities research (GPR). Assume that the existential risk reduction work increases the probability that our descendants safely reach technological maturity, but has no other effect on the quality of the future. Assume that the global priorities research increases the quality of the longterm future conditional on it occurring, but has no effect on existential risk.

Consider some small change in either existential risk reduction work or global priorities research. You can imagine this as $10 trillion, or 'what the EA community focuses on for the next 50 years', or something like that. Then for some small finite change in risk reduction, \(\Delta p\), or in global priorities research, \(\Delta Q\), the change in the value of the longterm future will be:

\(\Delta V_{XRR} = \Delta p \cdot Q\)

\(\Delta V_{GPR} = p \cdot \Delta Q\)

Dividing the first equation by the second:

\(\frac{\Delta V_{XRR}}{\Delta V_{GPR}} = \frac{\Delta p \cdot Q}{p \cdot \Delta Q} = \frac{\Delta p / p}{\Delta Q / Q}\)

Rewriting in more intuitive terms:

\(\frac{\Delta V_{XRR}}{\Delta V_{GPR}} = \frac{\text{fractional increase in probability of survival}}{\text{fractional increase in quality of the future}}\)

Critiquing the Model

I've made the assumption that x-risk reduction work doesn't otherwise affect the quality of the future, and patient longtermist work doesn't affect the probability of existential risk. Obviously, this isn't true. However, I don't think that reduces the value of the model much as I'm just trying to get a rough estimate of which produces more value - increasing the probability of space colonisation, or increasing the quality of the civilisation the colonises space. 

I have the suspicion that most of the value of broad, patient longtermist work (such as much of the philosophy being done at GPI, moral circle expansion, and similar interventions) ultimately cashes out in reducing existential risk rather than in independently improving the quality of the future conditional on survival.

I've made the assumption that we can ignore all value other than worlds where we safely reach technological maturity.  This seems pretty intuitive to me, given the likely quality, size, and duration of a technologically mature society, and my ethical views. 

Putting some numbers in 

Let's put some numbers in. Toby Ord thinks that with a big effort, humanity can reduce the probability of existential risk this century from  to 1/6. That would make the fractional increase in probability of survival  (it goes from  to ). Assume for simplicity that x-risk after this century is zero. 

For GPR to be as cost-effective as XRR given these numbers (so the ratio above equals 1), the fractional increase in the value of the future for a comparable amount of work would have to equal that fractional increase in the probability of survival.

Though Toby's numbers are really quite favourable to XRR, so putting in your own seems good. 

Eg. If you think x-risk is 10%, and we could reduce it to 5% with some amount of effort, then the fractional increase in probability of survival is about 5.6% (it goes from 90% to 95%). So for GPR to be cost competitive, we'd have to be able to increase the value of the future by 5.6% with a similar amount of work to what the XRR would have taken.
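As a quick sanity check of that arithmetic (just a sketch, using the illustrative 10% and 5% risk figures above rather than anything from the literature):

```python
# Fractional increase in the probability of survival when existential risk
# falls from 10% to 5% (illustrative numbers from the example above).
risk_before = 0.10
risk_after = 0.05

survival_before = 1 - risk_before  # 0.90
survival_after = 1 - risk_after    # 0.95

fractional_increase = survival_after / survival_before - 1
print(f"Fractional increase in survival: {fractional_increase:.1%}")  # ~5.6%
```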

Implications

Would it take a similar amount of effort to reduce the probability of existential risk this century by that much and to increase the value of the future, conditional on it occurring, by the same fraction? My intuition is that the latter is actually much harder than the former. Remember, you've got to make the whole future that much better for all time. What do you think?

Some things going into this are:

  • I think it's pretty likely that there will be highly transformative events over the next two centuries. It seems really hard to make detailed plans with steps that happen after these highly transformative events.
  • I'm not sure if research about how the world works now actually helps much for people understanding how the world works after these highly transformative events. If we're all digital minds, or in space, or highly genetically modified then understanding how today's poverty, ecosystems, or governments worked might not be very helpful.
  • The minds doing research after the transition might be much more powerful than current researchers. A lower bound seems like 200+IQ humans (and lots more of them than are researchers now), a reasonable expectation seems like a group of superhuman narrow AIs, an upper bound seems like a superintelligent general AI. I think these could do much better research, much faster than current humans working in our current institutions. Of course, building the field means these future researchers have more to work with when they get started. But I guess this is negligible compared to increasing the probability that these future researchers exist, given how much faster they would be.

Having said that, I don't have a great understanding of the route to value of longtermist research that doesn't contribute to reducing or understanding existential risk (and I think it's probably valuable, for epistemic modesty reasons).

I should also say that lots of actual 'global priorities research' does a lot to understand and reduce x-risk, and could be understood as XRR work. I wonder how useful a concept 'global priorities research' is, and whether it's too broad.

Questions

  • Do you think this model is right enough to be at all useful? 
  • What numbers do you think are appropriate to put into this model? If a given unit of XRR work increases the probability of survival by some fraction, how much value could the same work have created via trajectory change? Any vague/half-baked considerations here are appreciated.
  • What's the best way to conceptualise the value of non-XRR longtermist work? Is it 'make the future go better for the rest of time'? Does it rely on a lock-in event, like transformative technologies, to make the benefits semi-permanent? 
Comment by Alex HT on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-11T11:04:58.744Z · EA · GW

Thanks for writing this. I'd love to see your napkin math

Comment by Alex HT on Are there superforecasts for existential risk? · 2020-07-08T08:33:10.839Z · EA · GW

Thanks for the answer.

Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'

You're a good forecaster right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?

Comment by Alex HT on Are there superforecasts for existential risk? · 2020-07-08T08:32:52.868Z · EA · GW

Thanks for the answer.

Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'

You're a good forecaster right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?

Comment by Alex HT on Are there superforecasts for existential risk? · 2020-07-08T08:15:25.157Z · EA · GW

Thanks, those look good and I wasn't aware of them

Comment by Alex HT on The Moral Value of Information - edited transcript · 2020-07-03T19:16:20.072Z · EA · GW

Yep - the author can click on the image and then drag from the corner to enlarge them (found this difficult to find myself)

Comment by Alex HT on AI Governance Reading Group Guide · 2020-06-30T12:26:45.987Z · EA · GW

It's pretty blank - something like this

Comment by Alex HT on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-22T08:13:27.866Z · EA · GW

Yeah, that seems right to me.

On doubling consumption though, if you can suggest a policy that increases growth consistently, eventually you might cause consumption to be doubled (at some later time consumption under the faster growth will be twice as much as it would have been with the slower growth). Do you mean you don't think you could suggest a policy change that would increase the growth rate by much?

Comment by Alex HT on 'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper · 2020-06-16T09:28:37.966Z · EA · GW

Great to hear this has been useful!

I think if it's around 1 then yes, spreading longtermism probably looks better than accelerating growth. Though I don't know how expensive it is to double someone's consumption in the long run.

Doubling someone's consumption by just giving them extra money might cost $30,000 for 50 years = ~$0.5 million. It seems right to me that there are ways to reduce the discount rate that are much cheaper than half a million dollars for 13 basis points. Eg. some community building probably takes a person's discount rate from around 2% to around 0% for less than half a million dollars.

I don't know how much cheaper it might be to double someone's consumption by increasing growth, but I suspect that spreading longtermism still looks better for that value of the parameter.
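A rough sketch of that comparison, using only the ballpark figures in this comment (the ~$0.5 million cost of doubling consumption, the 13-basis-point equivalence, and the 2% to 0% shift from community building are all rough guesses here, not figures taken from the paper):

```python
# Back-of-the-envelope comparison of two ways to "buy" a lower discount rate,
# using the rough numbers from this comment (all of them guesses).

cost_of_doubling_consumption = 500_000  # ~$0.5M of direct transfers to double consumption
bps_equivalent_of_doubling = 13         # doubling consumption ~ a 13 basis point cut in the discount rate

cost_of_community_building = 500_000    # assumed upper bound per person persuaded
bps_from_community_building = 200       # discount rate going from ~2% to ~0%

bps_per_dollar_transfers = bps_equivalent_of_doubling / cost_of_doubling_consumption
bps_per_dollar_outreach = bps_from_community_building / cost_of_community_building

# On these numbers, community building comes out at least ~15x more effective per dollar.
print(bps_per_dollar_outreach / bps_per_dollar_transfers)
```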

How confident are you that it's around 1? I haven't looked into it and don't know how much consensus there is.

Comment by Alex HT on Existential Risk and Economic Growth · 2020-06-14T21:40:49.598Z · EA · GW

I've written a summary here in case you haven't seen it: https://forum.effectivealtruism.org/posts/CsL2Mspa6f5yY6XtP/existential-risk-and-growth-summary

Comment by Alex HT on If you value future people, why do you consider near term effects? · 2020-05-25T10:06:10.556Z · EA · GW

What do you think absorbers might be in cases of complex cluelessness? I see that delaying someone on the street might just cause them to spend 30 seconds less procrastinating, but how might this work for distributing bednets, or increasing economic growth?

Maybe there's a line of argument around nothing being counterfactual in the long-term - because every time you solve a problem, someone else was going to solve it eventually. Eg. if you didn't increase growth in some region, someone else would have 50 years later. And now you did it, they won't. But this just sounds like a weirdly stable system and I guess this isn't what you have in mind

Comment by Alex HT on [Stats4EA] Expectations are not Outcomes · 2020-05-20T10:57:31.107Z · EA · GW

Thanks for writing this. I hadn't thought about this explicitly and think it's useful. The bite-sized format is great. A series of posts would be great too.

Comment by Alex HT on Existential Risk and Economic Growth · 2020-05-08T10:42:46.221Z · EA · GW

So you think the hazard rate might go from around 20% to around 1%? That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

I don't have any specific stories tbh, I haven't thought about it (and maybe it's just pretty implausible idk).

Comment by Alex HT on Existential Risk and Economic Growth · 2020-05-05T11:31:14.162Z · EA · GW

Not the author but I think I understand the model so can offer my thoughts:

1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

The model is looking at general dynamics of risk from the production of new goods, and isn’t trying to look at AI in any kind of granular way. The timescales on which we see the inverted U-shape depend on what values you pick for different parameters, so there are different values for which the time axes would span decades instead of centuries. I guess that picking a different growth rate would be one clear way to squash everything into a shorter time. (Maybe this is pretty consistent with short/medium AI timelines, as they probably correlate strongly with really fast growth).

I think your point about AI messing up the results is a good one -- the model says that a boom in growth has a net effect to reduce x-risk because, while risk is increased in the short-term, the long-term effects cancel that out. But if AI comes in the next 50-100 years, then the long-term benefits never materialise.

2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

Sure, maybe there’s a lock-in event coming in the next 20-200 years which we can either

  • Delay (by decreasing growth) so that we have more time to develop safety features, or
  • Make more safety-focussed (by increasing growth) so it is more likely to lock in a good state

I’d think that what matters is resources (say coordination-adjusted-IQ-person-hours or whatever) spent on safety rather than time that could available to be spent on safety if we wanted. So if we’re poor and reckless, then more time isn’t necessarily good. And this time spent being less rich also might make other x-risks more likely. But that’s a very high level abstraction, and doesn’t really engage with the specific claim too closely so keen to hear what you think.

3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

The model doesn’t say anything about this kind of granular consideration (and I don’t have strong thoughts of my own).

4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

In the model, risk depends on production of consumption goods, rather than the level of consumption technology. The intuition behind this is that technological ideas themselves aren’t dangerous, it’s all the stuff people do with the ideas that’s dangerous. Eg. synthetic biology understanding isn’t itself dangerous, but a bunch of synthetic biology labs producing loads of exotic organisms could be dangerous.

But I think it might make sense to instead model risk as partially depending on technology (as well as production). Eg. once we know how to make some level of AI, the damage might be done, and it doesn’t matter whether there are 100 of them or just one.

And the reason growth isn’t neutral in the model is that there are also safety technologies (which might be analogous to making the world more robust to black balls). Growth means people value life more so they spend more on safety.

5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

Sounds right to me.

6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

The hazard rate does increase for the period that there is more production of consumption goods, but this means that people are now richer, earlier than they would have been so they value safety more than they would otherwise.

As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

Hmm yeah, maybe the risk depends in part on the rate of change of consumption technologies - because if no new techs are being discovered, it seems like we might be safe from anthropogenic x-risk.

But, even if you believe that the hazard rate would decay in this situation, maybe what's doing the work is that you're imagining that we're still doing a lot of safety research, and thinking about how to mitigate risks. So that the consumption sector is not growing, but the safety sector continues to grow. In the existing model, the hazard rate could decay to zero in this case.

I guess I'm also not sure if I share the intuition that the hazard rate would decay to zero. Sure, we don't have the technology right now to produce AGI that would constitute an existential risk but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways? It seems plausible to me that if we kept our current level of technology and production then we'd have a non-trivial chance each year of killing ourselves off.

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

Comment by Alex HT on What posts do you want someone to write? · 2020-04-30T15:06:21.376Z · EA · GW

Now done here. It's a ~10 page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).