Posts

Venn diagrams of existential, global, and suffering catastrophes 2020-07-15T12:28:12.651Z · score: 39 (14 votes)
Some history topics it might be very valuable to investigate 2020-07-08T02:40:17.734Z · score: 56 (23 votes)
3 suggestions about jargon in EA 2020-07-05T03:37:29.053Z · score: 83 (41 votes)
Civilization Re-Emerging After a Catastrophic Collapse 2020-06-27T03:22:43.226Z · score: 28 (12 votes)
I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. 2020-05-11T09:35:22.543Z · score: 16 (7 votes)
Existential risks are not just about humanity 2020-04-28T00:09:55.247Z · score: 15 (8 votes)
Differential progress / intellectual progress / technological development 2020-04-24T14:08:52.369Z · score: 30 (17 votes)
Clarifying existential risks and existential catastrophes 2020-04-24T13:27:43.966Z · score: 22 (10 votes)
A central directory for open research questions 2020-04-19T23:47:12.003Z · score: 51 (22 votes)
Database of existential risk estimates 2020-04-15T12:43:07.541Z · score: 74 (31 votes)
Some thoughts on Toby Ord’s existential risk estimates 2020-04-07T02:19:31.217Z · score: 50 (25 votes)
My open-for-feedback donation plans 2020-04-04T12:47:21.582Z · score: 25 (15 votes)
What questions could COVID-19 provide evidence on that would help guide future EA decisions? 2020-03-27T05:51:25.107Z · score: 7 (2 votes)
What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? 2020-03-27T03:05:46.791Z · score: 5 (1 votes)
Fundraising for the Center for Health Security: My personal plan and open questions 2020-03-26T16:53:45.549Z · score: 14 (7 votes)
Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? 2020-03-19T06:07:03.834Z · score: 11 (6 votes)
[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society 2020-03-14T09:04:10.955Z · score: 14 (5 votes)
Suggestion: EAs should post more summaries and collections 2020-03-09T10:04:01.629Z · score: 39 (17 votes)
Quotes about the long reflection 2020-03-05T07:48:36.639Z · score: 50 (23 votes)
Where to find EA-related videos 2020-03-02T13:40:18.971Z · score: 19 (11 votes)
Causal diagrams of the paths to existential catastrophe 2020-03-01T14:08:45.344Z · score: 33 (15 votes)
Morality vs related concepts 2020-02-10T08:02:10.570Z · score: 14 (9 votes)
What are information hazards? 2020-02-05T20:50:25.882Z · score: 11 (10 votes)
Four components of strategy research 2020-01-30T19:08:37.244Z · score: 18 (12 votes)
When to post here, vs to LessWrong, vs to both? 2020-01-27T09:31:37.099Z · score: 12 (6 votes)
Potential downsides of using explicit probabilities 2020-01-20T02:14:22.150Z · score: 25 (14 votes)
[Link] Charity Election 2020-01-19T08:02:09.114Z · score: 8 (5 votes)
Making decisions when both morally and empirically uncertain 2020-01-02T07:08:26.681Z · score: 11 (5 votes)
Making decisions under moral uncertainty 2020-01-01T13:02:19.511Z · score: 33 (12 votes)
MichaelA's Shortform 2019-12-22T05:35:17.473Z · score: 10 (4 votes)
Are there other events in the UK before/after EAG London? 2019-08-11T06:38:12.163Z · score: 9 (7 votes)

Comments

Comment by michaela on A list of good heuristics that the case for AI X-risk fails · 2020-07-16T10:53:28.274Z · score: 3 (2 votes) · EA · GW

I'd second the suggestion of checking out the comments.

There are also a few extra comments on the LessWrong version of the post, which aren't on the Alignment Forum version.

Here's the long-winded comment I made there:

I think this list is interesting and potentially useful, and I'm glad you put it together. I also generally think it's a good and useful norm for people to seriously engage with the arguments they (at least sort-of/overall) disagree with.

But I'm also a bit concerned about how this is currently presented. In particular:

  • This is titled "A list of good heuristics that the case for AI x-risk fails".
  • The heuristics themselves are stated as facts, not as something like "People may believe that..." or "Some claim that..." (using words like "might" could also help).
    • A comment of yours suggests you've already noticed this. But I think it'd be pretty quick to fix.
  • Your final paragraph, a very useful caveat, comes after listing all the heuristics as facts.

I think these things will have relatively small downsides, given the likely quite informed and attentive audience here. But a bunch of psychological research I read a while ago (2015-2017) suggests there could still be some degree of downside. E.g.:

Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.

And also:

Information presented in news articles can be misleading without being blatantly false. Experiment 1 examined the effects of misleading headlines that emphasize secondary content rather than the article’s primary gist. [...] We demonstrate that misleading headlines affect readers’ memory, their inferential reasoning and behavioral intentions, as well as the impressions people form of faces. On a theoretical level, we argue that these effects arise not only because headlines constrain further information processing, biasing readers toward a specific interpretation, but also because readers struggle to update their memory in order to correct initial misconceptions.

Based on that sort of research (for a tad more info on it, see here), I'd suggest:

  • Renaming this to something like "A list of heuristics that suggest the case for AI x-risk is weak" (or even "fails", if you've said something like "suggest" or "might")
  • Rephrasing the heuristics so they're stated as disputable (or even false) claims, rather than facts. E.g., "Some people may believe that this concern is being voiced exclusively by non-experts like Elon Musk, Steven Hawking, and the talkative crazy guy next to you on the bus." ETA: Putting them in quote marks might be another option for that.
  • Moving what's currently the final paragraph caveat to before the list of heuristics.
  • Perhaps also adding sub-points about the particularly disputable dot points. E.g.:
    • "(But note that several AI experts have now voiced concern about the possibility of major catastrophes from advanced AI system, although there's still not consensus on this.)"

I also recognise that several of the heuristics really do seem good, and probably should make us at least somewhat less concerned about AI. So I'm not suggesting trying to make the heuristics all sound deeply flawed. I'm just suggesting perhaps being more careful not to end up with some readers' brains, on some level, automatically processing all of these heuristics as definite truths that definitely suggest AI x-risk isn't worthy of attention.

Sorry for the very unsolicited advice! It's just that preventing gradual slides into false beliefs (including from well-intentioned efforts that do actually contain the truth in them!) is sort of a hobby-horse of mine.

[And then I replied to myself with the following]

Also, one other heuristic/proposition that, as far as I'm aware, is simply factually incorrect (rather than "flawed but in debatable ways" or "actually pretty sound") is "AI researchers didn't come up with this concern, Hollywood did. Science fiction is constructed based on entertaining premises, not realistic capabilities of technologies." So it may also be worth pointing out there, in some manner, that prominent AI researchers did in fact raise concerns somewhat similar to those discussed now quite early on.

E.g., I. J. Good apparently wrote in 1959:

Whether [an intelligence explosion] will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.

Comment by michaela on Collection of good 2012-2017 EA forum posts · 2020-07-16T07:19:27.348Z · score: 4 (2 votes) · EA · GW

Just remembered another 2017 post I liked and reference often (with the memory triggered by referencing it again): Considering Considerateness: Why communities of do-gooders should be exceptionally considerate.

Comment by michaela on Quotes about the long reflection · 2020-07-15T00:11:41.579Z · score: 4 (2 votes) · EA · GW

On (b): The first thing to note is that the Long Reflection doesn't require stopping any actions "that could have a long term impact", and certainly not stopping people considering such actions. (I assume by "consider" you meant "consider doing it this year", or something like that?)

It requires stopping people taking actions that we're not yet confident won't turn out to have been major, irreversible mistakes. So people could still do things we're already very confident are good, or things that are relatively minor.

Some good stuff from The Precipice on this, mainly from footnotes:

The ultimate aim of the Long Reflection would be to achieve a final answer to the question of which is the best kind of future for humanity. [...]
We would not need to fully complete this process before moving forward. What is essential is to be sufficiently confident in the broad shape of what we are aiming at before taking each bold and potentially irreversible action - each action that could plausibly lock in substantial aspects of our future trajectory.

Also:

We might adopt the guiding principle of minimising lock-in. Or to avoid the double negative, of preserving our options.
[Endnote:] Note that even on this view options can be instrumentally bad if they would close off many other options. So there would be instrumental value to closing off such options (for example, the option of deliberately causing our own extinction). One might thus conclude that the only thing we should lock in is the minimisation of lock-in.
This is an elegant and reasonable principle, but could probably be improved upon by simply delaying our ability to choose such options, or making them require a large supermajority (techniques that are often used when setting up binding multiparty agreements such as constitutions and contracts). That way we help avoid going extinct by accident (a clear failing of wisdom in any society), while still allowing for the unlikely possibility that we later come to realise our extinction would be for the best.

Also:

There may yet be ethical questions about our longterm future which demand even more urgency than existential security, so that they can’t be left until later. These would be important to find and should be explored concurrently with achieving existential security.

Somewhat less relevant:

Protecting our potential (and thus existential security more generally) involves locking in a commitment to avoid existential catastrophe. Seen in this light, there is an interesting tension with the idea of minimising lock-in (here [link]). What is happening is that we can best minimise overall lock-in (coming from existential risks) by locking in a small amount of other constraints.
But we should still be extremely careful locking anything in, as we might risk cutting off what would have turned out to be the best option. One option would be to not strictly lock in our commitment to avoid existential risk (e.g. by keeping total risk to a strict budget across all future centuries), but instead to make a slightly softer commitment that is merely very difficult to overturn. Constitutions are a good example, typically allowing for changes at later dates, but setting a very high bar to achieving this.

With this in mind, we can tweak your question to "Some actions that could turn out to be major, irreversible mistakes from the perspective of the long-term future could be taken unilaterally. How could people be stopped from doing that during the Long Reflection?"

This ends up being roughly equivalent to the question "How could we get existential risk per year low enough that we can be confident of maintaining our potential for the entire duration of the Long Reflection (without having to take actions like locking in our best guess to avoid being preempted by something worse)?"
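As a rough, purely illustrative sketch of what "low enough" might mean (the 90% survival target and the durations below are arbitrary assumptions of mine, not estimates):

def max_annual_risk(target_survival, years):
    # Largest constant annual existential risk compatible with a given
    # probability of surviving the whole period: solve (1 - p)^years = target.
    return 1 - target_survival ** (1 / years)

print(max_annual_risk(0.9, 1_000))   # ~1e-4: roughly 1-in-10,000 per year for a 1,000-year reflection
print(max_annual_risk(0.9, 10_000))  # ~1e-5: roughly 1-in-100,000 per year over 10,000 years

Of course, that just quantifies the target; it says nothing about how to actually get risk that low.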

I don't think anyone has a detailed answer to that. But one sort-of promising thing is that we may have to end up with some decent ideas of answers to that in order to just avoid existential catastrophe in the first place. I.e., conditional on humanity getting to a Long Reflection process, my credence that humanity has good answers to those sorts of problems is higher than my current credence on that matter.

(This is also something I plan to discuss a bit more in those upcoming(ish) drafts.)

Comment by michaela on Quotes about the long reflection · 2020-07-15T00:02:48.323Z · score: 2 (1 votes) · EA · GW

I think being left slightly confused about the long reflection after reading these quotes is quite understandable. These quotes don't add up to a sufficiently detailed treatment of the topic.

Luckily, since I posted this, Toby Ord gave a somewhat more detailed treatment in Chapter 7 of The Precipice, as well as in his 80k interview. These sources provide Ord's brief thoughts on roughly the questions you raise. Though I still think more work needs to be done here, including on matters related to your question (b). I've got some drafts coming up which will discuss similar matters, and hopefully MacAskill's book on longtermism will go into more detail on the topic as a whole.

On (a): I don't think everyone should be working on these questions, nor does Ord. I'd guess MacAskill doesn't, though I'm not sure. He might mean something like "the 10 billion people interested and suited to this work, out of the 20+ billion people alive per generation at that point", or "this is one of the major tasks being undertaken by humanity, with 10 billion people per generation thus contributing at least indirectly, e.g. by keeping the economy moving".

I also suspect we should, or at least will, spend under 10,000 years on this (even if we get our act together regarding existential risks).

Ord writes in The Precipice:

It is unclear [exactly how] long such a period of reflection would need to be. My guess is that it would be worth spending centuries (or more) before embarking on major irreversible changes to our future - committing ourselves to one vision or another. This may sound like a long time from our perspective, but life and progress in most areas would not be put on hold. Something like the Renaissance may be a useful example to bear in mind, with intellectual projects spanning several centuries and many fields of endeavour. If one is thinking about extremely longterm projects, such as whether and how we should settle other galaxies (which would take millions of years to reach), then I think we could stand to spend even longer making sure we are reaching the right decision.

Comment by michaela on Tips for Volunteer Management · 2020-07-14T12:15:29.561Z · score: 5 (3 votes) · EA · GW

Thanks for this post.

Another post on this general topic which readers might also find interesting is Case Study: Volunteer Research and Management at ALLFED. (I didn't write that post and haven't worked with ALLFED, but what I've heard about their volunteer program sounds impressive to me.)

Comment by michaela on MichaelA's Shortform · 2020-07-14T02:50:22.178Z · score: 2 (1 votes) · EA · GW

That looks very helpful - thanks for sharing it here!

Comment by michaela on You have more than one goal, and that's fine · 2020-07-12T12:16:59.537Z · score: 4 (2 votes) · EA · GW

Thanks for this post. I think it provides a useful perspective, and I've sent it to a non-EA friend of mine who's interested in EA, but concerned by the way that it (or utilitarianism, really) can seem like it'd be all-consuming.

I also found this post quite reminiscent of Purchase Fuzzies and Utilons Separately (which I also liked). And something that I think might be worth reading alongside this is Act utilitarianism: criterion of rightness vs. decision procedure.

Comment by michaela on Collection of good 2012-2017 EA forum posts · 2020-07-12T12:11:46.942Z · score: 6 (3 votes) · EA · GW

Thanks for this collection!

Another 2017 post I quite liked and have often drawn on in my thinking or in conversation is Act utilitarianism: criterion of rightness vs. decision procedure.

Comment by michaela on Some history topics it might be very valuable to investigate · 2020-07-12T10:51:56.570Z · score: 2 (1 votes) · EA · GW

Thanks for those answers and thoughts!

And good idea to add the Foundational Questions link to the directory - I've now done so.

Comment by michaela on Some history topics it might be very valuable to investigate · 2020-07-12T02:46:08.492Z · score: 3 (2 votes) · EA · GW

Thanks for sharing those topic ideas, links to resources, and general thoughts on the intersection of history research and EA! I think this post is made substantially more useful by now having your comment attached. And your comment has also further increased how excited I'd be to see more EA-aligned history research (with the caveats that this doesn't necessarily require having a history background, and that I'm not carefully thinking through how to prioritise this against other useful things EAs could be doing).

If you do end up making a top-level post related to your comment, please do comment about it here and on the central directory of open research questions.

It's long been on my to-do list to go through GPI and CLR's research agendas more thoroughly to work out if there are other suggestions for historical research on there. I haven't done that to make this post so I may have missed things.

Yeah, that sounds valuable. I generated my list of 10 topics basically just "off the top of my head", without looking at various research agendas for questions/topics for which history is highly relevant. So doing that would likely be a relatively simple step to make a better, fuller version of a list like this.

Hopefully SI's work offers a second example of an exception to the "recurring theme" you note in that 1) SI's case studies are effectively a "deeper or more rigorous follow-up analysis" after ACE's social movement case study project -- if anything, I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them, and 2) I at least had an undergraduate degree in history :D

Yeah, that makes sense to me. I've now edited in a mention of SI after AI Impacts. I hadn't actively decided against mentioning SI, just didn't think to do so. And the reason for that is probably just that I haven't read much of that work. (Which in turn is probably because (a) I lean longtermist but don't prioritise s-risks over x-risks, so the work by SI that seems most directly intended to improve farm animal advocacy seems to me valuable but not a top priority for my own learning, and (b) I think not much of that work has been posted to the Forum?) But I read and enjoyed "How tractable is changing the course of history?", and the rest of what you describe sounds cool and relevant.

Focusing in on "I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them" - do you think that that can't be resolved by e.g. cross-posting "executive summaries" to the EA Forum, so that people at least read those? (Genuine question; I'm working on developing my thoughts on how best to do and disseminate research.)

Also, that last point reminds me of another half-baked thought I've had but forgot to mention in this post: Perhaps the value of people who've done such history research won't entirely or primarily be in the write-ups which people can then read, but rather in EA then having "resident experts" on various historical topics and methodologies, who can be the "go-to person" for tailored recommendations and insights regarding specific decisions, other research projects, etc. Do you have thoughts on that (rather vague) hypothesis? For example, maybe even if few people read SI's work on those topics, if they at least know that SI did that research, they can come to SI when they have specific, relevant questions and thereby get a bunch of useful input in a quick, personalised way.

(This general idea could also perhaps apply to research more broadly, not just to history research for EA, but that's the context in which I've thought about it recently.)

Comment by michaela on The career coordination problem · 2020-07-11T03:13:03.091Z · score: 4 (2 votes) · EA · GW

I'd agree with the idea people should take personal fit very seriously, with passion/motivation for a career path being a key part of that. And I'd agree with your rationale for that.

But I also think that many people could become really, genuinely fired up about a wider range of career paths than they might currently think (if they haven't yet tried or thought about those career paths). And I also think that many people could be similarly good fits for, or similarly passionate about, multiple career paths. For these people, knowing which career path will have the greatest need for more people like them in a few years can be very useful as a way of shortlisting the things to test one's ability to become passionate about, and/or as a "tie-breaker" between paths one has already shortlisted based on passions/motivations/fit.

For example, I'm currently quite passionate about research, but have reason to believe I could become quite passionate about operations-type roles, about roles at the intersection of those two paths (like research management), and maybe about other paths like communications or non-profit entrepreneurship. So which of those roles - rather than which roles in general - will be the most marginally useful in a few years' time seems quite relevant for my career planning.

(I think this is probably more like a different emphasis to your comment, rather than a starkly conflicting view.)

Comment by michaela on The career coordination problem · 2020-07-11T03:04:47.636Z · score: 3 (2 votes) · EA · GW
we’ve found that releasing substandard data can get people on the wrong track

I've seen indications and arguments that suggest this is true when 80,000 Hours releases data or statements they don't want people to take too seriously. Do you (or does anyone else) have thoughts on whether anyone releasing "substandard" (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on that topic at all?

Basically, I'm tentatively inclined to think that some explicit data is often better than no explicit data, as long as it's properly caveated, because people can then just update their beliefs by the appropriate amount. (Though that's definitely not fully or always true; see e.g. here.) But then 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to.

So maybe it'd be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-11T02:35:56.430Z · score: 2 (1 votes) · EA · GW

Yes, I think these are all valid points. So my suggestion would indeed be to often provide a brief explanation and/or a link, rather than to always do that. I do think I've sometimes seen people explain jargon unnecessarily in a way that's a bit awkward and presumptuous, and perhaps sometimes been that person myself.

In my articles for the EA Forum, I often include just links rather than explanations, as that gives readers the choice to get an explanation if they wish. And in person, I guess I'd say that it's worth:

  • entertaining both the hypothesis that using jargon without explanation would make someone feel confused/excluded, and the hypothesis that explaining jargon would make the person feel they're perceived as more of a "newcomer" than they really are
  • then trying to do whatever seems best based on the various clues and cues
    • with the options available including more than just "assume they know the jargon" and "assume they don't and therefore do a full minute spiel on it"; there are also options like giving a very brief explanation that feels natural, or asking if they've come across that term

One last thing I'd say is that I think the fact jargon is used as a marker of belonging is also another reason to sometimes use jargon-free statements or explain the jargon, to avoid making people who don't know the jargon feel excluded. (I guess I intended that point to be implicit in saying that explanations and/or hyperlinks of jargon "may make [people] feel more welcomed and less disorientated or excluded".)

Comment by michaela on Some history topics it might be very valuable to investigate · 2020-07-11T02:13:27.039Z · score: 2 (1 votes) · EA · GW

That definitely sounds good to me. My personal impression is that there are many EAs who could be doing some good research on-the-side (in a volunteer-type capacity), and many research questions worth digging into, and that we should therefore be able to match these people with these questions and get great stuff done. And it seems good to have some sort of way of coordinating that.

Though I also get the impression that this is harder than it sounds, for reasons I don't fully understand, and that mentorship (rather than just collaboration) is also quite valuable.

So I'd suggest someone interested in setting up that sort of crowdsourcing or coordination system might want to reach out to EdoArad, Peter Slattery, and/or David Janku. The first two of those people commented on my central directory for open research questions, and David is involved with (runs?) Effective Thesis. All seem to know more than me about this sort of thing. And it might even make sense to somehow combine any new attempts at voluntary research crowdsourcing or collaborations with initiatives they've already set up.

Comment by michaela on Effective Thesis: updates from 2019 and call for collaborators · 2020-07-09T10:54:53.808Z · score: 2 (1 votes) · EA · GW
2) Focusing on non-EAs and people on the borders of the community rather than on EAs - it seems to me so far that many people who are highly involved in EA can find similarly good advice as we would be able to give them in their own circles so the counterfactual impact in this group is smaller.

That sounds right to me, and indeed like an argument that pushes in favour of focusing on non-EAs or people on the borders. (Though I don't know how to balance that against other arguments.)

In fact, a related point that came to mind is that it seems possible Effective Thesis could be a good intervention simply from the perspective of expanding the EA community, separate from expanding the EA-aligned researcher community or the amount of high-impact research done.

For example, maybe Effective Thesis looks to non-EA uni students like a concrete service they just want to engage with for their own career plans, without them having to be sold yet on anything more than a vague sense of "having an impact". And then via Effective Thesis and the coaching, they learn about EA and priority cause areas, learn how they can help, and get useful EA connections. And then even if they move out of research later, they might do something like working on important problems in the civil service or founding a high-impact charity, and maintain an EA mindset and connections to the community.

Whereas an EA group at their university might not have appealed to that person, as it didn't obviously advance their existing plans in a concrete way.

I think part of why that seems plausible to me is that I think a similar process might help explain why 80,000 Hours and GiveWell have both served well for expanding the EA community. They both offer a service that can seem directly useful to anyone who at least just wants to "have an impact", in some vague sense, even if that person isn't yet bought into things like utilitarianism or caring about various neglected populations (people in other countries, future generations, nonhumans, etc.).

Have you thought about how much impact ET might be able to have on just expanding the EA community?

Comment by michaela on Effective Thesis: updates from 2019 and call for collaborators · 2020-07-09T10:54:33.142Z · score: 2 (1 votes) · EA · GW
“Working with ALLFED was greatly influenced by effective thesis, as it allowed me to contact them in the first place. I might have come across ALLFED without effective thesis, but I am relatively sure that I would not have contacted them.”

I'm glad Effective Thesis had this impact. But that quote also gave me the feeling that we also need to simply more strongly push the message to just reach out to people. Similar to the message just apply for things. There are costs to reaching out to people and applying (a bit of time from both parties), but they're usually quite small, really. And the upsides can be huge - it seems it can often meaningfully improve one's research or career plans, and perhaps even sometimes more than double the lifetime impact one will have. (I don't have any data on this, but it seems very plausible.)

It seems like there might be many EAs waiting for "permission" or "validation", or self-selecting out of reaching out to people, applying to things, doing cheap tests of fit, etc. I don't know how to resolve that issue at scale, but hopefully harping on about the value of just going for it can help a bit.

Comment by michaela on Effective Thesis: updates from 2019 and call for collaborators · 2020-07-09T10:52:25.149Z · score: 2 (1 votes) · EA · GW

Thanks for your work on Effective Thesis, and for this post. I haven't interacted with Effective Thesis myself, but it strikes me as the sort of thing that should exist and that fills an important gap, so I'm glad you've created this initiative to fill that gap. I also found the taxonomy in "Other interventions with the same goal" surprisingly interesting.

Some questions and thoughts came to mind, which I'll split into a few comments.

He said that Effective Thesis got him involved in EA which he hadn’t heard about before and also helped him with specific topic choice (counterfactually 40 %).

Do you mean he estimated a 40% chance he would've chosen the same topic anyway? Or a 40% chance he would've heard about EA later anyway? Or something else?

A comparison to the previous report (August 2018 - January 2019) would suggest that in 2019 there were fewer people applying from Europe (by 13 %).

Were there actually fewer people applying from Europe? Or more people from elsewhere, such that the proportion from Europe fell? One story sounds like some degree of shrinking, whereas the other sounds like growth and diversification.

I would expect there to be quite a lot of research-talented people (especially in non-English speaking countries because of the language and geographic barriers) who would be good to reach out to and who could produce very good research later on. I would expect outreach to these people to be significantly more neglected in comparison with outreach to people from prestigious unis, and thus it might be effective to focus on them.

What kinds of "outreach" are you suggesting are neglected for those people, compared to people from prestigious unis? Things like EA outreach in general? Or things more similar to Effective Thesis? I would've thought there are few other things very much like Effective Thesis anywhere, including at prestigious unis, except I guess for students who can get positions at places like GPI.

I would say the goal of Effective Thesis is “influencing which research is generated” (in comparison with e.g. “improving science as a whole”).

Unimportant point that came to mind: This seems reminiscent of the idea of advancing differential progress, rather than just any progress (in the sense of developments or advancements that aren't necessarily morally good).

Comment by michaela on Democracy Promotion as an EA Cause Area · 2020-07-09T00:15:37.527Z · score: 2 (1 votes) · EA · GW

Oh, yes, something I forgot to mention explicitly was that it sounded like you were talking primarily about timescales of centuries, which I don’t think is typically what longtermists are focused on. I think the typical view among longtermists is something like the following: "If things go well, humanity - or whatever we become - could last for such an incredibly long time that even a very small tweak to our trajectory, which lasts a substantial portion of that time, will ultimately matter a huge deal.* And it can matter much more than a larger 'boost' that would ultimately 'wash out' on the scale of years, decades, or centuries."

This isn't to say that economic growth isn't important for longtermists, but rather that, if it is important to longtermists, that may be primarily because of its effects on other aspects of our trajectory. E.g., existential risk. (And it's currently not totally clear whether it's good or bad for x-risk, though I think the evidence leans somewhat towards it being good; see e.g. Existential Risk and Economic Growth. Other sources are linked to from my crucial questions series.)

Though growth could also matter more "directly", because a faster spread to the stars may reduce the ultimate astronomical waste. (There are also longtermists who may not care about astronomical waste, such as suffering-focused longtermists.)

In any case, the way you made the argument felt more "medium-termist" than "longtermist" to me. I mention that feeling partly because it may provide useful info regarding how persuasive other longtermists would find that argument, and whether they'd feel it's really a "longtermist" argument.

*If you want to be mathy, you can think of this as the area between two slightly different curves ultimately being very large, if we travel a far enough distance along the x axis.
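A minimal toy sketch of that footnote's point, with entirely made-up numbers (the per-year difference and the horizon lengths are illustrative assumptions only):

def cumulative_difference(per_year_difference, horizon_years):
    # "Area between the curves": total extra value from a trajectory that is
    # slightly better every year, accumulated over the whole horizon.
    return per_year_difference * horizon_years

for horizon in (100, 10_000, 1_000_000):
    print(horizon, cumulative_difference(0.01, horizon))
# The per-year difference is tiny, but the total grows linearly with the
# horizon, so over a long enough future it can exceed a much larger one-off
# "boost" that washes out after a few decades or centuries.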

Comment by michaela on Exploring the Streisand Effect · 2020-07-08T10:22:53.916Z · score: 4 (2 votes) · EA · GW

Thanks for this post; I found it quite interesting, and very nicely written. I look forward to reading your "thoughts on what EAs in particular can learn from this", if you do get around to writing those up.

Comment by michaela on Some history topics it might be very valuable to investigate · 2020-07-08T09:17:33.036Z · score: 8 (3 votes) · EA · GW

Oh, another very broad category of topics that I perhaps should've mentioned explicitly is the history of basically any specific topic EAs care about. E.g., history of concerns about animal welfare, arguments about AI risk and AI safety, the randomista movement, philanthropy ...

Comment by michaela on Democracy Promotion as an EA Cause Area · 2020-07-08T09:10:55.041Z · score: 5 (3 votes) · EA · GW

Questions about the neglectedness and tractability of this area

You write:

Another prominent source of funding for pro-democracy programs is the Open Society Foundations (OPF). According to their website (OpenSocietyFoundations.org), the organization's 2020 budget of $1.2 billion included $140.5 million to improve democratic practices, along with another $77.3 million for human rights movements and institutions.

But then you also write:

With a few exceptions, democracy promotion seems to be largely neglected outside of the promotion of U.S. foreign policy interests

Do you say that because the democracy-promotion budget of the OPF and similar actors is much smaller than that of US government bodies? Or because you see the OPF and similar actors as also promoting US foreign policy interests?

Also, regarding tractability, you write:

A review essay on the efficacy of tools of external democracy promotion finds that non-coercive tools like foreign aid that is conditioned on democratic reforms and election monitoring are effective [...]
Another effective tool of democracy promotion is pro-democracy foreign aid. [...] The bulk of the recent evidence suggests that increasing pro-democracy aid may prove to be an effective intervention for EA organizations”

Is the idea here always that pro-democracy foreign aid creates an incentive for regimes to make democratic reforms so that they get (or continue to get) that aid? Or is this sort of aid sometimes effective merely by helping prop up good institutions or things like that?

I ask because, if the benefits are always or primarily caused by the incentive effects, I'd worry about whether EA would really be able to throw enough money at this to even get noticed, when we're talking about national budgets.

What are your thoughts on that?

Comment by michaela on Democracy Promotion as an EA Cause Area · 2020-07-08T09:07:41.969Z · score: 5 (3 votes) · EA · GW

Thanks for writing this! I found it quite interesting, and appreciated the clear engagement with the relevant academic literatures.

Some thoughts on the importance of this cause area

1. It seems like work to promote democracy could also be quite good from the perspective of reducing long-term risks from malevolent actors. And perhaps some interventions proposed in that post would also be good from the perspective of the other benefits of democracy. So there may be synergies between these two new/mini/sub cause areas.

2.

There are reasons to think democratization is important not only in the near term but also from a longtermist perspective. Political institutions are a stronger determinant of a country's wealth than weather or culture. [6] There is substantial empirical evidence that economic development is highly path dependent-- economic and political institutions persist for hundreds of years, and have corresponding consequences for economic development.[7] Because of long-term institutional persistence, improving democratic institutions today can lead to better institutions--and correspondingly better economic outcomes--not only in the near-term but also for the "long-term future."

I think most "longtermists" don't see increasing economic growth as especially valuable except in relation to how it affects where humanity "ends up" (e.g., via affecting existential risk, global catastrophic risk, or how wide our moral circles ultimately are). For example, Benjamin Todd from 80k writes:

One way to help the future we don’t think is a contender is speeding it up. Some people who want to help the future focus on bringing about technological progress, like developing new vaccines, and it’s true that these create long-term benefits. However, we think what most matters from a long-term perspective is where we end up, rather than how fast we get there. Discovering a new vaccine probably means we get it earlier, rather than making it happen at all.

I share that sort of view to some extent, though I think it's slightly overstated, given that I think speeding up development could affect how much of the universe we can ultimately reach (this is related to the astronomical waste argument).

In my draft series on Crucial questions for longtermists, I include the question "How does speeding up development affect the expected value of the future?", some sub-questions, and a collection of sources related to these questions. You or other readers might find those sources interesting.

(By the way, I've now added links to this post from that series, under the question "What are the best actions for speeding up development? How good are they?" and under the topic "Importance of, and best approaches to, improving institutions and/or decision-making".)

3. I have a vague sense that EAs engaging in democracy promotion, especially under an explicitly EA banner, might have downsides such as making the Chinese government averse to EA, which would seem plausibly quite bad for other issues (e.g., ability to coordinate on AI safety or to help foster animal welfare communities in China).

I'd also obviously feel quite uncomfortable about not discussing any pro-democracy efforts for fear of upsetting non-democratic regimes. And all cause areas will face some downsides. But this does seem like something perhaps worth bearing in mind when deciding how much to prioritise this cause area against other cause areas that also plausibly deserve our resources anyway.

Comment by michaela on Democracy Promotion as an EA Cause Area · 2020-07-08T08:47:23.433Z · score: 4 (2 votes) · EA · GW

I haven't read any of the papers cited here, nor am I especially well-versed in relevant areas of economics. But my initial reaction to "Most of the biggest growth accelerations have occurred in autocracies" is that that sounds like a correlation that's relatively unlikely to be explained by autocracy causing more growth, and relatively unlikely to conflict with the idea that democracy causes better growth (with all other factors held constant).

In particular, I've heard it argued that "catch-up" growth is substantially easier than "cutting-edge" growth. If that's true, then that might suggest autocracies might have experienced growth accelerations more often than democracies not because being an autocracy causes a higher chance of having a growth acceleration, but rather because:

  • it happens to be that autocracies have less often been on the "cutting edge" than democracies, so they've more often been able to engage in "catch-up" growth
  • on top of that, being an autocracy makes it more likely that a country will have deceleration episodes, which then creates additional opportunities for "catch-up" growth

A related or perhaps identical possibility is that autocracies may sometimes implement policies that actively harm their economies in major ways, such that merely removing those policies could cause rapid growth. (I have in mind the transition from Mao's policies to Deng Xiaoping's policies, though I don't know much about that transition, really.)

This is all rather speculative. I'd be interested to hear thoughts on this from someone with more knowledge of the area.

Comment by michaela on Democracy Promotion as an EA Cause Area · 2020-07-08T08:38:20.920Z · score: 5 (3 votes) · EA · GW

I basically agree with this, and was going to say something similar.

Though it does seem likely that EA organisations would be somewhat less likely to be perceived as biased or self-interested than US government agencies would be. And this post suggests that the US government is "Perhaps the largest single source of spending on global democracy promotion". So it seems to me like:

  • it was fair for the OP to have raised this as an advantage
  • it'd be even better if the OP had also noted that the geographical distribution of EAs probably makes that advantage smaller than one might otherwise think, and means there may be no advantage at all compared to things like UN agencies or non-EA foundations

Comment by michaela on Problem areas beyond 80,000 Hours' current priorities · 2020-07-08T06:58:31.683Z · score: 6 (4 votes) · EA · GW

(Just commenting to tie two conversations together: Another Forum user has now asked the related question Are there lists of causes (that seemed promising but are) known to be ineffective?, and given some arguments for why such a list might be useful.)

Comment by michaela on Are there lists of causes (that seemed promising but are) known to be ineffective? · 2020-07-08T06:54:33.892Z · score: 14 (10 votes) · EA · GW

This seems to me like a good question/a good idea.

Some quick thoughts:

  • I can't think of such a list (at least, off the top of my head).
  • There was a very related comment thread on a recent post from 80,000 Hours. I'd recommend checking that out. (It doesn't provide the sort of list you're after, but touches on some reasons for and against making such a list.)
    • I've now also commented a link to this question from that thread, to tie these conversations together.
  • I'd suggest avoiding saying "known to be ineffective" (or "known to be low-priority", or whatever). I think we'd at best create a list of causes we have reason to be fairly confident are probably low-priority. More likely, we'd just have a list of causes we have some confidence are low-priority, but not much confidence, because once they started to seem low-priority we (understandably) stopped looking into them.
    • To compress that into something more catchy, we could maybe say "a list of causes that were looked into, but that seem to be low-priority". Or even just "a list of causes that seem to be low-priority".
  • This sort of list could be generated not only for causes, but also for interventions, charities, and/or career paths.
    • E.g., I imagine looking through some of the "shallow reviews" from GiveWell and Charity Entrepreneurship could help one create lists of charities and interventions that were de-prioritised for specific reasons, and that thus may not be worth looking into in future.

Comment by michaela on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-08T02:51:24.216Z · score: 7 (2 votes) · EA · GW

Happy to hear this list seems helpful, and thanks for the suggestion! I've now polished & slightly expanded my comment into a top level post: Some history topics it might be very valuable to investigate.

(I also encourage people in that post to suggest additional topics they think it'd be good to explore, so hopefully the post can become something of a "hub" for that.)

Comment by michaela on Some history topics it might be very valuable to investigate · 2020-07-08T02:47:29.139Z · score: 11 (8 votes) · EA · GW

Mini meta tangent: Part of me wanted to call this “10 history topics it might be very valuable to investigate”. But I felt like maybe it’s good for EA to have a norm against that sort of listicle-style title, which promises a specific number of things, as such titles seem to be oddly enticing. It seems like maybe posts with that sort of title would grab more attention, relative to other EA Forum posts, than they really warrant. (I don't mean that any article with such a title would warrant little attention, just that they might get an "unfair boost" relative to other posts.)

I think my feeling on that was informed in part by Scott Alexander's writing on asymmetric weapons, in which he says, among other things:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys.

In this case, it's not about good guys vs bad guys, but about more useful vs less useful posts. Perhaps we should try to minimise the number of things that boost the attention an article gets other than things that closely track how useful the article is.

Meanwhile, I recently published a post I called 3 suggestions about jargon in EA. Maybe, with this in mind, I should’ve called that “Some suggestions about jargon in EA”, to avoid grabbing more attention than it warranted. (I didn't really think about this issue when I posted that, for some reason.)

Does anyone else have thoughts on whether EA should have a norm against listicle-style numbered titles, or on whether we already implicitly do have such a norm?

(By the way, I didn’t specifically aim to have 10 history topics in this post; it just happened to be that 9 initially came to mind, and then later I was thinking about the malevolence post so I added a 10th topic related to that.)

Comment by michaela on The case for investing to give later · 2020-07-08T01:44:30.349Z · score: 4 (2 votes) · EA · GW

Relevant quote from Philip Trammell's interview on the 80,000 Hours podcast:

Philip Trammell: [...] in this write-up, I do try to make it clear that by investment, I really am explicitly including things like fundraising and at least certain kinds of movement building which have the same effect of turning resources now, not into good done now, but into more resources next year with which good will be done. I would be just a little careful to note that this has to be the sort of movement building advocacy work that really does look like fundraising in the sense that you’re not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years’ time to the cause. You might spend all your money and get all these recruits who are passionate about the cause that you’re trying to fund, but then they just do it all next year.
Robert Wiblin: The fools!
Philip Trammell: Right. And I don’t know exactly how high fidelity in this respect movement building tends to be or EA movement building in particular has been. So that’s one caveat. [Michael's note: Somewhat less relevant from here onwards.] I guess another one is that when you’re actually investing, you’re generally creating new resources. You’re actually building the factories or whatever. Whereas when you’re just doing fundraising, you’re movement building, you’re just diverting resources from where they otherwise would have gone.
Robert Wiblin: You’re redistributing from some efforts to others.
Philip Trammell: Yeah. And so you have to think that what people otherwise would have done with the resources in question is of negligible value compared to what they’ll do after the funds had been put in your pot. And you might think that if you just look at what people are spending their money on, the world as a whole… I mean you might not, but you might. And if you do, it might seem like this is a safe assumption to make, but the sorts of people you’re most likely to recruit are the ones who probably were most inclined to do the sort of thing that you wanted anyway on their own. My intuition is that it’s easy to overestimate the real real returns to advocacy and movement building in this respect. But I haven’t actually looked through any detailed numbers on this. It’s just a caveat I would raise.

(I think he also discusses similar matters in his write-up, but I can't remember for sure.)

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-08T01:37:37.777Z · score: 2 (1 votes) · EA · GW

(Possibly somewhat rambly, sorry)

2. I think I now have a better sense of what you mean.

2a. It sounds like, when you wrote:

The current relatively high probability of extinction will maintain indefinitely.

...you'd include "The high probability maintains for a while, and then we do go extinct" as a case where the high probability maintains indefinitely?

This seems an odd way of phrasing things to me, given that, if we go extinct, the probability that we go extinct at any time after that is 0, and the probability that we are extinct at any time after that is 1. So whatever the current probability is, it would change after that point. (Though I guess we could talk about the probability that we will be extinct at the end of a time period, which would be 1 post-extinction, so if that probability is currently high it could then stay high indefinitely, even if the probability of going extinct changes.)

I thought you were instead talking about a case where the probability stays relatively high for a very long time, without us going extinct. (That seemed to me like the most intuitive interpretation of the current probability maintaining indefinitely.) That's why I was saying that that's just unlikely "by definition", basically.

Relatedly, when you wrote:

Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.

Would that hypothesis include cases where we don't survive through the current period?

My view would basically be that the probability might be low now or might be relatively high. And if it is relatively high, then it must be either that it'll go down before a long time passes or that we'll become extinct. I'm not currently sure whether that means I split my credence over the 1st and 2nd views you outline only, or over all 3.

2b. It also sounds like you were actually focusing on an argument to the effect that the "natural" extinction rate must be low, given how long humanity has survived thus far. This would be similar to an argument Ord gives in The Precipice, and that's also given in this paper I haven't actually read, which says in the abstract:

Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000.

That's an argument I agree with. I also see it as a reason to believe that, if we handle all the anthropogenic extinction risks, the extinction risk level from then on would be much lower than it might now be.

Though I'm not sure I'd draw from it the implication you draw: it seems totally plausible we could enter a state with a new, higher "background" extinction rate, which is also driven by our activities. And it seems to me that the only obvious reasons to believe this state wouldn't last a long time are (a) the idea that humanity will likely strive to get out of this state, and (b) the simple fact that, if the rate is high enough and lasts for long enough, extinction happening at some point becomes very likely. (One can also argue against believing that we'd enter such a state in the first place, or that we've done so thus far - I'm just talking about why we might not believe the state would last a long time, if we did enter it.)

So when you say:

if we assume a 0.2% annual probability of extinction, that gives a 1 in 10^174 chance of surviving 200,000 years, which requires an absurdly strong update away from the prior.

Wouldn't it make more sense to instead say something like: "The non-anthropogenic annual human extinction rate seems likely to be less than 1 in 87,000. To say the current total annual human extinction rate is 1 in 500 (0.2%) requires updating away from priors by a factor of 174 (87,000/500)." (Perhaps this should instead be phrased as "...requires thinking that humans have caused the total rate to increase by a factor of 174.")

Updating by a factor of 174 seems far more reasonable than the sort of update you referred to.

And then lasting 200,000 years at such an annual rate is indeed extremely implausible, but I don't think anyone's really arguing against that idea. The implication of a 0.2% annual rate, which isn't reduced, would just be that extinction becomes very likely in much less than 200,000 years.
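For concreteness, a quick sketch of the arithmetic behind both framings (using only the figures already quoted above):

natural_rate = 1 / 87_000   # upper bound on the "natural" annual extinction rate, from the paper's abstract
assumed_rate = 0.002        # the 0.2% annual rate you mention
years = 200_000             # rough length of human history so far

# Probability of surviving 200,000 years at a constant 0.2% annual rate:
print((1 - assumed_rate) ** years)   # ~1e-174, i.e. the "1 in 10^174" figure

# The alternative framing: the factor by which human activity would have to
# have increased the natural rate to reach 0.2% per year:
print(assumed_rate / natural_rate)   # = 174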

3.

The conclusion does not follow, for two reasons. The value of reducing x-risk might actually be lower if x-risk is higher.

I haven't read that paper, but Ord makes what I think is a similar point in The Precipice. But, if I recall correctly, that was in a simple model, and he thought that in a more realistic model it does seem important how high the risk is now.

Essentially, I think x-risk work may be most valuable if the "background" x-risk level is quite low, but currently the risk levels are unusually high, such that (a) the work is urgent (we can't just punt to the future, or there'd be a decent chance that future wouldn't materialise), and (b) if we do succeed in that work, humanity is likely to last for a long time.

If instead the risk is high now but this is because there are new and large risks that emerge in each period, and what we do to fix them doesn't help with the later risks, then that indeed doesn't necessarily suggest x-risk work is worth prioritising.

And if instead the risk is pretty low across all time, that can still suggest x-risk work is worth prioritising, because we have a lower chance of succumbing to a risk in any given period but would lose more in expectation if we do. (And that's definitely an interesting and counterintuitive implication of that argument that Ord mentions.) But I think being in that situation would push somewhat more in favour of things like investing, movement-building, etc., rather than working on x-risks "directly" "right now".

So if we're talking about the view that "Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease", I think more belief in that view does push more in favour of work on x-risks now.

(I could be wrong about that, though.)

4. Thanks for the clarification!

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-08T01:00:38.775Z · score: 2 (1 votes) · EA · GW

Yes, but I hadn't put it up anywhere. Thanks for making me realise that might be useful to people!

You can find the transcript here. And I've also now linked to it from the YouTube video description.

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-08T00:35:00.044Z · score: 2 (1 votes) · EA · GW

Perhaps it would be more accurate to say that an organization that avoids value drift and also consumes its resources slowly (more slowly than r - g) will gain resources over time.

To check I'm understanding, is the key mechanism here the idea that they can experience compounding returns that are greater than overall economic growth, and therefore come to control a larger portion of the world's resources over time?
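
If it helps to make that mechanism concrete, here's a minimal sketch with purely illustrative numbers (the return, growth, and spending rates below are assumptions for the example, not estimates of real-world values):

```python
# Hypothetical rates, chosen only to illustrate the "r - g" mechanism.
r = 0.05  # assumed annual return on the organisation's investments
g = 0.03  # assumed annual growth rate of the wider economy
c = 0.01  # assumed annual spending rate, as a fraction of the organisation's assets

share = 1.0  # the organisation's resources relative to the whole economy, normalised to 1
for year in range(100):
    share *= (1 + r - c) / (1 + g)  # assets compound at r - c while the economy grows at g

print(f"Relative share after 100 years: {share:.1f}x the starting share")  # grows since c < r - g
```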

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-07T23:47:57.750Z · score: 2 (1 votes) · EA · GW

Hmm, looking again at Greaves' paper, it seems like it really is the case that the concept of "cluelessness" itself, in the philosophical literature, is meant to be something quite absolute. From Greaves' introduction:

The cluelessness worry. Assume determinism.1 Then, for any given (sufficiently precisely described) act A, there is a fact of the matter about which possible world would be realised – what the future course of history would be – if I performed A. Some acts would lead to better consequences (that is, better future histories) than others. Given a pair of alternative actions A1, A2, let us say that
(OB: Criterion of objective c-betterness) A1 is objectively c-better than A2 iff the consequences of A1 are better than those of A2.
It is obvious that we can never be absolutely certain, for any given pair of acts A1, A2, of whether or not A1 is objectively c-better than A2. This in itself would be neither problematic nor surprising: there is very little in life, if anything, of which we can be absolutely certain. Some have argued, however, for the following further claim:
(CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
This ‘cluelessness worry’ has at least some more claim to be troubling.

So at least in her account of how other philosophers have used the term, it refers to not having "even the faintest idea" which act is better. This also fits with what "cluelessness" arguably should literally mean (having no clue at all). This seems to me (and I think to Greaves?) quite distinct from the idea that it's very very very hard to predict which act is better, and thus even whether an act is net positive.

And then Greaves later calls this "simple cluelessness", and introduces the idea of "complex cluelessness" for something even more specific and distinct from the basic idea of things being very very very hard to predict.

Having not read many other papers on cluelessness, I can't independently verify that Greaves is explaining their usage of "cluelessness" well. But from that, it does seem to me that "cluelessness" is intended to refer to something more specific (and, in my view, less well-founded) than what I've seen some EAs use it to refer to (the very true and important idea that many actions are very very very hard to predict the value of).

(Though I haven't re-read the rest of the paper for several months, so perhaps "never have even the faintest idea" doesn't mean what I'm taking it to mean, or there's some other complexity that counters my points.)

*I'm now stepping away from saying "extremely hard to predict", because one might argue that that should, taken literally, mean "as hard to predict as anything could ever be", which might be the same as "so hard to predict that we can't have even the faintest idea".

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-07T10:51:42.069Z · score: 2 (1 votes) · EA · GW

Thanks for this reply!

1.

The possibility of, say, extinction is a discount on utility, not on money

By that, do you mean that extinction makes future utility less valuable? Or that it means there may be less future utility (because there are no humans to experience utility), for reasons unrelated to how effectively money can create utility?

(Sorry if this is already well-explained by your equations.)

2.

it was basically on the assumption that we should converge on true beliefs over time.

I think my quick take would be that that's a plausible assumption, and that I definitely expect convergence towards the truth on average across areas, but that there seems a non-trivial chance of indefinitely failing to land on the truth itself in a given area. If that quick take is a reasonable one, then I think this might push slightly more in favour of work to estimate the philanthropic discount rate, as it means we'd have less reason to expect humanity to work it out eventually "by default".

4. To check I roughly understood, is the following statement approximately correct? "The chance of events that leave one with no assets at all can't be captured in the standard theoretical model, so we have to use a separate term for it, which is the expropriation rate. Whereas the chance of events that result in the loss of some but not all of one's assets is already captured in the standard theoretical model, so we don't include it in the expropriation rate."

Comment by michaela on The case for investing to give later · 2020-07-07T07:26:16.929Z · score: 2 (1 votes) · EA · GW

I definitely agree that:

  • The distinction you raise is important
  • The linked sources are most relevant to value drift among "individual[s] investing their own money with the intention of donating it later on"
  • That what's most relevant here is instead value drift among "individual[s] legally-binding themselves to donate, for example by giving to a donor-advised fund"
  • And that it would be valuable to do historical research relevant to the latter kind of value drift

(And I think those points are not merely true but important.)

But I also think that:

  • The linked sources seem somewhat relevant to the latter type of value drift, and worth using as a starting point, if we have little else to go on.
    • Consider that we always have to generalise from one context to another, and any historical research we do that seems more relevant to the "legally binding" or "donor-advised" aspects of the matter at hand might also be less relevant to the "EA" and "modern society" aspects.
  • The (I think?) purely speculative arguments as to why the latter type of value drift would occur at a lower rate do seem worth bringing up, and worth using to update one's estimates. However, it's not clear to me that those arguments are more robust than trying to generalise from the semi-relevant data we have would be.
  • Under such conditions, my conservative guess at the relevant value drift rate would be close to the 10% level, not 5 times lower.
  • If I was to decide that the linked sources' data was totally irrelevant, then it'd seem this post doesn't really provide any relevant data, only speculative argument. (Though there is data elsewhere that's arguably relevant, e.g. regarding waqfs.) Under those conditions, I think the range of value drift rates I'd see as plausible would stretch from close to 0% to close to 100%, and thus my conservative guess might have to be quite high.

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-07T00:50:20.307Z · score: 4 (2 votes) · EA · GW

Thanks for this comment. I found it interesting both as pushback on my point, and as a quick overview of parts of philosophy!

Some thoughts in response:

Firstly, I wrote "I also wrote about a third example of misuse of jargon I’ve seen, but then decided it wasn’t really necessary to include a third example". Really, I should've written "...but then decided it wasn't really necessary to include a third example, and that that example was one I was less sure of and where I care less about defending the jargon in question." Part of why I was less sure is that I've only read two papers on the topic (Greaves' and Mogensen's), and that was a few months ago. So I have limited knowledge on how other philosophers use the term.

That said, I think the term's main entry point into EA is Greaves' and Mogensen's papers and Greaves on the 80k podcast (though I expect many EAs heard it second-hand rather than from these sources directly). And it seems to me that at least those two philosophers want the term to mean something more specific than "it’s extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive", because otherwise the term wouldn't include the idea that we can't just use expected value reasoning. Does that sound right to you?

More generally, I got the impression that cluelessness, as used by academics, refers to at least that idea of extreme difficulty in predictions, but usually more as well. Does that sound right?

This might be analogous to how existential risk includes extinction risk, but also more. In such cases, if one is actually just talking about the easy-to-express-in-normal-language component of the technical concept rather than the entire technical concept, it seems best to use normal language rather than the jargon.

Then there's the issue that "cluelessness" is also just a common term in everyday English, like free will and truth, and unlike existential risk or the unilateralist's curse. That does indeed muddy the matter somewhat, and reduce the extent to which "misuse" would confuse or erode the jargon's meaning.

One thing I'd say there is that, somewhat coincidentally, I've found the phrase "I've got no clue" a bit annoying for years before getting into EA, in line with my general aversion to absolute statements and black-and-white thinking. Relatedly, I think that, even ignoring the philosophical concept, "cluelessness about the future", taken literally, implies something extreme and similar to what I think the philosophical concept is meant to imply. Which seems like a very small extra reason to avoid it when a speaker doesn't really mean to imply we can't know anything about the consequences of our actions. But that's probably a fairly subjective stance, which someone could reasonably disagree with.

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-07T00:25:13.233Z · score: 4 (2 votes) · EA · GW

Good example! And it makes me realise that perhaps I should've indicated that the scope of this post should be jargon in EA and the rationality community, as I think similar suggestions would be useful there too.

A post that feels somewhat relevant, though it's not about jargon, is Less Wrong Rationality and Mainstream Philosophy. One quote from that: "Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century."

(None of this is to deny that the EA and rationality communities are doing a lot of things excellently, and "punching well above their weight" in terms of insights had, concepts generated/collected/refined, good done, etc. It's merely to deny a particularly extreme view on EA/rationality's originality, exceptionalism, etc.)

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-06T05:45:32.858Z · score: 3 (2 votes) · EA · GW

Miscellaneous thoughts and questions

1.

First, I should note that it doesn't really make sense to model the rate of changes in opportunities as part of the discount rate. Future utility doesn't become less valuable due to changes in opportunities; rather, money becomes less (or more) effective at producing utility.

I agree with the latter sentence. But isn't basically the same thing true for the other factors you discuss (everything except pure time preference)? It seems like all of those factors are about how effectively we can turn money into utility, rather than about the value of future utility. And is that really a reason that it doesn't make sense to include those factors in the "discount rate" (as opposed to the "pure time discounting rate")?

As you write:

But even if we do not admit any pure time preference, we may still discount the value of future resources for four core reasons:
[...]

Or perhaps, given the text that follows the "First, I should note" passage, you really meant to be talking about something like how changes in opportunities may often be caused by donations themselves, rather than something that exogenously happens over time?

2.

Over a sufficiently long time horizon, it seems our estimate will surely converge on the true discount rate, even if we don't invest much in figuring it out.

Could you explain why you say this? Is it a generalised notion that humanity will converge on true beliefs about all things, if given enough time? (If so, I find it hard to see why we should be confident of that, as it seems there could also be stasis or more Darwinian dynamics.) Or is there some specific reason to suspect convergence on the truth regarding discount rates in particular?

3.

Arguably, existential risk matters a lot more than value drift. Even in the absence of any philanthropic intervention, people generally try to make life better for themselves. If humanity does not go extinct, a philanthropist's values might eventually actualize, depending on their values and on the direction humanity takes. Under most (but not all) plausible value systems and beliefs about the future direction of humanity, existential risk looks more important than value drift. The extent to which it looks more important depends on how much better one expects the future world to be (conditional on non-extinction) with philanthropic intervention than with its default trajectory.

I think these are important points. I've collected some relevant "crucial questions" and sources in my draft series on Crucial questions for longtermists, e.g. in relation to the question "How close to optimal would trajectories be “by default” (assuming no existential catastrophe)?" It's possible you or other readers would find that draft post, or the sources linked to from it, interesting (and I'd also welcome feedback).

4.

Such events do not existentially threaten one's financial position, so they should not be considered as part of the expropriation rate for our purposes.

Could you explain why we should only consider things that could wipe out one's assets, rather than things that result in loss of "some but not all" of one's assets, in the expropriation rate for our purposes? Is it something to do with the interest rate already being boosted upwards to account for risks of losing some but not all of one's assets, but for some reason not being boosted upwards to account for events that wipe out one's assets? If so, could you explain why that would be the case?

(This may be a naive question; I lack a background in econ, finance, etc. Feel free to just point me to a Wikipedia article or whatever.)

5.

Observe that even when assets are distributed across multiple funds, expropriation and value drift still reduce the expected rate of return on investments in a way that looking at historical market returns does not account for. This is a good trade—decreasing the discount rate and decreasing the investment rate by the same amount probably increases utility in most situations

I didn't understand these sentences. If you think you'd be able to explain them without too much effort, I'd appreciate that. (But no worries if not - my confusion may just reflect my lack of relevant background, which you're not obliged to make up for!)

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-06T05:43:10.117Z · score: 2 (1 votes) · EA · GW

Thoughts on value drift and movement collapse

1. You talk about value drift in several places, and also list as one of your "questions that merit future investigation:"

Research on historical movements and learn more about why they failed or succeeded

I share the view that those are important topics in general and in relation to the appropriate discount rate, and would also be excited to see work on that question.

Given the importance and relevance of these topics, you or other readers may therefore find useful my collections of sources on value drift, and of EA analyses of how social movements rise, fall, can be influential, etc. (The vast majority of these sources were written by other people; I primarily just collect them.)

2.

It seems to me that the probability of value drift is mostly independent across individuals, although I can think of some exceptions (e.g., if ties weaken within the effective altruism community, this could increase the overall rate of value drift).

Wouldn't one big exception be movement collapse? Or a shift in movement priorities towards something less effective, which then becomes ossified due to information cascades, worse epistemic norms, etc.? Both scenarios seem unpleasantly plausible to me. And they seem perhaps not far less likely than a given EA's values drifting, conditional on EA remaining intact and effective (but I haven't thought about those relative likelihoods much at all).

3.

On the bright side, one survey found that wealthier individuals tend to have a lower rate of value drift, which means the dollar-weighted value drift rate might not be quite as bad as 10%.

That's interesting. Can you recall which survey that was?

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-06T05:38:11.746Z · score: 2 (1 votes) · EA · GW

Some additional thoughts on existential and extinction risk

1.

Michael Aird (2020), "Database of existential risk estimates" (an EA Forum post accompanying the above-linked spreadsheet), addresses the fact that we only have extremely rough estimates of the extinction probability. He reviews some of the implications of this fact, and ultimately concludes that attempting to construct such estimates is still worthwhile. I think he explains the relevant issues pretty well, so I won't address this problem other than to say that I basically endorse Aird's analysis.

I'm very glad you seem to have found this database useful as one input into this valuable-seeming project!

If any readers are interested in my arguments/analysis on that matter, I'd actually recommend instead my EAGxVirtual Unconference talk. It's basically a better structured version of my post (though lacking useful links), as by then I'd had a couple extra months to organise my thoughts on the topic.

2.

If we use a moderately high estimate for the current probability of extinction (say, 0.2% per year), it seems implausible that this probability could remain at a similar level for thousands of years. A 0.2% annual extinction probability translates into a 1 in 500 million chance that humanity lasts longer than 10,000 years. Humanity has already survived for about 200,000 years, so on priors, this tiny probability seems extremely suspect.

I'm not sure I see the reasoning in that last sentence. It seems like you're saying that something which has a 1 in 500 million chance of happening is unlikely - which is basically true "by definition" - and that we know this because humanity already survived about 200,000 years - which seems somewhat irrelevant, and in any case unnecessary to point out? Wouldn't it be sufficient merely to note that an annual 0.2% chance of A happening (whatever A is) means that, over a long enough time, it's extremely likely that either A has happened already or the annual chance actually went down?
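
As a rough illustration of that point (using only the 0.2% figure quoted above), a constant annual rate of that size makes extinction likely on a timescale of centuries rather than hundreds of millennia:

```python
import math

annual_risk = 0.002  # the 0.2% annual extinction probability under discussion

# Years until the cumulative probability of extinction reaches 50%
median_years = math.log(0.5) / math.log(1 - annual_risk)
print(f"50% chance of extinction within ~{median_years:.0f} years")  # ~346 years

# Chance of surviving 10,000 years at this constant rate
survival_10k = (1 - annual_risk) ** 10_000
print(f"P(survive 10,000 years) ~ 1 in {1 / survival_10k:,.0f}")  # ~1 in 500 million
```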

Relatedly, you write:

The third claim allows us to use a nontrivial long-term discount due to existential risk. I find it the least plausible of the three—not because of particularly any good inside-view argument, but because it seems unlikely on priors.

Can't we just say it is unlikely - it logically must involve extremely low probabilities, even if we abstract away all the specifics - rather than that it seems unlikely on priors, or based on some reference class forecasting, or the like?

(Maybe I'm totally misunderstanding what you're getting at here.)

3.

One of these three claims must be true:
1. The annual probability of extinction is quite low, on the order of 0.001% per year or less.
2. Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.
[...] The second claim seems to represent the most common view among long-term-focused effective altruists.

Personally, I see it as something like "There's a 5-90% chance that people like Toby Ord are basically right, and thus that 2 is true. I'm not very confident about that, and 1 is also very plausible. But this is enough to make the expected value of existential risk reduction very high (as long as there are tractable reduction strategies which wouldn't be adopted "by default")."

I suspect that something like that perspective - in which view 2 is not given far more credence than view 1, but ends up seeming especially decision-relevant - is quite common among longtermist EAs. (Though I'm sure there are also many with more certainty in view 2.)

(This isn't really a key point - just sharing how I see things.)

4. When you say "long-run discount rate", do you mean "discount rate that applies after the short run", or "discount rate that applies from now till a very long time from now"? I'm guessing you mean the former?

I ask because you say:

If we accept the first or second claim, this implies existential risk has nearly zero impact on the long-run discount rate

But it seems like the second claim - a high extinction risk now, which declines later - could still imply a non-trivial total existential risk across all time (e.g., 25%), just with this mostly concentrated over the coming decades or centuries.

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-06T05:34:48.058Z · score: 2 (1 votes) · EA · GW

Existential risk ≠ extinction risk ≠ global catastrophic risk

For an expanded version of the following points, see Clarifying existential risks and existential catastrophes and/or 3 suggestions about jargon in EA.

There are some places where you seem to use the terms "existential risk" and "extinction risk" as interchangeable. For example, you write:

I do not think it is obvious that reducing the probability of extinction does more good per dollar than the value drift rate, which naively suggests the effective altruist community should invest relatively more into reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.

Additionally, it seems that, to get your "annual extinction probability" estimate, some of the estimates you use from the spreadsheet I put together are actually estimates of existential risk, global catastrophic risk, or collapse risk. For example, you seem to use Ord's estimate of total existential risk, Rees' estimate of the odds that our present civilization on earth will survive to the end of the present century, and Simpson's estimate that “Humanity’s prognosis for the coming century is well approximated by a global catastrophic risk of 0.2% per year" (emphases added).

But, as both Bostrom and Ord make clear in their writings on existential risk, extinction is not the only possible type of existential catastrophe. There could also be an unrecoverable collapse or an unrecoverable dystopia. And many global catastrophes would not be existential catastrophes.

I see this as important because:

  • Overlooking that there are possible types of existential catastrophe other than extinction might lead to us doing too little to protect against them.
  • Relatedly, using the term "existential risk" when one really means "extinction risk" might make existential risk less effective as jargon that can efficiently convey this key thing many EAs care about.
  • Existential risk and global catastrophic risk are both very likely at least a bit higher than extinction risk (since they cover a broader set of possible events). And I'd guess collapse risk might be higher as well. So you may end up with an overly high extinction risk estimate in your discount rate.
    • Alternatively, if existential risk is actually the most appropriate thing to include in your discount rate (rather than extinction risk), using estimates of extinction risk alone may lead to your discount rate being too low. This is because extinction risk estimates overlook the risk of unrecoverable collapse or dystopia.

To be clear, I have no problems with sources that just talk about extinction risk. Often, that's the appropriate scope for a given piece of work. I just have a pet peeve with people really talking about extinction risk, but using the term existential risk, or vice versa.

Also to be clear, you're far from the only person who's done that, and this isn't really a criticism of the substance of the post (though it may suggest that the estimates should be tweaked somewhat).

Comment by michaela on Estimating the Philanthropic Discount Rate · 2020-07-06T05:32:06.387Z · score: 2 (1 votes) · EA · GW

Thanks for this post! It seems to me like quite an interesting and impressive overview of this important topic. I look forward to reading more work from you related to patient-philanthropy-type things (assuming you intend to pursue more work on these topics?).

A bunch of questions and points came to mind as I was reading, which I'll split into a few separate comments. Sorry for the impending flood of words - take it as a signal of how interesting I found your post!

Firstly, it happens to be that I was also working on a post with a somewhat similar scope to this one and to Sjir Hoeijmakers' one. My post was already drafted, but not published, and is entitled Crucial questions about optimal timing of work and donations. It has a somewhat different focus, and primarily just overviews some important questions and arguments, without making this post's valiant effort to actually provide estimates and recommendations.

My draft's marginal value is probably lower than I'd expected, given that you and Sjir have now published your perhaps more substantive work! But feel free to take a look, in case it might be useful - and I'd also welcome feedback. (That goes for both Michael Dickens and other readers.)

I suspect what I'll do is make a few tweaks to my draft in light of the two new posts, and then publish it as another perspective or way of framing things, despite some overlap in content and purpose.

Comment by michaela on Long-term investment fund at Founders Pledge · 2020-07-06T03:52:31.355Z · score: 3 (2 votes) · EA · GW

It could be interesting to explore/offer funds with different distribution thresholds (for example, saving all funds for 100+ years out versus donating a small percentage every year or nearly every year while still letting assets compound) for donors that have different distribution preferences. Knowing your money will be used to better the world every year in the present while also compounding indefinitely into the future to help future generations may be appealing.

That indeed sounds like a valuable idea to me.

Personally, my current best guess is that EA should move more in the direction of "patient philanthropy", and that it makes sense for most of my own giving to take that form, at least until research into and debate on that topic has progressed further. But I also quite like giving some amount now. So I'm continuing to give at 10% per year - as per my Giving What We Can pledge, which I took before learning about the arguments for patient philanthropy, but I think I'd like doing that anyway. And then I try to save and invest a lot beyond that, so that I can hopefully eventually do something like a "backdated" Further Pledge, in which I'll ultimately give as much as I earned from ~2019 onwards beyond ~15-30k USD per year.

Maybe this is actually the optimal strategy, for reasons such as giving each year helping me avoid value drift, or me having taken the pledge already and it being a good general principle to stick to one's promises (see also). But I think I'd like doing this even if it wasn't optimal.

So I imagine many other people would also find it appealing to have the option of getting to feel that they're doing good each year. And it seems plausible to me that making that option available, alongside arguments for mostly taking the "patient" approach, would actually increase the total funding allocated to the "patient" approach.

Comment by michaela on The case for investing to give later · 2020-07-06T03:36:12.085Z · score: 3 (2 votes) · EA · GW

Some thoughts on value drift:

1. I've collected a bunch of relevant sources here, which you or other readers may find useful.

2.

For instance, these three sources (1,2,3) collectively suggest a yearly value drift rate of ~10% for individuals within the effective altruism community.
However, the short-term value drift rate also seems much easier to influence positively [...]
Given the availability of these strategies, I currently see 2% as a conservative estimate for the short-term[5] value drift rate for a strongly committed and strategic investor-philanthropist.

I think it would be reasonable for one's best guess of the value drift rate for a "strongly committed and strategic investor-philanthropist" to be notably below the ~10% suggested by those three sources. This is because (a) those three sources don't provide very robust evidence, and (b), as you note, a strategic investor-philanthropist could make a conscious effort to reduce their value drift rate.

But (b) also seems a quite speculative and non-robust argument, at this stage. So it doesn't seem to me that 2% should be called a "conservative" estimate. It also seems like 0.5%, the "best guess" used in the spreadsheet, is a very low estimate, given the evidence we have. Is there other evidence you have in mind that leads you to see 2% as conservative, and 0.5% as a best guess?
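
To illustrate how much the choice of rate matters here, a minimal sketch compounding the three rates mentioned above over 50 years (this assumes, simplistically, a constant and independent annual drift probability):

```python
# Probability that a donor's values remain intact after 50 years,
# assuming a constant, independent annual value drift rate (a simplification).
for annual_drift in (0.10, 0.02, 0.005):
    p_intact_50 = (1 - annual_drift) ** 50
    print(f"{annual_drift:.1%} annual drift -> {p_intact_50:.1%} chance of no drift after 50 years")
# Roughly 0.5% at 10%/year, 36% at 2%/year, and 78% at 0.5%/year.
```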

(To be clear, I currently, tentatively lean towards the idea that EAs should likely move more in the direction of patient philanthropy. And I don't think these points would overturn that. But they might temper it somewhat.)

3.

this estimate depends a lot on the hypothetical investor-philanthropist in question, so I invite the reader to make their own estimates based on the case they are considering
[Footnote:] Granted, there are obvious difficulties in estimating one’s own expected value drift rate.

One difficulty that seems to me especially worth noting is the end-of-history illusion: "a psychological illusion in which individuals of all ages believe that they have experienced significant personal growth and changes in tastes up to the present moment, but will not substantially grow or mature in the future".

I would expect this to cause a systematic bias towards underestimating one's own likelihood of value drift. (But it's hard to say how strong that bias would be, or whether it's outweighed by other factors.)

Comment by michaela on The case for investing to give later · 2020-07-06T03:35:29.535Z · score: 5 (2 votes) · EA · GW

Thanks for this post! I found it quite interesting and useful.

One thing that stood out to me in particular was the distinction you made between exogenous learning and endogenous learning. It often seems hard to tease apart "doing good now" - or whatever we wish to call it - from "punting to the future", and to determine which is better. And this seems to be due in part to the ways that doing good now can also help us do good later, and thus have similar effects to punting. (I plan to write a post related to this soon.) So I think that future discussions on the topic can likely benefit from that explicit conceptual distinction between how our knowledge will improve if we simply wait and how our knowledge will improve if we do something that improves our knowledge.

I also liked the distinction between changes in availability of opportunities and changes in how much we know about opportunities (learning), for similar reasons.

It happens to be that I was also working on a post with a somewhat similar scope to this one, and to some extent to Michael Dickens' post. My post was already drafted, but not published, and is entitled Crucial questions about optimal timing of work and donations. I'd say the key differences in scope are that my draft surveys a somewhat broader set of questions, and makes less of an effort to actually provide estimates or recommendations (it more so overviews some important questions and arguments, without taking a stance).

My draft's marginal value is probably lower than I'd expected, given that this good work by you and Dickens has now been published! But feel free to take a look, in case it might be useful - and I'd also welcome feedback. (That goes for both Sjir and other readers.)

I suspect what I'll do is make a few tweaks to my draft in light of the two new posts, and then publish it as another perspective or way of framing things, despite some overlap in content and purpose.

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-05T23:27:21.996Z · score: 4 (2 votes) · EA · GW

Good question/point! I definitely didn't mean to imply that EAs were the first people to recognise the idea that true information can sometimes cause harm. If my post did seem to imply that, that's perhaps a good case study in how easy it is to fall short of my third suggestion, and thus why it's good to make a conscious effort on that front!

But I'm pretty sure the term "information hazard" was publicly introduced in Bostrom's 2011 paper. And my sentence preceding that example was "It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms".

I said "or terms" partly because it's hard to say when something is a new concept vs an extension or reformulation of an old one (and the difference may not really matter). I also said that partly because I think new terms (jargon) can be quite valuable even if they merely serve as a shorthand for one specific subset of all the things people sometimes mean by another, more everyday term. E.g.,"dangerous information" and "dangerous knowledge" might sometimes mean (or be taken to mean) "information/knowledge which has a high chance of being net harmful", whereas "information hazard" just conveys at least a non-trivial chance of at least some harm.

As for whether it was a new concept: the paper provided a detailed treatment of the topic of information hazards, including a taxonomy of different types. I think one could argue that this amounted to introducing the new concept of "information hazards", which was similar to and built on earlier concepts such as "dangerous information". (But one could also argue against that, and it might not matter much whether we decide to call it a new concept vs an extension/new version of existing ones.)

Comment by michaela on 3 suggestions about jargon in EA · 2020-07-05T03:38:34.316Z · score: 8 (5 votes) · EA · GW

I also wrote about a third example of misuse of jargon I’ve seen, but then decided it wasn’t really necessary to include a third example. Here the example is anyway, for anyone interested:

Cluelessness

What the term is meant to refer to: “Cluelessness” is a technical term within philosophy. It seems to have been used to refer to multiple different concepts (e.g., “simple” vs “complex” cluelessness). These concepts are complicated, and I tentatively believe that they’re not useful, so I won’t try to explain them here. If you’re interested in the concept, see Greaves, Greaves & Wiblin, or Mogensen.

What the term is sometimes mistakenly used for: I’ve seen some effective altruists use the term “cluelessness” to refer simply to the idea that it’s extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive. This idea seems to me clearly true and important. But it can also be easily and concisely expressed without jargon.

And I’m almost certain that philosophers writing about “cluelessness” very specifically want the term to mean something more specific than just the above idea. This is because they want to talk about situations in which expected value reasoning might be impossible or ill-advised. (I tentatively disagree with the distinction and implications they’re drawing, but it seems useful to recognise that they wish to draw that distinction and those implications.)

Comment by michaela on EAF’s ballot initiative doubled Zurich’s development aid · 2020-07-03T04:43:14.909Z · score: 8 (2 votes) · EA · GW

Minor point: I think consequentialism vs non-consequentialism is one relevant distinction, but that distinctions between consequentialist views which value different things (not just utility) are also relevant. E.g., I imagine a person could be a consequentialist who primarily valued negative liberty, which might lead to being very concerned about taxation.

Reasons this is a minor point:

  • Just valuing “liberty” might leave this initiative looking positive, as global health and development work probably increases positive liberty substantially.
  • Even a focus on negative liberty might still leave this initiative looking positive, if it turns out that increasing taxation in Zurich to pay for more global health and development work causes a net increase in negative liberty. E.g. by helping lead to more democratisation. (I'm not saying that this is the case; just that it seems plausible.)
  • I haven't actually heard anyone endorse a consequentialist view which emphasises negative liberty. Though it's possible that that's a good way to interpret some libertarians' views.

(Also, nice work on this ballot initiative, and thanks for the write-up!)

Comment by michaela on gavintaylor's Shortform · 2020-07-02T09:07:32.690Z · score: 3 (2 votes) · EA · GW

In case you hadn't seen it: 80,000 Hours recently released a post with a brief discussion of the problem area of atomically precise manufacturing. That also has links to a few relevant sources.

Comment by michaela on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-02T04:38:37.934Z · score: 2 (1 votes) · EA · GW

Interesting perspectives, thanks for sharing.

I think the only point on which my views seem to differ from yours here is that I have lower confidence that market-style economies and democracy-style political systems would emerge by default. (But that's not informed by very much, and I hope to learn more on the topic in future. Relevantly, I haven't read The End of History.)

I think the following are some inputs informing my lower confidence on that matter. But note that I haven't looked into any of these points much, and I mean this more as laying out my current thinking than as trying to make a compelling case.

  • It seems quite plausible that, even if market-style economies and democracy-style political systems are the most effective types of systems given how the world has been for the past centuries, other types of systems might be more effective if the background conditions changed quite a bit.
    • E.g., perhaps if there was a major disruption, after which most countries had a Soviet style system with industry intact, market economies and democracies wouldn't perform well against that. Perhaps because of limited ability to trade, or because the Soviet style systems would coordinate to curtail the democracies and market economies.
    • One thing that makes me less concerned about this is that the "good" systems first began emerging when the rest of the world didn't have those systems. So clearly it's not necessary for everywhere to have free markets and democracies in order for that to emerge. But we don't have precedents for things like a global situation in which most countries are very non-democratic systems and have modern technology.
    • Perhaps this could be phrased as the possibility that there are multiple Nash equilibria for the international system, and in the current one it's best to be fairly democratic and have fairly free markets, but there might be other equilibria where that's not the case.
    • I think this and some of my other "inputs" were inspired in part by Beckstead.
  • Relatedly, it seems possible for a system to achieve something like "singleton" status, such that we no longer have evolution towards the most effective solutions, and the singleton can stay as it is indefinitely even if it's not very effective.
    • Perhaps this is analogous to a company attaining monopoly status, creating high barriers to entry, and then getting away with providing poor products at high prices.
  • The Soviet Union lasted for a while, and it doesn't seem that its eventual demise was a foregone conclusion. (Again, I should note that I haven't looked into these points much.)
  • China seems to have maintained decent growth and avoided substantial democratisation thus far, despite repeated predictions suggesting this would've changed by now.
  • When I was reading 1984, it did feel plausible that such a system could last for a very long time.
    • Arguably, this could serve as a sketch of the sort of singleton scenario I allude to above.
    • Though of course we should be wary of the way that a lot of vivid details can make something seem plausible, and of generalisation from fictional evidence.

As I said, I'm uncertain about these points, and I'd be interested to hear people's thoughts on them.