What is going on in the world?

post by Katja_Grace · 2021-01-18T04:47:22.615Z · EA · GW · 25 comments

This is a link post for https://meteuphoric.com/2021/01/17/what-is-going-on-in-the-world/

Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:

It’s a draft. What should I add? (If, in life, you’ve chosen among ways to improve the world, is there a simple story within which your choices make particular sense?)


Comments sorted by top scores.

comment by S_Adi (skaditya2601@gmail.com) · 2021-01-18T12:57:50.661Z · EA(p) · GW(p)

I think it's also noteworthy to include the trillions of sentient farmed animals that are and will be exploited and put through intense suffering for the rest of the future as demand for animal products continues to increase. Also the gigantic scale of suffering of wild animals, most of whom suffer and die in painful ways soon after coming into existence.

comment by smclare · 2021-01-18T09:57:58.311Z · EA(p) · GW(p)

Some things worth adding might be:

  • Several Asian economies are growing rapidly, and China is on track to become a major world power sometime this century (worth including since you mention the apparent decline of the US/West)
  • There is massive global inequality, and while many lower income countries are now growing more steadily they are not projected to narrow the north/south wealth divide anytime soon
  • Humans are raising billions of animals for food in very poor conditions

comment by Ramiro · 2021-01-20T14:58:05.856Z · EA(p) · GW(p)

>There is massive global inequality...

One could add: "and disparities in power might increase and lead us to some sort of techno-feudalism."

comment by brentonmayer (brentonmayer91) · 2021-01-18T12:18:39.040Z · EA(p) · GW(p)

It's really cool to see these laid out next to one another like this! Thanks for posting, Katja :)

comment by Linch · 2021-01-19T08:01:56.161Z · EA(p) · GW(p)

We (most humans in most of the world) lived or are living in a golden age, with more material prosperity and better physical health* than ever before. 2020 was shitty, and the second derivative might be negative, but the first derivative still looks clearly positive on the timescale of decades, as well as a (measured from history, not counterfactual) really high baseline. On a personal level, my consumption is maybe 2 orders of magnitude higher than that of my grandparents at my age (might become closer to 3 if I were less EA). So I'd be interested in adding a few sentences like:

  • For the first time in recorded history, the vast majority of humans are much richer than their ancestors.
  • Even in the midst of a raging pandemic, human deaths from infectious disease still account for less than 1/3 of all deaths.
  • People have access to more and better information than ever before.

I think as EAs, it's easy to have a pretty negative view of the world (because we want to focus on what we can fix, and also pay attention to a lot of things we currently can't fix in the hope that one day we'll figure out how to fix them), but obviously there is still a lot of good in the world (and there might be much more to come), and it might be valuable to have concrete reminders of what we ought to cherish and protect.

* I think it's plausible/likely that we're emotionally and intellectually healthier as well, but this case is more tenuous. 

comment by meerpirat · 2021-01-19T08:19:12.124Z · EA(p) · GW(p)

Related to wealth: I recently heard Tyler Cowen describing himself as an "information billionaire" and hoping to become an information trillionaire. I wonder how one would quantify it, but it seems true that our ability to understand the world is also growing rapidly.

comment by MichaelA · 2021-01-20T08:41:04.372Z · EA(p) · GW(p)

Yeah, I agree with that. 

On this, I really like this brief post from Our World in Data: The world is much better; The world is awful; The world can be much better. (Now that I have longtermist priorities, I feel like another useful slogan in a similar spirit could be something like "The world could become so much better; The world could end or become so much worse; We could help influence which of those things happens.")

comment by Ramiro · 2021-01-20T14:49:34.927Z · EA(p) · GW(p)

>with more material prosperity and better physical health* than ever before

I agree. But you see, in some population dynamics, variation is correlated with increased risk of extinction.

>my consumption is maybe 2 orders of magnitude higher than that of my grandparents  at my age

That might be precisely part of the problem. We are just starting to be seriously concerned about the externalities of this increase in consumption, and a good deal of it is conspicuous or spent on things people often regret (over)consuming (like soft drinks, addictive products, or just time on social media) - while a lot of people still starve.

comment by Linch · 2021-01-20T19:36:18.206Z · EA(p) · GW(p)

Thanks for your comment!

I agree. But you see, in some population dynamics, variation is correlated with increased risk of extinction.

I think I don't follow your point. If I understand correctly, the linked paper (at least from the abstract, I have not read it) talks about population-size variation, which has an intuitive/near-tautological relationship with increased risk of extinction, rather than variation overall. 

That might be precisely part of the problem. 

Sorry, can you specify more what the problem is? If you mean that the problem is an inefficient distribution of limited resources, I agree that it's morally bad that I have access to a number of luxuries while others starve, and the former is causally upstream of the latter. However, in the long run we can only get maybe 1-2 orders of magnitude of gains from a more equitable distribution of resources globally (though some rich individuals/gov'ts can create more good than that by redistributing their own resources), but we can get much more through other ways to create more stuff/better experiences.

We are just starting to be seriously concerned about the externalities of this increase in consumption

Who's this "we?" :P

comment by Ardenlk · 2021-01-19T02:40:25.936Z · EA(p) · GW(p)

Maybe: the smartest species the planet - and maybe the universe - has produced is in the early stages of realising it's responsible for making things go well for everyone.

comment by Ramiro · 2021-01-20T14:42:05.445Z · EA(p) · GW(p)

Worse: most of the members of that species don't realize this responsibility, and indeed consistently act against it, either to satisfy self-regarding or parochial preferences.

comment by Jakob_J · 2021-01-18T10:45:59.353Z · EA(p) · GW(p)

  • Most human effort is being wasted on endeavors with no abiding value.
  • Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)


Things certainly feel very doom-and-gloom right now, but I still think there is scope for optimism in the current moment. If I had been asked in February last year what the best and worst outcomes of the pandemic would look like a year later, I would probably have guessed a whole lot worse than what turned out to be the case. I also don't think that we are living in some special age of incompetent governance right now, and I would argue that throughout history we have come up with policies that have been disastrously wrong one way or another. Competence has appeared elsewhere - as Tyler Cowen has argued, businesses seem unusually competent in the current crisis compared to governments. Where would we have been without supermarkets' supply chains, Amazon, Pfizer, Zoom etc. during the pandemic? According to this article there are more reasons to be optimistic than pessimistic right now:

  • As people lose jobs and income, many go hungry. Projections from the Food and Agricultural Organization point to an increase in the global share of chronically undernourished people from 8.9 to around 9.9 per cent. A terrible outcome, but it still represents a reduction by a quarter since 2000.
  • It took mankind 3,000 years to develop a vaccine against polio and smallpox. Moderna designed a vaccine against Covid-19 in two days. Had we faced this new coronavirus in 2005, we would not have had the technology to even imagine such mRNA vaccines, if it had appeared in 1975 we would not have the ability to read the genome of the virus, if it came in 1950, we would not have had a single ventilator on the planet.
  • [T]he progress of the last few decades has been so fast, and human creativity under duress so impressive, that even major setbacks only pushes us back a few years. Only three years in history have been better in terms of GDP per capita, extreme poverty and child mortality – 2017, 2018 and 2019.

comment by jackmalde · 2021-01-20T11:06:21.696Z · EA(p) · GW(p)

Thanks for doing this! 

One suggestion - I think it would be cool to have more links included so that people can read more if they're interested. 

comment by MichaelA · 2021-01-20T08:43:45.690Z · EA(p) · GW(p)

The following statements from Luke Muehlhauser feel relevant:

Basically, if I help myself to the common (but certainly debatable) assumption that “the industrial revolution” is the primary cause of the dramatic trajectory change in human welfare around 1800-1870, then my one-sentence summary of recorded human history is this:

>Everything was awful for a very long time, and then the industrial revolution happened.

(The linked post provides interesting graphs and discussion to justify/flesh out this story.)

Though I guess that's less of a plot of the present moment, and more of a plot of the moment's origin story (with hints as to what the plot of the present moment might be).

comment by brentonmayer (brentonmayer91) · 2021-01-19T08:48:41.720Z · EA(p) · GW(p)

Through overpopulation and excessive consumption, humanity is depleting its natural resources, polluting its habitat, and causing the extinction of other species. Continuing like this will lead to the collapse of civilisation and likely our own extinction.


This one seems very common to me, and sadly people often feel fatalistic about it. 

Two things that feeling might come from:

  • People rarely talking about aspects of it which are on a positive trajectory (e.g. the population of whales, acid rain, CFC emissions, UN population projections).
  • The sense that there are so many related things to solve - such that even if we managed to fix (say) climate change, we'd still see (say) our fisheries cause the collapse of the ocean's ecosystem.

comment by Max_Daniel · 2021-01-18T15:37:53.337Z · EA(p) · GW(p)

Thank you, I found this pretty interesting. Of course no single one-sentence narrative will capture everything that goes on in the world, but in practice we need to reduce complexity and focus anyway, and may implicitly adopt similar narratives regardless, so I found it interesting to reflect on them explicitly.

FWIW, the one that resonates most for me personally was:

  • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously.

A lot of the ones appealing to 'weird' issues (acausal trade, quantum worlds, simulations, ...) ring true and important to me, but seem less directly relevant to my actual actions.

My reaction to a lot of the 'generic' ones (externalities, wasted efforts, ...) is something like: "This sounds true, but I'm not sure why I should think I'll be able to do something about this."

comment by MichaelA · 2021-01-20T09:16:32.564Z · EA(p) · GW(p)

Another possible story, which could underpin some efforts along the lines of patient altruism [? · GW] / punting to the future [EA · GW]: "There will probably be key actions that need taking in the coming decades, centuries, or millennia, which will have a huge influence over the whole rest of the future. There are some potential ways to set up future people to take those actions better in expectation, yet very few people are thinking strategically and working intensely on doing that. So that's probably the best thing we can do right now."

Those "potential ways" of punting to the future could be things like building a community of people with good values and epistemics or increasing the expected future wealth or influence of such people.

And this story could involve thinking there will be a future time that's much "higher leverage" / more "hingey" / more "influential", or thinking that there are larger returns to some ways of "punting to the future", or both. 

(See also. [EA · GW])

(Personally, I find this sort of story at least plausible [EA · GW], and it influences me somewhat.)

comment by OllieBase · 2021-01-19T18:25:51.285Z · EA(p) · GW(p)

The US is falling apart rapidly (on the scale of years), as evident in US politics departing from sanity and honor, sharp polarization, violent civil unrest, hopeless pandemic responses, ensuing economic catastrophe, one in a thousand Americans dying by infectious disease in 2020, and the abiding popularity of Trump in spite of it all.

(I note that you're just outlining potential worldviews, not necessarily defending them)

I don't think this is all that unique to the US. I think at least 5 out of 7 of these things could also be applied to the UK and France; the UK has a higher COVID-19 death rate than the US and there has been ongoing civil unrest in France for over two years now. In fact, the US is outside the top 10 in terms of COVID-19 deaths per capita.

This doesn't mean I'm pessimistic about all of those countries too - it just makes me think that this is how the world looks when we experience a pandemic (and... use Twitter?). 

comment by Linch · 2021-01-19T08:04:38.760Z · EA(p) · GW(p)

I'm curious if there's a point about energy use that's large enough to be added to the list. Intuitively I think no (for the same reason that climate change doesn't seem as important as the above points), but on the scale of centuries, the story of humanity is intertwined with the story of energy use, so perhaps on an outside view this is just actually really underrated and important.

comment by MakoYass · 2021-02-05T07:29:39.343Z · EA(p) · GW(p)

Infinite Ethics is solved by LDT btw. The multiverse is probably infinite (I don't know where this intuition comes from but come it does), but if so, there are infinite instances of you strewn through it, and you are effectively controlling all of them acausally. Some non-zero measure of all of that is entangled with your decisions.

comment by MichaelA · 2021-01-20T08:54:19.477Z · EA(p) · GW(p)

Personally, the simple stories that I pretty much endorse, and that are among the stories within which my choices would make sense, are basically "low-confidence", "expected value", and/or "portfolio" versions of some of these (particularly those focused on existential risks). One such story would be:

There's a non-trivial chance that there are risks to the future of humanity (‘existential risks’), and that vastly more is at stake in these than in anything else going on. Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously. So, in expectation, it'd be a really, really good idea if some people acted to reduce these risks.

("Non-trivial" probably understates my actual beliefs. When I forced myself to try to estimate total existential risk by 2120, I came up with a very tentative 13% [LW · GW]. But I think I might behave similarly even if my estimate was quite a bit lower.)

What I mean by "portfolio" versions is basically that I think I'd endorse tentative versions of a wide range of the stories you mention, which leads me to think there should be at least some people focused on basically acting as if each of those stories are true (though ideally remembering that that's super uncertain). And then I can slot into that portfolio in the way that makes sense on the margin, given my particular skills, interests, etc.

(All that said, I think there's a good argument for stating the stories more confidently, simply, and single-mindedly for the purposes of this post.)

comment by Ramiro · 2021-01-20T15:08:06.067Z · EA(p) · GW(p)

>Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)

I wonder if, in this context, metaethical discussions are overrated. Even if the philosophical debates that open the door to nihilism and are endemic in the rationalist community - like Pascal's mugging, infinite utility, Boltzmann brains (or any simulation / Platonic-cave-like reasoning), etc. - are serious philosophical conundrums, they don't seem (at least from a pragmatic perspective, taking normative uncertainty analysis into account) to entail any relevant change of course in the foreseeable future. I mean, nihilism might be true, but unless you're certain of it, it doesn't seem practically relevant for decision-making.

comment by MichaelA · 2021-01-20T09:07:54.315Z · EA(p) · GW(p)

Another potential story could go something like this: "Advances in artificial intelligence, and perhaps some other technologies, have begun to have major impacts on the income, wealth, and status of various people, increasing inequality and sometimes increasing unemployment. This then increases dissatisfaction and instability with our political and economic systems. These trends are all likely to increase in future, and this could lead to major upheavals and harms."

I'm not sure if all those claims are accurate, and don't personally see that as one of the most important stories to be paying attention to. But it seems plausible and somewhat commonly believed among sensible people.

comment by MichaelA · 2021-01-20T09:04:02.496Z · EA(p) · GW(p)

AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots: ...

I think there are plausible and plausibly important plots similar to this, and subplots similar to the subplots below it, but that differ in a few ways from what's stated there. For example, I think I'm more inclined towards the following generalised version of that story:

AI systems will control the future or simply destroy our future, and how our actions influence the way that plays out is the only thing about our time that will matter in the long run. Major subplots: ...

This version of the story could capture: 

  • The possibility that the AI systems rapidly lead to human extinction but then don't really cause any other major things in particular, and have no [other] goals
    • I feel like it'd be odd to say that that's a case where the AI systems "control the future"
  • The possibility that the AI systems who cause these consequences aren't really "agents" in a standard sense
  • The possibility that what matters about our time is not simply "which [agents] we create", but also things like when and how we deploy them and what incentive structures we put them in

One thing that that "generalised story" still doesn't clearly capture is the potential significance of how humans use the AI systems. E.g., a malicious human actor or state could use an AI agent that's aligned with the actor, or a set of AI services/tools, in ways that cause major harm. (Or conversely, humans could use these things in ways that cause major benefits.)

comment by Ramiro · 2021-01-20T15:14:16.678Z · EA(p) · GW(p)

Really, thanks for the post. I think it's quite important to have such a list.

  • If we take anthropic reasoning and our observations about space seriously, we appear very likely to be in a ‘Great Filter’, which appears likely to kill us (and unlikely to be AI).

I'm not sure we can say "very likely," though the odds are surely relevant. I'm no expert, but I guess the case for a solution to the Fermi Paradox is still open, ranging from what probability distribution one uses to model the problem to our location in the Milky Way. For instance, being "close" to its border might make it easier for us to survive extreme events happening in more central (and crowded) regions, but also harder to spot activity on the other side of the galaxy.

And, if there’s a Great Filter ahead, I think one can say “it’s unlikely to be AI” only in the same sense we can say “Team A is the favorite, but it’s unlikely to be the winner – too many other competitors.” I don’t see, right now, better candidates for a Great Filter than some surprising technological innovation.