Against longtermism

post by Brian Lui · 2022-08-11T05:37:46.868Z · EA · GW · 32 comments


Hello, it’s me again! I’ve been happily reading through the EA introductory concepts, and I would like to have my say on “longtermism”, which I read about at 80,000 Hours. I have also read Against the Social Discount Rate, which argues that the discount rate for future people should be zero.

Epistemic status: I read some articles and papers on the 80,000 Hours and effective altruism websites, and thought about whether they made sense. ~12 hours

Summary

 

Can we be effective?

Derek Parfit, who co-wrote Against the Social Discount Rate, has said the following:

“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”

This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer? I can think of two possibilities.

  1. Cleopatra spends the time saved by skipping dessert inventing biology, chemistry, biochemistry, oncology, and an efficacious cancer treatment. I assign this close to zero probability.
  2. The resources saved by skipping dessert cause a chain reaction that makes Egypt, at the time a client state of the Roman Republic, significantly stronger. I think this is quite unlikely.

Did you see the rhetorical sleight of hand? Parfit claimed that skipping dessert leads to a cure for cancer. We are supposed to take as axiomatic that a small sacrifice now will have benefits in the future. But in fact, we cannot assume this.

Edit: I learned in the comments that I misunderstood this example - it was a hypothetical meant to show that a "time discount rate" is invalid. I agree the time discount rate is invalid, so I don't have anything against this example in its context. Sorry about my misunderstanding!

***

Most of the 80000 hours article attempts to persuade the reader that longtermism is morally good, by explaining the reasons that we should consider future people. But the part about how we are able to benefit future people is very short. Here is the entire segment, excerpted:

We can “impact” the future. The implicit assumption – so obvious that it’s not even stated – is that, sure, maybe we don’t know exactly how to be most effective; but if we put our minds to it, surely we could come up with interventions whose results range from zero (in the worst case) to positive.

 

The road to hell is paved with good intentions

Would this have been true in the past? I imagined what a high-conviction longtermist would do at various points in history. Our longtermist would be an elite in the society of the time, someone with the ability to impact things. Let’s call him “Steve”. Steve adopts the values of the time he travels to, just as a longtermist in 2022 adopts the values of 2022 when deciding what would benefit future people.

1960 AD

The Cold War has started, and the specter of nuclear winter is terrifying. Steve is worried about nuclear existential risk, but realizes that he has no hope of getting the United States and the Soviet Union to disarm. Instead, he focuses on what could impact people in the far future. The answer is immediately obvious: nuclear meltdowns and radioactive waste. Meltdowns can contaminate land for tens of thousands of years, and radioactive waste can be dangerous for similarly long. Therefore, Steve uses his influence to obstruct and delay the construction of nuclear power plants. Future generations will be spared the blight of thousands of nuclear power plants everywhere.

1100 AD

Steve is a longtermist in Europe. Thinking for the benefit of future humans, he realizes that he must save as many future souls as possible, so that they are able to enter heaven and enjoy an abundance of utils. What better way to do this than to reclaim the Holy Land from Islamic rule? Some blood may be shed and lives may be lost, but the expected value is strongly positive. Therefore, Steve uses his power to start the Crusades, saving many souls over the next 200 years.

50 BC

Steve is now living in Egypt. Thinking of saving future people from cancer, he convinces Cleopatra to skip dessert. Somehow, this causes Egypt to enter a golden age, and Roman rule over Europe lasts a century longer than it would have.

Unfortunately, this time Steve messed up. He forgot that the Industrial Revolution, the starting point for a massive upgrade in humanity’s living standards and a prerequisite for cancer cures, happened due to a confluence of factors in the United Kingdom (relatively high bargaining power of labor, the Magna Carta, the balance of power between nobles and the Crown, the strength of the Catholic Church). Roman domination was incompatible with all of those things, and its increased longevity actually delayed the cure for cancer by a century!

Edit: the source of the example is invalid, but I think the theme of "hard to know future outcomes" in the second paragraph is still plausible if we accept the first paragraph's hypothesis.

***

I believe these examples show that it’s really unlikely that a longtermist at any point in history would have a good sense of how to benefit future people. Well-intentioned interventions could just as easily turn out to be harmful. I don’t see any reason why the current moment in time would be different. The world is a complex system, and trying to affect the far future state of a complex system is a fool’s errand.

 

In the past, what sorts of events have benefitted future humans?

Great question. The articles I read generally point to economic growth as the main cause of prosperity. Economic growth is said to increase due to technological innovation such as discoveries, social innovation such as more effective forms of government, and a larger population, which has a multiplier effect.

Let’s look at a few major breakpoints and see whether longtermism was a significant factor. Some examples are the discovery of fire, sedentary agriculture, the invention of the wheel, the invention of writing, the invention of the printing press, and the Industrial Revolution.

In summary, it looks as though most advances that have benefitted the future come about because people have a problem they want to solve, or they want to increase the immediate benefits to themselves.

 

We can achieve longtermism without longtermism

There are examples of people taking actions that look like they require a longtermism mindset to make sense. For example:

 

But note, an explanation that does not include longtermism is available for all of these cases:

 

Longtermism is also not required for many popular causes commonly associated with it. Taking existential risks as an example:

 

Conclusion

The main point is that intervening for long-term reasons is not productive, because we cannot assume that interventions are positive. Historically, interventions based on “let’s think long term”, instead of on solving an immediate problem, have tended to be negative or negligible in effect.

Additionally, longtermism was not a motivating factor behind previous increases in prosperity. It is not necessary to tackle most current cause areas, such as existential risk. Longtermism is costly, because it reduces popular support for effective altruism through “crowding out” and “weirdness” effects.

Why do we think that longtermism, now, will have a positive effect and will be a motivating factor?

If it does not serve any useful purpose, then why focus on longtermism?


 

32 comments


comment by ThomasW (ThomasWoodside) · 2022-08-11T07:08:28.027Z · EA(p) · GW(p)

I'm quite happy that you are thinking critically about what you are reading! I don't think you wrote a perfect criticism (see below), but the act of taking the time to write a criticism and posting it to a public venue is not an easy step. EA always needs people who are willing and eager to probe its ethical foundations. Below I'm going to address some of your specific points, mostly in a critical way. I do this not because I think your criticism is bad (though I do disagree with a lot of it), but because I think it can be quite useful to engage with newer people who take the time to write reasonably good reactions to something they've read. Hopefully, what I say below is somewhat useful for understanding the reasons for longtermism and what I see as some flaws in your argument. I would love for you to reply with any critiques of my response.

This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer?

It doesn't, and that's not Parfit's point. Parfit's point is that if one were to employ a discount rate, Cleopatra's dessert would matter more than nearly anything today. Since (he claims) this is clearly wrong, there is something clearly wrong with a discount rate.

Most of the 80000 hours article attempts to persuade the reader that longtermism is morally good, by explaining the reasons that we should consider future people. But the part about how we are able to benefit future people is very short.

Well yes, but that's because it's in the other pages linked there. Mostly, this has to do with thinking about whether existential risks exist soon, and whether there is anything we can do about them. That isn't really in the scope of that article but I agree the article doesn't show it.

The world is a complex system, and trying to affect the far future state of a complex system is a fool’s errand.

That isn't entirely true. There are some things that routinely affect the far future of complex systems. For instance, complex systems can collapse, and if you can get one to collapse, you can pretty easily affect its far future. If it's about to collapse due to an extremely rare event, then preventing that collapse can affect its far future state.

Let’s look at a few major breakpoints and see whether longtermism was a significant factor.

Obviously, it wasn't. But of course it wasn't! There wasn't even longtermism at all, so it wasn't a significant factor in anyone's decisions. Maybe you are trying to say "people can make long term changes without being motivated by longtermism." But that doesn't say anything about whether longtermism might make them better at creating long term changes than they otherwise would be.

We can achieve longtermism without longtermism

I generally agree with this and so do many others. For instance see here [EA · GW] and here [EA · GW]. However, I think it's possible that this may not be true at some time in the future. I personally would like to have longtermism around, in case there is really something where it matters, mostly because I think it is roughly correct as a theory of value. Some people may even think this is already the case. I don't want to speak for anyone, but my sense is that people who work on suffering risk are generally considering longtermism but don't care as much about existential risk.

The main point is that intervening for long term reasons is not productive, because we cannot assume that interventions are positive. Historically, interventions based on “let’s think long term”, instead of solving an immediate problem, have tended to be negative or negligible in effect.

First, I agree that interventions may be negative, and I think most longtermists would also strongly agree with this. In terms of whether historical "long term" interventions have been negative, you've asserted it but you haven't really shown it. I would be very interested in research on this; I'm not aware of any. If this were true, I do think that would be a knock against longtermism as a theory of action (though not decisive, and not against longtermism as a theory of value). Though it maybe could still be argued that we live at "the hinge of history" where longtermism is especially useful.


I drew a distinction between theory of value and theory of action. A theory of value (or axiology) is a theory about what states of the world are most good. For instance, it might say that a world with more happiness, or more justice, is better than a world with less. A theory of action is a theory about what you should do; for instance, that we should take whichever action produces the maximum expected happiness. Greaves and MacAskill make the case for longtermism as both. But it's possible you could imagine longtermism as a theory of value but not a theory of action.

For instance, you write:

Some blood may be shed and lives may be lost, but the expected value is strongly positive.

Various philosophers, such as Parfit himself, have suggested that for this reason, many utilitarians should actually "self-efface" their morality. In other words, they should perhaps start to believe that killing large numbers of people is bad, even if it increases utility, because they might simply be wrong about the utility calculation, or might delude themselves into thinking what they already wanted to do produces a lot of utility. I gave some more resources/quotes here [EA(p) · GW(p)].

Thanks for writing!

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T00:18:05.432Z · EA(p) · GW(p)

Thanks ThomasWoodside! I noticed the forum has relatively low throughput so I decided to "learn in public" as it were :)

I understand the Cleopatra paragraph now and I've edited my post. I wasn't able to understand his point before, so I got it wrong. Thanks for explaining it!

 

Obviously, it wasn't. But of course it wasn't! There wasn't even longtermism at all, so it wasn't a significant factor in anyone's decisions. Maybe you are trying to say "people can make long term changes without being motivated by longtermism." But that doesn't say anything about whether longtermism might make them better at creating long term changes than they otherwise would be.

This is a good point. I wanted to show "longtermism is not necessary for long term changes", which I think is pretty likely. The more venturesome idea is "longtermism would not make better long term changes", and those examples don't address that point.

My intuition is that a longtermism mindset likely would not have a significant positive impact (such as the imaginary examples I wrote), but it's pretty hard to "prove" that because we don't have a counterfactual history. We could go through historical examples of people with longterm views (in journals and diaries?), and see whether they had positive or negative impact. That might be a big project though.

 

I generally agree with this and so do many others. For instance see here [EA · GW] and here [EA · GW]. 

These are really good links, thank you!

 

In terms of whether historical "long term" interventions have been negative, you've asserted it but you haven't really shown it. I would be very interested in research on this; I'm not aware of any. If this were true, I do think that would be a knock against longtermism as a theory of action (though not decisive, and not against longtermism as a theory of value). Though it maybe could still be argued that we live at "the hinge of history" where longtermism is especially useful.

Same! I agree this is a weakness of my post. Theory of action vs theory of value is a good concept - I don't have a strong view on longtermism as a theory of value, I mostly care about the theory of action.

comment by matthew.vandermerwe · 2022-08-11T10:16:15.747Z · EA(p) · GW(p)

Nuclear war similarly can be justified without longtermism, which we know because this has been the case for many decades already

Much of the mobilization against nuclear risk from the 1940s onwards was explicitly grounded in the threat of human extinction — from the Russell-Einstein manifesto to grassroots movements like Women Strike for Peace, with the slogan "End the Arms Race not the Human Race".

Replies from: Herbie Bradley, Brian Lui
comment by hb574 (Herbie Bradley) · 2022-08-11T12:29:04.158Z · EA(p) · GW(p)

Concern about the threat of human extinction is not longtermism (see Scott Alexander's well known forum post about this), which I think is the point that the OP is making.

comment by Brian Lui · 2022-08-12T03:04:39.629Z · EA(p) · GW(p)

Yes, exactly - it's grounded in concern about human extinction, not longtermism. The section "We can achieve longtermism without longtermism" in my post talks about the difference.

comment by Mauricio · 2022-08-11T07:30:15.132Z · EA(p) · GW(p)

Thanks for writing - I skimmed so may have missed things, but I think these arguments have significant weaknesses, e.g.:

  • They draw a strong conclusion about major historical patterns just based on guesswork about ~12 examples (including 3 that are explicitly taken from the author's imagination).
  • They do not consider examples which suggest long-term thinking has been very beneficial.
    • E.g. some sources suggest that Lincoln had long-term motivations for permanently abolishing slavery, saying, "The abolition of slavery by constitutional provision settles the fate, for all coming time, not only of the millions now in bondage, but of unborn millions to come--a measure of such importance that these two votes must be procured."
  • As another comment suggests, the argument does not consider ways in which our time might be different (e.g. unusually many people are trying to have long-term impacts, people are less ignorant, tech advances may create rare opportunities for long-term impact).
Replies from: evelynciara, Guy Raveh, Brian Lui
comment by BrownHairedEevee (evelynciara) · 2022-08-12T04:33:48.121Z · EA(p) · GW(p)

Another example of long-term thinking working well is Ben Franklin's bequests to the cities of Boston and Philadelphia, which grew for 200 years before being cashed out. (Also one of the inspirations for the Patient Philanthropy Fund.)

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T10:06:58.744Z · EA(p) · GW(p)

Thank you, this is a great example of longtermist thinking working out in a way that would have been unlikely to happen without it!

comment by Guy Raveh · 2022-08-11T08:51:10.515Z · EA(p) · GW(p)

To your Lincoln example I'd add good governance attempts in general - the US constitution appears to have been written with the express aim of providing long-term, stable democratic government.

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T03:09:05.303Z · EA(p) · GW(p)

Thanks for adding this as an additional example - the US constitution is a very good example of how longtermism can achieve negative results! There's a growing body of research from political scientists arguing that the constitution is a major cause of many US governance problems, for example here.

comment by Brian Lui · 2022-08-12T03:12:59.961Z · EA(p) · GW(p)

I think the slavery example is a strong example of longtermism having good outcomes, and it probably increased the amount of urgency to reduce slavery.

My base rate for "this time it's different" arguments is low, except for ones that focus on extinction risk. Like if you mess up and everyone dies, that's unrecoverable. But for other things I am skeptical.

comment by Moritz von Knebel · 2022-08-11T06:26:15.979Z · EA(p) · GW(p)

Re Cleopatra:

The argument is not that Cleopatra's action is the beginning of a causal chain. In fact, the present and the future need not be linked causally at all for Parfit's argument to make sense.

Instead, what he employs is a "reductio ad absurdum" - he takes the non-longtermist position to an extreme where it has counterintuitive implications.

If discounting WERE valid, then any of Cleopatra's actions (even something as insignificant as eating dessert) would've mattered much more than anything that happens today (including curing cancer). This seems counterintuitive to most of us. Therefore, something is wrong with this kind of discounting.

comment by Karthik Tadepalli (therealslimkt) · 2022-08-11T07:01:46.349Z · EA(p) · GW(p)

In the past, all events with big positive impacts on the future occurred because people wanted to solve a problem or improve their circumstances, not because of longtermism.

Here's a parallel argument.

Before effective altruism was conceived, all events that generated good consequences occurred because people wanted to solve a problem or improve their circumstances, not because of EA. Since EA was not necessary to achieve any of those good consequences, EA is irrelevant.

The problem with both arguments is that the point of an ideology like EA or longtermism is to increase the likelihood that people take actions to make big positive impacts in the future. The printing press, the wheel, and all good things of the past occurred without us having values of human rights, liberalism, etc. This is not an argument for why these beliefs don't matter.

Replies from: Guy Raveh, Brian Lui
comment by Guy Raveh · 2022-08-11T08:52:35.777Z · EA(p) · GW(p)

It is, however, an argument for why we should normally look beyond EA to find people/organizations/opportunities for solving big problems.

Replies from: therealslimkt
comment by Karthik Tadepalli (therealslimkt) · 2022-08-11T08:59:22.262Z · EA(p) · GW(p)

Yes, if the post was simply arguing that we should look beyond longtermism for opportunities to solve big problems it would have more validity. As it stands the argument is a non sequitur.

comment by Brian Lui · 2022-08-12T03:16:04.882Z · EA(p) · GW(p)

Valid - basically I was doing a two part post. First part is "longtermism isn't a necessary condition", because I thought there would be pushback to that. If we accept this, then we consider the second part, "longtermism may not have a positive effect as assumed". If I knew the first part was uncontroversial I would have cut it out.

Replies from: therealslimkt
comment by Karthik Tadepalli (therealslimkt) · 2022-08-12T04:17:43.198Z · EA(p) · GW(p)

Rhetorically that just seems strange with all your examples. Human rights are also not a "necessary condition" by your standard, since good things have technically happened without them. But they are practically speaking a necessary condition for us to have strong norms of doing good things that respect human rights, such as banning slavery. So I think this is a bait-and-switch with the idea of "necessary condition".

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T10:06:10.955Z · EA(p) · GW(p)

What do you think would be a good way to word it?

One of the ideas is that longtermism probably does not increase the EV of decisions made for future people. Another is that we increase the EV for future people as a side effect of doing normal things. The third is that increasing the EV for future people is something we should care about.

If all of these are true, then it should be true that we don't need longtermism, I think?

Replies from: therealslimkt
comment by Karthik Tadepalli (therealslimkt) · 2022-08-12T17:09:42.203Z · EA(p) · GW(p)

Yes, if you showed that longtermism does not increase the EV of decisions for future people relative to doing normal things, that would be a strong argument against longtermism.

comment by kbog · 2022-08-11T12:04:24.497Z · EA(p) · GW(p)

Some comments on "the road to hell is paved with good intentions"

This podcast is kind of relevant: Tom Moynihan on why prior generations missed some of the biggest priorities of all - 80,000 Hours (80000hours.org)

So people in the Middle Ages believed that the best thing was to save more souls, but I don't think that exactly failed. That is, if a man's goal was to have more people believe in Christianity, and he went with sincerity in the Crusades or colonial missionary expeditions, he probably did help achieve that goal.

Likewise, for people in the 1700s, 1800s and early 1900s, when the dominant paradigm shifted to one of human progress, I think people could reliably find ways to improve long-term progress. New science and technology, liberal politics, etc all would have been straightforward and effective methods to get humanity further on the track of rising population, improved quality of life, and scientific advancement.

Point is, I think people have always tended to be significantly more right than wrong about how to change the world. It's not too too hard to understand how one person's actions might contribute to an overriding global goal. The problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant/repetitive/decaying and just a prelude to the afterlife. The second paradigm was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both these paradigms, and instead we have a view of precarity - that an incredibly good future is in sight but only if we proceed with caution, wisdom, good institutions and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.

I don't mean to particularly agree or disagree with your original post, I just think this is a helpful clarification of the point.

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T03:18:44.743Z · EA(p) · GW(p)

Point is, I think people have always tended to be significantly more right than wrong about how to change the world. It's not too too hard to understand how one person's actions might contribute to an overriding global goal. The problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant/repetitive/decaying and just a prelude to the afterlife. The second paradigm was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both these paradigms, and instead we have a view of precarity - that an incredibly good future is in sight but only if we proceed with caution, wisdom, good institutions and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.

 

I like this description of your viewpoint a lot! The entire paradigm for "good outcomes" may be wrong. And we are unlikely to be aware of our paradigm due to "fish in water" perspective problems.

comment by Noah Scales · 2022-08-11T21:33:43.600Z · EA(p) · GW(p)

Interesting write-up, thanks!

Elsewhere in his article, Parfit discusses probability discounting, which he demonstrates does not always correlate with time-based discounting. I think probability discounting is closer to the intuition you have regarding the irrelevance of Cleopatra's spending on dessert versus whether cures for cancer exist now.

Parfit's example involving Cleopatra is meant to show that, for example, spending on Cleopatra's dessert would be ranked by a policymaker of her government as having higher value than spending whose payoff (curing cancer) would not accrue until presumably thousands of years later, if the policymaker used temporal discounting to make the comparison.

Your line of argument against longtermism, that today's actions have increasingly uncertain far-future outcomes, might coincide with the belief that probability discounting is a good thing even though time-based discounting is, as Parfit claims, nonsense. However, I see a dilemma: assuming that Cleopatra's government expected her empire to continue indefinitely, its policymakers could allocate money during Cleopatra's time toward identifying and curing human disease over the long term. It would be a reasonable expectation on the policymakers' part that the empire's chances of discovering cures for cancer would go up, not down, over a few thousand years.

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T04:19:58.784Z · EA(p) · GW(p)

Agreed, "probability discounting" is the most accurate term for this. Also, I struck out the part about Cleopatra in the original post, now that I understand the point behind it!

comment by smountjoy · 2022-08-11T07:14:40.111Z · EA(p) · GW(p)

If it does not serve any useful purpose, then why focus on longtermism?

I think you're right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appeal to longtermism. But here's one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important; and because of those arguments, we have many talented people working zealously to solve those issues—people who would otherwise be working on other things.

Obviously this doesn't address your concern that longtermism is incorrect; it's merely a reason why, if longtermism is correct, it's a useful thing to talk about.

comment by acylhalide (Samuel Shadrach) · 2022-08-11T07:08:23.767Z · EA(p) · GW(p)

re: 1960 AD and nuclear power plants

It is still not obvious in 2022 whether abundant nuclear power plants are net good - namely, whether the extra energy we obtain is worth weapons proliferation and other risks. Or at least it isn't obvious to me, and EA doesn't seem to have consensus on this either (yet).

Just wished to point this out - if you're looking for less controversial examples.

Replies from: kbog
comment by kbog · 2022-08-11T11:43:13.285Z · EA(p) · GW(p)

I recall the Founder's Pledge report on climate change some years ago discussed nuclear proliferation from nuclear energy and it seemed like nuclear power plants could equally promote proliferation or work against it (the latter by using up the supply of nuclear fuel). Considering how many lives have been taken by fossil fuels, I feel it's clear that nuclear energy has been net good. That said I have a hard time believing that a longtermist in the 1960s would oppose nuclear power plants.

Not that I disagree with the general idea that if you imagine longtermists in the past, they could have come up with a lot of neutral or even harmful ideas.

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-08-11T13:05:00.377Z · EA(p) · GW(p)

Yup, no idea what a longtermist in 1960 would've done. I would be keen on reading the report though. I'm also interested in future-looking policy - for instance if nuclear power plants become prevalent in a much larger set of countries than they are today. If they are indeed net positive it does seem useful to establish consensus that that is so! Right now the topic has become politicised but at the same time I feel there isn't good enough technical material for someone to read and go "yes this is obviously net good".

Replies from: kbog
comment by kbog · 2022-08-11T13:53:21.987Z · EA(p) · GW(p)

Here is the report (at first I'd been unable to find it).

If they are indeed net positive it does seem useful to establish consensus that that is so!

At this section of my policy platform I have compiled sources with all the major arguments I could find regarding nuclear power. Specifically, under the heading "Fission power should be supported although it is expensive and not necessary"

https://happinesspolitics.org/platform.html#cleanenergy

I think with this compilation of pros/cons, and a background understanding that fossil fuel use is harmful, it is easy to see that nuclear is at least better than using fossil fuels.

comment by Moritz von Knebel · 2022-08-11T06:30:30.034Z · EA(p) · GW(p)

Out of curiosity: the phrase "Past performance is not indicative of future results" is often brought up when doing the kind of historical analysis you are presenting.

How much do you think this applies here? Would things look different if we had an Effective Altruism Movement centuries ago?

Replies from: Brian Lui
comment by Brian Lui · 2022-08-12T03:26:43.632Z · EA(p) · GW(p)

Effective Altruism Movements in the past could have a wide range of results. For example, the Fabian Society might be an example of a positive impact. In the same time period, Communism would be another output of such a movement.

I think past performance is generally indicative of future results. Unless you have a good reason to think that 'this time is different', and you have a thesis for why the differences will lead to a materially changed outcome, it's better to use the past as the base case.

comment by Brian Lui · 2022-08-12T04:18:17.847Z · EA(p) · GW(p)

I just found this forum post [EA · GW] which is talking about the same ballpark of things! Mostly agree with the forum post too.

comment by Benjamin Start · 2022-08-11T07:24:16.929Z · EA(p) · GW(p)

Only read the summary. I agree.