post by [deleted]
score: 0 (0 votes)
comment by RyanCarey · score: 12 (12 votes)
Re page 9, I think the talk of a civilization maintaining exponential growth is unconvincing. A civilization's growth should ultimately be bounded cubically (it expands outward like a sphere at bounded speed), whereas survival probability decays exponentially under any constant per-period risk. Exponentials in general defeat polynomials, giving finite expected value in the limit of t, regardless of the parameters.
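This convergence claim can be checked numerically. A minimal sketch, assuming a cubic value curve t³ and a constant hazard rate r (both are illustrative choices, not specifics from the comment):

```python
import math

def expected_value(r: float, horizon: float, dt: float = 0.001) -> float:
    """Riemann-sum approximation of the integral of t^3 * exp(-r*t) dt.

    t^3 models cubic (sphere-like) growth in value; exp(-r*t) is the
    survival probability under a constant hazard rate r. The closed form
    is Gamma(4)/r^4 = 6/r^4, so the integral is finite for any r > 0.
    """
    total = 0.0
    t = 0.0
    while t < horizon:
        total += (t ** 3) * math.exp(-r * t) * dt
        t += dt
    return total

# With r = 1 the integral converges to 6; extending the horizon past
# ~50 changes essentially nothing, illustrating the finite-EV point.
print(expected_value(r=1.0, horizon=50.0))
```

Pushing the horizon from 50 to 100 leaves the result unchanged to many decimal places: the exponential survival factor swamps the polynomial growth in value.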
comment by Carl_Shulman · score: 9 (9 votes)
That's our best understanding.
But there is then an argument on this account to attend to whatever small credence one may have in indefinite exponential growth in value: e.g. if you could build utility monsters such that every increment of computational power lets them add another morally important order of magnitude to their represented utility, or if hypercomputers were somehow possible, or if we could create baby universes.
comment by JanBrauner · score: 5 (5 votes)
You write: "In this discussion, there are two considerations that might at first have appeared to be crucial, but turn out to look less important. The first such consideration is whether existence is in general good or bad, à la Benatar (2008). If existence really should turn out to be a harm, sufficiently unbiased descendants would plausibly be able to end it. This is the option value argument. In turn, option value itself might appear to be a decisive argument against doing something so irreversible as ending humanity: we should temporise, and delegate this decision to our descendants. But not everyone enjoys option value, and those who suffer are relatively less likely to do so. If our descendants are selfish, and find it advantageous to allow the suffering of powerless beings, we may not wish to give them option value. If our descendants are altruistic, we do want civilisation to continue, but for reasons that are more general than option value."
Since the option value argument is not very strong, the question "whether existence in general is good or bad" (or, less dichotomously, where the threshold for a life worth living lies) seems to be a very important consideration after all. Space colonization means more (sentient) beings. If our descendants are altruistic (or have values that we, upon reflection, would endorse), everything is fine anyway. If our descendants are selfish and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don't actively value causing harm, which seems unlikely). But if they are selfish and the threshold is fairly high, i.e. a lot of things in a life have to go right in order to make the life worth living, then most powerless beings will probably have bad lives, possibly rendering overall utility negative.