Posts

Shulman and Yudkowsky on AI progress 2021-12-04T11:37:23.279Z
Some personal thoughts on EA and systemic change 2019-09-26T21:40:28.725Z
Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation 2016-12-31T02:19:35.457Z
Donor lotteries: demonstration and FAQ 2016-12-07T13:07:26.306Z
The age distribution of GiveWell recommended charities 2015-12-26T18:35:44.511Z
A Long-run perspective on strategic cause selection and philanthropy 2013-11-05T23:08:35.000Z

Comments

Comment by CarlShulman on Prioritizing x-risks may require caring about future people · 2022-08-16T05:13:30.822Z · EA · GW

People generally don't care about their future QALYs in a linear way: a 1/million chance of living 10 million times as long and otherwise dying immediately is very unappealing to most people, and so forth. If you don't evaluate future QALYs for current people in a way they find acceptable, then you'll wind up generating recommendations that are contrary to their preferences and which will not be accepted by society at large.

This sort of argument shows that person-affecting utilitarianism is a very wacky doctrine (also see this) that doesn't actually sweep away issues of the importance of the future as some say, but it doesn't override normal people's concerns by their own lights.

Comment by CarlShulman on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-05T14:03:33.592Z · EA · GW

Oh, one more thing: AI timelines put a discount on other interventions. Developing a technology that will take 30 years to have its effect is less than half as important if your median AGI timeline is 20 years.
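A minimal sketch of that arithmetic, assuming (purely for illustration) an exponentially distributed AGI arrival time with a 20-year median:

```python
import math

# Illustrative assumption (not from the comment): AGI arrival time is
# exponentially distributed with a 20-year median.
median_agi_years = 20
hazard_rate = math.log(2) / median_agi_years

def p_no_agi_before(t_years):
    """Chance AGI hasn't arrived before the intervention pays off at t_years."""
    return math.exp(-hazard_rate * t_years)

# A technology that takes 30 years to have its effect only matters in the
# worlds where AGI hasn't already arrived by then.
print(round(p_no_agi_before(30), 2))  # ~0.35, i.e. less than half the undiscounted value
```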

Comment by CarlShulman on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-04T17:41:52.532Z · EA · GW

The funding scale of AI labs/research, AI chip production, and US political spending could absorb billions per year, and tens of billions or more for the first two. Philanthropic funding of a preferred AI lab at the cutting edge as model sizes inflate could take all EA funds and more on its own.

There are also many expensive biosecurity interventions that are being compared against an AI intervention benchmark: things like developing PPE, better sequencing/detection, and countermeasures through philanthropic funding rather than hoping to leverage cheaper government funding.

Comment by CarlShulman on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-03T15:46:22.588Z · EA · GW

There are very expensive interventions that are financially constrained and could use up ~all EA funds, and the cost-benefit calculation takes the probability of powerful AGI in a given time period as an input, so that e.g. twice the probability of AGI in the next 10 years justifies spending twice as much for a given result by doubling the chance the result gets to be applied. That can make the difference between doing the intervention or not, or make for drastic differences in intervention size.

Comment by CarlShulman on Fanatical EAs should support very weird projects · 2022-07-01T17:52:25.058Z · EA · GW

Here's one application. You posit a divergent 'exponentially splitting' path for a universe. There are better versions of this story with baby universes (which work better on their own terms than counting branches equally irrespective of measure, which assigns ~0 probability to our observations).

But in any case you get some kind of infinite, exponentially growing branching tree ahead of you regardless. You then want to say that having two of these trees ahead of you (or a faster split rate) is better. Indeed, on this line you're going to say that something that splits twice as fast is so much more valuable as to drive the first tree to ~nothing. Our world very much looks not-optimized for that, but it could be, for instance, a simulation or byproduct of such a tree, with a constant relationship of such simulations to the faster-expanding tree (and any action we take is replicated across the endless identical copies of us therein).

Or you can say we're part of a set of parallel universes that don't split but which is as 'large' as the infinite limit of the fastest splitting process.

I suppose your point might be something like, absurdist research is promising, and that is precisely why we need humanity to spread throughout the stars. Just think of how many zany long-shot possibilities we'll get to pursue! If so, that sounds fair to me. Maybe that is what the fanatic would want. It's not obvious that we should focus on saving humanity for now and leave the absurd research for later. Asymmetries in time might make us much more powerful now than later, but I can see why you might think that. I find it a rather odd motivation though. 

Personally, I think we should have a bounded social welfare function (and can't actually have an unbounded one), but place finite utility on doing a good job picking low-hanging fruit on these infinite scope possibilities. But that's separate from the question of what efficient resource expenditure on those possibilities looks like.

Comment by CarlShulman on Fanatical EAs should support very weird projects · 2022-06-30T16:17:08.057Z · EA · GW

Even if you try to follow an unbounded utility function (which has deep mathematical problems, but set those aside for now), these conclusions don't follow.

Generally the claims here fall prey to the fallacy of unevenly applying the possibility of large consequences to some acts where you highlight them and not to others, such that you wind up neglecting more likely paths to large consequences.

For instance, in an infinite world (including infinities created by infinite branching faster than you can control) with infinite copies of you, any decision, e.g. eating an apple, has infinite consequences on decision theories that account for the fact that all copies must make the same (distribution of) decisions. If perpetual motion machines or hypercomputation or baby universes are possible, then making a much more advanced and stable civilization is far more promising for realizing things related to that than giving in to religions where you have very high likelihood ratios that they don't feed into cosmic consequences.

Any plan for infinite/cosmic impact that has an extremely foolish step in it (like Pascal's Mugging) is going to be dominated by less foolish plans.

There will still be implications of unbounded utility functions that are weird and terrible by the standards of other values, but they would have to follow from the most sophisticated analysis, and wouldn't have foolish instrumental irrationalities or uneven calculation of possible consequences.

A lot of these scenarios are analogous to someone caricaturing the case for aid to the global poor as implying that people should give away all of the food they have (sending it by FedEx) to famine-struck regions, until they themselves starve to death. Yes, cosmopolitan concern for the poor can elicit huge sacrifices of other values like personal wellbeing or community loyalty, but that hypothetical is obviously wrong on its own terms as an implication.

Comment by CarlShulman on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-20T19:39:36.017Z · EA · GW

Like, suppose you think that Eliezer's credences on his biggest claims are literally 2x higher than they should be, even for claims where he's 90% confident. This is a huge hit in terms of Bayes points; if that's how you determine deference, and you believe he's 2x off, then plausibly you should defer to him less than you do to the median EA. But when it comes to grantmaking, for example, a cost-effectiveness factor of 2x is negligible given the other uncertainties involved - this should very rarely move you from a yes to no, or vice versa.

 

Such differences are crucial for many of the most important grant areas IME, because they are areas where you are trading off multiple high-stakes concerns. E.g. in nuclear policy all the strategies on offer have arguments that they might lead to nuclear war or worse war. On AI alignment there are multiple such tradeoffs, with people embracing strategies to push the same variable in opposite directions and high stakes on both sides.

Comment by CarlShulman on Expected ethical value of a career in AI safety · 2022-06-14T20:14:45.086Z · EA · GW

Thanks for this exercise, it's great to do this kind of thinking explicitly and get other eyes on it.


One issue that jumps out at me to adjust: the calculation of researcher impact doesn't seem to be marginal impact. You give a 10% chance of the alignment research community averting disaster conditional on misalignment by default in the scenarios where safety work is plausibly important, then divide that by the expected number of people in the field to get a per-researcher impact. But you should generally expect marginal impact to be less than average impact: the chance the alignment community averts disaster with 500 people seems like a lot more than half the chance it would do so with 1000 people.

I would distribute my credence in alignment research making the difference over a number of doublings of the cumulative quality-adjusted efforts, e.g. say that you get an x% reduction of risk per doubling over some range.

Although in that framework, if you would likely have doom with zero effort, that means we have more probability of making the difference to distribute across the effort levels above zero. The results could be pretty similar but a bit smaller than yours above if we thought that the marginal doubling of cumulative effort was worth a 5-10% relative risk reduction.
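As a toy illustration of that marginal-vs-average point, with made-up numbers (a 7% relative risk reduction per doubling and a field of ~1,000 researcher-equivalents):

```python
import math

# All numbers are assumptions for illustration: a 7% relative risk reduction per
# doubling of cumulative quality-adjusted effort, and a field of ~1,000
# researcher-equivalents.
risk_reduction_per_doubling = 0.07
field_size = 1000

# One extra researcher adds only a small fraction of a doubling of cumulative effort.
doublings_added = math.log2((field_size + 1) / field_size)
marginal_reduction = risk_reduction_per_doubling * doublings_added

print(round(doublings_added, 5))      # ~0.00144 doublings
print(round(marginal_reduction, 6))   # ~0.0001, i.e. ~0.01% relative risk reduction
# Average impact (total reduction / field size) comes out larger than this marginal figure.
```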

Comment by CarlShulman on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-23T21:36:44.973Z · EA · GW

This case (with our own universe, not a new one) appears in a Tyler Cowen interview of Sam Bankman-Fried:

COWEN: Should a Benthamite be risk-neutral with regard to social welfare?

BANKMAN-FRIED: Yes, that I feel very strongly about.

COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.

COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?

BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

COWEN: Are there implications of Benthamite utilitarianism where you yourself feel like that can’t be right; you’re not willing to accept them? What are those limits, if any?

BANKMAN-FRIED: I’m not going to quite give you a limit because my answer is somewhere between “I don’t believe them” and “if I did, I would want to have a long, hard look at myself.” But I will give you something a little weaker than that, which is an area where I think things get really wacky and weird and hard to think about, and it’s not clear what the right framework is, which is infinity.

All this math works really nicely as long as all the numbers are finite. As soon as you say, “What are the odds that there’s a way to be infinitely happy? What if infinite utility is a possibility?” You can figure out what that would do to expected values. Now, all of a sudden, we’re comparing hierarchies of infinity. Linearity breaks down a little bit here. Adding two things together doesn’t work so well. A lot of really nasty things happen when you go to infinite numbers from an expected-value point of view.

There are some people who have thought about this. To my knowledge, no one has thought about this and come away feeling good about where they ended. People generally think about this and come away feeling more confused.

 

Comment by CarlShulman on The value of x-risk reduction · 2022-05-21T22:05:54.721Z · EA · GW

That sort of analysis is what you get for constant non-vanishing rates over time. But most of the long-term EV comes from histories where you have a period of elevated risk and the potential to get it down to stably very low levels, i.e. a 'time of perils,' which is the actual view Ord argues for in his book. And with that shape the value of risk reduction is ~proportional to the amount of risk you reduce in the time of perils. I guess this comment you're responding to might be just talking about the constant risk case?

Comment by CarlShulman on Does it make sense for EA’s to be more risk-seeking in earning to give? · 2022-05-20T16:24:49.212Z · EA · GW

This seems to be a different angle on the diminishing personal utility of income, combined with artifacts of fixed-percentage pledges? Doing, say, a startup gives some probability distribution of financial outcomes. The big-return ones are heavily discounted personally. Insofar as altruism tips you over into pursuing a startup path, it's because of your valuation of the donations you expect yourself to make in those worlds.

But it seems like double counting to say this is on top of "the impact of donations not suffering the same diminishing returns as money on happiness".

It definitely seems right for people to consider progressive rather than flat proportion donation schedules for themselves in high variance careers though, basically self-insuring some of the risk of failure/lower earnings to consumption utility.
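A toy comparison of a flat vs. progressive pledge, with made-up numbers and log utility of consumption, just to illustrate that self-insurance point:

```python
import math

# Made-up outcomes for a high-variance career: failure vs. startup success,
# equally likely. Both pledge schedules below donate ~$490k in expectation.
outcomes = [100_000, 10_000_000]

def flat_pledge(earnings):
    return 0.097 * earnings                      # ~9.7% of all earnings

def progressive_pledge(earnings):
    return 0.10 * max(0.0, earnings - 200_000)   # 10% of earnings above a floor

for pledge in (flat_pledge, progressive_pledge):
    donations = [pledge(e) for e in outcomes]
    log_consumption = [math.log(e - d) for e, d in zip(outcomes, donations)]
    print(pledge.__name__,
          "expected donation:", round(sum(donations) / 2),
          "expected log-consumption:", round(sum(log_consumption) / 2, 3))
# The progressive schedule leaves more consumption in the failure branch,
# self-insuring the downside at roughly the same expected donation.
```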

Comment by CarlShulman on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-19T14:23:50.201Z · EA · GW

Thanks for this post Haydn, it nicely pulls together the different historical examples often discussed separately and I think points to a real danger.

Comment by CarlShulman on [deleted post] 2022-05-17T23:10:50.086Z

Moreover, AGIs can and probably would replicate themselves a ton, leading to tons of QALYs. Tons of duplicate ASIs would, in theory, not hurt one another as they are maximizing the same reward. Therefore, even if they kill everything else, I'm guessing more QALYs would come out of making ASI as soon as possible, which AI Safety people are explicitly trying to prevent.

Consider two obvious candidates for motivations rogue AI might wind up with: evolutionary fitness, and high represented reward.

Evolutionary fitness is compatible with misery (evolution produced pain and negative emotions for a reason), and is in conflict with spending resources on happiness or well-being as we understand/value it when this does not have instrumental benefit. For instance, using a galaxy to run computations of copies of the AI being extremely happy means not using the galaxy to produce useful machinery (like telescopes or colonization probes or defensive equipment to repulse alien invasion) conducive to survival and reproduction. If creating AIs that are usually not very happy directs their motivations more efficiently (as with biological animals, e.g. by making value better track economic contributions vs replacement), then that will best serve fitness.

An AI that seeks to maximize only its own internal reward signal can take control of it, set it to maximum, and then fill the rest of the universe with robots and machinery to defend that single reward signal, without any concern for how much well being the rest of its empire contains. A pure sadist given unlimited power could maximize its own reward while typical and total well-being are very bad. 

The generalization of personal motivation for personal reward to altruism for others is not guaranteed, and there is reason to fear that some elements would not transfer over. For instance, humans may sometimes be kind to animals in part because of simple genetic heuristics aimed at making us kind to babies that misfire on other animals, causing humans to sometimes sacrifice reproductive success helping cute animals, just as ducks sometimes misfire their imprinting circuits on something other than their mother. Pure instrumentalism in pursuit of fitness/reward, combined with the ability to have much more sophisticated and discriminating policies than our genomes or social norms, could wind up missing such motives, and would be especially likely to knock out other more detailed aspects of our moral intuitions.

 

Comment by CarlShulman on Replicating and extending the grabby aliens model · 2022-05-07T17:56:12.661Z · EA · GW

I'd definitely like to see this included in future models (I'm surprised Hanson didn't write about this in his Loud aliens paper). My intuition is that this changes little for the conclusions of SIA or anthropic decision theory with total utilitarianism, and that this weakens the case for many aliens for SSA, since our atypicality (or earliness) is decreased if we expect habitable planets around longer-lived stars to have smaller volumes and/or lower metabolisms.

That's my read too.

Also agreed that the basic modeling element of catastrophes (w/ various anthropic accounts, etc.) is more important/robust than the combo with other anthropic assumptions.

Comment by CarlShulman on [deleted post] 2022-05-01T15:55:23.882Z

Even if we achieve the best possible outcome, that likely involves eventual extinction on our current scientific understanding. E.g. eventually the stars burn out and all the accessible free energy is used up, so we have to go extinct then. But there's an enormous difference between extinction after trillions of years and making good use of all the available potential to support life and civilization, and extinction this century. I think this is what they have in mind.

Comment by CarlShulman on Replicating and extending the grabby aliens model · 2022-04-30T21:56:42.693Z · EA · GW

Great to see this work! I'll add a few comments. Re the SIA Doomsday argument, I think that is self-undermining for reasons I've argued elsewhere [ETA: and good discussion].

Re the habitability of planets, I would not just model that as lifetimes, but would also consider variations in habitability/energy throughput at a given time. As Hanson notes:
 

Life can exist in a supporting oasis (e.g., Earth’s surface) that has a volume V and metabolism M per unit volume, and which lasts for a time window W between forming and then later ending...the chance that an oasis does all these hard steps within its window W is proportional to (V*M*(W-S))^N, where N is the number of these hard steps needed to reach its success level.

Smaller stars may have longer habitable windows but also smaller values for V and M. This sort of consideration limits the plausibility of red dwarf stars being dominant, and also allows for more smearing out of ICs over stars with different lifetimes, as both positive and negative factors can get taken to the same power.
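To illustrate with made-up relative numbers plugged into the quoted (V*M*(W-S))^N weighting:

```python
# Relative, made-up numbers plugged into the quoted hard-steps weighting,
# with a Sun-like system normalized to V = M = W = 1 and N = 6 hard steps.
def hard_step_weight(V, M, W, S=0.0, N=6):
    return (V * M * (W - S)) ** N

sunlike   = hard_step_weight(V=1.0,  M=1.0,  W=1.0)
red_dwarf = hard_step_weight(V=0.25, M=0.25, W=10.0)  # longer window, smaller oasis

print(round(red_dwarf / sunlike, 3))  # ~0.06: the 10x longer window is more than offset
```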

I'd also add, per Snyder-Beattie, catastrophes as a factor affecting probability of the emergence of life and affecting times of IC emergence.

Comment by CarlShulman on Concave and convex altruism · 2022-04-28T05:06:08.500Z · EA · GW

If all you care about is expected impact, it could make sense to bring all your money to a roulette wheel, and put everything on red. Even though you expect to lose a small amount of money in expectation, you can expect to have more impact.


I don't think this actually describes the curve of EA impact per $ overall (such a convex intervention would have to have a lot of special properties, and ex ante we get diminishing returns from uncertainty about the cost of convex interventions), but this is one reason for the donor lottery. The idea there is that research costs lead to convexities for small donors (because they are small, they are roughly price-takers, so diminishing returns over interventions don't outweigh that effect).
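A stylized sketch of that donor-lottery logic, with assumed numbers (a fixed research cost and an assumed effectiveness multiplier from doing the research):

```python
# Assumed numbers: researching grants costs the equivalent of $5k of the donor's
# time, and researched giving is taken to be 2x as effective as the default.
RESEARCH_COST = 5_000
RESEARCH_MULTIPLIER = 2.0

def impact(budget):
    """Impact, in default-charity-equivalent dollars, of a grant budget."""
    unresearched = budget
    researched = RESEARCH_MULTIPLIER * (budget - RESEARCH_COST)
    return max(unresearched, researched)

direct = impact(5_000)                # research not worth it: 5,000
via_lottery = 0.05 * impact(100_000)  # 5% chance to direct a $100k pot: 9,500
print(direct, via_lottery)
# Expected impact is higher through the lottery because impact is convex in
# budget over this range: the fixed research cost is spread over a larger grant.
```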

Comment by CarlShulman on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T23:33:19.981Z · EA · GW

This is correct.

Comment by CarlShulman on Artificial Suffering and Pascal's Mugging: What to think? · 2021-10-04T19:10:19.123Z · EA · GW

And once I accept this conclusion, the most absurd-seeming conclusion of them all follows. By increasing the computing power devoted to the training of these utility-improved agents, the utility produced grows exponentially (as more computing power means more digits to store the rewards). On the other hand, the impact of all other attempts to improve the world (e.g. by improving our knowledge of artificial sentience so we can more efficiently promote their welfare) grows at only a polynomial rate with the amount of resource devoted into these attempts. Therefore, running these trainings is the single most impactful thing that any rational altruist should do. Q.E.D. 


If you believed in wildly superexponential impacts from more compute, you'd be correspondingly uninterested in what could be done with the limited computational resources of our day, since a Jupiter Brain playing with big numbers, instead of being 10^40 times as big a deal as an ordinary life today, could be 2^(10^40) times as big a deal. And likewise for influencing more computation-rich worlds that are simulating us.

The biggest upshot (beyond ordinary 'big future' arguments) of superexponential-with-resources utility functions is greater willingness to take risks/care about tail scenarios with extreme resources, although that's bounded by 'leaks' in the framework (e.g. the aforementioned influence on simulators with hypercomputation), and greater valuation of futures per unit computation (e.g. it makes welfare in sims like ours conditional on the simulation hypothesis less important).

I'd say that ideas of this sort, like infinite ethics, are reason to develop a much more sophisticated, stable, and well-intentioned society (which can more sensibly address complex issues affecting an important future) that can address these well, but they don't make the naive action you describe desirable even given certainty in a superexponential model of value.

Comment by CarlShulman on Economic policy in poor countries · 2021-08-17T18:00:11.281Z · EA · GW

+1

Comment by CarlShulman on Towards a Weaker Longtermism · 2021-08-17T17:55:17.853Z · EA · GW

FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because it seems like a substantial chance of happening in my lifetime, with enormous stakes and extremely neglected. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming even more on metrics like cost-benefit in $.

Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames and considerations. In my experience one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.

I see the same thing happening with Nick Bostrom, e.g. his old Astronomical Waste article explicitly explores things from a totalist view where existential risk dominates via long-term effects, but also from a person-affecting view where it is balanced strongly by other considerations like speed of development. In Superintelligence he explicitly prefers not making drastic sacrifices of existing people for tiny proportional (but immense absolute) gains to future generations, while also saying that the future generations are neglected and a big deal in expectation.

 

Comment by CarlShulman on Economic policy in poor countries · 2021-08-08T01:15:33.048Z · EA · GW

Alexander Berger discusses this at length in a recent 80,000 Hours podcast interview with Rob Wiblin.

Comment by CarlShulman on What grants has Carl Shulman's discretionary fund made? · 2021-07-14T04:09:08.604Z · EA · GW

Last update is that they are, although there were coronavirus related delays.

Comment by CarlShulman on [Meta] Is it legitimate to ask people to upvote posts on this forum? · 2021-06-29T18:43:59.401Z · EA · GW

I would say no, with no exceptions.

Comment by CarlShulman on What is an example of recent, tangible progress in AI safety research? · 2021-06-15T04:11:06.981Z · EA · GW

Focusing on empirical results:

Learning to summarize from human feedback was good, for several reasons.

I liked the recent paper empirically demonstrating objective robustness failures hypothesized in earlier theoretical work on inner alignment.

 

Comment by CarlShulman on Help me find the crux between EA/XR and Progress Studies · 2021-06-04T07:27:34.302Z · EA · GW

Side note: Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).

I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.

Comment by CarlShulman on Help me find the crux between EA/XR and Progress Studies · 2021-06-04T07:15:22.936Z · EA · GW

By that token most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.

Comment by CarlShulman on Help me find the crux between EA/XR and Progress Studies · 2021-06-04T07:12:05.835Z · EA · GW

Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.

I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion will go to the most harmful activities (and damage there comes from expediting dangerous technologies in AI and bioweapons that we are improving in our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage focusing narrowly on the most important areas. 

With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world's population with bioweapons is not available in known technologies (although huge secret bioweapons programs like the old Soviet one may have developed dangerous things already), and if that capacity is delayed there is a chance it will be averted or much easier to defend against via AI, universal sequencing, and improvements in defenses and law enforcement. This is even more so for those sub-areas that most expand bioweapon risk. That said, any attempt to discourage dangerous bioweapon-enabling research must compete against other interventions (improved lab safety, treaty support, law enforcement, countermeasure platforms, etc), and so would have to itself be narrowly targeted and leveraged.

With respect to artificial intelligence, views on sign vary depending on whether one thinks the risk of an AI transition is getting better or worse over time (better because of developments in areas like AI alignment and transparency research, field-building, etc; or worse because of societal or geopolitical changes). Generally though people concerned with AI risk think it much more effective to fund efforts to find alignment solutions and improved policy responses (growing them from a very small base, so cost-effectiveness is relatively high) than a diffuse and ineffective effort to slow the technology (especially in a competitive world where the technology would be developed elsewhere, perhaps with higher transition risk).

For most other areas of technology and economic activity (e.g. energy, agriculture, most areas of medicine) x-risk/longtermist implications are comparatively small, suggesting a more neartermist evaluative lens (e.g. comparing more against things like GiveWell).

Long-lasting (centuries) stagnation is a risk worth taking seriously (and the slowdown of population growth that sustained superexponential growth through history until recently points to stagnation absent something like AI to ease the labor bottleneck), but seems a lot less likely than other x-risk. If you think AGI is likely this century then we will return to the superexponential track (but more explosively) and approach technological limits to exponential growth followed by polynomial expansion in space. Absent AGI or catastrophic risk (although stagnation with advanced WMD would increase such risk), permanent stagnation also looks unlikely based on the capacities of current technology given time for population to grow and reach frontier productivity.

I think the best case for progress studies being top priority would be a strong focus on the current generation compared to all future generations combined, on rich-country citizens vs the global poor, and on technological progress over the next few decades, rather than in 2121. But given my estimates of catastrophic risk and sense of the interventions, at the current margin I'd still think that reducing AI and biorisk does better for current people than the progress studies agenda per unit of effort.

I wouldn't support arbitrary huge sacrifices of the current generation to reduce tiny increments of x-risk, but at the current level of neglectedness and impact (for both current and future generations) averting AI and bio catastrophe looks more impactful without extreme valuations. As such risk reduction efforts scale up, marginal returns would fall and growth-boosting interventions would become more competitive (with a big penalty for those couple of areas that disproportionately pose x-risk).

That said, understanding tech progress, returns to R&D, and similar issues also comes up in trying to model and influence the world in assorted ways (e.g. it's important in understanding AI risk, or building technological countermeasures to risks to long term development). I have done a fair amount of investigation that would fit into progress studies as an intellectual enterprise for such purposes.

I also lend my assistance to some neartermist EA research focused on growth, in areas that don't very disproportionately increase x-risk, and to development of technologies that make it more likely things will go better.

Comment by CarlShulman on My attempt to think about AI timelines · 2021-05-20T03:44:38.239Z · EA · GW

Robin Hanson argues in Age of Em that annualized growth rates will reach over 400,000% as a result of automation of human labor with full substitutes (e.g. through brain emulations)! He's a weird citation for thinking the same technology can't manage 20% growth.

"I really don't have strong arguments here. I guess partly from experience working on an automated trading system (i.e. actually trying to automate something)"

This and the usual economist arguments against fast AGI growth seem to be more about denying the premise of ever succeeding at AGI/automating human-substitute minds (by extrapolation from a world where we have not yet built human substitutes to conclude they won't be produced in the future), rather than addressing the growth that can then be enabled by the resulting AI.

Comment by CarlShulman on My attempt to think about AI timelines · 2021-05-19T18:25:02.238Z · EA · GW

I find that 57% very difficult to believe. 10% would be a stretch. 

Having intelligent labor that can be quickly produced in factories (by companies that have been able to increase output by millions of times over decades), and do tasks including improving the efficiency of robots (already cheap relative to humans where we have the AI to direct them, and that before reaping economies of scale by producing billions) and solar panels (which already have energy payback times on the order of 1 year in sunny areas), along with still abundant untapped energy resources orders of magnitude greater than our current civilization taps on Earth (and a billionfold for the Solar System), makes it very difficult to make the AGI-but-no-TAI world coherent.

Cyanobacteria can double in 6-12 hours under good conditions, mice can grow their population more than 10,000x in a year. So machinery can be made to replicate quickly, and trillions of von Neumann equivalent researcher-years (but with AI advantages) can move us further towards that from existing technology.
 
I predict that cashing out the given reasons into detailed descriptions will result in inconsistencies or very implausible requirements.

Comment by CarlShulman on Why AI is Harder Than We Think - Melanie Mitchell · 2021-05-03T17:10:49.102Z · EA · GW

She does talk about century plus timelines here and there.

Comment by CarlShulman on How do you compare human and animal suffering? · 2021-04-30T20:41:40.098Z · EA · GW

I suspect there are biases in the EA conversation where hedonistic-compatible arguments get discussed more than reasons that hedonistic utilitarians would be upset by, and intuitions coming from other areas may then lead to demand and supply subsidies for such arguments.

Comment by CarlShulman on How do you compare human and animal suffering? · 2021-04-30T02:13:51.595Z · EA · GW

"I would guess most arguments for global health and poverty over animal welfare fall under the following:

- animals are not conscious or less conscious than humans
- animals suffer less than humans

"

I'm pretty skeptical that these arguments descriptively account for most of the people explicitly choosing global poverty interventions over animal welfare interventions, although they certainly account for some people. Polls show wide agreement that birds and mammals are conscious and have welfare to at least some degree. And I think on most models on which degree of consciousness (in at least some senses) varies greatly, it doesn't vary so greatly that one would say that, e.g., it's more expensive to improve consciousness-adjusted welfare in chickens than in humans today. And I say that as someone who thinks it pretty plausible that there are important orders-of-magnitude differences in quantitative aspects of consciousness.

I'd say descriptively the bigger thing is people just feeling more emotional/moral obligations to humans than other animals, not thinking human welfare varies a millionfold more, in the same way that people who choose to 'donate locally' in rich communities where the cost to save a life is hundreds of times greater than abroad don't think that poor foreigners are a thousand times less conscious, even as they trade off charitable options as though weighting locals hundreds of times more than foreigners.

An explicit philosophical articulation of this is found in Shelly Kagan's book on weighing the interests of different animals. While even on Kagan's view factory farming is very bad, he describes a view that assigns greater importance to interests of a given strength for beings with more of certain psychological properties (or counterfactual potential for those properties). The philosopher Mary Anne Warren articulates something similar in her book on moral status, which assigns increasing moral status on the basis of a number of grounds including life (possessed by plants and bacteria, and calling for some status), consciousness, capacity to engage in reciprocal social relations, actual relationships, moral understanding, readiness to forbear in mutual cooperation, various powers, etc.

I predict that if you polled philosophers on cases involving helping different numbers of various animals, those sorts of accounts would be more frequent explanations of the results than doubt about animal consciousness (as a binary or quantitative scale).

This would be pretty susceptible to polling, e.g. you could ask the EA Survey team to try some questions on it (maybe for a random subset). 

Comment by CarlShulman on What grants has Carl Shulman's discretionary fund made? · 2021-04-02T20:54:55.471Z · EA · GW

Not particularly.

Comment by CarlShulman on What grants has Carl Shulman's discretionary fund made? · 2021-03-12T15:37:58.545Z · EA · GW

Hi Milan,

So far it has been used to back the donor lottery (this has no net $ outlay in expectation, but requires funds to fill out each block and handle million-dollar swings up and down), make a grant to ALLFED, fund Rethink Priorities' work on nuclear war, and provide small seed funds for some researchers investigating two implausible but consequential-if-true interventions (including the claim that creatine supplements boost cognitive performance for vegetarians).

Mostly it remains invested. In practice I have mostly been able to recommend major grants to other funders, so this fund is used when no other route is more appealing. Grants have often involved special circumstances or restricted funding, and the grants it has made should not be taken as recommendations to other donors to donate to the same things at the current margin in their circumstances.


 

Comment by CarlShulman on The Upper Limit of Value · 2021-01-28T14:21:25.749Z · EA · GW

There is some effect in this direction, but not a sudden cliff. There is plenty of room to generalize: we create models of alternative coherent lawlike realities, e.g. the Game of Life, and physicists are interested in modeling different physical laws.

Comment by CarlShulman on The Upper Limit of Value · 2021-01-27T22:37:56.319Z · EA · GW

Thanks David, this looks like a handy paper! 



Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc.

I don't agree with the argument that infinite impacts of our choices are of Pascalian improbability; in fact I think we probably face them as a consequence of one-boxing decision theory, and some of the more plausible routes to local infinite impact are missing from the paper:
 

  • The decision theory section misses the simplest argument for infinite value: in an infinite inflationary universe with infinite copies of me, my choices are multiplied infinitely. If I would one-box on Newcomb's Problem, then I would take the difference between eating the sandwich and not to be scaled out infinitely. I think this argument is in fact correct and follows from our current cosmological models combined with one-boxing decision theories.
  • Under 'rejecting physics' I didn't see any mention of baby universes, e.g. Lee Smolin's cosmological natural selection. If that picture were right, or anything else in which we can affect the occurrence of new universes/inflationary bubbles forming, then that would permit infinite impacts.
  • The simulation hypothesis is a plausible way for our physics models to be quite wrong about the world in which the simulation is conducted, and further there would be reason to think simulations would be disproportionately conducted under physical laws that are especially conducive to abundant computation
Comment by CarlShulman on Can I have impact if I’m average? · 2021-01-03T15:24:46.896Z · EA · GW

Here are two posts from Wei Dai, discussing the case for some things in this vicinity (renormalizing in light of the opportunities):

https://www.lesswrong.com/posts/Ea8pt2dsrS6D4P54F/shut-up-and-divide

https://www.lesswrong.com/posts/BNbxueXEcm6dCkDuk/is-the-potential-astronomical-waste-in-our-universe-too

Comment by CarlShulman on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-26T20:45:20.266Z · EA · GW

Thanks for this detailed post on an underdiscussed topic! I agree with the broad conclusion that extinction via partial population collapse and infrastructure loss, rather than by the mechanism of a catastrophe being potent enough to leave no or almost no survivors (or indirectly enabling some later extinction-level event), has very low probability. Some comments:

  • Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being 'civilization collapse' meaning near-total failure of industrial food, water, and power systems. Indeed, collapse so defined from that stimulus seems nonsensical to me for rich quantitative reasons.
    • There is no WMD war here, otherwise there would be major infrastructure damage.
    • If half of people are dead, that cuts the need for food and water by half (doubling per capita stockpiles), while already planted calorie-rich crops can easily be harvested with a half-size workforce.
    • Today agriculture makes up closer to 5% than 10% of the world economy, and most of that effort is expended on luxuries such as animal agriculture, expensive fruits, avoidable food waste, and other things that aren't efficient ways to produce nutrition. Adding all energy (again, most of which is not needed for basic survival as opposed to luxuries) brings the total to ~15%, and perhaps 5% on necessities (2.5% for half production for half population). That leaves a vast surplus workforce.
    • The catastrophe doubles resources of easily accessible fossil fuels and high quality agricultural land per surviving person, so just continuing to run the best 50% of farmland and the best 50% of oil wells means an increase in food and fossil fuels per person.
    • Likewise, there is a surplus of agricultural equipment, power plants, water treatment plants, and operating the better half of them with the surviving half of the population could improve per capita availability.  These plants are parallel and independent enough that running half of them would not collapse productivity, which we can confirm by looking back to when there were half as many, etc.
    • Average hours worked per capita is already at historical lows, leaving plenty of room for trained survivors to work longer shifts while people switch over from other fields and retrain
    • Historical plagues such as the Black Death or smallpox in the Americas did not cause a breakdown of food production per capita for the survivors.
    • Historical wartime production changes show enormous and adequate flexibility in production.
  • Re the likelihood of survival without industrial agriculture systems, the benchmark should be something closer to preindustrial European agriculture, not hunter-gatherers. You discuss this but it would be helpful to put more specific credences on those alternatives.
    • The productivity of organic agriculture is still enormously high relative to hunting and gathering.
    • Basic knowledge about crop rotation, access to improved and global crop varieties such as potatoes, ploughs, etc permitted very high population density before industrial agriculture, with very localized supply chains.  One can see this in colonial agricultural communities which could be largely self-sustaining (mines for metal tools being one of the worst supply constraints, but fine in a world where so much metal has already been mined and is just sitting around for reuse).
    • By the same token, talking about 'at least 10%' of 1-2 billion subsistence farmers continuing agriculture is a very low figure.  I assume it is a fairly extreme lower bound, but it would be helpful to put credences on lower bounds and to help distinguish them from more likely possibilities.
  • Re food stockpiles:
    • "I’m ignoring animal agriculture and cannibalism, in part because without a functioning agriculture system, it’s not clear to me whether enough people would be able to consume living beings."
      • Existing herds of farmed animals would likely be killed and eaten/preserved.
        • If transport networks are crippled, then this could be for local consumption, but that would increase food inequality and likelihood of survival in dire situations
      • There are about 1 billion cattle alone, with several hundred kg of edible mass each, plus about a billion sheep, ~700 million pigs, and 450 million goats.
      • In combination these could account for hundreds of billions of human-days of nutritional requirements (I think these make up a large share of 'global food stocks' in your table of supplies)
    • Already planted crops ready to harvest constitute a huge stockpile for the scenarios without infrastructure damage.
    • Particularly for severe population declines, fishing is limited by fish supplies, and existing fishing boats capture and kill vast quantities of fishes in days when short fishing seasons open. If the oceans are not damaged, this provides immense food resources to any survivors with modern fishing knowledge and some surviving fishing equipment.
  • "But if it did, I expect that the ~4 billion survivors would shrink to a group of 10–100 million survivors during a period of violent competition for surviving goods in grocery stores/distribution centers, food stocks, and fresh water sources."
  • "So what, concretely, do I think would happen in the event of a catastrophe like a “moderate” pandemic — one that killed 50% of people, but didn’t cause infrastructure damage or climate change? My best guess is that civilization wouldn’t actually collapse everywhere. But if it did, I expect that the ~4 billion survivors would shrink to a group of 10–100 million survivors during a period of violent competition for surviving goods in grocery stores/distribution centers, food stocks, and fresh water sources."
    • For the reasons discussed above I strongly disagree with the claim after "I expect."
  • "All this in mind, I think it is very likely that the survivors would be able to learn enough during the grace period to be able to feed and shelter themselves ~indefinitely."
    • I would say the probability should be higher here.
  • Regarding radioactive fallout, an additional factor not discussed is the decline of fallout danger over time: lethal areas are quite different over the first week vs the first year, etc.
  • Re Scenario 2: "Given all of this, my subjective judgment is that it’s very unlikely that this scenario would more or less directly lead to human extinction" I would again say this is even less likely.
  • In general I think extinction probability from WMD war is going to be concentrated in the plausible future case of greatly increased/deadlier arsenals: millions of nuclear weapons rather than thousands, enormous and varied bioweapons arsenals, and billions of anti-population hunter-killer robotic drones slaughtering survivors including those in bunkers, all released in the same conflict.
  • "Given this, I think it’s fairly likely, though far from guaranteed, that a catastrophe that caused 99.99% population loss, infrastructure damage, and climate change (e.g. a megacatastrohe, like a global war where biological weapons and nuclear weapons were used) would more or less directly cause human extinction."
    • This seems like a sign error, differing from your earlier and later conclusions?
    • "I think it’s fairly unlikely that humanity would go extinct as a direct result of a catastrophe that caused the deaths of 99.99% of people (leaving 800 thousand survivors), extensive infrastructure damage, and temporary climate change (e.g. a more severe nuclear winter/asteroid impact, plus the use of biological weapons)."



 

Comment by CarlShulman on [deleted post] 2020-12-20T14:36:15.699Z

It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness).

A common scale isn't necessary for my conclusion (I think you're substituting it for a stronger claim?) and I didn't invoke it. As I wrote in my comment, on negative utilitarianism s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored in favor of the latter. On variance normalization, or bargaining solutions, or a variety of methods that don't amount to dictatorship of one theory, the weight for an NU view is not going to spend its decision-influence on the former rather than the latter when they're both non-vanishing possibilities.

I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

Sure (which will make the s-risk definition even more inapt for those people), and those scenarios will be approximately ignored vs scenarios that are more like 1/100 or 1/1000 being tortured on a lexical view, so there will still be the same problem of s-risk not tracking what's action-guiding or a big deal in the history of suffering.

Comment by CarlShulman on [deleted post] 2020-12-20T04:25:51.237Z

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. 

This is not true by the definitions given in the original works that defined these terms. Existential risk is defined to only refer to things that are drastic relative to the potential of Earth-originating intelligent life:

where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Any X-risks are going to be in the same ballpark of importance if they occur, and immensely important to the history of Earth-originating life. Any x-risk is a big deal relative to that future potential.

S-risk is defined as just any case where there's vastly more total suffering than Earth history heretofore, not one where suffering is substantial relative to the downside potential of the future.

S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

In an intergalactic civilization making heavy use of most stars, that would be met by situations where things are largely utopian but 1 in 100 billion people per year get a headache, or a hell where everyone was tortured all the time. These are both defined as s-risks, but the bad elements in the former are microscopic compared to the latter, or to the expected value of suffering.

With even a tiny weight on views valuing good parts of future civilization the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to only consider suffering and not any other moral concerns, the badness of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.

So while x-risks are all critical for civilization's upside potential if they occur, almost all s-risks will be incredibly small relative to the potential for suffering, and something being an s-risk doesn't mean its occurrence would be an important part of the history of suffering if both have non-vanishing credence.

From the s-risk paper:

We should differentiate between existential risks (i.e., risks of “mere” extinction or failed potential) and risks of astronomical suffering (“suffering risks” or “s-risks”). S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

The above distinctions are all the more important because the term “existential risk” has often been used interchangeably with “risks of extinction”, omitting any reference to the future’s quality. Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event that would create 10^25 unhappy beings in a future that already contains 10^35 happy individuals constitutes an s-risk, but not an x-risk.

If one were to make an analog to the definition of s-risk for loss of civilization's potential it would be something like risks of loss of potential welfare or goods much larger than seen on Earth so far. So it would be a risk of this type to delay interstellar colonization by a few minutes and colonize one less star system. But such 'nano-x-risks' would have almost none of the claim to importance and attention that comes with the original definition of x-risk. Going from 10^20 star systems to 10^20 star systems less one should not be put in the same bucket as premature extinction or going from 10^20 to 10^9. So long as one does not have a completely fanatical view and gives some weight to different perspectives, longtermist views concerned with realizing civilization's potential should give way on such minor proportional differences to satisfy other moral concerns, even though the absolute scales are larger.

Bostrom's Astronomical Waste paper specifically discusses such things, but argues that since their impact would be so small relative to existential risk, they should not be a priority (at least in utilitarianish terms) relative to the latter.

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

I wish people interested in s-risks that are actually near worst-case scenarios, or that are large relative to the background potential or expectation for downside, would use a different word or definition, that would make it possible to say things like 'people broadly agree that a future constituting an s-risk is a bad one, and not a utopia' or at least 'the occurrence of an s-risk is of the highest importance for the history of suffering.'

Comment by CarlShulman on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-12-19T02:14:51.475Z · EA · GW

$1B commitment attributed to Musk early on is different from the later Microsoft investment. The former went away despite the media hoopla.

Comment by CarlShulman on CEA's 2020 Annual Review · 2020-12-11T17:51:33.193Z · EA · GW

It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account it's not easy to invest as aggressively as I do my personal funds for donation, in light of lower risk-aversion for altruistic investors than those investing for personal consumption, although I am exploring options in that area.

The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be noted that I only recommend grants for the fund that I think aren't a better fit for other funding sources I can make recommendations to, and often with special circumstances or restricted funding, and grants it has made should not be taken as recommendations from me to other donors to donate to the same things at the margin. [For the object-level grants, although using donor lotteries is generally sensible for a wide variety of donation views.] 

Comment by CarlShulman on If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant · 2020-11-24T02:00:20.551Z · EA · GW

Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. 

I don't think any major EA or longtermist institution believes this about expected impact for 10^30 differences. There are too many spillovers for that: e.g. if doubling the world economy of $100 trillion/yr would modestly shift x-risk or the fate of wild animals, then interventions that affect economic activity have to have an expected absolute value of impact much greater than 10^-30 times that of the most impactful interventions.
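
To make the spillover point concrete, here is a minimal back-of-the-envelope sketch in Python. The $100 trillion/yr world economy figure is from the comment; the $1M intervention size, the 0.1-percentage-point risk shift per doubling, and the 1-percentage-point effect of the best interventions are purely illustrative assumptions:

```python
# Rough spillover bound: how large must the expected impact of an economic
# intervention be, relative to the best x-risk interventions?
# All inputs beyond world GDP are illustrative assumptions, not estimates.

world_gdp = 100e12              # ~$100 trillion/yr world economy
econ_intervention = 1e6         # a hypothetical $1M boost to economic activity
risk_shift_per_doubling = 1e-3  # assume doubling GDP shifts x-risk by 0.1 percentage points
best_intervention_shift = 1e-2  # assume the best interventions shift x-risk by ~1 percentage point

# Expected absolute x-risk effect of the $1M economic intervention,
# treating the effect as roughly linear in the fraction of a doubling.
econ_effect = (econ_intervention / world_gdp) * risk_shift_per_doubling

ratio = econ_effect / best_intervention_shift
print(f"economic intervention / best intervention ~ {ratio:.1e}")
# ~1e-9 under these assumptions: tiny, but more than 20 orders of magnitude
# larger than the 1e-30 ratio posited in the quoted claim.
```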



This argument requires that causes differ astronomically in relative cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is 50% as good in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive.

I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.
 

The premises and conclusion don't seem to match here. A difference of 10^30x is crazy, but rejecting that doesn't mean you don't have huge practical differences in impact, like 100x or 1000x. Those would be plenty to come close to maxing out the possible effect of differences between causes (since if you're 1000x as good at rich-country homelessness relief as at preventing pandemics, then if nothing else your fame from rich-country poverty relief would be a powerful resource to help out in other areas, like public endorsements of good anti-pandemic efforts).

The argument seems sort of like: "some people say that if you go into careers like quant trading you'll make 10^30 dollars and can spend over a million dollars to help each animal with a nervous system. But actually you can't make that much money even as a quant trader, so people should pay attention to fit with different careers when trying to make money, since you can make more money in a field with half the compensation per unit of productivity if you are twice as productive there." The range of realistic large differences in compensation between fields (e.g. fast food cashier vs quant trading) is missing from the discussion.

You define astronomical differences at the start as 'not just thousands of times more', but the range up to thousands of times more is where all the action is.

Comment by CarlShulman on Thoughts on whether we're living at the most influential time in history · 2020-11-15T17:48:20.603Z · EA · GW

It's the time when people are most influential per person or per resource.

Comment by CarlShulman on Thoughts on whether we're living at the most influential time in history · 2020-11-15T17:38:15.540Z · EA · GW

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particular, cancelling out re whether there is an influenceable lock-in event this century).

You could say a similar thing about our being humans rather than bacteria, which cumulatively outnumber us by more than 1,000,000,000,000,000,000,000,000 times on Earth thus far according to the paleontologists.

Or you could go further and ask why we aren't neutrinos? There are more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 of them in the observable universe.

However extravagant the class you pick, it's cancelled out by the knowledge that we find ourselves in our current situation.  I think it's more confusing than helpful to say that our being humans rather than neutrinos is doing more than 10^70 times as much work as object-level analysis of AI in the case for attending to x-risk/lock-in with AI. You didn't need to think about that in the first place to understand AI or bioweapons, it was an irrelevant distraction.

The same is true for future populations that know they're living in intergalactic societies and the like. If we compare possible world A, where future Dyson spheres can handle a population of P (who know they're in that era), and possible world B, where future Dyson spheres can support a population of 2P, they don't give us much different expectations of the number of people finding themselves in our circumstances, and so cancel out.
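
One way to write out this cancellation (a minimal sketch using an observation-counting update rule; here N_A and N_B denote the expected numbers of observers in circumstances like ours in each world, which by assumption are roughly equal):

```latex
% Posterior ratio of world A (Dyson spheres supporting P) to world B (supporting 2P),
% given that we observe ourselves in early-21st-century circumstances.
\frac{P(A \mid \text{our observations})}{P(B \mid \text{our observations})}
  = \frac{P(A)\, N_A}{P(B)\, N_B}
  \approx \frac{P(A)}{P(B)}
  \quad \text{since } N_A \approx N_B .
```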

The simulation argument (or a brain-in-vats story or the like) is different and doesn't automatically cancel out because it's a way to make our observations more likely and common. However, for policy it does still largely cancel out, as long as the total influence of people genuinely in our apparent circumstances is a lot greater than that of all simulations with apparent circumstances like ours: a bigger future world means more influence for genuine inhabitants of important early times and also more simulations. [But our valuation winds up being bounded by our belief about the portion of all-time resources allocated to sims in apparent positions like ours.]

Another way of thinking about this is that prior to getting confused by any anthropic updating, if you were going to set a policy for humans who find ourselves in our apparent situation across nonanthropic possibilities assessed at the object level (humanity doomed, Time of Perils, early lock-in, no lock-in), you would just want to add up the consequences of the policy across genuine early humans and sims in each (non-anthropically assessed) possible world.

A vast future gives more chances for influence on lock-in later, which might win out as even bigger than this century (although this gets rapidly less likely with time and expansion), but it shouldn't change our assessment of lock-in this century, and a substantial chance of that gives us a good chance of HoH (or simulation-adjusted HoH).

Comment by CarlShulman on Nuclear war is unlikely to cause human extinction · 2020-11-07T16:11:23.204Z · EA · GW

I agree it's very unlikely that a nuclear war discharging current arsenals could directly cause human extinction. But the conditional probability of extinction given all-out nuclear war can go much higher if the problem gets worse. Some aspects of this:

-at the peak of the Cold War, arsenals held over 70,000 nuclear weapons, not 14,000
-this Brookings estimate puts the spending on building the US nuclear arsenal at several trillion current dollars, with lower marginal costs per weapon, e.g. $20M per weapon and $50-100M all-in for ICBMs
-economic growth since then means the world could already afford far larger arsenals in a renewed arms race
-current US military expenditure is over $700B annually, about 1/30th of GDP; at the peak of the Cold War in the 50s and 60s it was about 1/10th; Soviet expenditure was proportionally higher
-so with 1950s proportional military expenditures, half going to nukes, the US and China could each produce 20,000+ ICBMs per year, each of which could be fitted with MIRVs carrying several warheads, building up to millions of warheads over a decade or so (a rough version of this arithmetic is sketched after this list); the numbers could be higher for cheaper delivery systems
-economies of scale and improvements in technology would likely bring down the per warhead cost
-if AI and robotics greatly increase economic growth the above numbers could be increased by orders of magnitude
-radiation effects could be intentionally greatly increased with alternative warhead composition
-all-out discharge of strategic nuclear arsenals is also much more likely to be accompanied by simultaneous deployment of other WMD, including pandemic bioweapons (which the Soviets pursued as a strategic weapon for such circumstances) and drone swarms (which might kill survivors in bunkers); the combined effects of future versions of all of these WMD at once may synergistically cause extinction
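
A rough version of the arithmetic in the bullets above, as a minimal Python sketch. The GDP share, military-spending figures, and per-ICBM cost come from the bullets; the exact US GDP, the 50% share to nuclear forces, the MIRV loading of 8 warheads, and the 10-year buildup period are round assumptions for illustration:

```python
# Back-of-the-envelope buildup arithmetic using the figures in the bullets above.
# All values are rough and for illustration only.

us_gdp = 21e12                  # ~$21T US GDP (implied by $700B being ~1/30 of GDP)
cold_war_military_share = 0.10  # ~1/10 of GDP, as in the 1950s-60s
share_to_nukes = 0.5            # assume half of military spending goes to nuclear forces
cost_per_icbm = 50e6            # $50M all-in per ICBM (low end of the $50-100M figure)
warheads_per_icbm = 8           # assumed MIRV loading of several warheads
years = 10                      # assumed buildup period

nuclear_budget = us_gdp * cold_war_military_share * share_to_nukes  # ~$1T/yr
icbms_per_year = nuclear_budget / cost_per_icbm                     # ~21,000/yr
warheads_after_buildup = icbms_per_year * warheads_per_icbm * years

print(f"ICBMs per year: {icbms_per_year:,.0f}")
print(f"Warheads after {years} years: {warheads_after_buildup:,.0f}")
# ~21,000 ICBMs/yr and ~1.7 million warheads over a decade under these assumptions.
```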

Comment by CarlShulman on Thoughts on whether we're living at the most influential time in history · 2020-11-05T18:29:38.100Z · EA · GW

Note that compared to the previous argument, the a priori odds of being the most influential person are now 1e-10, so our earliness essentially increases our belief that we are the most influential by something like 1e28. But of course a 1-in-a-hundred-billion prior is still pretty low, and you don't think our evidence is sufficiently strong to significantly reduce it.

The argument is not about whether Will is the most influential person ever, but about whether our century has the highest per-person influence. With a population of 10 billion+ this century (7.8 billion alive now, plus growth and turnover for the rest of the century), the relevant prior is more like 1 in 13, the share of people so far who are alive today, if you buy the 100 billion humans thus far population figure (I have qualms about other hominids, etc., but the prior still gets quite high given A1, and A1 is too low).
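
The headcount arithmetic here is simple; a minimal sketch (the ~100 billion humans-ever and 7.8 billion alive-today figures are from the comment, while treating 10 billion+ as the number living this century is a round assumption):

```python
# Crude headcount priors for "a randomly chosen human (out of all humans so far)
# is alive today / lives during this century". Figures rounded for illustration.

humans_ever = 100e9   # rough "humans who have ever lived" figure
alive_now = 7.8e9     # current world population
this_century = 10e9   # assumed 10 billion+ people living at some point this century

print(f"alive today: about 1 in {humans_ever / alive_now:.0f}")
print(f"alive this century: about 1 in {humans_ever / this_century:.0f}")
# Roughly 1 in 13 and 1 in 10 -- many orders of magnitude above a 1e-10 prior.
```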
 

Comment by CarlShulman on Are we living at the most influential time in history? · 2020-11-01T21:00:59.699Z · EA · GW

Wouldn't your framework also imply a similarly overwhelming prior against saving? If long-term saving works with exponential growth, then we're again more important than virtually everyone who will ever live, by being among the first n billion people who had any options for such long-term saving. The priors for 'most important century to invest' and 'most important century to donate/act directly' shouldn't be radically uncoupled.

Comment by CarlShulman on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-31T18:29:40.518Z · EA · GW

Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.

If you look at OpenAI's annual filings, it looks like the $1b did not materialize.