Posts

Some personal thoughts on EA and systemic change 2019-09-26T21:40:28.725Z · score: 186 (81 votes)
Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation 2016-12-31T02:19:35.457Z · score: 40 (26 votes)
Donor lotteries: demonstration and FAQ 2016-12-07T13:07:26.306Z · score: 41 (40 votes)
The age distribution of GiveWell recommended charities 2015-12-26T18:35:44.511Z · score: 13 (15 votes)
A Long-run perspective on strategic cause selection and philanthropy 2013-11-05T23:08:35.000Z · score: 8 (7 votes)

Comments

Comment by carl_shulman on Towards zero harm: animal-free and land-free food · 2020-10-24T23:33:10.047Z · score: 2 (1 votes) · EA · GW

Thanks for pointing out that paper. Yes, it does seem like some of these companies are relying on cheap hydropower and carbon pricing.

If photovoltaics keep falling in price they could ease the electricity situation, but their performance would be degraded in nuclear winter (although not in some other situations interfering with conventional agriculture).

 

Comment by carl_shulman on Towards zero harm: animal-free and land-free food · 2020-10-23T17:29:10.440Z · score: 8 (5 votes) · EA · GW

Three forerunners are Air Protein (US), Solar Foods (Finland) and the Utilization of Carbon Dioxide Institute (Japan).

Thanks, I was familiar with the general concept here, and with specific companies working with methane, but not with the electrolysis-based companies. I had thought that wouldn't be practical given the higher price of electrolysis hydrogen versus natural-gas hydrogen.
 

 A production cost of $5-$6 per kilogram of 100 percent protein. It aims to have Solein on the market and in millions of meals by 2021, but before then it needs to scale-up from pilot plant to major commercial production, and Solein needs regulatory approval for human consumption.

Claims like these are many times more common than delivery, but this seems interesting enough to be worth examining.

Comment by carl_shulman on Which is better for animal welfare, terraforming planets or space habitats? And by how much? · 2020-10-19T23:40:56.468Z · score: 31 (11 votes) · EA · GW

I think this has potential to be a crucial consideration with regard to our space colonization strategy


I see this raised often, but it seems like it's clearly the wrong order of magnitude to make any noticeable proportional difference to the broad story of a space civilization, and I've never seen a good counterargument to that point.

Wikipedia has a fine page on orders of magnitude for power.  Solar energy received by Earth from the Sun is 1.740*10^17 W, vs 3.846*10^26W for total solar energy output, a difference of 2 billion times. Mars is further from the Sun and smaller, so receives almost another order of magnitude less solar flux. 
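
A quick back-of-the-envelope check of these ratios (a minimal sketch; the solar constant, solar luminosity, planetary radii, and Mars' orbital distance are standard reference values):

```python
import math

L_SUN = 3.846e26          # total solar luminosity, W
SOLAR_CONSTANT = 1361.0   # flux at 1 AU, W/m^2
R_EARTH = 6.371e6         # Earth radius, m
R_MARS = 3.3895e6         # Mars radius, m
D_MARS_AU = 1.524         # Mars' mean distance from the Sun, AU

def intercepted_power(radius_m, distance_au):
    """Power a planet's disc intercepts from the Sun."""
    flux = SOLAR_CONSTANT / distance_au**2
    return flux * math.pi * radius_m**2

earth = intercepted_power(R_EARTH, 1.0)       # ~1.74e17 W
mars = intercepted_power(R_MARS, D_MARS_AU)   # ~2e16 W

print(f"Earth intercepts {earth:.2e} W of the Sun's {L_SUN:.2e} W output "
      f"(a factor of {L_SUN / earth:.1e})")
print(f"Mars intercepts {mars:.2e} W, about {earth / mars:.0f}x less than Earth")
```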

Surfaces of planets are a minuscule portion of the habitable universe; whatever lives there won't meaningfully directly affect the aggregate population or welfare statistics of an established space civilization. The frame of the question is quantitatively much more extreme than treating the state of affairs in the tiny principality of Liechtenstein as of comparable importance to the state of affairs for the rest of the Earth.

I currently would guess that space habitats are better because they offer a more controlled environment due to greater surveillance as well human proximity, whereas an ecosystem on a planet would by and large be unmanaged wilderness, 

Even on Mars (and more so on the other, even less hospitable planets in our system) support for life would have to be artificially constructed, and the life biologically altered (e.g. to deal with differences in gravity), more so for planets around stars with different properties. So in terms of human control over the creation of the environment, the tiny slice of extraterrestrial planets shouldn't be expected to be very different in expected pseudowild per unit of solar flux, within one OOM.


if we can determine which method creates more wellbeing with some confidence, and we can tractably influence on the margin whether humanity chooses one or the other. e.g. SpaceX wants to colonize Mars whereas BlueOrigin wants to build O'Neill cylinders, so answering this question may imply supporting one company over the other.

Influence by this channel seems to be ~0. Almost all the economic value of space comes from building structures in space, not on planetary surfaces, and leaving planets intact wastes virtually all of the useful minerals in them. Early primitive Mars bases (requiring space infrastructure to get them there) that are not self-sustaining societies will in no way noticeably substitute for the use of the other 99.99999%+ of extraterrestrial resources in the Solar System that are not on the surface of Mars in the long run. Any effects along these lines would be negligible compared to other channels (like Elon Musk making money, or which company is more successful at building space industry).

Comment by carl_shulman on The scale of direct human impact on invertebrates · 2020-09-07T16:00:39.590Z · score: 9 (3 votes) · EA · GW

Thanks for the interesting post. Could you say more about the epistemic status of agricultural pesticides as the largest item in this category, e.g. what chance that in 3 years you would say another item (maybe missing from this list) is larger? And what ratio do you see between agricultural pesticides and other issues you excluded from the category (like climate change and partially naturogenic outcomes)?

Comment by carl_shulman on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-25T18:29:17.007Z · score: 11 (4 votes) · EA · GW
But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"),

The main dynamic I have in mind there is 'country X being overwhelmingly technologically advantaged/disadvantaged' treated as an outcome on par with global destruction, driving racing, and the necessity for international coordination to set global policy.

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.

Biotech threats are driven by violence. On AI, for rational regulators of a global state, a 1% or 10% chance of destroying society looks like enough to mobilize immense resources and delay deployment of dangerous tech for safety engineering and testing. There are separate epistemic and internal coordination issues that lead to failures of the rational social planner model (e.g. US coronavirus policy has predictably failed to serve US interests or even the reelection aims of current officeholders; underuse of Tetlockian forecasting), and those loom large: it's hard to come up with a rational planner model explaining observed preparation for pandemics and AI disasters.

I'd say that given epistemic rationality in social policy-setting, you're left with a big international coordination/brinkmanship issue, but you would get strict regulation against blowing up the world for small increments of profit.

Comment by carl_shulman on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-24T16:24:10.171Z · score: 24 (9 votes) · EA · GW

I'd say it's the other way around, because longtermism increases both rewards and costs in prisoner's dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting to have a long-term future following religion X instead of Y, or communist instead of capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.

On the other hand, effective bargaining and cooperation between players today is sufficient to reap almost all the benefits of safety (most of which depend more on not investing in destruction than investing in safety, and the threat of destruction for the current generation is enough to pay for plenty of safety investment).

And coordinating on deals in the interest of current parties is closer to the current world than fanatical longtermism.

But the critical thing is that risk is not just a matter of 'investment in safety' but of investments in catastrophically risky moves, driven by game dynamics that are ruled out under optimal allocation.

Comment by carl_shulman on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-23T23:13:33.407Z · score: 11 (8 votes) · EA · GW

Thanks for this substantive and useful post. We've looked at this topic every few years in unpublished work at FHI to think about whether to prioritize it. So far it hasn't looked promising enough to pursue very heavily, but I think more careful estimates of the inputs and productivity of research in the field (for forecasting relevant timelines and understanding the scale of the research) would be helpful. I'll also comment on a few differences between the post and my models of BCI issues:

  • It does not seem a safe assumption to me that AGI is more difficult than effective mind-reading and control, since the latter requires a complex interface with biology, with large barriers to effective experimentation; my guess is that this sort of comprehensive regime of BCI capabilities will be preceded by AGI, and your estimate of D is too high
  • The idea that free societies never stabilize their non-totalitarian character, so that over time stable totalitarian societies predominate, leaves out the applications of this and other technologies to stabilizing other societal forms (e.g. security forces making binding oaths to principles of human rights and constitutional government, backed by transparently inspected BCI, or introducing AI security forces designed with similar motivations), especially if the alternative is predictably bad; also other technologies like AGI would come along before centuries of this BCI dynamic
  • Global dominance is blocked by nuclear weapons, but dominance of the long-term future through a state that is a large chunk of the world outgrowing the rest (e.g. by being ahead in AI or space colonization once economic and military power is limited by resources) is more plausible, and S is too low
  • I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term), but BCI that actually was highly effective for mind-reading would make international deals on WMD or AGI racing much more enforceable, as national leaders could make verifiable statements that they have no illicit WMD programs or secret AGI efforts, or that joint efforts to produce AGI with specific objectives are not being subverted; this seems to be potentially an enormous factor
  • Lie detection via neurotechnology, or mind-reading complex thoughts, seems quite difficult, and faces structural issues in that the representations for complex thoughts are going to be developed idiosyncratically in each individual, whereas things like optic nerve connections and the lower levels of V1 can be tracked by their definite inputs and outputs, shared across humans
  • I haven't seen any great intervention points here for the downsides, analogous to alignment work for AI safety, or biosecurity countermeasures against biological weapons;
  • If one thought BCI technology was net helpful one could try to advance it, but it's a moderately large and expensive field, so one would likely need leverage via advocacy or better R&D selection within the field to accelerate it enough to matter and be competitive with other areas of x-risk reduction activity

I think if you wanted to get more attention on this, likely the most effective thing to do would be a more rigorous assessment of the technology and best-effort, nuts-and-bolts quantitative forecasting (preferably with some care about infohazards before publication). I'd be happy to give advice and feedback if you pursue such a project.

Comment by carl_shulman on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-23T20:21:39.358Z · score: 37 (12 votes) · EA · GW

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development, either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.

If those things were solved, and the risk-reward tradeoffs well understood, then we're quite clearly in a world where we can have very low existential risk and high consumption. But if they're not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.

Comment by carl_shulman on Should We Prioritize Long-Term Existential Risk? · 2020-08-21T18:38:20.908Z · score: 13 (6 votes) · EA · GW
People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.

This argument starts with assumptions implying that civilization has on the order of a 10^-3000 chance of surviving a million years, a duration typical of mammalian species. In the second case it's 10^-1250. That's a completely absurd claim, a result of modeling as though you have infinite certainty in a constant hazard rate.
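
A minimal sketch of the arithmetic behind these figures, assuming the constant-hazard-rate model the quoted argument implies:

```python
import math

CENTURIES = 10_000   # ~1 million years, a typical mammalian species lifespan

for risk_per_century in (0.50, 0.25):
    survive = 1 - risk_per_century
    log10_p_survive = CENTURIES * math.log10(survive)   # survive**CENTURIES underflows a float
    expected_centuries = 1 / risk_per_century            # mean of a geometric distribution
    print(f"risk {risk_per_century:.2f}/century: "
          f"P(survive ~1M years) ~ 10^{log10_p_survive:.0f}, "
          f"expected duration {expected_centuries:.0f} centuries")
```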

If you start with some reasonable credence that we're not doomed and can enter a stable state of low risk, this effect becomes second order or negligible. E.g. leaping off from the Precipice estimates, say there's an expected 1/6 extinction risk this century, and 1/6 for the rest of history, i.e. probably we stabilize enough for civilization to survive as long as feasible. If the two periods were uncorrelated, then this reduces the value of preventing an existential catastrophe this century by between 1/6 and 1/3 compared to preventing one after this century's risk has passed. That's not negligible, but also not first order, and the risk of catastrophe would also cut the returns of saving for the future (your investments and institution/movement-building for x-risk 2 are destroyed if x-risk 1 wipes out humanity).

[For the Precipice estimates, it's also worth noting that part of the reason risk remains after this century is credence that critical tech developments like AGI happen after this century; so if we make it through this century, risk in the later periods is lower, since we've already passed through the dangerous transition and likely developed the means for stabilization at minimal risk.]

Scenarios where we are 99%+ likely to go prematurely extinct, from a sequence of separate risks that would each drive the probability of survival low, will have very low NPV of the future population. But we should not be near-certain that we are in such a scenario, and with uncertainty over reasonable parameter values the dominant cases wind up being those with substantial risk followed by a substantial likelihood of safe stabilization, so late x-risk reduction work is not favored over reduction soon.

The problem with this is similar to the problem with not modelling uncertainty about discount rates discussed by Weitzman. If you project forward 100 years, scenarios with high discount rates drop out of your calculation, while the low discount rates scenarios dominate at that point. Likewise, the longtermist value of the long term future is all about the plausible scenarios where hazard rates give a limited cumulative x-risk probability over future history.
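
A minimal illustration of that Weitzman-style point (the two rates and the 50/50 weights are arbitrary assumptions for the example): with uncertainty over a persistent hazard or discount rate, the expected surviving (or discounted) fraction at long horizons is dominated by the low-rate scenario, so the effective rate declines toward the lowest plausible value.

```python
import numpy as np

rates = np.array([0.001, 0.02])   # per-century hazard (or discount) rates, hypothetical
weights = np.array([0.5, 0.5])    # credence in each scenario

for t in (1, 10, 100, 1000):      # horizon in centuries
    expected_survival = np.sum(weights * np.exp(-rates * t))
    effective_rate = -np.log(expected_survival) / t
    print(f"t={t:5d} centuries: expected survival {expected_survival:.3g}, "
          f"effective rate {effective_rate:.4f}/century")
```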


This result might not hold up if:
In future centuries, civilization will reduce x-risk to such a low rate that it will become too difficult to reduce any further.

It's not required that it *will* do so, merely that it may plausibly go low enough that the total fraction of the future lost to such hazard rates doesn't become overwhelmingly high.

Comment by carl_shulman on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-21T06:15:07.108Z · score: 10 (3 votes) · EA · GW

"The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save."

That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time, when in fact there wasn't one. That's one reason, in addition to your novel contributions, that I'm so happy about your work! GPI also has a big hopper of projects adding a lot of value by further developing and explicating ideas that are not radically novel, so that they have more impact and get more improvement and critical feedback.

If you would like to do further recorded discussions about your research, I'm happy to do so anytime.

Comment by carl_shulman on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-18T16:40:56.503Z · score: 2 (1 votes) · EA · GW

The Stern discussion.

Comment by carl_shulman on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-17T21:16:59.435Z · score: 21 (9 votes) · EA · GW

Hanson's If Uploads Come First is from 1994, his economic growth given machine intelligence is from 2001, and uploads were much discussed in transhumanist circles in the 1990s and 2000s, with substantial earlier discussion (e.g. by Moravec in his 1988 book Mind Children). Age of Em added more details and has a number of interesting smaller points, but the biggest ideas (Malthusian population growth by copying and economic impacts of brain emulations) are definitely present in 1994. The general idea of uploads as a technology goes back even further.

Age of Em should be understood like Superintelligence, as a polished presentation and elaboration of a set of ideas already locally known.

Comment by carl_shulman on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-17T21:08:48.118Z · score: 6 (3 votes) · EA · GW

My recollection is that back in 2008-12 discussions would often cite the Stern Review, which reduced pure time preference to 0.1% per year and thus concluded that massive climate investments would pay off; the critiques of it, noting that it would by the same token call for immense savings rates (97.5% according to Dasgupta 2006); and the defenses by Stern and various philosophers that a pure time preference of 0 was philosophically appropriate.

In private discussions and correspondence it was used to make the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. I cited it for this in this 2012 blog post. People also discussed how this would go away if sufficient investment was applied patiently (whether for altruistic or other reasons), ending the era of dreamtime finance by driving pure time preference towards zero.

Comment by carl_shulman on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T00:45:18.354Z · score: 28 (13 votes) · EA · GW
Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money; as far as I know, this is a novel argument.

This has been much discussed from before the beginning of EA, Robin Hanson being a particularly devoted proponent.

Comment by carl_shulman on The case for investing to give later · 2020-08-12T19:00:36.325Z · score: 16 (5 votes) · EA · GW
  • My biggest issue is that I don't think returns to increased donations are flat, with the highest returns coming from entering into neglected areas where EA funds are already, or would be after investment, large relative to the existing funds, and I see returns declining closer to logarithmically than flat with increased EA resources;
    • This is not correctly modeled in your guesstimate, despite it doing a Monte Carlo draw over different rates of diminishing returns, because it ignores the correlation between diminishing returns and the impact of existing spending: if EA makes truly outsized altruistic returns, it will be by doing things that are much better than typical, and so the accounts on which more neglected activities are the best thing to do now also have higher current philanthropic returns as well as faster diminishing returns
    • Likewise, high investment returns are associated with moving along the diminishing returns curve in the future, as diminishing marginal returns are not exogenous when EA is a large share of activity in an area; by drawing investment returns and diminishing returns from separate variables, your results wind up dominated by cases where explosive growth in EA funds is accompanied by flat marginal returns that are extremely implausible because of the missing correlations
    • These reflect a general problem with Guesstimate models: it's easy to create independent draws of variables that are not actually independent of each other, and get answers that are exponentially off as one considers longer time frames or more variables (a minimal illustration follows after this list)
  • Regarding prognostications of future equity returns, I think it's worthwhile to follow other fundamental projections in breaking down equity returns into components such as P/E, economic growth, growth in corporate profits as a share of the economy etc; in particular, this reveals that some past sources of equity returns can't be extrapolated indefinitely, e.g. 100%+ corporate profit shares are not possible and huge profit shares would likely be accompanied by higher corporate or investment taxes, while early stock returns involved low rates of stock ownership and high transaction costs
  • When there are diminishing returns to spending in a given year, being forced to spend assets too quickly in response to a surprise does lower efficiency of spending, so regulatory changes requiring increased disbursement rates can be harmful
  • Mission hedging and tying funding to epistemic claims can be very important for altruistic investing; e.g. if scenarios where AI risk is higher are correlated with excess returns for AI firms, then an allocation to address that risk might overweight AI securities
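
As flagged above, here is a minimal sketch of that correlation problem (all the numbers are made up for illustration): impact is modeled as wealth**elasticity, where a lower elasticity means faster diminishing returns; drawing investment returns and elasticity independently lets the implausible combination of explosive growth and nearly flat returns dominate the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
N, YEARS = 100_000, 50

returns = rng.normal(0.07, 0.03, N)       # hypothetical annual real investment return
elasticity = rng.uniform(0.2, 1.0, N)     # impact ~ wealth**elasticity; lower = faster diminishing returns

wealth = (1 + returns) ** YEARS
independent = np.mean(wealth ** elasticity)

# Correlated case: the scenarios with outsized returns are also the scenarios
# where funds scale up fastest and so hit diminishing returns sooner (lower elasticity).
rank = returns.argsort().argsort()
matched_elasticity = np.sort(elasticity)[::-1][rank]   # high return -> low elasticity
correlated = np.mean(wealth ** matched_elasticity)

print(f"mean impact, independent draws: {independent:,.0f}")
print(f"mean impact, correlated draws:  {correlated:,.0f}")
```
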
Comment by carl_shulman on The case for investing to give later · 2020-08-12T18:35:46.134Z · score: 10 (3 votes) · EA · GW

GiveWell top charities are relatively extreme in the flatness of their returns curves among areas EA is active in, which is related to their being part of a vast funding pool of global health/foreign aid spending, which EA contributions don't proportionately increase much.

In other areas like animal welfare and AI risk EA is a very large proportional source of funding. So this would seem to require an important bet that areas with relatively flat marginal returns curves are and will be the best place to spend.

Comment by carl_shulman on The case for investing to give later · 2020-08-12T18:29:25.100Z · score: 6 (3 votes) · EA · GW

I agree risks of expropriation and costs of market impact rise as a fund gets large relative to reference classes like foundation assets (eliciting regulatory reaction) let alone global market capitalization. However, each year a fund gets to reassess conditions and adjust its behavior in light of those changing parameters, i.e. growing fast while this is all things considered attractive, and upping spending/reducing exposure as the threat of expropriation rises. And there is room for funds to grow manyfold over a long time before even becoming as large as the Bill and Melinda Gates Foundation, let alone being a significant portion of global markets. A pool of $100B, far larger than current EA financial assets, invested in broad indexes and borrowing with margin loans or foundation bonds would not importantly change global equity valuations or interest rates.

Regarding extreme drawdowns, they are the flipside of increased gains, so they are a question of whether investors have the courage of their convictions regarding the altruistic returns curve for funds to set risk-aversion. Historically, Kelly criterion leverage on a high-Sharpe portfolio could have provided some reassurance by keeping the investor ahead of a standard portfolio over very long time periods, even with great local swings.

Comment by carl_shulman on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T20:01:40.359Z · score: 31 (12 votes) · EA · GW

Thanks for the post. One concern I have about the use of 'power' is that it tends to be used for fairly flexible ability to pursue varied goals (good or bad, wisely or foolishly). But many resources are disproportionately helpful for particular goals or levels of competence. E.g. practices of rigorous, reproducible science will give more power and prestige to scientists working on real topics, or who achieve real results, but they also constrain what those scientists can do with that power (the norms make it harder for a scientist who wins stature thereby to push p-hacked pseudoscience for some agenda). Similarly, democracy increases the power of those who are likely to be elected, while constraining their actions towards popular approval. A charity evaluator like GiveWell may gain substantial influence within the domain of effective giving, but won't be able to direct most of its audience to charities that have failed in well-powered randomized controlled trials.

This kind of change, which provides power differentially towards truth, or better solutions, should be of relatively greater interest to those seeking altruistic effectiveness (whereas more flexible power is of more interest to selfish actors or those with aims that hold up less well under those circumstances). So it makes sense to place special weight on asymmetric tools favoring correct views, like science, debate, and betting.

Comment by carl_shulman on Investing to Give Beginner Advice? · 2020-06-22T20:15:35.000Z · score: 2 (1 votes) · EA · GW

Thanks, edited.

Comment by carl_shulman on How to make the most impactful donation, in terms of taxes? · 2020-06-20T20:42:19.414Z · score: 4 (2 votes) · EA · GW

Wayne, the case for leverage with altruistic investment is in no way based on the assumption that arithmetic returns equal median or log returns. I have belatedly added links above to several documents that go into the issues at length.

The question is whether leverage increases the expected impact of your donations, taking into account issues such as diminishing marginal returns. Up to a point (the Kelly criterion level), increasing leverage drives up long-run median returns and growth rates at the expense of greater risk (much less than the increase in arithmetic returns).

The expected $ donated do grow with the increased arithmetic returns (multiplied by leverage less borrowing costs, etc.), but they become increasingly concentrated in outcomes of heavy losses or a shrinking minority of increasingly extreme gains. In personal retirement, the marginal value of money declines quite rapidly as you have more of it, which means the optimal amount of risk to take for returns is less than the level that maximizes long-run growth (the Kelly criterion), and vastly less than the level that maximizes arithmetic returns.

In altruism when you are a small portion of funding for the causes you support you have much less reason to be risk-averse, as the marginal value of a dollar donated won't change a lot if it goes from $30M to $30M+$100k in a given year. At the level of the whole cause, something closer to Kelly looks sensible.
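
A minimal sketch of that Kelly point under a standard lognormal toy model (the excess return, volatility, and risk-free rate are illustrative assumptions, not estimates): expected arithmetic return keeps rising with leverage, while the long-run growth rate that drives median wealth peaks at the Kelly fraction and then falls.

```python
mu_excess = 0.05   # assumed arithmetic excess return over the borrowing rate
sigma = 0.16       # assumed annual volatility
r = 0.01           # assumed risk-free / borrowing rate

kelly = mu_excess / sigma**2   # growth-optimal leverage, here ~1.95

for f in (0.5, 1.0, kelly, 3.0, 5.0):
    arithmetic = r + f * mu_excess                      # expected (mean) return
    growth = r + f * mu_excess - 0.5 * (f * sigma)**2   # expected log growth ~ median outcome
    print(f"leverage {f:4.2f}: mean return {arithmetic:6.1%}, growth rate {growth:6.1%}")
```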

Comment by carl_shulman on Investing to Give Beginner Advice? · 2020-06-19T16:14:55.810Z · score: 4 (2 votes) · EA · GW

E.g. the VIX, a measure of stock market volatility (and risk plays a role in formulae for leverage), is above 30 right now, close to twice the typical level. That's a quantitative matter, though, and considering future donation streams (which are not yet invested) pushes towards more leverage (see the book Lifecycle Investing). But people shouldn't do anything involving leverage before understanding it thoroughly.

Comment by carl_shulman on Investing to Give Beginner Advice? · 2020-06-17T16:53:13.478Z · score: 45 (19 votes) · EA · GW

This is a brief response, so please don't rush intemperately into things before understanding what you're doing on the basis of any of this. For general finance information, especially about low-fee index investing, I recommend Bogleheads (the wiki and the forum):

https://www.bogleheads.org/wiki/Main_Page

For altruistic investment, the biggest differentiating factors are 1) maximizing tax benefits of donation; 2) greater willingness to take risks than with personal retirement, suggesting some leverage.

Some tax benefits worth noting in the US:

1) If you purchase multiple securities you can donate those which increase, avoiding capital gains tax, and sell those that decline (tax-loss harvesting), allowing you to cancel out other capital gains tax and deduct up to $3000/yr of ordinary income.

2) You can get a deduction for donating to charity (this is independent of and combines with avoiding capital gains on donations of appreciated securities). But this is only if you itemize deductions (so giving up the standard deduction), and thus is best to do only once in a few years, concentrating your donations to make itemizing worthwhile. There is a cap of 60% of income (100% this year because of the CARES act) for deductible cash contributions, 30% for donations of appreciated securities (although there can be carryover).

3) You can donate initially to a donor advised fund to collect the tax deduction early and have investments grow inside tax-free, saving you from taxes on dividends, interest and any sales of securities that you aren't transferring directly to a charity. However, DAFs charge fees that take back some of these gains, and have restrictions on available investment options (specifically most DAFs won't permit leverage).
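
A small worked example of points 1 and 2 (the purchase price, gain, and tax rates are hypothetical; actual rates depend on bracket and state): donating the appreciated security both skips the capital gains tax and, if itemizing, yields a deduction on the full market value.

```python
basis = 10_000          # hypothetical purchase price of the security
value = 20_000          # hypothetical current market value
cap_gains_rate = 0.15   # hypothetical long-term capital gains rate
income_rate = 0.35      # hypothetical marginal income tax rate (when itemizing)

# Option A: sell, pay capital gains tax, donate the after-tax cash
sell_tax = (value - basis) * cap_gains_rate
donate_cash = value - sell_tax
deduction_a = donate_cash * income_rate

# Option B: donate the appreciated security directly (no capital gains tax)
donate_stock = value
deduction_b = donate_stock * income_rate

print(f"Sell then donate: charity gets ${donate_cash:,.0f}, deduction worth ${deduction_a:,.0f}")
print(f"Donate directly:  charity gets ${donate_stock:,.0f}, deduction worth ${deduction_b:,.0f}")
```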

Re leverage, this increases the likelihood of the investment going very high or very low, with the optimal level depending on a number of factors. Here are some discussions of the considerations:

https://mdickens.me/2020/06/21/samuelson_share_predict_optimal_leverage/

https://reducing-suffering.org/should-altruists-leverage-investments/

https://forum.effectivealtruism.org/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use
https://docs.google.com/document/d/10oDwoulY6jR01ufewyO3XOQvA85Yys7LXWgTUrJt980/edit#heading=h.gl3bx4art973

My own preference would be to make a leveraged investment that can't go to a negative value so you don't need to monitor it constantly, e.g. a leveraged index ETF (e.g. UPRO, TQQQ, or SOXL), or a few. If it collapses you can liquidate and tax-loss harvest. If it appreciates substantially then donate the appreciated ETF in chunks to maximize your tax deduction (e.g. bunching it up when your marginal tax rate will be high to give up to the 30% maximum deduction limits).

Comment by carl_shulman on Million dollar donation: penny for your thoughts? · 2020-06-16T19:52:17.144Z · score: 28 (9 votes) · EA · GW

My preference within this genre would be for something with more leverage and greater expected impact at the expense of local linearity. I'd especially call out the Center for Global Development, which has a history of policy wins that I think justify its budget many times over, and Target Malaria for work getting gene drives deployed to eliminate vector-borne diseases such as malaria. I'd prefer one dollar to these over multiple dollars to AMF, or the recommendations in the report.

Comment by carl_shulman on How to make the most impactful donation, in terms of taxes? · 2020-06-16T00:40:15.780Z · score: 15 (9 votes) · EA · GW

In the US, you might also invest the money in high risk high return investments that are more likely to increase a lot or decline to near zero over time (e.g. a leveraged ETF, to limit your downside to your investment), hold them for a year or more, and then sell them to realize losses or donate the appreciated securities if they rise. This has several benefits:

  • If the investment declines, you get to take the loss and use it to reduce capital gains tax or deduct from ordinary income (up to $3000 a year) if you have no capital gains
  • If it appreciates you get the regular tax deduction (you can give up to 30% of income in appreciated assets), and also avoid capital gains tax
  • Because it is high risk, if it appreciates it is more likely to appreciate a lot, so when donated it will help you clear the standard deduction by a larger amount
  • The elevated expected returns can increase the expected value of your donation quite a lot (i.e. on average you will give a lot more dollars, even though they will be concentrated in a smaller fraction of the possibilities)
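
A rough sketch of the arithmetic behind this strategy (the two-outcome asset and the tax rates are purely illustrative assumptions): the upside is donated, capturing the deduction and avoiding capital gains tax, while the downside is harvested as a loss.

```python
invested = 10_000
income_rate, cap_gains_rate = 0.35, 0.15   # hypothetical marginal tax rates

# Hypothetical high-variance position: triples with probability 0.4, loses 90% otherwise
p_up, up_value, down_value = 0.4, invested * 3.0, invested * 0.1

expected_donation = p_up * up_value                        # donate only if it appreciates
expected_deduction = expected_donation * income_rate       # itemized deduction on donated value
expected_loss_offset = (1 - p_up) * (invested - down_value) * cap_gains_rate  # harvested loss

print(f"expected donation:          ${expected_donation:,.0f}")
print(f"expected deduction value:   ${expected_deduction:,.0f}")
print(f"expected tax-loss benefit:  ${expected_loss_offset:,.0f}")
```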

Don't do this before reading up extensively, but here are several discussions of the issues from an altruistic perspective.

https://reducing-suffering.org/should-altruists-leverage-investments/

https://docs.google.com/document/d/10oDwoulY6jR01ufewyO3XOQvA85Yys7LXWgTUrJt980/edit#heading=h.gl3bx4art973

https://forum.effectivealtruism.org/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use

Comment by carl_shulman on Why might one value animals far less than humans? · 2020-06-09T20:00:27.230Z · score: 40 (10 votes) · EA · GW

If you start decomposing minds into their computational components, you find many orders of magnitude differences in the numbers of similar components. E.g. both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations for subsequent processing. If each edge detection or color discrimination (or higher level processing) additively contributes some visual experience, then you have immense differences in the total contributions.

Likewise, for reinforcement learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out with complexity or particular capabilities greater than those of the honeybee.

On the other side, trivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with quantitative differences between the minimal examples and typical examples. See also this discussion. A global workspace may broadcast to thousands of processes or billions.

We can also consider minds much larger than humans, e.g. imagine a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions to action. As we increased the bandwidth of these connections and the degree of behavioral integration, eventually you might have a system that one could consider a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than a single human. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.

The usual discussions on this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value, which seems questionable both for the view itself and for the extreme weight put on it. With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.

Comment by carl_shulman on X-risks to all life v. to humans · 2020-06-03T20:58:09.532Z · score: 16 (10 votes) · EA · GW

Welcome, and thanks for posting!

"kills 99% of all humans...In each scenario, humanity almost certainly goes extinct."

I don't think that this is true for your examples; rather, humanity would almost certainly not be directly extinguished or even prevented from recovering technology by a disaster that killed 99%. 1% of humans surviving is a population of 78 million, more than the Roman Empire, with knowledge of modern agricultural techniques like crop rotation or the moldboard plough, and vast supplies built by our civilization. As for a dinosaur-killer asteroid, animals such as crocodiles and our own ancestors survived that, and our technological abilities to survive and recover are immensely greater (we have food reserves, could tap energy from the oceans and dead biomass, can produce food using non-solar energy sources, etc.). So not only would human extinction be quite unlikely, but by that point nonhuman extinctions would be very thorough.

For a pandemic, an immune 1% (immunologically or behaviorally immune) could rebuild civilization. If we stipulate a disease that killed all humans but generally didn't affect other taxa, then chimpanzees (with whom we have a common ancestor only millions of years ago) are well positioned to take the same course again, as well as more distant relatives if primates perished, so I buy a relatively high credence in intelligent life reemerging there.

Comment by carl_shulman on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T17:38:13.857Z · score: 11 (8 votes) · EA · GW

It doesn't seem like mere pedantry if it requires substantial revision of the view to retain the same action recommendations. Symmetric person-affecting total utilitarianism does look to be dominated by these sorts of possibilities of large stocks of necessary beings without some other change. I'm curious what your take on the issues raised in that post is.

Comment by carl_shulman on Are we living at the most influential time in history? · 2020-03-09T22:12:09.603Z · score: 2 (1 votes) · EA · GW

Plus the Soviet bioweapons program was actively at work to engineer pathogens for enhanced destructiveness during the 70s and 80s using new biotechnology (and had been using progressively more advanced methods through the 20th century).

Comment by carl_shulman on Growth and the case against randomista development · 2020-02-17T22:31:27.744Z · score: 17 (5 votes) · EA · GW

I think that kind of spikiness (1000, 200, 100 with big gaps between) isn't the norm. Often one can proceed to weaker and indirect versions of a top intervention (funding scholarships to expand the talent pipelines for said think-tanks, buying them more Google Ads to publicize their research) with lower marginal utility that smooth out the returns curve, as you do progressively less appealing and more ancillary versions of the 1000-intervention until they start to get down into the 200-intervention range.

Comment by carl_shulman on How Much Leverage Should Altruists Use? · 2020-01-08T04:20:04.301Z · score: 4 (3 votes) · EA · GW

I agree that carefully-vetted institutional solutions are probably where one would like to end up.

Comment by carl_shulman on How Much Leverage Should Altruists Use? · 2020-01-07T19:32:57.095Z · score: 6 (5 votes) · EA · GW

I agree that the EMH-consistent version of this still suggests the collective EA portfolio should be more leveraged and should do more to manage correlations across donors, now and over time, and that there is a large factor literature in support of these factors (although in general academic finance suffers from data-mining/backtesting issues and EMH dissipation of real factors once they become known).

Re the text you quoted, I just mean that if EAs damage their portfolios (e.g. by taking large amounts of leverage and not properly monitoring it, so that leverage ratios explode and take out the portfolio), that's fewer EA dollars donated (aside from the reputational effects), and I would want to do more to ensure readers don't go off half-cocked and blow up their portfolios without really knowing what they are doing.


Comment by carl_shulman on How Much Leverage Should Altruists Use? · 2020-01-07T07:21:40.191Z · score: 44 (17 votes) · EA · GW

I appreciate many important points in this essay about the additional considerations for altruistic investing, including taking more risk for return than normal because of lower philanthropic risk aversion, attending to correlations with other donors (current and future), and the variations in diminishing returns curve for different causes and interventions popular in effective altruism. I think there are very large gains that could be attained by effective altruists better making use of these considerations.

But at the same time I am quite disconcerted by the strong forward-looking EMH violating claims about massively outsized returns to the specific investment strategies, despite the limited disclaimers (and the factor literature). Concern for relative performance only goes so far as an explanation for predicting such strong inefficiencies going forward: the analysis would seem to predict that, e.g. very wealthy individuals investing on their own accounts will pile into such strategies if their advantage is discernible. I would be much less willing to share the article because of the inclusion of those elements.

I would also add some of the special anti-risk considerations for altruists and financial writing directed at them, e.g.

  • Investment blowups that cause personal financial difficulties attributed to effective altruism having fallout on others
  • Investment writings taken as advice damaging especially valuable assets that would otherwise be used for altruism (just the flip side of the value of improvements)
  • Relative underperformance (especially blame-drawing) of the broad EA portfolio contributing to weaker reputation for effective altruism, interacting negatively with the very valuable asset (much of the EA portfolio) of entry of new altruists

On net, it still looks like EA should be taking much more risk than is commonly recommended for individual retirement investments, and I'd like to see active development of this sort of thinking, but want to emphasize the importance of caution and rigor in doing so.


Comment by carl_shulman on How Much Leverage Should Altruists Use? · 2020-01-07T06:59:21.788Z · score: 5 (4 votes) · EA · GW

Interactive Brokers allows much higher leverage for accounts with portfolio margin enabled, e.g. greater than 6:1. That requires options trading permissions, in turn requiring some combination of options experience and an online (easy) test.

I would be more worried about people blowing up their life savings with ill-considered extreme leverage strategies and the broader fallout of that.


Comment by carl_shulman on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-09-26T20:48:08.955Z · score: 18 (7 votes) · EA · GW

Agreed (see this post for an argument along these lines), but it would require much higher adoption and so merits the critique relative to alternatives where the donations can be used more effectively.

I have reposted the comment as a top-level post.

Comment by carl_shulman on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-09-26T19:34:03.927Z · score: 77 (23 votes) · EA · GW

My sense of what is happening regarding discussions of EA and systemic change is:


  • Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
    • Empirical data on the impact of votes and the effectiveness of lobbying and campaign spending work out without any problems of fancy decision theory or increasing marginal returns
      • E.g. Andrew Gelman's data on US Presidential elections shows that, given polling and forecasting uncertainty, a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can save to make campaign contributions); a rough worked sketch of this arithmetic appears after this list
      • 80,000 Hours has a page (there have been a number of other such posts and discussion, note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens
        • But if one thinks that charities like AMF do 100x or more good per dollar by saving the lives of the global poor so cheaply, then these are compatible with a vote being worth only a few hundred dollars
        • If one thinks that some other interventions, such as gene drives for malaria eradication, animal advocacy, or existential risk interventions are much more cost-effective than AMF, that would lower the value further except insofar as one could identify strong variation in more highly-valued effects
      • Experimental data on the effects of campaign contributions suggest a cost of a few hundred dollars per marginal vote (see, e.g. Gerber's work on GOTV experiments)
      • Prediction markets and polling models give a good basis for assessing the chance of billions of dollars of campaign funds swinging an election
      • If there are increasing returns to scale from large-scale spending, small donors can convert their funds into a small chance of huge funds, e.g. using a lottery, or more efficiently (more than 1/1,000 chance of more than 1000x funds) through making longshot bets in financial markets, so IMR are never a bar to action (also see the donor lottery)
      • The main thing needed to improve precision for such estimation of electoral politics spending is carefully cataloging and valuing different channels of impact (cost per vote and electoral impact per vote are well-understood)
        • More broadly there are also likely higher returns than campaign spending in some areas such as think tanks, lobbying, and grassroots movement-building; ballot initiative campaign spending is one example that seems like it may have better returns than spending on candidates (and EAs have supported several ballot initiatives financially, such as restoration of voting rights to convicts in Florida, cage bans, and increased foreign aid spending)
    • A recent blog post by the Open Philanthropy Project describes their cost-effectiveness estimates from policy search in human-oriented US domestic policy, including criminal justice reform, housing reform, and others
      • It states that thus far even the ex ante estimates of the efforts there seem to have only rarely outperformed GiveWell-style charities
      • However it says: "One hypothesis we’re interested in exploring is the idea of combining multiple sources of leverage for philanthropic impact (e.g., advocacy, scientific research, helping the global poor) to get more humanitarian impact per dollar (for instance via advocacy around scientific research funding or policies, or scientific research around global health interventions, or policy around global health and development). Additionally, on the advocacy side, we’re interested in exploring opportunities outside the U.S.; we initially focused on U.S. policy for epistemic rather than moral reasons, and expect most of the most promising opportunities to be elsewhere. "
    • Let's Fund's fundraising for climate policy work similarly made an attempt to estimate the impacts of their proposed intervention in this sort of fashion; without endorsing all the details of their analysis, I think it is an example of EA methodologies being quite capable of modeling systemic interventions
    • Animal advocates in EA have obviously pursued corporate campaigns and ballot initiatives which look like systemic change to me, including quantitative estimates of the impact of the changes and the effects of the campaigns
  • The great majority of critics of EA invoking systemic change fail to present the simple sort of quantitative analysis given above for the interventions they claim, and frequently when such analysis is done the intervention does not look competitive by EA lights
    • A common reason for this is EAs taking into account the welfare of foreigners, nonhuman animals and future generations; critics may propose to get leverage by working through the political system but give up on leverage from concern for neglected beneficiaries, and in other cases the competition is interventions that get leverage from advocacy or science combined with a focus on neglected beneficiaries
    • Sometimes systemic change critiques come from a Marxist perspective that assumes Marxist revolution will produce a utopia, whereas empirically such revolution has been responsible for impoverishing billions of people, mass killing, the Cold War (with risk of nuclear war), and increased tensions between China and democracies, creating large object-level disagreements with many EAs who want to accurately forecast the results of political action
  • Nonetheless, my view is that historical data do show that the most efficient political/advocacy spending, particularly aiming at candidates and issues selected with an eye to global poverty or the long term, does have higher returns than GiveWell top charities (even ignoring nonhumans and future generations or future technologies); one can read the systemic change critique as a position in intramural debates among EAs about the degree to which one should focus on highly linear, giving-as-consumption type interventions
    • E.g. I would rather see $1000 go to something like the Center for Global Development, Target Malaria's gene drive effort, or the Swiss effective foreign aid ballot initiative than the Against Malaria Foundation
    • I do think it is true that well-targeted electoral politics spending has higher returns than AMF, because of the impacts of elections on things such as science, foreign aid, great power war, AI policy, etc, provided that one actually directs one's efforts based on the neglected considerations
  • EAs who are willing to consider riskier and less linear interventions are mostly already pursuing fairly dramatic systemic change, in areas with budgets that are small relative to political spending (unlike foreign aid)
    • Global catastrophic risks work is focused on research and advocacy to shift the direction of society as a whole on critical issues, and the collapse of human civilization or its replacement by an undesirable successor would certainly be a systemic change
    • As mentioned previously, short-term animal EA work is overwhelmingly focused on systemic changes, through changing norms and laws, or producing technologies that would replace and eliminate the factory farming system
    • A number of EA global poverty focused donors do give to organizations like CGD, meta interventions to grow the EA movement (which can eventually be cashed in for larger systemic change), and groups like GiveWell or the Poverty Action Lab
      • Although there is a relative gap in longtermist and high-risk global poverty work compared to other cause areas, that does make sense in terms of ceiling effects, arguments for the importance of trajectory changes from a longtermist perspective, and the role of GiveWell as a respected charity evaluator providing a service lacking for other areas
    • Issue-specific focus in advocacy makes sense for these areas given the view that they are much more important than the average issue and with very low spending
  • As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range, political spending should either be a very small or a very large portion of EA activity
    • Essentially, the cost per vote achieved through things like campaign spending is currently set by the broader political culture and has the capacity to absorb billions of dollars at similar cost-effectiveness to the current level, so it should either be the case that EA funds very little of it or enormous amounts of it
      • There is a complication in that close elections or other opportunities can vary the effectiveness of political spending over time, which would suggest saving most funds for those
    • The considerations are similar to GiveDirectly: since cash transfers could absorb all EA funds many times over at similar cost-effectiveness (with continued rapid scaling), it should take in either very little or almost all EA funding; in a forced choice it should either be the case that most funding goes to cash transfers or that very little does, whereas for other interventions with diminishing returns on the relevant scale a mixed portfolio will yield more impact
    • For now areas like animal advocacy and AI safety with budgets of only tens of millions of dollars are very small relative to political spending, and the impact of the focused work (including relevant movement building) makes more of a difference to those areas than a typical difference between political candidates; but if billions of dollars were being spent in those areas it would seem that political activity could be a competitive use (e.g. supporting pro-animal candidates for office)
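
As flagged above in the discussion of Gelman's estimates, here is a rough worked sketch of the campaign-spending arithmetic described in this list (all the dollar figures are placeholders in the ranges the list mentions, not estimates):

```python
p_swing_per_vote = 1e-7          # ~1 in 10 million (Gelman-style swing-state estimate)
value_difference = 1e11          # hypothetical $ value of the better candidate winning, in rich-country income
cost_per_marginal_vote = 300     # hypothetical cost per extra vote from campaign/GOTV spending
charity_multiplier = 100         # assumed "100x" edge of top charities over rich-country income

value_per_vote = p_swing_per_vote * value_difference           # ~$10,000 of rich-country income
vote_in_charity_terms = value_per_vote / charity_multiplier    # ~$100-equivalent to a top charity
income_per_campaign_dollar = value_per_vote / cost_per_marginal_vote

print(f"value of a marginal swing-state vote: ${value_per_vote:,.0f} of rich-country income")
print(f"  ...in top-charity-equivalent terms: ${vote_in_charity_terms:,.0f}")
print(f"rich-country income per campaign dollar: ${income_per_campaign_dollar:,.0f}")
```

With these placeholder numbers the result lands roughly where the list says (a vote worth tens of thousands of dollars of rich-country income, or on the order of a hundred dollars in top-charity-equivalent terms); the argument above is that targeting candidates and issues based on neglected considerations raises the value-difference term.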


Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-13T17:27:41.191Z · score: 6 (4 votes) · EA · GW

My read is that Millenarian religious cults have often existed in nontrivial numbers, but as you say the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new and coincided with actual sustained noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion.

So the argument seems, to first order, to reduce to two questions: whether credence in an AI growth boom (to rates much faster than the Industrial Revolution's) is driven by the same factors as religious cults rather than by secular scholarly opinion, and what the historical share/power of such Millenarian sentiments in the population has been. But if one takes a narrower scope (not exceptionally important transformations of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last), one frequently sees smaller-scale distortions of relative importance for propaganda purposes (not that they were necessarily believed by outside observers).

Comment by carl_shulman on Ask Me Anything! · 2019-09-13T01:47:38.615Z · score: 16 (6 votes) · EA · GW
She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.

That's an awfully (in)convenient interval to have! It is the unique placement for an interval of that length, given no distinguishing views about any parts of it, such that integrating over it gives a probability of 0.5 and an expected impact of 0.
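
To spell out the arithmetic behind that (my gloss, assuming the interval is weighted uniformly and that the expected effect on AI timing is proportional to p minus (1 − p), where p is the credence that the action speeds up AI):

```latex
\[
E[p] = \frac{0.4 + 0.6}{2} = 0.5
\quad\Rightarrow\quad
\text{expected effect} \;\propto\; 2E[p] - 1 = 0.
\]
\[
\text{By contrast, any other interval of the same width, e.g. } [0.35, 0.55]:\quad
E[p] = 0.45 \;\Rightarrow\; 2E[p] - 1 = -0.1 \neq 0.
\]
```

Only an interval centered exactly on 0.5 washes out to zero expected impact; shift it at all and the sign of the expected effect reappears.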

The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work,

If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors, making use of the signal on each of those components means your overall views will be moved by that signal.

Chapter 3 of Jeffrey Friedman's book War and Chance: Assessing Uncertainty in International Politics presents data and arguments showing large losses from coarsening credences instead of just giving a number between 0 and 1. I largely share his negative sentiments about imprecise credences.
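
As a rough illustration of that kind of loss (my own toy simulation, not Friedman's data): take a calibrated forecaster whose true credences are spread over [0, 1], and compare the Brier score from reporting those credences directly against reporting only three coarse buckets ('unlikely' = 0.25, 'toss-up' = 0.5, 'likely' = 0.75).

```python
# Toy simulation of the cost of coarsening credences, measured by Brier score.
# Assumes a calibrated forecaster: each event occurs with exactly its assigned probability.
import random

random.seed(0)
N = 100_000

def coarsen(p):
    # Map a precise probability to the nearest of three verbal-style buckets.
    return min((0.25, 0.5, 0.75), key=lambda bucket: abs(bucket - p))

brier_precise = brier_coarse = 0.0
for _ in range(N):
    p = random.random()                        # the forecaster's true credence
    outcome = 1.0 if random.random() < p else 0.0
    brier_precise += (p - outcome) ** 2
    brier_coarse += (coarsen(p) - outcome) ** 2

print(f"Mean Brier score, precise credences:   {brier_precise / N:.4f}")
print(f"Mean Brier score, coarsened credences: {brier_coarse / N:.4f}")
```

The precise forecasts score better (roughly 0.167 vs. 0.180 in expectation under these assumptions), and the gap compounds when coarsened numbers feed into multi-premise estimates.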

[VOI considerations around less-investigated credences that are more likely to be moved by investigation are fruitful grounds to delay action in order to acquire or await information that one expects may actually be attained, but they are not the same thing as imprecise credences.]

(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)

That was an example of the phenomenon where a supposedly vast space goes unsearched, and when one actually searches it the number of top-level considerations turns out to be manageable (at least compared to thousands), based on experience with other people saying that there must be thousands of similarly plausible risks. I would likewise say that the DeepMind employee in your example doesn't face thousands upon thousands of ballpark-similar distinct considerations to assess.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T19:05:28.039Z · score: 8 (5 votes) · EA · GW

I think that is basically true in practice, but I am also saying that even absent those pragmatic considerations constraining utilitarianism, I would still hold these other non-utilitarian normative views and reject things like refusing to leave some space for existing beings in exchange for a tiny proportional increase in resources for utility monsters.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:55:23.724Z · score: 7 (4 votes) · EA · GW

The first words of my comment were "I don't identify as a utilitarian" (among other reasons because I reject the idea of things like feeding all existing beings to utility monsters for a trivial proportional gain to the latter, even absent all the pragmatic reasons not to; even if I thought such things more plausible, it would require extreme certainty or non-pluralism to yield such fanatical behavior).

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?

But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.

That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation. That would go even more strongly for negative utilitarianism, since it doesn't treat any life or part of life as being intrinsically good, regardless of the being in question valuing it, and is therefore even more misaligned with the rest of the world (in valuation of the lives of everyone else, and in the lives of their descendants). And such responses give reason even for utilitarian extremists to take actions that reduce such conflicts.

Insofar as purely psychological self-binding is hard, there are still externally available actions, such as visibly refraining from pursuit of unaccountable power to harm others, and taking actions to make it more difficult to do so, such as transferring power to those with less radical ideologies, or ensuring transparency and accountability to them.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:13:09.657Z · score: 12 (4 votes) · EA · GW
It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

That's why the very first words of my comment were "I don't identify as a utilitarian."



Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-07T02:19:19.233Z · score: 35 (21 votes) · EA · GW
I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms. But I hope you see that it does show that the whole thing comes down to whether you choose a prior like you did, or another reasonable alternative... Additionally, if you didn’t know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH.

I think this point is even stronger, as your early sections suggest. If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth. So those data would give us extreme evidence for a less dogmatic prior being correct.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-06T18:04:59.214Z · score: 29 (15 votes) · EA · GW

I don't identify as a utilitarian, but I am more sympathetic to consequentialism than the vast majority of people, and I reject such thought experiments (even in the unlikely event they weren't practically self-defeating: utilitarians should want to modify their ideology and self-bind so that they won't do things that screw over the rest of society and other moral views, so that they can reap the larger rewards of positive-sum trades rather than negative-sum conflict). The contractarian objection to such things (along with commonsense morality and pluralism, though contractarianism is the theory I would most invoke for theoretical understanding) greatly outweighs the utilitarian case.

For the tiny population of Earth today (which is astronomically small compared to potential future populations) the idea becomes even more absurd. I would agree with Bostrom in Superintelligence (page 219) that failing to leave even one galaxy out of billions for existing beings, let alone a single solar system, would be ludicrously monomaniacal and overconfident (and ex ante something that 100%-convinced consequentialists would have very much wanted to commit to abstain from).

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T18:06:33.197Z · score: 30 (13 votes) · EA · GW

Szilard anticipated nuclear weapons (and launched a large and effective strategy to cause the liberal democracies to get them ahead of totalitarian states, although with regret), and was also concerned about germ warfare (along with many of the anti-nuclear scientists). See this 1949 story he wrote. Szilard seems very much like an agenty sophisticated anti-xrisk actor.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T17:34:33.016Z · score: 13 (10 votes) · EA · GW
I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period.

I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which, for an individual who is small relative to the overall portfolio, can amount to donating 100% now).

However, I think that institutionally there are in fact mechanisms to regulate expenditures:

  • Evaluations of investments in movement-building involve estimates of the growth in EA resources that will result, and comparisons to financial returns; as movement-building returns decline they will start to fall under the financial-return benchmark and no longer be expanded in that way (a toy version of this comparison appears after this list)
  • The Open Philanthropy Project has blogged about its use of the concept of a 'last dollar' opportunity cost of funds, asking of current spending whether in expectation it will do more good than saving it for future opportunities; assessing last-dollar opportunity cost involves using market investment returns, and the value of savings as insurance against rare conditions that could offer enhanced returns (a collapse of other donors in core causes rather than a glut, major technological developments, etc.)
  • Some other large and small donors likewise take into account future opportunities
  • Advisory institutions such as 80,000 Hours, charity evaluators, grantmakers, and affiliated academic researchers are positioned to advise a change of course if donors start spending down too profligately (I for one stand ready to do so with respect to my advice to longtermist donors focused on existential risk)
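
On the first bullet, a minimal sketch of the comparison being described (my own illustration with made-up parameters, not any organization's actual model): a marginal dollar keeps going to movement-building only while it is expected to mobilize more future EA resources than the same dollar compounding in the market.

```python
# Toy benchmark: marginal movement-building spending vs. investing the money.
# Parameters are invented for illustration; real evaluations estimate these.

MARKET_RETURN = 0.05   # annual financial return on invested funds
HORIZON_YEARS = 10

def movement_multiplier(cumulative_spend_millions):
    # EA resources (donations plus labor value) eventually mobilized per dollar
    # spent now, with diminishing returns as receptive audiences are reached.
    return 10.0 / (1.0 + cumulative_spend_millions / 30.0)

financial_benchmark = (1 + MARKET_RETURN) ** HORIZON_YEARS  # ~1.63x over the horizon

spend = 0.0
while movement_multiplier(spend) > financial_benchmark:
    spend += 1.0

print(f"Financial benchmark over {HORIZON_YEARS} years: {financial_benchmark:.2f}x")
print(f"Movement-building stops being expanded around ${spend:.0f}M of cumulative spending,")
print("once its marginal multiplier falls below the investment benchmark.")
```

The mechanism, not the numbers, is the point: the financial benchmark acts as a brake that regulates how far movement-building spending expands.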

All that said, it's valuable to improve broader EA community understanding of intertemporal tradeoffs, and estimation of the relevant parameters to determine disbursement rates better.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T17:03:45.434Z · score: 16 (8 votes) · EA · GW
My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable.

What credence would you give to 'this century is the most HoH-ish there will ever be henceforth'? That claim soaks up credence from trends towards diminishing influence over time, and our time is among the very first to benefit from longtermist altruists actually existing to get non-zero returns from longtermist strategies while facing plausible x-risks. The combination of those two factors seems to have a good shot at 'most HoH century,' and substantially better odds than that at 'most HoH century remaining.'

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:40:29.562Z · score: 15 (4 votes) · EA · GW
Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) :=  We are living at the most influential time ever. 
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

Two clear and common channels I have seen are:

  • Longtermism leads to looking around for things that would have lasting impacts (e.g. Parfit and Singer attending to existential risk, and noticing that a large portion of all technological advances has come in the last few centuries, and a large portion of the remainder looks likely to come in the next few centuries, including the technologies for much higher existential risk)
  • People pay attention to the fact that the last few centuries have accounted for so much of all technological progress, and to the likely gains of the next few centuries (based on our knowledge of physical laws, existence proofs from biology, and trend extrapolation), noticing things that can have incredibly long-lasting effects that dwarf short-run concerns

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:32:03.719Z · score: 23 (8 votes) · EA · GW

> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.

I agree (and have used in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is the possibility of a 'catastrophe' that cuts other altruistic funding or similar while leaving one's savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also destroy a DAF saving for the future with high probability).

Saving extra for a catastrophe that destroys one's savings and the broader world at the same rate is a bet on proportional influence being more important in the poorer, smaller post-disaster world, which seems like a weaker consideration. Saving or buying insurance that pays off in those cases, e.g. time-capsule messages to post-apocalyptic societies, or catastrophe bonds/insurance contracts that release funds in the event of a crash in the EA movement, gets more oomph.

I'll also flag that we're switching back and forth here between the question of which century has the highest marginal impact per unit of resources and the question of which periods are worth saving for or expending in, and how much.

>Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high.

I think this is true for what little EV of 'most important century' remains so far out, but that residual is very small. Note that Martin Weitzman's argument for discounting the future at the lowest possible rate (where we consider even very unlikely situations in which discount rates remain low, to get a low discount rate for the very long term) gives different results with an effectively bounded utility function. If we face a limit like '~max value future' or 'utopian light-cone after a great reflection' then we can't make up for increasingly unlikely scenarios with correspondingly greater incremental probability of achieving ~that maximum: diminishing returns mean we can't exponentially grow our utility gained from resources indefinitely (going from 99% of all wealth to 99.9% or 99.999% and so on will yield only a bounded increment to the chance of a utopian long-term). A related limit to growth (although there is some chance it could be avoided, making it another drag factor) comes if the chances of expropriation rise as one's wealth becomes a larger share of the world (a foundation with 50% of world wealth would be likely to face new taxes).
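
A toy version of the bounded-utility point (my own simplified stand-in for the Weitzman setup, with invented numbers): let saved resources compound in scenarios where the savings and the world persist, and compare an unbounded utility that is linear in resources with a bounded utility where resources only buy a saturating probability of a ~max-value outcome.

```python
# Toy comparison: compounding savings under unbounded vs. bounded utility.
# A simplified stand-in for the Weitzman-style argument; numbers are invented.
import math

GROWTH_RATE = 0.04          # annual compounding of saved resources in surviving scenarios
SURVIVAL_PER_CENTURY = 0.5  # chance per century that the savings and world persist intact

def bounded_utility(resources):
    # Probability of a ~max-value outcome saturates (is bounded by 1) as resources grow.
    return 1.0 - math.exp(-resources / 100.0)

for centuries in (1, 5, 10, 20):
    p_survive = SURVIVAL_PER_CENTURY ** centuries
    resources = (1 + GROWTH_RATE) ** (100 * centuries)   # compounded endowment
    ev_unbounded = p_survive * resources                 # keeps growing without bound
    ev_bounded = p_survive * bounded_utility(resources)  # capped by the survival probability
    print(f"{centuries:>2} centuries: P(persist)={p_survive:.1e}, "
          f"EV unbounded={ev_unbounded:.2e}, EV bounded={ev_bounded:.2e}")
```

Under the unbounded utility the ever-less-likely long-horizon scenarios dominate the calculation; under the bounded utility the expected value is capped by the survival probability, so those remote scenarios cannot carry the argument.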


Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:10:28.429Z · score: 11 (7 votes) · EA · GW

Thinking further, I would go with 'importance' among those options for 'total influence of an era,' but none of those terms captures the per-capita/per-resource element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T06:36:47.672Z · score: 16 (8 votes) · EA · GW

I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably have invested more in nukes than today's EA movement does), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T06:31:52.526Z · score: 22 (7 votes) · EA · GW
You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future. 

Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today, with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.