Posts

Some personal thoughts on EA and systemic change 2019-09-26T21:40:28.725Z · score: 173 (68 votes)
Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation 2016-12-31T02:19:35.457Z · score: 30 (23 votes)
Donor lotteries: demonstration and FAQ 2016-12-07T13:07:26.306Z · score: 38 (38 votes)
The age distribution of GiveWell recommended charities 2015-12-26T18:35:44.511Z · score: 13 (15 votes)
A Long-run perspective on strategic cause selection and philanthropy 2013-11-05T23:08:35.000Z · score: 8 (7 votes)

Comments

Comment by carl_shulman on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-09-26T20:48:08.955Z · score: 16 (6 votes) · EA · GW

Agreed (see this post for an argument along these lines), but it would require much higher adoption and so merits the critique relative to alternatives where the donations can be used more effectively.

I have reposted the comment as a top-level post.

Comment by carl_shulman on Summary of my academic paper “Effective Altruism and Systemic Change” · 2019-09-26T19:34:03.927Z · score: 77 (23 votes) · EA · GW

My sense of what is happening regarding discussions of EA and systemic change is:


  • Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
    • Empirical data on the impact of votes and the effectiveness of lobbying and campaign spending work out without any need for fancy decision theory or worries about increasing marginal returns
      • E.g. Andrew Gelman's data on US Presidential elections shows that given polling and forecasting uncertainty a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can instead save to make campaign contributions); a rough numerical sketch combining the figures in these bullets appears after this list
      • 80,000 Hours has a page (there have been a number of other such posts and discussion, note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens
        • But if one thinks that charities like AMF do 100x or more good per dollar by saving the lives of the global poor so cheaply, then these are compatible with a vote being worth only a few hundred dollars
        • If one thinks that some other interventions, such as gene drives for malaria eradication, animal advocacy, or existential risk interventions are much more cost-effective than AMF, that would lower the value further except insofar as one could identify strong variation in more highly-valued effects
      • Experimental data on the effects of campaign contributions suggest a cost of a few hundred dollars per marginal vote (see, e.g. Gerber's work on GOTV experiments)
      • Prediction markets and polling models give a good basis for assessing the chance of billions of dollars of campaign funds swinging an election
      • If there are increasing returns to scale from large-scale spending, small donors can convert their funds into a small chance of huge funds, e.g. using a lottery, or more efficiently (more than a 1/1,000 chance of more than 1000x funds) through making longshot bets in financial markets, so increasing marginal returns are never a bar to action (also see the donor lottery)
      • The main thing needed to improve precision for such estimation of electoral politics spending is carefully cataloging and valuing different channels of impact (cost per vote and electoral impact per vote are well-understood)
        • More broadly there are also likely higher returns than campaign spending in some areas such as think tanks, lobbying, and grassroots movement-building; ballot initiative campaign spending is one example that seems like it may have better returns than spending on candidates (and EAs have supported several ballot initiatives financially, such as restoration of voting rights to convicts in Florida, cage bans, and increased foreign aid spending)
    • A recent blog post by the Open Philanthropy Project describes their cost-effectiveness estimates from policy search in human-oriented US domestic policy, including criminal justice reform, housing reform, and others
      • It states that thus far, even on ex ante estimates of effects, these seem to have only rarely outperformed GiveWell-style charities
      • However it says: "One hypothesis we’re interested in exploring is the idea of combining multiple sources of leverage for philanthropic impact (e.g., advocacy, scientific research, helping the global poor) to get more humanitarian impact per dollar (for instance via advocacy around scientific research funding or policies, or scientific research around global health interventions, or policy around global health and development). Additionally, on the advocacy side, we’re interested in exploring opportunities outside the U.S.; we initially focused on U.S. policy for epistemic rather than moral reasons, and expect most of the most promising opportunities to be elsewhere. "
    • Let's Fund's fundraising for climate policy work similarly made an attempt to estimate the impacts of their proposed intervention in this sort of fashion; without endorsing all the details of their analysis, I think it is an example of EA methodologies being quite capable of modeling systemic interventions
    • Animal advocates in EA have obviously pursued corporate campaigns and ballot initiatives which look like systemic change to me, including quantitative estimates of the impact of the changes and the effects of the campaigns
  • The great majority of critics of EA invoking systemic change fail to present the simple sort of quantitative analysis given above for the interventions they favor, and frequently when such analysis is done the intervention does not look competitive by EA lights
    • A common reason for this is EAs taking into account the welfare of foreigners, nonhuman animals and future generations; critics may propose to get leverage by working through the political system but give up on leverage from concern for neglected beneficiaries, and in other cases the competition is interventions that get leverage from advocacy or science combined with a focus on neglected beneficiaries
    • Sometimes systemic change critiques come from a Marxist perspective that assumes Marxist revolution will produce a utopia, whereas empirically such revolution has been responsible for impoverishing billions of people, mass killing, the Cold War (with risk of nuclear war), and increased tensions between China and democracies, creating large object-level disagreements with many EAs who want to accurately forecast the results of political action
  • Nonetheless, my view is that historical data do show that the most efficient political/advocacy spending, particularly aiming at candidates and issues selected with an eye to global poverty or the long term, does have higher returns than GiveWell top charities (even ignoring nonhumans and future generations or future technologies); one can see the systemic change critique as a position in intramural debates among EAs about the degree to which one should focus on highly linear, giving-as-consumption-type interventions
    • E.g. I would rather see $1000 go to something like the Center for Global Development, Target Malaria's gene drive effort, or the Swiss effective foreign aid ballot initiative than the Against Malaria Foundation
    • I do think it is true that well-targeted electoral politics spending has higher returns than AMF, because of the impacts of elections on things such as science, foreign aid, great power war, AI policy, etc, provided that one actually directs one's efforts based on the neglected considerations
  • EAs who are willing to consider riskier and less linear interventions are mostly already pursuing fairly dramatic systemic change, in areas with budgets that are small relative to political spending (unlike foreign aid):
    • Global catastrophic risks work is focused on research and advocacy to shift the direction of society as a whole on critical issues, and the collapse of human civilization or its replacement by an undesirable successor would certainly be a systemic change
    • As mentioned previously, short-term animal EA work is overwhelmingly focused on systemic changes, through changing norms and laws, or producing technologies that would replace and eliminate the factory farming system
    • A number of EA global poverty focused donors do give to organizations like CGD, meta interventions to grow the EA movement (which can eventually be cashed in for larger systemic change), and groups like GiveWell or the Poverty Action Lab
      • Although there is a relative gap in longtermist and high-risk global poverty work compared to other cause areas, that does make sense in terms of ceiling effects, arguments for the importance of trajectory changes from a longtermist perspective, and the role of GiveWell as a respected charity evaluator providing a service lacking for other areas
    • Issue-specific focus in advocacy makes sense for these areas given the view that they are much more important than the average issue and currently have very low spending
  • As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range political spending should either be a very small or very large portion of EA activity
    • Essentially, the cost per vote achieved through things like campaign spending is currently set by the broader political culture and has the capacity to absorb billions of dollars at similar cost-effectiveness to the current level, so it should either be the case that EA funds very little of it or enormous amounts of it
      • There is a complication in that close elections or other opportunities can vary the effectiveness of political spending over time, which would suggest saving most funds for those
    • The considerations are similar to GiveDirectly: since cash transfers could absorb all EA funds many times over at similar cost-effectiveness (with continued rapid scaling), it should take in either very little or almost all EA funding; in a forced choice it should either be the case that most funding goes to cash transfers or that very little does, whereas for other interventions with diminishing returns on the relevant scale a mixed portfolio will yield more impact
    • For now areas like animal advocacy and AI safety with budgets of only tens of millions of dollars are very small relative to political spending, and the impact of the focused work (including relevant movement building) makes more of a difference to those areas than a typical difference between political candidates; but if billions of dollars were being spent in those areas it would seem that political activity could be a competitive use (e.g. supporting pro-animal candidates for office)
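A rough back-of-the-envelope sketch of how the figures in the bullets above combine (a minimal sketch; the specific numbers below are illustrative stand-ins for the cited estimates, not new data or anyone's official model):

```python
# Rough sketch combining the illustrative figures cited above.

p_swing = 1e-7            # ~1 in 10 million chance a swing-state vote decides the election (Gelman-style figure)
value_of_outcome = 3e11   # assumed $ value of the difference between candidates to rich-country citizens,
                          # chosen so a vote comes out "worth tens of thousands of dollars"

value_per_vote = p_swing * value_of_outcome
print(f"Value of one swing-state vote: ~${value_per_vote:,.0f}")               # ~$30,000

# If AMF-style giving does ~100x as much good per dollar as income to rich-country citizens,
# the same vote is worth only ~1/100 as much in AMF-equivalent dollars.
amf_multiplier = 100
print(f"In AMF-equivalent dollars: ~${value_per_vote / amf_multiplier:,.0f}")  # a few hundred dollars

# Cost per marginal vote from GOTV/campaign-spending experiments: a few hundred dollars.
cost_per_vote = 300
print(f"Rough ratio vs AMF: {value_per_vote / amf_multiplier / cost_per_vote:.1f}x")

# And increasing marginal returns are never a bar for a small donor: a donor lottery turns
# $1,000 into a 1/1,000 chance of directing $1,000,000, leaving expected funds unchanged.
print(f"Expected funds via a 1/1,000 lottery on $1,000: ${(1 / 1000) * 1_000_000:,.0f}")
```

On these (made-up but ballpark) inputs, well-targeted campaign spending and AMF come out within the same order of magnitude, which is the point of the comparison rather than any precise ranking.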


Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-13T17:27:41.191Z · score: 5 (3 votes) · EA · GW

My read is that millenarian religious cults have often existed in nontrivial numbers, but as you say the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new and coincided with actual sustained, noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion.

So the argument seems, to first order, to reduce to the question of whether credence in an AI growth boom (to rates much faster than the Industrial Revolution's) is caused by the same factors as religious cults rather than by secular scholarly opinion, and to the historical share/power of those millenarian sentiments in the population. But if one takes a narrower scope (not exceptionally important transformation of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last) one frequently sees smaller distortions of relative importance for propaganda purposes (not that the propaganda was necessarily believed by outside observers).

Comment by carl_shulman on Ask Me Anything! · 2019-09-13T01:47:38.615Z · score: 16 (6 votes) · EA · GW
She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.

That's an awfully (in)convenient interval to have! That is the unique position for an interval of that length with no distinguishing views about any parts of the interval, such that integrating over it gives you a probability of 0.5 and expected impact of 0.
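A minimal sketch of why that interval is special, assuming (per the example) that speeding up AI is bad, slowing it down is equally good, and all credences in the interval get equal weight (these are stipulations for illustration, not claims about the book's model):

```python
def expected_impact(lo, hi, impact_if_speeds_up=-1.0, impact_if_slows_down=+1.0):
    # With a uniform weighting over [lo, hi] and impact linear in the credence p that the
    # action speeds up AI, the expected impact is just the impact at the mean credence.
    p_mean = (lo + hi) / 2
    return p_mean * impact_if_speeds_up + (1 - p_mean) * impact_if_slows_down

print(expected_impact(0.4, 0.6))    # 0.0: the unique interval of this length centered on 0.5
print(expected_impact(0.45, 0.65))  # about -0.1: shift the same-length interval and the EV no longer cancels
```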

The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work,

If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors making use of the signal on each of those means your views will be moved by signal.

Chapter 3 of Jeffrey Friedman's book War and Chance: Assessing Uncertainty in International Politics presents data and arguments showing large losses from coarsening credences instead of just giving a number between 0 and 1. I largely share his negative sentiments about imprecise credences.

[VOI considerations around less investigated credences that are more likely to be moved by investigation are fruitful grounds to delay action to acquire or await information that one expects may be actually attained, but are not the same thing as imprecise credences.]

(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)

That was an example of the phenomenon of not searching a supposedly vast space and finding that in fact the number of top-level considerations is manageable (at least compared to thousands), based off experience with other people saying that there must be thousands of similarly plausible risks. I would likewise say that the DeepMind employee in your example doesn't face thousands upon thousands of ballpark-similar distinct considerations to assess.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T19:05:28.039Z · score: 8 (5 votes) · EA · GW

I think that is basically true in practice, but I am also saying that even absent those pragmatic considerations constraining utilitarianism, I still would hold these other non-utilitarian normative views and reject things like refusing to leave some space for existing beings in exchange for a tiny proportional increase in resources for utility monsters.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:55:23.724Z · score: 7 (4 votes) · EA · GW

The first words of my comment were "I don't identify as a utilitarian" (among other reasons because I reject the idea of things like feeding all existing beings to utility monsters for trivial proportional gains to the latter, even absent all the pragmatic reasons not to; even if I thought such things more plausible it would require extreme certainty or non-pluralism to get such fanatical behavior).

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?

But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.

That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation. That would go even more strongly for negative utilitarianism, since it doesn't treat any life or part of life as being intrinsically good, regardless of the being in question valuing it, and is therefore even more misaligned with the rest of the world (in valuation of the lives of everyone else, and in the lives of their descendants). And such responses give reason even for utilitarian extremists to take actions that reduce such conflicts.

Insofar as purely psychological self-binding is hard, there are still externally available actions, such as visibly refraining from pursuit of unaccountable power to harm others, and taking actions to make it more difficult to do so, such as transferring power to those with less radical ideologies, or ensuring transparency and accountability to them.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:13:09.657Z · score: 12 (4 votes) · EA · GW
It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

That's why the very first words of my comment were "I don't identify as a utilitarian."



Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-07T02:19:19.233Z · score: 32 (19 votes) · EA · GW
I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms. But I hope you see that it does show that the whole thing comes down to whether you choose a prior like you did, or another reasonable alternative... Additionally, if you didn’t know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH.

I think this point is even stronger, as your early sections suggest. If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth. So those data would give us extreme evidence for a less dogmatic prior being correct.

Comment by carl_shulman on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-06T18:04:59.214Z · score: 30 (14 votes) · EA · GW

I don't identify as a utilitarian, but I am more sympathetic to consequentialism than the vast majority of people, and reject such thought experiments (even in the unlikely event they weren't practically self-defeating: utilitarians should want to modify their ideology and self-bind so that they won't do things that screw over the rest of society/other moral views, so that they can reap the larger rewards of positive-sum trades rather than negative-sum conflict). The contractarian (and commonsense and pluralism, but the theory I would most invoke for theoretical understanding is contractarian) objection to such things greatly outweighs the utilitarian case.

For the tiny population of Earth today (which is astronomically small compared to potential future populations) the idea becomes even more absurd. I would agree with Bostrom in Superintelligence (page 219) that failing to leave one galaxy, let alone one solar system, for existing beings out of billions of galaxies would be ludicrously monomaniacal and overconfident (and ex ante something that 100% convinced consequentialists would have very much wanted to commit to abstain from).

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T18:06:33.197Z · score: 29 (12 votes) · EA · GW

Szilard anticipated nuclear weapons (and launched a large and effective strategy to cause the liberal democracies to get them ahead of totalitarian states, although with regret), and was also concerned about germ warfare (along with many of the anti-nuclear scientists). See this 1949 story he wrote. Szilard seems very much like an agenty sophisticated anti-xrisk actor.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T17:34:33.016Z · score: 13 (10 votes) · EA · GW
I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period.

I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which, for an individual who is small relative to the overall portfolio, can mean donating 100% now).

However, I think that institutionally there are in fact mechanisms to regulate expenditures:

  • Evaluations of investments in movement-building involve estimations of the growth of EA resources that will result, and comparisons to financial returns; as movement-building returns decline they will start to fall under the financial return benchmark and no longer be expanded in that way
  • The Open Philanthropy Project has blogged about its use of the concept of a 'last dollar' opportunity cost of funds, asking for current spending whether in expectation it will do more good than saving it for future opportunities; assessing last dollars opportunity cost involves use of market investment returns, and the value of savings as insurance for the possibility of rare conditions that could provide enhanced returns (a collapse of other donors in core causes rather than a glut, major technological developments, etc)
  • Some other large and small donors likewise take into account future opportunities
  • Advisory institutions such as 80,000 Hours, charity evaluators, grantmakers, and affiliated academic researchers are positioned to advise change if donors start spending down too profligately (I for one stand ready for this wrt my advice to longtermist donors focused on existential risk)

All that said, it's valuable to improve broader EA community understanding of intertemporal tradeoffs, and estimation of the relevant parameters to determine disbursement rates better.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T17:03:45.434Z · score: 16 (8 votes) · EA · GW
My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable.

What credence would you give to 'this century is the most HoH-ish there will ever be henceforth'? That claim soaks up credence from trends towards diminishing influence over time, and our time is among the very first to benefit from longtermist altruists actually existing to get non-zero returns from longtermist strategies while facing plausible x-risks. The combination of those two factors seems to give a good shot at 'most HoH century,' and substantially better odds than that for 'most HoH century remaining.'

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:40:29.562Z · score: 14 (3 votes) · EA · GW
Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) :=  We are living at the most influential time ever. 
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

Two clear and common channels I have seen are:

  • Longtermism leads to looking around for things that would have lasting impacts (e.g. Parfit and Singer attending to existential risk, and noticing that a large portion of all technological advances have been in the last few centuries, and a large portion of the remainder look likely to come in the next few centuries, including the technologies for much higher existential risk)
  • People pay attention to the fact that the last few centuries have accounted for so much of all technological progress, and the likely gains to be had in the next few centuries (based on our knowledge of physical laws, existence proofs, from biology, and trend extrapolation), noticing things that can have incredibly long-lasting effects that dwarf short-run concerns
Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:32:03.719Z · score: 23 (8 votes) · EA · GW

> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.

I agree (and have used in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is the possibility of a 'catastrophe' that cuts other altruistic funding or similar while leaving one's savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also destroy a DAF saving for the future with high probability).

Saving extra for a catastrophe that destroys one's savings and the broader world at the same rate is a bet on proportional influence being more important in the poorer smaller post-disaster world, which seems like a weaker consideration. Saving or buying insurance to pay off in those cases, e.g. with time capsule messages to post-apocalyptic societies, or catastrophe bonds/insurance contracts to release funds in the event of a crash in the EA movement, get more oomph.

I'll also flag that we're switching back and forth here between the question of which century has the highest marginal impact per unit resources and which periods are worth saving/expending how much for.

>Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high.

I think this is true for what little EV of 'most important century' remains so far out, but that residual is very small. Note that Martin Weitzman's argument for discounting the future at the lowest possible rate (where we consider even very unlikely situations where discount rates remain low to get a low discount rate for the very long-term) gives different results with an effectively bounded utility function. If we face a limit like '~max value future' or 'utopian light-cone after a great reflection' then we can't make up for increasingly unlikely scenarios with correspondingly greater incremental probability of achieving ~ that maximum: diminishing returns mean we can't exponentially grow our utility gained from resources indefinitely (going from 99% of all wealth to 99.9% or 99.999% and so on will yield only a bounded increment to the chance of a utopian long-term). A related limit to growth (although there is some chance it could be avoided, making it another drag factor) comes if the chances of expropriation rise as one's wealth becomes a larger share of the world (a foundation with 50% of world wealth would be likely to face new taxes).
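A minimal numerical sketch of that point (the probabilities, growth rate, and cap below are invented purely for illustration): with unbounded exponential growth in achievable value, ever-less-likely long-horizon scenarios still dominate expected value, Weitzman-style; with value capped at some '~max value future,' their contributions shrink along with their probabilities.

```python
# Illustrative only: compare scenario contributions to EV with and without a bound on achievable value.

scenarios = [
    (100, 1e-1),    # (years out, assumed probability that growth/low discount rates persist that long)
    (500, 1e-2),
    (1_000, 1e-3),
    (5_000, 1e-4),
]
growth = 1.05       # unbounded case: resources (and value) compound at 5%/yr
value_cap = 1.0     # bounded case: value saturates at ~"max value future" = 1

for years, p in scenarios:
    unbounded_ev = p * growth ** years
    bounded_ev = p * value_cap
    print(f"{years:>5} yr scenario: unbounded EV ~ {unbounded_ev:.2e}, bounded EV ~ {bounded_ev:.1e}")

# Unbounded: the most remote, least likely scenarios dominate. Bounded: they contribute
# ever less, so the very long tail no longer drives the calculation.
```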


Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T16:10:28.429Z · score: 10 (6 votes) · EA · GW

Thinking further, I would go with importance among those options for 'total influence of an era' but none of those terms capture the 'per capita/resource' element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T06:36:47.672Z · score: 16 (8 votes) · EA · GW

I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably invest more in nukes than today's EA movement), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T06:31:52.526Z · score: 22 (7 votes) · EA · GW
You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future. 

Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-05T06:27:14.811Z · score: 2 (1 votes) · EA · GW
But ‘doesn't seem to be at all an option’ seems overstated to me.

In expectation it isn't an option, just as a result of combining likelihoods of a hinge in the era/transition that are comparable within a few OOM with populations that differ by far more. I was not ruling out specific scenarios, in the sense that it is possible that a random lottery ticket is the winner and worth tens of millions of dollars, but that doesn't make it an option for best investment.

Generally, I'm thinking in expectations since they're more action-guiding.


Comment by carl_shulman on Aging research and population ethics · 2019-09-05T01:29:29.280Z · score: 10 (3 votes) · EA · GW
it turns out that saving people by hastening the arrival of LEV wouldn't prevent births and could actually increase the average fertility rate of the world. This leads to a counterintuitive result: Aging research could be even more valuable under the impersonal view of population ethics.

The sense of 'even more valuable' meant here seems to be something like 'more adjusted morally relevant QALYs.' But a total view of population ethics (contrasted with a symmetric person-affecting view) generically massively increases the potential QALYs at stake, and shifts the relative choiceworthiness of different options, so that on the impersonal view life extension is less valuable compared to the alternatives (and thus less of a priority for actual efforts) even if more important in absolute terms:

  • Because animal populations turn over extremely rapidly and our interventions generally take too long to help current animals (only changing the conditions of future generations), the impersonal view vastly amplifies the relative importance of helping them relative to long-lived humans
  • Considerations of existential risk for future generations potentially affect populations many orders of magnitude larger than current human populations (and most of those generations will in any case have access to life-extension), so the total view strongly favors interventions that yield QALYs through effects on long run civilizational trajectories or survival, rather than effects like those from medical life extension
  • Life extension may help affect distant future generations, and fertility boosts increase growth, but it doesn't seem very well-targeted to that task

So I think the counterintuitive result is counterintuitive because it's not asking the right (action-guiding) question, and in action-guiding terms the person-affecting view does much more strongly favor life extension.


Comment by carl_shulman on Ask Me Anything! · 2019-09-04T23:20:21.358Z · score: 16 (10 votes) · EA · GW

> From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad.

I worry that this type of problem is often exaggerated, e.g. with the suggestion that 'proposed x-risk A has some arguments going for it, but one could make arguments for thousands of other things' when the thousands of other candidates that would appear to be in the same ballpark are never produced and could not be produced. When one makes a serious effort to catalog serious candidates at reasonable granularity the scope of considerations is vastly more manageable than initially suggested, but cluelessness is invoked in lieu of actually doing the search, or a representative subset of the search.

Comment by carl_shulman on Ask Me Anything! · 2019-09-04T23:06:53.307Z · score: 4 (3 votes) · EA · GW

People didn't quite have the relevant knowledge, since they didn't have sound plant and animal breeding programs or predictions of inheritance.

Comment by carl_shulman on Are we living at the most influential time in history? · 2019-09-04T22:38:00.815Z · score: 85 (32 votes) · EA · GW

Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.


hingeyness

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.

Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to catalog risks of human extinction I think they would have found both of the above, and could have worked to address them (there was reasonable scientific uncertainty about AI timelines, and people could reasonably have developed a lot more of the theory and analysis for AI alignment related topics at the time; on biological weapons, arms control could have been much more effective, better governance of DURC developed, etc).

I think this goes to a broader question about the counterfactual to use for your HoH measure: there wasn't any longtermist altruist community as such in these periods, so the actual returns of all longtermist altruist strategies were zero. To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing. 

I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns. Smallpox eradication was extraordinarily high return compared to the sorts of global health interventions being worked on today with a more crowded field. Founding fields like AI safety or population ethics is much better on a per capita basis than expanding them by 1% after they have developed more. The longtermist of 1600 would indeed have mostly 'invested' in building a movement and eventually in things like financial assets when movement-building returns fell below financial returns, but they also should have made concrete interventions like causing the leveraged growth of institutions like science and the Enlightenment that looked to have a fair chance of contributing to HoH scenarios over the coming centuries, and those could have paid off.

This is analogous to the general point in financial markets that asset classes with systematically high returns only have them before those returns are widely agreed on to be valuable and accessible. So startup founders or CEOs can earn large excess returns in expected value for their huge concentrated positions in their firms (in their founder shareholding and stock-based compensation) because of asymmetric information and incentive problems: investors want the founder or CEO to have a concentrated position to ensure good management, but the risk-adjusted value of a concentrated position is less for the same expected value, so the net arrangement delivers a lot of excess expected value.

A world in which everyone has shared correct values and strong knowledge of how to improve things is one in which marginal longtermist resources are gilding the lily. Insofar as longtermist altruists happen to find themselves with some advantages (e.g. high education in an era of educational inequality, and longtermism-relevant values or knowledge in particular), that is a potentially important asset to make use of.

The simulation update argument against HoH

I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk. E.g. say that HoH folk get to influence 10^60 future people, and also get their lives simulated 10^50 times (with no ability to impact things beyond their own lives), while folk in a non-HOH Earthly period get to influence 10^55 future people and get simulated 10^42 times. Because simulations account for a small minority of the total influence, the expected value of an action (or the evidential value of a strategy across all like minds) is still driven primarily by the non-simulated cases. Seeming HoH folk may be simulated more often, but still have most of their influence through unsimulated shaping of history.

If simulations were so numerous that most of the value in history lay in simulations, rather than in basement-level influence, then things might be different. But I think argument #3 doesn't work for this reason.
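Checking the arithmetic of the stipulated example above:

```python
# The stipulated illustrative figures from the example above.
hoh_basement_influence   = 1e60   # future people influenced by actual (unsimulated) HoH folk
hoh_simulated_copies     = 1e50   # simulated HoH lives, each with ~no influence beyond itself
non_hoh_basement         = 1e55   # for comparison: influence of folk in a non-HoH period
non_hoh_simulated_copies = 1e42

# Even counting every simulated life as a unit of "influence", simulations are a vanishing
# share of the total influence associated with seeming-HoH people:
sim_share = hoh_simulated_copies / (hoh_basement_influence + hoh_simulated_copies)
print(f"Share of HoH-associated influence occurring inside simulations: {sim_share:.1e}")  # ~1e-10

# So expected value is still driven almost entirely by the non-simulated cases, even though
# seeming-HoH folk are simulated far more often than others.
```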

Third, even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously. 

I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time, 10x the hingeyness would drive 10x the expenditure for a given period).
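A minimal sketch of that diminishing-returns point, under the stated assumption of logarithmic returns (the weights and budget below are made up): if period i has hingeyness weight w_i and spending x_i yields w_i·log(x_i), the budget split maximizing the total is proportional to the weights, so a 10x-hingier period gets 10x the spending rather than everything.

```python
import numpy as np

def total_value(spending, weights):
    # Total value when period i yields weights[i] * log(spending[i]).
    return float(np.sum(weights * np.log(spending)))

budget = 100.0
weights = np.array([10.0, 1.0])   # one period assumed 10x as hingey as the other

proportional = budget * weights / weights.sum()   # split in proportion to hingeyness
all_in = np.array([budget - 1e-6, 1e-6])          # alternative: spend ~everything on the hingier period

print(proportional, total_value(proportional, weights))   # higher total
print(all_in, total_value(all_in, weights))               # lower: log returns punish going all-in
```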

Most resources associated with EA are in investments. As Michael Dickens writes, small donors are holding most of their assets as human capital and not borrowing against it, while large donors such as Good Ventures are investing the vast majority of their financial assets for future donations. Insofar as people who have not yet entered EA but will do so are part of the broad EA portfolio, the annual disbursement rate of the total portfolio is even lower, perhaps 1-2% or less. And investment returns mean that equal allocation of NPV of current assets between time periods yields larger total spending in future periods (growing with the investment rate).

Moreover, quite a lot of EA donations actually consist in field- and movement-building (EA, longtermism, x-risk reduction), to the point of drawing criticism about excessive inward focus in some cases. Insofar as those fields are actually built they will create and attract resources with some flexibility to address future problems, and look like investments (this is not universal; e.g. GiveDirectly cash transfers had a larger field-building element when GiveDirectly was newer, but it is hard to recover increased future altruistic capacities later from cash transfers).

Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was and been able to do anything about it

I would distinguish between an era being important (on the metric of how much an individual or unit of resource could do) because its population was low, because there was important potential for a lock-in event in a period, and because of high visibility/tractability of longtermist altruists affecting such events (although the effects of that on marginal returns are nonobvious because of crowding, and the highest returns being on neglected assets).

The population factor gets ~monotonically and astronomically worse over time. The chance of lock-in should be distributed across eras (more by technological levels than calendar years), with more as technology advances towards actual high-fidelity stabilization as a possibility (via extinction or lock-in), and less over time thereafter due to pre-emption: if there is a 1/1,000 per-year chance of stabilization in extinction or a locked-in civilization, then the world will almost certainly be in a stable state a million years hence, so the expected per-year chance of stabilization needs to decline enormously on average over the coming era, in addition to falling per capita influence. This is related to Laplace's rule of succession: the longer we go under some conditions without an event happening, the less likely it is to happen on the next timestep, even aside from the object-level reasons re the speed of light and lock-in tech.
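A minimal sketch of the pre-emption/Laplace point, treating the per-century chance of stabilization (extinction or lock-in) as an unknown constant with a uniform prior (a simplification for illustration):

```python
def laplace_next_century_probability(n_event_free_centuries: int) -> float:
    # Laplace's rule of succession: with a uniform Beta(1, 1) prior over the per-century
    # chance and n centuries observed without stabilization, the posterior mean chance
    # for the next century is (0 + 1) / (n + 2).
    return 1.0 / (n_event_free_centuries + 2)

for n in [0, 10, 100, 10_000]:
    print(f"after {n:>6} event-free centuries: P(stabilization next century) ~ {laplace_next_century_probability(n):.2e}")
```

So conditional on the world still being unstabilized far in the future, the expected per-period chance of a decisive lock-in must be much lower than it is near the relevant technological transitions.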

So I would say both the population and pre-emption (by earlier stabillization) factors intensely favor earlier eras in per resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

When I check that against the opportunities of past periods that does make sense to me. It seems quite plausible that 1960 was a much better time for a marginal altruist to take object-level actions to reduce long-run x-risk (and better in expected terms without the benefit of hindsight regarding things like nuclear doomsday devices and AI and BW timelines) by building relevant fields with less crowding (building a good EAish movement looks even better; 'which is the hingeyest period' is distinct from 'is hingeyness declining faster than ROI for financial instruments or movement-building').

The growth of a longtermist altruist movement in particular would mean marginal per capita hingeyness (drawn around longtermist interests) should seriously decline going forward.

In contrast, if the hingiest times are in the future, it’s likely that this is for reasons that we haven’t thought of. But there are future scenarios that we can imagine now that would seem very influential

For the later scenarios here you're dealing with much larger populations. If the plausibility of important lock-in is similar for solar colonization and intergalactic colonization eras, but the population of the latter is billions of times greater, it doesn't seem to be at all an option that it could be the most HoH period on a per resource unit basis.

Comment by carl_shulman on Ask Me Anything! · 2019-08-20T20:55:14.741Z · score: 4 (2 votes) · EA · GW

The annual total of all spending on electoral campaigns in the US is only a few billion dollars. So aggregating across all of that activity the per $ (and per staffer) impact is still going to be quite large.

Comment by carl_shulman on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-26T01:55:52.466Z · score: 6 (3 votes) · EA · GW

One way of thinking about this from a recent Open Phil blog post:

We have only explored a small portion of the space of possible causes in this broad area, and continue to expect that advocacy or scientific research, perhaps more squarely aimed at the global poor, could have outsized impacts. Indeed, GiveWell seems to agree this is possible, with their expansion into considering advocacy opportunities within global health and development...One hypothesis we’re interested in exploring is the idea of combining multiple sources of leverage for philanthropic impact (e.g., advocacy, scientific research, helping the global poor) to get more humanitarian impact per dollar (for instance via advocacy around scientific research funding or policies, or scientific research around global health interventions, or policy around global health and development).

I agree with the thesis that EA focused on global poverty on average has neglected research and advocacy on pro-development institutions relative to their importance and cost.


Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T20:01:46.201Z · score: 11 (4 votes) · EA · GW

> I'm at like 30-40% that the beneficial effects are real.)

Right, so you would want to show that 30-40% of interventions with similar literatures pan out. I think the figure is less.

Scott referred to [edit: one] failure to replicate in his post.


Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T19:58:13.626Z · score: 0 (0 votes) · EA · GW

[Deleted.]

Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T20:10:27.811Z · score: 47 (15 votes) · EA · GW

That sounds a bit like the argument 'either this claim is right, or it's wrong, so there's a 50% chance it's true.'

One needs to attend to base rates. Our bad academic knowledge-generating process throws up many, many illusory interventions with purported massive effects for each amazing intervention we find, and the amazing interventions that we do find were disproportionately easier to show (with the naked eye, visible macro-correlations, consistent effects with well-powered studies, etc).

People are making similar arguments about cold fusion, psychic powers (of many different varieties), many environmental and nutritional contaminants, brain training, carbon dioxide levels, diets, polyphasic sleep, assorted purported nootropics, many psychological/parenting/educational interventions, etc.

Testing how your prior applies across a spectrum of other cases (past and present) is helpful for model checking. If psychedelics are a promising EA cause, how many of those others qualify? If many do, then any one isn't so individually special, although one might want a systematic program of rigorously testing all the wacky claims of large impact that can be tested cheaply.

If not, then it would be good to explain what exactly makes psychedelics different from the rest.

I think the case for psychedelics the OP has made doesn't pass this standard yet, so doesn't meet the standard for an EA cause area.

Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T06:39:14.063Z · score: 27 (12 votes) · EA · GW
On the flip side, it may be possible that the "true believers" actually are on to something, but they have a hard time formalizing their procedure into something that can be replicated on a massive scale. So if larger studies fail to replicate the results from the small studies, this may be the reason why.

Do you have any examples of this actually happening? I have seen it as an excuse for things that never pan out many times, but I don't recall an instance of it actually delivering. E.g. in Many Labs 2 and other mass reproducibility efforts, you don't find a minority of experimenters with a 'knack' who get the effect but can't pass it on to others.

Comment by carl_shulman on Small animals have enormous brains for their size · 2019-02-27T22:56:30.276Z · score: 21 (7 votes) · EA · GW

Recent large sample within-family data does seem to establish causal effects of brain size on intelligence and educational attainment. The genetic correlation is ~0.4, so most of the genetic variance isn't working through overall brain size.
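To make that step explicit (reading the genetic correlation in standard variance-explained terms, which is my gloss on the cited figure):

$$ r_g \approx 0.4 \;\Rightarrow\; r_g^2 \approx 0.16 $$

i.e. overall brain size would statistically account for only about 16% of the genetic variance in the cognitive measures, leaving the bulk to work through other channels.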

Some kinds of features that could contribute to genetic variance in humans, but not scale for arbitrary differences across species:

  • Mutation load (the rate at which this is trimmed back, and thus the equilibrium load, depends on the strength of selection for cognitive abilities)
  • Motivation: attention to learning, play, imitation, and language comes at the expense of attention to other things
  • Pleiotropy with other selection combined with evolutionary limits (selection for lower aggression also causes white patches in fur via changes in neural crests, and retention of a variety of juvenile features), e.g. selection for disease resistance changing pathways so as to accidentally impair brain function (with the change surviving because of its benefits)
  • Alleles that provide resistance to disease (genetic variance is maintained in a Red Queen's Race) that damages the brain would be a source of genetic variance, likewise variants affecting nutrition or other environmental influences
Comment by carl_shulman on Quantifying anthropic effects on the Fermi paradox · 2019-02-27T21:41:30.313Z · score: 15 (6 votes) · EA · GW

Thank you for this excellent and detailed post, I expect to use it in the future as a go-to reference for explaining this point. You might be interested in an old paper where Nick Bostrom and I went through some of this reasoning (with similar conclusions but much less explanation) in the course of discussing the implications of anthropic theories for the possible difficulty of evolving intelligence.

I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions), i.e. the vicinity of "roughly the largest value that doesn’t make the Fermi observation too unlikely, as shown in the next two sections". But that's compatible with much or most of our expected value on the total view coming from scenarios where we don't overlap with aliens much.

" However, varying the planet formation rate at particular times in the history of the Universe can make a large difference."

We also update our uncertainty about this sort of temporal structure to some extent from our observation of late existence. Ideally we would want to let as much as possible vary so that we don't asymmetrically immunize some parameters against update.

"For this reason I will ignore scenarios where life is extraordinarily unlikely to colonise the Universe, by making fs loguniform between 10−4 and 1."

This seems overall too pessimistic to me as a pre-anthropic prior for colonization (~10% credence).

Comment by carl_shulman on Cost-Effectiveness of Aging Research · 2019-01-31T17:23:00.909Z · score: 7 (5 votes) · EA · GW

I don't think you can define aging research so narrowly and get the same expected impact. E.g. De Grey's SENS includes curing cancer as one of many subgoals, and radical advances in stem cell biology and genetic engineering, massive fields that don't fall under 'aging research.' The more dependent progress in an area is on advances from outside that field, the less reliable this sort of projection will be.

Comment by carl_shulman on Expected cost per life saved of the TAME trial · 2019-01-29T23:47:14.071Z · score: 3 (2 votes) · EA · GW

Hi Emanuele,

I saw your request for commentary on Facebook, so here are some off-the-cuff comments (about 1 hour's worth so take with appropriate grains of salt, but summarizing prior thinking):

  • My prior take on metformin was that it seems promising for its space (albeit with mixed evidence, and prior longevity drug development efforts haven't panned out, but the returns would be very high for medical research if true), although overall the space looks less promising than x-risk reduction to me; the following comments will be about details of the analysis where I would currently differ
  • The suggestion of this trial moving forward LEV by 3+ years through an icebreaker effect boosting research looks wildly implausible to me
    • LEV is not mainly bottlenecked on 'research on aging,' e.g. de Grey's proposals require radical advances in generally medically applicable stem cell and genetic engineering technologies that already receive massive funding and are quite challenging; the ability to replace diseased cells with genetically engineered stem cell derived tissues is already a major priority, and curing cancer is a small subset of SENS
    • Much of the expected gain in biomedical technology is not driven by shifts within biology, and advances within a particular medical field are heavily driven by broader improvements (e.g. computers, CRISPR, genome sequencing, PCR, etc); if LEV is far off and heavily dependent on other areas, then developments in other fields will make it comparatively easy for aging research to benefit from 'catch up growth' reducing the expected value of immediate speedup (almost all of which would have washed away if LEV happens in the latter half of the century)
    • In particular, if automating R&D with AI is easier than LEV, and would moot prior biomedical research, then that adds an additional discount factor; I would bet that this happens before LEV through biomedical research
    • Getting approval to treat 'aging' isn't actually particularly helpful relative to approval for 'diseases of aging' since all-cause mortality requires larger trials and we don't have great aging biomarkers; and the NIH has taken steps in that direction regardless
    • Similar stories have been told about other developments and experiments, which haven't had massive icebreaker effects
    • Combined, these effects look like they cost a couple orders of magnitude
  • From my current epistemic state the expected # of years added by metformin looks too high
  • Re the Guesstimate model, the statistical power of the trial is tightly tied to effect size; the larger the effect size, the fewer people you need to show results; that raises the returns of small trials, but means you have diminishing returns for larger ones (you are spending more money to detect smaller effects, so marginal cost-effectiveness goes a lot lower than average cost-effectiveness, reflecting the high VOI of testing the more extravagant possibility); see the sketch after this list
  • Likewise the proportion using metformin conditional on a positive result is also correlated with effect size (which raises average EV, but shifts marginal EV lower proportionate to average EV); also the proportion of users seems too low to me conditional on success
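
A minimal sketch of the power-versus-effect-size point, using the standard two-sample normal approximation (the alpha, power, and effect sizes below are illustrative placeholders, not figures from the TAME analysis):

```python
from scipy.stats import norm

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect a difference in means of
    effect_size standardized (Cohen's d) units in a two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

for d in [0.5, 0.2, 0.1, 0.05]:
    print(f"effect size {d:>4}: ~{n_per_arm(d):,.0f} participants per arm")
```

Halving the detectable effect roughly quadruples the required sample, which is why the marginal cost-effectiveness of a larger trial falls well below its average cost-effectiveness.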

Comment by carl_shulman on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-29T23:45:41.446Z · score: 5 (3 votes) · EA · GW

One issue I would add to your theoretical analysis of assigning 1000+ QALYs to letting someone reach LEV: people commonly don't claim linear utility in lifespan, i.e. they would often prefer to live to 80 with certainty rather than die at 20 with 90% probability and live to 10,000 with 10% probability.
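
A toy illustration of that preference (the log utility function is just one stand-in for diminishing marginal value of extra life-years, not a claim about the right functional form):

```python
import math

def u_linear(years):
    return years

def u_concave(years):
    # a stand-in for diminishing marginal value of extra life-years
    return math.log(1 + years)

certain_years = 80
gamble = [(0.9, 20), (0.1, 10_000)]  # die at 20 vs. live to 10,000

for name, u in [("linear", u_linear), ("concave (log)", u_concave)]:
    ev_certain = u(certain_years)
    ev_gamble = sum(p * u(y) for p, y in gamble)
    print(f"{name:>13}: certain {ev_certain:8.2f} vs gamble {ev_gamble:8.2f}")
```

Under linear utility the gamble dominates (about 1,018 expected life-years vs. 80), but with the concave utility many people report, the certain 80 years wins; that caps how many QALYs one can credit to a shot at LEV.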

I agree it's worth keeping in mind the chance that people will be able to live much longer in the future when assessing benefits to existing people (I would also add the possibility of drastic increases in quality of life through technology). I'd guess most of this comes from broader technological improvements (e.g. via AI) rather than from reaching LEV through biomedical approaches, but not with extreme confidence.

However, I don't think it has very radical implications for cause prioritization since, as you note, deaths for any reason (including malaria and global catastrophes) deny those people a chance at LEV. LEV-related issues are also mainly a concern for existing humans, so to the extent one gives a boost for enormous impacts on nonhuman animals and the existence of future generations, LEV speedup won't reap much of those boosts.

Within the field of biomedical research, aging looks relatively promising, and I think on average the best-targeted biomedical research does well for current people compared to linear charity in support of deployment (e.g. gene drives vs bednets). But it's not a slam dunk because the problems are so hard (including ones receiving massive investment). I don't see it as strongly moving most people who prefer to support bednets over malaria gene drives, farmed animal welfare over gene drives, or GCR reduction over gene drives.

Comment by carl_shulman on How High Contraceptive Use Can Help Animals? · 2018-12-30T17:51:22.749Z · score: 7 (5 votes) · EA · GW

" Oh, is the concern that they're looking at a more biased subset of possible effects (by focusing primarily on effects that seem positive)? "

Yes. It doesn't mention other analyses that have come to opposite conclusions by considering effects on wild animals and long-term development.

Comment by carl_shulman on How High Contraceptive Use Can Help Animals? · 2018-12-30T05:10:35.563Z · score: 27 (15 votes) · EA · GW

If you're going to select interventions specifically to reduce the human population for its downstream consequences, it seems absolutely essential to take a broader view of the empirical consequences than in the linked report. E.g., effects on wild animals (most immediate animal effects of this change will be on wild animals), future technological advancement, and global catastrophic risks all have good cases for being far larger than, and plausibly of opposite sign to, the effects discussed in the report, but are not mentioned even as areas for further investigation.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-24T20:23:07.102Z · score: 6 (4 votes) · EA · GW

What about a report along the lines of 'I am donating in support of X, for highly illegible reasons relating to my intuition from looking at their work, and private information I have about them personally'?

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:10:51.599Z · score: 6 (4 votes) · EA · GW

This is a good point, and worth highlighting in discussions of reports (especially as we get more data on the effects of winning on donation patterns). On the other hand, the average depth and quality of investigation by winners (and the access they got) does seem higher than what they would otherwise have done, while still less than that of expert donors.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:05:02.758Z · score: 4 (3 votes) · EA · GW

I don't think this is true. The probabilities and payouts are the same for any given participant, regardless of what others do, so people who are unlikely to write up a report don't reduce the average number of reports produced by those who would.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:00:25.584Z · score: 4 (3 votes) · EA · GW

Except that the pot size isn't constrained by the participation of small donors: the CEA donor lottery has fixed pot sizes guaranteed by large donors, and the largest donors could be ~risk-neutral over lotteries with pots of many millions of dollars. So there is no effect of this kind, and there is unlikely to ever be one except at ludicrously large scales (where one could use derivatives or the like to get similar effects).

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T20:50:56.030Z · score: 4 (3 votes) · EA · GW

Yes, the main effect balances out like that.

But insofar as the lottery enhances the effectiveness of donors (by letting them invest more in research if they win, amortized against a larger donation), then you want donors doing good to be enhanced and donors doing bad not to be enhanced. So you might want to try to avoid boosting pot size available to bad donors, and ensure good donors have large pots available. The CEA lottery is structured so that question doesn't arise.

There is also the minor issue of correlation with other donors in the same block mentioned in the above comment, although you could ask CEA for a separate block if some unusual situation meant your donation plans would change a lot if you found out another block participant had won.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T04:46:10.612Z · score: 8 (4 votes) · EA · GW

> but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.

For the CEA donor lottery, the pot size is fixed independent of one's entry, as the guarantor (Paul Christiano last year, the regranting pool I am administering this year) puts in funds for any unclaimed tickets. So the distribution of funding amounts for each entrant is unaffected by other entrants. It's set up this way specifically so that people don't even have to think about the sort of effect you discuss (the backstop fund has ~linear value of funds over the relevant range, so that isn't an impact either).
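
A minimal sketch of that mechanism (the pot size and donation amounts are hypothetical, and this is my own toy model rather than CEA's implementation):

```python
import random

def run_lottery(entries, pot=100_000, seed=None):
    """Fixed-pot lottery: the guarantor backstops the unclaimed share, so each
    entrant's win probability is donation/pot regardless of who else enters."""
    rng = random.Random(seed)
    assert sum(entries.values()) <= pot
    draw = rng.uniform(0, pot)
    cumulative = 0.0
    for name, amount in entries.items():
        cumulative += amount
        if draw < cumulative:
            return name   # this entrant allocates the whole pot
    return "guarantor"    # the unclaimed share stays with the backstop donor

# Alice's chance of winning is 5,000/100,000 = 5% whether or not Bob enters.
print(run_lottery({"Alice": 5_000, "Bob": 20_000}, seed=1))
print(run_lottery({"Alice": 5_000}, seed=1))
```

Because the pot is fixed and the guarantor absorbs the unclaimed share, adding or removing other entrants changes nothing about any individual entrant's distribution over outcomes.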

The only thing that participating in the same lottery block as someone else matters for is correlations between your donations and theirs. E.g. if you would wind up choosing a different charity to give to depending on whether another participant won the lottery. But normally the behavior of one other donor wouldn't change what you think is the best opportunity.

Comment by carl_shulman on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-22T18:19:43.954Z · score: 7 (5 votes) · EA · GW

What happened in those cases?

Comment by carl_shulman on Open Thread #43 · 2018-12-10T00:30:59.821Z · score: 5 (4 votes) · EA · GW

I would love to see a canonical post making this argument; conflating EA with the benefits of maxing out personal warm fuzzies is one of my pet peeves.

Comment by carl_shulman on Why we have over-rated Cool Earth · 2018-12-09T04:00:48.151Z · score: 3 (2 votes) · EA · GW

I actually happen to think that the report was too dismissive of more leveraged climate change interventions, which I expected could be a lot better than the estimates for Cool Earth (especially efficient angles on scientific research and political activity in the climate space). But the OP is suggesting that the original Cool Earth numbers (which indicate much lower cost-effectiveness than charities recommended by EAs in other areas with more robust data) were overstated, not understated (as the original report would suggest, due to regression to the mean and measurement error).

Comment by carl_shulman on Why we have over-rated Cool Earth · 2018-12-09T03:56:37.837Z · score: 4 (3 votes) · EA · GW

One thing to emphasize more than that writeup did is that, in EA terms, donating to such a lightly researched intervention (a few months' work) is very likely dominated by donating to research the area better, finding higher expected value options and influencing others.

On the other hand, the point estimates in that report favored other charities like AMF over Cool Earth anyway, a conclusion strengthened by the OP's critique (not that it excludes something orders of magnitude better being found, like unusual energy research, very effective political lobbying, geoengineering, etc.; Open Philanthropy has made a few climate grants that look relatively leveraged).

And I agree with John Maxwell about it being oversold in some cases.

Comment by carl_shulman on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-14T20:51:00.280Z · score: 21 (9 votes) · EA · GW

" I’d like to hear if there’s been any relevant work done on this topic (either within EA organizations or within general academia). Increasing returns is a fairly common topic within economics, so I figure there is plenty of relevant research out there on this. "

These are my key reasons (with links to academic EA and other discussions) for seeing diminishing returns as the relevant situation on average for EA as a whole, and in particular for the most effective causes:

  • If problems can be solved, and vary in difficulty over multiple orders of magnitude (in required inputs), you will tend to see diminishing returns as you plot the number of problems solved against increasing resources; see this series of posts by Owen Cotton-Barratt and others, and the toy simulation after this list
  • Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation, and for global total factor productivity; but historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development
  • In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns
  • In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), so early influence can compound; this topic has been the subject of a fair amount of EA attention
  • When there are varied possible expenditures with widely varying cost-effectiveness and some limits on room for more funding (eventually, there may be increasing returns before that), then working one's way from the most effective options to the least produces a diminishing returns curve at a scale large enough to encompass multiple interventions; Toby Ord discusses the landscape of global health interventions having this property
  • Elaborating on the idea of limits to funding and scaling: an extremely cost-effective intervention with linear or increasing returns that scaled to very large expenditures would often imply impossibly large effects; there can be cheap opportunities to save a human life today for $100 under special circumstances, but there can't be trillions of dollars' worth of such opportunities, since even $1 trillion at $100 per life would imply saving 10 billion people, more than the world population; likewise the probability of premature extinction cannot fall below 0, etc.
  • So far EA is still small and unusual relative to the world, and much of its activity is harvesting low-hanging fruit from areas with diminishing returns (a consequence of those fruit) that couldn't be scaled to extremes (this is least true for linear aid interventions added to already large global aid and local programs, in particular GiveDirectly, but holds for what I would consider more promising in cost-effectiveness among EA global health interventions, such as gene drive R&D for malaria eradication); as EA activity expands, more currently underfunded areas will see returns diminish to the point of falling behind interventions with more linear or increasing returns but worse current cost-effectiveness
  • Experience so far with successes from using neglectedness (which in prioritization practice does involve looking at the reasons for neglect), at least on dimensions for which feedback has already arrived
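
A toy simulation of the first bullet (my own construction, not taken from the linked posts): when problem difficulties span several orders of magnitude and you fund the cheapest ones first, the problems-solved curve is sharply concave in resources.

```python
import random

random.seed(0)

# Hypothetical problems whose costs are log-uniform over four orders of magnitude.
costs = sorted(10 ** random.uniform(0, 4) for _ in range(1_000))

def problems_solved(budget):
    """Spend the budget on the cheapest unsolved problems first."""
    solved, spent = 0, 0.0
    for c in costs:
        if spent + c > budget:
            break
        spent += c
        solved += 1
    return solved

for budget in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"budget {budget:>11,.0f}: {problems_solved(budget):>4} problems solved")
```

Each additional order of magnitude of budget buys far less than ten times as many solved problems, because the cheap problems are exhausted first.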

" Ideally, we would like to not simply select causes that are neglected, but to select causes that are neglected for reasons other than their impact. "

Agreed.

Comment by carl_shulman on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-14T19:59:24.459Z · score: 7 (5 votes) · EA · GW

The examples in the post have expected utilities assigned using inconsistent methodologies. If it's possible to have long-run effects on future generations, then many actions will have such effects (elections can sometimes cause human extinction; an additional person saved from malaria could go on to cause or prevent extinction). If ludicrously vast universes and influence over them are subjectively possible, then we should likewise consider that we are less likely to get ludicrous returns if we are extinct or badly governed (see 'empirical stabilization assumptions' in Nick Bostrom's infinite ethics paper). We might have infinite impact (under certain decision theories) when we make a decision to eat a sandwich, if there are infinitely many physically identical beings in the universe who will make the same decision as us.

Any argument of the form "consider type of consequence X, which is larger than consequences you had previously considered, as it applies to option A" calls for applying X to the analysis of other options as well. When you do that, you don't get any 10^100 differences in expected utility of this sort without an overwhelming amount of evidence indicating that A has 10^100+ times as much impact on X as option B or C (or as your prior over other and unknown alternatives you may find later).

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-04T04:06:08.334Z · score: 1 (1 votes) · EA · GW

> A strictly positive critical level that is low enough such that it would not result in the choice for that counter-intuitive situation, is still possible.

As a matter of mathematics this appears impossible. For any critical level c that you pick where c > 0, there is some level of positive welfare w where c > w > 0, with relative utility u = w - c, so u < 0.

There will then be some population of people with negative welfare and negative relative utility, with relative utility between u and 0, whose existence variable CLU would prefer to your existence with welfare w and critical level c. You can use gambles (with arbitrarily divisible probabilities) or aggregation across similar people to bring that total arbitrarily close to zero. So either c <= 0, or CLU will recommend creating negative-welfare, negative-relative-utility people to prevent your existence at some positive welfare levels.
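
A worked instance of the argument (the specific numbers are mine, chosen only to satisfy c > w > 0):

```python
def relative_utility(welfare, critical_level):
    # critical-level utilitarianism: value added by a life = welfare - critical level
    return welfare - critical_level

# A happy person who sets a positive critical level above their own welfare:
you = relative_utility(welfare=5, critical_level=10)       # u = -5

# A person with negative welfare (a life not worth living) at critical level 0:
sufferer = relative_utility(welfare=-2, critical_level=0)  # -2

# Variable CLU ranks the sufferer's existence above yours, since -2 > -5:
print(you, sufferer, sufferer > you)
```

So any positive critical level makes some positive-welfare lives rank below some negative-welfare lives, which is the counter-intuitive result above.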

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-03T20:17:40.207Z · score: 1 (1 votes) · EA · GW

> This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person), who sets his critical level so high that a situation should be chosen where he does not exist, and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such counter-intuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.

Such situations exist for any critical level above zero, since any critical level above zero means treating people with positive welfare as a bad thing, to be avoided even at the expense of some amount of negative welfare.

If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself.

> For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation.

A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter). A change in your personal critical level only changes the actions recommended by variable CLU when it changes the ranking of actions in terms of relative utilities, which requires the actions to already be close, within a distance on the scale of one life.

In other words, that's a result of the summing up of (relative) welfare, not a reason to misstate your valuation of your own existence.

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-03T00:47:09.561Z · score: 7 (7 votes) · EA · GW

I have several issues with the internal consistency of this argument:

  • If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great
  • The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level
  • You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two
  • The article makes much of avoiding the repugnant sadistic conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level even higher than their actual welfare

On the first point, you suggest that that individuals get to set their own critical levels based on their preferences about their own lives. E.g.

> The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

So if my desires and attitudes are such that I set a critical level well below the maximum, then my life can add substantial global value. E.g. if A has utility +5 and sets critical value 0, B has utility +5 and chooses critical value 10, and C has utility -5 and critical value 10, then 3 lives like A will offset one life like C, and you can get most of the implications of the total view, and in particular an overwhelmingly high value of the future if the future is mostly populated with beings who favor existing and set low critical levels for themselves (which one could expect from people choosing features of their descendants, or from selection).
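
Transcribing that example into the relative-utility arithmetic (value added by a life = welfare minus the chosen critical level):

```python
people = {
    "A": {"welfare": 5, "critical": 0},    # relative utility +5
    "B": {"welfare": 5, "critical": 10},   # relative utility -5
    "C": {"welfare": -5, "critical": 10},  # relative utility -15
}

rel = {name: p["welfare"] - p["critical"] for name, p in people.items()}
print(rel)                       # {'A': 5, 'B': -5, 'C': -15}
print(3 * rel["A"] + rel["C"])   # 0: three lives like A offset one life like C
```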

On the second point, returning to this quote:

> The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

I would note that utility in the sense of preferences over choices, or a utility function, need not correspond to pleasure or pain. The article is unclear on the concept of utility it is using, but the above quote seems to require a preference base, i.e. zero utility is defined as the point at which the person would prefer to be alive rather than not. But then if 0 is the level at which one would prefer to exist, isn't it equally contradictory to set a higher critical level and reject lives that one would prefer? Perhaps you are imagining someone who thinks 'given that I am alive I would rather live than die, but I dislike having come into existence in the first place, which death would not change.' But in this framework that would just be a negative component of the assessment of the overall life (and people without that attitude can be unbothered).

Regarding the third point, if each of us choose our own critical level autonomously, I do not get to decree a level for others. But the article makes several arguments that seem to conflate individual and global choice by talking about everyone choosing a certain level, e.g.:

> If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels.

But if I set a very high critical level for myself, that doesn't lower the critical levels of others, and so the repugnant conclusion can proceed just fine with the mildly good lives of those who choose low critical levels for themselves. Having the individuals choose for themselves based on prior prejudices about global population ethics also defeats the role of the individual choice as a way to derive the global conclusion. I don't need to be a total utilitarian in general to approve of my existence in cases in which I would prefer to exist.

Lastly, a standard objection to critical level views is that they treat lives below the critical level (but better than nothing by the person's own lights, and containing happiness but not pain) as negative, and so will endorse creating lives of intense suffering, lived by people who wish they had never existed, to prevent the creation of multiple mildly good lives. With the variable critical level account all those cases would still go through using people who choose high critical levels (with the quasi-negative view, it would favor creating suicidal lives of torment to offset the creation of blissful beings a bit below the maximum). I don't see that addressed in the article.