Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T20:01:46.201Z · score: 11 (4 votes) · EA · GW

> I'm at like 30-40% that the beneficial effects are real.)

Right, so you would want to show that 30-40% of interventions with similar literatures pan out. I think the figure is less.

Scott referred to [edit: one] failure to replicate in his post.

Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T19:58:13.626Z · score: 0 (0 votes) · EA · GW


Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T20:10:27.811Z · score: 47 (15 votes) · EA · GW

That sounds a bit like the argument 'either this claim is right, or it's wrong, so there's a 50% chance it's true.'

One needs to attend to base rates. Our bad academic knowledge-generating process throws up many, many illusory interventions with purported massive effects for each amazing intervention we find, and the amazing interventions we do find were disproportionately easier to demonstrate (visible to the naked eye, macro-correlations, consistent effects in well-powered studies, etc.).

People are making similar arguments about cold fusion, psychic powers (of many different varieties), many environmental and nutritional contaminants, brain training, carbon dioxide levels, diets, polyphasic sleep, assorted purported nootropics, many psychological/parenting/educational interventions, etc.

Testing how your prior applies across a spectrum of other cases (past and present) is helpful for model checking. If psychedelics are a promising EA cause, how many of those others qualify? If many do, then no one of them is so individually special, although one might want a systematic program of rigorously testing all the wacky claims of large impact that can be tested cheaply.

If not, then it would be good to explain what exactly makes psychedelics different from the rest.

I think the case the OP has made for psychedelics doesn't pass this test yet, so it doesn't meet the standard for an EA cause area.

Comment by carl_shulman on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T06:39:14.063Z · score: 27 (12 votes) · EA · GW

> On the flip side, it may be possible that the "true believers" actually are on to something, but they have a hard time formalizing their procedure into something that can be replicated on a massive scale. So if larger studies fail to replicate the results from the small studies, this may be the reason why.

Do you have any examples of this actually happening? I have seen it as an excuse for things that never pan out many times, but I don't recall an instance of it actually delivering. E.g. in Many Labs 2 and other mass reproducibility efforts, you don't find a minority of experimenters with a 'knack' who get the effect but can't pass it on to others.

Comment by carl_shulman on Small animals have enormous brains for their size · 2019-02-27T22:56:30.276Z · score: 21 (7 votes) · EA · GW

Recent large sample within-family data does seem to establish causal effects of brain size on intelligence and educational attainment. The genetic correlation is ~0.4, so most of the genetic variance isn't working through overall brain size.

Some kinds of features that could contribute to genetic variance in humans, but not scale for arbitrary differences across species:

  • Mutation load (the rate at which this is trimmed back, and thus the equilibrium load, depends on the strength of selection for cognitive abilities)
  • Motivation: attention to learning, play, imitation, and language comes at the expense of attention to other things
  • Pleiotropy with other selection combined with evolutionary limits (selection for lower aggression also causes white patches in fur via changes in neural crests, and retention of a variety of juvenile features), e.g. selection for disease resistance changing pathways so as to accidentally impair brain function (with the change surviving because of its benefits)
  • Alleles providing resistance to diseases that damage the brain (with genetic variance maintained in a Red Queen's race) would be a source of genetic variance, as would variants affecting nutrition or other environmental influences.

Comment by carl_shulman on Quantifying anthropic effects on the Fermi paradox · 2019-02-27T21:41:30.313Z · score: 15 (6 votes) · EA · GW

Thank you for this excellent and detailed post; I expect to use it in the future as a go-to reference for explaining this point. You might be interested in an old paper where Nick Bostrom and I went through some of this reasoning (with similar conclusions but much less explanation) in the course of discussing the implications of anthropic theories for the possible difficulty of evolving intelligence.

I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions), i.e. the vicinity of "roughly the largest value that doesn't make the Fermi observation too unlikely, as shown in the next two sections." But that's compatible with much or most of our expected value on the total view coming from scenarios where we don't overlap with aliens much.

> However, varying the planet formation rate at particular times in the history of the Universe can make a large difference.

We also update our uncertainty about this sort of temporal structure, to some extent, from the observation that we exist late. Ideally we would want to let as much as possible vary, so that we don't asymmetrically immunize some parameters against update.

> For this reason I will ignore scenarios where life is extraordinarily unlikely to colonise the Universe, by making fs loguniform between 10^-4 and 1.

This seems overall too pessimistic to me as a pre-anthropic prior for colonization (~10% credence).

Comment by carl_shulman on Cost-Effectiveness of Aging Research · 2019-01-31T17:23:00.909Z · score: 7 (5 votes) · EA · GW

I don't think you can define aging research so narrowly and get the same expected impact. E.g. de Grey's SENS includes curing cancer as one of many subgoals, plus radical advances in stem cell biology and genetic engineering, massive fields that don't fall under 'aging research.' The more dependent progress in an area is on advances from outside that field, the less reliable this sort of projection will be.

Comment by carl_shulman on Expected cost per life saved of the TAME trial · 2019-01-29T23:47:14.071Z · score: 3 (2 votes) · EA · GW

Hi Emanuele,

I saw your request for commentary on Facebook, so here are some off-the-cuff comments (about 1 hour's worth so take with appropriate grains of salt, but summarizing prior thinking):

  • My prior take on metformin was that it seems promising for its space (albeit with mixed evidence, and prior longevity drug development efforts haven't panned out, but the returns would be very high for medical research if true), although overall the space looks less promising than x-risk reduction to me; the following comments will be about details of the analysis where I would currently differ
  • The suggestion of this trial moving forward LEV by 3+ years through an icebreaker effect boosting research looks wildly implausible to me
    • LEV is not mainly bottlenecked on 'research on aging,' e.g. de Grey's proposals require radical advances in generally medically applicable stem cell and genetic engineering technologies that already receive massive funding and are quite challenging; the ability to replace diseased cells with genetically engineered stem cell derived tissues is already a major priority, and curing cancer is a small subset of SENS
    • Much of the expected gain in biomedical technology is not driven by shifts within biology, and advances within a particular medical field are heavily driven by broader improvements (e.g. computers, CRISPR, genome sequencing, PCR, etc); if LEV is far off and heavily dependent on other areas, then developments in other fields will make it comparatively easy for aging research to benefit from 'catch up growth' reducing the expected value of immediate speedup (almost all of which would have washed away if LEV happens in the latter half of the century)
    • In particular, if automating R&D with AI is easier than LEV, and would moot prior biomedical research, then that adds an additional discount factor; I would bet that this happens before LEV through biomedical research
    • Getting approval to treat 'aging' isn't actually particularly helpful relative to approval for 'diseases of aging' since all-cause mortality requires larger trials and we don't have great aging biomarkers; and the NIH has taken steps in that direction regardless
    • Similar stories have been told about other developments and experiments, which haven't had massive icebreaker effects
    • Combined, these effects look like they cost a couple orders of magnitude
  • From my current epistemic state the expected # of years added by metformin looks too high
  • Re the Guesstimate model: the statistical power of the trial is tightly tied to effect size; the larger the effect size, the fewer people you need to show results. That raises the returns of small trials, but means you have diminishing returns for larger ones (you are spending more money to detect smaller effects, so marginal cost-effectiveness goes a lot lower than average cost-effectiveness, reflecting the high VOI of testing the more extravagant possibility)
  • Likewise the proportion using metformin conditional on a positive result is also correlated with effect size (which raises average EV, but shifts marginal EV lower proportionate to average EV); also the proportion of users seems too low to me conditional on success
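
To make the coupling between effect size and trial size concrete, here is a rough sketch using the standard two-arm z-test approximation (all numbers here are hypothetical; this is an illustration of the scaling, not the Guesstimate model itself):

```python
from statistics import NormalDist

def n_per_arm(effect, sd=1.0, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sample z-test.

    Required n scales with 1/effect^2, so halving the detectable
    effect size roughly quadruples the trial size.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) * sd / effect) ** 2

large = n_per_arm(0.5)    # ~63 participants per arm
small = n_per_arm(0.25)   # ~251 per arm: 4x the cost for half the effect
```

So a small cheap trial already has high value of information for the extravagant possibilities, while chasing smaller residual effects drives marginal cost-effectiveness well below average cost-effectiveness.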

Comment by carl_shulman on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-29T23:45:41.446Z · score: 5 (3 votes) · EA · GW

One issue I would add to your theoretical analysis of assigning 1000+ QALYs to letting someone reach LEV: people commonly don't claim linear utility with lifespan, i.e. they would often prefer to live to 80 with certainty rather than die at 20 with 90% probability and live to 10,000 with 10% probability.

I agree it's worth keeping the chance that people will be able to live much longer in the future in mind when assessing benefits to existing people (I would also add the possibility of drastic increases in quality of life through technology). I'd guess most of this comes from broader technological improvements (e.g. via AI) rather than from reaching LEV through biomedical approaches, but not with extreme confidence.

However, I don't think it has very radical implications for cause prioritization since, as you note, deaths for any reason (including malaria and global catastrophes) deny those people a chance at LEV. LEV-related issues are also mainly a concern for existing humans, so to the extent one gives a boost for enormous impacts on nonhuman animals and the existence of future generations, LEV speedup won't reap much of those boosts.

Within the field of biomedical research, aging looks relatively promising, and I think on average the best-targeted biomedical research does well for current people compared to linear charity in support of deployment (e.g. gene drives vs bednets). But it's not a slam dunk because the problems are so hard (including ones receiving massive investment). I don't see it as strongly moving most people who prefer to support bednets over malaria gene drives, farmed animal welfare over gene drives, or GCR reduction over gene drives.

Comment by carl_shulman on How High Contraceptive Use Can Help Animals? · 2018-12-30T17:51:22.749Z · score: 7 (5 votes) · EA · GW

> Oh, is the concern that they're looking at a more biased subset of possible effects (by focusing primarily on effects that seem positive)?

Yes. It doesn't mention other analyses that have come to opposite conclusions by considering effects on wild animals and long-term development.

Comment by carl_shulman on How High Contraceptive Use Can Help Animals? · 2018-12-30T05:10:35.563Z · score: 27 (15 votes) · EA · GW

If you're going to select interventions specifically to reduce the human population and have downstream consequences, it seems absolutely essential to take a broader view of the empirical consequences than in the linked report. E.g. among others, effects on wild animals (not mentioned but most immediate animal effects of this change will be on wild animals), future technological advancement, and global catastrophic risks have good cases for being far larger and plausibly of opposite sign to the effects discussed in the report but are not mentioned even as areas for further investigation.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-24T20:23:07.102Z · score: 6 (4 votes) · EA · GW

What about a report along the lines of 'I am donating in support of X, for highly illegible reasons relating to my intuition from looking at their work, and private information I have about them personally'?

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:10:51.599Z · score: 6 (4 votes) · EA · GW

This is a good point, and worth highlighting in discussion of reports (especially as we get more data on the effects of winning on donation patterns). On the other hand, the average depth and quality of investigation by winners (and the access they got) does seem higher than what they would otherwise have done, whilst less than expert donors.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:05:02.758Z · score: 4 (3 votes) · EA · GW

I don't think this is true. The probabilities and payouts are the same for any given participant, regardless of what others do, so people who are unlikely to write up a report don't reduce the average number of reports produced by those who would.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T21:00:25.584Z · score: 4 (3 votes) · EA · GW

Except that the pot size isn't constrained by the participation of small donors: the CEA donor lottery has fixed pot sizes guaranteed by large donors, and the largest donors could be ~risk-neutral over lotteries with pots of many millions of dollars. So there is no effect of this kind, and there is unlikely ever to be one except at ludicrously large scales (where one could use derivatives or the like to get similar effects).

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T20:50:56.030Z · score: 4 (3 votes) · EA · GW

Yes, the main effect balances out like that.

But insofar as the lottery enhances the effectiveness of donors (by letting them invest more in research if they win, amortized against a larger donation), then you want donors doing good to be enhanced and donors doing bad not to be enhanced. So you might want to try to avoid boosting pot size available to bad donors, and ensure good donors have large pots available. The CEA lottery is structured so that question doesn't arise.

There is also the minor issue of correlation with other donors in the same block mentioned in the above comment, although you could ask CEA for a separate block if some unusual situation meant your donation plans would change a lot if you found out another block participant had won.

Comment by carl_shulman on Should donor lottery winners write reports? · 2018-12-23T04:46:10.612Z · score: 8 (4 votes) · EA · GW

> but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.

For the CEA donor lottery, the pot size is fixed independent of one's entry as the guarantor (Paul Christiano last year, the regranting pool I am administering this year) puts in funds for any unclaimed tickets. So the distribution of funding amounts for each entrant is unaffected by other entrants. It's set up this way specifically so that people don't even have to think about the sort of effect you discuss (the backstop fund has ~linear value of funds over the relevant range, so that isn't an impact either).

The only thing that participating in the same lottery block as someone else matters for is correlations between your donations and theirs. E.g. if you would wind up choosing a different charity to give to depending on whether another participant won the lottery. But normally the behavior of one other donor wouldn't change what you think is the best opportunity.
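
A minimal sketch of that backstop mechanism (the pot size, names, and ticket layout are made up for illustration; this is not CEA's actual implementation):

```python
import random

def run_lottery(entries, pot=100_000, seed=None):
    """Draw a winner from a fixed-size pot: the guarantor holds every
    unclaimed ticket, so each entrant's win probability (amount / pot)
    is unchanged by who else enters."""
    rng = random.Random(seed)
    claimed = sum(entries.values())
    assert claimed <= pot
    draw = rng.uniform(0, pot)
    running = 0.0
    for name, amount in entries.items():
        running += amount
        if draw < running:
            return name
    return "guarantor"  # draw fell in the backstopped range

# Alice's chance is 10,000 / 100,000 = 10% whether or not Bob enters.
```

Simulating many draws with and without a second entrant shows the same ~10% win frequency for the first entrant either way, which is the sense in which other participants don't matter.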

Comment by carl_shulman on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-22T18:19:43.954Z · score: 7 (5 votes) · EA · GW

What happened in those cases?

Comment by carl_shulman on Open Thread #43 · 2018-12-10T00:30:59.821Z · score: 5 (4 votes) · EA · GW

I would love to see a canonical post making this argument; conflating EA with the benefits of maxing out personal warm fuzzies is one of my pet peeves.

Comment by carl_shulman on Why we have over-rated Cool Earth · 2018-12-09T04:00:48.151Z · score: 3 (2 votes) · EA · GW

I actually happen to think that the report was too dismissive of more leveraged climate change interventions, which I expected could be a lot better than the estimates for Cool Earth (especially efficient angles on scientific research and political activity in the climate space). But the OP is suggesting that the original Cool Earth numbers (which indicate much lower cost-effectiveness than charities recommended by EAs in other areas with more robust data) were overstated due to regression to the mean and measurement error, not understated as the original report would suggest.

Comment by carl_shulman on Why we have over-rated Cool Earth · 2018-12-09T03:56:37.837Z · score: 4 (3 votes) · EA · GW

One thing to emphasize more than that writeup did is that, in EA terms, donating to such a lightly researched intervention (a few months' work) is very likely dominated by donations to research the area better, finding higher expected value options and influencing others.

On the other hand, the point estimates in that report favored other charities like AMF over Cool Earth anyway, a conclusion strengthened by the OP critique (not that it excludes something else orders of magnitude better being found like unusual energy research, very effective political lobbying, geoengineering, etc; Open Philanthropy has made a few climate grants that look relatively leveraged).

And I agree with John Maxwell about it being oversold in some cases.

Comment by carl_shulman on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-14T20:51:00.280Z · score: 21 (9 votes) · EA · GW

> I’d like to hear if there’s been any relevant work done on this topic (either within EA organizations or within general academia). Increasing returns is a fairly common topic within economics, so I figure there is plenty of relevant research out there on this.

These are my key reasons (with links to academic EA and other discussions) for seeing diminishing returns as the relevant situation on average for EA as a whole, and in particular the most effective causes:

  • If problems can be solved, and vary in difficulty over multiple orders of magnitude (in required inputs), you will tend to see diminishing returns as you plot the number of problems solved with increasing resources; see this series of posts by Owen Cotton-Barratt and others
  • Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation, and for global total factor productivity; but historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development
  • In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns
  • In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), and so influencing; this topic has been the subject of a fair amount of EA attention
  • When there are varied possible expenditures with widely varying cost-effectiveness and some limits on room for more funding (eventually, there may be increasing returns before that), then working one's way from the most effective options to the least produces a diminishing returns curve at a scale large enough to encompass multiple interventions; Toby Ord discusses the landscape of global health interventions having this property
  • Elaborating on the idea of limits to funding and scaling: an extremely cost-effective intervention with linear or increasing returns that scaled to very large expenditures would often imply impossibly large effects; there can be cheap opportunities to save a human life today for $100 under special circumstances, but there can't be trillions of dollars worth of such opportunities, since you would be saving more than the world population; likewise the probability of premature extinction cannot fall below 0, etc
  • So far EA is still small and unusual relative to the world, and much of its activity is harvesting low-hanging fruit from areas with diminishing returns (a consequence of those fruit) that couldn't be scaled to extremes (this is least true for linear aid interventions added to the already large global aid and local programs, and in particular GiveDirectly, but holds for what I would consider more promising, in cost-effectiveness, EA global health interventions such as gene drive R&D for malaria eradication); as EA activity expands more currently underfunded areas will see returns diminish to the point of falling behind interventions with more linear or increasing returns but worse current cost-effectiveness
  • Experience so far with successes from using neglectedness (which in prioritization practice does involve looking at the reasons for neglect), at least on dimensions for which feedback has arrived
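
As a toy illustration of the point about working from the most effective options down (all numbers hypothetical): even if each intervention is linear up to its room for more funding, the cumulative impact curve across the portfolio is concave, i.e. diminishing returns at the portfolio level.

```python
# (cost-effectiveness per dollar, room for more funding in dollars),
# already sorted best-first
interventions = [
    (100.0, 1_000),
    (10.0, 10_000),
    (1.0, 100_000),
]

spent, impact, curve = 0.0, 0.0, []
for ce, room in interventions:
    spent += room
    impact += ce * room
    curve.append((spent, impact))

# Each successive tranche of spending buys impact at a strictly lower rate
rates = [ce for ce, _ in interventions]
assert rates == sorted(rates, reverse=True)
```

The first $1,000 here buys as much impact as the last $100,000, which is the shape Toby Ord describes for the landscape of global health interventions.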

> Ideally, we would like to not simply select causes that are neglected, but to select causes that are neglected for reasons other than their impact.


Comment by carl_shulman on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-14T19:59:24.459Z · score: 7 (5 votes) · EA · GW

The examples in the post have expected utilities assigned using inconsistent methodologies. If it's possible to have long-run effects on future generations, then many actions will have such effects (elections can cause human extinction sometimes, an additional person saved from malaria could go on to cause or prevent extinction). If ludicrously vast universes and influence over them are subjectively possible, then we should likewise consider being less likely to get ludicrous returns if we are extinct or badly-governed (see 'empirical stabilization assumptions' in Nick Bostrom's infinite ethics paper). We might have infinite impact (under certain decision theories) when we make a decision to eat a sandwich if there are infinite physically identical beings in the universe who will make the same decision as us.

Any argument of the form "consider type of consequence X, which is larger than consequences you had previously considered, as it applies to option A" calls for application of X to analyzing other options. When you do that you don't get any 10^100 differences in expected utility of this sort, without an overwhelming amount of evidence to indicate that A has 10^100+ times the impact on X as option B or C (or your prior over other and unknown alternatives you may find later).

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-04T04:06:08.334Z · score: 1 (1 votes) · EA · GW

> A strictly positive critical level that is low enough such that it would not result in the choice for that counter-intuitive situation, is still possible.

As a matter of mathematics this appears impossible. For any critical level c that you pick where c > 0, there is some level of positive welfare w with c > w > 0, and hence relative utility u = w - c < 0.

There will then be some quantity of people with negative welfare and negative relative utility, with relative utility between u and 0, whose existence variable CLU would prefer to your existence with critical level c and welfare w. You can use gambles (with arbitrarily divisible probabilities) or aggregation across similar people to get relative utility arbitrarily close to zero. So either c <= 0, or CLU will recommend the creation of people with negative welfare and negative relative utility to prevent your existence at some positive welfare levels.
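
A quick numeric check of this argument, with made-up values (welfare w = 0.5 against a critical level c = 1):

```python
def relative_utility(welfare, critical_level):
    return welfare - critical_level

# A happy life below the critical level gets a negative score...
happy = relative_utility(0.5, 1.0)        # u = -0.5

# ...so CLU ranks a life with negative welfare (but relative utility
# between u and 0) above it, preferring its creation instead.
suffering = relative_utility(-0.1, 0.1)   # -0.2 > -0.5

assert suffering > happy
```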

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-03T20:17:40.207Z · score: 1 (1 votes) · EA · GW

> This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person), who sets his critical level so high that a situation should be chosen where he does not exist, and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such counter-intuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.

Such situations exist for any critical level above zero, since any critical level above zero means treating people with positive welfare as a bad thing, to be avoided even at the expense of some amount of negative welfare.

If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself.

> For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation.

A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter). A change in your personal critical level only changes the actions recommended by your variable CLU when it changes the ranking of actions in terms of relative utilities, which can only happen when the actions were already within a distance on the scale of one life of each other.

In other words, that's a result of the summing up of (relative) welfare, not a reason to misstate your valuation of your own existence.

Comment by carl_shulman on Reducing existential risks or wild animal suffering? · 2018-11-03T00:47:09.561Z · score: 7 (7 votes) · EA · GW

I have several issues with the internal consistency of this argument:

  • If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great
  • The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level at which one would prefer to exist, it seems likewise nonsensical to set it above that level
  • You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two
  • The article makes much about avoiding repugnant sadistic conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level to an even higher level than the actual one

On the first point, you suggest that individuals get to set their own critical levels based on their preferences about their own lives. E.g.

> The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

So if my desires and attitudes are such that I set a critical level well below the maximum, then my life can add substantial global value. E.g. if A has utility +5 and sets critical value 0, B has utility +5 and chooses critical value 10, and C has utility -5 and critical value 10, then 3 lives like A will offset one life like C, and you can get most of the implications of the total view, and in particular an overwhelmingly high value of the future if the future is mostly populated with beings who favor existing and set low critical levels for themselves (which one could expect from people choosing features of their descendants or selection).
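
Checking the arithmetic in that example (utility, chosen critical level):

```python
def rel(utility, critical_level):
    return utility - critical_level

A = rel(+5, 0)    # +5
B = rel(+5, 10)   # -5
C = rel(-5, 10)   # -15

# Three lives like A exactly offset one life like C in summed relative
# utility, recovering approximately total-view behavior among beings
# who set low critical levels for themselves.
assert 3 * A + C == 0
```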

On the second point, returning to this quote:

> The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

I would note that utility in the sense of preferences over choices, or a utility function, need not correspond to pleasure or pain. The article is unclear on the concept of utility it is using, but the above quote seems to require a preference base, i.e. zero utility is defined as the point at which the person would prefer to be alive rather than not. But then if 0 is the level at which one would prefer to exist, isn't it equally contradictory to have a higher critical level and reject lives that you would prefer? Perhaps you are imagining someone who thinks 'given that I am alive I would rather live than die, but I dislike having come into existence in the first place, which death would not change.' But in this framework that dislike would just be a negative component of the assessment of the overall life (and people without that attitude can be unbothered).

Regarding the third point, if each of us choose our own critical level autonomously, I do not get to decree a level for others. But the article makes several arguments that seem to conflate individual and global choice by talking about everyone choosing a certain level, e.g.:

> If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels.

But if I set a very high critical level for myself, that doesn't lower the critical levels of others, and so the repugnant conclusion can proceed just fine with the mildly good lives of those who choose low critical levels for themselves. Having the individuals choose for themselves based on prior prejudices about global population ethics also defeats the role of the individual choice as a way to derive the global conclusion. I don't need to be a total utilitarian in general to approve of my existence in cases in which I would prefer to exist.

Lastly, a standard objection to critical level views is that they treat lives below the critical level (but better than nothing by the person's own lights and containing happiness but not pain) as negative, and so will endorse creating lives of intense suffering by people who wish they had never existed to prevent the creation of multiple mildly good lives. With the variable critical level account all those cases would still go through using people who choose high critical levels (with the quasi-negative view, it would favor creating suicidal lives of torment to offset the creation of blissful beings a bit below the maximum). I don't see that addressed in the article.

Comment by carl_shulman on Problems with EA representativeness and how to solve it · 2018-08-15T18:03:42.541Z · score: 7 (4 votes) · EA · GW

Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there

If that were the only change, our century would still look special with regard to the possibility of lasting changes short of extinction, e.g. as discussed in this post by Nick Beckstead. There is also the astronomical waste argument: a delay in interstellar colonization of 1 year means losing out on all the galaxies reachable (before separation by the expansion of the universe) by colonization begun in year n-1 but not year n. The population of our century is vanishingly small compared to future centuries, so the ability of people today to affect the colonized volume is accordingly vastly greater on a per capita basis, and the loss of reachable galaxies to delayed colonization is irreplaceable as such.

So we would still be in a very special and irreplaceable position, but less so.

For our low-population generation to really not be in a special position, especially per capita, it would have to be the case that none of our actions have effects on much more populous futures as a whole. That would be very strange, but if it were true then there wouldn't be any large expected impacts of actions on the welfare of future people.

But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

I'm not sure I understand the scenario. This sounds like a case where an action to do X makes no difference because future people will do X (and are more numerous and richer). In terms of Singer's drowning child analogy, that would be like a case where many people are trying to save the child and extras don't make the child more likely to be saved, i.e. extra attempts at helping have no counterfactual impact. In that case there's no point in helping (although it may be worth trying if there is enough of a chance that extra help will turn out to be necessary after all).

So we could consider a case where there are many children in the pond, say 20, and other people are gathered around the pond and will save 10 without your help, but 12 with your help. There are also bystanders who won't help regardless. However, there is also a child on land who needs CPR, and you are the only one who knows how to provide it. If you provide the CPR instead of pulling children from the pond, then 10+1=11 children will be saved instead of 12. I think in that case you should save the two children from drowning instead of the one child with CPR, even though your ability to help with CPR is more unique, since it is less effective.
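To make the arithmetic of the pond case explicit, here is a trivial sketch using only the numbers from the example above:

```python
# Pond thought experiment: compare total children saved under each choice.
saved_without_you = 10      # others pull 10 children from the pond on their own
saved_with_you = 12         # with your help in the pond, 12 are pulled out
cpr_saves = 1               # only you can give CPR to the child on land

help_in_pond = saved_with_you            # 12 saved; the CPR child is lost
do_cpr = saved_without_you + cpr_saves   # 10 + 1 = 11 saved

# Uniqueness of the CPR skill doesn't settle it; totals do.
assert help_in_pond > do_cpr
print(help_in_pond, do_cpr)  # 12 11
```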

Likewise, it seems to me that if we have special reason to help current people at the expense of much greater losses to future generations, it would be because of flow-through effects, or some kind of partiality (like favoring family over strangers), or some other reason to think the result is good (at least by our lights), rather than just that future generations cannot act now (by the same token, billions of people could but don't intervene to save those dying of malaria or suffering in factory farms today).

Comment by carl_shulman on Problems with EA representativeness and how to solve it · 2018-08-12T22:12:06.028Z · score: 8 (5 votes) · EA · GW

The argument is that some things in the relatively near term have lasting effects that cannot be reversed by later generations. For example, if humanity goes extinct as a result of war with weapons of mass destruction this century, before it can become more robust (e.g. by being present on multiple planets, creating lasting peace, etc), then there won't be any future generations to act in our stead (for at least many millions of years for another species to follow in our footsteps, if that happens before the end of the Earth's habitability).

Likewise, if our civilization was replaced this century by unsafe AI with stable less morally valuable ends, then future generations over millions of years would be controlled by AIs pursuing those same ends.

This period appears exceptional over the course of all history so far in that we might be able to destroy or permanently worsen the prospects of civilizations as a result of new technologies, but before we have reached a stable technological equilibrium or dispersed through space.

Comment by carl_shulman on EA Funds - An update from CEA · 2018-08-09T18:10:22.776Z · score: 4 (4 votes) · EA · GW

I don't know, you could email and ask. If Chloe wanted to take only large donations one could use a donor lottery to turn a small donation into a chance of a large one.

Comment by carl_shulman on EA Funds - An update from CEA · 2018-08-09T07:21:56.249Z · score: 3 (3 votes) · EA · GW

Would it be a good idea to create an EA Fund for U.S. criminal justice?

Open Phil's Chloe Cockburn has a fund for external donors. See Open Phil's recent blog post:

Chloe Cockburn leads our work in this area, and as such has led our outreach to other donors. To date, we estimate that her advice to other donors (i.e., other than Dustin and Cari) has resulted in donations moved (in the same sense as the metric GiveWell tracks) that amount to a reasonable fraction (>25%) of the giving she has managed for Open Philanthropy.

It appears that interest in her recommendations has been growing, and we have recently decided to support the creation of a separate vehicle - the Accountable Justice Action Fund - to make it easier for donors interested in criminal justice reform to make donations to a pool of funds overseen by Chloe. The Fund is organized as a 501(c)(4) organization; those interested in contributing to AJAF should contact us.

Comment by carl_shulman on Current Estimates for Likelihood of X-Risk? · 2018-08-08T01:47:27.889Z · score: 3 (3 votes) · EA · GW

Not by default, but I hope to get more useful forecasts that are EA action-relevant in the future performed and published.

Comment by carl_shulman on When causes multiply · 2018-08-07T21:31:59.296Z · score: 7 (7 votes) · EA · GW

"Note also that while we’re looking at such large pools of funding, the EA community will hardly be able to affect the funding ratio substantially. Therefore, this type of exercise will often just show us which single cause should be prioritised by the EA community and thereby act additive after all. This is different if we look at questions with multiplicative factors in which the decisions by the EA community can affect the input ratios like whether we should add more talent to the EA community or focus on improving existing talent."

I agree that multiplicative factors are a big deal for areas where we collectively have strong control over key variables, rather than trying to move big global aggregates. But I think it's the latter that we have in mind when talking about 'causes' rather than interventions or inputs working in particular causes (e.g. investment in hiring vs activities of current employees). For example:

"Should the EA community focus to add its resources on the efforts to reduce GCRs or to add them to efforts to help humanity flourish?"

If you're looking at global variables like world poverty rates, or total risk of extinction it requires quite a lot of absolute impact before you make much of a proportional change.

E.g. if you reduce the prospective risk of existential catastrophe from 10% to 9%, you might increase the benefits of saving lives through AMF by a fraction of a percent, as it would be more likely that civilization would survive to see benefits of the AMF donations. But a 1% change would be unlikely to drastically alter allocations between catastrophic risks and AMF. And a 1% change in existential risk is an enormous impact: even in terms of current humans (relevant for comparison to AMF) that could represent tens of millions of expected current lives (depending on the timeline of catastrophe), and immense considering other kinds of beings and generations. If one were having such amazing impact in a scalable fashion it would seem worth going further at that point.
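As a rough illustration of the "tens of millions of expected current lives" figure, here is a back-of-envelope sketch; the population and kill-fraction numbers are illustrative assumptions, not from the comment:

```python
# Expected current lives saved by a 1-percentage-point reduction in
# existential risk, counting only people alive today.
world_population = 7.6e9   # assumed rough 2018 world population
risk_reduction = 0.01      # e.g. 10% -> 9%
fraction_killed = 1.0      # assume an existential catastrophe kills everyone

expected_lives = world_population * risk_reduction * fraction_killed
print(f"{expected_lives:,.0f}")  # 76,000,000 -> tens of millions
```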

Diminishing returns of our interventions on each of these variables seems a much more important consideration that multiplicative effects between these variables: cost per percentage point of existential risk reduced is likely to grow many times as one moves along the diminishing returns curve.

"We could also think of the technical ideas to improve institutional decision making like improving forecasting abilities as multiplying with those institution’s willingness to implement those ideas."

If we're thinking about institutions like national governments, changing willingness to implement the ideas seems much less elastic than improving the methods. If we look at a much narrower space, e.g. the EA community or a few actors in some core areas, the multiplicative factors loom larger across key fields and questions.

If I was going to look for cross-cause multiplicative effects it would likely be for their effects on the EA community (e.g. people working on cause A generate some knowledge or reputation that helps improve the efficiency of work on cause B, which has more impact if cause B efforts are larger).

Comment by carl_shulman on Current Estimates for Likelihood of X-Risk? · 2018-08-07T19:38:27.708Z · score: 5 (5 votes) · EA · GW

GJ results (as opposed to Good Judgment Open) aren't public, but Open Phil has an account with them. This is from a batch of nuclear war probability questions I suggested that Open Phil commission to help assess nuclear risk interventions.

Comment by carl_shulman on Current Estimates for Likelihood of X-Risk? · 2018-08-07T01:25:14.998Z · score: 7 (7 votes) · EA · GW

The fixed 0.1% extinction risk is used as a discount rate in the Stern report. That closes the model to give finite values (instead of infinite benefits) after they exclude pure temporal preference discounting on ethical grounds. Unfortunately, the assumption of infinite confidence in a fixed extinction rate gives very different (lower) expected values than a distribution that accounts for the possibility of extinction risks eventually becoming stably low for long periods (the Stern version gives a probability of less than 1 in 20,000 to civilization surviving another 10,000 years, when agriculture is already 10,000 years old).
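The 1-in-20,000 figure can be checked directly; a minimal sketch, assuming the fixed 0.1% annual rate is applied independently each year:

```python
# Survival probability over 10,000 years under a fixed 0.1% annual
# extinction rate held with full confidence (the Stern setup).
annual_risk = 0.001
years = 10_000

p_survive = (1 - annual_risk) ** years
print(p_survive)  # ~4.5e-05, i.e. worse odds than 1 in 20,000
```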

Comment by carl_shulman on Current Estimates for Likelihood of X-Risk? · 2018-08-07T01:14:51.411Z · score: 14 (14 votes) · EA · GW

Earlier this year Good Judgment superforecasters (in nonpublic data) gave a median probability of 2% that a state actor would make a nuclear weapon attack killing at least 1 person before January 1, 2021. Conditional on that happening they gave an 84% probability of 1-9 weapons detonating, 13% to 10-99, 2% to 100-999, and 1% to 1,000 or more.
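For concreteness, the conditional figures can be combined with the 2% attack probability into unconditional probabilities (a quick sketch; the last band is read as 1,000 or more detonations):

```python
# Unconditional probabilities implied by the superforecaster numbers above.
p_attack = 0.02  # P(state nuclear attack killing >= 1 person before 2021)
conditional = {"1-9": 0.84, "10-99": 0.13, "100-999": 0.02, "1000+": 0.01}

unconditional = {band: p_attack * p for band, p in conditional.items()}
for band, p in unconditional.items():
    print(band, f"{p:.2%}")
# e.g. P(attack with 10-99 detonations) = 0.02 * 0.13 = 0.26%
```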

Here's a survey of national security experts which gave a median 5% chance of a nuclear great power conflict killing at least 80 million people over 20 years, although some of the figures in the tables look questionable (mean less than half of median).

It's not clear how much one should trust these groups in this area. Over a longer time scale I would expect the numbers to be higher, since there is information that we are currently not in a Cold War (or hot war!), and various technological and geopolitical factors (e.g. the shift to multipolar military power and the rise of China) may drive it up.

Comment by carl_shulman on Current Estimates for Likelihood of X-Risk? · 2018-08-06T21:55:19.195Z · score: 14 (14 votes) · EA · GW

The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to 10^-1 or 10^-5 (at more extreme values, you might start having pascalian worries), and instead the discussion hinges on issues like tractability, pop ethics, etc.

I think differences over that range matter a lot, both within a long-termist perspective and over a pluralist distribution across perspectives.

At the high end of that range the low-hanging fruit of x-risk reduction will also be very effective at saving the lives of already existing humans, making them less dependent on concern for future generations.

At the low end, non-existential risk trajectory changes look more important within long-termist frame, or capacity building for later challenges.

Magnitude of risk also importantly goes into processes for allocating effort under moral uncertainty and moral pluralism.

Comment by carl_shulman on Open Thread #39 · 2018-06-05T23:24:23.012Z · score: 3 (3 votes) · EA · GW

You could just link to a Google Drive (or other) copy of your full article in your comment, both to let people read the article, and to accumulate the karma for a top-level post if people like it.

Comment by carl_shulman on Why don't many effective altruists work on natural resource scarcity? · 2018-06-04T06:17:35.888Z · score: 1 (1 votes) · EA · GW

To be clear, bees are dying at high rates (and have been for some time) and this is imposing costs on agriculture, and that could get worse, and addressing that is likely a fine use of resources for agricultural R&D and protection.

But that is very different from posing a major risk of human extinction or civilization collapse via breakdown of the ability of agriculture to produce food (particularly the biggest, wind-pollinated, staple crops). That is the exaggerated threat which I say does not check out.

Comment by carl_shulman on The counterfactual impact of agents acting in concert · 2018-05-28T05:31:01.928Z · score: 4 (4 votes) · EA · GW

Are you neglecting to count the negative impact from causing other people to do the suboptimal thing? If I use my funds to set up an exploding matching grant that will divert the funds of other donors from better things to a less effective charity, that is a negative part of my impact.

Comment by carl_shulman on The person-affecting value of existential risk reduction · 2018-04-13T18:43:25.755Z · score: 9 (9 votes) · EA · GW

Other person-affecting views consider people who will necessarily exist (however cashed out) rather than whether they happen to exist now (planting a bomb with a timer of 1000 years still accrues person-affecting harm). In an 'extinction in 100 years' scenario, this view would still count the harm to everyone alive then who dies, although it would still discount the foregone benefit of the people who 'could have been' subsequently in the moral calculus.

Butterfly effects change the identities of at least all yet-to-be conceived persons, so this would have to not be interested in particular people, but population sizes/counterparts.

Comment by carl_shulman on Opportunities for individual donors in AI safety · 2018-04-01T02:36:14.203Z · score: 1 (1 votes) · EA · GW

If you find your opportunities are being constrained by small donation size, you can use donor lotteries to trade your donation for a small chance of a large budget (just get in touch with CEA if you need a chance at a larger pot). You may also be interested in a post I made on this subject.

Comment by Carl_Shulman on [deleted post] 2018-01-14T18:46:34.618Z

Did you collect base rate information for other initiatives before campaigns (which tend to lower approval relative to pre-campaign polling) for that parameter?

Comment by carl_shulman on 80,000 Hours annual review released · 2018-01-03T03:17:15.171Z · score: 1 (1 votes) · EA · GW

Here is 80k's mea culpa on replaceability.

Comment by carl_shulman on The expected value of the long-term future · 2017-12-29T00:45:42.739Z · score: 9 (9 votes) · EA · GW

That's our best understanding.

But there is then an argument on this account to attend to whatever small credence one may have in indefinite exponential growth in value. E.g. if you could build utility monsters such that every increment of computational power let them add another morally important order of magnitude to their represented utility, or hypercomputers were somehow possible, or we could create baby universes.

Comment by carl_shulman on Announcing the 2017 donor lottery · 2017-12-21T22:18:22.799Z · score: 1 (1 votes) · EA · GW

I can understand that a winner selecting a non-EA cause might end up having to convince CEA of their decision,

See Sam's comment below:

"to emphasise this, as CEA is running this lottery for the benefit of the community, it's important for the community to have confidence that CEA will follow their recommendations (otherwise people might be reticent to participate). So, to be clear, while CEA makes the final call on the grant, unless there's a good reason not to (see the 'Caveats and Limitations' section on the Lotteries page) we'll do our best to follow a donor's recommendation, even if it's to a recipient that wouldn't normally be thought of as a strictly EA."

Are there advocacy-related reasons for donating directly to charities instead of joining such a lottery?

One data point: last year Jacob Steinhardt put a majority of his donations into the lottery for expected direct impact, and then allocated the remainder himself for practice donating and signaling value.

Comment by carl_shulman on Announcing the 2017 donor lottery · 2017-12-21T22:06:12.381Z · score: 2 (2 votes) · EA · GW

there seems to be a strong cultural norm in my country against allowing lottery winners to remain anonymous... This is not the case in Europe, where it is far more common for lottery winners to remain anonymous. When the rules for anonymity were being drafted, was any thought given to this issue?

If a lottery organization is conducting a draw itself, and could rig the draw, publishing the winner's identity allows people to detect fraud, e.g. if the lottery commissioner's family members keep winning that would indicate skulduggery. I think this is the usual reason for requiring publicity. Did you have another in mind?

In the case of CEA's lottery (and last year's lottery), the actual draw is the U.S. National Institute of Standards and Technology (NIST) public randomness beacon, outside of CEA's control, which allows every participant to know whether their numbers were drawn.
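To illustrate how a public beacon makes the draw verifiable, here is a hypothetical sketch; the hex string and the modulo rule are illustrative stand-ins, not CEA's actual procedure or a real NIST pulse:

```python
# Map a public randomness beacon output to a winning ticket number.
# Anyone can recompute this from the published pulse, so the draw
# cannot be rigged by the lottery operator.
def winning_ticket(beacon_output_hex: str, num_tickets: int) -> int:
    """Deterministic, publicly checkable draw from a beacon output."""
    return int(beacon_output_hex, 16) % num_tickets

fake_pulse = "9f2b" * 16  # placeholder for a real 512-bit outputValue
print(winning_ticket(fake_pulse, 100_000))
```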

When the rules for anonymity were being drafted, was any thought given to this issue?

Someone raised the possibility of people who didn't want publicity/celebrity being discouraged from making use of the option, as part of the general aim of making it usable to as many donors as possible.

Comment by carl_shulman on Donor lottery details · 2017-12-21T04:01:24.958Z · score: 2 (2 votes) · EA · GW

I'm glad to know that my portion of the donor lottery funds are being used in such a positive manner.

I would add, though, that participation doesn't affect the expected payout to any player's recommendations (and in the CEA lottery setup, it doesn't affect the pot size or draw probability). I.e. if other donor lottery players planned to donate their funds to something completely useless, that doesn't make any difference for you (unless hearing that they had made that donation outside the lottery context would have changed your own charity pick).

Comment by carl_shulman on Announcing the 2017 donor lottery · 2017-12-18T05:40:50.226Z · score: 0 (0 votes) · EA · GW

Could you explain your first sentence? What risks are you talking about?

Probably the risks of moving down the diminishing returns curve. E.g. if Good Ventures put its entire endowment into a donor lottery (e.g. run by BMGF) for a 1/5 chance of a 5x endowment, diminishing returns would mean that returns to charitable dollars would be substantially higher in the worlds where they lost than in the ones where they won. If they put 1% of their endowment into such a lottery this effect would be almost imperceptibly small but nonzero. Similar issues arise for the guarantor.
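A toy model of why the stake size matters, assuming (purely for illustration) square-root diminishing returns to charitable spending:

```python
import math

def value(dollars: float) -> float:
    # Concave "good done" function: diminishing returns to spending.
    return math.sqrt(dollars)

endowment = 1e9

def ev_with_lottery(stake: float) -> float:
    # Stake enters a 1/5 chance of 5x; otherwise the stake is lost.
    keep = endowment - stake
    return 0.2 * value(keep + 5 * stake) + 0.8 * value(keep)

print(value(endowment))              # baseline: donate everything directly
print(ev_with_lottery(endowment))    # all-in: clearly below the baseline
print(ev_with_lottery(0.01 * endowment))  # 1% stake: almost exactly baseline
```

With the whole endowment at stake, the concavity makes the gamble costly in expectation; at a 1% stake the loss is real but tiny, matching the "almost imperceptibly small but nonzero" point above.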

With pots that are small relative to the overall field or the guarantor's budget (or the field of donors the guarantor considers good substitutes) these costs are tiny but for very big pots they become less negligible.

Also, how does one lottery up further if all the block sizes are $100k?

Take your 100k and ask Paul (or CEA, to get in touch with another backstopping donor) for a personalized lottery. If very large it might involve some haircut for Paul. A donor with more resources could backstop a larger amount without haircut. If there is recurrent demand for this (probably after donor lotteries become more popular) then standardized arrangements for that would likely be set up (I would try to do so, at least).

Comment by carl_shulman on Announcing the 2017 donor lottery · 2017-12-17T18:44:59.992Z · score: 2 (2 votes) · EA · GW

Right, non-EAs entering the lottery get to improve their expected donation quality but don't change the expected payouts for anyone else (and we generally don't have reason to worry about correlating donation sizes via the lottery with them, unless you would otherwise want to switch your donation depending on slight changes in the amount of non-EA donations in whatever area).

Comment by carl_shulman on Announcing the 2017 donor lottery · 2017-12-17T18:26:08.580Z · score: 3 (3 votes) · EA · GW

The point of a donor lottery is to help donors move to an efficient scale to research their donations or cut transaction costs. But there are important diminishing returns to donations if those donations are large relative to total funding for a cause or organization. So it is possible to have a pot that is inefficiently large, such that small donors pooling into it risk pushing past the low-hanging fruit. If the odds and payouts were determined by the unknown level of participation, then a surge of interest could result in an inefficiently large pot (worse, one that is set only after people have entered).

$100,000 is small enough relative to total EA giving, and most particular causes in EA, not to worry much about that, but large enough to support increased research while reducing the expected costs thereof. If a lottery winner, after some further consideration, wants to try to lottery up to a still larger scale they can request that. However, overly large pots cannot be retroactively shrunk after winning them.

We just want to be really careful about unintended incentives?

One of the most common mistakes people have on hearing about donor lotteries is worrying about donors with different priorities. So making it crystal clear that you don't affect the likelihood of payouts for donors to other causes (and thus the benefits of additional research and reduced transaction costs for others) is important.

Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

2016-12-31T02:19:35.457Z · score: 28 (22 votes)

Donor lotteries: demonstration and FAQ

2016-12-07T13:07:26.306Z · score: 38 (38 votes)

The age distribution of GiveWell recommended charities

2015-12-26T18:35:44.511Z · score: 13 (15 votes)

A Long-run perspective on strategic cause selection and philanthropy

2013-11-05T23:08:35.000Z · score: 6 (6 votes)