Comments

Comment by carlshulman on An argument for more effectual giving · 2014-03-10T05:11:00.000Z · score: 0 (0 votes) · EA · GW

"According to independent sources, it can save a life for a little under $2,400 on average, a price that likely beats that of any other organization"

GiveWell doesn't claim that. First, AMF is not currently one of their recommended charities, and won't be until it is able to commit more of its stockpiled cash reserves. Second, GiveWell has not yet evaluated vast swathes of the charity world, and is actively investing in finding better opportunities; one of GiveWell's founders has said he personally expects a pick twice as good to be found within a few years. Third, so long as there is significant noise and uncertainty in evaluating and identifying charities, top-recommended does not translate into 'actual best' with high confidence, since rankings reflect a mix of underlying quality and noise. Fourth, as of March 9, 2014, GiveWell's page on AMF says this:

"Using $6.13 as the total cost per net, we estimate the cost per child life saved through an AMF LLIN distribution at about $3,400"

Comment by carlshulman on Where I'm giving and why: Will MacAskill · 2014-01-01T23:06:00.000Z · score: 0 (0 votes) · EA · GW

CSER is at startup stage, with a lot of valuable resources going underutilized, so it looks more leveraged.

Comment by carlshulman on Where I'm giving and why: Will MacAskill · 2013-12-31T02:19:00.000Z · score: 1 (1 votes) · EA · GW

"setting up an Effective Altruism Fund"

The cheap and easy first step along these lines would be for CEA to make a page on its website where people saving to donate later, or putting money in Donor Advised Funds, could register the amounts saved/invested and their intentions for the funds. This could be done very cheaply (you could even just use a Google Form) and would allow evidence of interest to accumulate.

You don't need $20,000 to do it, just a bit of staff time, and there are definitely people saving for later or using DAFs (Peter Hurford just posted about his plans along those lines, earlier in this blog series).

"Even after trying to correct for obvious personal bias, I think that CEA wins out for my comparatively small donation; if I had info that the relevant position at CSER wouldn’t be funded anyway, and if I had more to give (e.g. ~$70k) then I think that CSER would be better...This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager. For those with a larger amount to give, the situation is different."

Why not make a long-odds bet with a wealthy counterparty, or use high-risk derivatives, to get a chance at making the large donation? In principle, economies of scale like this should always be subject to circumvention at modest cost.
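To make the arithmetic behind this concrete, here is a minimal sketch (assuming fair odds and ignoring fees, taxes, and counterparty risk; the figures simply echo the ~$70k cost and 1/50th share from the quote above):

```python
# Illustrative sketch: turning a small donation into a small chance of a
# large, "lumpy" one via a fair long-odds bet. Numbers are hypothetical,
# chosen to match the 1/50th share mentioned in the quoted text.

program_manager_cost = 70_000                 # ~$70k figure from the quote
small_donation = program_manager_cost / 50    # 1/50th share: $1,400
win_probability = 1 / 50                      # fair odds for a 50x payout

payout = small_donation / win_probability     # $70,000 if the bet wins
expected_donation = win_probability * payout  # still $1,400 in expectation

print(payout, expected_donation)
```

Under these assumptions the expected donation is unchanged; the bet just concentrates it into a 1-in-50 chance of funding the whole lumpy expenditure.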

Also see: http://www.indiegogo.com/

"that Lomborg’s (inaccurate) reputation as a climate skeptic might taint the idea of global prioritisation.)"

He does accept the scientific consensus and relies on IPCC figures, but he does seem to spend a really disproportionate portion of his writing and speaking on the idea of trading off climate mitigation costs against more effective interventions. It is far less common to see him pitting highly effective global public health interventions against farm subsidies, military spending, social security, rich country health care, tax cuts, or other non-climate competing expenditures.

Comment by carlshulman on What's the best domestic charity? · 2013-12-10T22:17:00.000Z · score: 0 (0 votes) · EA · GW

I agree with Peter that the examples are a bit wild and distracting from the piece.

Mindfulness meditation a) comes across as a bit strange, perhaps causing confusion with religious groups; b) seems to come a bit out of the blue with regard to evidence.

Getting EA aligned with partisan politics (rather than issue-based politics aimed at affecting the policies parties adopt as they compete) means spending a lot of effort on lower-priority issues, and compromising the ability to reach out broadly.

For example, with respect to effective foreign aid, large portions of private aid donations come from people across the political spectrum. Government foreign aid is often seen as more an issue of the left, but George W. Bush was notable for massive expansions of public health aid, as in PEPFAR, which have saved millions of lives in Africa. If a number of the most important political issues have cross-cutting appeal, it could be quite costly to alienate potential allies over conflicts on lower-priority issues (along with the reduced efficiency of a general or unconditional push for a party, rather than, say, supporting high-impact policies and the factions within any party advancing those goals).

Comment by carlshulman on A Long-run perspective on strategic cause selection and philanthropy · 2013-12-07T02:15:00.000Z · score: 0 (0 votes) · EA · GW

Thanks for the clarifications, Cari; they definitely give a better picture of Good Ventures' take on these questions.

"Both Good Ventures and GiveWell care deeply about protecting the long-term trajectory of civilization, and our research reflects this."

It's great to hear that news about Good Ventures. One thing I would add: I didn't mean to imply that GiveWell places no significant weight on this, just that, based on our conversations with various GiveWell staff, that weight seems to be smaller (to a degree which varies depending on the staff member).

"We feel that we’re doing what we can in terms of paying for information directly."

Glad to hear it.

"I hope that helps to clarify, and I also hope you and Nick will keep following and giving input on the GiveWell Labs effort, since it's so closely aligned with your long-term thinking about strategic cause selection and philanthropy."

We certainly will. As it happens, I just scheduled another meeting with GiveWell Labs yesterday.

Comment by carlshulman on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-08T04:39:00.000Z · score: 3 (3 votes) · EA · GW

Hi Cari,

I would certainly agree they are quite similar, and think that Good Ventures is closer to how we would do things than almost all existing foundations, and tremendously good news for the world.

This makes sense. Nick has worked at GiveWell. I have been following and interacting with GiveWell since its founding. We share access to much of our respective knowledge bases, surrounding intellectual communities, and an interest in effective altruism, so we should expect a lot of overlap in approaches to solve similar problems.

Growing the EA community's capabilities and quality of decision-making, pursuing high value-of-information questions about the available philanthropic options, and similar efforts are robustly valuable.

It's harder for me to pin down differences with GV because of my uncertainty about Good Ventures' reasoning behind some of its choices. Posting conversations makes it easier to see what information GV has access to, but I feel I know a lot more about GW's internal thinking than GV's.

Relative to GiveWell, I think we may place more weight on protecting the long-term trajectory of civilization as against short-term benefits. And, speaking for myself at least, I am more skeptical that optimizing for short-term QALYs or similar measures will turn out to be very close to optimizing for long-term metrics. I'm not sure about GV's take on those questions.

At the tactical level, and again speaking for myself and not for Nick, based on my current state of knowledge I don't see how GV's ratio of learning-by-granting relative to granting to fund direct learning efforts is optimal for learning.

For example, GiveWell and Good Ventures now provide the vast majority of funding for AMF. I am not convinced that moving from $15 MM to $20 MM of AMF funding provides information close in value to what could be purchased if one spent $5 MM more directly on information-gathering. GiveWell's main argument on this point has been its inability, until recently, to hire using cash, but it seems to me that existing commercial and other services can be used to buy valuable knowledge.

I'll mention a few examples that come to mind. ScienceExchange, a marketplace that connects funders and scientific labs willing to take on projects for hire, is being used by the Center for Open Science to commission replications of scientific studies of interest. Polling firms can perform polls and surveys, of relevant experts or of the general public or donors, for hire in a standardized fashion. Consulting firms with skilled generalists or industry experts can be commissioned at market rates to acquire data and perform analysis in particular areas. Professional fundraising firms could have been commissioned to try street fundraising or direct mail and the like for AMF to learn whether those approaches are effective for GiveWell's top charities.

Also, in buying access to information from nonprofit organizations, it's not easy for me to understand the relationship between the extent of access/information and the size of the grant, e.g. why make a grant sufficient to fund multiple full-time staff-years in exchange for one staff-day of time? I can see various reasons why one might do this, such as wariness from nonprofits about sharing potentially embarrassing information, compensating for extensive investments required to produce the information in the first place, testing RFMF hypotheses, and building a reputation, but given what I know now I am not confident that the price is right if some of these grants really are primarily about gaining information. [However, the grants are still relatively small compared to your overall resources, so such overpayment is not a severe problem if it is a problem.]

Zooming out back to the big picture, I'll reiterate that we are very much on the same page and are great fans of GV's work.

Comment by carlshulman on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-06T11:55:00.000Z · score: 3 (3 votes) · EA · GW

There are many object-level lines of evidence to discuss, but this is not the place for great detail (I recommend Nick Bostrom's forthcoming book). One of the most information-dense is that surveys sent to the top 100 most-cited individuals in AI (identified using Microsoft's academic search tool) resulted in a median estimate comfortably within the century, including substantial probability for the next few decades. The results were presented at the Philosophy and Theory of AI conference earlier this year and are on their way to publication.

Expert opinion is not terribly reliable on such questions, and we should probably widen our confidence intervals (extensive research shows that naive individuals give overly narrow intervals), assigning more weight to AI arriving surprisingly soon or surprisingly late than the raw survey responses would suggest. We might also try to correct against a possible optimism bias (which would push towards shorter timelines and lower risk estimates).
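As a minimal sketch of what such widening could look like (the dates, spreads, and mixture weight here are purely illustrative assumptions, not figures from the survey):

```python
import numpy as np

# Minimal sketch of widening an expert-implied distribution by mixing it
# with a much broader one. The distributions, dates, and mixture weight
# below are purely illustrative assumptions, not survey results.
rng = np.random.default_rng(0)

expert = rng.normal(loc=2060, scale=15, size=100_000)  # hypothetical "raw survey" view
broad = rng.normal(loc=2060, scale=60, size=100_000)   # much wider alternative
widened = np.where(rng.random(100_000) < 0.7, expert, broad)

# The widened distribution puts more weight on both surprisingly early
# and surprisingly late arrival dates than the raw expert distribution.
print((expert < 2040).mean(), (widened < 2040).mean())
print((expert > 2100).mean(), (widened > 2100).mean())
```

The point of the mixture is just that both tails get heavier, which is what the correction for overly narrow intervals amounts to.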

The surveyed experts also assigned credences in very bad or existentially catastrophic outcomes that, if taken literally, would suggest that AI poses the largest existential risk (although some respondents may have interpreted the question to include comparatively lesser harms).

Extinction-level asteroid impacts, volcanic eruptions, and other natural catastrophes are relatively well-characterized and pose extremely low annual risk based on empirical evidence of past events. GiveWell's shallow analysis pages discuss several of these, and the edited volume "Global Catastrophic Risks" has more on these and others.

Climate scientists and the IPCC have characterized the risk of conditions threatening human extinction as very unlikely conditional on nuclear winter or severe continued carbon emissions, i.e. these are far more likely to cause large economic losses and death than to permanently disrupt human civilization.

Advancing biotechnology may make artificial diseases, intentionally engineered by large and well-resourced biowarfare programs to cause human extinction, an existential threat, although there is a very large gap between the difficulty of creating a catastrophic pathogen and that of creating a civilization-ending one.

An FHI survey of experts at an Oxford Global Catastrophic Risks conference asked participants to assign credences to the risk of various levels of harm from different sources in the 21st century, including over 1 billion deaths and extinction. Median estimates assigned greater credence to human extinction from AI than conventional threats including nuclear war or engineered pandemics, but greater credence to casualties of at least 1 billion from the conventional threats.

So the relative importance of AI is greater in terms of existential risk than global catastrophic risk, but it seems at least comparable in the latter area as well.