Posts

International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z · score: 22 (14 votes)
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z · score: 18 (11 votes)
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z · score: 22 (14 votes)
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z · score: 61 (28 votes)
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z · score: 58 (30 votes)
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z · score: 16 (11 votes)
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z · score: 3 (1 votes)
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z · score: 45 (21 votes)
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z · score: 11 (9 votes)
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z · score: 37 (18 votes)
Is Suffering Convex? 2018-10-21T11:44:48.259Z · score: 13 (11 votes)

Comments

Comment by davidmanheim on Challenges in Scaling EA Organizations · 2020-02-02T16:24:47.213Z · score: 3 (2 votes) · EA · GW

I'd strongly agree with Drucker, both here and generally. The issue I have is that EA culture already has strong values and norms, ones that don't necessarily need to be shaped in the same ways - though careful thought is certainly important. And an important but unusual concern is that without care, the founder effects, culture, norms, and values can easily erode as organizations, or the ecosystem as a whole, grow.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T14:11:25.108Z · score: 1 (1 votes) · EA · GW

SARS was very unusual, and serves as a partial counterexample. On the other hand, the "trend" being shown is actually almost entirely a function of the age groups of the people infected - SARS was far more fatal in the elderly. With that known, we have a very reasonable understanding of what occurred: because the elderly were infected more often in countries where SARS arrived later, and the countries are aggregated in this graph, the raw estimate behaved very strangely.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:20:06.412Z · score: 12 (10 votes) · EA · GW

And for preventing transmission, I know it seems obvious, but you need to actually wash your hands. Also, it seems weird, but studies indicate that brushing your teeth seems to help reduce infection rates.

And covering your mouth with a breathing mask may be helpful, as long as you're not, say, touching food with hands that haven't been washed recently and then eating. Also, even when there is no coronavirus around, wash your hands before eating. Very few people are good about doing this, but it will help.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:17:11.084Z · score: 20 (15 votes) · EA · GW

This is the boring take, but it's worth noting that conditional on this spreading widely, perhaps the most important thing to do is to mitigate the health impacts on you, not to prevent transmission. And that means staying healthy in general, perhaps especially regarding cardiovascular health - a good investment regardless of the disease, but worth re-highlighting.

I'm not a doctor, but I do work in public health. Based on my understanding of the issues involved, if you want to take actions now to minimize severity later if infected, my recommendations are:

  • Exercise (which will help with cardiovascular health)
  • Lose excess weight (which can exacerbate breathing issues)
  • Get enough sleep (which assists your immune system generally)
  • Eat healthy (again, general immune system benefits)

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:08:25.618Z · score: 13 (4 votes) · EA · GW

No, the case fatality rate isn't actually 3%; that's the rate based on identified cases, and it's always higher than the true rate.

Comment by davidmanheim on Davidmanheim's Shortform · 2020-01-26T12:51:13.801Z · score: 6 (2 votes) · EA · GW

Update: The total current giving reported by Israeli EAs in the survey is on the order of 100,000 NIS, or about $30,000, which I think could plausibly double or even triple once the organization is started, at least over the next couple of years. Given that tax rebates in Israel are a flat 35%, the planned organization would save Israeli EAs 35,000 NIS / $10,000 annually now, and 2-3 times as much in the coming few years. If administrative costs were very low, this is plausibly enough to be worth doing on a utilitarian calculus, as outlined in my pre-commitment above, IF there were someone suitable to run it who had enough time and / or was willing to work cheaply enough.

However, given the value of my time working on other things, it is not enough for me to think that I should run the organization.

To get this started, I do have ideas I'm happy to share about what needs to be done, and the growth potential makes a strong case for investigating this further. At the same time, I think it's worse for someone ineffective / not aligned to start this than to simply wait until the need is larger, so I am deferring on this.

Comment by davidmanheim on International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) · 2020-01-26T12:39:00.925Z · score: 1 (1 votes) · EA · GW

It's true that assuming single-peaked preferences is usually really central to rational actor approaches, but there are a few different issues that should be separated. Arrow's theorem is that, in many cases, no voting system can simultaneously be Pareto-compatible and non-dictatorial while respecting independence of irrelevant alternatives.

First, as you noted, these classes of preference don't imply that there are coherent ranked preferences in a group (unless we also have only a single continuous preference dimension). If I prefer rice to beans to corn for dinner and you prefer beans to corn to rice, while our friend prefers corn to rice to beans, it's not a continuous system, and there's no way that voting will help - for any alternative, two-thirds of voters prefer something else. (Think this is never a relevant issue? Remember Brexit?)
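
Here's a minimal sketch of that cycle in Python - the ballots are the three orderings above, and the tallying code is just for illustration:

```python
from itertools import combinations

# The three ballots from the example, most-preferred option first.
ballots = [
    ["rice", "beans", "corn"],
    ["beans", "corn", "rice"],
    ["corn", "rice", "beans"],
]

def prefers(ballot, a, b):
    # True if this voter ranks option a above option b.
    return ballot.index(a) < ballot.index(b)

# Tally every pairwise majority contest.
for a, b in combinations(["rice", "beans", "corn"], 2):
    votes_for_a = sum(prefers(ballot, a, b) for ballot in ballots)
    print(f"{a} vs {b}: {votes_for_a}-{len(ballots) - votes_for_a}")

# Output: rice beats beans 2-1, beans beat corn 2-1, corn beats rice 2-1.
# A cycle - whichever option is chosen, 2/3 of voters prefer another one.
```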

Second, even if the domain is continuous, voting can still fail if there is more than one dimension. For example, suppose we need to order lunch and dinner together: I want 75% beans and 25% rice for dinner, and 50% of each for lunch, and my preferences are monotonic and continuous - i.e. the farther we get from my preferred split, the less I like it. If I take a bunch of similar types of preferences about these meals and need to make a single large order, Arrow's theorem shows that there may be no voting system that allows people to agree on any particular combination for the two meals - there can be a majority opposed to any one order.

And third, single-peakedness is sometimes simply incorrect as a description of people's preferences. As an example, a voter might reasonably have preferences for either high taxes and strong regulation with a strong social safety net, so that people can depend on the government, OR low taxes, little regulation, and no safety net, so that people need to build social organizations to mutually support one another - and say that anything in between is a worse idea than either. These preferences are plausibly collapsible to a single dimension, but they still admit Arrow's problem because they are not single-peaked.

But in each case, it's not a problem for reality, it's a problem with our map. And if we're making decisions, we should want an accurate map - which is what the series of posts is hoping to help people build.

Comment by davidmanheim on An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) · 2020-01-22T08:37:05.963Z · score: 1 (1 votes) · EA · GW
Do you mean Aristotle’s “Politics”?

Yes, I did. Whoops, fixed.

In general, yes, international relations is a complex adaptive system, and that could be relevant. But I'm just not sure how far the tools of complexity theory can get you in this domain. I would agree that complexity science approaches seem closely related to game theoretic rational actor models, where slight changes can lead to wildly different results and outcomes are unstable in the chaos theory / complexity sense. I discuss that issue briefly in the next post, now online, but as far as I am aware, complexity theory is not a focus anywhere in international relations or political science. (If you have links that discuss it, I'd love to see them.)

Comment by davidmanheim on An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) · 2020-01-11T17:43:39.458Z · score: 1 (1 votes) · EA · GW

Thanks! (Now fixed)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2020-01-04T18:34:44.830Z · score: 4 (3 votes) · EA · GW

Good writeup, and cool tool. I may use it and/or point to it in the future.

I agree that when everything is already quantified, you can do this. The chapter in HtMA is also fantastic. But it's fairly rare that people have already quantified all of the relevant variables and properly explored what the available decisions are or what they would affect - skipping those steps can materially change the VoI, and they are far more important to do anyway.

That said, no, basic VoI isn't hard. It's just that the use case is fairly narrow, and the conceptual approach is incredibly useful in the remainder of cases, even those where actually quantifying everything or doing the math is incredibly complex or even infeasible.

Comment by davidmanheim on Policy and International Relations - What Are They? (Primer for EAs, Part 2) · 2020-01-03T07:39:13.751Z · score: 3 (2 votes) · EA · GW

I definitely see a wide variety of techniques used in applied public policy, as I said in the next paragraph. The work I did at RAND was very interdisciplinary, and drew on a wide variety of academic disciplines - but it was also decision support and applied policy analysis, not academic public policy.

And I was probably not generous enough about what types of methods are used in academic public policy - but my view is colored by the fact that the scope in many academic departments seems almost shockingly narrow compared to what I was used to, or even what seems reasonable. The academic side, meaning people I see going for tenure in public policy departments, seems to focus pretty narrowly on econometric methods for estimating impact of interventions. They also do ex-post cost benefit analyses, but those use econometric estimates of impact to estimate the benefits. And when academic ex-ante analysis is done, it's usually part of a study using econometric or RCT estimates to project the impact.

Comment by davidmanheim on On Collapse Risk (C-Risk) · 2020-01-02T12:50:10.410Z · score: 12 (5 votes) · EA · GW

Good to see more people thinking about this, but the vocabulary you say is needed already exists - look for things talking about "Global Catastrophic Risks" or "GCRs".

A few other notes:

It would help if you embedded the images. (You just need to copy the image address from imgur.)

" with a significant role played by their . " <- ?

" the ability for the future of our civilisation to deviate sufficiently from our set of values as to render this version of humanity meaningless from today’s perspective, similar to the ship of Theseus problem. " <- I don't think that's a useful comparison.


Comment by davidmanheim on What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? · 2019-12-31T19:38:41.691Z · score: 1 (1 votes) · EA · GW

I'm not sure exactly who was running things, but I assumed the work is related to / continued by FRI, given the overlap in people involved.

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:40:54.876Z · score: 1 (1 votes) · EA · GW

Seriously - start with the 5 pages I recommended, and that should give you enough information (VoI FTW!) to decide if you want to read Chapters 1 & 2 as well.

(But Chapters 3 and 4 really *are* irrelevant unless you happen to be designing a biosurveillance system or a terrorism threat early warning detection system that uses classified information.)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:38:05.771Z · score: 11 (5 votes) · EA · GW

This is an area I should probably write more about, but I have a harder time being pithy, and haven't tried to distill my thoughts enough. But since you asked....

As a first approximation, you want to first consider the plausible value of the decision. If it's choosing a career, for example, the difference between a good choice and a bad one is plausibly a couple million dollars. You almost certainly don't want to spend more than a small fraction of that gathering information, but you do want to spend up to, say, 5% on thinking about the decision. (Yes, I'd say spending a year or two exploring the options before picking a career is worthwhile, if you're really uncertain - but you shouldn't need to be. See below.)
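
As a back-of-the-envelope sketch of that heuristic - the $2 million is the figure above, while the yearly opportunity cost is a number assumed purely for illustration:

```python
# Back-of-the-envelope for the "spend up to ~5% deciding" heuristic.
decision_value = 2_000_000    # rough gap between a good and a bad career choice, $
information_budget = 0.05 * decision_value
year_of_time = 60_000         # illustrative opportunity cost of a year of your time, $

print(f"Budget: ${information_budget:,.0f}"
      f" = {information_budget / year_of_time:.1f} years of exploration")
# -> Budget: $100,000 = 1.7 years of exploration
```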

Once you have some idea of what the options are, you should work out what about the different options is good, bad, or uncertain. This should form the basis of at least a pro/con list - which is often enough by itself. (See my simulation here.) If you see that one option is winning on that type of list, you should probably just pick it - unless there are uncertainties that would change your mind.

Next, list those key uncertainties. In the career example, these might include: Will I enjoy doing the work? How likely am I to be successful in the area? How likely is the field to continue to be viable in the coming decades? How easy or hard is it to transition into or out of?

Notice that some of the uncertainties matter significantly, and others don't. We have a tool that's useful for this, which is the theoretical maximum of VoI, called Value of Perfect Information. This is the difference in expected value between deciding once you know the answer with certainty and deciding under your current uncertainty. (Note: not knowing the future with certainty, but rather knowing the correct answer to the uncertainty. For example, knowing that you would have a 70% chance of being successful and making tons of money in finance.) Now ask yourself: If I knew the answer, would it change my decision? If the answer is no, drop it from the list of key uncertainties. If a relatively small probability of success would still leave finance as your top option, because of career capital and the potentially huge payoff, maybe this doesn't matter. Alternatively, if even a 95% chance of success wouldn't matter because you don't know if you'd enjoy it, it still doesn't matter - so move on to other questions.
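
Here's a minimal worked version of that calculation for the finance example, treating the uncertainty as whether you'd succeed - every number is an illustrative assumption:

```python
# A toy Value of Perfect Information calculation for the finance example.
p_success = 0.7              # current belief: chance you'd succeed in finance
value_success = 2_000_000    # rough lifetime value if finance works out, $
value_failure = 300_000      # rough lifetime value if it doesn't, $
value_alternative = 900_000  # rough value of the safe alternative career, $

# Expected value of the best choice under current uncertainty.
ev_finance = p_success * value_success + (1 - p_success) * value_failure
best_now = max(ev_finance, value_alternative)

# With perfect information, you learn the answer first and then choose.
ev_perfect = (p_success * max(value_success, value_alternative)
              + (1 - p_success) * max(value_failure, value_alternative))

print(f"VPI = {ev_perfect - best_now:,.0f}")  # -> VPI = 180,000
```

If the safe alternative were worth less than the finance failure case, learning the answer would never change the choice and the VPI would be zero - exactly the "would it change my decision?" test.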

If the answer is that knowing the answer would change your mind, you need to ask what information you could plausibly get about the question, and how expensive it is. For instance, you currently think there's a 50% chance you'd enjoy working in finance. Spending a summer interning would make you sure one way or the other - but the cost in time is very high. It might be worth it, but there are other possibilities. Spending 15 minutes talking to someone in the field won't make you certain, but will likely change your mind to think the odds are 90% or 10% - and in the former case, you can still decide to apply for a summer internship, and in the latter case, you can drop the idea now.
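
And the same machinery prices imperfect information like that 15-minute conversation - a sketch, again with made-up numbers:

```python
# Toy value of imperfect information: a conversation that moves a 50%
# "would I enjoy finance?" belief to 90% or 10%. Numbers are illustrative.
value_enjoy, value_not, value_alt = 1_500_000, 400_000, 900_000
p_enjoy = 0.5

best_now = max(p_enjoy * value_enjoy + (1 - p_enjoy) * value_not, value_alt)

# Assume the conversation is equally likely to leave you at 90% or 10% (this
# keeps the average posterior equal to the 50% prior, as consistency requires).
ev_after = sum(0.5 * max(p * value_enjoy + (1 - p) * value_not, value_alt)
               for p in (0.9, 0.1))

print(f"Value of the conversation: {ev_after - best_now:,.0f}")  # -> 195,000
```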

You should continue with this process of finding key things that would change your mind until you either think that you're unlikely to change your mind further, or the cost of more certainty is high enough compared to the value of the decision that it's not obviously worth the investment of time and money. (If it's marginal or unclear, unless the decision is worth tens or hundreds of millions of dollars, significant further analysis is costly enough that you may not want to do it. If you're unsure which way to decide at that point, you should flip a coin about the eventual decision - and if you're uncertain enough to use a coin flip, then just do the riskier thing.)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:10:34.892Z · score: 1 (1 votes) · EA · GW

Yes, that was partially the conclusion of my dissertation - and see my response to the above comment.

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-30T13:53:11.513Z · score: 1 (0 votes) · EA · GW

From what I understand, geoengineering is mostly avoided because people claim (incorrectly, in my view) that it signals the country thinks there is no chance of fixing the problem by limiting emissions. In addition, people worry that it has lots of complex impacts we don't understand. As we understand the impacts better, it becomes more viable - and more worrisome. And as it becomes clearer over the next 20-30 years that a lot of the impacts are severe, it becomes more likely to be tried.

Comment by davidmanheim on Learning to ask action-relevant questions · 2019-12-29T06:52:53.176Z · score: 3 (3 votes) · EA · GW

I've heard "action relevant" used more often - but both are used.

Comment by davidmanheim on Learning to ask action-relevant questions · 2019-12-29T06:52:11.213Z · score: 12 (5 votes) · EA · GW

Another potentially useful heuristic is to pick a research question where the answer is useful whether or not you find what you'd expect. For example, “Are house fires more frequent in households with one or more smokers?” is very decision relevant if the answer is “Far more likely,” but not useful if the answer is “No” or “A very little bit.” (But if a question is only relevant if you get an unlikely answer, it's even less useful. For example, “How scared are Londoners of house fires?” is plausibly very decision relevant if the answer turns out to be “Not at all, and they take no safety measures” - but that's very unlikely to be the answer.)

A better question might be “Which of the following behaviors or characteristics correlates with increased fire risk: presence of school-aged children, smoking, building age, or income?” Notice that this is more complex than the previous question, but if you're gathering information about smoking, the other factors are relatively easy to collect information about as well - and including them makes the project much more likely to find something useful.

(The decision-theoretic optimum is to pick questions whose answers are decision-relevant in proportion to how likely each answer is - see the sketch below. But even if a question is very valuable in expectation, from a career perspective you don't want to spend time on one that has a good chance of being a waste of time and only a small chance of being really useful. This is a trade-off that requires reflection, because it leads people to take fewer risks, and from a social benefit perspective at least, most people take too few risks already.)
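
A toy version of that comparison for the two example questions - the probabilities and usefulness scores are invented for illustration:

```python
# Expected usefulness of a question = sum over possible answers of
# P(answer) x how much that answer would change decisions (scored 0-1).
questions = {
    "Are house fires more frequent with smokers?": [
        (0.6, 1.0),    # "far more likely": probable, and changes decisions
        (0.4, 0.2),    # "no / a little": still mildly informative
    ],
    "How scared are Londoners of house fires?": [
        (0.02, 1.0),   # "not at all, no precautions": useful but very unlikely
        (0.98, 0.05),  # the expected answer: changes almost nothing
    ],
}

for question, answers in questions.items():
    expected = sum(p * usefulness for p, usefulness in answers)
    print(f"{question}  ->  {expected:.2f}")
# -> 0.68 for the smoking question, ~0.07 for the fear question.
```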

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:39:16.949Z · score: 9 (7 votes) · EA · GW

(Great idea. But I think this would work better if you had the top comment be just "Here for easy disagreement:" then had the sub comments be the ranges, so that the top comment could be upvoted for visibility.)

Edit: In case this isn't clear, the parent was changed. Much better!


Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:37:26.893Z · score: 2 (2 votes) · EA · GW

The other fairly plausible GCR that is discussed is biological. The Black Death likely killed 20% of the population (excluding the Americas, but not China or Africa, which were affected) in the Middle Ages. Many think that bioengineered pathogens or other threats could plausibly have similar effects now. Supervolcanoes and asteroids are also on the list of potential GCRs, but we have better ideas about their frequency / probability.

Of course, Toby's book will discuss all of this - and it's coming out soon!

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:31:41.275Z · score: 16 (4 votes) · EA · GW

I agree overall. The best case I've heard for climate change as an indirect GCR, which seems unlikely but not at all implausible, is not about direct food shortages, but rather the following scenario:

Assume states use geoengineering to provide cloud cover, reduce heat locally, or create rain. Once this is started, they will quickly depend on it as a way to mitigate climate change, and their populations will near-universally demand that it continue. Given the complexity and global nature of weather, however, this is almost certain to create non-trivial effects on other countries. If this starts causing crop failures or deadly heat waves in other countries, those affected would feel justified in escalating to war - and such conflicts could easily involve many parties. In such a case, in a war between nuclear powers, there is little reason to think they would be willing to stop at non-nuclear options.

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T20:06:21.177Z · score: 11 (3 votes) · EA · GW

You'd need to think there was a very significant failure of markets to assume that food supplies wouldn't be adapted quickly enough to minimize this impact. That's not impossible, but you don't need central management to get people to adapt - this isn't a sudden change that we need to prep for, it's a gradual shift. That's not to say there aren't smart things that could significantly help, but there are plenty of people thinking about this, so I don't see it as neglected or likely to be high-impact.

Comment by davidmanheim on Brief summary of key disagreements in AI Risk · 2019-12-26T20:10:20.659Z · score: 4 (3 votes) · EA · GW

"* Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?"

I don't think there is any disagreement that there are such things. I think the key disagreements are whether there will be sufficient warning, and how easy it will be to solve / prevent.

Not to speak on their behalf, but my understanding of MIRI's view on this issue is that there are likely to be such issues, but they aren't as fundamentally hard as ASI alignment, and while there should be people working on the pre-ASI risks, we need to invest all the time we can in solving the really hard parts of the eventual risk from ASI.

Comment by davidmanheim on Which banks are most EA-friendly? · 2019-12-26T17:47:36.900Z · score: 16 (8 votes) · EA · GW

I suspect the choice of bank is rather unimpactful, even for those with a few million dollars in deposits. For most of us, it's really not worth the time trying to optimize - you're better off finding a site that reviews banks and compares fees, etc. But if you are concerned about the systemic risks and externalities imposed by banks, I would recommend finding a credit union rather than a bank - or at least a small commercial bank rather than a large national bank or an investment house. (But again, I suspect convenience and fees are a more important factor.)

Edit: To clarify a bit, the marginal impact of giving money to charities is significant, while the marginal impact of giving your savings to a bank is fairly minor - it just gives them a slightly larger balance sheet to make loans, though most are not exactly short on cash nowadays. But if you think systemic change for banks is potentially an important issue, picking where to put your money isn't as important as contacting your senators to tell them you want banks regulated more tightly.

Comment by davidmanheim on Carbon Offsets as an Non-Altruistic Expense · 2019-12-04T19:22:29.104Z · score: 1 (1 votes) · EA · GW

No, because given a socially optimal level of carbon, there's no net harm to offset - any carbon emissions are net socially neutral, or positive. (That doesn't imply there are no distributional concerns, but I'd buy the argument that purchasing DALYs generally is better in that case.)

I'm not a strict utilitarian, and so the issue I have with offsetting harm A with benefit B is that harms affect different individuals. There was no agreement by those harmed by A that they are OK with being harmed as long as those who benefit from B are happier. This is similar to the argument against buying reductions in meat consumption, or reducing harm to animals in other cost effective ways, to offset eating meat yourself - the animals being killed didn't agree, even if there is a net benefit to animals overall.

Comment by davidmanheim on Carbon Offsets as an Non-Altruistic Expense · 2019-12-04T05:20:42.101Z · score: 5 (3 votes) · EA · GW

Because society hasn't chosen to put in place a tax, I see the commitment as not just to self-tax, but rather to offset the harm being done. As I argued above, I don't think that internalizing externalities is an altruistic act. Conversely, I don't think that you can offset one class of harm to others with a generalized monetary penance, unless there is a social decision to tax to optimize the level of an activity. As an optimal taxation argument, spending the self-tax money on global poverty does internalize the externality, but it does not compensate for the specific harm.

I certainly agree that donations above the amount of harm done would be an altruistic act, and then the question is whether it's the most effective use of your altruism budget - and like you, I put that money elsewhere.

Comment by davidmanheim on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-03T09:40:04.324Z · score: 7 (4 votes) · EA · GW

Related to #4, I have a paper under review, with a preprint HERE, discussing an aspect of fragility.

Title: Systemic Fragility as a Vulnerable World

Abstract: The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss a new hypothesis that complexity of a certain type can itself function as a source of risk. This “Fragile World Hypothesis” is compared to Bostrom's “Vulnerable World Hypothesis”, and the assumptions and potential mitigations are contrasted.

Comment by davidmanheim on Davidmanheim's Shortform · 2019-11-27T12:34:36.882Z · score: 8 (4 votes) · EA · GW

Precommitting based on Survey Outcomes for Proposed "Giving Effectively Israel."

(Epistemic Status: Public Statement for Future Reference)


We're planning to field a survey about current giving, and potentially funding an Israeli-tax-deductible organization. The first question is whether there is sufficient demand to make funding the program worthwhile.

Setting this up involves a fair amount of upfront costs for lawyers to ensure that this is entirely above board. It seems worthwhile to try to engage a relatively prestigious / respected firm, to ensure that this is done correctly. There is a risk that we find out that they don't think it's possible to do (subjective estimate: 25%), in which case we would stop the project, hopefully having spent less than the expected full cost.

My upfront claim is that this would be worthwhile to seek funding for if the cost of a lawyer and setting up the nonprofit is less than 25% of the expected tax-saving to EAs over the next 3 years, as inferred from the survey.

Comment by davidmanheim on Reality is often underpowered · 2019-11-25T09:23:14.459Z · score: 12 (5 votes) · EA · GW

Very much agree with the key points, which are related to what I wrote here.

My unsatisfying conclusion was that there are three approaches when facing an "appropriately underpowered" question:

  1. Don’t try to answer these questions empirically, use other approaches.
    If data cannot resolve the problem to the customary “standard” of p<0.05, then use qualitative approaches or theory driven methods instead.
  2. Estimate the effect and show that it is statistically non-significant.
    This will presumably be interpreted as the effect being practically small or insignificant, despite the fact that that isn’t how p-values work.
  3. Do a Bayesian analysis with comparisons of different prior beliefs to show how the posterior changes.
    This will not alter the fact that there is too little data to convincingly show an answer, and it is difficult to explain. Properly uncertain prior beliefs will show that the answer is still uncertain after accounting for the new data, but will perhaps shift the estimated posterior slightly to the right, and narrow the distribution. (A toy version of this comparison follows the list.)
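
As that toy version of option 3 - made-up data, three priors, and a posterior that visibly depends on the prior:

```python
# The same small (underpowered) dataset under three different priors.
# The data - 7 successes in 10 trials - are invented for illustration.
successes, trials = 7, 10

priors = {
    "skeptical  Beta(1, 9)": (1, 9),
    "uniform    Beta(1, 1)": (1, 1),
    "optimistic Beta(9, 1)": (9, 1),
}

for name, (a, b) in priors.items():
    # Beta-binomial conjugate update: add successes and failures to the prior.
    post_a, post_b = a + successes, b + (trials - successes)
    print(f"{name} -> posterior mean {post_a / (post_a + post_b):.2f}")

# Posterior means run from 0.40 to 0.80: too little data to wash out the
# prior, so the honest summary is that the answer remains uncertain.
```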

I also strongly agree with Will's comment that this doesn't (always) imply that we shouldn't do such work, it just means that we're doing qualitative work, which as he suggests, can be valuable in different ways.

Comment by davidmanheim on List of ways in which cost-effectiveness estimates can be misleading · 2019-11-25T08:51:36.526Z · score: 3 (2 votes) · EA · GW

Good points. (Also, I believe I am personally required to upvote posts that reference Goodhart's law.)

But I think both regression to the mean and Goodhart's law are covered, if perhaps too briefly, under the heading "Estimates based on past data might not be indicative of the cost-effectiveness in the future."

Comment by davidmanheim on List of ways in which cost-effectiveness estimates can be misleading · 2019-11-25T08:48:08.669Z · score: 12 (6 votes) · EA · GW

Undervaluing Diversification: Optimizing for highest Benefit-Cost ratios will systematically undervalue diversification, especially when the analyses are performed individually, instead of as part of a portfolio-building process.

Example 1: Investing in 100 projects to distribute bed-nets correlates the variance of outcomes in ways that might be sub-optimal, even if they are the single best project type. The consequent fragility of the optimized system has various issues, such as increased difficulty embracing new intervention types, or the possibility that the single "best" intervention is actually found to be sub-optimal (or harmful), destroying the reputation of those who optimized for it exclusively, etc.
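
A small simulation of that correlation point - the numbers are arbitrary, but it shows why 100 copies of the same bet barely reduce variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_projects, n_sims, rho = 100, 10_000, 0.9

# Each project's outcome = shared shock + idiosyncratic noise, so every pair
# of projects has correlation rho and each outcome has unit variance.
shared = rng.normal(size=(n_sims, 1))
noise = rng.normal(size=(n_sims, n_projects))
correlated = np.sqrt(rho) * shared + np.sqrt(1 - rho) * noise
independent = rng.normal(size=(n_sims, n_projects))

print("std of portfolio mean, correlated: ", correlated.mean(axis=1).std())
print("std of portfolio mean, independent:", independent.mean(axis=1).std())
# ~0.95 vs ~0.10: with rho = 0.9, a hundred projects are barely less risky
# than one, while a hundred independent projects cut the spread tenfold.
```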

Example 2: The least expensive way to mitigate many problems is to concentrate risks or harms. For example, on cost-benefit grounds, the best site for a factory is an industrial area, not a residential one. This means that the risks of fires, cross-contamination, and knock-on effects of any accidents increase because they are concentrated in small areas. Spreading out the factories somewhat would reduce this risk, but the risk externality is a function of the collective decision to pick the lowest cost areas, not any one cost-benefit analysis.

Additional concern: Optimizing for low social costs as measured by economic methods will involve pushing costs on the poorest people, because they typically have the lowest value-to-avoid-harm.

Comment by davidmanheim on List of ways in which cost-effectiveness estimates can be misleading · 2019-11-25T08:40:42.747Z · score: 5 (2 votes) · EA · GW

Re: Bias towards measurable results

A closely related issue is justification bias, where the expectation that a cost-benefit analysis be justified leads to the exclusion of disputed values. One example is the US Army Corps of Engineers, which produces cost-benefit analyses that are then given to Congress for funding. Because some values (ecological diversity, human enjoyment, etc.) are both hard to quantify and the subject of debate between political groups, including them leaves the analysis open to far more debate. The pressure to exclude them leads to their implicit minimization.

Comment by davidmanheim on Which Community Building Projects Get Funded? · 2019-11-18T11:36:27.264Z · score: 5 (2 votes) · EA · GW

For those of us (like myself) who, for family reasons or otherwise, are unable to move to a hub or location with a comparative advantage for any type of EA work, there are local chapters in many places. (Even if mine is a 30 minute drive or 45 minute bus ride away.) Those can sometimes benefit from more support, and where it is most needed and there is capacity to do so, I understand that support is already happening, or at least starting to. But that doesn't mean it makes sense to fund EA orgs everywhere, because coordination costs and duplication are real issues. Communities that want EA infrastructure can build their own, and often have done so. On the other hand, if they are so small that they don't have locals who want to build a community and can support doing so, I don't think funding them from EA grants makes sense anyways.

Given that, I certainly agree that there are orgs that would benefit from being located in the "EA Diaspora," specifically in the places you listed. But in many cases, they DO have such organizations already, and a large EA community. Not coincidentally, they are also very well connected with the EA hubs, so I'd guess many grants to those places would have been excluded in the analysis. There is no lack of EA policy-focused orgs or EA community infrastructure in DC and the surrounding area, given the number of EA-aligned orgs working there - notably, Georgetown's CSET and Johns Hopkins' CHS. Similarly, the NYC EA chapter is among the largest, and not only is it a vibrant community, it is also where GiveDirectly is located. I'm less familiar with China, which is a very different discussion, but I don't see anything stopping people interested in those types of work from moving to those places instead of SF / Oxford to be involved in EA orgs. Otherwise, starting EA orgs that replicate work being done in the hubs seems like a low priority, ineffective activity.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-18T11:13:40.823Z · score: 1 (1 votes) · EA · GW

I think my example of corruption reduction captures most of the types of interventions that people have suggested are useful but hard to quantify, but other examples would be happiness-focused work, or pushing for systemic change of various sorts.

Tech risks involving GCRs that are a decade or more away are much more future-focused in the sense that different arguments apply, as I said in the original post.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-17T08:06:46.238Z · score: 2 (2 votes) · EA · GW

Agreed - but as the link I included argues, the information we have is swamped by our priors, and isn't particularly useful for drawing objective conclusions.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-17T08:05:38.329Z · score: 1 (1 votes) · EA · GW

Yes - but if it is expected to be very high value, I'd think that they'd be pushing for a new EA charity with it as a focus, as they have done in the past. Most were dropped because the work they did wasn't as valuable as the top charities.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-17T08:00:46.727Z · score: 1 (1 votes) · EA · GW

I think we can drop the Bletchley Park discussion. On the present-day stuff, I think the key point is that future-focused interventions raise a very different set of questions than present-day non-quantifiable interventions, and you're plausibly correct that the former are underfunded - but I was trying to focus on the present-day non-quantifiable interventions.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-15T09:28:35.307Z · score: 1 (1 votes) · EA · GW

Bletchley Park as an intervention wasn't mostly focused on Enigma, at least in the first part of the war. It was tremendously effective anyways, as should have been expected. The fact that new and harder codes were being broken was obviously useful as well, and from what I understood, was encouraged by the leadership alongside the day-to-day codebreaking work.

And re: AI alignment, it WAS being funded. Regarding nanotech risks and geoengineering safety now, it's been a focus of discussion at CSER and FHI, at least - and there is agreement about the relatively low priority of each compared to other work. (But if someone qualified and aligned with EA goals wanted to work on it more, there's certainly funding available.)

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-15T09:23:07.949Z · score: 1 (1 votes) · EA · GW

Agreed on all points!

I'd note that the problem with predicting magnitudes is simply that it's harder to do than predicting a binary "will it replicate," though both are obviously valuable.


Comment by davidmanheim on Which Community Building Projects Get Funded? · 2019-11-15T09:21:26.589Z · score: 1 (1 votes) · EA · GW

I agree that this seems like a useful analysis - any chance you have time to read through the grants and write it up?

Comment by davidmanheim on Which Community Building Projects Get Funded? · 2019-11-15T09:19:44.408Z · score: 5 (4 votes) · EA · GW

I haven't looked at the specific grants, but my understanding was that EA orgs with specific purposes would not usually fund many of the activities that EA grants are used for, since the purpose of the grants is to do something new or different than extant organizations. (Also, organizations usually have organizational and logistical constraints that make expanding to new areas of work inadvisable - look at how badly most mergers go in the corporate world, for instance.)

But I agree there are some chicken-and-egg issues. I'm less sure, however, whether geographic diversity is as useful as it normally would be given the advantages of concentrating people in places with significant extant EA infrastructure and networks that enable collaboration.

Comment by davidmanheim on Which Community Building Projects Get Funded? · 2019-11-14T12:56:49.397Z · score: 21 (11 votes) · EA · GW

It seems that there is a critical endogenous factor for location: the people really interested in running EA projects, and who are capable of running them best, gravitate to EA hubs, and have moved there. Many of the most dedicated and capable EAs moved to these hubs and work at these organizations, while the less dedicated / capable ones did not try to do so, or weren't hired. It's clear that many of the groups are pulling in EAs from other parts of the world, so the concentration in fact reflects this movement. This doesn't explain the entire bias, and I agree that networks matter for funding and that this can be very problematic, but it's a critical factor.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-14T05:55:32.826Z · score: 1 (1 votes) · EA · GW

As I said in the epistemic status, I'm far less certain than I once was, and on the whole I'm now skeptical. As I said in the post and earlier comments, I still think there are places where unquantifiable interventions are very valuable; I just think that unless it's obvious that they will be (see: Diamond Law of Evaluation), quantifiably effective interventions are in expectation better.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-13T16:50:53.554Z · score: 4 (2 votes) · EA · GW

See my comment above. Bletchley Park was exactly the sort of intervention that doesn't need any pushing. It was funded immediately because of how obvious the benefit was. That's not retrospective.

If you were to suggest something similar now that were politically feasible and similarly important to a country, I'd be shocked if it wasn't already happening. Invest in AI and advanced technologies? Check. Invest in Global Health Security? Also check. So the things left to do are less obviously good ideas.

Comment by davidmanheim on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-13T16:47:57.289Z · score: 3 (1 votes) · EA · GW

You say that the distribution needs to be "very" fat tailed - implying that we have a decent chance of finding interventions orders of magnitude more effective than bed-nets. I disagree. The very most effective possible interventions, where the cost-benefit ratio is insanely large, are things that we don't need to run as interventions. For instance, telling people to eat when they have food so they don't starve would be really impactful if it weren't unnecessary because of how obviously beneficial it is.

So I don't think bednets are a massive outlier - they just have a relatively low saturation compared to most comparably effective interventions. The implication of my model is that most really effective interventions are saturated, often very quickly. Even expensive systemic efforts like vaccinations for smallpox got funded fairly rapidly after such universal eradication was possible, and the less used vaccines are either less effective, for less critical diseases, or are more expensive and/or harder to distribute. (And governments and foundations are running those campaigns, successfully, without needing EA pushing or funding.) And that's why we see few very effective outliers - and since the underlying distribution isn't fat tailed, even more effective interventions are even rarer, and those that did exist are gone very quickly.


On prediction, I agree that the conclusion is one of epistemic modesty rather than confident claims of non-effectiveness. But the practical implication of that modesty is that for any specific intervention, if we fund it thinking it may be really impactful, we're incredibly unlikely to be correct.

Also, I'm far more skeptical than you about 'sophisticated' estimates. Having taken graduate courses in econometrics, I'll say that the methods are sometimes really useful, but the assumptions never apply, and unless the system model is really fantastic, the prediction error once accounting for model specification uncertainty is large enough that most such econometric analyses of these sorts of really complex, poorly understood systems like corruption or poverty simply don't say anything.

Comment by davidmanheim on Deliberation May Improve Decision-Making · 2019-11-08T11:04:59.624Z · score: 3 (2 votes) · EA · GW

I don't have time to discuss this in the level of detail that is warranted, but you might look into the history of ACUS and how it was funded, successful at doing deliberative decision making, defunded, restarted, etc. https://en.wikipedia.org/wiki/Administrative_Conference_of_the_United_States

Now they are doing work like this - https://www.acus.gov/working-groups

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-30T14:49:12.114Z · score: 2 (2 votes) · EA · GW
In proportion to the needs...

Again, I don't think that's relevant. I can easily ruin systems with a poorly spent $10m regardless of how hard it is to fix them.

I am not sure I understand why international funding should displace local expertise...

You're saying that these failure modes are avoidable, but I'm not sure they are in fact being avoided.

The building of those health institutions takes a long time, the results come slowly with a time lag of 10+ years.

Yes, and slow feedback is a great recipe for not noticing how badly you're messing things up. And yes, classic GiveWell type analysis doesn't work well to consider complex policy systems, which is exactly why they are currently aggressively hiring people with different types of relevant expertise to consider those types of issues.

And speaking of this, here's an interesting paper Rob Wiblin just shared on complexity and difficulty of decisionmaking in these domains: https://philiptrammell.com/static/simplifying_cluelessness.pdf

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-30T04:59:51.577Z · score: 2 (2 votes) · EA · GW

Yes, there are plausible tipping points, but I'm not talking about that. I'm arguing that this isn't "small amounts of money," and it is well into the range where international funding displaces building local expertise, makes it harder to focus on building health systems generally instead of focusing narrowly, undermines the need for local governments to take responsibility, etc.

I still think these are outweighed by the good, but the impacts are not trivial.

Comment by davidmanheim on Aid Scepticism and Effective Altruism · 2019-07-17T12:42:44.986Z · score: 1 (1 votes) · EA · GW

I don't understand why your argument responds to mine. They don't need to be big enough to directly solve problems to be large enough to have critical systemic side effects.