Posts

New Top EA Cause: Politics 2020-04-01T07:53:27.737Z · score: 28 (22 votes)
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z · score: 41 (24 votes)
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z · score: 22 (14 votes)
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z · score: 18 (11 votes)
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z · score: 22 (14 votes)
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z · score: 62 (29 votes)
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z · score: 58 (30 votes)
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z · score: 16 (11 votes)
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z · score: 3 (1 votes)
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z · score: 45 (21 votes)
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z · score: 11 (9 votes)
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z · score: 39 (20 votes)
Is Suffering Convex? 2018-10-21T11:44:48.259Z · score: 13 (11 votes)

Comments

Comment by davidmanheim on Conditional Donations · 2020-05-11T10:01:36.852Z · score: 11 (4 votes) · EA · GW

Two short points.

First, there was a lot of work by Robin Hanson 15-20 years ago on conditional contracts and prediction-based contracts that might be relevant.

Second, a key issue with this sort of donation is that the organizations themselves are left with a lot of uncertainty until the contract resolves. If the contracts are really transparent, they might have some idea what is happening, but it seems likely that tons of such contracts would lead to really messy and highly uncertain future cash flows that would make planning much harder. I'm unsure if there's a clear way to fix this, but it's probably worth thinking about more. (The alternative is for people to just wait on making the donation, which is not at all transparent and makes precommitment and coordination around joint giving impossible, but obviously requires much less complexity.)

Comment by davidmanheim on Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? · 2020-04-30T12:25:59.440Z · score: 2 (2 votes) · EA · GW

Re: #2 - For vaccines, that seems unlikely, given that the companies with the highest probability of success are already pouring money into this. A clear benefit of the proposal is to reduce the risk that, if they fail (which is very plausible) or are less effective than at least some alternatives (which is even more likely), the competition will be months and months behind. And for other equipment, it seems even less likely.

Comment by davidmanheim on What are some tractable approaches for figuring out if COVID causes long term damage to those who recover? · 2020-04-28T12:29:39.005Z · score: 1 (1 votes) · EA · GW

The vast amounts of funding and research on COVID, from every source, that are currently starting or ongoing.

Examples:

https://www.covid19funding.org/

https://hub.jhu.edu/novel-coronavirus-information/research-preparedness/research-preparedness-covid-19-funding-opportunities/

Comment by davidmanheim on What are some tractable approaches for figuring out if COVID causes long term damage to those who recover? · 2020-04-27T03:56:05.892Z · score: 2 (2 votes) · EA · GW

This seems to be very low on neglectedness, and not particularly high on tractability either.

Comment by davidmanheim on Database of existential risk estimates · 2020-04-17T14:47:25.924Z · score: 3 (3 votes) · EA · GW

> It seems to me that it could be valuable to pool together new estimates from the "general EA public"

I think this is basically what Metaculus already does.

(But the post seems good / useful.)

Comment by davidmanheim on Why do we need philanthropy? Can we make it obsolete? · 2020-04-10T05:26:27.850Z · score: 1 (1 votes) · EA · GW

I think we should be willing to embrace a system that has a better mix of voluntary philanthropy, non-traditional-government programs for wealth transfer, and government decisionmaking. It's the second category I'm most excited about, which looks a lot like decentralized proposals. I'm concerned that most extant decentralized proposals, however, have little if any tether to reality. On the other hand, I'm unsure that larger governments would help, instead of hurt, in addressing these challenges.

Comment by davidmanheim on Why do we need philanthropy? Can we make it obsolete? · 2020-04-08T09:46:36.267Z · score: 6 (5 votes) · EA · GW

I claim that "fixing" coordination failures is a bad and/or incoherent idea.

Coordination isn't fully fixable because people have different goals, and scaling has inevitable and unavoidable costs. Making a single global government would create waste on a scale that current governments don't even approach.

As people get richer overall, the resources available for public benefit have grown, and this seems likely to continue. But centrally directing those resources fails: democracy doesn't scale well, and any move away from democracy comes with a corresponding ability to abuse power.

In fact, I think the best solution for this is to allow individuals to direct their money how they want, instead of having a centralized system - in a word, philanthropy.

Comment by davidmanheim on New Top EA Cause: Politics · 2020-04-06T11:38:19.235Z · score: 7 (4 votes) · EA · GW

I've actually done this, and talked to others about it. The critical path, in short, is a reliable vaccine, facilities for production, and replication for production.

But this has nothing to do with your announcing your candidacy for office - congratulations on deciding to run, and good luck with your campaign!

Comment by davidmanheim on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-03T08:48:16.190Z · score: 3 (2 votes) · EA · GW

Also, strongly agree on #3 - see my post from last year: https://forum.effectivealtruism.org/posts/yQWYLaCgG3L6H2Lya/challenges-in-scaling-ea-organizations

Comment by davidmanheim on US Non-Profit? Get Free* Money From the Gov on 3 Apr! · 2020-04-02T18:32:45.148Z · score: 3 (4 votes) · EA · GW

It's the only time I can remember where it seems unfortunate that EA as a movement is good at planning and ensuring that critical nonprofits have sufficient runway.

Comment by davidmanheim on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-02T17:48:51.295Z · score: 5 (3 votes) · EA · GW

Re: #2, I've argued for minimal institutions - relying on markets or existing institutions rather than building new ones, where possible.

For instance, instead of setting up a new organization to fund a certain type of prize, see if you can pay an insurance company to "insure" the risk of someone winning, as determined by some criteria, and then have them manage the financials. Or, as I'm looking at now for incentivizing the build-out of vaccine production, offer companies cheap financing instead of running a new program to choose and order vaccines to get them to produce those vaccines.

Comment by davidmanheim on New Top EA Causes for 2020? · 2020-04-01T07:58:38.731Z · score: 11 (5 votes) · EA · GW

Politics! (See linked post.)

Comment by davidmanheim on How would you advise someone decide between different biosecurity interventions? · 2020-03-30T17:20:06.685Z · score: 7 (3 votes) · EA · GW

1) There's an entire Global Health Security Agenda that has been shouting about what needs to be done for a decade, as have many other organizations - CHS, the US's Blue Ribbon Panel, Georgetown's GHSS, and I'm sure other places internationally. Ask them where to spend your money, or better yet, read their previous reports that already tell you what needs to be done.

2) For groups that are willing to think about biosecurity risks, or take advice from people who do, think about differential tech development when picking technologies to fund. There are lots of technologies that have a clear upside and almost no downside - biosurveillance, diagnostic technology, vaccine platforms, etc. Don't fund research into gain of function, and carefully weigh, and limit, which potentially dual-use technologies you fund.

3) For government decisionmakers - don't throw money into new bureaucracy. We have lots of existing bureaucracy, much of which should be reformed, but replacing it with a new structure and adding layers isn't going to help. And in the US, don't allow a post-9/11-style reorganization like the one that created the DHS.

Comment by davidmanheim on What promising projects aren't being done against the coronavirus? · 2020-03-26T12:27:30.084Z · score: 4 (4 votes) · EA · GW

People should be working on funding proposals for Bio-X risk mitigation policies, such as greater international coordination, better health monitoring systems, investment in non-disease-specific symptomatic surveillance, and similar. These are likely to be far easier to fund in 3-6 months, as a huge pool of money is allocated to work on preventing the next pandemic.

Comment by davidmanheim on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-22T10:02:23.514Z · score: 2 (4 votes) · EA · GW

I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure about whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)

Comment by davidmanheim on What COVID-19 questions should Open Philanthropy pay Good Judgment to work on? · 2020-03-22T08:33:07.563Z · score: 2 (2 votes) · EA · GW

Really happy to see this type of crowdsourcing!

Comment by davidmanheim on What promising projects aren't being done against the coronavirus? · 2020-03-22T08:32:19.712Z · score: 8 (4 votes) · EA · GW

Mitigation work seems very low on neglectedness.

I'd encourage more work on planning for post-COVID reactions and policy proposals, as well as thinking about how EAs can be in a good position to influence such decisions.

Comment by davidmanheim on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-21T19:18:40.889Z · score: 8 (4 votes) · EA · GW

Who are you?

David Manheim, PhD in Public Policy, working with FHI's bio team and on a few other projects

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

I'm currently focused on global catastrophic biological risks (though I'm not interested in talking about COVID response or planning) and systemic existential risks, especially technological fragility and systemic stability.

What are things you'd like to talk to other people about?

Definitions for AI forecasting - I'm working on a project on this, and hoping to hear more from people about where confusion or disagreement about definitions is making the discussion less helpful.

How can people get in touch with you?

Calendly, Twitter (DMs open!), or email: myfullnamenopunctuation@gmail.com. (Note: I'm in Israel, so expect a large time zone difference.)

Comment by davidmanheim on Are countries sharing ventilators to fight the coronavirus? · 2020-03-17T08:46:48.109Z · score: 8 (4 votes) · EA · GW

Yes - China is already sending ventilators to Italy.

https://abcnews.go.com/Business/wireStory/latest-austria-limits-movement-nationwide-amid-virus-69603237

"Optimal" sharing is a very complex and hard to agree on goal, but there is movement in this direction. More promising is that governments are gearing up to have companies produce the gear - the UK has asked Rolls-Royce to produce them: https://www.nytimes.com/aponline/2020/03/16/business/ap-financial-markets-the-latest.html

Comment by davidmanheim on On Collapse Risk (C-Risk) · 2020-03-13T07:50:18.391Z · score: 2 (2 votes) · EA · GW

There are a variety of definitions, but most of the GCR literature is in fact concerned with collapse risks. See Nick Bostrom's book on the topic, for example, or Open Philanthropy's definition: https://www.openphilanthropy.org/research/cause-reports/global-catastrophic-risks/global-catastrophic-risks

Comment by davidmanheim on Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics · 2020-03-06T08:22:02.969Z · score: 1 (1 votes) · EA · GW

There is a lot of discussion in the literature about setting up local testing centers, and a significant drawback is that even staffed with nurses and trained volunteers, there are real quality control and process issues. Given that, I can't imagine that home testing wouldn't have far larger problems. For example, if samples weren't gathered and handled exactly correctly, I'd expect the false negative rates could be incredibly high, and people who would otherwise self-isolate or get tested correctly would assume they could go out.

Comment by davidmanheim on Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics · 2020-03-05T11:03:01.051Z · score: 3 (2 votes) · EA · GW

I have uploaded it to preprints.org, linked above, pending the final layout and publication. (With an open source license in both cases.)

Comment by davidmanheim on Are there good EA projects for helping with COVID-19? · 2020-03-04T17:09:11.095Z · score: 10 (6 votes) · EA · GW

Based on a paper that was just accepted for publication, I outlined 4 areas where I think there is still critical work that can be done to prepare for wider-scale disruption, should it occur. They are:

1) Enable people to stay isolated effectively.
2) Triage and manage medical care remotely.
3) Manage critical services through disruptions.
4) Ensure transport systems remain functional.

For details about what each means, see the linked post.

Comment by davidmanheim on Challenges in Scaling EA Organizations · 2020-02-02T16:24:47.213Z · score: 3 (2 votes) · EA · GW

I'd strongly agree with Drucker, both here and generally. The issue I have is that EA culture already has strong values and norms, which don't necessarily need to be shaped in the same ways because they are already strong - though careful thought is certainly important. And a very important but unusual concern is that, without care, the founder effects, culture, norms, and values can easily erode as organizations, or the ecosystem as a whole, grow.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T14:11:25.108Z · score: 1 (1 votes) · EA · GW

SARS was very unusual, and serves as a partial counterexample. On the other hand, the "trend" being shown is actually almost entirely a function of the age groups of the people infected - it was far more fatal in the elderly. With that known now, we have a very reasonable understanding of what occurred - which is that because the elderly were infected more often in countries where SARS reached later, and the countries are being aggregated in this graph, the raw estimate behaved very strangely.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:20:06.412Z · score: 12 (10 votes) · EA · GW

And for preventing transmission, I know it seems obvious, but you need to actually wash your hands. Also, it seems weird, but studies indicate that brushing your teeth seems to help reduce infection rates.

And covering your mouth with a breathing mask may be helpful, as long as you're not, say, touching food with hands that haven't been washed recently and then eating it. Also, even absent coronavirus, in general, wash your hands before eating. Very few people are good about doing this, but it will help.

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:17:11.084Z · score: 22 (16 votes) · EA · GW

This is the boring take, but it's worth noting that conditional on this spreading widely, perhaps the most important things to do are mitigating health impacts on you, not preventing transmission. And that means staying healthy in general, perhaps especially regarding cardiovascular health - a good investment regardless of the disease, but worth re-highlighting.

I'm not a doctor, but I do work in public health. Based on my understanding of the issues involved, if you want to take actions now to minimize severity later if infected, my recommendations are:

  • Exercise (which will help with cardiovascular health)
  • Lose excess weight (which can exacerbate breathing issues)
  • Get enough sleep (which assists your immune system generally)
  • Eat healthy (again, general immune system benefits)

Comment by davidmanheim on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:08:25.618Z · score: 13 (4 votes) · EA · GW

No, the case fatality rate isn't actually 3%, that's the rate based on identified cases, and it's always higher than the true rate.

Comment by davidmanheim on Davidmanheim's Shortform · 2020-01-26T12:51:13.801Z · score: 6 (2 votes) · EA · GW

Update: The total raised by Israeli EAs from the survey is on the order of 100,000 NIS, or about $30,000 now, which I think could plausibly double or even triple once started, at least in the next couple years. Given that tax rebates in Israel are a flat 35%, the planned organization would save Israeli EAs 35,000 NIS / $10,000 annually now, and 2-3 times as much in the coming few years. If there are very low administrative costs, this is plausibly enough that it is worth having on a utilitarian calculus basis, as outlined in my above pre-commitment, IF there were someone suitable to run it that had enough time and / or was willing to work cheaply enough.

However, given the value of my time working on other things, it is not enough for me to think that I should run the organization.

To get this started, I do have ideas I'm happy to share about what needs to be done, and the growth potential makes a strong case for this to be investigated further. At the same time, I think it's worse for someone ineffective / not aligned to start this than to simply wait until the need is larger, so I am deferring on this.

Comment by davidmanheim on International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) · 2020-01-26T12:39:00.925Z · score: 2 (2 votes) · EA · GW

It's true that assuming single-peaked preferences is usually central to rational actor approaches, but there are a few different issues that should be separated. Arrow's theorem shows that, in many cases, no voting system can be Pareto-compatible, non-dictatorial, and independent of irrelevant alternatives all at once.

First, as you noted, these classes of preference don't imply that there are coherent ranked preferences in a group (unless we also have only a single continuous preference dimension). If I prefer rice to beans to corn for dinner, you prefer beans to corn to rice, and our friend prefers corn to rice to beans, it's not a continuous system, and there's no way that voting will help - any alternative has 2/3rds of voters opposed to it in some pairwise comparison. (Think this isn't ever a relevant issue? Remember Brexit?)
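To make that cycle concrete, here is a minimal, purely illustrative sketch that tallies the pairwise majority votes for the rice/beans/corn example above; the ballots are just the three preference orderings from the example, not anything from the original post.

```python
# A minimal, purely illustrative sketch of the rice/beans/corn cycle:
# three voters, three options, pairwise majority votes.
from itertools import combinations

# Each ballot ranks options from most to least preferred.
ballots = [
    ["rice", "beans", "corn"],   # me
    ["beans", "corn", "rice"],   # you
    ["corn", "rice", "beans"],   # our friend
]

def pairwise_winner(a: str, b: str) -> str:
    """Return whichever of a and b a majority of ballots ranks higher."""
    a_wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return a if a_wins > len(ballots) / 2 else b

for a, b in combinations(["rice", "beans", "corn"], 2):
    print(f"{a} vs {b}: majority prefers {pairwise_winner(a, b)}")

# Output:
#   rice vs beans: majority prefers rice
#   rice vs corn: majority prefers corn
#   beans vs corn: majority prefers beans
# Each option loses some pairwise vote 2-1, so no option beats every
# alternative - the Condorcet cycle underlying the point above.
```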

Second, even if the domain is continuous, if there is more than one dimension, it can still fail. For example, suppose we need to order lunch and dinner together: I want 75% beans and 25% rice for dinner, and 50% of each for lunch, and my preference is monotonic and continuous - i.e. the farther away from my preferred split we get, the less I like it. If I take a bunch of similar preferences about these meals and need to make a single large order, Arrow's theorem shows that there may be no voting system that allows people to agree on any particular combination for the two meals - there can be a majority opposed to any one order.

And third, it's sometimes simply incorrect as a description of people's preferences. As an example, a voter might reasonably prefer either high taxes and strong regulation with a strong social safety net, so that people can depend on the government, OR low taxes, little regulation, and no safety net, so that people need to build social organizations to mutually support one another - and say that anything in between is worse than either. These preferences are plausibly collapsible to a single dimension, but they still admit Arrow's problem because they are not single-peaked.

But in each case, it's not a problem for reality, it's a problem with our map. And if we're making decisions, we should want an accurate map - which is what the series of posts is hoping to help people build.

Comment by davidmanheim on An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) · 2020-01-22T08:37:05.963Z · score: 1 (1 votes) · EA · GW

> Do you mean Aristotle’s “Politics”?

Yes, I did. Whoops, fixed.

In general, yes, international relations is a complex adaptive system, and that could be relevant. But I'm just not sure how far the tools of complexity theory can get you in this domain. I would agree that complexity science approaches seem closely related to game-theoretic rational actor models, where slight changes can lead to wildly different results, and which are unstable in the chaos theory / complexity sense. I discuss that issue briefly in the next post, now online, but as far as I am aware, complexity theory is not a focus anywhere in international relations or political science. (If you have links that discuss it, I'd love to see them.)

Comment by davidmanheim on An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) · 2020-01-11T17:43:39.458Z · score: 1 (1 votes) · EA · GW

Thanks! (Now fixed)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2020-01-04T18:34:44.830Z · score: 4 (3 votes) · EA · GW

Good writeup, and cool tool. I may use it and/or point to it in the future.

I agree that when everything is already quantified, you can do this. The chapter in HtMA is also fantastic. But it's fairly rare that people have already quantified all of the relevant variables and properly explored what the available decisions are or what they would affect - and doing so can materially change the VoI, and is far more important to do anyway.

That said, no, basic VoI isn't hard. It's just that the use case is fairly narrow, and the conceptual approach is incredibly useful in the remaining cases, even those where actually quantifying everything or doing the math is incredibly complex or even infeasible.

Comment by davidmanheim on Policy and International Relations - What Are They? (Primer for EAs, Part 2) · 2020-01-03T07:39:13.751Z · score: 3 (2 votes) · EA · GW

I definitely see a wide variety of techniques used in applied public policy, as I said in the next paragraph. The work I did at RAND was very interdisciplinary, and drew on a wide variety of academic disciplines - but it was also decision support and applied policy analysis, not academic public policy.

And I was probably not generous enough about what types of methods are used in academic public policy - but my view is colored by the fact that the scope in many academic departments seems almost shockingly narrow compared to what I was used to, or even what seems reasonable. The academic side, meaning people I see going for tenure in public policy departments, seems to focus pretty narrowly on econometric methods for estimating impact of interventions. They also do ex-post cost benefit analyses, but those use econometric estimates of impact to estimate the benefits. And when academic ex-ante analysis is done, it's usually part of a study using econometric or RCT estimates to project the impact.

Comment by davidmanheim on On Collapse Risk (C-Risk) · 2020-01-02T12:50:10.410Z · score: 11 (6 votes) · EA · GW

Good to see more people thinking about this, but the vocabulary you say is needed already exists - look for things talking about "Global Catastrophic Risks" or "GCRs".

A few other notes:

It would help if you embedded the images. (You just need to copy the image address from imgur.)

" with a significant role played by their . " <- ?

" the ability for the future of our civilisation to deviate sufficiently from our set of values as to render this version of humanity meaningless from today’s perspective, similar to the ship of Theseus problem. " <- I don't think that's a useful comparison.


Comment by davidmanheim on What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? · 2019-12-31T19:38:41.691Z · score: 1 (1 votes) · EA · GW

I'm not sure exactly who was running things, but I assumed the work is related to / continued by FRI, given the overlap in people involved.

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:40:54.876Z · score: 1 (1 votes) · EA · GW

Seriously - start with the 5 pages I recommended, and that should give you enough information (VoI FTW!) to decide if you want to read Chapters 1 & 2 as well.

(But Chapters 3 and 4 really *are* irrelevant unless you happen to be designing a biosurveillance system or a terrorism threat early warning detection system that uses classified information.)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:38:05.771Z · score: 11 (5 votes) · EA · GW

This is an area I should probably write more about, but I have a harder time being pithy, and haven't tried to distill my thoughts enough. But since you asked....

As a first approximation, you want to first consider the plausible value of the decision. If it's choosing a career, for example, the difference between a good choice and a bad one is plausibly a couple million dollars. You almost certainly don't want to spend more than a small fraction of that gathering information, but you do want to spend up to, say, 5% on thinking about the decision. (Yes, I'd say spending a year or two exploring the options before picking a career is worthwhile, if you're really uncertain - but you shouldn't need to be. See below.)

Once you have some idea of what the options are, you should identify what about the different options is good or bad - or uncertain. This should form the basis of at least a pro/con list - which is often enough by itself. (See my simulation here.) If you see that one option is winning on that type of list, you should probably just pick it - unless there are uncertainties that would change your mind.

Next, list those key uncertainties. In the career example, these might include: Will I enjoy doing the work? How likely am I to be successful in the area? How likely is the field to continue to be viable in the coming decades? How easy or hard is it to transition into or out of?

Notice that some of the uncertainties matter significantly, and others don't. We have a tool that's useful for this, which is the theoretical maximum of VoI, called Value of Perfect Information. This is the difference in value between knowing the answer with certainty, and the current decision. (Note: not knowing the future with certainty, but rather knowing the correct answer to the uncertainty. For example, knowing that you would have a 70% chance of being successful and making tons of money in finance.) Now ask yourself: If I knew the answer, would it change my decision? If the answer is no, drop it from the list of key uncertainties. If a relatively small probability of success would still leave finance as your top option, because of career capital and the potentially huge payoff, maybe this doesn't matter. Alternatively, if even a 95% chance of success wouldn't matter because you don't know if you'd enjoy it, it still doesn't matter - so move on to other questions.
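To make the mechanics concrete, here is a minimal sketch of the calculation for a finance-vs-academia choice. All payoffs and probabilities are made up for illustration (none come from the comment above); the point is just the comparison between "decide now" and "decide after learning the answer."

```python
# A minimal sketch of Value of Perfect Information for one binary uncertainty.
# All payoffs and probabilities are made up purely for illustration.

p_success = 0.7  # current belief: chance you'd succeed in finance

# Hypothetical lifetime payoffs (in dollars) for each option and outcome.
finance = {"success": 3_000_000, "failure": 500_000}
academia = 1_200_000  # assumed to be roughly the same either way

# Decide now: pick the option with the higher expected value.
ev_finance = p_success * finance["success"] + (1 - p_success) * finance["failure"]
value_without_info = max(ev_finance, academia)

# Decide after (hypothetically) learning the answer with certainty:
# in each branch, pick whichever option is best in that branch.
value_if_success = max(finance["success"], academia)
value_if_failure = max(finance["failure"], academia)
value_with_info = p_success * value_if_success + (1 - p_success) * value_if_failure

vopi = value_with_info - value_without_info
print(f"Decide now: ${value_without_info:,.0f}")              # $2,250,000
print(f"With perfect information: ${value_with_info:,.0f}")   # $2,460,000
print(f"Value of perfect information: ${vopi:,.0f}")          # $210,000
```

The information is worth something here precisely because learning "failure" would flip the decision to academia; if no possible answer would change the choice, the VoPI is zero - the "drop it from the list" case described above.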

If knowing the answer would change your mind, you need to ask what information you could plausibly get about the question, and how expensive it is. For instance, you currently think there's a 50% chance you'd enjoy working in finance. Spending a summer interning would make you sure one way or the other - but the cost in time is very high. It might be worth it, but there are other possibilities. Spending 15 minutes talking to someone in the field won't make you certain, but will likely shift your estimate to something like 90% or 10% - and in the former case, you can still decide to apply for a summer internship, while in the latter case, you can drop the idea now.

You should continue with this process of finding key things that would change your mind until you either think that you're unlikely to change your mind further, or the cost of more certainty is high enough compared to the value of the decision that it's not obviously worth the investment of time and money. (If it's marginal or unclear, unless the decision is worth tens or hundreds of millions of dollars, significant further analysis is costly enough that you may not want to do it. If you're unsure which way to decide at that point, you should flip a coin about the eventual decision - and if you're uncertain enough to use a coin flip, then just do the riskier thing.)

Comment by davidmanheim on When To Find More Information: A Short Explanation · 2019-12-31T10:10:34.892Z · score: 1 (1 votes) · EA · GW

Yes, that was partially the conclusion of my dissertation - and see my response to the above comment.

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-30T13:53:11.513Z · score: 1 (0 votes) · EA · GW

From what I understand, geoengineering is mostly avoided because people claim (incorrectly, in my view) that it signals the country thinks there is no chance to fix the problem by limiting emissions. In addition, people worry that it has lots of complex impacts we don't understand. As we understand the impacts better, it becomes more viable - and more worrisome. And as it becomes clearer over the next 20-30 years that a lot of the impacts are severe, it becomes more likely to be tried.

Comment by davidmanheim on Learning to ask action-relevant questions · 2019-12-29T06:52:53.176Z · score: 3 (3 votes) · EA · GW

I've heard "action relevant" used more often - but both are used.

Comment by davidmanheim on Learning to ask action-relevant questions · 2019-12-29T06:52:11.213Z · score: 12 (5 votes) · EA · GW

Another potentially useful heuristic is to pick a research question where the answer is useful whether or not you find what you'd expect. For example, "Are house fires more frequent in households with one or more smokers?" is very decision relevant if the answer is "Far more likely," but not useful if the answer is "No" or "A very little bit." (But if a question is only relevant if you get an unlikely answer, it's even less useful. For example, "How scared are Londoners of house fires?" is plausibly very decision relevant if the answer turns out to be "Not at all, and they take no safety measures" - but that's very unlikely to be the answer.)

A better question might be "Which of the following behaviors or characteristics correlates with increased fire risk: presence of school-aged children, smoking, building age, or income?" Notice that this is more complex than the previous question, but if you're gathering information about smoking, the other factors are relatively easy to gather information about as well - and they make the project much more likely to find something useful.

(The decision-theoretic optimum is to pick questions whose decision-relevance, weighted by the likelihood of each possible answer, is highest. But even if a question is very valuable in expectation, from a career perspective you don't want to spend time on questions that have a good chance of being a waste of time, even if they have a small chance of being really useful - though this is a trade-off that requires reflection, because it leads people to take fewer risks, and from a social benefit perspective at least, most people take too few risks already.)
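For what it's worth, here is a tiny sketch of the expectation in that parenthetical, using invented probabilities and usefulness scores for the two example questions; nothing here comes from real data.

```python
# A toy sketch: a question's expected usefulness weights how useful each
# possible answer would be by how likely that answer is. Numbers are invented.

def expected_usefulness(answers):
    """answers: list of (probability of this answer, usefulness if obtained)."""
    return sum(p * usefulness for p, usefulness in answers)

# "Are house fires more frequent in households with smokers?"
fires_and_smoking = [(0.3, 10.0),  # "far more likely" - very decision relevant
                     (0.7, 1.0)]   # "no / only a little" - not very useful

# "How scared are Londoners of house fires?"
fear_of_fires = [(0.02, 10.0),     # "not at all, no precautions" - useful but unlikely
                 (0.98, 0.5)]      # the expected answer - not useful

print(expected_usefulness(fires_and_smoking))  # 3.7
print(expected_usefulness(fear_of_fires))      # 0.69
```

The career-risk caveat then amounts to also caring about the spread of usefulness across answers, not just its expectation.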

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:39:16.949Z · score: 9 (7 votes) · EA · GW

(Great idea. But I think this would work better if you had the top comment be just "Here for easy disagreement:" then had the sub comments be the ranges, so that the top comment could be upvoted for visibility.)

Edit: In case this isn't clear, the parent was changed. Much better!


Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:37:26.893Z · score: 2 (2 votes) · EA · GW

The other fairly plausible GCR that is discussed is biological. The Black Death likely killed 20% of the population (excluding the Americas, but not China or Africa, which were affected) in the Middle Ages. Many think that bioengineered pathogens or other threats could plausibly have similar effects now. Supervolcanoes and asteroids are also on the list of potential GCRs, but we have better ideas about their frequency / probability.

Of course, Toby's book will discuss all of this - and it's coming out soon!

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T21:31:41.275Z · score: 16 (4 votes) · EA · GW

I agree overall. The best case I've heard for Climate Change as an indirect GCR, which seems unlikely but not at all implausible, is not about direct food shortages, but rather the following scenario:

Assume states use geoengineering to provide cloud cover, reduce heat locally, or create rain. Once this is started, they will quickly depend on it as a way to mitigate climate change, and the population will near-universally demand that it continue. Given the complexity and global nature of weather, however, this is almost certain to create non-trivial effects on other countries. If this starts causing crop failures or deadly heat waves in the affected countries, they would feel justified in escalating this to war, regardless of who is involved - such conflicts could easily involve many parties. In such a case, in a war between nuclear powers, there is little reason to think they would be willing to stop at non-nuclear options.

Comment by davidmanheim on 8 things I believe about climate change · 2019-12-28T20:06:21.177Z · score: 11 (3 votes) · EA · GW

You'd need to think there was a very significant failure of markets to assume that food supplies wouldn't adapt quickly enough to minimize this impact. That's not impossible, but you don't need central management to get people to adapt - this isn't a sudden change that we need to prep for, it's a gradual shift. That's not to say there aren't smart things that could significantly help, but there are plenty of people thinking about this, so I don't see it as neglected or likely to be high-impact.

Comment by davidmanheim on Brief summary of key disagreements in AI Risk · 2019-12-26T20:10:20.659Z · score: 5 (4 votes) · EA · GW

"* Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?"

I don't think there is any disagreement that there are such things. I think that the key disagreement is whether there will be sufficient warning, and how easy it will be to solve / prevent.

Not to speak on their behalf, but my understanding of MIRI's view on this issue is that there are likely to be such issues, but they aren't as fundamentally hard as ASI alignment, and while there should be people working on the pre-ASI risks, we need to invest all the time we can in solving the really hard parts of the eventual risk from ASI.

Comment by davidmanheim on Which banks are most EA-friendly? · 2019-12-26T17:47:36.900Z · score: 16 (8 votes) · EA · GW

I suspect the choice of bank is rather unimpactful, even for those with a few million dollars in deposits. For most of us, it's really not worth the time trying to optimize - you're better off finding a site that reviews banks and compares fees, etc. But if you are concerned about the systemic risks and externalities imposed by banks, I would recommend finding a credit union rather than a bank - or at least a small commercial bank rather than a large national bank or an investment house. (But again, I suspect convenience and fees are more important factors.)

Edit: To clarify a bit, the marginal impact of giving money to charities is significant, while the marginal impact of giving your savings to a bank is fairly minor - it just gives them a slightly larger balance sheet to make loans, though most are not exactly short on cash nowadays. But if you want to think about systemic change for banks as a potentially important issue, picking where to put your money isn't as important as contacting your senators to tell them you want banks regulated more tightly.

Comment by davidmanheim on Carbon Offsets as an Non-Altruistic Expense · 2019-12-04T19:22:29.104Z · score: 1 (1 votes) · EA · GW

No, because given a socially optimal level of carbon, there's no net harm to offset - any carbon emissions are net socially neutral, or positive. (That doesn't imply there are no distributional concerns, but I'd buy the argument that purchasing DALYs generally is better in that case.)

I'm not a strict utilitarian, and so the issue I have with offsetting harm A with benefit B is that harms affect different individuals. There was no agreement by those harmed by A that they are OK with being harmed as long as those who benefit from B are happier. This is similar to the argument against buying reductions in meat consumption, or reducing harm to animals in other cost effective ways, to offset eating meat yourself - the animals being killed didn't agree, even if there is a net benefit to animals overall.

Comment by davidmanheim on Carbon Offsets as an Non-Altruistic Expense · 2019-12-04T05:20:42.101Z · score: 5 (3 votes) · EA · GW

Because society hasn't chosen to put in place a tax, I see the commitment as not just to self-tax, but rather to offset the harm being done. As I argued above, I don't think that internalizing externalities is an altruistic act. Conversely, I don't think that you can offset one class of harm to others with a generalized monetary penance, unless there is a social decision to tax to optimize the level of an activity. As an optimal taxation argument, spending the self-tax money on global poverty does internalize the externality, but it does not compensate for the specific harm.

I certainly agree that donations above the amount of harm done would be an altruistic act, and then the question is whether it's the most effective use of your altruism budget - and like you, I put that money elsewhere.