A (Very) Short History of the Collapse of Civilizations, and Why it Matters 2020-08-30T07:49:42.397Z · score: 42 (20 votes)
New Top EA Cause: Politics 2020-04-01T07:53:27.737Z · score: 31 (23 votes)
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z · score: 41 (24 votes)
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z · score: 23 (15 votes)
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z · score: 18 (11 votes)
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z · score: 22 (14 votes)
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z · score: 63 (30 votes)
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z · score: 60 (31 votes)
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z · score: 19 (14 votes)
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z · score: 3 (1 votes)
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z · score: 45 (21 votes)
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z · score: 11 (9 votes)
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z · score: 40 (21 votes)
Is Suffering Convex? 2018-10-21T11:44:48.259Z · score: 13 (11 votes)


Comment by davidmanheim on EA Israel Strategy 2020-21 · 2020-09-30T11:33:35.434Z · score: 9 (6 votes) · EA · GW

I won't address all of these, especially since I'm not deeply involved in all of them, but on #3, there has been some discussion, and they are doing some work on this. We are trying to start such groups, but it's different here than in most other countries. This is mostly because college is done after mandatory military service, starting at age 21 or older, and usually even later than that, so the students are more career-focused. That gives less time for activities like EA groups.

Comment by davidmanheim on Does any thorough discussion of moral parliaments exist? · 2020-09-17T09:53:27.322Z · score: 1 (1 votes) · EA · GW

We are hoping to kind-of address the issue in that post in a paper I'm working on with Anders Sandberg - I'll let you know when we're ready to share it, if you'd like.

Comment by davidmanheim on Why we’re excited to fund charities’ work a few years in the future · 2020-09-01T11:54:40.868Z · score: 4 (4 votes) · EA · GW

I think that overall this makes sense, but I'm surprised by the omission of counterfactual impact, a concern I would think would be significant. Specifically, is there any concern that (perhaps primarily non-EA) donors will see that the nonprofit is well-funded, when they would counterfactually have donated more to fill the funding gap?

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-09-01T11:42:10.264Z · score: 1 (1 votes) · EA · GW

I'd be comfortable with 1% - I'd take a bet at 100:1 that, conditional on land warfare in China or the US with a clear victor, the winner would still, at the most extreme, restore a modified modern national government controlled by citizens with heavy restrictions on what it was allowed to do, following the post-WWII model in Japan and Germany. (I'd take the bet, but in that case I wouldn't expect both parties to survive to collect on it, whichever way it ends.)

That's because the post-WWII international system is built with structures that almost entirely prevent wars of conquest, and while I don't see that system as being strong, I also don't think its weaknesses are ones that would lead to those norms breaking down.

But maybe, despite sticking to my earlier claim, the post-WWII replacement of Japan's emperor with a democracy is exactly the class of case we should be discussing as an example relevant to the more general question of whether civilizations are conquered rather than collapse. And the same logic would apply to Iraq, and other nations the US "helped" along the road to democracy, since they were at least occasionally - though by no means always - failing states. And Iraq was near collapse because of conflict with Iran and sanctions, not because of internal decay. (I'm less knowledgeable about the stability of Japanese culture pre-WWII.)

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-09-01T11:30:59.466Z · score: 1 (1 votes) · EA · GW

Yes, and I would include a significant discussion of this in a longer version of this post, or a paper. However, I think we mostly disagree about what people's priors or prior models were in choosing what to highlight. (I see no-one using historical records of invasions / conquered nations independent of when it contributed to a later collapse, as relevant to discussions of collapse.)

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T20:47:45.974Z · score: 1 (1 votes) · EA · GW

China could replace the US as a dominant power, but they wouldn't actually take over the US the way nations used to conquer and replace the culture of other countries.

And I agree that it's not obvious that interconnection on net increases fragility, but I think that it's clear, as I argued in the paper, that technology which creates the connection is fragile, and getting more so.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T20:44:54.530Z · score: 1 (1 votes) · EA · GW

Yes, that seems clearer and more accurate - but I think it's clear that the types of external societies that developed independently and were able to mount an attack, as occurred for Greece, Rome, and when Genghis Khan invaded Europe, no longer exist. That means that in my view the key source of external pressure to topple a teetering system does not exist now, rather than this being a matter of competition between peer nations. That seems a bit more like what I think of as inducing a bias, but your point is still well taken.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T13:48:40.734Z · score: 1 (1 votes) · EA · GW

This was imprecise - I meant that collapses were catastrophes for the civilizations involved, and current collapses would also be catastrophes, ones which I agree would be significantly worse if they impacted humanity's longer-term trajectory. And yes, some collapses may have been net benefits - though I think the collapse of early agricultural societies did set those societies back, and was a catastrophe for them - we just think that the direction of those societies was bad, so we're unperturbed that they collapsed. The same could be said of the once-impending collapse of the antebellum South in the US, where economic forces were going to destroy the basis of their economy, i.e. slavery. Greatly simplifying the political dynamics leading to the outbreak of the Civil War, they started a war to protect their culture rather than allow the North to supplant them. This seems like a clear civilizational catastrophe, albeit one with large moral benefits from ending slavery.

I think that unlike the antebellum South and early exploitative agricultural societies, the collapse of Rome was one that hurt civilization's medium-term trajectory, despite taking quite a long time. And I'm hoping the ongoing collapse of the post-WWII international order isn't a similar devolution.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T13:37:50.146Z · score: 1 (1 votes) · EA · GW

The first issue is that my question was whether civilizations collapse - in the sense that the system collapses to the point where large portions of the population die - infrequently or very infrequently. The argument is that conquered civilizations are "missing data": it seems very likely that an unstable or otherwise damaged society has both a higher chance of collapse, whether due to invasion or to other factors, and a higher chance of being supplanted before we ever see a collapse. So I noted that we have data missing in a way that introduces a bias.
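To illustrate the missing-data point with a toy simulation (my own construction, with made-up numbers, not data from the post): if the same fragility that raises a society's chance of collapse also raises its chance of being conquered first, then conquests censor the record, and the observed collapse rate understates the underlying failure rate.

```python
import random

random.seed(1)

# Toy model: each society has a latent fragility f, uniform on (0, 1).
# Fragile societies are more likely both to be conquered AND to collapse.
# Conquered societies never appear in the record as "collapses", so a
# naive rate estimated from recorded collapses understates true fragility.
n = 100_000
observed_collapses = conquests = survivors = 0
for _ in range(n):
    f = random.random()          # latent fragility
    r = random.random()
    if r < 0.5 * f:              # conquered/supplanted: missing from the data
        conquests += 1
    elif r < 0.8 * f:            # collapses, and is recorded as a collapse
        observed_collapses += 1
    else:
        survivors += 1

naive_rate = observed_collapses / n                       # what the record shows (~0.15)
true_failure_rate = (observed_collapses + conquests) / n  # collapse-or-conquest (~0.40)
```

Any mechanism in which weakness raises both the collapse probability and the conquest probability produces this same downward bias in observed collapse frequencies.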

The second issue is what a collapse would look like and involve. Because civilization is more tightly interconnected, many types of collapse would be universal, rather than local. (See both Bostrom's Vulnerable world paper and my Fragile world paper for examples of how technology could lead to that occurring.) Great power wars could trigger or accelerate such a collapse, but they wouldn't lead to decoupled sociotechnical systems, or any plausible scenarios that would allow a winner to replace the loser.

Does that make sense?

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-30T20:47:29.803Z · score: 4 (3 votes) · EA · GW

Agreed - and see Eliezer Yudkowsky's take on this idea.

Comment by davidmanheim on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-12T06:16:04.135Z · score: 23 (15 votes) · EA · GW

As a semi-outside viewer, who both works with several people in RSP and has visited FHI back in the good-old-non-pandemic days, I highly recommend that EAs both apply to the program, especially if they aren't sure if it's right for them (but see who this program is for), and talk to or work with people in the program now.

That said, I think that these comments are both accurate, and don't fully reflect some of the ancillary benefits of the program - especially ones that are not yet experienced because they will only be obvious when talking to future alumni of the program. For example, in five years, I suspect alumni of the program will say:

  • It's a very prestigious step on a CV for future work, especially for EAs who are considering policy or academic work outside of the narrow EA world, and would benefit from a boost in getting in.
  • It gives people (well-funded) time to work on expanding their horizons, and focus on making sure they can do or enjoy doing a given type of work. It can also set them up for their next step by giving them direct experience in almost any area they want to work in.
  • The network of RSPs is likely to be very valuable in the next decade as the RSP program grows and matures, and the alumni will presumably be able to stay connected to each other, and also to connect with current / past RSPs.

Comment by davidmanheim on Customized COVID-19 risk analysis as a high value area · 2020-07-29T13:30:18.692Z · score: 2 (2 votes) · EA · GW

This is still under very active development, but the github repository is here, and a toy version of what we'd like to produce with better estimates is here, as an RShiny app.

Comment by davidmanheim on Crucial questions for longtermists · 2020-07-29T13:27:29.201Z · score: 10 (6 votes) · EA · GW

This is really fantastic, and it seems like there is a project that could be done as a larger collaboration, building off of this post.

It would be a significant amount of additional work, but it seems very valuable to list resources relevant to each question - especially as some seem important, but have been partly addressed. (For example, re: estimates of natural pandemic risks, see my paper, and then Andrew Snyder-Beattie's paper.)

Given that, would you be interested in having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document?

Comment by davidmanheim on Customized COVID-19 risk analysis as a high value area · 2020-07-24T11:14:46.150Z · score: 4 (4 votes) · EA · GW

This sounds slightly related to something 1DaySooner is just starting: a risk model for an HCT (human challenge trial), which will look at the risk of death, and hopefully also of long-term disability. Ideally, it would also consider the probability conditional on rescue therapies being available or becoming available. To do that, we're focusing on a population subset, but the basis for the model is data that includes multiple ages, so extending it is easy.

It is likely that this model can be plugged into models for the other portions of the risk, isolation, etc. and it might be useful to collaborate. It's also an important project on its own, so if there are people interested in working with us on that, I'd be happy to find more volunteers familiar with R and data analysis.

Comment by davidmanheim on Are there superforecasts for existential risk? · 2020-07-08T17:05:26.005Z · score: 8 (5 votes) · EA · GW

I'll speak for the consensus when I say I think there's not a clear way to decide if this is correct without actually doing it - and the outcome would depend a lot on what level of engagement the superforecasters had with these ideas already. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee the result was closer either to FHI's viewpoints or to Will's.) Even if we picked from a "fair" reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced - though perhaps this is less a function of updating neutrally towards correct ideas than of the emergence of consensus in groups.

Lastly, I have tremendous respect for Will, but I don't know that he's calibrated particularly well to make a prediction like this. (Not that I know he isn't - I just don't have any reason to think he's spent much time working on this skillset.)

Comment by davidmanheim on Are there superforecasts for existential risk? · 2020-07-07T18:51:06.460Z · score: 25 (12 votes) · EA · GW

Yes, but it is hard, and they don't work well. They can, however, be done at least slightly better.

Good Judgement was asked to forecast the risk of a nuclear war in the next year - which helps somewhat with the time-frame question. Unfortunately, the Brier score incentives are still really weak.

Ozzie Gooen and others have talked a lot about how to make forecasting better. Some of the ideas that he has suggested relate to how to forecast longer term questions. I can't find a link to a public document, but here's one example (which may have been someone else's suggestion):

You ask people to forecast what probability people will assign in 5 years to the question "will there be a nuclear war by 2100?" (You might also ask whether there will be a nuclear war in the next 5 years, of course.) By using this trick, you can have the question(s) resolve in 5 years, and get an approximate answer based on iterated expectation. But extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question "will there be a nuclear war by 2100?" - and by chaining predictions like this, you can transform very long-term questions into series of shorter-term questions.

There is other work in this vein, but to simplify, all of it takes the form "can we do something clever to slightly reduce the issues that exist with the fundamentally hard question of getting short term answers to long term questions." As far as I can see, there aren't any simple answers.
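A minimal sketch of why the chaining trick is unbiased (my own illustration, using a hypothetical Polya-urn belief model, not anything from Ozzie's work): a rational forecaster's probability over time is a martingale, so by iterated expectation, today's fair forecast of "the probability they will report in 5 years" equals today's probability - and the chain of nested questions telescopes the same way.

```python
import random

def evolve_belief(a=3, b=7, steps=5, rng=random):
    """Polya-urn update: a Beta(a, b) belief revised on each new observation.
    The reported probability a / (a + b) is a martingale over time."""
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

random.seed(0)
initial = 3 / (3 + 7)  # today's probability: 0.30
# Average the probability forecasters would report after 5 more updates:
samples = [evolve_belief() for _ in range(200_000)]
estimate = sum(samples) / len(samples)
# By iterated expectation, estimate ~ initial: the short-horizon question
# "what will they forecast in 5 years?" is an unbiased proxy for the
# long-horizon question itself.
```

The unbiasedness holds only insofar as forecasters update rationally; systematic over- or under-reaction at each link would compound along the chain.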

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-03T11:32:23.717Z · score: 3 (2 votes) · EA · GW

I disagree somewhat on a few things, but I'm not very strongly skeptical of any of these points. I do have a few points to consider about these issues.

Re: stable long-term despotism, you might look into the idea of "hydraulic empires" and their stability. I think that, short of having a similar monopoly or a global singleton, other systems are unstable enough that they should evolve towards whatever is optimal. However, nuclear weapons, if developed early by one state, could also create a quasi-singleton. And I think the Soviet Union was actually less stable than it appears in retrospect, except for their nuclear monopoly.

I do worry that some aspects of central control would be more effective at creating robust technological growth given clear tech ladders, compared to the way uncontrolled competition works in market economies, since markets are better at the explore side of the explore-exploit spectrum, and dictatorships are arguably better at exploitation. (In more than one sense.)

Re: China, the level of technology is stabilizing their otherwise fragile control of the country. I would be surprised if similar stability is possible longer term without either a hydraulic empire, per above, or similarly invasive advanced technologies - meaning that they would come fairly late. It's possible faster technology development would make this more likely.

In retrospect, 1984 seems far less worrying than a Brave New World-style anti-utopia. (But it's unclear that lots of happy people guided centrally is actually as negative as it is portrayed, at least according to some versions of utilitarianism.)

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-03T09:59:38.272Z · score: 6 (4 votes) · EA · GW

"The right question" has 2 components. First is that the thing you're asking about is related to what you actually want to know, and second is that it's a clear and unambiguously resolvable target. These are often in tension with each other.

One clear example is COVID-19 cases - you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use as a resolution criterion. You can make more complex questions to try to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in a car accident during COVID-19, and whether COVID reduction measures also blunt the spread of influenza. And forecasting retrospective population percentages that are antibody-positive runs into issues with sampling, test accuracy, and the timeline for when such estimates are made - not to mention relying on data that might not be gathered as of when you want to resolve the question.

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-03T09:47:47.669Z · score: 3 (2 votes) · EA · GW

I think that as you forecast different domains, more common themes can start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.

And re:

How does the distribution skill / hours of effort look for forecasting for you?

I would say there's a sharp cutoff in terms of needing a minimal level of understanding (which seems to be fairly high, but certainly isn't above, say, the 10th percentile.) After that, it's mostly effort, and skill that is gained via feedback.

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-01T06:01:13.430Z · score: 6 (4 votes) · EA · GW

I already said I'd stop messing with him now.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-01T05:53:02.484Z · score: 3 (2 votes) · EA · GW

I'm very uncertain about the details, and have low confidence in each of the claims we agree about, but I agree with your overall assessment.

I've assumed that while the speed changes, the technology tree is fairly unalterable - you need good metals and similar to make many things through 1800s-level technology, you need large-scale industry to make good metals, etc. But that's low confidence, and I'd want to think about it more. (This paper looks interesting:

Regarding political systems, I think that market economies with some level of distributed control, and political systems that allow feedback in somewhat democratic ways are social technologies that we don't have clear superior alternatives to, despite centuries of thought. I'd argue that Fukuyama was right in "End of History" about the triumph of democracy and capitalism, it's just that the end state seems to take longer than he assumed.

And finally, yes, the details of how the technologies and social systems play out in terms of cosmopolitan attitudes and the societal goals they reflect are much less clear. In general, I think that humans are far more culturally plastic than people assume, and very different values are possible and compatible with flourishing in the general sense. But (if it were possible to know the answer) I wouldn't be too surprised to find out that nearly fixed tech trees + nearly fixed social technology trees mean that cosmopolitan attitudes are a very strong default, rather than an accidental contingent reality.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-29T09:51:41.204Z · score: 2 (2 votes) · EA · GW

I was focusing on "how much similarity we should expect between a civilization that has recovered and one that never collapsed in the first place," and I was saying that the degree of similarity in terms of likely progress is low, conditioning on any level of societal memory of the idea that progress is possible, and knowing (or seeing artifacts of the fact) that there once were billions of people who had flying machines and instant communication.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T10:45:35.597Z · score: 19 (8 votes) · EA · GW

I think there's a clear counterargument, which is that the central ingredient lacking in developing technologies was awareness that progress in a given area is possible. Unless almost literally all knowledge is destroyed, a recovery doesn't have this problem.

(Note: this seems to be a consensus view among people I talk to who have thought about collapse scenarios, but I can claim that only very loosely, based on a few conversations.)

Comment by davidmanheim on Why "animal welfare" is a thing? · 2020-06-28T10:08:00.556Z · score: 1 (1 votes) · EA · GW

You still seem confused. You say your views are controversial, as if this community doesn't allow for and value controversial opinions, and you seem to think that it's the claims you made that were the problem. That is not the case. Hopefully this comment is clear enough to explain.

1. This was a low-effort post. It was full of half-formed ideas, and contained neither a title nor an introduction that related to the remainder of the post, nor a clear conclusion. The sentences were not complete, and there was clearly no grammar check.

2. Look at successful posts on the forum. They contain full sentences, have a clear topic and thoughts about a topic that are explained clearly, and engage with past discussion. It's important to notice the standards in a given forum before participating. In this case, you didn't bother looking at other posts or understanding the community norms.

3. You have not engaged with other posts, and may not have even read them. Your first attempt to post or comment reflects that lack of broader engagement. You have no post history to make people think you have given this any thought whatsoever.

4. Your unrelated comments link to your other irrelevant work, which seems crass.

Comment by davidmanheim on Thoughts on The Weapon of Openness · 2020-06-25T13:37:16.154Z · score: 1 (1 votes) · EA · GW

I think 30 years is an overstatement, though it's hard to quantify. However, I can think of a few things that make me think this gap is likely to exist, and be significant, in cryptography, and even more specifically in cryptanalysis. For hacking, the gap is clearly smaller, but still a nontrivial amount - perhaps 2 years.

Comment by davidmanheim on Cause Prioritization in Light of Inspirational Disasters · 2020-06-08T17:22:17.682Z · score: 12 (8 votes) · EA · GW

Maybe this wasn't your intent, but the title is a bit ambiguous about the word "inspire" - it seems as though you might be advocating for actions that inspire disasters, as opposed to making the case for allowing disasters that are themselves inspiring.

Comment by davidmanheim on Why might one value animals far less than humans? · 2020-06-08T16:08:09.690Z · score: 3 (2 votes) · EA · GW

Regarding 3, no, it's unclear and depends on the specific animal, what we think their qualia are like, and the specific class of experience you think are valuable.

Comment by davidmanheim on Why might one value animals far less than humans? · 2020-06-08T16:06:33.008Z · score: 1 (1 votes) · EA · GW

It's a bit more complex than that. If you think animals can't anticipate pain, or can anticipate it but cannot understand the passage of time, or understand that pain might continue, you could see an argument for animal suffering being less important than human suffering.

So yes, this could go either way - but it's still a reason one might value animals less.

Comment by davidmanheim on Some thoughts on deference and inside-view models · 2020-06-08T07:45:18.810Z · score: 1 (3 votes) · EA · GW

Correlation usually implies higher value in sources of outside variance, even if the mean is slightly lower. We should actively look for additional sources of high-value variance. And we often see that smart people outside of EA often have valuable criticisms, once we can get past the instinctive "we're being attacked" response.

Comment by davidmanheim on Why might one value animals far less than humans? · 2020-06-08T07:13:32.263Z · score: 10 (4 votes) · EA · GW

1) Differing views or uncertainty about the moral relevance of different qualia.

It's unclear that physical pain is the same experience for humans, cats, fish, and worms.

Even if it is the same mental experience, the moral value may differ due to the lack of memory or higher brain function. For example, I think there's a good argument that pain that isn't remembered, for instance via the use of Scopolamine, is (still morally relevant but) less bad than pain experienced that is remembered. Beings incapable of remembering or anticipating pain would have intrinsically less morally relevant experiences - perhaps far less.

2) Higher function as a relevant factor in assessing moral badness of negative experiences

I think that physical pain is bad, but when considered in isolation, it's not the worst thing that can happen. Suffering includes the anticipation of something bad, the memory of it occurring, the appreciation of time and lack of hope, etc. People would far prefer to have 1 hour of pain and the knowledge that it would be over at that point than to have 1 hour of pain but not be sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These seem to significantly change the moral importance of pain, even by orders of magnitude.

3) Different value due to potential for positive emotion.

If joy and elation are only possible for humans, it may be that humans have higher potential for moral value than animals. This would be true even if their negative potential was the same. In such a case, we might think that the loss of potential was morally important, and say that the death of a human, with the potential for far more positive experience, is more morally important than the death of an animal.

Comment by davidmanheim on SHOW: A framework for shaping your talent for direct work · 2020-05-28T13:21:18.636Z · score: 1 (1 votes) · EA · GW

I strongly feel this is incorrect. Coordination is incredibly expensive, is already a major pain point and source of duplication and wasted effort, and having lots of self-directed go-getters will make that worse.

Comment by davidmanheim on Are there historical examples of excess panic during pandemics killing a lot of people? · 2020-05-28T10:27:01.284Z · score: 15 (5 votes) · EA · GW

The issue with 1976 is that they reacted reasonably when considering only short term questions of public health, but plausibly overreacted from the perspective of longer term ability to keep the population healthy.

The book by Neustadt and Fineberg is the classic historical case study, and was a well-done postmortem. It does a good job laying out the points on both sides of the issue, and why the decisions were made as they were.

I would argue that given information available, they made the approximately correct decision, but the costs were higher than they expected in a way that they could have predicted, had they thought more about public reaction to failure. I will note that it's very likely this failure had far less predictable but very significant consequences over the next 50 years, given that the fear and overreaction afterwards is part of the background of most of the people skeptical of vaccines, and plausibly created or fed the initial fearmongering.

Comment by davidmanheim on Are there historical examples of excess panic during pandemics killing a lot of people? · 2020-05-28T10:11:33.741Z · score: 17 (8 votes) · EA · GW

It's a bit different from what you are looking for, and the historical cases are earlier than would be directly relevant, but there were certainly many documented cases of pogroms during various epidemics in the Middle Ages and Renaissance, when a minority group (usually Jews) was blamed and massacred.

This isn't quite what has happened so far, but I can certainly imagine a case where a modern pandemic could similarly exacerbate class or racial tensions leading to violence.

Comment by davidmanheim on Conditional Donations · 2020-05-11T10:01:36.852Z · score: 11 (4 votes) · EA · GW

Two short points.

First, there was a lot of work by Robin Hanson 15-20 years ago on conditional contracts and prediction-based contracts that might be relevant.

Second, a key issue with this sort of donation is that the organizations themselves are left with a lot of uncertainty until the contract resolves. If the contracts are really transparent, they might have some idea what is happening, but it seems likely that tons of such contracts would lead to really messy and highly uncertain future cash flows that would make planning much harder. I'm unsure if there's a clear way to fix this, but it's probably worth thinking about more. (The alternative is for people to just wait on making the donation, which is not at all transparent and makes precommitment and coordination around joint giving impossible, but obviously requires much less complexity.)

Comment by davidmanheim on Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? · 2020-04-30T12:25:59.440Z · score: 2 (2 votes) · EA · GW

Re: #2 - For vaccines, that seems unlikely, given that the companies with the highest probability of success are already pouring money into this. A clear benefit of the proposal is to reduce the risk that, if they fail (which is very plausible) or are less effective than at least some alternatives (which is even more likely), the competition will be months and months behind. And for other equipment, it seems even less likely.

Comment by davidmanheim on What are some tractable approaches for figuring out if COVID causes long term damage to those who recover? · 2020-04-28T12:29:39.005Z · score: 1 (1 votes) · EA · GW

The vast amounts of funding and research from every source that is currently starting / ongoing about COVID.


Comment by davidmanheim on What are some tractable approaches for figuring out if COVID causes long term damage to those who recover? · 2020-04-27T03:56:05.892Z · score: 2 (2 votes) · EA · GW

This seems to be very low on neglectedness, and not particularly high on tractability either.

Comment by davidmanheim on Database of existential risk estimates · 2020-04-17T14:47:25.924Z · score: 6 (5 votes) · EA · GW

It seems to me that it could be valuable to pool together new estimates from the "general EA public"

I think this is basically what Metaculus already does.

(But the post seems good / useful.)

Comment by davidmanheim on Why do we need philanthropy? Can we make it obsolete? · 2020-04-10T05:26:27.850Z · score: 1 (1 votes) · EA · GW

I think we should be willing to embrace a system that has a better mix of voluntary philanthropy, non-traditional-government programs for wealth transfer, and government decisionmaking. It's the second category I'm most excited about, which looks a lot like decentralized proposals. I'm concerned that most extant decentralized proposals, however, have little if any tether to reality. On the other hand, I'm unsure that larger governments would help, instead of hurt, in addressing these challenges.

Comment by davidmanheim on Why do we need philanthropy? Can we make it obsolete? · 2020-04-08T09:46:36.267Z · score: 6 (5 votes) · EA · GW

I claim that "fixing" coordination failures is a bad and/or incoherent idea.

Coordination isn't fully fixable because people have different goals, and scaling has unavoidable costs. A single global government would create waste on a scale that current governments don't even approach.

As people have gotten richer overall, the resources available for public benefit have grown, and this seems likely to continue. But directing those resources centrally fails: democracy doesn't scale well, and any move away from democracy brings a corresponding ability to abuse power.

In fact, I think the best solution for this is to allow individuals to direct their money how they want, instead of having a centralized system - in a word, philanthropy.

Comment by davidmanheim on New Top EA Cause: Politics · 2020-04-06T11:38:19.235Z · score: 7 (4 votes) · EA · GW

I've actually done this, and talked to others about it. The critical path, in short, is a reliable vaccine, facilities for production, and replicating those facilities to scale production.

But this has nothing to do with your announcing your candidacy for office - congratulations on deciding to run, and good luck with your campaign!

Comment by davidmanheim on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-03T08:48:16.190Z · score: 3 (2 votes) · EA · GW

Also, strongly agree on #3 - see my post from last year:

Comment by davidmanheim on US Non-Profit? Get Free* Money From the Gov on 3 Apr! · 2020-04-02T18:32:45.148Z · score: 5 (5 votes) · EA · GW

It's the only time I can remember when it seems unfortunate that EA as a movement is good at planning and ensuring that critical nonprofits have sufficient runway.

Comment by davidmanheim on The case for building more and better epistemic institutions in the effective altruism community · 2020-04-02T17:48:51.295Z · score: 5 (3 votes) · EA · GW

Re: #2, I've argued for minimal institutions - relying on markets or existing institutions rather than building new ones, where possible.

For instance, instead of setting up a new organization to fund a certain type of prize, see if you can pay an insurance company to "insure" the risk of someone winning, as determined by some criteria, and then have them manage the financials. Or, as I'm now looking at for incentivizing vaccine production, offer companies cheap financing instead of running a new program that chooses and orders vaccines to get companies to produce them.

Comment by davidmanheim on New Top EA Causes for 2020? · 2020-04-01T07:58:38.731Z · score: 11 (5 votes) · EA · GW

Politics! (See linked post.)

Comment by davidmanheim on How would you advise someone decide between different biosecurity interventions? · 2020-03-30T17:20:06.685Z · score: 8 (4 votes) · EA · GW

1) There's an entire Global Health Security Agenda that has been shouting about what needs to be done for a decade, as have many other organizations - CHS, the US's Blue Ribbon Panel, Georgetown's GHSS, and I'm sure other places internationally. Ask them where to spend your money, or better yet, read their previous reports that already tell you what needs to be done.

2) For groups that are willing to think about biosecurity risks, or take advice from people who do, think about differential tech development when picking technology to fund. There are lots of technologies that have a clear upside and almost no downside - biosurveillance, diagnostic technology, vaccine platforms, etc. Don't fund gain-of-function research, and weigh carefully (and limit) which potentially dual-use technologies to fund.

3) For government decisionmakers - don't throw money into new bureaucracy. We have lots of existing bureaucracy, much of which should be reformed, but replacing it with a new structure and adding layers isn't going to help. And in the US, don't allow a post-9/11-style move like the one that led to the creation of DHS.

Comment by davidmanheim on What promising projects aren't being done against the coronavirus? · 2020-03-26T12:27:30.084Z · score: 4 (4 votes) · EA · GW

People should be working on funding proposals for Bio-X risk mitigation policies, such as greater international coordination, better health monitoring systems, investment in non-disease-specific symptomatic surveillance, and similar. These are likely to be far easier to fund in 3-6 months, as a huge pool of money is allocated to preparing for the next pandemic.

Comment by davidmanheim on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-22T10:02:23.514Z · score: 8 (5 votes) · EA · GW

I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure whether their ability to evaluate arguments is any better than that of a similarly educated and intelligent group. I do think that FHI is a weird test case, however, because it selects on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)

Comment by davidmanheim on What COVID-19 questions should Open Philanthropy pay Good Judgment to work on? · 2020-03-22T08:33:07.563Z · score: 2 (2 votes) · EA · GW

Really happy to see this type of crowdsourcing!

Comment by davidmanheim on What promising projects aren't being done against the coronavirus? · 2020-03-22T08:32:19.712Z · score: 8 (4 votes) · EA · GW

Mitigation work seems very low on neglectedness.

I'd encourage more work on planning for post-COVID reactions and policy proposals, as well as thinking about how EAs can be in a good position to influence such decisions.