Posts

The Upper Limit of Value 2021-01-27T14:15:03.200Z
The Folly of "EAs Should" 2021-01-06T07:04:54.214Z
A (Very) Short History of the Collapse of Civilizations, and Why it Matters 2020-08-30T07:49:42.397Z
New Top EA Cause: Politics 2020-04-01T07:53:27.737Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z
Is Suffering Convex? 2018-10-21T11:44:48.259Z

Comments

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-20T18:14:57.009Z · EA · GW

Strongly endorsed.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-17T11:07:43.213Z · EA · GW

I want to point out that there's something unfair that you did here. You pointed out that AI safety is more important, and that there were two doctors who left medical practice. Ryan does AI safety now, but Greg does biosecurity, and frankly, the fact that he has an MD is fairly important for his ability to interact with policymakers in the UK. So one of your examples is at least very weak, if not evidence for the opposite of what you claimed.

"A reliable way to actually do a lot of good as a doctor" doesn't just mean not practicing; many doctors are in research, or policy, making a far greater difference - and their background in clinical medicine can be anywhere from a useful credential to being critical to their work.

Comment by davidmanheim on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T15:52:14.191Z · EA · GW

I agree that we agree ;)

I particularly endorse the claim about tractability and effectiveness of technical changes to internal nuclear weapon security and contingency planning, both with moderate confidence.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-10T11:08:19.632Z · EA · GW

It's not contradictory, but it seems like your comment goes against his post's insistence on nuance. Will was being careful to avoid this sort of absolutism, and I think at least part of the reason for doing so - not alienating those who differ on specifics, and treating our conclusions as tentative - is the point I am highlighting. Perhaps I'm reading his words too closely, but that's the reason I wrote the introduction the way I did; I was making the point that his nuance is instructive.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-10T11:01:31.747Z · EA · GW

I think it would be good to be clearer in our communication and say that we don't consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but there might be other good reasons for you to donate to them.


I made a similar claim here, regarding carbon offsets:
https://forum.effectivealtruism.org/posts/brTXG5pS3JgTatP7i/carbon-offsets-as-an-non-altruistic-expense

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-10T10:58:08.579Z · EA · GW

At least for people I know it seems to have been really good advice, at least the doctor part.


It seems like this is almost certain to be true given post-hoc selection bias, regardless of whether or not the advice is good - it doesn't differentiate between worlds where it is alienating or bad advice and some people leave the community, and worlds where it is good.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:53:51.871Z · EA · GW

Strongly agree substantively with your adjacent point, and about the desire for a well-rounded world. I think it's a different thread of thought than mine, but it is worth being clear about as well. And see my reply to Jacob_J elsewhere in the comments, here, for how I think that can work even for individuals.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:50:33.026Z · EA · GW

I think that negative claims are often more polarizing than positive ones, but I agree that there is a reason to advocate for a large movement that applies science and reasoning to do some good. I just think it already exists, albeit in a more dispersed form than a single "EA-lite." (It's what almost every large foundation already does, for example.) 

I do think that there is a clear need for an "EA-Heavy," i.e. core EA, in which we emphasize the "most" in the phrase "do the most good." My point here is that I think that this core group should be more willing to allow for diversity of action and approach. And in fact, I think the core of EA, the central thinkers and planners, people at CEA, GiveWell, Oxford, etc. already advocate this. I just don't think the message has been given as clearly as possible to everyone else.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:45:17.260Z · EA · GW

If you're pledging 10% of your income to EA causes, none of that money should go to the local opera house or your kid's private school. (And if you instead pledge 50%, or 5%, the same is true of the other 50%, or 95%.)

What you do with the remainder of your money is a separate question - and it has moral implications, but that's a different discussion. I've said this elsewhere, but think it's worth repeating:
Most supporters of EA don't tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go to the opera, or not to support local organizations they are involved with or wish to support, including the arts. The consensus simply seems to be that people shouldn't confuse supporting a local museum with effective altruism - that is, with attempting to effectively maximize global good.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:38:58.708Z · EA · GW

I think I agree with you on the substantive points, and didn't think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.

Comment by davidmanheim on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-08T09:15:18.092Z · EA · GW

I certainly agree that this is worth thinking about, but I also think it's worth suggesting that the analysis here is a bit myopic. Of course, it seems particularly relevant because many EAs are in the US. And it seems inconceivable that the world will change drastically in this one particular way, but far larger plausible changes are on the horizon. (Though as I've noted in various conversations for a while, Americans might want to personally consider their options for where else they might want to live if the US decline continues.)

And if the worst case happens, we're still likely looking at a decades-long process, during which most of the worst effects are mitigated by other countries taking up the slack, and pushing for the US's decline to be minimally disruptive to the world. Nations and empires have collapsed before, and in many cases it was bad, even very bad. (Though in other cases, like the dissolution of the British empire, there were compensating changes, like the rise of the US and the far more egalitarian and peaceful post-WWII order.) So preventing a bad collapse is plausibly as important a cause as preventing another pandemic like COVID-19 - albeit far less certain to occur, and far less certain to be bad for the world. And it's not of the same order of magnitude as many other longtermist causes, since it's highly likely that, conditional on the unlikely case of severe collapse in the US, humanity will be fine.

All that said, again, I don't disagree with the analysis overall - this is worth taking seriously.

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-06T16:42:01.475Z · EA · GW

Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall that there was a second link which I was thinking of linking to which I can no longer find where Peter made a similar point. If not, additional apologies!)

Comment by davidmanheim on The Folly of "EAs Should" · 2021-01-06T16:39:46.075Z · EA · GW

I am not suggesting avoiding the word "should" generally, as I said in the post. I thought it was clear that I am criticizing the way people overly narrow the ideal of what is and is not EA, and unreasonably narrow what is normatively acceptable within the movement - something I keep seeing, and which is harmful. I think it's clear that this can be done without claiming that everything is EA, or refraining from making normative statements altogether.

Regarding criticising GiveWell's reliance on RCTs, I think there is room for a diversity of opinion. It's certainly reasonable to claim that as a matter of decision analysis, non-RCT evidence should be considered, and that risk-neutrality and unbiased decision making require treating less convincing evidence as valid, if weaker. (I'm certainly of that opinion.)

On the other hand, there is room for some effective altruists who prefer to be somewhat risk-averse to correctly view RCTs as more certain evidence than most other forms, and prefer interventions with clear evidence of that sort. So instead of saying that GiveWell should not rely as heavily on RCTs, or that EA organizations should do other things, I think we can, and should, make the case that there is an alternative approach which treats RCTs as only a single type of evidence, and that the views of GiveWell and similar EA orgs are not the only valid way to approach effective giving. (And I think that this view is at least understood, and partly shared by many EA organizations and individuals, including many at GiveWell.)

Comment by davidmanheim on The Fermi Paradox has not been dissolved · 2020-12-20T07:22:15.111Z · EA · GW

To respond to your substantive point, intergalactic travel is possible, but slow - on the order of tens of millions of years at the very fastest. And the distribution of probable civilizations is tilted towards later in galactic evolution because of the need for heavier elements, so it's unclear that early civilizations are possible, or at least as likely.

And somewhat similar to your point, see my tweet from a couple years back:

"We don't see time travelers. This means either time travel is impossible, or humanity doesn't survive. 

Evidence of the theoretical plausibility of time travel is therefore strong evidence that we will be extinct in the nearer term future."

Comment by davidmanheim on The Fermi Paradox has not been dissolved · 2020-12-13T08:50:33.001Z · EA · GW

I think the post is well reasoned and useful in pointing out a few shortcomings in the paper, but fails to make the point you're hoping for.

First and most importantly, with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen; the density of intelligence implied by that model is still very low. That means even your conclusion dissolves the initial "paradox." At the most, it leaves the likelihood of the existence of a future great filter, based on the evidence of not seeing alien signals, far weaker than was previously argued for.
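To make the density point concrete, here's a toy Monte Carlo in the spirit of the dissolution paper's approach - all parameter ranges below are my own illustrative choices, not the post's or the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def loguniform(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return np.exp(rng.uniform(np.log(lo), np.log(hi), n))

# Toy Drake-equation factors (illustrative ranges only).
N = (loguniform(1, 100)        # star formation rate per year
     * loguniform(0.1, 1)      # fraction of stars with planets
     * loguniform(0.1, 1)      # habitable planets per such star
     * loguniform(0.01, 1)     # abiogenesis, given habitability
     * loguniform(0.01, 1)     # intelligence, given life
     * loguniform(0.01, 1)     # detectable technology, given intelligence
     * loguniform(1e3, 1e9))   # years a civilization stays detectable

print(f"P(no other detectable civilization): {np.mean(N < 1):.0%}")

# Even in the draws where we are not alone, the implied density is low:
# spreading N civilizations over a disk of ~50,000 light year radius gives
# a typical nearest neighbor around sqrt(disk_area / N) away.
disk_area = np.pi * 50_000**2
print(f"median separation: ~{np.median(np.sqrt(disk_area / N[N >= 1])):,.0f} ly")
```

With these made-up ranges, being alone comes out somewhere in the rough ballpark of the single digits, while the median non-empty draw still puts the nearest civilization thousands of light years away - too far for signals to have been seen.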

Second, a number of your arguments seem to say that we could have counterfactual evidence, and should use that as evidence. For example, "as far as we know it is equally possible that we could have found ourselves on a 9 billion year old Earth..." (we could not have, given the habitability window for life on Earth), or "Presumably life could evolve multiple times on the same planet..." (true, but not relevant for the model, since once life has emerged it passes this step - and we see no evidence of it happening on Earth). Even if these were correct, they should be reflected in the prior or model structure, as Robin Hanson suggests ("try-once steps").

In any case, I think that a closer review of some of the data points is valuable, and the post was useful.

Comment by davidmanheim on Prize: Interesting Examples of Evaluations · 2020-12-10T06:55:23.839Z · EA · GW

Thanks - I'm happy to see that this was useful, and strongly encourage prize-based crowdsourcing like this in the future, as it seems to work well.

That said, given my association with QURI, I elected to have the prize money donated to GiveWell.

Comment by davidmanheim on What are the "PlayPumps" of Climate Change? · 2020-12-06T10:56:20.265Z · EA · GW

This - especially the "offset your flight CO2 emissions" BS, where they "buy" non-counterfactual emissions reductions.

Comment by davidmanheim on Long-Term Future Fund: November 2020 grant recommendations · 2020-12-03T17:10:41.446Z · EA · GW

I would strongly support the ability of the fund to make anonymous grants, despite the decreased transparency, with suitable outside review - as happened in this case. 

First, for a graduate student, I understand that it isn't necessarily positive to be publicly known as being well funded. (This applies far less to people doing research, for whom funding is a stronger signal.) Second, I think that while transparency on the part of funders is very important, respecting individuals' privacy is an important norm, and it allows people who are doing valuable work - but who might otherwise only apply for less public grants, likely focused on other topics, or seek non-research employment - to ask for money to work on longtermist projects.

Potential COI disclosure: I have applied to the fund for future funding, and have received one grant from them in the past. (I have no interest in personally receiving anonymous funding.)

Comment by davidmanheim on Make a $10 donation into $35 · 2020-12-02T10:59:46.337Z · EA · GW

Ditto, ~3 minutes, +$35 for MIRI.

Comment by davidmanheim on An experiment to evaluate the value of one researcher's work · 2020-12-01T21:21:57.859Z · EA · GW

This sounds great, and I'm happy if you want to use my posts for this.

I also am super-happy that the Goodhart paper was used as an example of a "fairly valuable" paper! I should look at my other non-forum-post output and consider the score-to-time-and-effort ratio, to see if I can maximize the ratio by doing more of specific types of work, or emphasizing different projects.

Comment by davidmanheim on Introducing Probably Good: A New Career Guidance Organization · 2020-11-15T07:17:52.562Z · EA · GW

I like this, but have a few concerns. First, you need to pick good outcome metrics, and most are high-variance and not very informative / objective. I also think the hoped-for outcomes are different, since 80k wants a few people to pick high-priority career paths, and Probably Good wants slight marginal improvements along potentially non-ideal career paths. And lastly, you can't reliably randomize, since many people who might talk to Probably Good will be looking at 80k as well. Given all of that, I worry that even if you pick something useful to measure, the power / sample size needed, given individual variance, would be very large; a rough calculation is sketched below.
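As a back-of-the-envelope sketch, using the standard normal-approximation formula and effect sizes I made up for illustration (not anything from Probably Good):

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Sample size per arm for a two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, with d = Cohen's d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# A "slight marginal improvement" on a high-variance outcome metric might
# plausibly be d = 0.1; even d = 0.3 is optimistic for career advice.
for d in (0.1, 0.2, 0.3):
    print(f"d = {d}: ~{n_per_group(d):.0f} participants per arm")
# d = 0.1 requires roughly 1,570 people per arm - far more than a new
# career advising service is likely to be able to randomize early on.
```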

Still, I'd be happy to help Sella / Omer work through this and set it up, since I suspect they will get more applicants than they will be able to handle, and randomizing seems like a reasonable choice - and almost any type of otherwise useful follow-up survey can be used in this way once they are willing to randomize.


Comment by davidmanheim on Incentive Problems With Current Forecasting Competitions. · 2020-11-11T10:08:23.313Z · EA · GW

This is great, and it deals with a few points I didn't, but here's my tweetstorm from the beginning of last year about the distortion of scoring rules alone (a small code sketch of the rules follows the thread):

https://twitter.com/davidmanheim/status/1080458380806893568

If you're interested in probability scoring rules, here's a somewhat technical and nit-picking tweetstorm about why proper scoring for predictions and supposedly "incentive compatible" scoring systems often aren't actually a good idea.

First, some background. Scoring rules are how we "score" predictions - decide how good they are. Proper scoring rules are ones where a predictor's score is maximized when they give their true best guess. Wikipedia explains; en.wikipedia.org/wiki/Scoring_r…

A typical improper scoring rule is the "better side of even" rule, where every time your highest probability is assigned to the actual outcome, you get credit. In that case, people have no reason to report probabilities correctly - just pick a most likely outcome and say 100%.

There are many proper scoring rules. Examples include logarithmic scoring, where your score is the log of the probability assigned to the correct answer, and Brier score, which is the mean squared error. de Finetti et al. lay out the details here; link.springer.com/chapter/10.100…

These scoring rules are all fine as long as people's ONLY incentive is to get a good score.  

In fact, in situations where we use quantitative rules, this is rarely the case. Simple scoring rules don't account for this problem. So what kind of misaligned incentives exist?

Bad places to use proper scoring rules #1 - In many forecasting applications, like tournaments, there is a prestige factor in doing well without a corresponding penalty for doing badly. In that case, proper scoring rules incentivise "risk taking" in predictions, not honesty.

Bad places to use proper scoring rules #2 - In machine learning, scoring rules are used for training models that make probabilistic predictions. If predictions are then used to make decisions that have asymmetric payoffs for different types of mistakes, it's misaligned.

Bad places to use proper scoring rules #3 - Any time you want the forecasters to have the option to say "answer unknown." If this is important - and it usually is - proper scoring rules can disincentivize or over-incentivize not guessing, depending on how that option is treated.

Using a metric that isn't aligned with incentives is bad. (If you want to hear more, follow me. I can't shut up about it.)  

Carvalho discusses how proper scoring is misused; https://viterbi-web.usc.edu/~shaddin/cs699fa17/docs/Carvalho16.pdf

Anyways, this paper shows a bit of how to do better; https://pubsonline.informs.org/doi/abs/10.1287/deca.1110.0216

Fin.
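For anyone who wants to play with the rules mentioned in the thread, here's a minimal sketch - my own toy example, not from the tweetstorm - of the log and Brier scores, plus the improper "better side of even" rule:

```python
import numpy as np

def log_score(probs, outcome):
    """Logarithmic score: log of the probability assigned to the actual
    outcome. Higher is better; honesty maximizes the expected score."""
    return np.log(probs[outcome])

def brier_score(probs, outcome):
    """Brier score: mean squared error between the forecast vector and the
    one-hot outcome vector. Lower is better; also a proper rule."""
    actual = np.zeros(len(probs))
    actual[outcome] = 1.0
    return np.mean((probs - actual) ** 2)

def better_side_of_even(probs, outcome):
    """Improper rule: full credit iff the modal prediction is correct, so
    there is no reason to report anything but 100% on the favorite."""
    return float(np.argmax(probs) == outcome)

# A forecaster who truly believes 70/30 compares reporting honestly vs.
# extremizing to 100/0, under each rule, in expectation:
honest, extreme = np.array([0.7, 0.3]), np.array([1.0, 0.0])
for rule in (brier_score, better_side_of_even):
    e_honest = 0.7 * rule(honest, 0) + 0.3 * rule(honest, 1)
    e_extreme = 0.7 * rule(extreme, 0) + 0.3 * rule(extreme, 1)
    print(rule.__name__, round(e_honest, 3), round(e_extreme, 3))
# Brier (lower is better): honest 0.21 beats extreme 0.30. "Better side of
# even": both score 0.7, so extremizing costs nothing - the rule is improper.
# (log_score is omitted from the loop because the extreme report scores
# -infinity whenever the 30% outcome happens.)
```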

Comment by davidmanheim on EA Israel Strategy 2020-21 · 2020-09-30T11:33:35.434Z · EA · GW

I won't address all of these, especially since I'm not deeply involved in all of them, but on #3, there has been some discussion, and they are doing some work on this. We are trying to start such groups, but the situation is different from that in most other countries. This is mostly because college is done after mandatory military service, starting at age 21 or older, and usually even later than that, so the students are more career-focused. That gives less time for activities like EA groups.

Comment by davidmanheim on Does any thorough discussion of moral parliaments exist? · 2020-09-17T09:53:27.322Z · EA · GW

We are hoping to kind-of address the issue in that post in a paper I'm working on with Anders Sandberg - I'll let you know when we're ready to share it, if you'd like.

Comment by davidmanheim on Why we’re excited to fund charities’ work a few years in the future · 2020-09-01T11:54:40.868Z · EA · GW

I think that overall, this makes sense, but I'm surprised about an omission regarding counterfactual impact, a concern I would think would be significant. Specifically, is there any concern that (perhaps primarily non-EA) donors will see that the nonprofit is well-funded, and donate less than they counterfactually would have to fill the funding gap?

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-09-01T11:42:10.264Z · EA · GW

I'd be comfortable with 1% - I'd take a bet at 100:1 that, conditional on land warfare in China or the US with a clear victor, the winner still would, at the most extreme, restore a modified modern national government controlled by citizens with heavy restrictions on what it was allowed to do, following the post-WWII model in Japan and Germany. (I'd take the bet, but in that case, I wouldn't expect both parties to survive to collect on the bet, whichever way it ends.)

That's because the post-WWII international system is built with structures that almost entirely prevent wars of conquest, and while I don't see that system as being strong, I also don't think the weaknesses are ones leading to those norms breaking down.

But maybe, despite sticking to my earlier claim, the post-WWII replacement of Japan's emperor with a democracy is exactly the class of case we should be discussing as an example relevant to the more general question of whether civilizations are conquered rather than collapse. And the same logic would apply to Iraq, and other nations the US "helped" along the road to democracy, since they were at least occasionally - though by no means always - failing states. And Iraq was near collapse because of conflict with Iran and sanctions, not because of internal decay. (I'm less knowledgeable about the stability of Japanese culture pre-WWII.)

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-09-01T11:30:59.466Z · EA · GW

Yes, and I would include a significant discussion of this in a longer version of this post, or a paper. However, I think we mostly disagree about what people's priors or prior models were in choosing what to highlight. (I see no one using historical records of invasions / conquered nations, independent of when they contributed to a later collapse, as relevant to discussions of collapse.)

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T20:47:45.974Z · EA · GW

China could replace the US as a dominant power, but they wouldn't actually take over the US the way nations used to conquer and replace the culture of other countries.

And I agree that it's not obvious that interconnection on net increases fragility, but I think that it's clear, as I argued in the paper, that technology which creates the connection is fragile, and getting more so.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T20:44:54.530Z · EA · GW

Yes, that seems clearer and accurate - but I think it's clear that the types of external societies that developed independently and were able to mount an attack, as occurred for Greece and Rome, or when Genghis Khan invaded Europe, etc., do not exist now. That means that in my view the key source of external pressure to topple a teetering system is absent, rather than there being competition between peer nations. That seems a bit more like what I think of as inducing a bias, but your point is still well taken.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T13:48:40.734Z · EA · GW

This was imprecise - I meant that collapses were catastrophes for the civilizations involved, and current collapses would also be catastrophes, ones which I agree would be significantly worse if they impacted humanity's longer term trajectory. And yes, some collapses may have been net benefits - though I think the collapse of early agricultural societies did set those societies back, and were catastrophes for them - we just think that the direction of those societies was bad, so we're unperturbed that they collapsed. The same could be said of the once-impending collapse of the antebellum South in the US, where economic forces were going to destroy the basis of their economy, i.e. slavery. But despite the simplicity of the cause, slavery, I will greatly simplify the political dynamics leading to the outbreak of the civil war and say that they started a war to protect their culture instead of allowing the North to supplant them. This seems like a clear civilizational catastrophe, with some large moral benefits from ending slavery.

I think that unlike the antebellum South, and early exploitative agricultural societies, the collapse of Rome was also a collapse that hurt civilization's medium-term trajectory, despite taking quite a long time. And I'm hoping the ongoing collapse of the post-WWII international order isn't a similar devolution.

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-31T13:37:50.146Z · EA · GW

The first issue is that my question was whether civilizations collapse - in the sense that the system collapses to the point where large portions of the population die - infrequently or very infrequently. The argument is that conquered civilizations are "missing data," in that it seems very likely that an unstable or otherwise damaged society with a higher chance of collapse, whether due to invasion or to other factors, also has a higher chance of being supplanted rather than being seen to collapse. So I noted that we have data missing in a way that introduces a bias.

The second issue is what a collapse would look like and involve. Because civilization is more tightly interconnected, many types of collapse would be universal, rather than local. (See both Bostrom's Vulnerable World paper and my Fragile World paper for examples of how technology could lead to that occurring.) Great power wars could trigger or accelerate such a collapse, but they wouldn't lead to decoupled sociotechnical systems, or any plausible scenarios that would allow a winner to replace the loser.

Does that make sense?

Comment by davidmanheim on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-30T20:47:29.803Z · EA · GW

Agreed - and see Eliezer Yudkowsky's take on this idea.

Comment by davidmanheim on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-12T06:16:04.135Z · EA · GW

As a semi-outside viewer, who both works with several people in RSP, and has visited FHI back in the good-old-non-pandemic days, I highly recommend that EAs both apply to the program, especially if they aren't sure if it's right for them (but see who this program is for), and talk to or work with people in the program now.

That said, I think that these comments are both accurate, and don't fully reflect some of the ancillary benefits of the program - especially ones that are not yet experienced because they will only be obvious when talking to future alumni of the program. For example, in five years, I suspect alumni of the program will say:

  • It's a very prestigious step on a CV for future work, especially for EAs who are considering policy or academic work outside of the narrow EA world, and would benefit from a boost in getting in.
  • It gives people (well-funded) time to work on expanding their horizons, and focus on making sure they can do or enjoy doing a given type of work. It can also set them up for their next step by giving them direct experience in almost any area they want to work in.
  • The network of RSP scholars is likely to be very valuable in the next decade as the program grows and matures, and the alumni will presumably be able to stay connected to each other, and also to connect with current and past scholars.

Comment by davidmanheim on Customized COVID-19 risk analysis as a high value area · 2020-07-29T13:30:18.692Z · EA · GW

This is still under very active development, but the github repository is here, and a toy version of what we'd like to produce with better estimates is here, as an R Shiny app.

Comment by davidmanheim on Crucial questions for longtermists · 2020-07-29T13:27:29.201Z · EA · GW

This is really fantastic, and it seems like there is a project that could be done as a larger collaboration, building off of this post.

It would be a significant amount of additional work, but it seems very valuable to list resources relevant to each question - especially as some seem important, but have been partly addressed. (For example, re: estimates of natural pandemic risks, see my paper, and then Andrew Snyder-Beattie's paper.)

Given that, would you be interested in having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document?

Comment by davidmanheim on Customized COVID-19 risk analysis as a high value area · 2020-07-24T11:14:46.150Z · EA · GW

This sounds slightly related to something 1DaySooner is just starting, which is a risk model for an HCT (human challenge trial) that will look at the risk of death, and hopefully also of long term disability. Ideally, it would also consider the probability conditional on rescue therapies being available or becoming available. To do that, we're focusing on a population subset, but the basis for the model is data that includes multiple ages, so extending it is easy.

It is likely that this model can be plugged into models for the other portions of the risk, isolation, etc., and it might be useful to collaborate. It's also an important project on its own, so if there are people interested in working with us on that, I'd be happy to have more volunteers familiar with R and data analysis.

Comment by davidmanheim on Are there superforecasts for existential risk? · 2020-07-08T17:05:26.005Z · EA · GW

I'll speak for the consensus when I say I think there's not a clear way to decide if this is correct without actually doing it - and the outcome would depend a lot on what level of engagement the superforecasters had with these ideas already. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee it was either closer to FHI's viewpoints, or to Will's.) Even if we picked from a "fair" reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced - though perhaps this is less a function of updating neutrally towards correct ideas as it is the emergence of consensus in groups.

Lastly, I have tremendous respect for Will, but I don't know that he's calibrated particularly well to make a prediction like this. (Not that I know he isn't - I just don't have any reason to think he's spent much time working on this skillset.)

Comment by davidmanheim on Are there superforecasts for existential risk? · 2020-07-07T18:51:06.460Z · EA · GW

Yes, but it is hard, and they don't work well. They can, however, be done at least slightly better.

Good Judgement was asked to forecast the risk of a nuclear war in the next year - which helps somewhat with the time frame question. Unfortunately, the Brier score incentives are still really weak.

Ozzie Gooen and others have talked a lot about how to make forecasting better. Some of the ideas that he has suggested relate to how to forecast longer term questions. I can't find a link to a public document, but here's one example (which may have been someone else's suggestion):

You ask people to forecast what probability people will assign in 5 years to the question "will there be a nuclear war by 2100?" (You might also ask whether there will be a nuclear war in the next 5 years, of course.) By using this trick, you can have the question(s) resolve in 5 years, and have an approximate answer based on iterated expectation. But extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question "will there be a nuclear war by 2100" - and by chaining predictions like this, you can transform very long term questions into series of shorter term questions. A toy simulation of why this works is below.
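Here's that toy simulation - entirely my own sketch, with made-up belief dynamics; the update rule and parameters are invented for illustration:

```python
import random

def evolve_belief(p, steps, k=0.2):
    """Evolve a probability estimate under mean-zero news: each period,
    'bad news' (which arrives with probability p) pushes the estimate up
    by k*(1-p); otherwise it decays by a factor of (1-k). The expected
    change each period is exactly zero, so the estimate is a martingale."""
    for _ in range(steps):
        if random.random() < p:
            p = p + k * (1 - p)
        else:
            p = p * (1 - k)
    return p

random.seed(0)
p_today = 0.10  # hypothetical current probability of nuclear war by 2100

# The 5-year question resolves to whatever probability is assigned then;
# averaging many simulated futures recovers today's probability (~0.10).
paths = [evolve_belief(p_today, steps=5) for _ in range(100_000)]
print(sum(paths) / len(paths))

# So an honest forecast of "what probability will be assigned in 5 years?"
# equals today's probability for the 2100 question, which is what lets a
# chain of short questions stand in for one very long-dated question.
```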

There is other work in this vein, but to simplify, all of it takes the form "can we do something clever to slightly reduce the issues that exist with the fundamentally hard question of getting short term answers to long term questions." As far as I can see, there aren't any simple answers.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-03T11:32:23.717Z · EA · GW

I disagree somewhat on a few things, but I'm not very strongly skeptical of any of these points. I do have a few points to consider about these issues.

Re: stable long term despotism, you might look into the idea of "hydraulic empires" and their stability. I think that short of having a similar monopoly, short of a global singleton, other systems are unstable enough that they should evolve towards whatever is optimal. However, nuclear weapons, if developed early by one state, could also create a quasi-singleton. And I think the Soviet Union was actually less stable than it appears in retrospect, except for their nuclear monopoly.

I do worry that some aspects of central control would be more effective at creating robust technological growth given clear tech ladders, compared to the way uncontrolled competition works in market economies, since markets are better at the explore side of the explore-exploit spectrum, and dictatorships are arguably better at exploitation. (In more than one sense.)

Re: China, the level of technology is stabilizing their otherwise fragile control of the country. I would be surprised if similar stability is possible longer term without either a hydraulic empire, per above, or similarly invasive advanced technologies - meaning that they would come fairly late. It's possible faster technology development would make this more likely.

In retrospect, 1984 seems far less worrying than a Brave New World-style anti-utopia. (But it's unclear that lots of happy people guided centrally is actually as negative as it is portrayed, at least according to some versions of utilitarianism.)

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-03T09:59:38.272Z · EA · GW

"The right question" has 2 components. First is that the thing you're asking about is related to what you actually want to know, and second is that it's a clear and unambiguously resolvable target. These are often in tension with each other.

One clear example is COVID-19 cases - you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use as a resolution criterion. You can make more complex questions to try to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in a car accident during COVID-19, and whether COVID reduction measures also blunt the spread of influenza. And forecasting retrospective population percentages that are antibody positive runs into issues with sampling, test accuracy, and the timeline for when such estimates are made - not to mention relying on data that might not be gathered as of when you want to resolve the question.

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-03T09:47:47.669Z · EA · GW

I think that as you forecast different domains, more common themes can start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.


And re:

How does the distribution skill / hours of effort look for forecasting for you?

I would say there's a sharp cutoff in terms of needing a minimal level of understanding (which seems to be fairly high, but certainly isn't above, say, the 10th percentile.) After that, it's mostly effort, and skill that is gained via feedback.

Comment by davidmanheim on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-01T06:01:13.430Z · EA · GW

I already said I'd stop messing with him now.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-07-01T05:53:02.484Z · EA · GW

I'm very uncertain about details, and have low confidence in all of these claims we agree about, but I agree with your assessment overall.

I've assumed that while speed changes, the technology-tree is fairly unalterable - you need good metals and similar to make many things through 1800s-level technology, you need large-scale industry to make good metals, etc. But that's low confidence, and I'd want to think about it more. (This paper looks interesting: http://gamestudies.org/1201/articles/tuur_ghys.)

Regarding political systems, I think that market economies with some level of distributed control, and political systems that allow feedback in somewhat democratic ways are social technologies that we don't have clear superior alternatives to, despite centuries of thought. I'd argue that Fukuyama was right in "End of History" about the triumph of democracy and capitalism, it's just that the end state seems to take longer than he assumed.

And finally, yes, the details of how the technologies and social systems play out in terms of cosmopolitan attitudes and the societal goals they reflect are much less clear. In general, I think that humans are far more culturally plastic than people assume, and very different values are possible and compatible with flourishing in the general sense. But (if it were possible to know the answer) I wouldn't be too surprised to find out that nearly fixed tech trees + nearly fixed social technology trees mean that cosmopolitan attitudes are a very strong default, rather than an accidental contingent reality.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-29T09:51:41.204Z · EA · GW

I was focusing on "how much similarity we should expect between a civilization that has recovered and one that never collapsed in the first place," and I was saying that the degree of similarity in terms of likely progress is low, conditioning on any level of societal memory of the idea that progress is possible, and knowing (or seeing artifacts of the fact) that there once were billions of people who had flying machines and instant communication.

Comment by davidmanheim on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T10:45:35.597Z · EA · GW

I think there's a clear counterargument, which is that the central ingredient lacking in developing technologies was awareness that progress in a given area is possible. Unless almost literally all knowledge is destroyed, a recovery doesn't have this problem.

(Note: this seems to be a consensus view among people I talk to who have thought about collapse scenarios, but I can claim that only very loosely, based on a few conversations.)

Comment by davidmanheim on Why "animal welfare" is a thing? · 2020-06-28T10:08:00.556Z · EA · GW

You still seem confused. You say your views are controversial, as if this community doesn't allow for and value controversial opinions, and you seem to think the reaction is to the claims you made. That is not the case. Hopefully this comment is clear enough to explain.

1. This was a low-effort post. It was full of half-formed ideas, contained neither a title nor an introduction that related to the remainder of the post, nor a clear conclusion. The sentences were not complete, and there was clearly no grammar check.

2. Look at successful posts on the forum. They contain full sentences, have a clear topic and thoughts about a topic that are explained clearly, and engage with past discussion. It's important to notice the standards in a given forum before participating. In this case, you didn't bother looking at other posts or understanding the community norms.

3. You have not engaged with other posts, and may not have even read them. Your first attempt to post or comment reflects that lack of broader engagement. You have no post history to make people think you have given this any thought whatsoever.

4. Your unrelated comments link to your other irrelevant work, which seems crass.

Comment by davidmanheim on Thoughts on The Weapon of Openness · 2020-06-25T13:37:16.154Z · EA · GW

I think 30 years is an overstatement, though it's hard to quantify. However, I can think of a few things that make me think this gap is likely to exist, and be significant, in cryptography, and even more specifically in cryptanalysis. For hacking, the gap is clearly smaller, but still a nontrivial amount - perhaps 2 years.

Comment by davidmanheim on Cause Prioritization in Light of Inspirational Disasters · 2020-06-08T17:22:17.682Z · EA · GW

Maybe this wasn't your intent, but the title is a bit ambiguous about the word "inspire" - it seems as though you might be advocating for actions that inspire disasters, as opposed to making the case for allowing disasters that are themselves inspiring.

Comment by davidmanheim on Why might one value animals far less than humans? · 2020-06-08T16:08:09.690Z · EA · GW

Regarding 3, no, it's unclear and depends on the specific animal, what we think their qualia are like, and the specific class of experience you think are valuable.

Comment by davidmanheim on Why might one value animals far less than humans? · 2020-06-08T16:06:33.008Z · EA · GW

It's a bit more complex than that. If you think animals can't anticipate pain, or can anticipate it but cannot understand the passage of time, or understand that pain might continue, you could see an argument for animal suffering being less important than human suffering.

So yes, this could go either way - but it's still a reason one might value animals less.