Posts

Max_Daniel's Shortform 2019-12-13T11:17:10.883Z · score: 21 (7 votes)
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z · score: 34 (22 votes)

Comments

Comment by max_daniel on Max_Daniel's Shortform · 2020-03-25T16:05:59.869Z · score: 20 (5 votes) · EA · GW

[Epistemic status: info from the WHO website and Wikipedia, but I overall invested only ~10 min, so might be missing something.]

Under the 2005 International Health Regulations (IHR), states have a legal duty to respond promptly to a PHEIC.
[Note by me: The International Health Regulations include multiple instances of "public health emergency of international concern". By contrast, they include only one instance of "pandemic", and this is in the term "pandemic influenza" in a formal statement by China rather than the main text of the regulation.]
  • The WHO declared a PHEIC due to COVID-19 on January 30th.
  • The OP was prompted by a claim that the timing of the WHO using the term "pandemic" provides an argument against epistemic modesty. (Though I appreciate this was less clear in the OP than it could have been, and maybe it was a bad idea to copy my Facebook comment here anyway.) From the Facebook comment I was responding to:
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.
  • Since the WHO declaring a PHEIC seems much more consequential than them using the term "pandemic", the timing of the PHEIC declaration seems more relevant for assessing the merits of the WHO response, and thus for any argument regarding epistemic modesty.
  • Since the PHEIC declaration happened significantly earlier, any argument based on the premise that it happened too late is significantly weaker. And whatever the apparent initial force of this weaker argument, my undermining response from the OP still applies.
  • So overall, while the OP's premise appealing to major legal/institutional consequences of the WHO using the term "pandemic" seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of "informed amateurs" over experts on COVID-19.
Comment by max_daniel on Max_Daniel's Shortform · 2020-03-23T16:31:48.605Z · score: 5 (3 votes) · EA · GW

Thank you for pointing this out! It sounds like my guess was probably just wrong.

My guess was based on a crude prior on international organizations, not anything I know about the WHO specifically. I clarified the epistemic status in the OP.

Comment by max_daniel on Max_Daniel's Shortform · 2020-03-23T09:29:58.406Z · score: 6 (4 votes) · EA · GW

[Epistemic status: speculation based on priors about international organizations. I know next to nothing about the WHO specifically.]

[On the WHO declaring COVID-19 a pandemic only (?) on March 12th. Prompted by this Facebook discussion on epistemic modesty on COVID-19.]

- [ETA: this point is likely wrong, cf. Khorton's comment below. However, I believe the conclusion that the timing of WHO declarations by itself doesn't provide a significant argument against epistemic modesty still stands, as I explain in a follow-up comment below.] The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make.

- The optimal time for the WHO to declare a pandemic is primarily determined by these legal and institutional consequences. Whether COVID-19 is or will in fact be a pandemic in the everyday or epidemiological sense is an important input into the decision, but not a decisive one.

- Without familiarity with the WHO and the legal and institutional system it is a part of, it is very difficult to accurately assess the consequences of the WHO declaring a pandemic. Therefore, it is very hard to evaluate the timing of the WHO's declaration without such familiarity. And even being maximally well-informed about COVID-19 itself isn't remotely sufficient for an accurate evaluation.

- The bottom line is that the WHO officially declaring that COVID-19 is a pandemic is a totally different thing from any individual persuasively arguing that COVID-19 is or will be a pandemic. In a language that accurately reflected differences in meaning, my saying that COVID-19 is a pandemic and the WHO declaring that COVID-19 is a pandemic would use different words. It is simply not the primary purpose of this WHO speech act to be an early, accurate, reliable, or whatever indicator of whether "COVID-19 is a pandemic", to predict its impact, or any other similar thing. It isn't primarily epistemic in any sense.

- If, based just on information about COVID-19 itself, someone confidently thinks that the WHO ought to have declared a pandemic earlier, they are making a mistake akin to answering "yes" to the question "could you pass me the salt?" without doing anything.

So did the WHO make a mistake by not declaring COVID-19 to be a pandemic earlier, and if so, how consequential was it? Well, I think the timing was probably suboptimal just because my prior is that most complex institutions aren't optimized for getting the timing of such things exactly right. But I have no idea how consequential a potential mistake was. In fact, I'm about 50-50 on whether the optimal time would have been slightly earlier or slightly later. (Though substantially earlier seems significantly more likely to have been optimal than substantially later.)

Comment by max_daniel on Activism for COVID-19 Local Preparedness · 2020-03-02T19:56:19.920Z · score: 3 (2 votes) · EA · GW

I've also heard that 40-70% figure (e.g. from German public health officials like the director of Germany's equivalent of the CDC). But I'm confused for the reason you stated. So I'd also appreciate an answer.

Some hypotheses (other than the 40-70% being just wrong) I can think of, though my guess is none of them is right:

(a) The 40-70% are a very long-term figure like risk of life-time infection assuming that the virus becomes permanently endemic.

(b) There being many more undetected than confirmed cases.

(c) The slowdown in new cases in Hubei only being temporary, i.e. expecting it to accelerate again and reach 40-70% there.

(d) Thinking that the virus will spread more widely outside of Hubei, e.g. because one expects less drastic prevention/mitigation measures. [ETA: This comment seems to point to (d).]

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-26T18:00:06.552Z · score: 7 (3 votes) · EA · GW
I don't think that peculiarities of what kinds of EA work we're most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people's views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.

I agree we have important disagreements other than what kinds of EA work we're most enthusiastic about. While not of major relevance for the original issue, I'd still note that I'm surprised by what you say about various other people's views on EA, and I suspect it might not be true for me: while I agree there are some highly-valuable tasks that could be done by recent undergrads, I'd guess that if I made a list of the most valuable possible contributions then a majority of the entries would require someone to have a lot of AI-weighted generic influence/power (e.g. the kind of influence over AI a senior government member responsible for tech policy has, or a senior manager in a lab that could plausibly develop AGI), and that because of the way relevant existing institutions are structured this would usually require a significant amount of seniority. (It's possible for some smart undergrads to embark on a path culminating in such a position, but my guess is this is not the kind of thing you had in mind.)

I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you're reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. [...]
I am not intending to include beliefs and preferences in my definition of "great person", except for preferences/beliefs like being not very altruistic, which I do count.

I don't think these two claims are plausibly consistent, at least if "people like Evan" is also meant to exclude beliefs and preferences: For instance, if someone with Evan-level abilities doesn't believe that thinking in their own time and putting results online is a worthwhile thing to do, then the identification mechanism you appeal to will fail. More broadly, someone's actions will generally depend on all kinds of beliefs and preferences (e.g. on what they are able to do, on what people around them expect, on other incentives, ...) that are much more dependent on the environment than relatively "innate" traits like fluid intelligence. The boundary between beliefs/preferences and abilities is fuzzy, but as I suggested at the end of my previous comment, I think for the purpose of this discussion it's most useful to distinguish changes in value we can achieve (a) by changing the "environment" of existing people vs. (b) by adding more people to the pool.

Could you name a profile of such a person, and which of the types of work I named you think they'd maybe be as good at as the people I named?

What do you mean by "profile"? Saying what properties they have, but without identifying them? Or naming names or at least usernames? If the latter, I'd want to ask the people if they're OK with me naming them publicly. But in principle happy to do either of these things, as I agree it's a good way to check if my claim is plausible.

I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it?

Maybe. When I said "they might be great", I meant something roughly like: if it was my main goal to find people great at task X, I'd want to invest at least 1-10 hours per person finding out more about how good they'd be at X (this might mean talking to them, giving them some sort of trial tasks, etc.). I'd guess that for between 5% and 50% of these people I'd eventually end up concluding they should work full-time doing X or similar.

Also note that originally I meant to exclude practice/experience from the relevant notion of "greatness" (i.e. it just includes talent/potential). So for some of these people my view might be something like "if they did 2 years of deliberate practice, they then would have a 5% to 50% chance of meeting the bar for X". But I now think that probably the "marginal value from changing the environment vs. marginal value from adding more people" operationalization is more useful, which would require "greatness" to include practice/experience to be consistent with it.

If we disagree about the bar, I suspect that me having bad models about some of the examples you gave explains more of the disagreement than me generally dismissing high bars. "Functional programming" just doesn't sound to me like the kind of task with high returns to super-high ability levels, and similarly for community building; but it's plausible that there are bundles of tasks involving these things where it matters a lot if you have someone whose ability is 6 instead of 5 standard deviations above the mean (not always well-defined, but you get the idea). E.g. if your "task" is "make a painting that will be held in similar regard to the Mona Lisa" or "prove P != NP" or "be as prolific as Ramanujan at finding weird infinite series for pi", then, sure, I agree we need an extremely high bar.

For what it's worth, I think that you're not credulous enough of the possibility that the person you talked to actually disagreed with you--I think you might be doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.

Thanks for pointing this out. FWIW, I think there likely is both substantial disagreement between me and that person and that I misunderstood their view in some ways.

Comment by max_daniel on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-26T12:56:22.696Z · score: 21 (12 votes) · EA · GW

You might also be interested in John Halstead's and Johannes Ackva's recent Climate & Lifestyle Report for Founders Pledge. They point out that taking into account policy effects can dramatically change the estimated climate impact of lifestyle choices, and on children specifically they say that:

The biggest discrepancy here concerns the climate effect of having children. For the reasons given, we think our estimate of the effect of having children is more accurate for people living in the EU or US states with strong climate policy, such as California, New York, as well as other states in the Northeast. Indeed, even outside the US states with strong climate policy, we think the estimate accounting for policy is much closer to the truth, since emissions per head are also declining at the national level, and climate policy is likely to strengthen across the US in the next few decades.

After taking into account policy effects, they find that the climate impact of having children is comparable to some other lifestyle choices such as living car-free. (I.e. it's not the case that the climate impact of having children is orders of magnitude larger, as one might naively think w/o considering policy effects.)

For more detail, see their section 3.

Comment by max_daniel on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-26T12:44:52.278Z · score: 18 (10 votes) · EA · GW

I agree that blanket endorsements of anti-natalism (whether for climate or other reasons) in EA social media spaces are concerning, and I appreciate you taking the time to write down why you think they are misguided.

FWIW, my reaction to this post is: you present a valid argument (i.e. if I believed all your factual premises, then I'd think your conclusion follows), but this post by itself doesn't convince me that the following factual premise is true:

The magnitude of [your kids'] impact on the climate is likely to be much, much smaller than any of the three other factors I have raised.

At first glance, this seems highly non-obvious to me. I'd probably at least want to see a back-of-the-envelope calculation before believing this is right.

(And I'm not sure it is: I agree that your kids' impact on the climate would be more causally distant than their impact on your own well-being, your career, etc. However, conversely, there is a massive scale difference: impacts on climate affect the well-being of many people in many generations, not just your own. Notably, this is also true for impacts on your career, in particular if you try to improve the long-term future. So my first-pass guess is that the expected impact will be dominated by the non-obvious comparison of these two "distant" effects.)

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-21T12:35:19.747Z · score: 4 (2 votes) · EA · GW

Thanks, very interesting!

I agree the examples you gave could be done by a recent graduate. (Though my guess is the community building stuff would benefit from some kinds of additional experience that have trained relevant project management and people skills.)

I suspect our impressions differ in two ways:

1. My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)

2. Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you've identified. E.g., I can think of ~5 people off the top of my head who I think might be great at one of the things you listed, and if I had your view on their value I'd probably think they should stop doing what they're doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people - and then there is the large number of people neither of us has any information about.

Two other thoughts I had in response:

  • It might be quite relevant whether "great people" refers only to talent or also to beliefs and values/preferences. E.g. my guess is that there are several people who could be great at functional programming who either don't want to work for MIRI, or don't believe that this would be valuable. (This includes e.g. myself.) If to count as a "great person" you need to have the right beliefs and preferences, I think your claim that "EA needs more great people" becomes stronger. But I think the practical implications would differ from the "greatness is only about talent" version, which is the one I had in mind in the OP.
  • One way to make the question more precise: At the margin, is it more valuable (a) to try to add high-potential people to the pool of EAs or (b) to change the environment (e.g. coordination, incentives, ...) to increase the expected value of activities by people in the current pool? With this operationalization, I might actually agree that the highest-value activities of type (a) are better than the ones of type (b), at least if the goal is finding programmers for MIRI and maybe for community building. (I'd still think that this would be because, while there are sufficiently talented people in EA, they don't want to do this, and it's hard to change beliefs/preferences and easier to get new smart people excited about EA. - Not because the community literally doesn't have anyone with a sufficient level of innate talent. Of course, this probably wasn't the claim the person I originally talked to was making.)
Comment by max_daniel on Thoughts on electoral reform · 2020-02-19T11:17:37.843Z · score: 18 (6 votes) · EA · GW

(The following summary [not by me] might be helpful to some readers not familiar with the book:

https://casparoesterheld.com/2017/06/18/summary-of-achen-and-bartels-democracy-for-realists/ )

Comment by max_daniel on How do you feel about the main EA facebook group? · 2020-02-19T11:14:51.014Z · score: 4 (3 votes) · EA · GW

I almost never read the EA Facebook group. But I tend to generally dislike Facebook, and there simply is no Facebook group I regularly use. I think I joined the EA Facebook group in early 2016, though it's possible that it was a few months earlier or later. (In fact, I didn't have a Facebook account previously. I only created one because a lot of EA communication seemed to happen via Facebook, which I found somewhat annoying.) Based on my very infrequent visits, I don't have a sense that it changed significantly. But I'm not sure if I would have noticed.

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-19T10:31:10.026Z · score: 21 (12 votes) · EA · GW

[On https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ ]

  • [ETA: After having talked to more people, it now seems to me that disagreeing on this point more often explains different reactions than I thought it would. I'm also now less confident that my impression that there wasn't bad faith from the start is correct, though I think I still somewhat disagree with many EAs on this. In particular, I've also seen plenty of non-EA people who don't plausibly have a "protect my family" reaction say the piece felt like a failed attempt to justify a negative bottom line that was determined in advance.] (Most of the following doesn't apply in cases where someone is acting in bad faith and is determined to screw you over. And in fact I've seen the opposing failure mode of people assuming good faith for too long. But I don't think this is a case of bad faith.)
  • I've seen some EAs react pretty negatively or angrily to that piece. (Tbc, I've also seen different reactions.) Some have described the article as a "hit piece".
  • I don't think it qualifies as a hit piece. More like a piece that's independent/pseudo-neutral/ambiguous and tries to stick to dry facts/observations but in some places provides a distorted picture by failing to be charitable / arguably missing the point / being one-sided and selective in the observations it reports.
  • I still think that reporting like this is net good, and that the world would be better if there was more of it at the margin, even if it has flaws similarly severe to that one. (Tbc, I think there would have been a plausibly realistic/achievable version of that article that would have been better, and that there is fair criticism one can direct at it.)
  • To put it bluntly, I don't believe that having even maximally well-intentioned and intelligent people at key institutions is sufficient for achieving a good outcome for the world. I find it extremely hard to have faith in a setup that doesn't involve a legible system/structure with things like division of labor, checks and balances, procedural guarantees, healthy competition, and independent scrutiny of key actors. I don't know if the ideal system for providing such outside scrutiny will look even remotely like today's press, but currently it's one of the few things in this vein that we have for nonprofits, and Karen Hao's article is an (albeit flawed) example of it.
  • Whether this specific article was net good or not seems pretty debatable. I definitely see reasons to think it'll have bad consequences, e.g. it might crowd out better reporting, might provide bad incentives by punishing orgs for trying to do good things, ... I'm less wedded to a prediction of this specific article's impact than to the broader frame for interpreting and reacting to it.
  • I find something about the very negative reactions I've seen worrying. I of course cannot know what they were motivated by, but some seemed like the way I would expect someone to react who's personally hurt because they judge a situation as being misunderstood, feels like they need to defend themselves, or feels like they need to rally to protect their family. I can relate to misunderstandings being a painful experience, and have sympathy for it. But I also think that if you're OpenAI, or "the EA community", or anyone aiming to change the world, then misunderstandings are part of the game, and that any misunderstanding involves at least two sides. The reactions I'd like to see would try to understand what has happened and engage constructively with how to productively manage the many communication and other challenges involved in trying to do something that's good for everyone without being able to fully explain your plans to most people. (An operationalization: If you think this article was bad, I think that ideally the hypothesis "it would be good if we had better reporting" would enter your mind as readily as the hypothesis "it would be good if OpenAI's comms team and leadership had done a better job".)
Comment by max_daniel on Max_Daniel's Shortform · 2020-02-14T18:50:21.982Z · score: 18 (13 votes) · EA · GW

[Is longtermism bottlenecked by "great people"?]

Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people".

I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks.

Given the size and talent distribution of the EA community my guess is that for most apparent X, the issue either is that (a) X is ~impossible, or (b) there are people in EA who could do X, but the relevant actors cannot identify them, or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so.

My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable projects with short feedback loops on some intermediate metrics (e.g. running some super-large-scale machine learning experiments), then I expect I would hear a lot of claims like "we really need someone who can do X". I think it's just a general property of a certain kind of fast-growing organization that's doing practical things in the world that everything constantly seems like it's on fire. But I would also expect that, if I poked a bit at these claims, it would usually turn out that X is something like "contribute to this software project at the pace and quality level of our best engineers, w/o requiring any management time" or "convince some investors to give us much more money, but w/o anyone spending any time transferring relevant knowledge". If you see that things break because X isn't done, even though something like X seems doable in principle (perhaps you see others do it), it's tempting to think that what you need is more "great people" who can do X. After all, people generally are the sort of stuff that does things, and maybe you've actually seen some people do X. But it still doesn't follow that in your situation "great people" are the bottleneck ...

Curious if anyone has examples of tasks X for which the original claims seem in fact true. That's probably the easiest way to convince me that I'm wrong.

Comment by max_daniel on Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' · 2020-01-30T13:23:13.564Z · score: 2 (1 votes) · EA · GW

Thank you for sharing your reaction!

Would be interested to hear if the authors have thought through this.

I haven't, but it's possible that my coauthors have. I generally agree that it might be worthwhile to think along the lines you suggested.

Comment by max_daniel on Max_Daniel's Shortform · 2020-01-17T12:00:49.073Z · score: 2 (1 votes) · EA · GW

Thanks for sharing your reaction! There is some chance that I'll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it's maybe only 15% likely that I'll think this is the best use of my time within the next 6 months.

Comment by max_daniel on Long-term investment fund at Founders Pledge · 2020-01-09T13:09:56.067Z · score: 14 (10 votes) · EA · GW

That sounds great! I find the arguments for giving (potentially much) later intriguing and underappreciated. (If I had to allocate a large amount of money myself, I'm not sure what I'd end up doing. But overall it seems good to me if there is at least the option to invest.) I'd be very excited for such a fund to exist - partly because I expect that setting it up and running it will provide a bunch of information on empirical questions relevant for deciding whether investing into such a fund beats giving now.

Comment by max_daniel on Max_Daniel's Shortform · 2020-01-08T14:26:22.019Z · score: 7 (5 votes) · EA · GW

[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]

1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like irrespective of the specific goals and resources of the actors the mechanism consists of; e.g., whether some key AI lab is a nonprofit or a publicly traded company; who would decide by what rules/voting scheme how windfall profits would be redistributed; etc. By (b) I mean something like how much the CEO of a key firm, or their advisors, care about the long-term future. -- I can see why relying mostly on (b) is attractive, e.g. it's arguably more tractable; however, some EA thinking (mostly from the Bay Area / the rationalist community to be honest) strikes me as focusing on (b) for reasons that seem ahistoric or otherwise dubious to me. So I don't feel convinced that what I perceive to be a very stark focus on (b) is warranted. I think that figuring out if there are viable strategies that rely more on (a) is better done from within institutions that have no ties with key TAI actors, and also might be best done by people who don't quite match the profile of the typical new EA who got excited about Superintelligence or HPMOR. Overall, I think that making more academic research in broadly "policy relevant" fields happen would be a decent strategy if one ultimately wanted to increase the amount of thinking on type-(a) theories of impact.

2. What's the theory of impact if TAI happens in more than 20 years? More than 50 years? I think it's not obvious whether it's worth spending any current resources on influencing such scenarios (I think they are more likely but we have much less leverage). However, if we wanted to do this, then I think it's worth bearing in mind that academia is one of few institutions (in a broad sense) that has a strong track record of enabling cumulative intellectual progress over long time scales. I roughly think that, in a modal scenario, no-one in 50 years is going to remember anything that was discussed on the EA Forum or LessWrong, or within the OpenAI policy team, today (except people currently involved); but if AI/TAI was still (or again) a hot topic then, I think it's likely that academic scholars will read academic papers by Dafoe, his students, the students of his students etc. Similarly, based on track records I think that the norms and structure of academia are much better equipped than EA to enable intellectual progress that is more incremental and distributed (as opposed to progress that happens by way of 'at least one crisp insight per step'; e.g. the Astronomical Waste argument would count as one crisp insight); so if we needed such progress, it might make sense to seed broadly useful academic research now. 

[...]

My view is closer to "~all that matters will be in the specifics, and most of the intuitions and methods for dealing with the specifics are either sort of hard-wired or more generic/have different origins than having thought about race models specifically". A crux here might be that I expect most of the tasks involved in dealing with the policy issues that would come up if we got TAI within the next 10-20 years to be sufficiently similar to garden-variety tasks involved in familiar policy areas that as a first pass: (i) if theoretical academic research was useful, we'd see more stories of the kind "CEO X / politician Y's success was due to idea Z developed through theoretical academic research", and (ii) prior policy/applied strategy experience is the background most useful for TAI policy, with usefulness increasing with the overlap in content and relevant actors; e.g.: working with the OpenAI policy team on pre-TAI issues > working within Facebook on a strategy for how to prevent the government from splitting up the firm in case a left-wing Democrat wins > business strategy for a tobacco company in the US > business strategy for a company outside of the US that faces little government regulation > academic game theory modeling. That's probably too pessimistic about the academic path, and of course it'll depend a lot on the specifics (you could start in academia to then get into Facebook etc.), but you get the idea.

[...]

Overall, the only somewhat open question for me is whether ideally we'd have (A) ~only people working quite directly with key actors or (B) a mix of people working with key actors and more independent ones e.g. in academia. It seems quite clear to me that the optimal allocation will contain a significant share of people working with key actors [...]

If there is a disagreement, I'd guess it's located in the following two points: 

(1a) How big are countervailing downsides from working directly with, or at institutions having close ties with, key actors? Here I'm mostly concerned about incentives distorting the content of research and strategic advice. I think the question is broadly similar to: If you're concerned about the impacts of the British rule on India in the 1800s, is it best to work within the colonial administration? If you want to figure out how to govern externalities from burning fossil fuels, is it best to work in the fossil fuel industry? I think the cliche left-wing answer to these questions is too confident in "no" and is overlooking important upsides, but I'm concerned that some standard EA answers in the AI case are too confident in "yes" and are overlooking risks. Note that I'm most concerned about kind of "benign" or "epistemic" failure modes: I think it's reasonably easy to tell people with broadly good intentions apart from sadists or even personal-wealth maximizers (at least in principle -- if this will get implemented is another question); I think it's much harder to spot cases like key people incorrectly believing that it's best if they keep as much control for themselves/their company as possible because after all they are the ones with both good intentions and an epistemic advantage (note that all of this really applies to a colonial administration with little modification, though here in cases such as the "Congo Free State" even the track record of "telling personal-wealth maximizers apart from people with humanitarian intentions" maybe isn't great -- also NB I'm not saying that this argument would necessarily be unsound; i.e. I think that in some situations these people would be correct).

(1b) To what extent do we need (a) novel insights as opposed to (b) an application of known insights or common-sense principles? E.g., I've heard claims that the sale of telecommunication licenses by governments is an example where post-1950 research-level economics work in auction theory has had considerable real-world impact, and AFAICT this kind of auction theory strikes me as reasonably abstract and in little need of having worked with either governments or telecommunication firms. Supposing this is true (I haven't really looked into this), how many opportunities of this kind are there in AI governance? I think the case for (A) is much stronger if we need little to no (a), as I think the upsides from trust networks etc. are mostly (though not exclusively) useful for (b). FWIW, my private view actually is that we probably need very little of (a), but I also feel like I have a poor grasp of this, and I think it will ultimately come down to what high-level heuristics to use in such a situation.


Comment by max_daniel on Max_Daniel's Shortform · 2019-12-17T11:56:08.786Z · score: 52 (21 votes) · EA · GW

[Some of my high-level views on AI risk.]

[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. / It might sometimes be useful to be able to link to this.]

[In this post I generally state what I think ​before ​updating on other people’s views – i.e., what’s ​sometimes known as​ ‘impressions’ as opposed to ‘beliefs.’]

Summary

  • Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on.
  • The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the ‘rules of the game’ under which interactions between actors will play out.

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular,

  • My overarching interest is to make the lives of as many moral patients as possible go as well as possible, no matter where or when they live; and
  • I think that in the world we find ourselves in – it could have been otherwise –, this goal entails strong longtermism,​ i.e. the claim that “the primary determinant of the value of our actions today is how those actions affect the very long-term future.”

Less standard but not highly unusual (within EA) high-level views I hold more tentatively:

  • The indirect long-run impacts of our actions are extremely hard to predict and don’t ‘cancel out’ in expectation. In other words, I think that what ​Greaves (2016)​ calls ​complex cluelessness​ is a pervasive problem. In particular, evidence that an action will have desirable effects in the short term generally is ​not​ a decisive reason to believe that this action would be net positive overall, and neither will we be able to establish the latter through any other means.
  • Increasing the relative influence of longtermist actors is one of the very few strategies we have good reasons to consider net positive. Shaping TAI is a particularly high-leverage instance of this strategy, where the main mechanism is reaping an ‘epistemic rent’ from having anticipated TAI earlier than other actors. I take this line of support to be significantly more robust than any ​particular story on how TAI might pose a global catastrophic risk including even broad operationalizations of the ‘value alignment problem.’

My empirical views on TAI

I think the strongest reasons to expect TAI this century are relatively outside-view-based (I talk about this century just because I expect that later developments are harder to predictably influence, not because I think a century is a particularly meaningful time horizon or because I think TAI would be less important later):

  • We’ve been able to automate an increasing number of tasks (with increasing performance and falling cost), and I’m not aware of a convincing argument for why we should be ​highly confident​ that this trend will stop short of ​full automation –​ i.e., AI systems being able to do all tasks more economically efficiently than humans –, despite moderate scientific and economic incentives to find and publish one.
  • Independent types of weak evidence such as ​trend extrapolation​ and ​expert​ ​surveys​ suggest we might achieve full automation this century.
  • Incorporating full automation into macroeconomic growth models predicts – at least under some assumptions – a sustained higher rate of economic growth (e.g. Hanson 2001, Nordhaus 2015, Aghion et al. 2017), which arguably was the main driver of the welfare-relevant effects of the Industrial Revolution.
  • Accelerating growth this century is consistent with extrapolating historic growth rates, e.g. Hanson (2000[1998])​.

I think there are several reasons to be skeptical, but that the above succeeds in establishing a somewhat robust case for TAI this century not being wildly implausible.

My impression is that I’m less confident than the typical longtermist EA in various claims around TAI, such as:

  • Uninterrupted technological progress would eventually result in TAI;
  • TAI will happen this century;
  • we can currently anticipate any specific way of positively shaping the impacts of TAI;
  • if the above three points were true then shaping TAI would be the most cost-effective way of improving the long-term future.

My guess is this is due to different priors, and due to frequently having found extant specific arguments for TAI-related claims (including by staff at FHI and Open Phil) less convincing than I would have predicted. I still think that work on TAI is among the few best shots for current longtermists.

Comment by max_daniel on Max_Daniel's Shortform · 2019-12-13T11:17:11.201Z · score: 34 (13 votes) · EA · GW

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

  • Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
  • In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism."

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:

  • 1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless."
  • 2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.

Link dump (not necessarily comprehensive)

If you only read two things, I'd recommend (1) Hasell's and Roser's article explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic.

By Hickel (i.e. against "new optimism"):

By "new optimists":

Commentary by others:

My view

  • I'm largely unpersuaded by Hickel's charge that historic poverty data is invalid. Sure, it's way less good than contemporary data. But based on Hasell's and Roser's article, my impression is that the data is better than I would have thought, and its orthodox analysis and interpretation more sophisticated than I would have thought. I would be surprised if access to better data would qualitatively change the "new optimist" conclusion.
  • I think there is room for debate over which indicators to use, and that Hickel makes some interesting points here. I find it regrettable that the debate around this seems so adversarial.
  • Still, my sense is that there is an important, true, and widely underappreciated (particularly by people on the left, including my past self) core of the "new optimist" story. I'd expect looking at other indicators could qualify that story, or make it less simplistic, point to important exceptions etc. - but I'd probably consider a choice of indicators that painted an overall pessimistic picture as quite misleading and missing something important.
  • On the other hand, I would quite strongly want to resist the conclusion that everything in this debate is totally settled, and that the new optimists are clearly right about everything, in the same way in which orthodox climate science is right about climate change being anthropogenic, or orthodox medicine is right about homeopathy not being better than placebo. But I think the key uncertainties are not in historic poverty data, but in our understanding of wellbeing and its relationship to environmental factors. Some examples of why I think it's more complicated:
    • The Easterlin paradox
    • The unintuitive relationship between (i) subjective well-being in the sense of the momentary affective valence of our experience on one hand and (ii) reported life satisfaction. See e.g. Kahneman's work on the "experiencing self" vs. "remembering self".
    • On many views, the total value of the world is very sensitive to population ethics, which is notoriously counterintuitive. In particular, on many plausible views, the development of the total welfare of the world's human population is dominated by its increasing population size.
  • Another key uncertainty is the implications of some of the discussed historic trends for the value of the world going forward, about which I think we're largely clueless. For example, what are the effects of changing inequality on the long-term future?

[1] It's not clear to me if "new optimism" is actually new. I'm using Hickel's label just because it's short and it's being used in this debate anyway, not to endorse Hickel's views or make any other claim.

[2] There is an obvious problem with new optimism, which is that it's anthropocentric. In fact, on many plausible views, the total axiological value of the world at any time in the recent past may be dominated by the aggregate wellbeing of nonhuman animals; even more counterintuitively, it may well be dominated by things like the change in the total population size of invertebrates. But this debate is about human wellbeing, so I'll ignore this problem.

Comment by max_daniel on Rethink Priorities Impact Survey · 2019-12-02T19:19:30.594Z · score: 17 (8 votes) · EA · GW

Thanks for posting this! I'd really like to see more organizations evaluate their impact, and publish about their analysis.

Just a quick note: You mention that I indicated I "found [y]our work on nuclear weapons somewhat useful". This is correct. I'd like to note that the main reason why I don't find it very useful simply is that I currently don't anticipate to work on nuclear security personally, or to make any decisions that depend on my understanding of nuclear security. In general, how "useful" people find your work is a mix of their focus and the quality of your work (which in this case AFAICT is very high, though I haven't reviewed it in detail), which might make it hard to interpret the results.

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-11T16:48:40.561Z · score: 6 (2 votes) · EA · GW

Regarding your "outside view" point: I agree with what you say here, but think it cannot directly undermine my original "outside view" argument. These clarifications may explain why:

  • My original outside view argument appealed to the process by which certain global health interventions such as distributing bednets have been selected rather than their content. The argument is not "global health is a different area from economic growth, therefore a health intervention is unlikely to be optimal for accelerating growth"; instead it is "an intervention that has been selected to be optimal according to some goal X is unlikely to also be optimal according to a different goal Y".
    • In particular, if GiveWell had tried to identify those interventions that best accelerate growth, I think my argument would be moot (no matter what interventions they had come up with, in particular in the hypothetical case where distributing bednets had been the result of their investigation).
    • In general, I think that selecting an intervention that's optimal for furthering some goal needs to pay attention to all of importance, tractability, and neglectedness. I agree that it would be bad to exclusively rely on the heuristic "just focus on the most important long-term outcome/risk" when selecting longtermist interventions, just as it would be bad to just rely on the heuristic "work on fighting whatever disease has the largest disease burden globally" when selecting global health interventions. But I think these would just be bad ways to select interventions, which seems orthogonal to the question of when an intervention selected for X will also be optimal for Y. (In particular, I don't think that my original outside view argument commits me to the conclusion that in the domain of AI safety it's best to directly solve the largest or most long-term problem, whatever that is. I think it does recommend deliberately selecting an intervention optimized for reducing AI risk, but this selection process should also take into account feedback loops and all the other considerations you raised.)
  • The main way I can see to undermine this argument would be to argue that a certain pair of goals X and Y is related in such a way that interventions optimal for X are also optimal for Y (e.g., X and Y are positively correlated, though this in itself wouldn't be sufficient). For example, in this case, such an argument could be of the type "our best macroeconomic models predict that improving health in currently poor countries would have a permanent rate effect on growth, and empirically it seems likely that the potential for sustained increases in the growth rate is largest in currently poor countries" (I'm not saying this claim is true, just that I would want to see something like this).
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-11T15:49:45.798Z · score: 1 (1 votes) · EA · GW
The "inside view" point is that Christiano's estimate only takes into account the "price of a life saved". But in truth GiveWell's recommendations for bednets or deworming are to a large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than what his calculations suggest. (Micronutrient supplementation may also be quite efficient in this respect.)

I think this is a fair point. Specifically, I agree that GiveWell's recommendations are only partly (in the case of bednets) or not at all (in the case of deworming) based on literally averting deaths. I haven't looked at Paul Christiano's post in sufficient detail to say for sure, but I agree it's plausible that this way of using "price of a life saved" calculations might effectively ignore other benefits, thus underestimating the benefits of bednet-like interventions compared to GiveWell's analysis.

I would need to think about this more to form a considered view, but my guess is this wouldn't change my mind on my tentative belief that global health interventions selected for their short-term (say, anything within the next 20 years) benefits aren't optimal growth interventions. This is largely because I think the dialectical situation looks roughly like this:

  • The "beware suspicious convergence" argument implies that it's unlikely (though not impossible) that health interventions selected for maximizing certain short-term benefits are also optimal for accelerating long-run growth. The burden of proof is thus with the view that they are optimal growth interventions.
  • In addition, some back-of-the-envelope calculations suggest the same conclusion as the first bullet point.
  • You've pointed out a potential problem with the second bullet point. I think it's plausible, perhaps likely, that this significantly or even totally removes the force of the second bullet point. But even if the conclusion of the calculations were completely turned on its head, I don't think the calculations would by themselves succeed in defeating the first bullet point.
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T16:53:06.919Z · score: 2 (2 votes) · EA · GW

As I said in another comment, one relevant complication seems to be that risk and growth interact. In particular, the interaction might be such that speeding up growth could actually have negative value. This has been debated for a long time, and I don't think the answer is obvious. It might be something we're clueless about.

(See Paul Christiano's How useful is “progress”? for an ingenious argument for why either

  • (a) "People are so badly mistaken (or their values so misaligned with mine) that they systematically do harm when they intend to do good, or"
  • (b) "Other (particularly self-interested) activities are harmful on average."

Conditional on (b) we might worry that speeding up growth would work via increasing the amount or efficiency of various self-interested activities, and thus would be harmful.

I'm not sure if I buy the argument, though. It is based on "approximat[ing] the changes that occur each day as morally neutral on net". But on longer timescales it seems that we should be highly uncertain about the value of changes. It thus seems concerning to me to look at a unit of time for which the magnitude of change is unintuitively small, round it to zero, and extrapolate from this to a large-scale conclusion.)

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T16:23:39.687Z · score: 4 (3 votes) · EA · GW

You say that:

I will [...] focus instead on a handful of simple model cases. [...] These models will be very simple. In my opinion, nothing of value is being lost by proceeding in this way.

I agree in the sense that I think your simple models succeed in isolating an important consideration that wouldn't itself be qualitatively altered by looking at a more complex model.

However, I do think (without implying that this contradicts anything you have said in the OP) that there are other crucial premises for the argument concluding that reducing existential risk is the best strategy for most EAs. I'd like to highlight three, without implying that this list is comprehensive.

  • One important question is how growth and risk interact. Specifically, it seems that we face existential risks of two different types: (a) 'exogenous' risks with the property that their probability per wall-clock time doesn't depend on what we do (perhaps a freak physics disaster such as vacuum decay); and (b) 'endogenous' risks due to our activities (e.g. AI risk). The probability of such endogenous risks might correlate with proxies such as economic growth or technological progress, or more specific kinds of these trends. As an additional complication, the distinction between exogenous and endogenous risks may not be clear-cut, and arguably is itself endogenous to the level of progress - for example, an asteroid strike could be an existential risk today but not for an intergalactic civilization. Regarding growth, we might thus think that we face a tradeoff where faster growth would on one hand reduce risk by allowing us to more quickly reach thresholds that would make us invulnerable to some risks, but on the other hand might exacerbate endogenous risks that increase with the rate of growth. (A crude model for why there might be risks of the latter kind: perhaps 'wisdom' increases at fixed linear speed, and perhaps the amount of risk posed by a new technology decreases with wisdom.)
    • I think "received wisdom" is roughly that most risk is endogenous, and that more fine-grained differential intellectual or technological progress aimed at specifically reducing such endogenous risk (e.g. working on AI safety rather then generically increasing technological progress) is therefore higher-value than shortening the window of time during which we're exposed to some exogenous risks.
    • See for example Paul Christiano, On Progress and Prosperity
    • A somewhat different lense is to ask how growth will affect the willingness of impatient actors - i.e., those that discount future resources at a higher rate than longtermists - to spend resources on existential risk reduction. This is part of what Leopold Aschenbrenner has examined in his paper on Existential Risk and Economic Growth.
  • More generally, the value of existential risk reduction today depends on the distribution of existential risk over time, including into the very long-run future, and on whether today's efforts would have permanent effects on that distribution. This distribution might in turn depend on the rate of growth, e.g. for the reasons mentioned in the previous point. For an excellent discussion, see Tom Sittler's paper on The expected value of the long-term future. In particular, the standard argument for existential risk reduction requires the assumption that we will eventually reach a state with much lower total risk than today.
  • A somewhat related issue is the distribution of opportunities to improve the long-term future over time. Specifically, will there be more efficient longtermist interventions in, say, 50 years? If yes, this would be another reason to favor growth over reducing risk now. Though more specifically it would favor growth, not of the economy as a whole, but of the pool of resources dedicated to improving the long-term future - for example, through 'EA community building' or investing to give later. Relatedly, the observation that longtermists are unusually patient (i.e. discount future resources at a lower rate) is both a reason to invest now and give later, when longtermists control a larger share of the pie - and a consideration increasing the value of "ensuring that the future proceeds without disruptions", potentially by using resources now to reduce existential risk. For more, see e.g.:
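
To make the crude model in the first bullet above slightly more concrete, here is a minimal simulation sketch. Everything in it - the functional forms, the parameter values, and the additive combination of hazards - is an assumption I am making purely for illustration, not something taken from the sources cited above. The point is only to exhibit the structure of the tradeoff: a higher growth rate shortens the period of exposure to the exogenous hazard, but means reaching new technologies with less accumulated 'wisdom', which raises the endogenous hazard along the way.

```python
# Purely illustrative sketch of the crude growth-vs-risk model sketched above.
# All functional forms and parameter values are made-up assumptions, not taken
# from Christiano, Aschenbrenner, Sittler, or any other cited source.

def survival_through_transition(growth_rate, exo_risk=1e-4, safety_threshold=100.0,
                                wisdom_per_year=1.0, endo_scale=1e-3):
    """P(no existential catastrophe before technology reaches `safety_threshold`):
    technology grows exponentially at `growth_rate`, 'wisdom' grows linearly,
    annual endogenous risk scales with the technology-to-wisdom ratio, and a
    constant annual exogenous risk applies throughout the transition."""
    assert growth_rate > 0
    tech, wisdom, p_survive = 1.0, 1.0, 1.0
    while tech < safety_threshold:
        tech *= 1 + growth_rate
        wisdom += wisdom_per_year
        endo_risk = min(1.0, endo_scale * tech / wisdom)
        # Adding the two hazards is itself a simplification.
        p_survive *= max(0.0, 1 - exo_risk - endo_risk)
    return p_survive

for g in (0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"growth rate {g:.1%}: P(reaching the safety threshold) ~ {survival_through_transition(g):.3f}")
```

Which effect dominates depends entirely on the made-up parameters, which is exactly why I think the question in the first sub-bullet - how much of the risk is endogenous, and how it scales with growth - is doing most of the work.
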
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T15:30:42.771Z · score: 7 (6 votes) · EA · GW

You describe the view you're examining as:

cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty

You then proceed by discussing considerations that are somewhat specific to the particular types of interventions you're comparing - i.e., reducing extinction risk versus speeding up growth.

You might be interested in another type of argument questioning this view. Arguments of this type attack the "virtually infinitely" part of the view, in a way that's agnostic about the interventions being compared. For such arguments, see e.g.:

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T13:55:24.362Z · score: 14 (10 votes) · EA · GW

Thank you, I think this is an excellent post!

I also sympathize with your confusion. - FWIW, I think that a fair amount of uncertainty and confusion about the issues you've raised here is the epistemically adequate state to be in. (I'm less sure whether we can reliably reduce our uncertainty and confusion through more 'research'.) I tentatively think that the "received longtermist EA wisdom" is broadly correct - i.e. roughly that the most good we can do usually (for most people in most situations) is by reducing specific existential risks (AI, bio, ...) - but I think that

  • (i) this is not at all obvious or settled, and involves judgment calls on my part which I could only partly make explicit and justify; and
  • (ii) the optimal allocation of 'longtermist talent' will have some fraction of people examining whether this "received wisdom" is actually correct, and will also have some distribution across existential risk reduction, what you call growth interventions, and other plausible interventions aimed at improving the long-term future (e.g. "moral circle expansion") - for basically the "switching cost" and related reasons you mention [ETA: see also sec. 2.4 of GPI's research agenda].

One thing in your post I might want to question is that, outside of your more abstract discussion, you phrase the question as whether, e.g., "AI safety should be virtually infinitely preferred to other cause areas such as global poverty". I'm worried that this is somewhat misleading because I think most of your discussion rather concerns the question whether, to improve the long-term future, it's more valuable to (a) speed up growth or to (b) reduce the risk of growth stopping.

I think AI safety is a good example of a type-(b) intervention, but that most global poverty interventions likely aren't a good example of a type-(a) intervention. This is because I would find it surprising if an intervention that has been selected to maximize some measure of short-term impact also turned out to be optimal for speeding up growth in the long run. (Of course, this is a defeasible consideration, and I acknowledge that there might be economic arguments that suggest that accelerating growth in currently poor countries might be particularly promising to increase overall growth.)

In other words, I think that the optimal "growth intervention" Alice would want to consider probably isn't, say, donating to distribute bednets; I don't have a considered view on what it would be instead, but I think it might be something like: doing research in a particularly dynamic field that might drive technological advances; or advocating changes in R&D or macroeconomic policy. (For some related back-of-the-envelope calculations, see Paul Christiano's post on What is the return to giving?; they suggest "that good traditional philanthropic opportunities have a return of around 10 and the best available opportunities probably have returns of 100-1000, with most of the heavy hitters being research projects that contribute to long term tech progress and possibly political advocacy", but of course there is a lot of room for error here. See also this post for what maximally increasing technological progress might look like.)

Lastly, here are some resources on the "increase growth vs. reduce risk" question, which you might be interested in if you haven't seen them:

  • Paul Christiano's post on (literal) Astronomical waste, where he considers the permanent loss of value from delayed growth due to cosmological processes (expansion, stars burning out, ...). In particular, he also mentions the possibility that "there is a small probability that the goodness of the future scales exponentially with the available resources", though he ultimately says he favors roughly what you called the plateau view.
  • In an 80,000 Hours podcast, economist Tyler Cowen argues that "our overwhelming priorities should be maximising economic growth and making civilization more stable".
  • For considerations about how to deal with uncertainty over how much utility will grow as a function of resources, see GPI's research agenda, in particular the last bullet point of section 1.4. (This one deals with the possibility of infinite utilities, which raises somewhat similar meta-normative issues. I thought I remembered that they also discuss the literal point you raised - i.e. what if utility will in the long run grow exponentially? - but wasn't able to find it.)

I might follow up in additional comments with some pointers to issues related to the one you discuss in the OP.

Comment by max_daniel on Opinion: Estimating Invertebrate Sentience · 2019-11-07T10:05:42.991Z · score: 8 (6 votes) · EA · GW

Very interesting, thank you!

Potential typo: In the following paragraph, I think "likely yes" should probably read "very probably yes".

Although incomplete, there is direct evidence that individuals of these taxa exhibit features which, according to expert agreement, seem to be necessary –although not sufficient– for consciousness (Bateson, 1991; Broom, 2013; EFSA, 2005; Elwood, 2011; Fiorito, 1986; Sneddon et al., 2014; Sneddon, 2017) (see the first criterion of the ‘likely yes’ category);

(Super minor: It also seems slightly inconsistent that the section title "Very Probably Yes" is capitalized, while others aren't.)

Comment by max_daniel on Why we think the Founders Pledge report overrates CfRN · 2019-11-05T10:28:15.462Z · score: 6 (5 votes) · EA · GW

Thanks to Sanjay and you, I found this very interesting to follow!

Just a quick note on your point 2:

On the counterfactual impact of funds. I agree that this is in principle a gap in the CEA. However, this criticism also applies to almost all CEAs I have ever seen. Accounting for all counterfactuals in CEA models is very hard.

It's quite possible that I'm missing something here, because I'm not familiar with the context, and haven't tried to dig into the details. - But FWIW, my initial reaction was that this response doesn't quite speak to what I understood to be the specific concern in the OP, i.e.:

To be clear, this section is *not* considering the opportunity cost of your (the donor’s) money going to CfRN. Rather, CfRN enables other funds to be raised and donated to REDD+ – this section is considering the opportunity cost on those funds.

I agree with you (Halstead) that the opportunity cost of the donor's money is almost never accounted for in a CEA. (This also wouldn't be required conceptually - choosing the donation target with maximal cost-effectiveness across all CEAs would be sufficient to minimize opportunity cost, conditional on the CEAs being individually correct and jointly comprehensive.)
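
(To make the parenthetical point concrete, here is a toy numerical sketch - the options, budget, and cost-effectiveness figures are all hypothetical, and nothing below is taken from either analysis. If the cost-effectiveness estimates are individually correct and jointly cover all candidate targets, then the option with the highest impact per dollar is automatically the only one with no positive opportunity cost, so no explicit opportunity-cost term is needed in the individual CEAs.)

```python
# Toy sketch: with correct and jointly comprehensive cost-effectiveness (CE)
# estimates, picking the max-CE target already minimizes the donor's opportunity
# cost. All names and numbers below are hypothetical.

budget = 10_000  # dollars
impact_per_dollar = {"option_A": 0.8, "option_B": 1.5, "option_C": 1.1}

def opportunity_cost(choice):
    """Impact of the best forgone alternative minus the impact of the chosen option."""
    impact = {name: ce * budget for name, ce in impact_per_dollar.items()}
    best_alternative = max(v for name, v in impact.items() if name != choice)
    return best_alternative - impact[choice]

for name in impact_per_dollar:
    print(f"{name}: opportunity cost = {opportunity_cost(name):,.0f} impact units")
# Only the max-CE option ("option_B" here) has a non-positive opportunity cost;
# for every other choice, a better alternative was forgone.
```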

I also agree that "accounting for all counterfactuals in CEA models is very hard".

However, from my (possibly uninformed) perspective, the force of the OP's argument is due to an appeal to a somewhat specific, contingent property of the intervention under consideration - namely that this CEA is assessing the cost-effectiveness of a donation the primary purpose of which is to cause a change in the allocation of other (here, largely government) funds.

I think this situation is not at all analogous to "money to [FHI] could have gone to global development". I in fact think it's similar to why e.g. 80,000 Hours considers plan changes as their main metric. Or, to give a hypothetical example, consider a health intervention aiming to make people buy more vegetables; when assessing the impact of this intervention, I would want to know whether people who end up buying more vegetables have reduced their expenses for grains or chocolate or clothing, whether they've taken out a loan to buy more vegetables etc. - And this concern is quite distinct from the concern that "donations to fund this intervention could also have been donated elsewhere".

Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-11-04T10:05:05.732Z · score: 2 (2 votes) · EA · GW

Thanks for mentioning this. I had meant to refer to this accident, but after spending 2 more minutes looking into it, I got the impression that there is less consensus on what happened than I thought.

Specifically, the Wikipedia article says:

However, Michael H. Maggelet and James C. Oskins, authors of Broken Arrow: The Declassified History of U.S. Nuclear Weapons Accidents, dispute this claim, citing a declassified report. They point out that the arm-ready switch was in the safe position, the high-voltage battery was not activated (which would preclude the charging of the firing circuit and neutron generator necessary for detonation), and the Rotary Safing Switch was destroyed, preventing energisation of the X-Unit (which controlled the firing capacitors). The tritium reservoir used for fusion boosting was also full and had not been injected into the weapon primary. This would have resulted in a significantly reduced primary yield and would not have ignited the weapon's fusion secondary stage.

One of the Wikipedia references is a blog post by one of the authors mentioned above, with the title Goldsboro- 19 Steps Away from Detonation. Some quotes:

Bomb 2, the object of the Goldsboro controversy, was not "one step" away from detonation. [...]
In Bomb 2, the High Voltage Thermal Battery was not activated, so no electrical power could reach any components necessary to fire the weapon and produce a nuclear explosion. [...]
While the Ready/Safe Switch in Bomb 2 showed "armed" after recovery, it was actually safe [...]. Most importantly, the high voltage necessary to fire bomb components was not present for bomb 2.
[...]
How close was the Goldsboro bomb to producing a nuclear explosion? Not at all.

I didn't attempt to understand the specific technical claims (not even whether there is a dispute about technical facts, or just a different interpretation of how to describe the same facts in terms of how far away the bomb was from detonating), and so can't form my own view.

Do you have any sense what source to trust here?

In any case, my understanding is that nuclear weapons usually had many safety features, and that it's definitely true that one or a few of them failed in several instances.

Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-30T12:29:36.626Z · score: 29 (15 votes) · EA · GW
Wars are not caused by weapons or arms races

My impression is that there has been a lot of both theoretical and empirical research on arms races in the field of international relations, and that this claim is still contested. I therefore find it hard to be confident in this claim.

For example, Siverson and Diehl (p. 214 in Midlarsky, ed., 1989) sardonically note that “[i]f there is any consensus among arms race studies, it is that some arms races lead to war and some do not.” Fifteen years later, Glaser (2004) still opens with:

Are arms races dangerous? This basic international relations question has received extensive attention. A large quantitative empirical literature addresses the consequences of arms races by focusing on whether they correlate with war, but remains divided on the answer.

On one hand, there are several theoretical models that posit mechanisms by which arms buildups could causally contribute to wars.

  • Security dilemma/spiral model: If states can't distinguish offensive from defensive military capabilities and have incomplete information about each other's goals - in particular, whether they face a "revisionist" state that would seize an opportunity to attack because it wants to acquire more territory -, their desire for security will compel them to engage in a spiral of arming (e.g. Jervis 1978, 2017[1976]). [While commonly cited as a way how arms races could cause wars, I think this idea is somewhat muddy, and in particular it often remains unclear whether the posited mechanism is an irrational stimulus-response cascade or some reason why rational actors would engage in an arms race culminating in a situation where war is a rational response to an external threat. See e.g. Glaser 2000, 2004. Similarly, it's unclear whether even in this model the arms race is a cause of war or rather a mere symptom of underlying structural causes such as incomplete information or states' inability to commit to more cooperative policies; see Fearon 1995 and Diehl & Crescenzi 1998.] A different approach of explaining escalation dynamics culminating in war is Vasquez's (1993) "steps-to-war" theory.
  • Costly deterrence: If the opportunity cost of military expenditures required for deterrence becomes too large, and if military spending could be reduced after a successful war, then it can be rational to take one's chances and attack (e.g. Powell 1993, Fearon 2018).
  • Preventive war: If a state anticipates that the balance of power would change in an adversary's favor, it might want to attack now (e.g. Levy 1987). Allison (2017) has popularized this idea as Thucydides's trap and applied it to current US-China relations. The worry that an adversary could acquire a new weapons technology arguably is a special case; as you suggest, the 2003 Iraq War is often seen as an instance, which has inspired a wave of recent scholarship (e.g. Bas & Coe 2012). (A minimal expected-utility sketch of this logic appears after this list.)
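
To spell out the preventive-war logic from the last bullet, here is a minimal expected-utility sketch. The payoff structure and all numbers are illustrative assumptions of mine, not taken from Levy, Powell, Fearon, or Allison; the point is only that a state anticipating an unfavorable shift in the balance of power can rationally prefer a costly war now to bargaining (or fighting) later from a weaker position, unless the rising state can credibly commit to a sufficiently attractive deal.

```python
# Illustrative expected-utility sketch of the preventive-war logic.
# Payoffs and probabilities are made-up assumptions, not from the cited literature.

def ev_of_fighting(p_win, war_cost, prize=1.0):
    """Expected share of a disputed 'prize' from fighting, given a win probability
    and a war cost (both expressed as fractions of the prize's value)."""
    return p_win * prize - war_cost

p_win_now, p_win_later = 0.6, 0.3   # the balance of power is expected to shift
war_cost = 0.1                      # fighting destroys 10% of the prize's value

ev_now = ev_of_fighting(p_win_now, war_cost)      # 0.5
ev_later = ev_of_fighting(p_win_later, war_cost)  # 0.2

print(f"EV(fight now)   = {ev_now:.2f}")
print(f"EV(fight later) = {ev_later:.2f}")
# If waiting means ending up in the weaker position, and the rising state cannot
# credibly commit to offering the declining state more than ev_now, attacking now
# is tempting even though war is costly for both sides (cf. Fearon 1995 on
# commitment problems).
```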

On the other hand, there has been extensive empirical research on the arms race-conflict relationship. Stoll (2017) and Mitchell & Pickering (2017) provide good surveys. My takeaway is that the conclusions of early research (e.g. Wallace 1979) should be discarded due to methodological flaws [1], but that some more recent research is interesting. For example, several studies suggest a change in the arms race-war relationship post-WW2, contra your suggestion that the relationship has been similar since at least WW1. Of course, a major limitation is that conclusions are mostly about correlations rather than causation. Some examples (emphases mine):

Gibler, Rider, and Hutchison (2005) add to the literature by addressing a potential selection bias present in many studies. They attribute this to the unit of analysis—a dispute—which presupposes that deterrence has already failed. In an attempt to resolve this, they identify arms races independently of dispute occurrence and use this to test if arms races either deter or escalate MIDs. Using a sample of strategic rival dyads between 1816 and 1993, it was shown that arms races increase the probability of both disputes and war. [Mitchell & Pickering 2017]
Gibler, Rider, and Hutchison (2005) study conventional arms races and war. [...] Only 13 of 79 wars identified by the Correlates of War Project from 1816 through 1992 were preceded by an arms race. As well, only 25 of the 174 strategic rivals identified by Thompson (2001) had an arms race before a war. [Stoll 2017]

Important empirical literature has also placed arms racing in a broader theoretical context to improve comprehension. The “steps-to-war” approach introduced by Vasquez (1993) includes arms races as one of a number factors that contribute to an escalation of violence between states. A good deal of empirical work has tested this approach in the decades since it was first introduced (Colaresi & Thompson, 2005; Senese & Vasquez, 2008; Vasquez, 1996, 2000, 2004, 2009; Vasquez & Henehan, 2001).
[...]
Beginning with Sample’s (2000) multivariate analysis, research on the arms race–war relationship has accounted for territorial disputes and other factors that may influence the outbreak of war. The literature had not, however, examined the relationship between arms races and other steps to war. Vasquez (2004) and Senese and Vasquez (2005, 2008) address this and find that other power politics practices (e.g., alliances and rivalry) do not eliminate the arms race–war relationship.
[...]
Building upon these earlier findings, Sample (2012) conducted another analysis that divided the temporal domain into three separate eras: 1816 to 2001, 1816 to 1944, and 1945 to 2001. She further controlled for state rivalries—dividing the data into disputes within rivalry and disputes outside of rivalry—and used three different measures of rivalry to compare the findings. The results showed that mutual military buildups had a substantial impact on conflict escalation to war, between both rivals and non-rivals. This suggests the relationship between arms races and war is not an artifact of rivalry (see Rider et al., 2011, for a contrary view). [Mitchell & Pickering 2017]

Rider, Findley, and Diehl (2011) [...] also study the relationship between rivalries, arms races, and war. The time period of their study is 1816–2000. They use Diehl’s operationalization of an arms race. They find that taking rivalries into account is important to understanding that relationship. In particular, locked-in rivalries (those rivalries that have experienced a large number of disputes) that experience an arms race are more likely to experience a war. [Stoll 2017]

Endnotes:

[1] E.g. Stoll (2017), emphasis mine:

The broader issue is about the basic research design used by Wallace. He did not examine whether arms races lead to war. He looked at dyads that engaged in militarized interstate disputes and asked whether if prior to the dispute the dyad engaged in rapid military growth. If so, Wallace predicted (and his results—with the caveats of other studies noted above—supported this) that the states were very likely to engage in war.
For the moment let us accept Wallace’s findings. Understanding the conditions under which a dyad that engages in a militarized interstate dispute is more likely to end in war is a contribution to understanding why wars happen. But it does not explain the relationship between arms races and war. Even if we accept Wallace’s index as a valid indicator his research design does not allow for the possibility that there may be many arms races that are not associated with disputes. Including these cases may produce very different conclusions about the linkage between arms races and war.

References:

Allison, G. (2017). Destined for war: can America and China escape Thucydides's trap?. Houghton Mifflin Harcourt.
Bas, M. A., & Coe, A. J. (2012). Arms diffusion and war. Journal of Conflict Resolution, 56(4), 651-674.
Diehl, P. F., & Crescenzi, M. J. (1998). Reconfiguring the arms race-war debate. Journal of Peace Research, 35(1), 111-118.
Fearon, J. D. (1995). Rationalist explanations for war. International organization, 49(3), 379-414.
Fearon, J. D. (2018). Cooperation, conflict, and the costs of Anarchy. International Organization, 72(3), 523-559.
Glaser, C. L. (2000). The causes and consequences of arms races. Annual Review of Political Science, 3(1), 251-276.
Glaser, C. L. (2004). When are arms races dangerous? Rational versus suboptimal arming. International Security, 28(4), 44-84.
Jervis, R. (1978). Cooperation under the security dilemma. World politics, 30(2), 167-214.
Jervis, R. (2017). Perception and Misperception in International Politics: New Edition. Princeton University Press.
Levy, J. S. (1987). Declining power and the preventive motivation for war. World Politics, 40(1), 82-107.
Powell, R. (1993). Guns, butter, and anarchy. American Political Science Review, 87(1), 115-132.
Siverson, R., & Diehl, P. (1989). In Midlarsky, M. (Ed.), Handbook of War Studies.
Vasquez, J. A. (1993). The war puzzle (No. 27). Cambridge University Press.
Wallace, M. D. (1979). Arms races and escalation: Some new evidence. Journal of Conflict Resolution, 23(1), 3-16.
Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-30T10:22:15.330Z · score: 5 (4 votes) · EA · GW

On the deterrence effect of nuclear weapons, I think the following empirical finding is interesting. (Though not conclusive, as this kind of research design cannot establish causality.)

Sample (2000) found that arms races increase the chances of both MIDs [militarized interstate disputes] and the likelihood that an MID will escalate to full-scale war. However, she discovered that this was only the case in disputes that occurred before World War II. Similarly, territorial disputes were no longer found to be associated with escalation in the post–World War II era. Sample suggested that the presence of nuclear weapons was a possible explanation for why arms races in the post-war era were found to be less likely to result in the outbreak of war than those that occurred prior. She introduced a nuclear weapons variable to test this and found that the probability of war decreased to .05 when nuclear weapons were present during a mutual military buildup. Sample’s discovery of the potential pacifying effect of nuclear weapons was an important contribution to our understanding of how quantitative and qualitative arms race–war relationships differ.

Quote from Mitchell and Pickering (2017), an encyclopedia article reviewing work on arms races (emphasis mine). On the impact of nukes, they continue (emphasis again mine):

The advent of nuclear weapons thus appears to have changed the arms race–conflict relationship. It is important to note in this regard that many policymakers seem to place nuclear weapons in a different conceptual category than conventional weapons. As Sagan (1996, p. 55) has argued, nuclear weapons “are more than tools of national security; they are political objects of considerable importance in domestic debates and internal bureaucratic struggles and can also serve as international normative symbols of modernity and identity.” There have also been attempts to explain “nuclear reversal” cases by which states forgo or give up on their programs (Campbell et al., 2004; Levite, 2003; Paul, 2000; Reiss, 1995; Rublee, 2009). Research has shown that the possession of such weapons is contingent upon both willingness and opportunity (Jo & Gartzke, 2007). While security concerns and technological capabilities are significant determinants of whether states pursue the development of nuclear weapons, the possession of such weapons is dependent upon such factors as domestic politics and international considerations (Jo & Gartzke, 2007). Furthermore, states are heavily dependent upon sensitive nuclear assistance from more advanced nuclear states when attempting to develop a nuclear arsenal (Kroenig, 2009a, 2009b).2 The nature of nuclear weapons acquisition is thus multifaceted and may not always be motivated by arms races. Once acquired, however, nuclear capabilities seem to impact the likelihood of conflict escalation and disputes between states.
[...]
[Gibler et al. 2005] controlled for several variables previously demonstrated to be predictors of conflict in a dyad. Among these was the joint presence of nuclear weapons, which was shown to prevent the outbreak of war (as no war has occurred in a dyad where both states possessed nuclear weapons). However, if both states had nuclear weapons, this was found to actually increase the probability of MID onset. Subsequent research has shown that nuclear dyads have engaged in a large number of militarized disputes short of war and may be even more likely to engage in MIDs than non-nuclear states or asymmetric pairs of states (see, e.g., Beardsley & Asal, 2009; Rauchhaus, 2009).
Gibler et al.’s (2005) discovery that nuclear dyads are less likely to engage in all-out war between rivals but more likely to engage in MIDs and hostile action short of war contributes to the broader understanding of the role nuclear weapons play in state decisions to use military force. Although a detailed discussion of nuclear deterrence is outside the scope of this article, it is important to highlight a key debate within this context. Among those who believe that nuclear weapons can serve as a deterrent (often referred to as “proliferation optimists”), some have argued that possession can deter aggression at all levels (Jervis, 1989; Waltz, 1990). Others, meanwhile, have contested that possession secures states from high-level conflict escalation (e.g., war) but increasingly contributes to lower-level hostile action (e.g., MIDs) (Snyder, 1965; Snyder & Diesing, 1977). This concept is known as the “stability-instability paradox,” which states that “to the extent that the military balance is stable at the level of all-out nuclear war, it will become less stable at lower levels of violence” (Jervis, 1984, p. 31).
Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-30T10:00:02.223Z · score: 8 (7 votes) · EA · GW
If Iran actually does develop something of an atomic arsenal, it will likely find, following the experience of all other states so armed, that the bombs are essentially useless and a very considerable waste of money and effort

This claim strikes me as particularly dubious intuitively. I don't have specific evidence in favor of my intuition, but I think I would want to see quite substantial evidence for Mueller's claim to believe it, as I think my prior is driven by the following considerations:

  • At first glance, it seems that Iran's adversaries are also concerned about the prospect of Iran acquiring nukes. For example, the US seems to be willing to pay a substantial cost in terms of tensions with European allies in order to take a tougher stance toward Iran, e.g. the Trump administration cancelling the nuclear deal. Similarly, there clearly were risks involved in deploying the Stuxnet cyber weapon against Iran. (This is an interesting case because Stuxnet was targeted specifically at Iran's nuclear program; so the potential response "Iran's adversaries are using the prospect of a nuclear Iran merely as a pretext to push through policies that hurt Iran more generally, e.g. economically" does not work in this case.)
  • More broadly, Mueller essentially seems to claim that there is some very widespread delusion: While in fact nuclear weapons are just a waste of money, all of the following actors are making the same epistemic error of believing the opposite (as indicated by their revealed preferences): most Democrats in the US; most Republicans in the US; most people across the political spectrum in Israel; the government of Saudi Arabia; both "moderate" and conservative politicians in Iran; the government of Russia, etc. What is more, incentives to correct this epistemic error surely aren't super great, but they are not zero either: If, say, a Democratic US President is making a big foreign policy blunder by accepting considerable cost to prevent Iran from acquiring nuclear weapons, why aren't there more Republicans who jump onto this opportunity to embarrass the government? Why is the prospect of a nuclear Iran able to - at least to some extent - unite a diverse set of actors such as the US, the EU, Israel, Saudi Arabia, and Russia behind a common foreign policy objective, i.e., to prevent Iran from acquiring nuclear weapons?
  • I also think the claim flies in the face of common sense. In particular, Israel is a tiny country in a vulnerable geographic position, it has been attacked several times since its inception, and Iran has consistently taken a very hostile stance toward it (and not just via cheap talk, but also by e.g. sponsoring insurgent groups in Lebanon). At first glance, I find the suggestion that the additional option of (explicit or implicit) nuclear threats against Israel would not hurt Israel's interests hard to believe. Similarly, the US has a history of recent interference in Middle Eastern countries via conventional wars, see Afghanistan and Iraq. I think an American attack on Iran with the objective of regime change within the next decade is at least plausible, and everyone knows this. On the other hand, I don't think anyone has ever tried to attack a nuclear weapons state with a regime change objective. (AFAIK the only direct military conflicts between nuclear weapons states were a 1969 border conflict between China and the USSR, and the 1999 Kargil War between India and Pakistan - both cases in which all sides clearly had much more limited objectives.) Again, I think the idea that the US would invade an Iran armed with nuclear weapons is on its face implausible. If this is true, then possessing nuclear weapons would decrease one of the arguably major risks to Iranian sovereignty - so how can they be a "considerable waste of money and effort"?
Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-29T16:57:38.295Z · score: 4 (3 votes) · EA · GW
(2) I continue to worry that the so-far (apparently) perfect safety and security record for nuclear weapons will eventually end, which could (but probably won't) have global catastrophic effects

My guess is you're very likely aware of this, but for other readers it might be worth pointing out that the safety record is "perfect" only if the outcome of interest is a nuclear detonation.

There were, however, several accidents where the conventional explosives (that would trigger a nuclear detonation in intended use cases) in a nuclear weapon detonated (but where safety features prevented a nuclear detonation). E.g., accidents involving bombers in 1958 and 1968, the latter also causing radioactive contamination of an uninhabited part of Greenland; and some accidents involving missiles, e.g. in 1980. See also Wikipedia's list of military nuclear accidents.

More broadly, the sense I got e.g. from Schlosser's book Command and Control is that within the US government and military it was a contested issue how much to weigh safety versus cost and desired military capabilities such as readiness. The book mentions several individuals working in government or at nuclear weapon manufacturers campaigning for additional safety measures or changes of risky policies, with mixed success - overall it seemed to me that the US arsenal did for decades contain weapons for which we at least couldn't rule out an accidental nuclear detonation with extreme confidence.

(Similar remarks apply to security. I forgot the details, but it doesn't inspire confidence that senior US decision-makers on some occasions worried about scenarios in which European allies such as Turkey might seize scarcely guarded US nuclear weapons during a crisis.)

Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-29T16:30:52.294Z · score: 6 (4 votes) · EA · GW
It is likely that no “loose nukes” — nuclear weapons missing from their proper storage locations and available for purchase in some way — exist

This squares well with my weakly held prior, based on crude beliefs such as that most dangers around terrorism are exaggerated.

However, I'm wondering how Mueller treats the question of whether we would know. E.g., during a 2007 incident in the US, several nuclear weapons were mistakenly loaded onto a bomber that was unguarded for hours at both its start and target locations; no-one realized the weapons were missing for about 36 hours, and the whole problem was only discovered once someone noticed the nukes in the bomber.

My guess is that nuclear weapons handling procedures would probably have eventually uncovered that some warheads were missing at the storage location. But as this incident illustrates, it's (i) unclear when, and (ii) there is room for human error (according to Schlosser's Command and Control, the incident was only possible because four different crews failed to check whether the relevant missiles were loaded with nuclear warheads, even though all of them were supposed to).

Also note that there were a very small number (2-5 based on a loose memory) of accidents in which nuclear weapons were lost and, as far as we know, never recovered. E.g., over Canada in 1950, and in the sea near Japan in 1965. Of course, most likely these weapons haven't been discovered by anyone, and thus are not "available for purchase".

So while "likely" seems plausible to me, I find it hard to have extreme confidence in there being no "loose nukes".

More relevantly, I'd hope that Mueller discusses all of these cases, or else I'd decrease my confidence in his claims.

Comment by max_daniel on Notes on 'Atomic Obsession' (2009) · 2019-10-29T16:15:06.640Z · score: 5 (3 votes) · EA · GW

I understand that you mostly just provide a summary rather than giving reasons to believe the claims in the book, but FWIW I find some of the claims hard to believe. I'll give more detail in other comments.

For now, some general questions:

  • What kind of evidence is the book based on? (E.g. archival research, interviews with decision-makers, theoretical models, ...)
  • Does Mueller have a credible debunking explanation for why most people in the national security community (as well as fields such as international relations, nuclear strategy etc., AFAICT) disagree with him?
Comment by max_daniel on Helen Toner: Building Organizations · 2019-10-24T16:16:09.660Z · score: 2 (2 votes) · EA · GW

Thanks, that's good to know.

I guess I'm mostly comparing the two EA orgs I've worked with, and my memory of informal conversations in the community, with my experience elsewhere, e.g. at a student-run global poverty nonprofit I volunteered for while at university. It's possible that my sample of EA conversations was unrepresentative, or that I'm comparing to an unusually high baseline.

Comment by max_daniel on Helen Toner: Building Organizations · 2019-10-24T11:29:48.434Z · score: 7 (5 votes) · EA · GW

Thanks for making this available!

My impression from having worked at 2 EA organizations in the last few years, and the conversations I've had in the community more generally, is that paying more attention to and sharing more thoughts about how to build orgs and other organizational issues could be very valuable. E.g. on: management, hiring, how to scale, communication, feedback, pros and cons of different decision-making structures, how to productively confront interpersonal conflict, accountability/oversight mechanisms, ...

I think one of the strengths of the EA community is that improving our reasoning as individuals has really become part of our 'cultural DNA'. I'm thinking of all the discussions on cognitive biases, forecasting, how to weigh different types of evidence, etc. If the cultural status and available knowledge on 'organizational improvement' could become as strong as for 'self-improvement', that would make me really optimistic about what we can achieve.

Comment by max_daniel on What are the best arguments for an exclusively hedonistic view of value? · 2019-10-21T20:07:50.264Z · score: 5 (5 votes) · EA · GW

Here are some overviews:

https://plato.stanford.edu/entries/hedonism/#EthHed

https://plato.stanford.edu/entries/well-being/#Hed

My guess is that ultimately you'll just find yourself in an irresolvable standoff of differing intuitions with people who favor a different view of value. Philosophers have debated this question for millennia - or at least for decades, depending on how we count - and haven't reached agreement, so I think that, in the absence of some methodological revolution, settling this question is hopeless. (Though of course, you clarifying your own thinking, or arriving at a view you feel more confident in yourself, seem feasible.)

Comment by max_daniel on Reality is often underpowered · 2019-10-20T18:34:08.619Z · score: 9 (6 votes) · EA · GW

I agree with your points, but from my perspective they somewhat miss the mark.

Specifically, your discussion seems to assume that we have a fixed, exogenously given set of propositions or factors X, Y, ..., and that our sole task is to establish relations of correlation and causation between them. In this context, I agree on preferring "wide surveys" etc.

However, in fact, doing research also requires the following tasks:

  • Identify which factors X, Y, ... to consider in the first place.
  • Refine the meaning of the considered factors X, Y, ... by clarifying their conceptual and hypothesized empirical relationships to other factors.
  • Prioritize which of the myriads of possible correlational or causal relationships between the factors X, Y, ... to test.

I think that depth can help with these three tasks in ways in which breadth can't.

For instance, in Will's example, my guess is that the main value of considering the history of Objectivism does not come from moving my estimate for the strength of the hypothesis "X = romantic involvement between movement leaders -> Y = movement collapses". Rather, the source of value is including "romantic involvement between movement leaders" in the set of factors I'm considering in the first place. Only then am I able to investigate its relation to outcomes of interest, whether by a "wide survey of cases" or otherwise. Moreover, I might only have learned about the potential relevance of "romantic involvement between movement leaders" by looking in some depth at the history of Objectivism. (I know very little about Objectivism, and so don't know if this is true in this instance; it's certainly possible that the issue of romantic involvement between Objectivist leaders is so well known that it would be mentioned in any 5-sentence summary one would encounter during a breadth-first process. But it also seems possible that it's not, and I'm sure I could come up with examples where the interesting factor was buried deeply.)

My model here squares well with your observation that a "common feature among superforecasters is they read a lot", and in fact makes a more specific prediction: I expect that we'd find that superforecasters read a fair amount (say, >10% of their total reading) of deep, small-n case studies - for example, historical accounts of a single war or a single economic policy, or biographies.

[My guess is that my comment is largely just restating Will's points from his above comment in other words.]


(FWIW, I think some generators of my overall model here are:

  • Frequently experiencing disagreements I have with others, especially around AI timelines and takeoff scenarios, as noticing a thought like "Uh... I just think your overall model of the world lacks depth and detail." rather than "Wait, I've read about 50 similar cases, and only 10 of them are consistent with your claim".
  • Semantic holism, or at least some of the arguments usually given in its favor.
  • Some intuitive and fuzzy sense that, in the terminology of this Julia Galef post, being a "Hayekian" has worked better for me than being a "Planner", including for making epistemic progress.
  • Some intuitive and fuzzy sense of what I've gotten out of "deep" versus "broad" reading. E.g. my sense is that reading Robert Caro's monumental, >1,300-page biography of New York city planner Robert Moses has had a significant impact on my model of how individuals can attain political power, albeit by adding a bunch of detail and drawing my attention to factors I previously wouldn't have considered rather than by providing evidence for any particular hypothesis.)

Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-09T19:20:10.529Z · score: 5 (3 votes) · EA · GW

I agree. However, your reply makes me think that I didn't explain my view well: I do, in fact, believe that it is not obvious that, say, setting up seed banks is "better than doing nothing" - and more generally, that nothing is obviously better than doing nothing.

I suspect that my appeal to "diverting attention and funding" as a reason for this view might have been confusing. What I had in mind here was not an argument about opportunity cost: while it's true that an actor that set up a seed bank could perhaps have done better by doing something else instead (say, donating to ALLFED), that is not the point I wanted to make.

Instead, I was thinking of effects on future decisions (potentially by other actors), as illustrated by the following example:

  • Compare the world in which, at some time t0, some actor A decides to set up a seed bank (say, world w1) with the world w2 in which A decides to do nothing at t0.
  • It could be the case that, in w2, at some later time t1, a different actor B makes a decision that:
    • Causes a reduction in the risk of extinction from nuclear war that is larger than the effect of setting up a seed bank at t0. (This could even be, say, the decision to set up two seed banks.)
    • Happened only because A did not set up a seed bank at t0, and so in particular does not occur in world w1. (Perhaps a journalist in w2 wrote a piece decrying the lack of seed banks, which inspired B - who thus far was planning to become an astronaut - to devote her career to setting up seed banks.)

Of course, this particular example is highly unlikely. And worlds w1 and w2 would differ in lots of other aspects. But I believe considering the example is sufficient to see that extinction risk from nuclear war might be lower in world w2 than in w1, and thus that setting up a seed bank is not obviously better than doing nothing.

Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-09T17:14:15.698Z · score: 6 (3 votes) · EA · GW

I agree, and in fact "nothing is obviously good" describes my (tentative) view reasonably well, at least if (i) the bar for 'obviously' is sufficiently high and (ii) 'good' is to be understood as roughly 'maximizing long-term aggregate well-being.'

Depending on the question one is trying to answer, this might not be a useful perspective. However, I think that when our goal is to actually select and carry out an altruistic action, this perspective is the right one: I'd want to simply compare the totality of the (wellbeing-relevant) consequences with the relevant counterfactual (e.g., no action, or another action), and it would seem arbitrary to me to exclude certain effects because they are due to a general or indirect mechanism.

(E.g., suppose for the sake of the argument that I'm going to die in a nuclear war that would not have happened in a world without seed banks. - I'd think that my death makes the world worse, and I'd want someone deciding about seed banks today to take this disvalue into account; this does not depend on whether the mechanism is that nuclear bombs can be assembled from seeds, or that seed banks have crowded out nuclear deproliferation efforts, or whatever.)

Comment by max_daniel on My experience on a summer research programme · 2019-10-09T16:59:05.466Z · score: 4 (3 votes) · EA · GW
Minor: you may want to have shorter paragraphs - to partition the text more - to increase readability.

Thank you, I much appreciate specific suggestions. This makes sense to me, and I'll try to keep it in mind for future posts.

Comment by max_daniel on My experience on a summer research programme · 2019-10-09T12:17:43.751Z · score: 24 (7 votes) · EA · GW

[I co-directed the summer programme described in the OP, together with Rose Hadshar. Views are my own.]

Hi Denise and Siebe, thank you very much for your comments. I think they are valuable contributions to the conversation about the opportunities and risks of work at EA organizations.

I'm actually concerned (and have been so for a while) that many people in the community overestimate the value of work at EA organizations compared to alternatives. For example, if a friend told me that the only option they've considered for their summer was this summer programme, I'd strongly advise them to also look at opportunities outside the EA community. I think that widely promoted messages, e.g. about the value of 'direct work', have contributed to this (what I think is a) misperception (NB I don't believe that the sources of these messages actually intended to promote this perception, more that the nuance of their intentions got inadvertently lost as messages were propagated through the community). I therefore particularly appreciate that you point out some specific downsides of a summer programme such as the one discussed here, and give some actionable reasons to consider alternatives. (And that you do this in a public place, accessible to those with limited opportunity to have high-bandwidth conversations with people working at EA organizations or with other relevant experience.)

That being said, my current view is that this summer programme can be a good choice for some people in some circumstances. Based on my experience this summer and the feedback from our summer fellows, I'd also guess that for several of them the summer fellowship was actually more valuable than what they'd have done otherwise. I'd like to gesture at my reasoning, in the hope that this is helpful for people considering applying to a similar programme. (Caveat: as the OP also points out, this programme might look quite different next year, if it'll be run at all. I recommend checking with whoever will be directing it.)

  • Just to prevent misunderstandings, I'd like to mention that this year's programme differed in major ways from previous versions (and was run by different people). In fact, I had participated in the programme in 2018, and had a mixed experience; when planning this year's programme, I deliberately tried to learn from this experience (I had also reached out to my fellow fellows to ask them about their experience). E.g., I had appreciated the intellectual autonomy and the opportunity to informally talk to staff at CEA, FHI, and GPI (which shared an office at the time), but I had also often felt lonely and isolated and would have liked more supervision for one of my two projects. As a result, for this year we designed several activities aimed at fostering a collaborative and open culture among the summer fellows, gave them opportunities to introduce themselves and their projects to other staff at the office, and paired each summer fellow with a mentor.
  • Overall, I think my experience last year was very unlike the experience of this year's fellows: For example, after a prompt to consider what they'd have done otherwise, this year 10 out of 11 people answered 9 or 10 on a 1-10 scale for "How much better or worse do you feel about the summer fellowship than about what you would have done otherwise?" (and the other one with an 8); on the same scale for "If the same program (same organizers, same structure etc.) happened next year, how strongly would you recommend a friend (with similar background to you before the fellowship) to apply?", 9 answered with a 9 or 10 (and the other ones with 7 and 8). While I have concerns about the validity of such self-reported impressions for assessing the value of the programme (which I think will depend on things like future career outcomes), I'm confident that I'd have given significantly less positive responses about my experience last year. 
  • Denise, you're right that this year's programme did "not by default result in job offers being made to the best performers," and wasn't intended to do so. I don't believe this is necessarily a downside, however. In my view, one of the main benefits of this programme is to help people decide whether they'll want to aim for similar work in the future. I think this will often be valuable for people who wouldn't be available for full employment after the programme, e.g. because they'll continue their university studies. In addition, I believe that several of the benefits of this year's programme were enabled by us fostering a relatively non-evaluative atmosphere, which I worry would not have been possible in a programme designed to identify future hires. So overall I think that (a) programmes that are essentially extended "work trials" and (b) programmes that won't usually lead to future employment have different upsides and downsides, and that there is room for both. (I do believe it's important to be transparent about which of the two one is running. I'm not aware of any of this year's summer fellows erroneously believing the programme would result in job offers, but I'd consider it a severe communications mistake on our part if it had happened.) As an aside, I think that a programme like this can to some extent further people's careers. For example, for the summer fellows I mentored I'm now able to provide a credible reference for applications to jobs in EA, and in fact expect to do so over the next few weeks in 1-2 cases. I do, however, agree that these benefits are importantly different from the prospect of a relevant job offer, and I doubt that these benefits alone would make the summer programme worthwhile.
  • My impression is that our mentoring setup worked quite well, at least from the fellows' perspective: 7 said they found mentoring "very valuable", 3 "valuable", and 1 "~neutral;" similarly, 10 out of 11 people said they'd like their mentor to mentor other people in the future, 9 of them maximally strongly. This seems consistent with responses on specific ways in which mentoring provided value and ways in which it could have been better. That being said, I do think there is room for improvement - for example, we weren't in all cases able to find a mentor with relevant domain expertise. My loose impression from this year was that cases in which the mentor had domain expertise were a much "safer bet" in terms of mentoring being valuable. In other cases, I think we were able to overcome this challenge to some extent: for example, I worked with two of my mentees to help them reach out to various domain experts, as a result of which they had several conversations; I still don't expect this provided as much value as direct and regular supervision from someone with domain expertise would have. (On the other hand, I think domain expertise is helpful all else equal, but far from sufficient for good mentoring. We did try to provide guidance to mentors, though I think with mixed success.) Anecdotally, based on my own experience in academia (and the distribution among my friends), my guess is that we've provided significantly more mentoring than the median academic internship, though except in 1-4 cases not as much quality-weighted mentorship as in the right tail (say, >95th percentile) of the academic mentorship distribution. (My wild guess is the situation is better in industry, though with other downsides, e.g. less autonomy.)
  • As mentioned earlier, I think the programme did a decent job at helping people to decide if they want to work in similar roles in the future - and does so in a way that is less costly to people than, say, applying for a full-time job at these organizations. So in this sense I do think it had career-related benefits, albeit by improving future decisions rather than by getting a job immediately. For instance, I think I would have benefitted significantly from the opportunity to work at CEA or FHI for a few weeks before deciding whether to accept the offer I got for FHI's Research Scholars Programme, a 2-year role. (As I said, I participated in last year's summer programme - however, I needed to decide to apply to RSP before the summer programme started, and I got the RSP offer quite early during the summer.) In fact, I believe that in at least one case the summer fellowship helped a fellow to decide that they do not want to accept an offer to work at an EA organization, and I think this was among the most tangible, valuable outcomes of this summer. Siebe, I agree with you that this has to weighed against the potential to get more information about other paths by doing internships elsewhere. However, I think that conversely these other internships would provide less information on how much one would like to work at, say, FHI. I think that it depends on the specifics of someone's circumstances which of these things is more valuable to explore.
  • Summer fellows were paid. (With the exception of a very small number of people who couldn't be paid for legal reasons.) That being said, I do believe there might be room for improvement here: for instance, on a scale from 1 to 5 on whether they felt the amount of financial support was adequate, 8 out of 11 fellows responded with 4 or 5 - I think that ideally this would be 11 out of 11. I'm also concerned that 10 out of 11 fellows indicated that their savings stayed constant or (in 3 cases) decreased; while I don't think this is catastrophic given the short duration of the programme, I think it suggests the programme is not nearly as accessible to people in a less comfortable financial situation as I would like.
  • Siebe, I agree that "EA-branded organizations are young and often run these programs without much prior experience." For example, my highest degree is a master's in an unrelated field, and I've had maybe 2 years of work experience relevant to me running such a programme and mentoring researchers (though potentially somewhat less or much more depending on how you count things I did during my university years outside of a paid role). I agree this is a drawback, all else being equal. However, I think it might be relevant to note that (i) Rose Hadshar, the co-director of this year's programme, has been the Project Manager for the Research Scholars Programme since last fall, which I think is very relevant work experience (more so than my previous one), (ii) we consulted with several people with significantly more relevant experience, including staff involved in running previous versions of the summer programme. I also think the "all else being equal" clause is important, and possibly not true in expectation: for example, I think it's fair to say that Rose and I really cared about this programme and were highly motivated to make it go as well as we could. By contrast, I've heard many stories from my friends in which they did an internship at an established institution, working with people that had decades of work experience, but came away very frustrated because they felt the internship programme had been poorly thought out, consisted of boring work, or suffered from an uninspiring organizational culture. Clearly, many internship programmes outside of the EA community won't suffer from these problems (and some at EA organizations will), but overall I believe it's worth finding out as much as possible about the specifics of an internship programme one is considering rather than relying too much on proxies such as prior experience. 

I think most of these points provide complementary information to the considerations you raised, rather than undermining them. I think they're relevant if someone wanted to form an overall view on whether to apply to such a programme, but I'm not trying to dispute the value of your comments. I'm curious to what extent the points I raised change your overall view on the upsides and downsides of this summer programme, or if you have any further questions.

Comment by max_daniel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-09T11:36:23.819Z · score: 5 (3 votes) · EA · GW

Thanks, this is helpful!

I hadn't considered the possibility that therapeutic approaches prior to Gendlin might have included focusing-like techniques, and especially that he's claiming to have synthesized what was already there. This makes me less confident in my impression. What you say about the textbooks you read definitely also moves my view somewhat.

(By contrast, what you wrote about studies on focusing probably makes me somewhat reduce my guess about the strength of the evidence for focusing, but obviously I'm highly uncertain here as I'm extrapolating from weak cues - studies by Gendlin himself, correlational claims of intuitively dubious causal validity - rather than having looked at the studies themselves.)

This all still doesn't square well with my own experiences with and models of therapy, but they may well be wrong or idiosyncratic, so I don't put much weight on them. In particular, 20-30% of sessions still seems higher than what I would guess, but overall this doesn't seem sufficiently important or action-relevant that I'd be interested in getting to the bottom of it.

Comment by max_daniel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T23:24:10.643Z · score: 9 (5 votes) · EA · GW

I don't think this is very important for my overall view on CFAR's curriculum, but FWIW I was quite surprised by you describing Focusing as

a technique that forms the basis of a significant fraction of modern therapeutic techniques and I consider a core skill for doing emotional processing. I've benefited a lot from this, and it also has a pretty significant amount of evidence behind it (both in that it's pretty widely practiced, and in terms of studies), though only for the standards of behavioral psychology, so I would still take that with a grain of salt.

Maybe we're just using "significant fraction" differently, but my 50% CI would have been that focusing is part of 1-3 of the 29 different "types of psychotherapy" I found on this website (namely "humanistic integrative psychotherapy", and maybe "existential psychotherapy" or "person-centred psychotherapy and counselling"). [Though to be fair, on an NHS page I found, humanistic therapy was one of 6 mentioned paradigms.] Weighting by how common the different types of therapy are, I'd expect an even more skewed picture: my impression is that the most common types of therapy (at least in rich, English-speaking countries and Germany, which are the countries I'm most familiar with) are cognitive-behavioral therapy and various kinds of talking therapy (e.g. psychoanalytic, i.e. broadly Freudian), and I'd be surprised if any of those included focusing. My guess is that less than 10% of psychotherapy sessions happening in the above countries include focusing, potentially significantly less than that.

My understanding had been that focusing was developed by Eugene Gendlin, who, after training in continental philosophy and publishing on Heidegger, became a major though not towering (unlike, say, Freud) figure in psychotherapy - maybe among the top decile, but not the top percentile, in terms of influence among the hundreds of people who founded their own "schools" of psychotherapy.

I've spent less than one hour looking into this, and so might well be wrong about any of this - I'd appreciate corrections.

Lastly, I'd appreciate some pointers to studies on focusing. I'm not doubting that they exist - I'm just curious because I'm interested in psychotherapy and mental health, but couldn't find them quickly (e.g. I searched for "focusing Gendlin" on Google Scholar).

Comment by max_daniel on Andreas Mogensen's "Maximal Cluelessness" · 2019-10-08T22:35:56.691Z · score: 1 (1 votes) · EA · GW

Thank you for raising this, I think I was too quick here in at least implicitly suggesting that this defence would work in all cases. I definitely agree with you that we have some desires that are about the future, and that it would misdescribe our desires to conceive of all of them as being about present causal antecedents.

I think a more modest claim I might be able to defend would be something like:

The justification of everyday actions does not require an appeal to preferences with the property that, epistemically, we ought to be clueless about their content.

For example, consider the action of not wandering blindly into the road. I concede that some ways of justifying this action may involve preferences whose contents we ought to be clueless about - perhaps the preference to still be alive in 40 years is such a preference (though I don't think this is obvious, cf. "dodge the bullet" above). However, I claim there would also be preferences, sufficient for justification, that don't suffer from this cluelessness problem, even though they may be about the future - perhaps the preference to still be alive tomorrow, to meet my friend tonight, or to give a lecture next week.

Comment by max_daniel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T10:23:18.538Z · score: 7 (5 votes) · EA · GW

Seconded.

(I'm wondering whether this phenomenon could also be due to people using downvotes for different purposes. For example, I use votes roughly to convey my answer to the question "Would I want to see more posts like this on the Forum?", and so I frequently upvote comments I disagree with. By contrast, someone might use votes to convey "Do I think the claims made in this comment are true?".)

Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-07T23:26:54.101Z · score: 3 (2 votes) · EA · GW

Thanks for this suggestion. I agree that in general brainstorming and debates are best kept separate. I also wouldn't want to discourage anyone from posting an answer to this question - in fact, I'm unusually interested in more answers to this question. I'm not sure whether you were saying that you in particular felt discouraged from ideating in response to seeing my comment, but if so, I'm sorry. I'm wondering whether you would have liked me to explain why I was expressing my disagreement, and to explicitly say that I value your suggesting answers to the original question (which I do)?

Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-07T18:03:22.356Z · score: 9 (6 votes) · EA · GW

Again, I disagree this is obvious. Just some ways in which this could be negative:

  • It could turn out that some of the research required for rapid vaccine development can be misused or could exacerbate other risks.
  • The availability of rapid vaccine manufacturing methods could lead to a false sense of confidence among decisionmakers, leading them to effectively neglect other important prevention and mitigation measures against biorisk.
Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-07T18:01:17.524Z · score: 8 (5 votes) · EA · GW

Again, I disagree that any of these is obvious:

  • Ease of communication also opens up more opportunities for rash decisions and premature messages, can reduce the time available for decisions, and creates the potential for this infrastructure to be misused by malign actors.
  • Biorisk mitigation being higher status could contribute to making the dangers of bioweapons more widely known among malign actors, thus making it more likely that they will be developed.
  • Pakistan not having nukes would alter the geopolitical situation in South Asia in major ways, with repercussions for the relationships between the major powers India, China, and the US. I find it highly non-obvious what the net effect of this would be.
Comment by max_daniel on What actions would obviously decrease x-risk? · 2019-10-07T17:57:47.500Z · score: 9 (6 votes) · EA · GW

If "uncontroversially" means something like "we can easily see that their net effect is to reduce extinction risk," then I disagree. To give just two examples, the known availability of alternative foods might decrease the perceived cost of nuclear war, thus making it more likely; and it might instill a sense of false confidence in decision-makers, effectively diverting attention and funding from more effective risk-reduction measures. I'm perhaps willing to believe that, after weighing up all these considerations, we can all agree that the net effect is to reduce risk, but I think this is far from obvious.