Posts

Max_Daniel's Shortform 2019-12-13T11:17:10.883Z · score: 21 (7 votes)
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z · score: 34 (22 votes)

Comments

Comment by max_daniel on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-27T08:35:16.491Z · score: 36 (17 votes) · EA · GW

On "become a specialist on Russia or India": I've sometimes wondered if there might be a case for focusing on countries that could become great powers, or otherwise geopolitically important, on somewhat longer timescales, say 2-5 decades.

[ETA: It occurred to me that the reasoning below also suggests that EA community building in these countries might be valuable. See also here, here, here, and here for relevant discussion.]

One example could be Indonesia: it is the 4th most populous country, and its economy is the 16th largest by nominal GDP but has consistently grown faster than those of high-income countries. It's also located in an area that many think will become increasingly geopolitically important; e.g., Obama's foreign policy was sometimes described as a 'pivot to the Pacific'. (Caveat: I've spent less than 10 minutes looking into Indonesia.)

More generally, we could ask: if we naively extrapolated current trends forward by a couple of decades, which countries would emerge as, say, "top 10 countries" on relevant metrics by 2050?

Some arguments in favor:

  • Many EAs are in their twenties, and in the relevant kinds of social science disciplines and professional careers my impression is that people tend to have most of their impact in their 40s to 50s. (There is a literature on scientific productivity by age; see e.g. Simonton, 1997, for a theoretical model that also surveys empirical findings. I haven't checked how consistent or convincing these findings are.) Focusing on countries that become important in a couple of decades would align well with this timeline.
  • It's plausible to me that due to 'short-term biases' in traditional foreign policy, scholarship etc., these countries will be overly neglected by the 'academic and policy markets' relative to their knowable expected value for longtermists.
  • For this reason, it might also be easier to have an outsized influence. E.g. in all major foreign policy establishments, there already are countless people specializing in Russia. However, there may be the opportunity to be one of the very few, say, 'Indonesia specialists' by the time there is demand for them (and also to have an influence on the whole field of Indonesia specialists due to founder effects).

Some arguments against:

  • The whole case is just armchair speculation. I have no experience in any of the relevant areas, don't understand how they work etc.
  • If it's true that, say, Indonesia isn't widely viewed as important today, this also means there will be few opportunities (e.g. jobs, funding, ...) to focus on it.
  • Even if the basic case were sound, I expect it would be an attractive option for only very few people. For example, I'd guess this path would involve living in, say, Indonesia for months to years, which is not something many people will be prepared to do.
Comment by max_daniel on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-27T08:10:29.976Z · score: 2 (1 votes) · EA · GW

I agree (with the caveat that I have much less relevant domain knowledge than Rohin, so you should presumably give less weight to my view).

Several years ago, I attended a small conference on 'formal mathematics', i.e. proof assistants, formal verification, and related issues. As far as I remember, all of the real-world applications mentioned there were of the type "catching bugs in your implementations of ideas". For example, I remember someone saying they had verified software used in air traffic control. This does suggest formal verification can be useful in real-world safety applications. However, I'd guess (very speculative) that these kinds of applications won't benefit much from further research, and wouldn't need to be done by someone with an understanding of EA - they seem relatively standard and easy to delegate, and at least in principle doable with current tools.

Comment by max_daniel on Max_Daniel's Shortform · 2020-06-27T08:07:29.089Z · score: 2 (1 votes) · EA · GW

Great, thank you!

I saw that you asked Howie for input - are there other people you think it would be good to talk to on this topic?

Comment by max_daniel on Max_Daniel's Shortform · 2020-06-27T08:05:36.443Z · score: 4 (2 votes) · EA · GW

Thanks for the suggestion. I plan to make this list more discoverable once I feel like it's reasonably complete, e.g. by turning it into its own top-level post or appending it to a top-level post writeup of my research on this topic.

Comment by max_daniel on Problem areas beyond 80,000 Hours' current priorities · 2020-06-26T13:49:09.868Z · score: 4 (2 votes) · EA · GW

I agree that most people seem to think this is true about wild-animal welfare. However, I don't think this means wild-animal welfare is well described as a longtermist issue. The definition of longtermism is about when most of the value of our actions is going to accrue, not about when we expect to take more direct actions. So I think the natural reading of 'longtermist issue' is 'an issue that we think is important because working on it will have good consequences for the very long-run future' (or something even stronger, like being among the most valuable issues from that perspective), not 'an issue we don't expect to directly work on in the short term'.

Comment by max_daniel on Max_Daniel's Shortform · 2020-06-26T13:41:01.297Z · score: 28 (9 votes) · EA · GW

[See this research proposal for context. I'd appreciate pointers to other material.]

[WIP, not comprehensive] Collection of existing material on 'impact being heavy-tailed'

Conceptual foundations

  • Newman (2005) provides a good introduction to power laws, and reviews several mechanisms generating them, including: combinations of exponentials; inverses of quantities; random walks; the Yule process [also known as preferential attachment]; phase transitions and critical phenomena; self-organized criticality.
  • Terence Tao, in Benford’s law, Zipf’s law, and the Pareto distribution, offers a partial explanation for why heavy-tailed distributions are so common empirically.
  • Clauset et al. (2009[2007]) explain why it is very difficult to empirically distinguish power laws from other heavy-tailed distributions (e.g. log-normal). In particular, seeing a roughly straight line in a log-log plot is not sufficient to identify a power law, despite such inferences being popular in the literature. Referring to power-law claims by others, they find that “the distributions for birds, books, cities, religions, wars, citations, papers, proteins, and terrorism are plausible power laws, but they are also plausible log-normals and stretched exponentials.” (p. 26) (See the sketch below this list for a quick numerical illustration.)
  • Lyon (2014) argues that, contrary to what is commonly believed, the Central Limit Theorem cannot explain why normal distributions are so common. (The critique also applies to the analogous explanation for the log-normal distribution, an example of a heavy-tailed distribution.) Instead, he suggests an appeal to the principle of maximum entropy.
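
Since the point made by Clauset et al. is easy to check numerically, here is a minimal, self-contained sketch (my own illustration, not taken from any of the papers above) of how data from a log-normal distribution, which is not a power law, can still produce a nearly straight line on a log-log plot:

```python
# Draw log-normal samples and check how straight the empirical survival
# function (CCDF) looks on log-log axes.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Empirical complementary CDF: P(X > x) at the sorted sample points.
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

# Restrict to the upper tail and fit a straight line in log-log space.
mask = (x > np.quantile(x, 0.9)) & (ccdf > 0)
log_x, log_ccdf = np.log(x[mask]), np.log(ccdf[mask])
slope, _ = np.polyfit(log_x, log_ccdf, 1)
corr = np.corrcoef(log_x, log_ccdf)[0, 1]

print(f"fitted slope: {slope:.2f}, log-log correlation: {corr:.3f}")
# The correlation typically comes out very close to -1, i.e. the tail looks
# almost perfectly straight on a log-log plot despite not being a power law.
```

(The more careful approach, per Clauset et al., is to fit candidate distributions by maximum likelihood and compare them, e.g. via likelihood ratios, rather than eyeballing log-log plots.)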

Impact in general / cause-agnostic

EA community building

Global health

Misc

  • Will MacAskill, in an interview by Lynette Bye, on the distribution of impact across work time for a fixed individual: "Maybe, let's say the first three hours are like two thirds the value of the whole eight-hour day. And then, especially if I'm working six days a week, I'm not convinced the difference between eight and ten hours is actually adding anything in the long term."
Comment by max_daniel on Max_Daniel's Shortform · 2020-06-26T12:46:17.072Z · score: 5 (3 votes) · EA · GW

[Context: This is a research proposal I wrote two years ago for an application. I'm posting it here because I might want to link to it. I plan to spend a few weeks looking into a subquestion: how heavy-tailed is EA talent, and what does this imply for EA community building?]

Research proposal: Assess claims that "impact is heavy-tailed"

Why is this valuable?

EAs frequently have to decide how many resources to invest in estimating the utility of their available options; e.g.:

  • How much time to invest to identify the best giving opportunity?
  • How much research to do before committing to a small set of cause areas?
  • When deciding whether to hire someone now or a more talented person in the future, when is it worth waiting?

One major input to such questions is how heavy-tailed the distribution of altruistic impact is: The better the best options are relative to a random option, the more valuable it is to identify the best options.

Claims like “impact is heavy-tailed” are widely accepted in the EA community—with major strategic consequences (e.g. [1], “Talent is high variance”)—but have sometimes been questioned [2, 3, 4, 5].

These claims are often made in an imprecise way, which makes it hard to estimate the extent of their practical implications (should you spend a month or a year doing research before deciding?), and hard to check whether one actually disagrees with them. E.g., is the claim that we can now see that Einstein did much more for progress in physics than 90% of the world population at his time, or that in 1900 our subjective expected value for the progress Einstein would make would have been much higher than the value for a random physics graduate student, or something in between?

Suggested approach

1. Collect several claims of this type that have been made.
2. Review statistical measures of heavy-tailedness (e.g. tail-index estimators; see the sketch after this list).
3. Limit the project’s scope appropriately. E.g., focus just on the claim that “talent is heavy-tailed” and its implications for community building.
4. Refine claims into precise candidate versions, i.e. something like “looking backwards, the empirical distribution of the number of published papers per researcher looks like it was sampled from a distribution that doesn’t have finite variance” rather than “researcher talent is heavy-tailed”.
5. Assess the veracity of those claims, based on published arguments about them and general properties of heavy-tailed distributions (e.g. [6]). Perhaps gather additional data.
6. Write up the results in an accessible way that highlights the true, precise claims and their practical implications.
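
As one example of the kind of measure meant in step 2, and of a precise claim as in step 4, here is a rough sketch (my own illustration, not part of the original proposal) of a Hill-type tail-index estimate; a tail index below 2 corresponds to infinite variance, and below 1 even the mean is infinite:

```python
import numpy as np

def hill_tail_index(data, k):
    """Hill estimator of the tail index, using the k largest observations.

    `data` could e.g. be papers per researcher or donations per donor."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]  # descending order
    if k >= len(x) or np.any(x[:k + 1] <= 0):
        raise ValueError("need k < n and positive tail observations")
    log_excess = np.log(x[:k]) - np.log(x[k])
    return 1.0 / np.mean(log_excess)

# Sanity check on synthetic data with a known tail index of 1.5 (infinite variance).
rng = np.random.default_rng(1)
synthetic = rng.pareto(1.5, size=50_000) + 1.0
print(f"estimated tail index: {hill_tail_index(synthetic, k=2_000):.2f}")  # roughly 1.5
```

(The choice of k matters a lot in practice, and such point estimates are only one input; the likelihood-based model comparisons discussed by Clauset et al. above are the more careful option.)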

Concerns

  • There probably are good reasons why “impact is heavy-tailed” is widely accepted. I’m therefore unlikely to produce actionable results.
  • The proposed level of analysis may be too general.
Comment by max_daniel on Problem areas beyond 80,000 Hours' current priorities · 2020-06-26T08:00:00.573Z · score: 9 (2 votes) · EA · GW

Yes, FWIW I was also quite surprised to see wild-animal welfare described as a longtermist issue. (Caveat: I haven't listened to the podcast with Persis, so it's possible it contains an explanation that I've missed.) So I'd also be interested in an answer to this.

Comment by max_daniel on Max_Daniel's Shortform · 2020-06-16T09:35:40.564Z · score: 2 (1 votes) · EA · GW
Task X for which the claim seems most true for me is "coming up with novel and important ideas". This seems to be very heavy-tailed, and not very teachable.

I agree that the impact from new ideas will be heavy-tailed - i.e. a large share of the total value from new ideas will come from the few best ideas, and from a few people. I'd also guess that this kind of creativity is not that teachable. (Though I'm not super certain about either.)

I feel less sure that 'new ideas' is among the things most needed in EA, when discounted by the difficulty of generating them. (I do think there probably are a number of undiscovered and highly important ideas out there, partly based on EA's track record and partly based on a sense that there are a lot of things we don't know or understand about how to make the long-term future go well.) If I had to guess where to optimally invest flexible resources at the margin, I feel highly uncertain whether it would be in "find people who're good at generating new ideas" versus things like "advance known research directions" or "accumulate AI-weighted influence/power".

Comment by max_daniel on Max_Daniel's Shortform · 2020-06-15T12:09:35.812Z · score: 4 (2 votes) · EA · GW

I agree that, basically by definition, higher talent means higher returns on learning. My claim was not that talent is unimportant, but roughly that the answer to "Why don't we have anyone in the community who can do X?" more often is "Because no-one has spent enough effort practicing X." than it is "Because there is no EA who is sufficiently talented that they could do X well given an optimal environment, training etc.".

(More generally, I agree that the OP could do a better job at framing the debate, setting out the key considerations and alternative views etc. I hope to write an improved version in the next few months.)

Comment by max_daniel on EA Survey 2019 Series: Community Information · 2020-06-12T19:20:01.699Z · score: 6 (3 votes) · EA · GW

Thank you for your excellent work on the survey! Like the other posts in this series, I found this very interesting.

One quick suggestion: it's unclear to me if respondents interpreted "EA job" as (a) "job at an EA organization" or (b) "high-impact job according to EA principles". E.g. doing AI safety research in academia would be an EA job according to (b) but not (a) [except maybe for a few exceptions such as CHAI]. It could be good to clarify this in future surveys.

(This is assuming that "Too hard to get EA job" is all information respondents saw. If there was additional information indicating whether (a) or (b) was intended, it would be good to clarify this in the post.)

Comment by max_daniel on What are the leading critiques of "longtermism" and related concepts · 2020-06-04T08:26:10.913Z · score: 5 (3 votes) · EA · GW

I also like the arguments in The Precipice. But per my above comment, I'm not sure if they are arguments for longtermism, strictly speaking. As far as I recall, The Precipice argues for something like "preventing existential risk is among our most important moral concerns". This is consistent with, but neither implied nor required by, longtermism: if you e.g. thought that there were 10 other moral concerns of similar weight, and you chose to mostly focus on those, I don't think your view would be 'longtermist' even in the weak sense. This is similar to how someone who thinks that protecting the environment is somewhat important but doesn't focus on this concern would not be called an environmentalist.

Comment by max_daniel on What are the leading critiques of "longtermism" and related concepts · 2020-06-04T08:18:54.072Z · score: 15 (6 votes) · EA · GW

[Not primarily a criticism of your comment, I think you probably agree with a lot of what I say here.]

Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

Yes, but in addition your view in normative ethics needs to have suitable features, such as:

  • A sufficiently aggregative axiology. Else the belief that there will be much more of all kinds of stuff in the future won't imply that the overall goodness of the world mostly hinges on its long-term future. For example, if you think total value is a bounded function of whatever the sources of value are (e.g. more happy people are good up to a total of 10 people, but additional people add nothing), longtermism may not go through.
  • [Only for 'deontic longtermism':] A sufficiently prominent role of beneficence, i.e. 'doing what has the best axiological consequences', in the normative principles that determine what you ought to do. For example, if you think that keeping some implicit social contract with people in your country trumps beneficence, longtermism may not go through.

(Examples are to illustrate the point, not to suggest they are plausible views.)

I'm concerned that some presentations of "non-consequentialist" reasons for longtermism sweep under the rug the important difference between the actual longtermist claim (that improving the long-term future is of particular concern relative to other goals) and the weaker claim (that improving or preserving the long-term future is one ethical consideration among many, with it being underdetermined how these considerations trade off against each other).

So for example, sure, if we don't prevent extinction we are uncooperative toward previous generations because we frustrate their 'grand project of humanity'. That might be a good, non-consequentialist reason to prevent extinction. But without specifying the full normative view, it is really unclear how much to focus on this relative to other responsibilities.

Note that I actually do think that something like longtermist practical priorities follow from many plausible normative views, including non-consequentialist ones. Especially if one believes in a significant risk of human extinction this century. But the space of such views is vast, and which views are and aren't plausible is contentious. So I think it's important to not present longtermism as an obvious slam dunk, or to only consider (arguably implausible) objections that completely deny the ethical relevance of the long-term future.

Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-06-03T09:08:15.889Z · score: 7 (4 votes) · EA · GW

Thanks, I think this is a useful clarification. I'm actually not sure if I even clearly distinguished these cases in my thinking when I wrote my previous comments, but I agree the thing you quoted is primarily relevant to when end-to-end stories will be externally validated. (By which I think you mean something like: they would lead to an 'objective' solution, e.g. a maths proof, if executed without major changes.)

The extent to which we agree depends on what counts as an end-to-end story. For example, consider someone working on ML transparency who claims their research is valuable for AI alignment. My guess is:

  • If literally everything they can say when queried is "I don't know how transparency helps with AI alignment, I just saw the term in some list of relevant research directions", then we both are quite pessimistic about the value of that work.
  • If they say something like "I've made the deliberate decision not to focus on research for which I can fully argue it will be relevant to AI alignment right now. Instead, I just focus on understanding ML transparency as best as I can because I think there are many scenarios in which understanding transparency will be beneficial.", and then they say something showing they understand longtermist thought on AI risk, then I'm not necessarily pessimistic. I'd think they won't come up with their own research agenda in the next two years, but depending on the circumstances I might well be optimistic about that person's impact over their whole career, and I wouldn't necessarily recommend them to change their approach. I'm not sure what you'd think, but I think initially I read you as being pessimistic in such a case, and this was partly what I was reacting against.
  • If they give an end-to-end story for how their work fits within AI alignment, then all else equal I consider that to be a good sign. However, depending on the circumstances I might still think the best long-term strategy for that person is to postpone the direct pursuit of that end-to-end story and instead focus on targeted deliberate practice of some of the relevant skills, or at least complement the direct pursuit with such deliberate practice. For example, if someone is very junior, and their story says that mathematical logic is important for their work, I might recommend they grab a logic textbook and work through all the exercises. My guess is we disagree on such cases, but that the disagreement is somewhat gradual; i.e. we both agree about extreme cases, but I'd more often recommend more substantial deliberate practice.
Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-06-03T07:28:16.637Z · score: 8 (3 votes) · EA · GW

Yes, agree. Though anecdotally my impression is that Wiles is an exception, and that his strategy was seen as quite weird and unusual by his peers.

I think I agree that in general there will almost always be a point at which it's optimal to switch to a more end-to-end strategy. In Wiles's case, I don't think his strategy would have worked if he had switched as an undergraduate, and I don't think it would have worked if he had lived 50 years earlier (because the conceptual foundations used in the proof had not been developed yet).

This can also be a back-and-forth. E.g. for Fermat's Last Theorem, perhaps number theorists were justified in taking a more end-to-end approach in the 19th century because there had been little effort using then-modern tools; and indeed, I think partly stimulated by attempts to prove FLT (and actually proving it in some special cases), they developed some of the foundations of classical algebraic number theory. Perhaps by then people understood that the conjecture resisted attempts to prove it directly given then-current conceptual tools, and at that point it would have become more fruitful to spend more time on less direct approaches, though they could still be guided by heuristics like "it's useful to further develop the foundations of this area of maths / our understanding of this kind of mathematical object because we know of a certain connection to FLT, even though we wouldn't know how exactly this could help in a proof of FLT". Then, perhaps in Wiles's time, it was time again for more end-to-end attempts etc.

I'm not confident that this is a very accurate history of FLT, but reasonably confident that the rough pattern applies to a lot of maths.

Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-06-02T08:14:20.528Z · score: 2 (1 votes) · EA · GW

I think having a roadmap, and choosing subproblems as close as possible to the final problem, are often good strategies, perhaps in a large majority of cases.

However, I think there at least three important types of exceptions:

  • When it's not possible to identify any clear subproblems or their closeness to the final problem is unclear (perhaps AI alignment is an example, though I think it's less true today than it was, say, 10 years ago - at least if you buy e.g. Paul Christiano's broad agenda).
  • When the close, or even all known, subproblems have resisted solutions for a long time, e.g. Riemann hypothesis.
  • When one needs tools/subproblems that seem closely related only after having invested a lot of effort investigating them, rather than in advance. E.g. squaring the circle - "if you want to understand constructions with ruler and compass, do a lot of constructions with ruler and compass" was a bad strategy. Though admittedly it's unclear if one can identify examples of this type in advance unless they are also examples of one of the previous two types.

Also, I of course acknowledge that there are limits to the idea of exploring subproblems that are less closely related. For example, no matter what mathematical problem you want to solve, I think it would be a very bad strategy to study dung beetles or to become a priest. And to be fair, I think at least in hindsight the idea of studying close subproblems will almost always appear to be correct. To return to the example of squaring the circle: once people had realized that the set of points you can construct with ruler and compass is closed under basic algebraic operations in the complex plane, it was possible and relatively easy to see how certain problems in algebraic number theory were closely related. So the problem was not so much that it's intrinsically better to focus on less related subproblems, but rather that people didn't properly understand what would count as helpfully related.
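
For concreteness, here is a compressed sketch of the standard argument behind that example (my summary of textbook material, so worth double-checking the details):

```latex
% Why squaring the circle needed "modern" tools: sketch of the standard argument.
\begin{itemize}
  \item A complex number $z$ is constructible with ruler and compass iff it sits at the top of a
        tower of fields $\mathbb{Q} = K_0 \subset K_1 \subset \dots \subset K_n \ni z$ with each
        extension of degree two, $[K_{i+1} : K_i] = 2$.
  \item Hence every constructible number is algebraic over $\mathbb{Q}$, of degree $2^k$ for some $k$.
  \item Squaring the circle amounts to constructing $\sqrt{\pi}$; but $\pi$ is transcendental
        (Lindemann, 1882), so $\sqrt{\pi}$ is not algebraic at all, and the construction is impossible.
\end{itemize}
```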

Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-06-02T07:27:30.055Z · score: 3 (2 votes) · EA · GW

Yes, good points. I basically agree. I guess this could provide another argument in favor of Buck's original view, namely that the AI alignment problem is young and so worth attacking directly. (Though there are differences between attacking a problem directly and having an end-to-end story for how to solve it, which may be worth paying attention to.)

I think your view is also borne out by some examples from the history of maths. For example, the Weil conjectures were posed in 1949, and it took "only" a few decades to prove them. However, some of the key steps were known from the start; it just required a lot of work and innovation to complete them. And so I think it's fair to characterize the process as a relatively direct, and ultimately successful, attempt to solve a big problem. (Indeed, this is an example of the effect where the targeted pursuit of a specific problem led to a lot of foundational/theoretical innovation, which has much wider uses.)

Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-06-02T07:13:24.772Z · score: 4 (2 votes) · EA · GW

My reading is that career/status considerations are only one of at least two major reasons Tao mentions. I agree those may be less relevant in the AI alignment case, and are not centrally a criterion for how good a problem solving strategy is.

However, Tao also appeals to the required "mathematical preparation", which fits with you mentioning skilling up and losing naivety. I do think these are central criteria for how good a problem solving strategy is. If I want to build a house, it would be a bad strategy to start putting it together with my bare hands; it would be better to first build a hammer and other tools, and understand how to use them. Similarly, it would be better to acquire and understand the relevant tools before attempting to solve a mathematical problem.

Comment by max_daniel on Some thoughts on deference and inside-view models · 2020-05-31T16:10:44.311Z · score: 5 (3 votes) · EA · GW

I agree, and also immediately thought of pure mathematics as a counterexample. E.g., if one's most important goal was to prove the Riemann hypothesis, then I claim (based on my personal experience of doing maths, though e.g. Terence Tao seems to agree) that it'd be a very bad strategy to only do things where one has an end-to-end story for how they might contribute to a proof of the Riemann hypothesis. This is true especially if one is junior, but I claim it would be true even for a hypothetical person eventually proving the Riemann hypothesis, except maybe in some of the very last stages of them actually figuring out the proof.

I think the history of maths also provides some suggestive examples of the dangers of requiring end-to-end stories. E.g., consider some famous open questions in ancient mathematics that were phrased in the language of geometric constructions with ruler and compass, such as whether it's possible to 'square the circle'. That question was settled 2,000 years after it was posed, using modern number theory. But if you had insisted that everyone working on it had an end-to-end story for how what they're doing contributes to solving that problem, I think there would have been a real risk that people would have kept thinking purely in ruler-and-compass terms and that modern number theory would never have been developed in the first place.

--

The Planners vs. Hayekians distinction seems related. The way I'm understanding Buck is that he thinks that, at least within AI alignment, a Planning strategy is superior to a Hayekian one (i.e. roughly one based on optimizing robust heuristics rather than an end-to-end story).

--

One of the strongest defenses of Buck's original claim I can think of would appeal specifically to the "preparadigmatic" stage of AI alignment. I.e. roughly the argument would be: sure, perhaps in areas where we know of heuristics that are robustly good to pursue it can sometimes be best to do so; however, the challenge with AI alignment precisely is that we do not know of such heuristics, hence there simply is no good alternative to having an end-to-end story.

Comment by max_daniel on Aligning Recommender Systems as Cause Area · 2020-05-25T10:34:27.622Z · score: 3 (2 votes) · EA · GW

Thanks for sharing your perspective. I find it really helpful to hear reactions from practitioners.

Comment by max_daniel on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-18T08:43:43.555Z · score: 20 (12 votes) · EA · GW

This discussion (incl. child comments) was one of the most interesting things I read in the last few weeks, maybe months. - Thank you for having it publicly. :)

Comment by max_daniel on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-18T08:40:05.448Z · score: 7 (4 votes) · EA · GW

(FWIW, when reading the above discussion I independently had almost exactly the same reaction as the following before reading it in Richard's latest comment:

This argument feels to me like saying "We shouldn't keep building bigger and bigger bombs because in the limit of size they'll form a black hole and destroy the Earth."

)

Comment by max_daniel on Pangea: The Worst of Times · 2020-04-07T08:08:31.494Z · score: 6 (4 votes) · EA · GW

Minor: The in-text links to endnotes link to the page for making a new forum post, which presumably is not intended.

Comment by max_daniel on Pangea: The Worst of Times · 2020-04-07T08:04:27.717Z · score: 3 (2 votes) · EA · GW

Minor: The following sentence right before section 4 starts seems jumbled?

This is true for when the magnitude of warming was comparable to what we are in for in the next 200 years, and when, on a regional basis, the rate of warming was comparable to what we are in for in the next 200 years.
Comment by max_daniel on Pangea: The Worst of Times · 2020-04-07T08:03:48.484Z · score: 16 (8 votes) · EA · GW

Excellent post. I highly value such summaries of known and relevant factual information, and views on lessons we can learn. Thank you for putting this together.

I'm also curious what prompted you to look into this topic?

Comment by max_daniel on Max_Daniel's Shortform · 2020-03-25T16:05:59.869Z · score: 22 (6 votes) · EA · GW

[Epistemic status: info from the WHO website and Wikipedia, but I overall invested only ~10 min, so might be missing something.]

Under the 2005 International Health Regulations (IHR), states have a legal duty to respond promptly to a PHEIC.
[Note by me: The International Health Regulations include multiple instances of "public health emergency of international concern". By contrast, they include only one instance of "pandemic", and this is in the term "pandemic influenza" in a formal statement by China rather than the main text of the regulation.]
  • The WHO declared a PHEIC due to COVID-19 on January 30th.
  • The OP was prompted by a claim that the timing of the WHO using the term "pandemic" provides an argument against epistemic modesty. (Though I appreciate this was less clear in the OP than it could have been, and maybe it was a bad idea to copy my Facebook comment here anyway.) From the Facebook comment I was responding to:
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.
  • Since the WHO declaring a PHEIC seems much more consequential than them using the term "pandemic", the timing of the PHEIC declaration seems more relevant for assessing the merits of the WHO response, and thus for any argument regarding epistemic modesty.
  • Since the PHEIC declaration happened significantly earlier, any argument based on the premise that it happened too late is significantly weaker. And whatever the apparent initial force of this weaker argument, my undermining response from the OP still applies.
  • So overall, while the OP's premise appealing to major legal/institutional consequences of the WHO using the term "pandemic" seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of "informed amateurs" over experts on COVID-19.
Comment by max_daniel on Max_Daniel's Shortform · 2020-03-23T16:31:48.605Z · score: 5 (3 votes) · EA · GW

Thank you for pointing this out! It sounds like my guess was probably just wrong.

My guess was based on a crude prior on international organizations, not anything I know about the WHO specifically. I clarified the epistemic status in the OP.

Comment by max_daniel on Max_Daniel's Shortform · 2020-03-23T09:29:58.406Z · score: 6 (4 votes) · EA · GW

[Epistemic status: speculation based on priors about international organizations. I know next to nothing about the WHO specifically.]

[On the WHO declaring COVID-19 a pandemic only (?) on March 12th. Prompted by this Facebook discussion on epistemic modesty on COVID-19.]

- [ETA: this point is likely wrong, cf. Khorton's comment below. However, I believe the conclusion that the timing of WHO declarations by itself doesn't provide a significant argument against epistemic modesty still stands, as I explain in a follow-up comment below.] The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make.

- The optimal time for the WHO to declare a pandemic is primarily determined by these legal and institutional consequences. Whether COVID-19 is or will in fact be a pandemic in the everyday or epidemiological sense is an important input into the decision, but not a decisive one.

- Without familiarity with the WHO and the legal and institutional system it is a part of, it is very difficult to accurately assess the consequences of the WHO declaring a pandemic. Therefore, it is very hard to evaluate the timing of the WHO's declaration without such familiarity. And being maximally well-informed about COVID-19 itself isn't even remotely sufficient for an accurate evaluation.

- The bottom line is that the WHO officially declaring that COVID-19 is a pandemic is a totally different thing from any individual persuasively arguing that COVID-19 is or will be a pandemic. In a language that accurately reflected such differences in meaning, my saying that COVID-19 is a pandemic and the WHO declaring COVID-19 a pandemic would be expressed using different words. It is simply not the primary purpose of this WHO speech act to be an early, accurate, reliable, or otherwise good indicator of whether "COVID-19 is a pandemic", to predict its impact, or anything similar. It isn't primarily epistemic in any sense.

- If just based on information about COVID-19 itself someone confidently thinks that the WHO ought to have declared a pandemic earlier, they are making a mistake akin to the mistake reflected by answering "yes" to the question "could you pass me the salt?" without doing anything.

So did the WHO make a mistake by not declaring COVID-19 to be a pandemic earlier, and if so how consequential was it? Well, I think the timing was probably suboptimal just because my prior is that most complex institutions aren't optimized for getting the timing of such things exactly right. But I have no idea how consequential a potential mistake was. In fact, I'm about 50-50 on whether the optimal time would have been slightly earlier or slightly later. (Though substantially earlier seems significantly more likely optimal than substantially later.)

Comment by max_daniel on Activism for COVID-19 Local Preparedness · 2020-03-02T19:56:19.920Z · score: 3 (2 votes) · EA · GW

I've also heard that 40-70% figure (e.g. from German public health officials like the director of Germany's equivalent of the CDC). But I'm confused for the reason you stated. So I'd also appreciate an answer.

Some hypotheses (other than the 40-70% being just wrong) I can think of, though my guess is none of them is right:

(a) The 40-70% is a very long-term figure, like the risk of lifetime infection assuming that the virus becomes permanently endemic.

(b) There being many more undetected than confirmed cases.

(c) The slowdown in new cases in Hubei only being temporary, i.e. expecting spread to accelerate again and reach 40-70% there.

(d) Thinking that the virus will spread more widely outside of Hubei, e.g. because one expects less drastic prevention/mitigation measures. [ETA: This comment seems to point to (d).]

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-26T18:00:06.552Z · score: 7 (3 votes) · EA · GW
I don't think that peculiarities of what kinds of EA work we're most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people's views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.

I agree we have important disagreements other than what kinds of EA work we're most enthusiastic about. While not of major relevance for the original issue, I'd still note that I'm surprised by what you say about various other people's views on EA work, and I suspect it might not be true for me: while I agree there are some highly-valuable tasks that could be done by recent undergrads, I'd guess that if I made a list of the most valuable possible contributions then a majority of the entries would require someone to have a lot of AI-weighted generic influence/power (e.g. the kind of influence over AI a senior government member responsible for tech policy has, or a senior manager in a lab that could plausibly develop AGI), and that because of the way relevant existing institutions are structured this would usually require a significant amount of seniority. (It's possible for some smart undergrads to embark on a path culminating in such a position, but my guess is this is not the kind of thing you had in mind.)

I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you're reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. [...]
I am not intending to include beliefs and preferences in my definition of "great person", except for preferences/beliefs like being not very altruistic, which I do count.

I don't think these two claims are plausibly consistent, at least if "people like Evan" is also meant to exclude beliefs and preferences: For instance, if someone with Evan-level abilities doesn't believe that thinking in their own time and putting results online is a worthwhile thing to do, then the identification mechanism you appeal to will fail. More broadly, someone's actions will generally depend on all kinds of beliefs and preferences (e.g. on what they are able to do, on what people around them expect, on other incentives, ...) that are much more dependent on the environment than relatively "innate" traits like fluid intelligence. The boundary between beliefs/preferences and abilities is fuzzy, but as I suggested at the end of my previous comment, I think for the purpose of this discussion it's most useful to distinguish changes in value we can achieve (a) by changing the "environment" of existing people vs. (b) by adding more people to the pool.

Could you name a profile of such a person, and which of the types of work I named you think they'd maybe be as good at as the people I named?

What do you mean by "profile"? Saying what properties they have, but without identifying them? Or naming names or at least usernames? If the latter, I'd want to ask the people if they're OK with me naming them publicly. But in principle happy to do either of these things, as I agree it's a good way to check if my claim is plausible.

I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it?

Maybe. When I said "they might be great", I meant something roughly like: if it were my main goal to find people great at task X, I'd want to invest at least 1-10 hours per person finding out more about how good they'd be at X (this might mean talking to them, giving them some sort of trial tasks, etc.). I'd guess that for between 5 and 50% of these people I'd eventually end up concluding they should work full-time doing X or similar.

Also note that originally I meant to exclude practice/experience from the relevant notion of "greatness" (i.e. it just includes talent/potential). So for some of these people my view might be something like "if they did 2 years of deliberate practice, they would then have a 5% to 50% chance of meeting the bar for X". But I now think that probably the "marginal value from changing the environment vs. marginal value from adding more people" operationalization is more useful, which would require "greatness" to include practice/experience to be consistent with it.

If we disagree about the bar, I suspect that me having bad models about some of the examples you gave explains more of the disagreement than me generally dismissing high bars. "Functional programming" just doesn't sound to me like the kind of task with high returns to super-high ability levels, and similarly for community building; but it's plausible that there are bundles of tasks involving these things where it matters a lot if you have someone whose ability is 6 instead of 5 standard deviations above the mean (not always well-defined, but you get the idea). E.g. if your "task" is "make a painting that will be held in similar regard to the Mona Lisa" or "prove P != NP" or "be as prolific as Ramanujan at finding weird infinite series for pi", then, sure, I agree we need an extremely high bar.

For what it's worth, I think that you're not credulous enough of the possibility that the person you talked to actually disagreed with you--I think you might doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.

Thanks for pointing this out. FWIW, I think there likely is both substantial disagreement between me and that person and that I misunderstood their view in some ways.

Comment by max_daniel on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-26T12:56:22.696Z · score: 21 (12 votes) · EA · GW

You might also be interested in John Halstead's and Johannes Ackva's recent Climate & Lifestyle Report for Founders Pledge. They point out that taking into account policy effects can dramatically change the estimated climate impact of lifestyle choices, and on children specifically they say that:

The biggest discrepancy here concerns the climate effect of having children. For the reasons given, we think our estimate of the effect of having children is more accurate for people living in the EU or US states with strong climate policy, such as California, New York, as well as other states in the Northeast. Indeed, even outside the US states with strong climate policy, we think the estimate accounting for policy is much closer to the truth, since emissions per head are also declining at the national level, and climate policy is likely to strengthen across the US in the next few decades.

After taking into account policy effects, they find that the climate impact of having children is comparable to some other lifestyle choices such as living car-free. (I.e. it's not the case that the climate impact of having children is orders of magnitude larger, as one might naively think w/o considering policy effects.)

For more detail, see their section 3.

Comment by max_daniel on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-26T12:44:52.278Z · score: 18 (10 votes) · EA · GW

I agree that blanket endorsements of anti-natalism (whether for climate or other reasons) in EA social media spaces are concerning, and I appreciate you taking the time to write down why you think they are misguided.

FWIW, my reaction to this post is: you present a valid argument (i.e. if I believed all your factual premises, then I'd think your conclusion follows), but this post by itself doesn't convince me that the following factual premise is true:

The magnitude of [your kids'] impact on the climate is likely to be much, much smaller than any of the three other factors I have raised.

At first glance, this seems highly non-obvious to me. I'd probably at least want to see a back-of-the-envelope calculation before believing this is right.

(And I'm not sure it is: I agree that your kids' impact on the climate would be more causally distant than their impact on your own well-being, your career, etc. However, conversely, there is a massive scale difference: impacts on climate affect the well-being of many people in many generations, not just your own. Notably, this is also true for impacts on your career, in particular if you try to improve the long-term future. So my first-pass guess is that the expected impact will be dominated by the non-obvious comparison of these two "distant" effects.)

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-21T12:35:19.747Z · score: 4 (2 votes) · EA · GW

Thanks, very interesting!

I agree the examples you gave could be done by a recent graduate. (Though my guess is the community building stuff would benefit from some kinds of additional experience that have trained relevant project management and people skills.)

I suspect our impressions differ in two ways:

1. My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)

2. Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you've identified. E.g., I can think of ~5 people off the top of my head who I think might be great at one of the things you listed, and if I shared your view of the value of these activities I'd probably think they should stop doing what they're doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people - and then there is the large number of people neither of us has any information about.

Two other thoughts I had in response:

  • It might be quite relevant whether "great people" refers only to talent or also to beliefs and values/preferences. E.g. my guess is that there are several people who could be great at functional programming who either don't want to work for MIRI, or don't believe that this would be valuable. (This includes e.g. myself.) If, to count as a "great person", you need to have the right beliefs and preferences, I think your claim that "EA needs more great people" becomes stronger. But I think the practical implications would differ from the "greatness is only about talent" version, which is the one I had in mind in the OP.
  • One way to make the question more precise: At the margin, is it more valuable (a) to try to add high-potential people to the pool of EAs or (b) change the environment (e.g. coordination, incentives, ...) to increase the expected value of activities by people in the current pool. With this operationalization, I might actually agree that the highest-value activities of type (a) are better than the ones of type (b), at least if the goal is finding programmers for MIRI and maybe for community building. (I'd still think that this would be because, while there are sufficiently talented people in EA, they don't want to do this, and it's hard to change beliefs/preferences and easier to get new smart people excited about EA. - Not because the community literally doesn't have anyone with a sufficient level of innate talent. Of course, this probably wasn't the claim the person I originally talked to was making.)
Comment by max_daniel on Thoughts on electoral reform · 2020-02-19T11:17:37.843Z · score: 18 (6 votes) · EA · GW

(The following summary [not by me] might be helpful to some readers not familiar with the book:

https://casparoesterheld.com/2017/06/18/summary-of-achen-and-bartels-democracy-for-realists/ )

Comment by max_daniel on How do you feel about the main EA facebook group? · 2020-02-19T11:14:51.014Z · score: 4 (3 votes) · EA · GW

I almost never read the EA Facebook group. But I tend to generally dislike Facebook, and there simply is no Facebook group I regularly use. I think I joined the EA Facebook group in early 2016, though it's possible that it was a few months earlier or later. (In fact, I didn't have a Facebook account previously. I only created one because a lot of EA communication seemed to happen via Facebook, which I found somewhat annoying.) Based on my very infrequent visits, I don't have a sense that it changed significantly. But I'm not sure if I would have noticed.

Comment by max_daniel on Max_Daniel's Shortform · 2020-02-19T10:31:10.026Z · score: 21 (12 votes) · EA · GW

[On https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ ]

  • [ETA: After having talked to more people, it now seems to me that disagreeing on this point more often explains different reactions than I thought it would. I'm also now less confident that my impression that there wasn't bad faith from the start is correct, though I think I still somewhat disagree with many EAs on this. In particular, I've also seen plenty of non-EA people who don't plausibly have a "protect my family" reaction say the piece felt like a failed attempt to justify a negative bottom line that was determined in advance.] (Most of the following doesn't apply in cases where someone is acting in bad faith and is determined to screw you over. And in fact I've seen the opposing failure mode of people assuming good faith for too long. But I don't think this is a case of bad faith.)
  • I've seen some EAs react pretty negatively or angrily to that piece. (Tbc, I've also seen different reactions.) Some have described the article as a "hit piece".
  • I don't think it qualifies as a hit piece. It reads more like a piece that's independent/pseudo-neutral/ambiguous and tries to stick to dry facts/observations, but that in some places provides a distorted picture by failing to be charitable / arguably missing the point / being one-sided and selective in the observations it reports.
  • I still think that reporting like this is net good, and that the world would be better if there was more of it at the margin, even if it has flaws similarly severe to that one. (Tbc, I think there would have been a plausibly realistic/achievable version of that article that would have been better, and that there is fair criticism one can direct at it.)
  • To put it bluntly, I don't believe that having even maximally well-intentioned and intelligent people at key institutions is sufficient for achieving a good outcome for the world. I find it extremely hard to have faith in a setup that doesn't involve a legible system/structure with things like division of labor, checks and balances, procedural guarantees, healthy competition, and independent scrutiny of key actors. I don't know if the ideal system for providing such outside scrutiny will look even remotely like today's press, but currently it's one of the few things in this vein that we have for nonprofits, and Karen Hao's article is an (albeit flawed) example of it.
  • Whether this specific article was net good or not seems pretty debatable. I definitely see reasons to think it'll have bad consequences, e.g. it might crowd out better reporting, might provide bad incentives by punishing orgs for trying to do good things, ... I'm less wedded to a prediction of this specific article's impact than to the broader frame for interpreting and reacting to it.
  • I find something about the very negative reactions I've seen worrying. I of course cannot know what they were motivated by, but some seemed like the way I would expect someone to react who feels personally hurt because they judge a situation as being misunderstood, who feels like they need to defend themselves, or who feels like they need to rally to protect their family. I can relate to misunderstandings being a painful experience, and have sympathy for it. But I also think that if you're OpenAI, or "the EA community", or anyone aiming to change the world, then misunderstandings are part of the game, and that any misunderstanding involves at least two sides. The reactions I'd like to see would try to understand what has happened and engage constructively with how to productively manage the many communication and other challenges involved in trying to do something that's good for everyone without being able to fully explain your plans to most people. (An operationalization: If you think this article was bad, I think that ideally the hypothesis "it would be good if we had better reporting" would enter your mind as readily as the hypothesis "it would be good if OpenAI's comms team and leadership had done a better job".)
Comment by max_daniel on Max_Daniel's Shortform · 2020-02-14T18:50:21.982Z · score: 20 (14 votes) · EA · GW

[Is longtermism bottlenecked by "great people"?]

Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people".

I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks.

Given the size and talent distribution of the EA community my guess is that for most apparent X, the issue either is that (a) X is ~impossible, or (b) there are people in EA who could do X, but the relevant actors cannot identify them, or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so.

My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduces that risk. If these activities are scalable projects with short feedback loops on some intermediate metrics (e.g. running some super-large-scale machine learning experiments), then I expect I would hear a lot of claims like "we really need someone who can do X". I think it's just a general property of a certain kind of fast-growing organization that's doing practical things in the world that everything constantly seems like it's on fire. But I would also expect that, if I poked a bit at these claims, it would usually turn out that X is something like "contribute to this software project at the pace and quality level of our best engineers, w/o requiring any management time" or "convince some investors to give us much more money, but w/o anyone spending any time transferring relevant knowledge". If you see that things break because X isn't done, even though something like X seems doable in principle (perhaps you see others do it), it's tempting to think that what you need is more "great people" who can do X. After all, people generally are the sort of stuff that does things, and maybe you've actually seen some people do X. But it still doesn't follow that in your situation "great people" are the bottleneck ...

Curious if anyone has examples of tasks X for which the original claims seem in fact true. That's probably the easiest way to convince me that I'm wrong.

Comment by max_daniel on Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' · 2020-01-30T13:23:13.564Z · score: 2 (1 votes) · EA · GW

Thank you for sharing your reaction!

Would be interested to hear if the authors have thought through this.

I haven't, but it's possible that my coauthors have. I generally agree that it might be worthwhile to think along the lines you suggested.

Comment by max_daniel on Max_Daniel's Shortform · 2020-01-17T12:00:49.073Z · score: 2 (1 votes) · EA · GW

Thanks for sharing your reaction! There is some chance that I'll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it's maybe only 15% likely that I'll think this is the best use of my time within the next 6 months.

Comment by max_daniel on Long-term investment fund at Founders Pledge · 2020-01-09T13:09:56.067Z · score: 14 (10 votes) · EA · GW

That sounds great! I find the arguments for giving (potentially much) later intriguing and underappreciated. (If I had to allocate a large amount of money myself, I'm not sure what I'd end up doing. But overall it seems good to me if there is at least the option to invest.) I'd be very excited for such a fund to exist - partly because I expect that setting it up and running it will provide a bunch of information on empirical questions relevant for deciding whether investing into such a fund beats giving now.

Comment by max_daniel on Max_Daniel's Shortform · 2020-01-08T14:26:22.019Z · score: 7 (5 votes) · EA · GW

[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]

1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like irrespective of the specific goals and resources of the actors the mechanism consists of; e.g., whether some key AI lab is a nonprofit or a publicly traded company; who would decide by what rules/voting scheme how windfall profits would be redistributed; etc. By (b) I mean something like how much the CEO of a key firm, or their advisors, care about the long-term future. -- I can see why relying mostly on (b) is attractive, e.g. it's arguably more tractable; however, some EA thinking (mostly from the Bay Area / the rationalist community to be honest) strikes me as focusing on (b) for reasons that seem ahistoric or otherwise dubious to me. So I don't feel convinced that what I perceive to be a very stark focus on (b) is warranted. I think that figuring out if there are viable strategies that rely more on (a) is better done from within institutions that have no ties with key TAI actors, and also might be best done by people who don't quite match the profile of the typical new EA who got excited about Superintelligence or HPMOR. Overall, I think that making more academic research in broadly "policy relevant" fields happen would be a decent strategy if one ultimately wanted to increase the amount of thinking on type-(a) theories of impact.

2. What's the theory of impact if TAI happens in more than 20 years? More than 50 years? I think it's not obvious whether it's worth spending any current resources on influencing such scenarios (I think they are more likely but we have much less leverage). However, if we wanted to do this, then I think it's worth bearing in mind that academia is one of few institutions (in a broad sense) that has a strong track record of enabling cumulative intellectual progress over long time scales. I roughly think that, in a modal scenario, no-one in 50 years is going to remember anything that was discussed on the EA Forum or LessWrong, or within the OpenAI policy team, today (except people currently involved); but if AI/TAI was still (or again) a hot topic then, I think it's likely that academic scholars will read academic papers by Dafoe, his students, the students of his students etc. Similarly, based on track records I think that the norms and structure of academia are much better equipped than EA to enable intellectual progress that is more incremental and distributed (as opposed to progress that happens by way of 'at least one crisp insight per step'; e.g. the Astronomical Waste argument would count as one crisp insight); so if we needed such progress, it might make sense to seed broadly useful academic research now. 

[...]

My view is closer to "~all that matters will be in the specifics, and most of the intuitions and methods for dealing with the specifics are either sort of hard-wired or more generic/have different origins than having thought about race models specifically". A crux here might be that I expect most of the tasks involved in dealing with the policy issues that would come up if we got TAI within the next 10-20 years to be sufficiently similar to garden-variety tasks involved in familiar policy areas that, as a first pass: (i) if theoretical academic research were useful, we'd see more stories of the kind "CEO X / politician Y's success was due to idea Z developed through theoretical academic research", and (ii) prior policy/applied strategy experience is the background most useful for TAI policy, with usefulness increasing with the overlap in content and relevant actors; e.g.: working with the OpenAI policy team on pre-TAI issues > working within Facebook on a strategy for how to prevent the government from splitting up the firm in case a left-wing Democrat wins > business strategy for a tobacco company in the US > business strategy for a company outside of the US that faces little government regulation > academic game theory modeling. That's probably too pessimistic about the academic path, and of course it'll depend a lot on the specifics (you could start in academia to then get into Facebook etc.), but you get the idea.

[...]

Overall, the only somewhat open question for me is whether ideally we'd have (A) ~only people working quite directly with key actors or (B) a mix of people working with key actors and more independent ones e.g. in academia. It seems quite clear to me that the optimal allocation will contain a significant share of people working with key actors [...]

If there is a disagreement, I'd guess it's located in the following two points: 

(1a) How big are the countervailing downsides from working directly with, or at institutions having close ties with, key actors? Here I'm mostly concerned about incentives distorting the content of research and strategic advice. I think the question is broadly similar to: If you're concerned about the impacts of British rule on India in the 1800s, is it best to work within the colonial administration? If you want to figure out how to govern externalities from burning fossil fuels, is it best to work in the fossil fuel industry? I think the cliché left-wing answer to these questions is too confident in "no" and overlooks important upsides, but I'm concerned that some standard EA answers in the AI case are too confident in "yes" and overlook risks. Note that I'm most concerned about kind of "benign" or "epistemic" failure modes: I think it's reasonably easy to tell people with broadly good intentions apart from sadists or even personal-wealth maximizers (at least in principle -- whether this will get implemented is another question); I think it's much harder to spot cases like key people incorrectly believing that it's best if they keep as much control for themselves/their company as possible because after all they are the ones with both good intentions and an epistemic advantage (note that all of this really applies to a colonial administration with little modification, though here in cases such as the "Congo Free State" even the track record of "telling personal-wealth maximizers apart from people with humanitarian intentions" maybe isn't great -- also NB I'm not saying that this argument would necessarily be unsound; i.e. I think that in some situations these people would be correct).

(1b) To what extent do we need (a) novel insights as opposed to (b) an application of known insights or common-sense principles? E.g., I've heard claims that the sale of telecommunication licenses by governments is an example where post-1950 research-level economics work in auction theory has had considerable real-world impact, and AFAICT this kind of auction theory strikes me as reasonably abstract and in little need of having worked with either governments or telecommunication firms. Supposing this is true (I haven't really looked into this), how many opportunities of this kind are there in AI governance? I think the case for (A) is much stronger if we need little to no (a), as I think the upsides from trust networks etc. are mostly (though not exclusively) useful for (b). FWIW, my private view actually is that we probably need very little of (a), but I also feel like I have a poor grasp of this, and I think it will ultimately come down to what high-level heuristics to use in such a situation.


Comment by max_daniel on Max_Daniel's Shortform · 2019-12-17T11:56:08.786Z · score: 61 (24 votes) · EA · GW

[Some of my high-level views on AI risk.]

[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. / It might sometimes be useful to be able to link to this.]

[In this post I generally state what I think before updating on other people's views – i.e., what's sometimes known as 'impressions' as opposed to 'beliefs.']

Summary

  • Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on.
  • The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the ‘rules of the game’ under which interactions between actors will play out.

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular,

  • My overarching interest is to make the lives of as many moral patients as possible go as well as possible, no matter where or when they live; and
  • I think that in the world we find ourselves in – it could have been otherwise – this goal entails strong longtermism, i.e. the claim that "the primary determinant of the value of our actions today is how those actions affect the very long-term future."

Less standard but not highly unusual (within EA) high-level views I hold more tentatively:

  • The indirect long-run impacts of our actions are extremely hard to predict and don't 'cancel out' in expectation. In other words, I think that what Greaves (2016) calls complex cluelessness is a pervasive problem. In particular, evidence that an action will have desirable effects in the short term generally is not a decisive reason to believe that this action would be net positive overall, and neither will we be able to establish the latter through any other means.
  • Increasing the relative influence of longtermist actors is one of the very few strategies we have good reasons to consider net positive. Shaping TAI is a particularly high-leverage instance of this strategy, where the main mechanism is reaping an 'epistemic rent' from having anticipated TAI earlier than other actors. I take this line of support to be significantly more robust than any particular story on how TAI might pose a global catastrophic risk, including even broad operationalizations of the 'value alignment problem.'

My empirical views on TAI

I think the strongest reasons to expect TAI this century are relatively outside-view-based (I talk about this century just because I expect that later developments are harder to predictably influence, not because I think a century is a particularly meaningful time horizon or because I think TAI would be less important later):

  • We've been able to automate an increasing number of tasks (with increasing performance and falling cost), and I'm not aware of a convincing argument for why we should be highly confident that this trend will stop short of full automation – i.e., AI systems being able to do all tasks more economically efficiently than humans – despite moderate scientific and economic incentives to find and publish one.
  • Independent types of weak evidence such as trend extrapolation and expert surveys suggest we might achieve full automation this century.
  • Incorporating full automation into macroeconomic growth models predicts – at least under some assumptions – a sustained higher rate of economic growth (e.g. Hanson 2001, Nordhaus 2015, Aghion et al. 2017), which arguably was the main driver of the welfare-relevant effects of the Industrial Revolution. (A toy sketch of this mechanism follows after this list.)
  • Accelerating growth this century is consistent with extrapolating historic growth rates, e.g. Hanson (2000[1998]).
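To make the growth-model point above slightly more concrete, here is a minimal sketch of the kind of mechanism such models formalize. This is my own illustration, not the exact setup of any of the cited papers; it abstracts away from everything except the role of diminishing returns:

```latex
% Toy sketch (my illustration, not the cited models): why full automation
% can yield a sustained higher growth rate.

% Baseline (no full automation): Cobb-Douglas production with fixed labor $L$,
\[
  Y = A K^{\alpha} L^{1-\alpha}, \qquad \dot{K} = s Y - \delta K, \qquad 0 < \alpha < 1 .
\]
% Diminishing returns to capital ($\alpha < 1$) mean that accumulation alone
% cannot sustain growth; the long-run growth rate is pinned down by growth in $A$.

% Full automation: capital can substitute for every task labor performs, so
\[
  Y \approx A K \quad \text{for large } K,
\]
% which removes the diminishing returns. Capital accumulation then gives
\[
  \frac{\dot{K}}{K} = s A - \delta,
\]
% a growth rate that is sustained (whenever $sA > \delta$) and rises with
% further improvements in $A$ -- the "AK"-style acceleration of growth.
```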

I think there are several reasons to be skeptical, but that the above succeeds in establishing a somewhat robust case for TAI this century not being wildly implausible.

My impression is that I’m less confident than the typical longtermist EA in various claims around TAI, such as:

  • Uninterrupted technological progress would eventually result in TAI;
  • TAI will happen this century;
  • we can currently anticipate any specific way of positively shaping the impacts of TAI;
  • if the above three points were true then shaping TAI would be the most cost-effective way of improving the long-term future.

My guess is this is due to different priors, and due to frequently having found extant specific arguments for TAI-related claims (including by staff at FHI and Open Phil) less convincing than I would have predicted. I still think that work on TAI is among the few best shots for current longtermists.

Comment by max_daniel on Max_Daniel's Shortform · 2019-12-13T11:17:11.201Z · score: 40 (14 votes) · EA · GW

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

  • Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
  • In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism."

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:

  • 1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless."
  • 2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.

Link dump (not necessarily comprehensive)

If you only read two things, I'd recommend (1) Hasell's and Roser's article explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic.

By Hickel (i.e. against "new optimism"):

By "new optimists":

Commentary by others:

My view

  • I'm largely unpersuaded by Hickel's charge that historic poverty data is invalid. Sure, it's way less good than contemporary data. But based on Hasell's and Roser's article, my impression is that the data is better than I would have thought, and its orthodox analysis and interpretation more sophisticated than I would have thought. I would be surprised if access to better data would qualitatively change the "new optimist" conclusion.
  • I think there is room for debate over which indicators to use, and that Hickel makes some interesting points here. I find it regrettable that the debate around this seems so adversarial.
  • Still, my sense is that there is an important, true, and widely underappreciated (particularly by people on the left, including my past self) core of the "new optimist" story. I'd expect looking at other indicators could qualify that story, or make it less simplistic, point to important exceptions etc. - but I'd probably consider a choice of indicators that painted an overall pessimistic picture as quite misleading and missing something important.
  • On the other hand, I would quite strongly want to resist the conclusion that everything in this debate is totally settled, and that the new optimists are clearly right about everything, in the same way in which orthodox climate science is right about climate change being anthropogenic, or orthodox medicine is right about homeopathy not being better than placebo. But I think the key uncertainties are not in historic poverty data, but in our understanding of wellbeing and its relationship to environmental factors. Some examples of why I think it's more complicated:
    • The Easterlin paradox
    • The unintuitive relationship between (i) subjective well-being in the sense of the momentary affective valence of our experience on one hand and (ii) reported life satisfaction. See e.g. Kahneman's work on the "experiencing self" vs. "remembering self".
    • On many views, the total value of the world is very sensitive to population ethics, which is notoriously counterintuitive. In particular, on many plausible views, the development of the total welfare of the world's human population is dominated by its increasing population size.
  • Another key uncertainty is the implications of some of the discussed historic trends for the value of the world going forward, about which I think we're largely clueless. For example, what are the effects of changing inequality on the long-term future?

[1] It's not clear to me if "new optimism" is actually new. I'm using Hickel's label just because it's short and it's being used in this debate anyway, not to endorse Hickel's views or make any other claim.

[2] There is an obvious problem with new optimism, which is that it's anthropocentric. In fact, on many plausible views, the total axiological value of the world at any time in the recent past may be dominated by the aggregate wellbeing of nonhuman animals; even more counterintuitively, it may well be dominated by things like the change in the total population size of invertebrates. But this debate is about human wellbeing, so I'll ignore this problem.

Comment by max_daniel on Rethink Priorities Impact Survey · 2019-12-02T19:19:30.594Z · score: 19 (9 votes) · EA · GW

Thanks for posting this! I'd really like to see more organizations evaluate their impact, and publish about their analysis.

Just a quick note: You mention that I indicated I "found [y]our work on nuclear weapons somewhat useful". This is correct. I'd like to note that the main reason why I don't find it very useful is simply that I currently don't anticipate working on nuclear security personally, or making any decisions that depend on my understanding of nuclear security. In general, how "useful" people find your work is a mix of their focus and the quality of your work (which in this case AFAICT is very high, though I haven't reviewed it in detail), which might make it hard to interpret the results.

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-11T16:48:40.561Z · score: 11 (3 votes) · EA · GW

Regarding your "outside view" point: I agree with what you say here, but think it cannot directly undermine my original "outside view" argument. These clarifications may explain why:

  • My original outside view argument appealed to the process by which certain global health interventions such as distributing bednets have been selected rather than their content. The argument is not "global health is a different area from economic growth, therefore a health intervention is unlikely to be optimal for accelerating growth"; instead it is "an intervention that has been selected to be optimal according to some goal X is unlikely to also be optimal according to a different goal Y".
    • In particular, if GiveWell had tried to identify those interventions that best accelerate growth, I think my argument would be moot (no matter what interventions they had come up with, in particular in the hypothetical case where distributing bednets had been the result of their investigation).
    • In general, I think that selecting an intervention that's optimal for furthering some goal needs to pay attention to all of importance, tractability, and neglectedness. I agree that it would be bad to exclusively rely on the heuristic "just focus on the most important long-term outcome/risk" when selecting longtermist interventions, just as it would be bad to rely only on the heuristic "work on fighting whatever disease has the largest disease burden globally" when selecting global health interventions. But I think these would just be bad ways to select interventions, which seems orthogonal to the question of when an intervention selected for X will also be optimal for Y. (In particular, I don't think that my original outside view argument commits me to the conclusion that in the domain of AI safety it's best to directly solve the largest or most long-term problem, whatever that is. I think it does recommend deliberately selecting an intervention optimized for reducing AI risk, but this selection process should also take into account feedback loops and all the other considerations you raised.)
  • The main way I can see to undermine this argument would be to argue that a certain pair of goals X and Y is related in such a way that interventions optimal for X are also optimal for Y (e.g., X and Y are positively correlated, though this in itself wouldn't be sufficient). For example, in this case, such an argument could be of the type "our best macroeconomic models predict that improving health in currently poor countries would have a permanent rate effect on growth, and empirically it seems likely that the potential for sustained increases in the growth rate is largest in currently poor countries" (I'm not saying this claim is true, just that I would want to see something like this). (A brief sketch of the level-vs-rate distinction follows after this list.)
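Since the argument above leans on the difference between a level effect and a rate effect on growth, here is a brief sketch of what that distinction amounts to (standard growth bookkeeping, not anything specific to the comment itself):

```latex
% Level effect: output shifts once, but the growth rate is unchanged,
\[
  Y(t) = (1 + \lambda)\, Y_0\, e^{g t},
\]
% so the proportional gain stays at $\lambda$ forever.

% Rate effect: the growth rate itself changes,
\[
  Y(t) = Y_0\, e^{(g + \Delta g)\, t},
\]
% so the proportional gain, $e^{\Delta g\, t}$, compounds without bound.
% This is why a permanent rate effect would matter far more for the long run
% than a one-off level effect of the same initial size.
```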
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-11T15:49:45.798Z · score: 2 (2 votes) · EA · GW
The "inside view" point is that Christiano's estimate only takes into account the "price of a life saved". But in truth GiveWell's recommendations for bednets or deworming are to a large measure driven by their belief, backed by some empirical evidence, that children who grow up free of worms or malaria become adults who can lead more productive lives. This may lead to better returns than what his calculations suggest. (Micronutrient supplementation may also be quite efficient in this respect.)

I think this is a fair point. Specifically, I agree that GiveWell's recommendations are only partly (in the case of bednets) or not at all (in the case of deworming) based on literally averting deaths. I haven't looked at Paul Christiano's post in sufficient detail to say for sure, but I agree it's plausible that this way of using "price of a life saved" calculations might effectively ignore other benefits, thus underestimating the benefits of bednet-like interventions compared to GiveWell's analysis.

I would need to think about this more to form a considered view, but my guess is this wouldn't change my mind on my tentative belief that global health interventions selected for their short-term (say, anything within the next 20 years) benefits aren't optimal growth interventions. This is largely because I think the dialectical situation looks roughly like this:

  • The "beware suspicious convergence" argument implies that it's unlikely (though not impossible) that health interventions selected for maximizing certain short-term benefits are also optimal for accelerating long-run growth. The burden of proof is thus with the view that they are optimal growth interventions.
  • In addition, some back-of-the-envelope calculations suggest the same conclusion as the first bullet point.
  • You've pointed out a potential problem with the second bullet point. I think it's plausible, and perhaps likely, that this significantly or even entirely removes the force of the second bullet point. But even if the conclusion of the calculations were completely turned on their head, I don't think they would by themselves succeed in defeating the first bullet point.
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T16:53:06.919Z · score: 3 (3 votes) · EA · GW

As I said in another comment, one relevant complication seems to be that risk and growth interact. In particular, the interaction might be such that speeding up growth could actually have negative value. This has been debated for a long time, and I don't think the answer is obvious. It might be something we're clueless about.

(See Paul Christiano's How useful is “progress”? for an ingenious argument for why either

  • (a) "People are so badly mistaken (or their values so misaligned with mine) that they systematically do harm when they intend to do good, or"
  • (b) "Other (particularly self-interested) activities are harmful on average."

Conditional on (b) we might worry that speeding up growth would work via increasing the amount or efficiency of various self-interested activities, and thus would be harmful.

I'm not sure if I buy the argument, though. It is based on "approximat[ing] the changes that occur each day as morally neutral on net". But on longer timescales it seems that we should be highly uncertain about the value of changes. It thus seems concerning to me to look at a unit of time for which the magnitude of change is unintuitively small, round it to zero, and extrapolate from this to a large-scale conclusion.)

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T16:23:39.687Z · score: 4 (3 votes) · EA · GW

You say that:

I will [...] focus instead on a handful of simple model cases. [...] These models will be very simple. In my opinion, nothing of value is being lost by proceeding in this way.

I agree in the sense that I think your simple models succeed in isolating an important consideration that wouldn't itself be qualitatively altered by looking at a more complex model.

However, I do think (without implying that this contradicts anything you have said in the OP) that there are other crucial premises for the argument concluding that reducing existential risk is the best strategy for most EAs. I'd like to highlight three, without implying that this list is comprehensive.

  • One important question is how growth and risk interact. Specifically, it seems that we face existential risks of two different types: (a) 'exogenous' risks with the property that their probability per wall-clock time doesn't depend on what we do (perhaps a freak physics disaster such as vacuum decay); and (b) 'endogenous' risks due to our activities (e.g. AI risk). The probability of such endogenous risks might correlate with proxies such as economic growth or technological progress, or more specific kinds of these trends. As an additional complication, the distinction between exogenous and endogenous risks may not be clear-cut, and arguably is itself endogenous to the level of progress - for example, an asteroid strike could be an existential risk today but not for an intergalactic civilization. Regarding growth, we might thus think that we face a tradeoff: faster growth would on the one hand reduce risk by allowing us to more quickly reach thresholds that would make us invulnerable to some risks, but on the other hand might exacerbate endogenous risks that increase with the rate of growth. (A crude model for why there might be risks of the latter kind: perhaps 'wisdom' increases at a fixed linear speed, and perhaps the amount of risk posed by a new technology decreases with wisdom. A toy numerical sketch of this model follows after this list.)
    • I think "received wisdom" is roughly that most risk is endogenous, and that more fine-grained differential intellectual or technological progress aimed at specifically reducing such endogenous risk (e.g. working on AI safety rather than generically increasing technological progress) is therefore higher-value than shortening the window of time during which we're exposed to some exogenous risks.
    • See for example Paul Christiano, On Progress and Prosperity
    • A somewhat different lense is to ask how growth will affect the willingness of impatient actors - i.e., those that discount future resources at a higher rate than longtermists - to spend resources on existential risk reduction. This is part of what Leopold Aschenbrenner has examined in his paper on Existential Risk and Economic Growth.
  • More generally, the value of existential risk reduction today depends on the distribution of existential risk over time, including into the very long-run future, and on whether today's efforts would have permanent effects on that distribution. This distribution might in turn depend on the rate of growth, e.g. for the reasons mentioned in the previous point. For an excellent discussion, see Tom Sittler's paper on The expected value of the long-term future. In particular, the standard argument for existential risk reduction requires the assumption that we will eventually reach a state with much lower total risk than today.
  • A somewhat related issue is the distribution of opportunities to improve the long-term future over time. Specifically, will there be more efficient longtermist interventions in, say, 50 years? If yes, this would be another reason to favor growth over reducing risk now. Though more specifically it would favor growth, not of the economy as a whole, but of the pool of resources dedicated to improving the long-term future - for example, through 'EA community building' or investing to give later. Relatedly, the observation that longtermists are unusually patient (i.e. discount future resources at a lower rate) is both a reason to invest now and give later, when longtermists control a larger share of the pie - and a consideration increasing the value of "ensuring that the future proceeds without disruptions", potentially by using resources now to reduce existential risk. For more, see e.g.:
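Returning to the crude 'wisdom' model from the first bullet in the list above, here is a minimal toy sketch in Python. Everything in it – the functional forms, the parameter values, the function name total_risk – is a made-up illustration of the qualitative point rather than a published model: if wisdom accumulates linearly with calendar time while faster growth makes new technologies arrive earlier, then each technology arrives when less wisdom has accumulated, and the total endogenous risk over a fixed horizon is higher.

```python
# Toy model (illustrative only): faster growth -> technologies arrive earlier,
# when less 'wisdom' has accumulated -> higher total endogenous risk.

def total_risk(growth_rate, horizon=100.0, n_techs=20,
               wisdom_speed=1.0, base_hazard=0.05):
    """Probability of at least one catastrophe over the horizon.

    Technology k arrives when cumulative growth exp(growth_rate * t) reaches
    the threshold exp(k), i.e. at t_k = k / growth_rate (faster growth -> earlier).
    Wisdom grows linearly: w(t) = 1 + wisdom_speed * t.
    Each technology's catastrophe probability is base_hazard / w(t_k).
    """
    p_safe = 1.0
    for k in range(1, n_techs + 1):
        t_k = k / growth_rate              # arrival time of the k-th technology
        if t_k > horizon:                  # technologies beyond the horizon don't count
            break
        wisdom = 1.0 + wisdom_speed * t_k
        p_safe *= 1.0 - base_hazard / wisdom
    return 1.0 - p_safe

for g in [0.5, 1.0, 2.0, 4.0]:
    print(f"growth rate {g}: total risk over horizon ~ {total_risk(g):.3f}")
```

In this toy setup the printed total risk increases with the growth rate, which is the sense in which endogenous risks can 'increase with the rate of growth'; of course, a variant in which faster growth also shortens exposure to exogenous risks, or speeds up wisdom itself, could flip that conclusion.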
Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T15:30:42.771Z · score: 8 (7 votes) · EA · GW

You describe the view you're examining as:

cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty

You then proceed by discussing considerations that are somewhat specific to the specific types of interventions you're comparing - i.e., reducing extinction risk versus speeding up growth.

You might be interested in another type of argument questioning this view. These arguments attack the "virtually infinitely" part of the view, in a way that's agnostic about the interventions being compared. For such arguments, see e.g.:

Comment by max_daniel on Assumptions about the far future and cause priority · 2019-11-10T13:55:24.362Z · score: 26 (13 votes) · EA · GW

Thank you, I think this is an excellent post!

I also sympathize with your confusion. - FWIW, I think that a fair amount of uncertainty and confusion about the issues you've raised here is the epistemically adequate state to be in. (I'm less sure whether we can reliably reduce our uncertainty and confusion through more 'research'.) I tentatively think that the "received longtermist EA wisdom" is broadly correct - i.e. roughly that the most good we can do usually (for most people in most situations) is by reducing specific existential risks (AI, bio, ...) -, but I think that

  • (i) this is not at all obvious or settled, and involves judgment calls on my part which I could only partly make explicit and justify; and
  • (ii) the optimal allocation of 'longtermist talent' will have some fraction of people examining whether this "received wisdom" is actually correct, and will also have some distribution across existential risk reduction, what you call growth interventions, and other plausible interventions aimed at improving the long-term future (e.g. "moral circle expansion") - for basically the "switching cost" and related reasons you mention [ETA: see also sec. 2.4 of GPI's research agenda].

One thing in your post I might want to question is that, outside of your more abstract discussion, you phrase the question as whether, e.g., "AI safety should be virtually infinitely preferred to other cause areas such as global poverty". I'm worried that this is somewhat misleading, because I think most of your discussion rather concerns the question of whether, to improve the long-term future, it's more valuable to (a) speed up growth or to (b) reduce the risk of growth stopping. I think AI safety is a good example of a type-(b) intervention, but that most global poverty interventions likely aren't a good example of a type-(a) intervention. This is because I would find it surprising if an intervention that has been selected to maximize some measure of short-term impact also turned out to be optimal for speeding up growth in the long run. (Of course, this is a defeatable consideration, and I acknowledge that there might be economic arguments that suggest that accelerating growth in currently poor countries might be particularly promising for increasing overall growth.) In other words, I think that the optimal "growth intervention" Alice would want to consider probably isn't, say, donating to distribute bednets; I don't have a considered view on what it would be instead, but I think it might be something like: doing research in a particularly dynamic field that might drive technological advances, or advocating changes in R&D or macroeconomic policy. (For some related back-of-the-envelope calculations, see Paul Christiano's post on What is the return to giving?; they suggest "that good traditional philanthropic opportunities have a return of around 10 and the best available opportunities probably have returns of 100-1000, with most of the heavy hitters being research projects that contribute to long term tech progress and possibly political advocacy", but of course there is a lot of room for error here. See also this post for what maximally increasing technological progress might look like.)

Lastly, here are some resources on the "increase growth vs. reduce risk" question, which you might be interested in if you haven't seen them:

  • Paul Christiano's post on (literal) Astronomical waste, where he considers the permanent loss of value from delayed growth due to cosmological processes (expansion, stars burning down, ...). In particular, he also mentions the possibility that "there is a small probability that the goodness of the future scales exponentially with the available resources", though he ultimately says he favors roughly what you called the plateau view.
  • In an 80,000 Hours podcast, economist Tyler Cowen argues that "our overwhelming priorities should be maximising economic growth and making civilization more stable".
  • For considerations about how to deal with uncertainty over how much utility will grow as a function of resources, see GPI's research agenda, in particular the last bullet point of section 1.4. (This one deals with the possibility of infinite utilities, which raises somewhat similar meta-normative issues. I thought I remembered that they also discuss the literal point you raised - i.e. what if utility will grow exponentially in the long run? - but wasn't able to find it.)

I might follow up in additional comments with some pointers to issues related to the one you discuss in the OP.