Posts

Giving and receiving feedback 2020-09-07T07:24:33.941Z · score: 51 (19 votes)
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? 2020-08-13T09:15:39.622Z · score: 71 (28 votes)
Max_Daniel's Shortform 2019-12-13T11:17:10.883Z · score: 21 (7 votes)
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z · score: 39 (24 votes)

Comments

Comment by max_daniel on MichaelDickens's Shortform · 2020-09-24T14:44:45.621Z · score: 6 (3 votes) · EA · GW

Bloom et al. do report exponential growth of various metrics, but I don't think these metrics are well-characterized by 'ideas'. They are things like price-performance of transistors or crop yields per area.

If we instead attempt to measure progress by something like 'number of ideas', there is some evidence in favor of your guess that "ideas should grow logarithmically with effort". E.g., in a review of the 'science of science', Fortunato et al. (2018) say (emphases mine):

Early studies discovered an exponential growth in the volume of scientific literature, a trend that continues with an average doubling period of 15 years. Yet, it would be naïve to equate the growth of the scientific literature with the growth of scientific ideas. [...] Large-scale text analysis, using phrases extracted from titles and abstracts to measure the cognitive extent of the scientific literature, have found that the conceptual territory of science expands linearly with time. In other words, whereas the number of publications grows exponentially, the space of ideas expands only linearly.

Bloom et al. also report a linear increase in life expectancy in sec. 6. I vaguely remember that there are many more examples where exponential growth becomes linear once evaluated on some other 'natural' metric, but I don't remember where I saw them. Possibly in the literature on logarithmic returns to science. Let me know if it'd be useful if I try to dig up some references.

ETA: See e.g. here (number of known chemical elements). Possibly there are more examples in that SSC post.
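
(To spell out the connection to "ideas should grow logarithmically with effort" - this is my own back-of-the-envelope formalization, not something stated in Fortunato et al.: if the number of publications, as a rough proxy for effort, grows exponentially with doubling period $T$, i.e. $P(t) = P_0 \cdot 2^{t/T}$, while the space of ideas expands only linearly, $I(t) = I_0 + bt$, then substituting $t = T \log_2(P/P_0)$ gives $I = I_0 + bT \log_2(P/P_0)$ - ideas grow logarithmically with this measure of effort.)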

Comment by max_daniel on Buck's Shortform · 2020-09-23T23:23:41.367Z · score: 10 (3 votes) · EA · GW

I agree that post is low-quality in some sense (which is why I didn't upvote it), but my impression is that its central flaw is being misinformed, in a way that's fairly easy to identify. I'm more worried about criticism where it's not even clear how much I agree with the criticism or where it's socially costly to argue against the criticism because of the way it has been framed.

It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case.

(Possibly there is an issue where "has a fair number of downvotes" on the EA Forum corresponds to "has zero karma" in fora with different voting norms/rules, so the former can appear too positive to someone more used to fora with the latter norm. Conversely, I used to be confused why posts on the Alignment Forum that seemed great to me had more votes than karma score.)

Comment by max_daniel on Buck's Shortform · 2020-09-23T23:11:38.155Z · score: 3 (2 votes) · EA · GW

I agree with this as stated, though I'm not sure how much overlap there is between the things we consider low-quality criticism. (I can think of at least one example where I was mildly annoyed that something got a lot of upvotes, but it seems awkward to point to publicly.)

I'm not so worried about becoming the target of low-quality criticism myself. I'm actually more worried about low-quality criticism crowding out higher-quality criticism. I can definitely think of instances where I wanted to say X but then was like "oh no, if I say X then people will lump this together with some other person saying nearby thing Y in a bad way, so I either need to be extra careful and explain that I'm not saying Y or shouldn't say X after all".

I'm overall not super worried because I think the opposite failure mode, i.e. appearing too unwelcoming of criticism, is worse.

Comment by max_daniel on Does Economic History Point Toward a Singularity? · 2020-09-14T10:30:43.426Z · score: 4 (2 votes) · EA · GW

Yes, sorry, I think I was too quick to make a comment and should have paid more attention to the context. I think the claims in my comment are correct, but as you say it's not clear what exactly they're responding to, and in particular I agree it's not relevant to Asya's point on whether to use 'explosive'.

Comment by max_daniel on RyanCarey's Shortform · 2020-09-11T16:27:56.038Z · score: 3 (2 votes) · EA · GW

Yes, good point! My idle speculations have also made me wonder about Indonesia at least once.

Comment by max_daniel on Giving and receiving feedback · 2020-09-10T11:41:53.457Z · score: 9 (4 votes) · EA · GW

Agree this is a bad property of Google docs. I wonder how much value we're losing because of this ...

EA wants to be the equivalent of the Scientific Revolution for doing good, but instead of a Republic of Letters we have a Cacophony of Comment Threads. ;)

Comment by max_daniel on Some extremely rough research on giving and happiness · 2020-09-09T11:34:14.361Z · score: 16 (9 votes) · EA · GW
I'm just sharing because it seems like a good norm to default towards sharing things like this, and maybe it helps someone else to do a better version.

Strongly agree that this would be a good norm. Thanks for leading by example!

Comment by max_daniel on Does Economic History Point Toward a Singularity? · 2020-09-08T19:23:29.021Z · score: 7 (4 votes) · EA · GW

Thanks for poking at this - it would be quite interesting to me if the "constant exponential growth" story were wrong. Which graphs in Farmer & Lafond (2016) are you referring to? To me, the graph summarizing all trends seems to have only very few that at first glance look a bit like s-curves. But I agree one would need to go beyond eyeballing to know for sure.

I agree with your other points. My best guess is that input prices and other exogenous factors aren't that important for some of the trends, e.g. Moore's Law or agricultural productivity. And I think some of the manufacturing trends in e.g. Arrow (1971) are in terms of output quantity per hour of work rather than prices, and so also seem less dependent on exogenous factors. But I'm more uncertain about this, and agree that in principle dependence on exogenous factors complicates the interpretation.

Comment by max_daniel on Does Economic History Point Toward a Singularity? · 2020-09-08T16:39:02.774Z · score: 10 (3 votes) · EA · GW
it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries

This agrees with my impression. Just in case someone is looking for references for this, see e.g.:

  • Nagy et al. (2013) - several of the trends they look at, e.g. prices for certain chemical substances, show exponential growth for more than 30 years
  • Farmer & Lafond (2016) - similar to the previous paper, though fewer trends with data from more than 20 years
  • Bloom et al. (2020) - reviews trends in research productivity, most of which go back to 1975 and some to 1900
  • Some early examples from manufacturing (though not covering multiple decades) are reviewed in a famous paper by Arrow (1971), who proposed 'learning by doing' as a mechanism.
Comment by max_daniel on Does Economic History Point Toward a Singularity? · 2020-09-08T09:11:57.776Z · score: 4 (2 votes) · EA · GW

I agree this is puzzling, and I'd love to see more discussion of this.

However, it seems to me that at least in principle there could be a pretty boring explanation: The HGH is correct about the fundamental trend, and the literature on the Industrial Revolution has correctly identified (and maybe explained) a major instance of noise.

Note also that it is relatively common for social behavior that is individually contingent to nevertheless be governed by simple macro-laws with few parameters. E.g. the exact timing of all major innovations since the Industrial Revolution (electricity, chemical engineering, computers, ...) seems fairly contingent, and yet overall the growth rate is remarkably close to constant. Similarly for the rest of Kaldor's facts.

Comment by max_daniel on Does Economic History Point Toward a Singularity? · 2020-09-08T08:28:03.020Z · score: 5 (3 votes) · EA · GW

My impression is that everyone agrees that the Industrial Revolution led to an increase in the growth rate, but that the Hyperbolic Growth Hypothesis (HGH) disagrees with the Series Of Exponentials Hypothesis on whether that increase in the growth rate was trend-breaking.

Put differently, the HGH says that as far as the growth rate is concerned, the Industrial Revolution wasn't special - or if it was, then it must attribute this to noise. According to the HGH, the growth rate has been increasing all the time according to the same hyperbolic function, and the Industrial Revolution was just a part of this trend. I.e. only one "growth mode" for all of history, rather than the Industrial Revolution ushering in a new growth mode.

By contrast, on the Series Of Exponentials view, the Industrial Revolution did break the previous trend - we had exponential growth both before and after, but with different doubling times.
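
(To make the contrast concrete with a toy formalization - mine, not how either hypothesis is stated in the post: write $Y$ for gross world product. The HGH posits a single trend of the form $dY/dt = cY^{1+\epsilon}$ with $\epsilon > 0$, which implies $Y(t) \propto 1/(t^* - t)^{1/\epsilon}$, so the growth rate $\dot{Y}/Y \propto Y^{\epsilon}$ rises continuously and diverges at a finite time $t^*$. The Series Of Exponentials view instead posits piecewise exponential growth, $Y(t) \propto e^{g_i t}$, with the post-Industrial Revolution rate much larger than the pre-Industrial Revolution rate - a discrete break between growth modes rather than a point on one continuous trend.)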

Comment by max_daniel on Ben Garfinkel's Shortform · 2020-09-03T19:54:11.359Z · score: 3 (2 votes) · EA · GW
I think this depends a bit what class of safety issues we're thinking about. [...] Many other technological 'accident risks' are less social, although never entirely non-social (e.g. even in the case of bridge safety, you still need to trust some organization to do maintenance/testing properly.)

I'm not sure I agree with this. While they haven't been selected to be representative, the sense I got from the accident case studies I've read (e.g. Chernobyl, nuclear weapons accidents, and various cases from the books Flirting With Disaster and Warnings) is that the social component was quite substantial. It seems to me that usually either better engineering (though sometimes this wasn't possible) or better social management of dealing with engineering limitations (usually possible) could have avoided these accidents. It makes a lot of sense to me that some people prefer to talk of "sociotechnical systems".

Comment by max_daniel on Ben Garfinkel's Shortform · 2020-09-03T19:45:58.093Z · score: 4 (3 votes) · EA · GW
But you do need the right balance of conditions to hold: individual units of the technology need to offer their users large enough benefits and small enough personal safety risks, need to create large enough external safety risks, and need to have safety levels that increase slowly enough over time.
Weapons of mass destruction are sort of special in this regard. [...]
[...] I think existential safety risks become a much harder sell, though, if we're primarily imagining non-superweapon applications and distributed/gradual/what-failure-looks-like-style scenarios.

Yes, my guess is we broadly agree about all of this.

I also think it's worth noting that, on an annual basis, even nukes don't have a super high chance of producing global catastrophes through accidental use; if you have a high enough discount rate, and you buy the theory that they substantially reduce the risk of great power war, then it's even possible (maybe not likely) that their existence is currently positive EV by non-longtermist lights.

This also sounds right to me. FWIW, it's not even obvious to me if nukes are negative-EV by longtermist lights. Since nuclear winter seems unlikely to cause immediate extinction this depends on messy questions such as how the EV of trajectory changes from conventional great power war compares to the EV of trajectory changes from nuclear winter scenarios.

Comment by max_daniel on Ben Garfinkel's Shortform · 2020-09-03T17:04:07.516Z · score: 3 (2 votes) · EA · GW
the more unsafe the technology is, the less incentive anyone has to develop or use it

That seems correct all else equal. However, it can be outweighed by actors seeking relative gains or other competitive pressures. And my impression is this is a key premise in some typical arguments for why AI risk is large.

Schlosser's Command and Control has some instructive examples from nuclear policy (which I think you're aware of, so describing them mostly for the benefit of other readers) where e.g. US policymakers were explicitly trading off accident risk with military capabilities when deciding if/how many bombers with nuclear weapons to have patrolling in the air.

And indeed several bombers with nuclear weapons crashed, e.g. 1968 over Greenland, though no nuclear detonation resulted. This is also an example where external parties for a while were kind of screwed. Yes, Denmark had an incentive to reduce safety risks from US bombers flying over their territory; but they didn't have the technical capabilities to develop less risky substitutes, and political defenses like the nuclear-free zone they declared were just violated by the US.

Tbc, I do agree all your points are correct in principle. E.g. in this example, the US did have an incentive to reduce safety risks, and since none of the accidents were "fatal" to the US they did eventually replace nuclear weapons flying around with better ICBMs, submarines etc. I still feel like your take sounds too optimistic once one takes competitive dynamics into account.

--

As an aside, I'm not sure I agree that reducing safety-related externalities is largely an engineering problem, unless we include social engineering. Things like organizational culture, checklists, maintenance policies, risk assessments, etc., also seem quite important to me. (Or in the nuclear policy example even things like arms control, geopolitics, ...)

Comment by max_daniel on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T13:56:41.718Z · score: 29 (10 votes) · EA · GW

I didn't downvote your comment, but was close to doing so. (I generally downvote few comments, maybe in some sense "too few".)

The reason why I considered downvoting: You claim that an argument implies a view widely seen as morally repugnant, and in addition (this alone would not have been sufficient):

  • You are not as clear as I think you could have been that you don't actually ascribe the morally repugnant view to Owen, as opposed to mentioning this as a reductio ad absurdum precisely because you don't think anyone accepts the morally repugnant conclusion.
  • You use more charged language than is necessary to make your point. E.g. instead of saying "repugnant" you could have said something like "which presumably no-one is willing to accept". Similarly, it's not relevant whether the perpetrator in your claim is a pedophile. (But it's good to avoid even the faintest suggestion that someone in this debate is claimed to be a pedophile.)
  • I'm not able to follow your reasoning, and suspect you may have misunderstood the comment you're responding to. Most significantly, the above comment doesn't argue that anything is morally okay, simpliciter - it just argues that a certain kind of moral objection, namely an appeal to bad consequences, doesn't work for certain actions. It even explicitly lists other moral reasons against these actions. (Granted, it does suggest that these reasons aren't so strong that the action is clearly impermissible in all circumstances.) But even setting this aside, I'm not sure why you think the above comment has the implication you think it has.

I don't know for sure why anyone downvoted, but moderately strongly suspect they had similar reasons.

Here's a version of your point which is still far from optimal on the above criteria (e.g. I'd probably have avoided the child abuse example altogether) but which I suspect wouldn't have been downvoted:

I think your argument proves too much. It implies, for instance, that it's not clearly impermissible to harm humans in similar ways in which non-human animals are being harmed because of humans slaughtering them for food. [Say 1-2 sentences about why you think this.] As a particularly drastic example, consider that virtually everyone agrees that sexual abuse of children is not permissible under any circumstances. Your argument seems to imply that there would only be a much weaker moral prohibition against child abuse. Clearly we cannot accept this conclusion. So there must be something wrong with your argument.
Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T13:22:15.911Z · score: 2 (1 votes) · EA · GW

No, but there was a copy and paste error that made the comment unintelligible. Edited now. Thanks for flagging!

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T12:37:38.050Z · score: 3 (2 votes) · EA · GW

Glad it's helpful!

I think you're very likely doing this anyway, but I'd recommend getting a range of perspectives on these questions. As I said, my own views here don't feel that resilient, and I also know that several epistemic peers disagree with me on some of the above.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T11:54:57.619Z · score: 4 (2 votes) · EA · GW
the "shallow takes" involve the same amount of total analysis as the "thorough takes", but they're analysing such a big topic that they can only provide a shallow look at each component

Yes, that's what I had in mind. Thanks for clarifying!

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-03T08:53:46.377Z · score: 15 (6 votes) · EA · GW

Several reasons:

  • In many cases, doing thorough work on a narrow question and providing immediately impactful findings is simply too hard. This used to work well in the early days of EA when more low-hanging fruit was available, but rarely works any more.
    • Instead of having 10 shallow takes on immediately actionable question X, I'd rather have 10 thorough takes on different subquestions Y_1, ..., Y_10, even if it's not immediately obvious how exactly they help with answering X (there should be some plausible relation, however). Maybe I expect 8 of these 10 takes to be useless, but unlike adding more shallow takes on X, the thorough takes on the 2 remaining subquestions enable incremental and distributed intellectual progress:
      • It may allow us to identify new subquestions we weren't able to find through doing shallow takes on X.
      • Someone else can build on the work, and e.g. do a thorough take on another subquestion that helps illuminate how it relates to X, what else we need to know to use the thorough findings to make progress on X, etc.
      • The expected benefit from unknown unknowns is larger. Random examples: the economic historians who assembled data on historic GDP growth presumably didn't anticipate that their data would feature in outside-view arguments on the plausibility of AGI takeoff this century. (Though if you had asked them, they probably would have been able to see that this is a plausible use - there probably are other examples where the delayed use/benefit is more surprising.)
  • It's often more instrumentally useful because it better fulfills non-EA criteria for excellence or credibility.
    • I think this is especially important when trying to build bridges between EA research and academia with the vision to make more academic research helpful to EA happen.
    • It's also important because non-EA actors often have different criteria for when they're willing to act on research findings. I think EAs tend to be unusually willing to act on epistemic states like "this seems 30% likely to me, even if I can't fully defend this or even say why exactly I believe this" (I think this is good), but if they want to convince some other actor (e.g. a government or big firm) to act, they'll need more legible arguments.

One recent example that's salient to me, and that illustrates what strikes me as a bit off here, is the discussion of Leopold Aschenbrenner's paper on x-risk and growth in the comments to this post. A lot of the discussion seemed to be motivated by the question "How much should this paper update our all-things-considered view on whether it's net good to accelerate economic growth?". It strikes me that this is very different from the questions I'd ask about that paper, and also quite far removed from why, as I said, I think this paper was a great contribution.

These reasons are more like:

  • As best as I can tell (significantly because of reactions by other people with more domain expertise), the paper is quite impressive to academic economists, and so could have large instrumental benefits for building bridges.
  • While it didn't even occur to me to update my all-things-considered take on whether it'd be good to accelerate growth much, I think the paper does a really thorough job at modeling one aspect that's relevant to this question. Once we have 10 to 100 papers like it, I think I'll have learned a lot and will be in a great position to update my all-things-considered take. But, crucially, the paper is one clear step in this direction in a way in which an EA Forum post with bottom line "I spent 40 hours researching whether accelerating economic growth is net good, and here is what I think" simply is not.
Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-09-02T19:26:54.618Z · score: 14 (7 votes) · EA · GW

[Off the top of my head. I don't feel like my thoughts on this are very developed, so I'd probably say different things after thinking about it for 1-10 more hours.]

[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don't want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]

Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia? 

Things that immediately come to mind, not necessarily the most important levers:

  • Identify skills or bodies of knowledge that seem relevant for longtermist research, and where necessary design curricula for deliberate practice of these. In addition to having other downsides, I think our norms of single-dimensional evaluations of people (I feel like I hear much more often that someone is "promising" or "impressive" than that they're "good at <ability or skill>") are evidence of a harmful laziness that helps entrench the status quo.
  • Possibly something like a double-blind within-EA peer review system for some publications could be good.
  • More publicly accessible and easily searchable content, ideally collected or indexed by central hubs. This does not necessarily mean more standard academic publications. I think that e.g. some content that currently only exists in nonpublic Google docs isn't published solely because of (i) exaggerated worries about info hazards or (ii) exaggerated worries that non-polished content might reflect badly on the author. (Though in other cases I think there are valid reasons not to publish.) If there was a place where it was culturally OK to publish rough drafts, this could help.
  • This is more fuzzy, but I think it would be valuable to have a more output-oriented culture. (At the margin - I definitely agree that too much emphasis on producing output can be harmful in some situations or if taken too far.)
  • Culturally, but also when making e.g. concrete hiring decisions, we should put less emphasis on "does this person seem smart?" and more on "does this person have a track record of achievements?". (Again, this is at the margin, and there are exceptions.) Cf. how this changes over the progression of a career in academia - to get into a good university as an undergraduate you need to have good grades, which is closer to "does this person seem smart?", but to get tenure you need to have publications, which is closer to "does this person have a track record of achievements?" [I say this as someone with a conspicuous dearth of achievements but an ability to project, and evidence of, smartness, i.e. someone who has benefitted from the status quo.]
  • We should evaluate research less by asking "how immediately action-relevant or impactful is this?" and more by asking "has this isolated a plausibly relevant question, and does it do a good job at answering it?".
What kinds of specialisation do you think we'd want - subject knowledge? Along different subject lines to academia? 
  • Subject knowledge
  • Methods (e.g. it regularly happens to me that someone I'm mentoring has a question that is essentially just about statistics but I can't answer it nor do I know anyone easily available in my network who can; it seems like a bit of a travesty to be in a situation where a lot of people worship Bayes's Rule but very few have the knowledge of even a 1-semester long course in applied statistics)

I expect that some of the resulting specialists would have a natural home in existing academic disciplines and others wouldn't, but I can't immediately think of examples.

Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?

I think in principle it'd be great if there were more RSP-type things, but I'm not sure if I think they're good to expand at the margin given opportunity costs.

However, I expect that for most people the best training setup would not be RSP-type things but a combination of:

  • Full-time work/study in academia or at some "elite organization" with good mentoring and short feedback loops.
  • EA-focused "enrichment interventions" that essentially don't substitute for conventional full-time work/study (e.g. weekend seminars, fellowships during term breaks or work sabbaticals). Participants would be selected for EA motivation, there would be opportunity for interaction with EA researchers and other people working at EA orgs, the content would be focused on core EA issues, etc.

This is because I do agree there are important components of "EA/rationalist mindware and knowledge" without which I expect even super smart and extremely skilled people to have little impact. But I'm really skeptical that the best way to transmit these is to have people hang out for years in insular low-stimulation environments. I think we can transmit them in much less time, and in a way that doesn't compete as much with robustly useful skill acquisition, and if not then we can figure out how to do this.

I expect RSP-type things to be targeted at people in more exceptional circumstances, e.g. they have good plans that don't fit into existing institutions or they need time to "switch fields".

Comment by max_daniel on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T18:25:31.247Z · score: 1 (5 votes) · EA · GW
the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public?

My guess is that your points explain a significant share of the effect, but I'd guess the following is also significant:

Expressing worries about how some external dynamic might affect the EA community isn't often done on this Forum, perhaps because it's less naturally "on topic" than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:

  • How does US immigration policy affect the ability of US-based EA orgs to hire talent?
  • How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures's capital might be in Facebook stocks?)

Both of these questions seem quite important and relevant, but I recall less discussion of those than I'd have at-first-glance expected based on their importance.

(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn't immediately find. But I think it's somewhat telling that here the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)

Comment by max_daniel on A curriculum for Effective Altruists · 2020-08-31T11:31:41.550Z · score: 2 (1 votes) · EA · GW

I was mostly thinking of a curriculum that would eventually be much larger (though it could be modular, and certainly would have a smaller MVP as a first step to gauge the viability of the larger curriculum).

But my views on this aren't firm, and in general one of the first things I'll do is to determine various fundamental properties I don't feel certain about yet. Other than length, these are e.g. target audience and intended outcomes (e.g. attracting new people to EA, "onboarding" new EAs, bringing moderately experienced EAs to the same level, or allowing even quite involved EAs to learn something new by increasing the amount of content that is publicly accessible as opposed to in some people's minds or nonpublic docs), scope (e.g. only longtermism?), and focus on content/knowledge vs. skills/methods.

Comment by max_daniel on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-31T11:24:28.379Z · score: 44 (17 votes) · EA · GW

I found it valuable to hear information from the debrief meeting, and I agree with some of what you said - e.g. that it a priori seems plausible that implicit threats played at least some role in the decision. However, I'm not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.

I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad". Such principles could ground criticism that's independent from the content of beliefs, values, and norms: "I don't care what this is about, structurally your actions are blackmail, and so they're bad."

I do think there is some force to such criticism in cases of so-called deplatforming including the case discussed here. However, I think that most conflict about such cases (between people opposing "deplatforming" and those favoring it) is not explained by different evaluations of blackmail, or different views on whether certain actions constitute blackmail. Instead, I think they are mostly garden-variety cases of conflicting goals and beliefs that lead to a different take on certain norms governing discourse that are mostly orthogonal to blackmail. I do have relevant goals and beliefs as well, and so do have an opinion on the matter, but don't think it's coming from a value-neutral place.

So I don't think there's one side either condoning blackmail or being unaware it's committing blackmail versus another condemning it. I think there's one side who wants a norm of having an extremely high bar for physically disrupting speech in certain situations versus another who wants a norm with a lower bar, one side who wants to treat issues independently versus one who wants to link them together, etc. - And if I wanted to decide which side I agree with in a certain instance, I wouldn't try to locate blackmail (not because I don't think blackmail is bad but because I don't think this is where the sides differ), I'd ask myself who has goals more similar to mine, and whether the beliefs linking actions to goals are correct or not: e.g., what consequences would it have to have one norm versus the other, how much do physical disruptions violate 'deontological' constraints and are there alternatives that wouldn't, would or wouldn't physically disrupting more speech in one sort of situation increase or decrease physical or verbal violence elsewhere, etc.

Below I explain why I think blackmail isn't the main issue here.

--

I think a central example of blackmail as the term is commonly used is something like

Alice knows information about Bob that Bob would prefer not to be public. Alice doesn't independently care about Bob or who has access to this information. Alice just wants generic resources such as money, which Bob happens to have. So Alice tells Bob: "Give me some money or I'll disclose this information about you."

I think some features that contribute to making this an objectionable case of blackmail are:

  • Alice doesn't get intrinsic value from the threatened action (and so it'll be net costly to Alice in isolation, if only because of opportunity cost).
  • There is no relationship between the content of the threat or the threatened action on one hand, and Alice's usual plans or goals.
  • By the standards of common-sense morality, Bob did not deserve to be punished (or at least not as severely) and Alice did not deserve gains because of the relevant information or other previous actions.

Similar remarks apply to robbing at knifepoint or kidnapping.

Do they also apply to actions you refer to as threats to EA Munich? You may have information suggesting they do, and in that case I'd likely agree they'd be commonly described as threats. (Only "likely" because new information could also update my characterization of threats, which was quite ad hoc.)

However, my a priori guess would be that the alleged threats in the EA Munich case exhibited the above features to a much smaller extent. (This applies in particular to the alleged threat of disaffiliation, and less so, though still substantially, to the threats of disrupting the event.) Instead, I'd mostly expect things like:

  • Group X thinks that public appearances by Hanson are a danger to some value V they care about (say, gender equality). So in some sense they derive intrinsic value from reducing the number of Hanson's public appearances.
  • A significant part of Group X's mission is to further value V, and they routinely take other actions for the stated reason to further V.
  • Group X thinks that according to moral norms (that are either already in place or Group X thinks should be in place) Hanson no longer deserves to speak publicly without disruptions.

To be clear, I think the difference is gradual rather than black-and-white, and that I imagine in the EA Munich case some of these "threat properties" were present to some extent, e.g.:

  • Group X doesn't usually care about the planned topic of Hanson's talk (tort law).
  • Whether or not Group X agrees, by the standards of common-sense morality and widely shared norms, it is at least controversial whether Hanson should no longer be invited to give unrelated talks, and some responses such as physically disrupting the talk would arguably violate widely shared norms. (Part of the issue is that some of these norms are contested itself, with Group X aiming to change them and others defending them.)
  • Possibly some groups Y, Z, ... are involved whose main purpose is at first glance more removed from value V, but these groups nevertheless want to further their main mission in ways consistent with V, or they think it's useful to signal they care about V either intrinsically or as a concession to perceived outside pressure.

To illustrate the difference, consider the following hypotheticals, which I think would be referred to as blackmail/threats much less, or not at all, by common standards. If we abstract away from the content of values and beliefs, then I expect the alleged threats to EA Munich to be in some ways more similar to these, with some being overall quite similar to the first:

The Society for Evidence-Based Medicine has friendly relations and some affiliation with the Society of Curious Doctors. Then they learn that the Curious Doctors plan to host a talk by Dr. Sanhon on a new type of scalpel to be used in surgery. However, they know that Dr. Sanhon has in the past advocated for homeopathy. While this doesn't have any relevance to the topic of the planned talk, they have been concerned for a long time that hosting pro-homeopathy speakers at universities provides a false appearance of scientific credibility for homeopathy, which they believe is really harmful and antithetical to their mission of furthering evidence-based medicine. They didn't become aware of a similar case before, so they don't have a policy in place for how to react; after an ad-hoc discussion, they decide to inform the Curious Doctors that they plan to [disrupt the talk by Sanhon / remove their affiliation]. They believe the responses they've discussed would be good to do anyway if such talks happen, so they think of their message to the Curious Doctors more as an advance notice out of courtesy rather than as a threat.

Alice voted for Republican candidate R. Nashon because she hoped they would lower taxes. She's otherwise more sympathetic to Democratic policies, but cares most about taxation. Then she learns that Nashon has recently sponsored a tax increase bill. She writes to Nashon's office that she'll vote for the Democrats next time unless Nashon reverses his stance on taxation.

A group of transhumanists is concerned about existential risks from advanced AI. If they knew that no-one was going to build advanced AI, they'd happily focus on some of their other interests such as cryonics and life extension research. However, they think there's some chance that big tech company Hasnon Inc. will develop advanced AI and inadvertently destroy the world. Therefore, they voice their concerns about AI x-risk publicly and advocate for AI safety research. They are aware that this will be costly to Hasnon, e.g. because it could undermine consumer trust or trigger misguided regulation. The transhumanists have no intrinsic interest in harming Hasnon, in fact they mostly like Hasnon's products. Hasnon management invites them to talks with the aim of removing this PR problem and understands that the upshot of the transhumanists' position is "if you continue to develop AI, we'll continue to talk about AI x-risk".
Comment by max_daniel on A curriculum for Effective Altruists · 2020-08-30T06:32:30.598Z · score: 5 (3 votes) · EA · GW

There are also the reading lists recently put together by Richard Ngo.

Comment by max_daniel on A curriculum for Effective Altruists · 2020-08-30T06:31:23.231Z · score: 9 (5 votes) · EA · GW

Great question! I hope to find time to engage substantively later, but for now I just wanted to flag that I'm considering spending significant time from September or October putting together some kind of "EA curriculum", and that I'd be happy to talk to anyone interested in similar ideas. Send me a PM if you want to jump on a call in the next couple of weeks.

Comment by max_daniel on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-30T06:27:27.362Z · score: 7 (3 votes) · EA · GW

(FWIW, I hadn't heard of that theorem before but don't feel that surprised by the statement. But I'm quite curious if the proofs provide an intuitive understanding for why we need 4 dimensions.

Maybe this is hindsight bias, but I feel like if you had asked me "Can we get any member of [broad but appropriately restricted class of groups] as fundamental group of a [sufficiently general class of manifolds]?" my immediate reply would have been "uh, I'd need to think about this, but it's at least plausible that the answer is yes", whereas there is no way I'd have intuitively said "yes, but we need at least four dimensions".)

Comment by max_daniel on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-27T11:27:03.360Z · score: 2 (1 votes) · EA · GW

Ah, makes sense. Sorry, I think I just misread "half of the rest" as something like "the other half".

Comment by max_daniel on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-25T16:55:28.961Z · score: 7 (3 votes) · EA · GW
For about half of the rest, they'd had a single conversation with me (& I think usually not with anyone else on the committee).

I think we had 3-5 conversations prior to the RSP interview, the first one in 2017. Though I think "single conversation" still gives basically the correct impression as all conversations we've had could conceivably fit into one very long conversation. (And we spoke very irregularly, didn't know each other well, etc.)

I also had had a few very brief online conversations with another member of the selection committee, and I had applied to an organization run by them (with partially but not completely overlapping material).

Comment by max_daniel on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-21T17:17:12.724Z · score: 6 (3 votes) · EA · GW

One more data point: last year's Summer Research Fellowship had an acceptance rate of 11/~90.

Comment by max_daniel on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-21T12:21:05.994Z · score: 5 (3 votes) · EA · GW

Bono et al. (2017), based on reviewing the abstracts of all articles published between 2010 and 2015 and listed on Web of Science, found (N = 262 abstracts reporting non-normal distributions):

In terms of their frequency of appearance, the most-common non-normal distributions can be ranked in descending order as follows: gamma, negative binomial, multinomial, binomial, lognormal, and exponential.

[I read only the abstract and can't comment on the review's quality.]

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-20T19:04:16.921Z · score: 14 (5 votes) · EA · GW

Thanks for this suggestion! Like ems, I think this is major but not novel. For instance, the first version of Brian Tomasik's Charity Cost-Effectiveness in an Uncertain World was written in 2013. And here's a reply from Jess Riedel, also from 2013.

Again, I do think later work including Greaves's cluelessness paper was a valuable contribution. But the basic issue that impact may be dominated by flow-through effects on unintuitive variables, and the apparent sign flipping as new 'crucial considerations' are discovered, was clearly present in the 2013 and possibly earlier discussions.

Comment by max_daniel on Are some SDGs more important than others? Revealed country priorities from four years of VNRs · 2020-08-17T18:00:52.413Z · score: 7 (4 votes) · EA · GW

This criticism of the SDGs makes sense to me on its face. However, I noticed that Lant Pritchett writes the following on the website you link to, which seems in tension with that criticism:

I argue the new SDGs, while derided by many for being overambitious were the developing world’s reaction and rejection of the “low bar” kinky vision of development represented by the MDGs.
Comment by max_daniel on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-17T16:55:56.704Z · score: 3 (2 votes) · EA · GW

Really glad to see this published. :)

Silly question, I hope to engage more later:

IDA stands for Iterated Amplification and is a research agenda by Paul Christiano from OpenAI.

Doesn't it stand for Iterated Distillation and Amplification? Or what's the D doing there?

Comment by max_daniel on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-17T13:12:29.505Z · score: 5 (3 votes) · EA · GW

One data point that gets at something similar (i.e. to what extent did RSP recruit from people with an existing network in EA):

I was one of 9 people in the first cohort of RSP (start October 2018). Before starting:

  • 0 of the other 8 people I knew even moderately well,
  • 2 people I had met before in person at events once but didn't know well (to the extent of only having exchanged <10 sentences one-on-one as opposed to in group settings during the multiple-day events we both attended),
  • 2 additional people I had heard of e.g. from online discussions (but hadn't directly interacted with them online),
  • 4 people I had never heard of.

I was surprised by this, particularly the first point. (Positively, as I tend to think EA is too insular.)

I had been working at EAF/FRI (now CLR) since mid-2016, based in Berlin, and had attended several EAGs before. Overall I'd guess I was moderately well networked in EA but less so than people at key anglophone orgs such as CEA or Open Phil.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-17T12:15:13.687Z · score: 8 (5 votes) · EA · GW

Thanks for this suggestion!

Identifying whole-brain emulation (WBE) as a potentially transformative technology definitely meets my threshold for major. However, this happened well before 2015. E.g. WBE was discussed in Superintelligence (published 2014), the Hanson-Yudkowsky FOOM debate in 2008, and FHI's WBE roadmap is from 2008 as well. So it's not novel.

(I'm fairly sure the idea had been discussed even earlier by transhumanists, but don't know good sources off the top of my head.)

To be clear, I still think the marginal contribution of The Age of Em was important and valuable. But I think it's of the type "refinement of ideas that aren't strictly speaking novel", similar to the "Views on AI have become more diverse" examples Tobias gave above.

Comment by max_daniel on EA Meta Fund Grants – July 2020 · 2020-08-16T11:41:08.038Z · score: 6 (3 votes) · EA · GW

Thanks for the pushback. I think my above comment was in parts quite terse, and in particular the "odd" in "would be odd to single out" does a lot of work.

So yes, it agrees with my impression that in a reference class of explicit formalized groups similar to those you mentioned it's more common for men to be excluded than for women to be excluded. The landscape is too diverse to make confident claims about all of it, but I think in most cases I'd basically think it isn't odd to explicitly single out women as target audience while it would be odd to explicitly single out men.

I suspect it would require a longer conversation to hash out what determines my assessments of 'oddness' and how appropriate they are relative to various goals one might have. Very briefly, some inputs are whether there was a history of different treatment of some audience, whether that audience still faces specific obstacles, has specific experiences or specific needs, and whether there are imbalances in existing informal groups (e.g. similar to the above point on mentoring being ubiquitous surely a lot of informal networking happens at McKinsey).

I think this kind of reasoning is fairly standard and also explains many instances of target audience restriction and specialization other than the ones we've been discussing here. For example, consider the Veterans Administration in the US or Alcoholics Anonymous.

I think I don't want to go into much more depth here, partly because it would be a lot of work, partly because I think it would be a quite wide-ranging discussion that would be off-topic here (and possibly the EA Forum in general). I appreciate this may be frustrating, and if you think it would be important or very helpful to you to understand my views in more detail I'd be happy to have a conversation elsewhere (e.g. send me a PM and we can find a time to call).

FWIW, while I suspect we have a lot of underlying disagreements in this area, I've appreciated your pushback against orthodox liberal views in other discussions on this forum, and I'm sorry that your comment here was downvoted.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T17:12:06.248Z · score: 10 (3 votes) · EA · GW
The idea of AI as a 4th industrial revolution was pushed forward by economists, from what I can see? And then long-termists picked up the idea because of course it's relevant.

My impression is that when most economists talk about AI as a 4th industrial revolution they're talking about impacts much smaller than what longtermists have in mind when they talk about "impacts at least as big as the Industrial Revolution". For example, in a public Google doc on What Open Philanthropy means by "transformative AI", Luke Muehlhauser says:

Unfortunately, in our experience, most people who encounter this definition (understandably) misunderstand what we mean by it. In part this may be due to the ubiquity of discussions about how AI (and perhaps other "transformative technologies") may usher in a "4th industrial revolution," which sounds similar to our definition of transformative AI, but (in our experience) typically denotes a much smaller magnitude of transformation than we have in mind when discussing "transformative AI."

To explain, I think the common belief is that the (first) Industrial Revolution caused a shift to a new 'growth mode' characterized by much higher growth rates of total economic output as well as other indicators relevant to well-being (e.g. life expectancy). It is said to be comparable to only the agricultural revolution (and perhaps earlier fundamental changes such as the arrival of humans or major transitions in evolution).

By contrast, the so-called second and third industrial revolution (electricity, computers, ...) merely sustained the new trend that was kicked off by the first. Hence the title of Luke Muehlhauser's influential blog post There was only one industrial revolution.

So e.g. in terms of the economic growth rate, I think economists talk about a roughly business-as-usual scenario, while longtermists talk about the economic doubling time falling from a decade to a month.
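
(To put rough numbers on that contrast - simple arithmetic, my own illustration: a doubling time $T_d$ corresponds to a growth rate of $g = \ln 2 / T_d$, so a decade-long doubling time is roughly $7\%$ per year, while a one-month doubling time means output multiplying by $2^{12} \approx 4000$ within a single year - several orders of magnitude beyond any "business as usual" or typical 4th-industrial-revolution projection.)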

Regarding timing, I also think that some versions of longtermist concerns about AI predate talk about a 4th industrial revolution by decades. (By this, I mean concerns that are of major relevance for the long-term future and meet the 'transformative AI' impact bar, not concerns by people who explicitly considered themselves longtermists or were explicitly comparing their concerns to the Industrial Revolution.) For example, the idea of an intelligence explosion was stated by I. J. Good in 1965, and people also often see concerns about AI risk expressed in statements by Norbert Wiener in 1960 (e.g. here, p. 4) or Alan Turing in 1951.

--

I'm less sure about this, but I think most longtermists wouldn't consider AI to be a competitive cause area if their beliefs about the impacts of AI were similar to those of economists talking about a 4th industrial revolution. Personally, in that case I'd probably put it below all of bio, nuclear, and climate change.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-14T17:15:29.008Z · score: 6 (3 votes) · EA · GW

Interesting, it sounds like we're using these terms somewhat differently. I guess I'm thinking of (longtermist) macrostrategy and global priorities research as trying to find high-level answers to the questions "How can we do the most good?", "How can we best improve the long-term future?", and "How do we even think about these questions?".

The unilateralist's curse is relevant to the third question, and the insight about AI relevant to the second question.

Admittedly, while I'd count "AI may be an important cause area" as macrostrategy/GPR I'd probably exclude particulars on how best to align AI, and the boundary is fuzzy.

Comment by max_daniel on EA Meta Fund Grants – July 2020 · 2020-08-14T15:18:29.189Z · score: 17 (5 votes) · EA · GW
I think that many people would be afraid to pitch a mentoring scheme that was open to men given that WANBAM exists

FWIW, I find this very surprising, and like Denise personally have the opposite intuition.

(What I would be hesitant to do - but not because I'm afraid but because I think it's a bad idea - is to pitch a mentoring scheme that explicitly emphasizes or discusses at length that it's open to men, or any other audience which is normally included and would be odd to single out.)

In general, the default for most things is that they're open to men, and I struggle to think of examples where the mere existence of an opportunity with this property has been controversial. (This is different from suggesting that specific existing opportunities should be open to men, or bringing up this topic in contexts where people are trying to discuss issues specific to other audiences. These can be controversial, but I think often for good and very mundane reasons.)

It's also worth noting that a lot of mentoring happens outside of explicitly designed mentorship schemes. For example, as part of my job, I'm currently mentoring or advising six people, five of whom happen to be male. And personally I've e.g. benefitted from countless informal conversations about my career.

In fact, hopefully any kind of work together with more experienced people includes aspects of mentorship. Mentoring seems such a ubiquitous aspect of work relationships that it makes a lot of sense to me that specific mentorship schemes will tend to focus on gaps in the existing landscapes, e.g. aspects of mentorship not usually provided in the workplace or mentorship on issues specific to certain audiences.

Specific mentorship schemes thus only represent a tiny fraction of the total mentoring that's happening. As a consequence, I think the fact that some or all of them are only open to specific audiences or focus on specific kinds of mentorship is poor evidence for imbalances in the mentorship landscape at large. (Except perhaps indirectly, i.e. the fact that someone thought it's a good idea to start a scheme focused on X suggests there was a gap in X.)

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-14T11:41:19.766Z · score: 17 (7 votes) · EA · GW
So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that require orgs like FHI and CLR? Or could we do that without paid full-time researchers, just via a bunch of people blogging in their spare time?

I think that's a very interesting question, and one I've sometimes wondered about.

Oversimplifying a bit, my answer is: We need neither just bloggers nor just orgs like FHI and CLR. Instead, we need to move from a model where epistemic progress is achieved by individuals to one where it is achieved by a system characterized by a diversification of epistemic tasks, specialization, and division of labor. (So in many ways I think: we need to become more like academia.)

Very roughly, it seems to me that early intellectual progress in EA often happened via distinct and actionable insights found by individuals. E.g. "AI alignment is super important" or "donating to the best as opposed to typical charities is really important" or "current charity evaluators don't help with finding impactful charities" or "wow, if I donate 10% of my income I can save many lives over my lifetime" or "oh wait, there are orders of magnitudes more wild than farmed animals, so we need to consider the impact of farmed animal advocacy on wild animals".

(Of course, it's a spectrum. Discussion and collaboration were still important, my claim is just that there were significantly more "insights within individuals" than later.)

But it seems to me that most low-hanging fruits have been plucked. So it can be useful to look at other more mature epistemic endeavours. And if I reflect on those it strikes me that in some sense most of the important cognition isn't located in any single mind. E.g. for complex questions about the world, it's the system of science that delivers answers via irreducible properties like "scientific consensus". And while in hindsight it's often possible to summarize epistemic progress in a way that can be understood by individuals, and looks like it could have been achieved by them, the actual progress was distributed across many minds.

(Similarly, the political system doesn't deliver good policies because there's a superintelligent policymaker but because of checks and balances etc.; the justice system doesn't deliver good settlement of disputes because there's a super-Solomonic judge but because of the rules governing court cases, which involve different roles such as attorneys, the prosecution, and judges.)

This also explains why, I think correctly, discussions on how to improve science usually focus on systemic properties like funding, incentives, and institutions. As opposed to, say, how to improve the IQ or rationality of individual scientists.

And similarly, I think we need to focus less on how to improve individuals and more on how to set up a system that can deliver epistemic progress across larger time scales and larger numbers of people less selected by who happens to know whom.

Comment by max_daniel on When Planning Your Career, Start Early · 2020-08-14T10:46:43.102Z · score: 7 (5 votes) · EA · GW

Thanks for sharing your perspective. FWIW, it somewhat resonates with me, even though I said I think I'd have benefitted from hearing about the orthodox EA perspective on career planning much earlier.

I think the two things are consistent roughly because in my specific case I think most of the benefits would have come from "becoming more agenty" in a quite generic sense as well as correcting some misconceptions I used to have (e.g. roughly "only people who care about getting rich or success by conventional standards think about their 'careers', I just want to do maths").

Comment by max_daniel on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-14T10:38:04.066Z · score: 6 (3 votes) · EA · GW
So more generally, I'd say that population ethics is the study of how to compare the moral value of different populations.

As an aside, even if I agreed with that definition, I don't think infinite ethics would be a subset of population ethics.

The distinctive problems of infinite ethics arise roughly when we can affect an infinite number of value-bearing locations. But this is independent of what those value-bearing locations are - in particular, they need not be people.

For example, we'd run into infinitarian paralysis if we thought that each of our actions affected the axiological value of an infinite number of paintings.
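
To make this concrete, here's a minimal sketch of the paralysis argument (my own illustration, with notation introduced just for this example). Suppose there is a fixed, infinite collection of value-bearing locations with values w_1, w_2, ... summing to +∞, and suppose an action changes only finitely many of those values, each by a finite amount. Then

```latex
% Totals before and after the action (only finitely many terms change):
\sum_{i=1}^{\infty} w_i \,=\, +\infty
\qquad \text{and} \qquad
\sum_{i=1}^{\infty} w_i' \,=\, +\infty .
```

So a criterion that ranks outcomes by total value is silent between performing the action and not performing it - and nothing in this argument requires the locations to be people rather than, say, paintings.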

Comment by max_daniel on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-14T10:19:13.165Z · score: 4 (2 votes) · EA · GW
I don't think variable populations are a defining feature of population ethics - do you have a source for that?

For example, the abstract of Greaves (2017) says (emphasis mine): "Population axiology is the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live."

Similarly, the Wikipedia article on Population ethics starts with (emphasis mine): "Population ethics is the philosophical study of the ethical problems arising when our actions affect who is born and how many people are born in the future."

More fuzzy indications are that Part IV of Reasons and Persons is titled "Future Generations" and Gustaf Arrhenius's seminal dissertation is titled "Future Generations. A Challenge for Moral Theory". And Arrhenius says (emphasis mine): "The main problem has been to find an adequate population theory, that is, a theory about the moral value of states of affairs where the number of people, the quality of their lives, and their identities may vary."

--

It also seems to me that your candidate definition of "how to compare the moral value of different populations" is too broad to be useful. For example, to answer that question for a given population we also need to know what individual well-being consists in (not just how to aggregate individual welfare to get population welfare). So on your definition the whole question of well-being, including the classic debates between hedonism, desire satisfaction, and objective list theories, is subsumed under population ethics! Whereas I think it's useful to view variable-population questions as their own subfield precisely because they arise no matter which view you take on the nature of well-being.

For example, one question discussed in population ethics is when a more equal population with lower total welfare is better than a less equal population with higher total welfare.

Hm, interesting - I definitely agree that this question is relevant, and discussed in the literature, for fixed-population settings. And if I thought it was part of population ethics I'd agree that "variable-population cases" wasn't a good definition of population ethics. But it wouldn't have occurred to me to subsume that question under population ethics, and I can't recall it being labeled population ethics anywhere.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:15:07.710Z · score: 16 (6 votes) · EA · GW
Do you have a sense of how long is typically the lag between an insight first being had, and being recognised as major? I think this might often be several years.

As an aside, there are a few papers examining this in the case of academia. One interesting finding is that there are a few outliers that only get widely recognized after decades, much longer than for typical insights. The term for those is 'sleeping beauties'. In a review paper on the science of science, Clauset et al. (2016, p. 478) say:

A systematic analysis of nearly 25 million publications in the natural and social sciences over the past 100 years found that sleeping beauties occur in all fields of study (9). Examples include a now famous 1935 paper by Einstein, Podolsky, and Rosen on quantum mechanics; a 1936 paper by Wenzel on waterproofing materials; and a 1958 paper by Rosenblatt on artificial neural networks.

The main references appear to be van Raan (2004) and Ke et al. (2015).
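
As a toy illustration of what such a delayed-recognition pattern looks like quantitatively (this is just a simple indicator I'm sketching for illustration; Ke et al. (2015) define a more careful 'beauty coefficient'):

```python
def delayed_recognition(citations_per_year):
    """Toy 'sleeping beauty' indicator.

    citations_per_year: citation counts in years 0, 1, 2, ... after publication.
    Returns the year of peak citations and the share of all citations received
    before that peak. A sleeping beauty has a late peak and a small pre-peak share.
    """
    peak_year = max(range(len(citations_per_year)),
                    key=lambda t: citations_per_year[t])
    total = sum(citations_per_year)
    pre_peak_share = sum(citations_per_year[:peak_year]) / total if total else 0.0
    return peak_year, pre_peak_share

# A paper that is nearly ignored for a decade and then rediscovered:
print(delayed_recognition([0, 1, 0, 0, 2, 1, 0, 0, 0, 0, 40, 120]))
# -> (11, 0.268...): the peak comes 11 years after publication,
#    with only about 27% of citations arriving before it.
```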

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:08:00.526Z · score: 3 (2 votes) · EA · GW

I think for the purpose of this question I was imagining dating insights to roughly 3/4 of the way between T1 and T2.

I do agree that the lag time can be several years.

Comment by max_daniel on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-13T11:03:55.132Z · score: 8 (4 votes) · EA · GW
To me they seem basically the same topic: infinite ethics is the subset of population ethics dealing with infinitely large populations. Do you disagree with this characterisation?

This doesn't sound right to me. I think a good characterisation of population ethics is that it's concerned with variable-population as opposed to fixed-population choices. But infinite ethics is relevant in fixed-population settings as well; e.g. the infinitarian paralysis worry described in Bostrom (2003) applies for a fixed infinite population whose welfare is affected by our actions.

Comment by max_daniel on Propose and vote on potential tags · 2020-08-13T10:05:01.055Z · score: 4 (2 votes) · EA · GW

Global priorities research and macrostrategy.

I wanted to use these tags when asking this question, but they don't seem to exist.

There is a tag on cause prioritization. But I think it'd be more useful if that tag were focused on content that is directly relevant to prioritizing between causes, e.g. "here is why I think cause A is more tractable than cause B" or "here's a framework for assessing the neglectedness of a cause". Some global priorities or macrostrategy research has this property, but not all of it. E.g. I think it'd be a bit of a stretch to apply the cause prioritization label to this (amazing!) post on Quantifying anthropic effects on the Fermi paradox.

Comment by max_daniel on Max_Daniel's Shortform · 2020-08-13T09:54:27.201Z · score: 14 (6 votes) · EA · GW

[EA's focus on marginal individual action over structure is a poor fit for dealing with info hazards.]

I tend to think that EAs are sometimes too focused on optimizing the marginal utility of individual actions, as opposed to improving larger-scale structures. For example, I think it'd be good if there was as much content and cultural awareness on how to build good organizations as there is on how to improve individual cognition. Think about how often you've heard of "self-improvement" or "rationality" as opposed to things like "organizational development".

(Yes, this is similar to the good old 'systemic change' objection aimed at "what EAs tend to do in practice" rather than "what is implied by EAs' normative views".)

It occurred to me that one area where this might bite in particular is info hazards.

I often see individual researchers agonizing about whether they can publish something they have written, which of several framings to use, and even which ideas are safe to mention in public. I do think that this can sometimes be really important, and that there are areas with a predictably high concentration of such cases, e.g. bio.

However, in many cases I feel like these concerns are far-fetched and poorly targeted.

  • They are far-fetched when they overestimate the effects a marginal publication by a non-prominent person can have on the world. E.g. the US government isn't going to start an AGI project because you posted a thought on AI timelines on LessWrong.
  • They are poorly targeted when they focus on the immediate effects of marginal individual action. E.g., how much does my paper contribute to 'AI capabilities'? What connotations will readers read into different terms I could use for the same concept?

On the other hand, in such cases there often are important info hazards in the areas the researchers are working in. For example, I think it's at least plausible that there is true information on, say, the prospects of and paths to transformative AI that would be bad to bring to the attention of, say, senior US or Chinese government officials.

It's not the presence of these hazards but their connection with typical individual researcher actions that I find dubious. To address these concerns, rather than forward-chaining from individual actions one is already considering for other reasons, I suspect it'd be more fruitful to backward-chain from the locations of large adverse effects (e.g. the US government starting an AGI project, if you think that's bad). I suspect this would lead to a focus on structure in the analysis, and a focus on policy for solutions. Concretely, questions like:

  • What are the structural mechanisms for how information gets escalated to higher levels of seniority within, e.g., the US government or Alphabet?
  • Given current incentives, how many publications of potentially hazardous information do we expect, and through which channels?
  • What are mechanisms that can massively amplify the visibility of information? E.g., when will media consider something newsworthy, and when and how do new academic subfields form?

Comment by max_daniel on When Planning Your Career, Start Early · 2020-08-13T07:25:20.673Z · score: 4 (3 votes) · EA · GW

I think that's a good point worth highlighting.

As one data point, I first heard of EA only toward the end of my master's degree. Before, I had a fairly different mindset regarding my career. It seems extremely obvious to me that it would have been very beneficial if I had heard of EA, and in particular 80K's career advice, years earlier.

(Even though it might not be that obvious from the outside, e.g. I started an "EA job" immediately after I finished my master's.)

Comment by max_daniel on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T08:31:20.808Z · score: 17 (9 votes) · EA · GW

[I learned the following from Tom Davidson, who also told me that this perspective goes back to at least Carnap. Note that all of this is essentially just an explanation of beta distributions.]

Laplace's Rule of Succession really is a special case of a more general family of rules. One useful way of describing the general family is as follows:

Recall that Laplace's Rule of Succession essentially describes a prior for Bernoulli experiments, i.e. a series of independent trials with a binary outcome of success or failure. E.g. every day we observe whether the sun rises ('success') or not ('failure') [and, perhaps wrongly, we assume that whether the sun rises on one day is independent of whether it rose on any other day].

The family of priors is as follows: We pretend that prior to any actual trials we've seen N_v "virtual trials", among which were M_v successes. Then at any point after having seen N_a actual trials with M_a successes, we adopt the maximum likelihood estimate for the success probability p of a single trial based on both virtual and actual observations. I.e.,

p = (M_v + M_a) / (N_v + N_a).

Laplace's Rule of Succession simply is the special case for N_v = 2 and M_v = 1. In particular, this means that before the first actual trial we expect it to succeed with probability 1/2. But Laplace's Rule isn't the only prior with that property! We'd also expect the first trial to succeed with probability 1/2 if we took, e.g., N_v = 42 and M_v = 21. The difference compared to Laplace's Rule would be that our estimate for p will move much more slowly in response to actual observations - intuitively, we'll need 42 actual observations before they get the same weight as the virtual observations, whereas for Laplace's Rule this happens after 2 actual observations.

And of course, we don't have to "start" with p = 1/2 either - by varying N_v and M_v we can set this to any value.
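
For concreteness, here's a minimal sketch of this family of estimators in Python (the function name and argument names are mine; the defaults are chosen so that they recover Laplace's Rule):

```python
def succession_estimate(successes, trials, virtual_successes=1, virtual_trials=2):
    """Estimate the probability that the next trial succeeds.

    We pretend to have seen `virtual_trials` trials containing
    `virtual_successes` successes before the actual data. The defaults
    (1 success in 2 virtual trials) give Laplace's Rule of Succession.
    """
    return (virtual_successes + successes) / (virtual_trials + trials)

# Laplace's Rule: after 10 sunrises in 10 days, P(sunrise tomorrow) = 11/12.
print(succession_estimate(10, 10))          # 0.9166...

# Same prior mean of 1/2, but 42 virtual trials: the estimate moves much
# more slowly towards the observed frequency of 1.
print(succession_estimate(10, 10, 21, 42))  # 31/52, i.e. about 0.596
```

Equivalently, this is the posterior mean of a Beta(M_v, N_v - M_v) prior after observing the actual trials; Laplace's Rule corresponds to the uniform Beta(1, 1) prior, which is the sense in which all of this is just an explanation of beta distributions.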