Posts

[AN #80]: Why AI risk might be solved without additional intervention from longtermists 2020-01-03T07:52:24.981Z
Summary of Stuart Russell's new book, "Human Compatible" 2019-10-19T19:56:52.174Z
Alignment Newsletter One Year Retrospective 2019-04-10T07:00:34.021Z
Thoughts on the "Meta Trap" 2016-12-20T21:36:39.498Z
EA Berkeley Spring 2016 Retrospective 2016-09-11T06:37:02.183Z
EAGxBerkeley 2016 Retrospective 2016-09-11T06:27:16.316Z

Comments

Comment by rohinmshah on High Impact Careers in Formal Verification: Artificial Intelligence · 2021-06-15T17:02:08.763Z · EA · GW

Planned summary for the Alignment Newsletter:

This post considers the applicability of formal verification techniques to AI alignment. To “verify” a property, you need a specification of that property to verify against. The author considers three possibilities:

1. **Formally specifiable safety:** we can write down a specification for safe AI, _and_ we’ll be able to find a computational description or implementation

2. **Informally specifiable safety:** we can write down a specification for safe AI mathematically or philosophically, but we will not be able to produce a computational version

3. **Nonspecifiable safety:** we will never write down a specification for safe AI.

Formal verification techniques are applicable only to the first case. Unfortunately, it seems that no one expects the first case to hold in practice: even CHAI, with its mission of building provably beneficial AI systems, is talking about proofs in the informal specification case (which still includes math), on the basis of comments like [these](https://www.alignmentforum.org/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace?commentId=4LhBaSuYPyFvTnDrQ) in Human Compatible. In addition, it currently seems particularly hard for experts in formal verification to impact actual practice, and there doesn’t seem to be much reason to expect that to change. As a result, the author is relatively pessimistic about formal verification as a route to reduce existential risk from failures of AI alignment.

Comment by rohinmshah on Progress studies vs. longtermist EA: some differences · 2021-06-03T17:34:35.484Z · EA · GW

I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.

If you mean like 10x greater chance, I think that's plausible (though larger than I would say). If you mean 1000x greater chance, that doesn't seem defensible.

In both fields you basically ~can't experiment with the actual thing you care about (you can't just build a superintelligent AI and check whether it is aligned; you mostly can't run an intervention on the entire world and check whether world GDP went up). You instead have to rely on proxies.

In some ways it is a lot easier to run proxy experiments for AI alignment -- you can train AI systems right now, and run actual proposals in code on those systems, and see what they do; this usually takes somewhere between hours and weeks. It seems a lot harder to do this for "improving GDP growth" (though perhaps there are techniques I don't know about).

I agree that PS has an advantage with historical data (though I don't see why economic theory is particularly better than AI theory), and this is a pretty major difference. Still, I don't think it goes from "good chance of making a difference" to "basically zero chance of making a difference".

The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

Fwiw, I think AI alignment is relevant to current AI systems with which we have experience even if the catastrophic versions are in the future, and we do get chances to get it wrong and course-correct, but we can set that aside for now, since I'd probably still disagree even if I changed my mind on that. (Like, it is hard to do armchair theory without experimental data, but it's not so hard that you should conclude that you're completely doomed and there's no point in trying.)

Comment by rohinmshah on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T20:11:25.322Z · EA · GW

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Yup.

Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.

That's what I would say.

I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).

If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you want to invest in A as much as possible, until the opportunity dries up. At a civilizational scale, opportunities dry up quickly (i.e. with millions, maybe billions of dollars), so you see lots of diversity. At EA scales, this is less true.
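
To make the allocation logic concrete, here is a minimal sketch (in Python, with made-up benefit-per-dollar figures and capacities rather than anyone's actual estimates) of greedily funding the highest marginal-benefit opportunity until it dries up:

```python
# Minimal sketch with illustrative numbers: fund the highest benefit-per-dollar
# opportunity until its capacity is exhausted, then move on to the next one.

def allocate(budget, opportunities):
    """opportunities: list of (name, benefit_per_dollar, capacity_in_dollars)."""
    allocation = {}
    for name, benefit, capacity in sorted(opportunities, key=lambda o: -o[1]):
        spend = min(budget, capacity)
        allocation[name] = spend
        budget -= spend
        if budget == 0:
            break
    return allocation

# Hypothetical capacities: A dries up after $5M, B can absorb far more.
opps = [("A (200 benefit/$)", 200, 5_000_000), ("B (50 benefit/$)", 50, 10_000_000_000)]
print(allocate(1_000_000, opps))       # EA-scale budget: everything goes to A
print(allocate(1_000_000_000, opps))   # civilizational-scale budget: A fills up, the rest goes to B
```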

So I do agree that some XR folks (myself included) would, if given a pot of millions of dollars to distribute, allocate it all to XR; I don't think the same people would do it for e.g. trillions of dollars. (I don't know where in the middle it changes.)

I think Open Phil, at the billions of dollars range, does in fact invest in lots of opportunities, some of which are arguably about improving progress. (Though note that they are not "fully" XR-focused, see e.g. Worldview Diversification.)

Comment by rohinmshah on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T19:57:10.455Z · EA · GW

I kinda sorta answered Q2 above (I don't really have anything to add to it).

Q3: I'm not too clear on this myself. I'm just an object-level AI alignment researcher :P

Q4: I broadly agree this is a problem, though I think this:

Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.

seems pretty unlikely to me, where I'm interpreting it as "civilization stops making any progress and regresses to the lower quality of life from the past, and this is a permanent effect". 

I haven't thought about it much, but my immediate reaction is that it seems a lot harder to influence the world in a good way through the public, and so other actions seem better. That being said, you could search for "raising the sanity waterline" (probably more so on LessWrong than here) for some discussion of approaches to this sort of social progress (though it isn't about educating people about the value of progress in particular).

Comment by rohinmshah on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T19:51:33.504Z · EA · GW

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

Sure. I think most longtermists wouldn't endorse this (though a small minority probably would).

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.

I don't think this is negative, I think there are better opportunities to affect the future (along the lines of Ben's comment).

I think this is mostly true of other EA / XR folks as well (or at least, if they think it is negative, they aren't confident enough in it to actually say "please stop progress in general"). As I mentioned above, people (including me) might say it is negative in specific areas, such as AGI development, but not more broadly.

And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.

I agree with that (and I think most others would too).

Comment by rohinmshah on Progress studies vs. longtermist EA: some differences · 2021-06-02T19:29:43.942Z · EA · GW

But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.

I think there's a fear of progress in specific areas (e.g. AGI and certain kinds of bio) but not a general one? At least I'm in favor of progress generally and against progress in some specific areas where we have good object-level arguments for why progress in those areas in particular could be very risky.

(I also think EA/XR folks are primarily advocating for the development of specific safety measures, and not for us to stop progress, but I agree there is at least some amount of "stop progress" in the mix.)

Re: (2), I'm somewhat sympathetic to this, but all the ways I'm sympathetic to it seem to also apply to progress studies (i.e. I'd be sympathetic to "our ability to influence the pace of progress is too low"), so I'm not sure how this becomes a crux.

Comment by rohinmshah on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T19:13:38.623Z · EA · GW

If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.

Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context

I wouldn't agree that this is a Pascal's Mugging. In fact, in a comment on the post you quote, Eliezer says:

If an asteroid were genuinely en route, large enough to wipe out humanity, possibly stoppable, and nobody was doing anything about this 10% probability, I would still be working on FAI but I would be screaming pretty loudly about the asteroid on the side. If the asteroid is just going to wipe out a country, I'll make sure I'm not in that country and then keep working on x-risk.

I usually think of Pascal's Mugging as centrally about cases where you have a tiny probability of affecting the world in a huge way. In contrast, your example seems to be about trading off between uncertain large-sized effects and certain medium-sized effects. ("Medium" is only meant to be relative to "large"; obviously both effects are huge on some absolute scale.)

Perhaps your point is that XR can only make a tiny, tiny dent in the probability of extinction; I think most XR folks would have one of two responses:

  1. No, we can make a reasonably large dent. (This would be my response.) Off the top of my head I might say that the AI safety community as a whole could knock off ~3 percentage points from x-risk.
  2. X-risk is so over-determined (i.e. > 90%, maybe > 99%) that even though we can't affect it much, there's no other intervention that's any better (and in particular, progress studies doesn't matter because we die before it has any impact).

The other three questions you mention don't feel cruxy.

The second one (default-good vs. default-bad) doesn't really make sense to me -- I'd say something like "progress tends to increase our scope of action, which can lead to major improvements in quality of life, and also increases the size of possible risks (especially from misuse)".

Comment by rohinmshah on Draft report on existential risk from power-seeking AI · 2021-06-01T22:09:36.248Z · EA · GW

Results are in this post.

Comment by rohinmshah on Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021) · 2021-06-01T14:13:29.378Z · EA · GW

A lot of longtermists do pay attention to this sort of stuff; they just tend not to post about it on the EA Forum / LessWrong. I personally heard about the report from many different people after it was published, and also from a couple of people even before it was published (when there was a chance to provide input on it).

In general I expect that for any sufficiently large object-level thing, the discourse on the EA Forum will lag pretty far behind the discourse of people actively working on that thing (whether that discourse is public or not).  I read the EA Forum because (1) I'm interested in EA and (2) I'd like to correct misconceptions about AI alignment in EA. I would not read it as a source of articles relevant to AI alignment (though every once in a while they do come up).

Comment by rohinmshah on Draft report on existential risk from power-seeking AI · 2021-05-08T16:04:38.265Z · EA · GW

If AGI doom were likely, what additional evidence would we expect to see?

  1. Humans are pursuing convergent instrumental subgoals much more. (Related question: will AGIs want to take over the world?)
    1. A lot more anti-aging research is going on.
    2. Children's inheritances are ~always conditional on the child following some sort of rule imposed by the parent, intended to further the parent's goals after their death.
    3. Holidays and vacations are rare; when they are taken it is explicitly a form of rejuvenation before getting back to earning tons of money.
    4. Humans look like they are automatically strategic.
  2. Humans are way worse at coordination. (Related question: can humans coordinate to prevent AI risk?)
    1. Nuclear war happened some time after WW2.
    2. Airplanes crash a lot more.
    3. Unions never worked.
  3. Economic incentives point strongly towards generality rather than specialization. (Related question: how general will AI systems be? Will they be capable of taking over the world?)
    1. Universities don't have "majors", instead they just teach you how to be more generally intelligent.
    2. (Really the entire world would look hugely different if this were the case; I struggle to imagine it.)

There's probably more, I haven't thought very long about it.

(Before responses of the form "what about e.g. the botched COVID response?", let me note that this is about additional evidence; I'm not denying that there is existing evidence.)

Comment by rohinmshah on Draft report on existential risk from power-seeking AI · 2021-04-30T21:32:26.128Z · EA · GW

I think that at least 80% of the AI safety researchers at MIRI, FHI, CHAI, OpenAI, and DeepMind would currently assign a >10% probability to this claim: "The research community will fail to solve one or more technical AI safety problems, and as a consequence there will be a permanent and drastic reduction in the amount of value in our future."

If you're still making this claim now, want to bet on it? (We'd first have to operationalize who counts as an "AI safety researcher".)

I also think it wasn't true in Sep 2017, but I'm less confident about that, and it's not as easy to bet on.

Comment by rohinmshah on A conversation with Rohin Shah · 2021-04-24T17:29:29.672Z · EA · GW

In that sentence I meant "a treacherous turn that leads to an existential catastrophe", so I don't think the example you link updates me strongly on that.

While Luke talks about that scenario as an example of a treacherous turn, you could equally well talk about it as an example of "deception", since the evolved creatures are "artificially" reducing their rates of reproduction to give the supervisor / algorithm a "false belief" that they are bad at reproducing. Another example along these lines is when a robot hand "deceives" its human overseer into thinking that it has grasped a ball, when it is in fact in front of the ball.

I think really though these examples aren't that informative because it doesn't seem reasonable to say that the AI system is "trying" to do something in these examples, or that it does some things "deliberately". These behaviors were learned through trial and error. An existential-catastrophe-style treacherous turn would presumably not happen through trial and error. (Even if it did, it seems like there must have been at least some cases where it tried and failed to take over the world, which seems like a clear and obvious warning shot that we for some reason completely ignored.)

(If it isn't clear, the thing that I care about is something like "will there be some 'warning shot' that greatly increases the level of concern people have about AI systems, before it is too late".)

Comment by rohinmshah on Coherence arguments imply a force for goal-directed behavior · 2021-04-06T22:04:05.123Z · EA · GW

I respond here; TL;DR is that I meant a different thing than the thing Katja is responding to.

Comment by rohinmshah on Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance · 2021-02-28T03:15:12.846Z · EA · GW

In a working paper, Christian Tarsney comes up with a clever resolution to this conflict

Fwiw, I was expecting that the "resolution" would be an argument for why you shouldn't take the wager.

If you do consider it a resolution: if Alice said she would torture a googol people if you didn't give her $5, would you give her the $5? (And if so, would you keep doing it if she kept upping the price, after you had already paid it?)

Comment by rohinmshah on Is this a good way to bet on short timelines? · 2020-11-30T19:39:13.443Z · EA · GW

Counterfactuals are hard. I wouldn't be committing to donate it. (Also, if I were going to donate it, but it would have been donated anyway, then $4,000 no longer seems worth it if we ignore the other benefits.)

I expect at least one of us to update at least slightly.

I agree with "at least slightly".

I'd be interested to know why you disagree

Idk, empirically when I discuss things with people whose beliefs are sufficiently different from mine, it doesn't seem like their behavior changes much afterwards, even if they say they updated towards X. Similarly, when people talk to me, I often don't see myself making any particular changes to how I think or behave. There's definitely change over the course of a year, but it feels extremely difficult to ascribe that to particular things, and I think it more often comes from reading things that people wrote, rather than talking to them.

Comment by rohinmshah on Is this a good way to bet on short timelines? · 2020-11-29T17:10:00.302Z · EA · GW

I'm happy to sell an hour of my time towards something with no impact at $1,000, so that puts an upper bound of $4,000. (Though currently I've overcommitted myself, so for the next month or two it might be ~2x higher.)

That being said, I do think it's valuable for people working on AI safety to at least understand each other's positions; if you don't think you can do that re: my position, I'd probably be willing to have that conversation without being paid at all (after the next month or two). And I do expect to understand your position better, though I don't expect to update towards it, so that's another benefit.

Comment by rohinmshah on Is this a good way to bet on short timelines? · 2020-11-28T17:15:25.650Z · EA · GW

I'm pretty sure I have longer timelines than you. On each of the bets:

  1. I would take this, but also I like to think if I did update towards your position I would say that anyway (and I would say that you got it right earlier if you asked me to do so, to the extent that I thought you got it right for the right reasons or something).
  2. I probably wouldn't take this (unless X was quite high), because I don't really expect either of us to update to the other's position.
  3. I wouldn't take this; I am very pessimistic about my ability to do research that I'm not inside-view excited about (like, my 50% confidence interval is that I'd have 10-100x less impact even in the case where someone with the same timelines as me is choosing the project, if they don't agree with me on research priorities). It isn't necessary that someone with shorter timelines than me would choose projects I'm not excited about, but from what I know about what you care about working on, I think it would be the case here. Similarly, I am pessimistic about your ability to do research on broad topics that I choose on my inside view. (This isn't specific to you; it applies to anyone who doesn't share most of my views.)

Comment by rohinmshah on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T00:20:32.863Z · EA · GW

Yeah, I think I agree with everything you're saying. I think we were probably thinking of different aspects of the situation -- I'm imagining the sorts of crusades that were given as examples in the OP (for which a good faith assumption seems straightforwardly wrong, and a bad faith assumption seems straightforwardly correct), whereas you're imagining other situations like a university withdrawing affiliation (where it seems far more murky and hard to label as good or bad faith).

Also, I realize this wasn't clear before, but I emphatically don't think that making threats is necessarily immoral or even bad; it depends on the context (as you've been elucidating).

Comment by rohinmshah on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T16:06:35.491Z · EA · GW

I agree with parts of this and disagree with other parts.

First off:

First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at ones respective BATNAs even if there was mutually beneficial compromises to be struck.

Definitely agree that pre-committing seems like a bad idea (as you could probably guess from my previous comment).

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail.

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.

(I definitely agree that if someone emails you saying "I think this speaker is bad and you shouldn't invite him", and after some discussion they say "I'm sorry but I can't agree with you and if you go through with this event I will protest / criticize you / have the university withdraw affiliation", you should not treat this as a bad faith attack. Afaik this was not the case with EA Munich, though I don't know the details.)

----

Re: the first five paragraphs: I feel like this is disagreeing on how to use the word "bully" or "threat", rather than anything super important. I'll just make one note:

Alice is still not a bully even if her motivating beliefs re. Bob are both completely mistaken and unreasonable. She's also still not a bully even if Alice's implied second-order norms are wrong (e.g. maybe the public square would be better off if people didn't stridently object to hosting speakers based on their supposed views on topics they are not speaking upon, etc.)

I'd agree with this if you could reasonably expect to convince Alice that she's wrong on these counts, such that she then stops doing things like

(e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation

But otherwise, given that she's taking actions that destroy value for Bob without generating value for Alice (except via their impact on Bob's actions), I think it is fine to think of this as a threat. (I am less attached to the bully metaphor -- I meant that as an example of a threat.)

Comment by rohinmshah on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T05:51:53.547Z · EA · GW

Yeah, I'm aware that is the emotional response (I feel it too), and I agree the game theoretic reason for not giving in to threats is important. However, it's certainly not a theorem of game theory that you always do better if you don't give in to threats, and sometimes giving in will be the right decision.

we will find you and we will make sure it was not worth it for you, at the cost of our own resources

This is often not an option. (It seems pretty hard to retaliate against an online mob, though I suppose you could randomly select particular members to retaliate against.)

Another good example is bullying. A child has ~no resources to speak of, and bullies will threaten to hurt them unless they do X. Would you really advise this child not to give in to the bully?

(Assume for the sake of the hypothetical the child has already tried to get adults involved and it has done ~nothing, as I am told is in fact often the case. No, the child can't coordinate with other children to fight the bully, because children are not that good at coordinating.)

Comment by rohinmshah on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T20:24:03.871Z · EA · GW

It seems like you believe that one's decision of whether or not to disinvite a speaker should depend only on one's beliefs about the speaker's character, intellectual merits, etc. and in particular not on how other people would react.

Suppose that you receive a credible threat that if you let already-invited person X speak at your event, then multiple bombs would be set off, killing hundreds of people. Can we agree that in that situation it is correct to cancel the event?

If so, then it seems like at least in extreme cases, you agree that the decision of whether or not to hold an event can depend on how other people react. I don't see why you assume that in the EA Munich case the consequences are not bad enough to make EA Munich's decision reasonable.

Some plausible (though not probable) consequences of hosting the talk:

  • Protests disrupting the event (this has previously happened to a local EA group)
  • Organizers themselves get cancelled
  • Most members of the club leave due to risk of the above or disagreements with the club's priorities

At least the first two seem quite bad, there's room for debate on the third.

In addition, while I agree that the extremes of cancel culture are in fact very harmful for EA, it's hard to argue that disinviting a speaker is anywhere near the level of any of the examples you give. Notably, they are not calling for a mob to e.g. remove Robin Hanson from his post; they are simply cancelling one particular talk that he was going to give at their venue. This definitely does have a negative impact on norms, but it doesn't seem obvious to me that the impact is very large.

Separately, I think it is also reasonable for a random person to come to believe that Robin Hanson is not arguing in good faith.

(Note: I'm still undecided on whether or not the decision itself was good or not.)

Comment by rohinmshah on Getting money out of politics and into charity · 2020-10-06T18:22:45.544Z · EA · GW

I'm super excited that you're doing this! It's something I've wanted to exist for a long time, and I considered doing it myself a few years ago. It definitely seems like the legal issues are the biggest hurdle. Perhaps I'm being naively optimistic, but I was at least somewhat hopeful that you could get the political parties to not hate you, by phrasing it as "we're taking away money from the other party".

I'm happy to chat about implementation details, unfortunately I'm pretty busy and can't actually commit enough time to help with, you know, actual implementation. Also unfortunately, it seems I have a similar background to you, and so wouldn't really complement your knowledge very well.

If I were to donate to politics (which I could see happening), I would very likely use this service if it existed.

Comment by rohinmshah on Getting money out of politics and into charity · 2020-10-06T18:14:50.145Z · EA · GW
The obvious criticism, I think, is: "couldn't they benefit more from keeping the money?"

You want people to not have the money any more, otherwise e.g. a single Democrat with a $1K budget could donate repeatedly to match ten Republicans donating $1K each.
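
As a toy illustration of why (a hypothetical simulation I wrote, not part of the actual proposal): if the platform refunded matched money instead of sending it to charity, one donor could recycle the same $1K to offset arbitrarily many opposing pledges.

```python
# Toy model of a hypothetical matching platform that refunds matched money
# (the design being argued against). All amounts are illustrative.

def neutralized_with_refunds(opposing_pledges, budget):
    """How much opposing money one donor can offset if matched funds are refunded."""
    cash, neutralized = budget, 0
    for pledge in opposing_pledges:
        match = min(cash, pledge)
        cash -= match      # money is committed to the match...
        cash += match      # ...then refunded, so the donor's budget never shrinks
        neutralized += match
    return neutralized

print(neutralized_with_refunds([1000] * 10, budget=1000))  # 10000: $1K offsets $10K of pledges
```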

I'm not sure what the equilibrium would be, but it seems likely it would evolve towards all money being exactly matched, being returned to the users, and then being donated to the parties the normal way. Or perhaps people would stop using it altogether.

Another important detail here is which charities the money goes to -- the Republican donor may not feel great if, after matching, the Democrat's donation goes to e.g. Planned Parenthood. In the long run, I'd probably try to do surveys of users to find out which charities they'd object to the other side giving to, and not include those. But initially it could just be GiveWell charities for simplicity.

Re choice of charities

It seems pretty important for this sort of venture to build trust with users and have a lot of legitimacy. So, I think it is probably better to let people choose their own charities (excluding political ones for the reasons mentioned above).

You can still sway donations quite a lot based on the default behavior of the platform. In the long run, I'd probably have GiveWell charities as defaults (where you can point to GiveWell's analysis for legitimacy, and you mostly don't have to worry about room for more funding), and (if you wanted to be longtermist) maybe also a section of "our recommended charities" that is more longtermist with explanations of why those charities were selected.

Comment by rohinmshah on AI Governance: Opportunity and Theory of Impact · 2020-09-23T20:01:21.117Z · EA · GW

I summarized this in AN #118, along with a summary of this related podcast and some of my own thoughts about how this compares to more classical intent alignment risks.

Comment by rohinmshah on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-20T01:48:40.253Z · EA · GW

I do mean CS and not just ML. (E.g. PLDI and OSDI are top conferences with acceptance rates of 27% and 18% respectively according to the first Google result, and Berkeley students do publish there.)

Comment by rohinmshah on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-19T16:41:29.713Z · EA · GW

I don't know for sure, but at least in most areas of Computer Science it is pretty typical for at least Berkeley PhD students to publish in the top conferences in their area. (And they could publish in top journals; that just happens not to be as incentivized in CS.)

I generally dislike using acceptance rates -- I don't see strong reasons that they should correlate strongly with quality or difficulty -- but top CS conferences have maybe ~25% acceptance rates, suggesting this journal would be 5x "harder". This is more than I thought, but I don't think it brings me to the point of thinking it should be a significant point in favor in an outside evaluation, given the size of the organization and the time period over which we're talking.

Comment by rohinmshah on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-18T17:11:01.963Z · EA · GW
some promising signs about their ability to produce work that well-established external reviewers consider to be very high-quality—most notably, the acceptance of one of their decision theory papers to a top philosophy journal, The Journal of Philosophy.

I get that this is not the main case for the grant, and that MIRI generally avoids dealing with academia so this is not a great criterion to evaluate them on, but getting a paper accepted does not imply "very high-quality", and having a single paper accepted in (I assume) a couple of years is an extremely low bar (e.g. many PhD students exceed this in terms of their solo output).

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-15T06:40:11.445Z · EA · GW
I think showing that longtermism is plausible is also an understatement of the goal of the paper

Yeah, that's a fair point, sorry for the bad argument.

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-14T21:22:59.091Z · EA · GW

I feel like it's misleading to take a paper that explicitly says "we show that strong longtermism is plausible" and does so via robust arguments, and to conclude that longtermist EAs are basing their conclusions on speculative arguments.

If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible. (Personally, I prefer a different argument, but I think the one in HC is pretty robust and only depends on the assumption that we will build intelligent AI systems in the near-ish future, say by 2100.)

Yes, and I'm also not willing to commit to any specific degree of confidence, since I haven't seen any in particular justified. This is also for future impact. Why shouldn't my prior for success be < 1%? Can I rule out a negative expected impact?

Idk what's happening with GFI, so I'm going to bow out of this discussion. (Though one obvious hypothesis is that GFI's main funders have more information than you do.)

Hits-based funding shouldn't be taken for granted.

I mean, of course, but it's not like people just throw money randomly in the air. They use the sorts of arguments you're complaining about to figure out where to try for a hit. What should they do instead? Can you show examples of that working for startups, VC funding, scientific R&D, etc? You mention two things:

  • Developing reasonable probability distributions
  • Diversification

It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of "explicit calculations" that you seem to be against.)

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-14T18:00:11.677Z · EA · GW
Are there any particular articles/texts you would recommend?

Sorry, on what topic?

Imo, the Greaves and MacAskill paper relies primarily on explicit calculations and speculative plausibility arguments for its positive case for strong longtermism.

I see the core case of the paper as this:

... putting together the assumption that the expected size of the future is vast and the assumption that all consequences matter equally, it becomes at least plausible that the amount of ex ante good we can generate by influencing the expected course of the very long-run future exceeds the amount of ex ante good we can generate via influencing the expected course of short-run events, even after taking into account the greater uncertainty of further-future events.

They do illustrate claims like "the expected size of the future is vast" with calculations, but those are clearly illustrative; the argument is just "there's a decent chance that humanity continues for a long time with similar or higher population levels". I don't think you can claim that this relies on explicit calculations except inasmuch as any reasoning that involves claims about things being "large" or "small" depends on calculations.

I also don't see how this argument is speculative: it seems really hard to me to argue that any of the assumptions or inferences are false.

Note it is explicitly talking about the expected size of the future, and so is taking as a normative assumption that you want to maximize actual expected values. I suppose you could argue that the argument is "speculative" in that it depends on this normative assumption, but in the same way AMF is "speculative" in that it depends on the normative assumption that saving human lives is good (an assumption that may not be shared by e.g. anti-natalists or negative utilitarians).

Animal Charity Evaluators has been criticized a few times for this, see here and here.

I haven't been following animal advocacy recently, but I remember reading "The Actual Number is Almost Surely Higher" when it was released and feeling pretty meh about it. (I'm not going to read it now, it's too much of a time sink.)

GiveWell has even been criticized for relying too much on quantitative models in practice, too, despite Holden's own stated concerns with this.

Yeah I also didn't agree with this post. The optimizer's curse tells you that you should expect your estimates to be inflated, but it does not change the actual decisions you should make. I agree somewhat more with the wrong-way reductions part, but I feel like that says "don't treat your models as objective fact"; GiveWell frequently talks about how the cost-effectiveness model is only one input into their decision making.
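
To illustrate that claim, here is a small simulation I wrote (made-up numbers, not from GiveWell or the post): selecting the option with the best noisy estimate inflates the winner's estimate, but with a shared prior and equal noise the Bayesian correction shrinks every estimate by the same factor, so the chosen option is unchanged.

```python
# Toy simulation of the optimizer's curse: the selected option's naive estimate
# is biased upward, but correcting for this does not change which option wins.
import random

random.seed(0)
N_OPTIONS, N_TRIALS, NOISE = 5, 20000, 1.0
inflation, decisions_changed = [], 0
for _ in range(N_TRIALS):
    true_values = [random.gauss(0, 1) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0, NOISE) for v in true_values]
    best = max(range(N_OPTIONS), key=lambda i: estimates[i])
    inflation.append(estimates[best] - true_values[best])
    # With a common N(0, 1) prior and equal noise, the posterior mean is a fixed
    # shrinkage of each estimate, so the ranking (and the decision) is unchanged.
    shrink = 1.0 / (1.0 + NOISE ** 2)
    posterior_best = max(range(N_OPTIONS), key=lambda i: shrink * estimates[i])
    decisions_changed += (posterior_best != best)

print(sum(inflation) / N_TRIALS)  # clearly positive: the winner's estimate is inflated
print(decisions_changed)          # 0: the correction never changes the choice
```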

More generally, I don't think you should look at the prevalence of critiques as an indicator for how bad a thing is. Anything sufficiently important will eventually be critiqued. The question is how correct or valid those critiques are.

I'm still personally not convinced the Good Food Institute has much impact at all, since I'm not aware of a proper evaluation that didn't depend a lot on speculation

I'm interpreting this as "I don't have >90% confidence that GFI has actually had non-trivial impact so far (i.e. an ex-post evaluation)". I don't have a strong view myself since I haven't been following GFI, but I expect even if I read a lot about GFI I'd agree with that statement.

However, if you think this should be society's bar for investing millions of dollars, you would also have to be against many startups, nearly all VCs and angel funding, the vast majority of scientific R&D, some government megaprojects, etc. This bar seems clearly too stringent to me. You need some way of doing something like hits-based funding.

Comment by rohinmshah on Does Economic History Point Toward a Singularity? · 2020-09-13T15:54:29.448Z · EA · GW

Yeah in hindsight that was confusing. I meant that growth rates have been increasing since the Industrial Revolution, and have only become constant in the last few decades.

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-08T21:23:20.658Z · EA · GW
Yes. I would have been happy to say that, in general, I expect work of this type is less likely to be useful than other research work that does not try to predict the long-run future of humanity.

Sorry, I think I wasn't clear. Let me make the case for the ex ante value of the Open Phil report in more detail:

1. Ex ante, it was plausible that the report would have concluded "we should not expect lots of growth in the near future".

2. If the report had this conclusion, then we should update that AI risk is much less important than we currently think. (I am not arguing that "lots of growth => transformative AI", I am arguing that "not much growth => no transformative AI".)

3. This would be a very significant and important update (especially for Open Phil). It would presumably lead them to put less money into AI and more money into other areas.

4. Therefore, the report was ex ante quite valuable since it had a non-trivial chance of leading to major changes in cause prioritization.

Presumably you disagree with 1, 2, 3 or 4; I'm not sure which one.

Comment by rohinmshah on Does Economic History Point Toward a Singularity? · 2020-09-08T15:31:45.101Z · EA · GW

Ah, fair point, I'll change "explosive" to "accelerating" everywhere.

Comment by rohinmshah on Does Economic History Point Toward a Singularity? · 2020-09-08T15:31:14.969Z · EA · GW

I agree with this, but it seems irrelevant to Asya's point? If it turned out to be the case that we would just resume the trend of accelerating growth, and AI was the cause of that, I would still call that transformative AI and I would still be worried about AI risks, to about the same degree as I would if that same acceleration was instead trend-breaking.

Comment by rohinmshah on Does Economic History Point Toward a Singularity? · 2020-09-08T00:50:24.615Z · EA · GW

On my read of this doc, everyone agrees that the industrial revolution led to explosive growth, and the question is primarily about whether we should interpret this as a one-off event, or as something that is likely to happen again in the future, so for all viewpoints it seems like transformative AI would still require explosive growth. Does that seem right to you?

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-08T00:30:41.740Z · EA · GW

Hmm, I should note that I am in strong support of quantitative models as a tool for aiding decision-making -- I am only against committing ahead of time to do whatever the model tells you to do. If the post is against the use of quantitative models in general, then I do in fact disagree with the post.

Some things that feel like quantitative models that are merely "aiding" rather than "doing" decision-making:

  • this model for the Global Priorities Project
  • The case for strong longtermism by Greaves and MacAskill illustrates with some back-of-the-envelope estimates and cites others' estimates (GiveWell's, Matheny's).
  • Patient philanthropy is being justified on account of EV estimates

Comment by rohinmshah on Does Economic History Point Toward a Singularity? · 2020-09-07T19:47:58.351Z · EA · GW

Planned summary for the Alignment Newsletter:

One important question for the long-term future is whether we can expect accelerating growth in the near future (see e.g. this <@recent report@>(@Modeling the Human Trajectory@)). For AI alignment in particular, the answer to this question could have a significant impact on AI timelines: if some arguments suggested that it would be very unlikely for us to have accelerating growth soon, we should probably be more skeptical that we will develop transformative AI soon.
So far, the case for accelerating growth relies on one main argument that the author calls the _Hyperbolic Growth Hypothesis_ (HGH). This hypothesis posits that the growth _rate_ rises in tandem with the population size (intuitively, a higher population means more ideas for technological progress which means higher growth rates). This document explores the _empirical_ support for this hypothesis.
I’ll skip the messy empirical details and jump straight to the conclusion: while the author agrees that growth rates have been increasing in the modern era (roughly, the Industrial Revolution and everything after), he does not see much support for the HGH prior to the modern era. The data seems very noisy and hard to interpret, and even when using this noisy data it seems that models with constant growth rates fit the pre-modern era better than hyperbolic models. Thus, we should be uncertain between the HGH and the hypothesis that the industrial revolution triggered a one-off transition to increasing growth rates that have now stabilized.
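
For concreteness, one standard way to write down the two hypotheses (my own illustration, not taken from the post or this summary) is that exponential growth has a constant growth rate while hyperbolic growth has a growth rate that rises with the level and diverges in finite time:

```latex
% Illustrative formalization (my own, not from the post):
\[
\text{Exponential: } \dot{P} = aP \;\Rightarrow\; P(t) = P_0 e^{at},
\qquad
\text{Hyperbolic: } \dot{P} = aP^{1+\epsilon},\ \epsilon > 0
\;\Rightarrow\; P(t) = \frac{P_0}{\left(1 - \epsilon a P_0^{\epsilon}\, t\right)^{1/\epsilon}},
\]
\[
\text{so the hyperbolic growth rate } \dot{P}/P = aP^{\epsilon} \text{ rises with } P
\text{ and } P(t) \text{ diverges at } t^{*} = \frac{1}{\epsilon a P_0^{\epsilon}}.
\]
```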

Planned opinion:

I’m glad to know that the empirical support for the HGH seems mostly limited to the modern era, and may be weakly disconfirmed by data from the pre-modern era. I’m not entirely sure how I should update -- it seems that both hypotheses would be consistent with future accelerating growth, though HGH predicts it more strongly. It also seems plausible to me that we should still assign more credence to HGH because of its theoretical support and relative simplicity -- it doesn’t seem like there is strong evidence suggesting that HGH is false, just that the empirical evidence for it is weaker than we might have thought. See also Paul Christiano’s response.

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-06T03:39:14.773Z · EA · GW
Does that help?

I buy that using explicit EV calculations is not a great way to reason. My main uncertainty is whether longtermists actually rely a lot on EV calculations -- e.g. Open Phil has explicitly argued against it (posts are from GiveWell before Open Phil existed; note they were written by Holden).

Examples: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://www.emerald.com/insight/content/doi/10.1108/FS-04-2018-0037/full/html (the latter of which I have not read)

I haven't read these so will avoid commenting on them.

I don’t see the OpenPhil article as that useful – it is interesting but I would not think it has a big impact on how we should approach AI risk.

I mean, the report ended up agreeing with our prior beliefs, so yes it probably doesn't change much. (Though idk, maybe it does influence Open Phil.) But it seems somewhat wrong to evaluate the value of conducting research after the fact -- would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done? I wouldn't have been.

Comment by rohinmshah on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-05T20:15:54.100Z · EA · GW

I find this hard to engage with -- you point out lots of problems that a straw longtermist might have, but it's hard for me to tell whether actual longtermists fall prey to these problems. For most of them my response is "I don't see this problem, I don't know why you have this impression".

Responding to the examples you give:

(GPI and CLR and to some degree OpenPhil have done research like this)

I'm not sure which of GPI's and CLR's research you're referring to (and there's a good chance I haven't read it), but the Open Phil research you link to seems obviously relevant to cause prioritization. If it's very unlikely that there's explosive growth this century, then transformative AI is quite unlikely and we would want to place correspondingly more weight on other areas like biosecurity -- this would presumably directly change Open Phil's funding decisions.

For example, I expect that the longtermism community could benefit from looking at business planning strategies. It is notable in the world that organisations, even those with long term goals, do not make concrete plans more than 30 years ahead

... I assume from the phrasing of this sentence that you believe longtermists have concrete plans more than 30 years ahead, which I find confusing. I would be thrilled to have a concrete plan for 5 years in the future (currently I'm at ~2 years). I'd be pretty surprised if Open Phil had a >30 year concrete plan (unless you count reasoning about the "last dollar").

Comment by rohinmshah on Singapore’s Technical AI Alignment Research Career Guide · 2020-08-28T02:08:47.729Z · EA · GW
When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might mean something else.

Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from "prosaic AGI" when you were talking about "short term AI capabilities". I do think it is very impactful to work on prosaic AGI alignment; that's what I work on.

Your rephrasing sounds good to me -- I think you can make it stronger; it is true that many researchers including me endorse working on prosaic AI alignment.

Comment by rohinmshah on Singapore’s Technical AI Alignment Research Career Guide · 2020-08-26T18:29:52.220Z · EA · GW
However, such research on short term AI capabilities is potentially impactful in the long term too, according to some AI researchers like Paul Christiano, Ian Goodfellow, and Rohin Shah.

Huh, I don't see where I said anything that implied that? (I just reread the summary that you linked.)

I'm not entirely sure what you mean by "short term AI capabilities". The context suggests you mean "AI-related problems that will arise soon that aren't about x-risk". If so, under a longtermist perspective, I think that work addressing such problems is better than nothing, but I expect that focusing on x-risk in particular will lead to orders of magnitude more (expected) impact.

(I also don't think the post you linked for Paul implies the statement you made either, unless I'm misunderstanding something.)

Comment by rohinmshah on We're (surprisingly) more positive about tackling bio risks: outcomes of a survey · 2020-08-25T18:03:39.980Z · EA · GW

In case anyone else wanted this sorted by topic and then by person, here you go:

  • Do you think that the world will handle future pandemics and bio risks better as a result of having gone through the current coronavirus pandemic?
    • Joan: Would like to be able to say that as humans, we will be able to adapt. But there’s not a lot of good evidence at the moment that we will take the right steps. We’re seeing states retreat, build high walls, and become less globalised. And signs of anti-globalisation and anti-science trends are negative indicators when global cooperation is exactly what’s needed to handle these issues better in the future. Good leadership is key to our future preparedness.
    • Catherine: Slightly pessimistic
    • Catherine: Probably not
    • Catherine: Depends when the next one hits
    • Catherine: If it happens within 5-10 years, we would have boosted ability
    • Catherine: In the scientific side, the huge rush to innovate probably leaves a legacy
    • Catherine: There’s about a 20% probability that we might get better, probably not going to get worse
    • Megan: We should expect organisational learning and prioritisation
    • Megan: But on balance we should likely expect over-indexing/over-fitting based on what’s happened previously, and not enough planning and preparation relating to biological risks that don’t look like what’s come most recently
    • Anita: In general yes, we should expect there to be some learning and improvement. Countries have often struggled with getting sufficient attention and resources to outbreak preparedness
    • Anita: There will probably be more money going into public health and maybe the military as well
    • Anita: Asian countries certainly seem better prepared as a result of their past experience
  • Do you think future bio risks will be more likely, less likely, or unchanged in likelihood after the current pandemic? (it may help to split between deliberate man-made risks, accidental man-made, and natural)
    • Joan: It is pretty clear that bio risks will become more likely. This is because of general trends that predated coronavirus such as technological developments, climate change, population growth, urbanisation and global travel. Already this century we’ve had four major global disease outbreaks (Swine Flu, MERS, Ebola, COVID-19) -- almost double the rate of previous centuries. In terms of whether COVID-19 actually causes future bio risks to be more likely, NTI preferred not to make a strong comment either way on this point.
    • Catherine: More likely, in that that was the trend already
    • Catherine: Unchanged by covid
    • Catherine: Our economies will go back to interconnectedness
    • Catherine: We will have more contact with wildlife as we encroach further into their habitats
    • Catherine: Deliberate bio risks could go two ways. Potential users of bio weapons might see that this is really disruptive, which might make bio weapons more appealing. Conversely, they might see that there is really no way of making sure that bio risks are contained and won’t affect their own people
    • Catherine: Increased research utilising dangerous pathogens is a source of risk requiring greater attention to biosafety
    • Megan: We should expect some organisational learning and prioritisation as a result of COVID-19
    • Megan: We need more work on understanding and modeling origins of biological risks, without which it’s hard to give definitive answers
    • Megan: We may well see extra work happening to increase our overall understanding of SARS-CoV-2 in particular and viruses in general as a result of the current pandemic. But does understanding viruses increase or decrease our risk? The extra knowledge may well be valuable, but accidents can happen as a result of people doing scientific work which is intended to tackle a pathogen, especially when the rush of people tackling the problem means that people without experience of working with infectious pathogens are involved.
    • Megan: There are state and non-state actors who may have not been interested in or otherwise discounted biological weapons who now may become interested. It’s still not fully understood how terrorists get inspired about ways to use biology as a weapon, and socialising threats can cause information hazards
    • Megan: Also as research develops, it makes it easier for low skilled or medium skilled actors to generate pathogens
    • Anita: Hard to say at this moment
    • Anita: Need to secure biosafety practices in labs; hopefully more people will appreciate that this is really important. Tentatively optimistic about this, however I don’t think I’ve seen as much as I’d like to see about the importance of this.
    • Anita: Could inspire those who want to do harm to see the power of a released pathogen in the community. E.g. an independent group or some state actors who have an interest in the development of bio weapons might feel encouraged. Hopefully, these groups will see that it’s hard to control a pandemic once it starts, so this may also act as a deterrent. But overall, we’re not expecting this pandemic will turn people away from bioweapons.
  • How do you think the willingness of key actors such as governments (but excluding donors) to tackle bio risks will change in light of the current pandemic?
    • Joan: In the near term, we’ll have a higher degree of attention on better preparing for pandemics
    • Joan: It’s unclear whether we will see the right levels of political competence and focused engagement to facilitate the right investments for enduring improvements and attention that last into the future, but we have a unique opportunity to work for lasting change.
    • Catherine: Short window of opportunity in which things might change
    • Catherine: Might be c 3-5 years window, perhaps
    • Catherine: Huge economic damage means that the appetite for thinking further ahead might not be there because governments will be focusing on immediate economic recovery needs
    • Catherine: It’s not the case that the world didn’t know that pandemics could cause huge damage and coronavirus has now educated us. It was clear that this sort of event was going to happen. The World Bank has been putting out warnings; see, for example, the World Bank paper “From Panic to Neglect”.
    • Megan: People are now socialised to the risk, so will take the risks more seriously, but this will differ by risk types.
    • Megan: We have seen a long history of over-indexing on the most recent high profile incidents and environments, including before they are fully understood. For example there was over-indexing on outsider threats in the midst of the anthrax response. Based on past experience, it seems likely that there might be longer term neglect of certain types of risks.
    • Megan: We may see general build-up of capabilities around pandemic response, which will likely be helpful for naturally occurring infectious disease. But there may be less attention on deliberate and accidental bio risks that may look very different.
    • Anita: I expect there will be some additional investment in this area, although there could also be a funding fatigue once we get through this pandemic. A large and enduring investment in biosecurity may be difficult to achieve, especially at the moment when governments are spending so much on COVID
    • Anita: Standard public health budgets are different line items, and you could just up the budgets to, say, something similar to or higher than what they were in 2003, when they were much higher than today.
    • Anita: However it’s worrying that existential threats look likely to remain underinvested in
  • Have you seen signs that donor interest in tackling bio risks has changed or will change in light of the current pandemic?
    • Joan: There is now lots of attention on biological risks. And several donors such as Bill Gates and Jack Dorsey have been pledging substantial amounts.
    • Joan: The risk is that donors overly focus on naturally occurring biological risks like COVID, without considering that other things also constitute existential risks, like manmade pathogens or nuclear war that also deserve attention.
    • Catherine: Not seen much indication at the moment
    • Catherine: However a small number of specific funders are starting to think about existential bio risks a bit more
    • Megan: We have not seen a noticeable uptick in donations because of COVID but have tried not to be opportunistic.
    • Megan: To a certain extent this is also a function of our spending time talking to senior politicians and others in government and the commercial sector about the immediate response, and not having the time to broadcast this value to the outside world.
    • Megan: Many of our colleagues working in adjacent areas have seen some donor interest on secondary effects (e.g. the impact of COVID-19 on geopolitics).
    • Megan: This may also be another example of over-indexing -- everyone is focused on the immediate response efforts (contact tracing, etc.), but not much attention is going to what will happen if a worse biological risk hits us in the future. We’ve been focused on this longer-term strategy.
    • Anita: Have seen some modest uptick in donors who want to give to COVID response. Not sure that will translate into a longer-term interest or commitment to the health space going forward. We are so used to the panic-neglect cycle. The uptick is mostly (but not entirely) from people in the Effective Altruism community.
Comment by rohinmshah on What organizational practices do you use (un)successfully to improve culture? · 2020-08-16T05:22:09.763Z · EA · GW

Some maybe-related posts (not vouching for them):

Team Cohesion and Exclusionary Egalitarianism

Deliberate Performance in People Management

Burnout: What is it and how to Treat it

Comment by rohinmshah on The emerging school of patient longtermism · 2020-08-16T04:59:49.390Z · EA · GW
Arguments pushing back against the Bostrom-Yudkowsky view of AI by Ben Garfinkel.

I don't know to what extent this is dependent on the fact that researchers like me argue for alignment by default, but I want to note that at least my views do not argue for patient longtermism according to my understanding. (Though I have not read e.g. Phil Trammel's paper.)

As the post notes, it's a spectrum: I would not argue that Open Phil should spend a billion dollars on AI safety this year, but I would probably not argue for Open Phil to take fewer opportunities than they currently do, nor would I recommend that individuals not donate to x-risk orgs and save the money instead.

Comment by rohinmshah on EA reading list: longtermism and existential risks · 2020-08-03T18:31:21.142Z · EA · GW

What about The Precipice?

Comment by rohinmshah on The academic contribution to AI safety seems large · 2020-08-02T17:20:38.363Z · EA · GW

Was going to write a longer comment but I basically agree with Buck's take here.

It's a little hard to evaluate the counterfactuals here, but I'd much rather have the contributions from EA safety than from non EA safety over the last ten years.

I wanted to endorse this in particular.

On the actual argument:

1. EA safety is small, even relative to a single academic subfield.
2. There is overlap between capabilities and short-term safety work.
3. There is overlap between short-term safety work and long-term safety work.
4. So AI safety is less neglected than the opening quotes imply.
5. Also, on present trends, there’s a good chance that academia will do more safety over time, eventually dwarfing the contribution of EA.

I agree with 1, 2, and 3 (though perhaps disagree with the magnitude of 2 and 3, e.g. you list a bunch of related areas and for most of them I'd be surprised if they mattered much for AGI alignment).

I agree 4 is literally true, but I'm not sure it necessarily matters, as this sort of thing can be said for ~any field (as Ben Todd notes). It would be weird to say that animal welfare is not neglected because of the huge field of academia studying animals, even though those fields are relevant to questions of e.g. sentience or farmed animal welfare.

I strongly agree with 5 (if we replace "academia" with "academia + industry", it's plausible to me academia never gets involved while industry does), and when I argue that "work will be done by non-EAs", I'm talking about future work, not current work.

Comment by rohinmshah on Objections to Value-Alignment between Effective Altruists · 2020-08-02T07:00:58.722Z · EA · GW
It seems like an overstatement that the topics of EA are completely disjoint with topics of interest to various established academic disciplines.

I didn't mean to say this, there's certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It's certainly possible to repackage the research in a format more suited to academia -- but it must be repackaged, which leads to

rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't

I agree that the things you list have a lot of benefits, but they seem quite hard to me to do. I do still think publishing with peer review is worth it despite the difficulty.

Comment by rohinmshah on Objections to Value-Alignment between Effective Altruists · 2020-07-30T18:53:24.031Z · EA · GW
Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.

I agree people trust MIRI's conclusions a bunch based on supposed good internal reasoning / the fact that they are smart, and I think this is bad. However, I think this is pretty limited to MIRI.

I haven't seen anything similar with OpenAI though of course it is possible.

I agree with all the other things you write.

Comment by rohinmshah on Objections to Value-Alignment between Effective Altruists · 2020-07-29T16:21:51.756Z · EA · GW

This is a good post, I'm glad you wrote it :)

On the abstract level, I think I see EA as less grand / ambitious than you do (in practice, if not in theory) -- the biggest focus of the longtermist community is reducing x-risk, which is good by basically any ethical theory that people subscribe to (exceptions being negative utilitarianism and nihilism, but nihilism cares about nothing and very few people are negative utilitarian and most of those people seem to be EAs). So I see the longtermist section of EA more as the "interest group" in humanity that advocates for the future, as opposed to one that's going to determine what will and won't happen in the future. I agree that if we were going to determine the entire future of humanity, we would want to be way more diverse than we are now. But if we're more like an interest group, efficiency seems good.

On the concrete level -- you mention not being happy about these things:

EAs give high credence to non-expert investigations written by their peers

Agreed this happens and is bad

they rarely publish in peer-review journals and become increasingly dismissive of academia

Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. It seems like long-term we want to make a branch of academia that cares about what we care about, but before that it seems pretty bad to subject yourself to peer reviews that argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't. (I think this is the situation of AI safety.)

show an increasingly certain and judgmental stance towards projects they deem ineffective

Agreed this happens and is bad (though you should get more certain as you get more evidence, so maybe I think it's less bad than you do)

defer to EA leaders as epistemic superiors without verifying the leaders epistemic superiority

Agreed this happens and is bad

trust that secret google documents which are circulated between leaders contain the information that justifies EA’s priorities and talent allocation

Agreed this would be bad if it happened, I'm not actually sure that people trust this? I do hear comments like "maybe it was in one of those secret google docs" but I wouldn't really say that those people trust that process.

let central institutions recommend where to donate and follow advice to donate to central EA organisations

Kinda bad, but I think this is more a fact about "regular" EAs not wanting to think about where to donate? (Or maybe they have more trust in central institutions than they "should".)

let individuals move from a donating institution to a recipient institution and visa versa

Seems really hard to prevent this -- my understanding is it happens in all fields, because expertise is rare and in high demand. I agree that it's a bad thing, but it seems worse to ban it.

strategically channel EAs into the US government

I don't see why this is bad. I think it might be bad if other interest groups didn't do this, but they do. (Though I might just be totally wrong about that.)

adjust probability assessments of extreme events to include extreme predictions because they were predictions by other members

That seems somewhat bad but not obviously so? Like, it seems like you want to predict an average of people's opinions weighted by expertise; since EA cares a lot more about x-risk it often is the case that EAs are the experts on extreme events.

Comment by rohinmshah on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T20:07:37.341Z · EA · GW

My experience matches Ben's more than yours.

My impression is that there hasn't so much been a shift in views within individual people than the influx of a younger generation who tends to have an ML background and roughly speaking tends to agree more with Paul Christiano than MIRI. Some of them are now somewhat prominent themselves (e.g. Rohin Shah, Adam Gleave, you), and so the distribution of views among the set of perceived "AI risk thought leaders" has changed.

None of the people you named had an ML background. Adam and I have CS backgrounds (before we joined CHAI, I was a PhD student in programming languages, while Adam worked in distributed systems iirc). Ben is in international relations. If you were counting Paul, he did a CS theory PhD. I suspect all of us chose the "ML track" because we disagreed with MIRI's approach and thought that the "ML track" would be more impactful.

(I make a point out of this because I sometimes hear "well if you started out liking math then you join MIRI and if you started out liking ML you join CHAI / OpenAI / DeepMind and that explains the disagreement" and I think that's not true.)

I don't recall anyone seriously suggesting there might not be enough time to finish a PhD before AGI appears.

I've heard this (might be a Bay Area vs. Europe thing).