Posts

What are the key ongoing debates in EA? 2020-03-08T16:12:34.683Z · score: 59 (29 votes)
Characterising utopia 2020-01-02T00:24:23.248Z · score: 29 (16 votes)
Technical AGI safety research outside AI 2019-10-18T15:02:20.718Z · score: 74 (31 votes)
Does any thorough discussion of moral parliaments exist? 2019-09-06T15:33:02.478Z · score: 36 (14 votes)
How much EA analysis of AI safety as a cause area exists? 2019-09-06T11:15:48.665Z · score: 76 (28 votes)
How do most utilitarians feel about "replacement" thought experiments? 2019-09-06T11:14:20.764Z · score: 18 (15 votes)
Why has poverty worldwide fallen so little in recent decades outside China? 2019-08-07T22:24:11.239Z · score: 23 (10 votes)
Which scientific discovery was most ahead of its time? 2019-05-16T12:28:54.437Z · score: 34 (14 votes)
Why doesn't the EA forum have curated posts or sequences? 2019-03-21T13:52:58.807Z · score: 34 (16 votes)
The career and the community 2019-03-21T12:35:23.073Z · score: 80 (43 votes)
Arguments for moral indefinability 2019-02-08T11:09:25.547Z · score: 31 (12 votes)
Disentangling arguments for the importance of AI safety 2019-01-23T14:58:27.881Z · score: 55 (31 votes)
How democracy ends: a review and reevaluation 2018-11-24T17:41:53.594Z · score: 24 (12 votes)
Some cruxes on impactful alternatives to AI policy work 2018-11-22T13:43:40.684Z · score: 21 (12 votes)

Comments

Comment by richard_ngo on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-18T14:49:49.022Z · score: 8 (6 votes) · EA · GW

Who are you?

I'm Richard. I'm a research engineer on the AI safety team at DeepMind.

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

AI safety, particularly high-level questions about what the problems are and how we should address them. Also machine learning more generally, particularly deep reinforcement learning. Also careers in AI safety.

I've been thinking a lot about futurism in general lately. Longtermism assumes large-scale sci-fi futures, but I don't think there's been much serious investigation into what they might look like, so I'm keen to get better discussion going (this post was an early step in that direction).

What are things you'd like to talk to other people about? (e.g. things you want to learn)

I'm interested in learning about evolutionary biology, especially the evolution of morality. Also the neuroscience of motivation and goals.

I'd be interested in learning more about mainstream philosophical views on agency and desire. I'd also be very interested in collaborating with philosophers who want to do this type of work, directed at improving our understanding of AI safety.

How can people get in touch with you?

Here, or email: ngor [at] google.com

Comment by richard_ngo on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T17:26:59.244Z · score: 30 (13 votes) · EA · GW

What would convince you that preventing s-risks is a bigger priority than preventing x-risks?

Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?
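
One hedged way to make the threshold explicit, assuming a risk-neutral expected-value framing and normalising the value of extinction to zero (these assumptions are mine, not part of the question):

```latex
% Let $V > 0$ be the value of the future under the most valuable goal and
% $D < 0$ the value under the most disvaluable goal; extinction is normalised to $0$.
\[
  p V + (1 - p) D = 0
  \quad\Longrightarrow\quad
  p^{*} = \frac{-D}{V - D} = \frac{|D|}{V + |D|},
\]
% so the gamble is preferred over extinction whenever $p > p^{*}$.
```

On this framing, the more disvaluable the worst goal is relative to the best one, the closer p* gets to 1.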

Comment by richard_ngo on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T17:10:59.936Z · score: 8 (2 votes) · EA · GW

We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many listed here; I'm particularly thinking about philosophy of mind and agency as applied to AI, a la Dennett). How do you think this funnel could be improved?

Comment by richard_ngo on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T17:06:14.110Z · score: 10 (7 votes) · EA · GW

If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?

Comment by richard_ngo on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T16:12:28.572Z · score: 17 (10 votes) · EA · GW

If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

Comment by richard_ngo on What are the key ongoing debates in EA? · 2020-03-15T15:08:51.407Z · score: 30 (7 votes) · EA · GW

Thanks for the list! As a follow-up, I'll try to list places online where such debates have occurred for each entry:

1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

2. Toby Ord has estimates in The Precipice. I assume most discussion occurs on specific risks.

3. Lots of discussion on this; summary here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary . Also more recently https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history

4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like

5. Most stuff on https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (although I don't find either compelling). Also a lot of Simon Knutsson's stuff, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

6a. https://forum.effectivealtruism.org/posts/LxmJJobC6DEneYSWB/effects-of-anti-aging-research-on-the-long-term-future , https://forum.effectivealtruism.org/posts/jYMdWskbrTWFXG6dH/a-general-framework-for-evaluating-aging-research-part-1

6b. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals , https://forum.effectivealtruism.org/posts/ndvcrHfvay7sKjJGn/human-and-animal-interventions-the-long-term-view

6c. https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1

7. Nothing particularly comes to mind, although I assume there's stuff out there.

8. https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

9. E.g. here, which also links to more discussions: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for

Comment by richard_ngo on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T16:26:04.963Z · score: 12 (6 votes) · EA · GW
Because we are indifferent between who has the 2 and who has the 0

Perhaps I'm missing something, but where does this claim come from? It doesn't seem to follow from the three starting assumptions.

Comment by richard_ngo on Announcing the 2019-20 Donor Lottery · 2019-12-03T10:13:29.606Z · score: 12 (5 votes) · EA · GW
2018-19: a $100,000 lottery (no winners)

What happens to the money in this case?

Comment by richard_ngo on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-22T15:29:28.310Z · score: 4 (3 votes) · EA · GW
I think that they might have been better off if they'd instead spent their effort trying to become really good at ML in the hope of being better skilled up with the goal of working on AI safety later.

I'm broadly sympathetic to this, but I also want to note that there are some research directions in mainstream ML which do seem significantly more valuable than average. For example, I'm pretty excited about people getting really good at interpretability, so that they have an intuitive understanding of what's actually going on inside our models (particularly RL agents), even if they have no specific plans about how to apply this to safety.

Comment by richard_ngo on AI safety scholarships look worth-funding (if other funding is sane) · 2019-11-20T20:05:27.475Z · score: 3 (5 votes) · EA · GW
Students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.

This seems like the key premise, but I'm pretty uncertain about how much freedom this sort of scholarship would actually buy, especially in the US (people who've done PhDs in ML, please comment!). My understanding is that it's rare for good candidates to not get funding; and also that, even with funding, it's usually important to work on something your supervisor is excited about, in order to get more support.

In most of the examples you give (with the possible exceptions of the FHI and GPI scholarships) buying research freedom for PhD students doesn't seem to be the main benefit. In particular:

OpenPhil has its fellowship for AI researchers who happen to be highly prestigious

This might be mostly trying to buy prestige for safety.

and has funded a couple of masters students on a one-off basis.
FHI has its... RSP, which funds early-career EAs with slight supervision.
Paul even made grants to independent researchers for a while.

All of these groups are less likely to have other sources of funding compared with PhD students.

Having said all that, it does seem plausible that giving money to safety PhDs is very valuable, in particular via the mechanism of freeing up more of their time (e.g. if they can then afford shorter commutes, outsourcing of time-consuming tasks, etc).

Comment by richard_ngo on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:19:40.953Z · score: 12 (5 votes) · EA · GW
On a meta note: Different people who work on AI alignment have radically different pictures of what the development of AI will look like, what the alignment problem is, and what solutions might look like.

+1, this is the thing that surprised me most when I got into the field. I think helping increase common knowledge and agreement on the big picture of safety should be a major priority for people in the field (and it's something I'm putting a lot of effort into, so send me an email at ngor@google.com if you want to discuss this).

I think the ideas described in the paper Risks from Learned Optimization are extremely important.

Also +1 on this.

Comment by richard_ngo on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:15:45.001Z · score: 13 (4 votes) · EA · GW
If I thought there was a <30% chance of AGI within 50 years, I'd probably not be working on AI safety.
I expect the world to change pretty radically over the next 100 years.

I find these statements surprising, and would be keen to hear more about this from you. I suppose that the latter goes a long way towards explaining the former. Personally, I think there are few technologies likely to radically change the world within the next 100 years (assuming that your definition of radical is similar to mine). Maybe the only ones that would really qualify are bioengineering and nanotech. Even in those fields, though, I expect the pace of change to be fairly slow if AI isn't heavily involved.

(For reference, while I assign more than 30% credence to AGI within 50 years, it's not that much more).

Comment by richard_ngo on A conversation with Rohin Shah · 2019-11-13T01:51:43.888Z · score: 12 (6 votes) · EA · GW

For reference, here's the post on realism about rationality that Rohin mentioned several times.

Comment by richard_ngo on EA Hotel Fundraiser 5: Out of runway! · 2019-10-25T15:24:12.705Z · score: 31 (23 votes) · EA · GW

I'm planning to donate to the EA hotel. Given that it isn't a registered charity, I'm interested in doing donation swaps with EAs in countries where charitable donations aren't tax deductible (like Sweden) so that I can get tax deductions on my donations. Reach out or comment here if interested.

Comment by richard_ngo on Seeking EA experts interested in the evolutionary psychology of existential risks · 2019-10-24T09:11:06.677Z · score: 2 (2 votes) · EA · GW

Any of the authors of this paper: https://www.nature.com/articles/s41598-019-50145-9

Comment by richard_ngo on Only a few people decide about funding for community builders world-wide · 2019-10-23T01:56:45.558Z · score: 14 (10 votes) · EA · GW

This homogeneity might well be bad - in particular by excluding valuable but less standard types of community building. If so, this problem would be mitigated by having more funding sources.

Comment by richard_ngo on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-17T14:13:08.826Z · score: 14 (6 votes) · EA · GW

Agreed - in fact, maybe a better question is whether there are any ideologies where strong adherence doesn't lead you to make poor decisions.

Comment by richard_ngo on EA Handbook 3.0: What content should I include? · 2019-10-01T11:53:25.435Z · score: 13 (4 votes) · EA · GW

Here's my (in-progress) collation of important EA resources, organised by topic. Contributions welcome :)

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-13T09:52:03.451Z · score: 1 (1 votes) · EA · GW

Using those two different types of "should" makes your proposed sentence ("It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that's what utilitarianism implies is the right action in that situation.") unnecessarily confusing, for a couple of reasons.

1. Most moral anti-realists don't use "epistemic should" when talking about morality. Instead, I claim, they use my definition of moral should: "X should do Y means that I endorse/prefer some moral theory T and T endorses X doing Y". (We can test this by asking anti-realists who don't subscribe to negative utilitarianism whether a negative utilitarian should destroy the universe - I predict they will either say "no" or argue that the question is ambiguous.) And so introducing "epistemic should" makes moral talk more difficult.

2. Moral realists who are utilitarians and use "moral should" would agree with your proposed sentence, and moral anti-realists who aren't utilitarians and use "epistemic should" would also agree with your sentence, but for two totally different reasons. This makes follow-up discussions much more difficult.

How about "Utilitarianism endorses humans voluntarily replacing themselves with these new beings." That gets rid of (most of) the contractarianism. I don't think there's any clean, elegant phrasing which then rules out the moral uncertainty in a way that's satisfactory to both realists and anti-realists, unfortunately - because realists and anti-realists disagree on whether, if you prefer/endorse a theory, that makes it rational for you to act on that theory. (In other words, I don't know whether moral realists have terminology which distinguishes between people who act on false theories that they currently endorse, versus people who act on false theories they currently don't endorse).

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T11:50:33.241Z · score: 4 (2 votes) · EA · GW

I originally wrote a different response to Wei's comment, but it wasn't direct enough. I'm copying the first part here since it may be helpful in explaining what I mean by "moral preferences" vs "personal preferences":

Each person has a range of preferences, which it's often convenient to break down into "moral preferences" and "personal preferences". This isn't always a clear distinction, but the main differences are:

1. Moral preferences are much more universalisable and less person-specific (e.g. "I prefer that people aren't killed" vs "I prefer that I'm not killed").

2. Moral preferences are associated with a meta-preference that everyone has the same moral preferences. This is why we feel so strongly that we need to find a shared moral "truth". Fortunately, most people are in agreement in our societies on the most basic moral questions.

3. Moral preferences are associated with a meta-preference that they are consistent, simple, and actionable. This is why we feel so strongly that we need to find coherent moral theories rather than just following our intuitions.

4. Moral preferences are usually phrased as "X is right/wrong" and "people should do right and not do wrong" rather than "I prefer X". This often misleads people into thinking that their moral preferences are just pointers to some aspect of reality, the "objective moral truth", which is what people "objectively should do".

When we reflect on our moral preferences and try to make them more consistent and actionable, we often end up condensing our initial moral preferences (aka moral intuitions) into moral theories like utilitarianism. Note that we could do this for other preferences as well (e.g. "my theory of food is that I prefer things which have more salt than sugar") but because I don't have strong meta-preferences about my food preferences, I don't bother doing so.

The relationship between moral preferences and personal preferences can be quite complicated. People act on both, but often have a meta-preference to pay more attention to their moral preferences than they currently do. I'd count someone as a utilitarian if they have moral preferences that favour utilitarianism, and these are a non-negligible component of their overall preferences.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T11:44:19.167Z · score: 1 (1 votes) · EA · GW

My first objection is that you're using a different form of "should" than what is standard. My preferred interpretation of "X should do Y" is that it's equivalent to "I endorse some moral theory T and T endorses X doing Y". (Or "according to utilitarianism, X should do Y" is more simply equivalent to "utilitarianism endorses X doing Y"). In this case, "should" feels like it's saying something morally normative.

Whereas you seem to be using "should" as in "a person who has a preference X should act on X". In this case, should feels like it's saying something epistemically normative. You may think these are the same thing, but I don't, and either way it's confusing to build that assumption into our language. I'd prefer to replace this latter meaning of "should" with "it is rational to". So then we get:

"it is rational for humans who are utilitarians to commit mass suicide in order to bring the new beings into existence, because that's what utilitarianism implies is the right action."

My second objection is that this is only the case if "being a utilitarian" is equivalent to "having only one preference, which is to follow utilitarianism". In practice people have both moral preferences and also personal preferences. I'd still count someone as being a utilitarian if they follow their personal preferences instead of their moral preferences some (or even most) of the time. So then it's not clear whether it's rational for a human who is a utilitarian to commit suicide in this case; it depends on the contents of their personal preferences.

I think we avoid all of this mess just by saying "Utilitarianism endorses replacing existing humans with these new beings." This is, as I mentioned earlier, a similar claim to "ZFC implies that 1 + 1 = 2", and it allows people to have fruitful discussions without agreeing on whether they should endorse utilitarianism. I'd also be happy with Simon's version above: "Utilitarianism seems to imply that humans should...", although I think it's slightly less precise than mine, because it introduces an unnecessary "should" that some people might take to be a meta-level claim rather than merely a claim about the content of the theory of utilitarianism (this is a minor quibble, though; analogously: "ZFC implies that 1 + 1 = 2 is true").

Anyway, we have pretty different meta-ethical views, and I'm not sure how much we're going to converge, but I will say that from my perspective, your conflation of epistemic and moral normativity (as I described earlier) is a key component of why your position seems confusing to me.

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-10T09:35:35.079Z · score: 2 (2 votes) · EA · GW
Are you aware of any surveys or any other evidence supporting this? (I'd accept "most people in AI safety that I know started working in it because EA investigative work convinced them that AI safety matters" or something of that nature.)

I'm endorsing this, and I'm confused about which part you're skeptical about. Is it the "many EAs" bit? Obviously the word "many" is pretty fuzzy, and I don't intend it to be a strong claim. Mentally the numbers I'm thinking of are something like >50 people or >25% of committed (or "core", whatever that means) EAs. Don't have a survey to back that up though. Oh, I guess I'm also including people currently studying ML with the intention of doing safety. Will edit to add that.

Why are you trying to answer this, instead of "How should I update, given the results of all available investigations into AI safety as a cause area?"

There are other questions that I would like answers to, not related to AI safety, and if I trusted EA consensus, then that would make the process much easier.

For this question then, it seems that Paul Christiano also needs to be discounted (and possibly others as well but I'm not as familiar with them).

Indeed, I agree.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T13:46:29.079Z · score: 3 (3 votes) · EA · GW

Okay, thanks. So I guess the thing I'm curious about now is: what heuristics do you have for deciding when to prioritise contractarian intuitions over consequentialist intuitions, or vice versa? In extreme cases where one side feels very strongly about it (like this one) that's relatively easy, but any thoughts on how to extend those to more nuanced dilemmas?

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T12:55:09.324Z · score: 9 (5 votes) · EA · GW

I think that "utilitarianism seems to imply that humans who are utilitarians should..." is a type error regardless of whether you're a realist or an anti-realist, in the same way as "the ZFC axioms imply that humans who accept those axioms should believe 1+1=2". That's not what the ZFC axioms imply - actually, they just imply that 1+1 = 2, and it's our meta-theory of mathematics which determines how you respond to this fact. Similarly, utilitarianism is a theory which, given some actions (or maybe states of the world, or maybe policies) returns a metric for how "right" or "good" they are. And then how we relate to that theory depends on our meta-ethics.

Given how confusing talking about morality is, I think it's important to be able to separate the object-level moral theories from meta-ethical theories in this way. (For more along these lines, see my post here).

Comment by richard_ngo on Does any thorough discussion of moral parliaments exist? · 2019-09-09T10:51:42.815Z · score: 3 (2 votes) · EA · GW

I imagine so, but if that's the reason it seems out of place in a paper on theoretical ethics.

Comment by richard_ngo on Does any thorough discussion of moral parliaments exist? · 2019-09-09T02:22:46.718Z · score: 3 (2 votes) · EA · GW

Nice! Seems like a cool paper. One thing that confuses me, though, is why the authors think that their theory's "moral risk aversion with respect to empirically expected utility" is undesirable. People just have weird intuitions about expected utility all the time, and don't reason about it well in general. See, for instance, how people prefer (even when moral uncertainty isn't involved) to donate to many charities rather than donating only to the one highest expected utility charity. It seems reasonable to call that preference misguided, so why can't we just call the intuitive objection to "moral risk aversion with respect to empirically expected utility" misguided?

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-09T01:36:21.757Z · score: 8 (8 votes) · EA · GW

Let me try to answer the latter question (and thanks for pushing me to flesh out my vague ideas more!). One very brief way you could describe the development of AI safety is something like "A few transhumanists came up with some key ideas and wrote many blog posts. The rationalist movement formed from those following these things online, and made further contributions. Then the EA movement formed, and while it was originally focused on causes like global poverty, over time did a bunch of investigative work which led many EAs to become convinced that AI safety matters, and to start working on it, directly or indirectly (or to gain skills with the intent of doing such work)."

The three questions I am ultimately trying to answer are: a) how valuable is it to build up the EA movement? b) how much should I update when I learn that a given belief is a consensus in EA? and c) how much evidence do the opinions of other people provide in favour of AI safety being important?

To answer the first question, assuming that analysis of AI safety as a cause area is valuable, I should focus on contributions by people who were motivated or instigated by the EA movement itself. Here Nick doesn't count (except insofar as EA made his book come out sooner or better).

To answer the second question, it helps to know whether the focus on AI safety in EA came about because many people did comprehensive due diligence and shared their findings, or whether there wasn't much investigation and the ubiquity of the belief was driven via an information cascade. For this purpose, I should count work by people to the extent that they or people like them are likely to critically investigate other beliefs that are or will become widespread in EA. Being motivated to investigate AI safety by membership in the EA movement is the best evidence, but for the purpose of answering this question I probably should have used "motivated by the EA movement or motivated by very similar things to what EAs are motivated by", and should partially count Nick.

To answer the third question, it helps to know whether the people who have become convinced that AI safety is important are a relatively homogenous group who might all have highly correlated biases and hidden motivations, or whether a wide range of people have become convinced. For this purpose, I should count work by people to the extent that they are dissimilar to the transhumanists and rationalists who came up with the original safety arguments, and also to the extent that they rederived the arguments for themselves rather than being influenced by the existing arguments. Here EAs who started off not being inclined towards transhumanism or rationalism at all count the most, and Nick counts very little.
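
A toy Bayesian sketch of why this distinction matters (my own illustration with made-up numbers): n fully correlated favourable opinions carry roughly the evidential weight of one, whereas n independent ones compound.

```python
# Hypothetical illustration: compare the update from n independent favourable
# reports with the update when the reports are fully correlated (a cascade).

def posterior(prior, likelihood_ratio, n_reports, independent):
    """Posterior probability after n favourable reports, each with the given
    likelihood ratio; fully correlated reports collectively count as one."""
    effective_n = n_reports if independent else 1
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** effective_n
    return posterior_odds / (1 + posterior_odds)

# 20 people convinced, each report worth a 2:1 likelihood ratio, 10% prior:
print(posterior(0.10, 2.0, 20, independent=True))   # ~0.99999
print(posterior(0.10, 2.0, 20, independent=False))  # ~0.18
```

Real cases sit somewhere between these extremes, which is why it matters how much of the consensus traces back to independent investigation.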

Note that Nick is quite an outlier though, so while I'm using him as an illustrative example, I'd prefer engagement on the general points rather than this example in particular.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:26:53.929Z · score: 5 (5 votes) · EA · GW

I agree and didn't mean to imply that Knutsson endorses the argument in absolute terms; thanks for the clarification.

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-08T18:20:53.162Z · score: 6 (4 votes) · EA · GW

To my knowledge it doesn't meet the "Was motivated or instigated by EA" criterion, since Nick had been developing those ideas since well before the EA movement started. I guess he might have gotten EA money while writing the book, but even if that's the case it doesn't feel like a central example of what I'm interested in.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T14:37:48.467Z · score: 10 (6 votes) · EA · GW

Thanks for the informative reply! And also for writing the paper in the first place :)

"Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does."

I think we need to have high epistemic standards in this community, and would be dismayed if a significant number of people with strong moral views were hiding them in order to make a better impression on others. (See also https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists)

Comment by richard_ngo on Are we living at the most influential time in history? · 2019-09-05T21:54:20.914Z · score: 19 (5 votes) · EA · GW

Nice post :) A couple of comments:

even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

To me it seems that the biggest constraint on being able to invest in future centuries is the continuous existence of a trustworthy movement from now until then. I imagine that a lot of meta work implicitly contributes towards this; so the idea that the HoH is far in the future is an argument for more meta work (and more meta work targeted towards EA longevity in particular). But my prior on a given movement remaining trustworthy over long time periods is quite low, and becomes lower the more money it is entrusted with.

But there are future scenarios that we can imagine now that would seem very influential:

To the ones you listed, I would add:

  • The time period during which we reach technological completion, since from then on the stochasticity from the rate of technological advancement becomes a much less important factor.
  • As you mentioned previously, the time period during which we develop comprehensive techniques for engineering the motivations and values of the subsequent generation - if it actually happens to not be very close to us. (E.g. it might require a much more developed understanding of sociology than we currently have to carry out in practice).

Comment by richard_ngo on Why were people skeptical about RAISE? · 2019-09-05T21:11:41.724Z · score: 1 (1 votes) · EA · GW
RAISE was oriented toward producing people who become typical MIRI researchers... I expect that MIRI needs atypically good researchers.

Slightly odd phrasing here, which I don't really understand, since I think the typical MIRI researcher is very good at what they do, and most of them are atypically good researchers compared with the general population of researchers.

Do you mean instead "RAISE was oriented toward producing people who would be typical for an AI researcher in general"? Or do you mean that there are only minor benefits from additional researchers who are about as good as current MIRI researchers?

Comment by richard_ngo on Effective Altruism London Strategy 2019 · 2019-08-22T17:12:53.490Z · score: 16 (11 votes) · EA · GW

Nice document overall, makes a lot of sense. A few small (slightly nit-picky) comments:

Our vision is an optimal world.

This slogan feels a bit off to me. Most EA activities are aimed towards avoiding clearly bad things; the idea of aiming for any specific conception of utopia doesn't seem to me to represent that very well. There's a lot of disagreement over what sort of worlds would be optimal, or whether that concept even makes sense.

People for whom doing good is a goal in their life, who are open to changing their focus

I'm not sure either of these things is a crucial characteristic of the people you should be targeting. Consider someone working in an EA cause area who's not open to changing their focus, and who joined that area solely out of personal interest, but who nevertheless is interested in EA ideas and contributes a lot of useful things to the community (career guidance, support, etc).

We also will attempt to track the following metrics to inform strategy...

While I'm sure you'll have a holistic approach towards these metrics, they all fall into the broad bucket of "do more standard EA things". I have some concerns that this leads to people overfitting to ingroup incentives. So I'd suggest also prioritising something like "promoting the general competence and skills of group members". For example, there are a bunch of EA London people currently working in government. If they informally gave each other advice and mentorship and advanced to more senior roles more rapidly, that would be pretty valuable, but it wouldn't show up in any of the metrics you mention.

Comment by richard_ngo on Ask Me Anything! · 2019-08-17T21:53:43.681Z · score: 7 (9 votes) · EA · GW

Wei's list focused on ethics and decision theory, but I think that it would be most valuable to have more good conceptual analysis of the arguments for why AI safety matters, and particularly the role of concepts like "agency", "intelligence", and "goal-directed behaviour". While it'd be easier to tackle these given some knowledge of machine learning, I don't think that background is necessary - clarity of thought is probably the most important thing.

Comment by richard_ngo on Could we solve this email mess if we all moved to paid emails? · 2019-08-14T15:24:43.975Z · score: 6 (5 votes) · EA · GW

This all makes sense, and it does seem that people who are launching big projects might benefit from paid emails as a norm. On the other hand, you seem unusually worried about "spamming" people by sending them things it's pretty plausible they'd be interested in. It would be fairly easy to put at the top of your email something like "If you're interested in doing AI forecasting, read on; otherwise feel free to ignore this email" which means the cost is something like ~10 seconds per uninterested recipient, which seems reasonable.

On a meta note, I think I felt less positively towards this post than I otherwise would have, because it felt like a call to action (which I hold to high standards) rather than an exploratory poll - e.g. I read the first few bullet points as rhetorical questions. Seems like it was just a phrasing issue; and as an exploratory poll, I think it's interesting and I'm glad to have had the issue brought to mind :)

Comment by richard_ngo on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T15:46:59.714Z · score: 31 (10 votes) · EA · GW

It's not clear to me that we are in a mess. The only actual example you gave was a spammy corporate newsletter, which seems irrelevant.

This might look as follows: Lots of people write to senior researchers asking for feedback on papers or ideas, yet they’re mostly crackpots or uninteresting, so most stuff is not worth reading. A promising young researcher without many connections would want their feedback (and the senior researcher would want to give it!), but it simply takes too much effort to figure out that the paper is promising, so it never gets read. In fact, expecting this, the junior researcher might not even send it in the first place.

Does this happen much? Have you received feedback from people saying that this has happened to them? I expect personal networks in EA to be pretty good at connecting people - and if a young researcher is promising they can often explain why in a sentence or two (even if it's just by name-dropping previous positions).

Currently, the signalling problem is solved by things like:
Spending lots of effort crafting interesting-sounding intros which signal that the thing is worth reading, instead of just getting to the point
Burning social capital -- adding tags like “[Urgent]” or “[Important]” to the subject line

Does the latter actually happen? I've never seen it. Also, why is the former bad? It seems like an even better costly signal than paying money to send emails because it also produces a short description of the work which helps the recipient evaluate it. And very few people have too little time to skim a paragraph-long summary.

Comment by richard_ngo on What book(s) would you want a gifted teenager to come across? · 2019-08-07T22:34:05.725Z · score: 9 (3 votes) · EA · GW

I think I enjoyed Diaspora more, and it seems a little more relevant to far-future considerations. What about Permutation City in particular did you like?

Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:43:30.817Z · score: 1 (1 votes) · EA · GW

Meta: your last link doesn't seem to point anywhere.

Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:35:52.058Z · score: 4 (3 votes) · EA · GW

Interesting post. I wanted to write a substantive response, but ran out of energy. However, I have written previously on why I'm skeptical of the relevance of formally defined utility functions to ethics. Here's one essay about the differences between people's preferences and the type of "utility" that's morally valuable. Here's one about why there's no good way to ground preferences in the real world. And here's one attacking the underlying mindset that makes it tempting to model humans as agents with coherent goals.

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-27T15:35:11.930Z · score: 1 (1 votes) · EA · GW

There are two functions I'm looking for: the "archive/index" function, and the "sequences" function. The former should store as much EA content as possible (including stuff that's not high-quality enough for us to want to direct newcomers to); it'd ideally also have enough structure to make it easily-browsable. The latter should zoom in on a specific topic or person and showcase their ideas in a way that can be easily read and digested.

https://priority.wiki/ is somewhere in between those two, in a way that seems valuable, but that doesn't quite fit with the functions I outlined above. It doesn't seem like it's aiming to be an exhaustive repository of content. But the individual topic pages also don't seem well-curated enough that I could just point someone to them and say "read all the stuff on this page to learn about the topic". The latter might change as more work goes into it, but I'm more hopeful about the EA forum sequences feature for this purpose.

The list of syllabi on EAHub is also interesting, and fits with the sequences function, albeit only on one specific topic (introducing EA).

Are those what you were referring to, or are there other places on EAHub where (object-level) content is collected that I didn't spot?

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:16:49.387Z · score: 1 (1 votes) · EA · GW

I was particularly reminded of this by spending twenty minutes yesterday searching for an EA blog I wanted to cite, which has somehow vanished into the aether. EDIT: never mind, found it.


Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:15:59.111Z · score: 10 (4 votes) · EA · GW

Collection and curation of EA content from across the internet in a way that's accessible to newcomers, easily searchable, and will last for decades (at least). Seems like it wouldn't take that long to do a decent job, doesn't require uncommon skills, and could be pretty high-value.

I would be open to paying people to do this; message me if interested.

Comment by richard_ngo on Which scientific discovery was most ahead of its time? · 2019-05-22T00:01:25.035Z · score: 3 (2 votes) · EA · GW

I think Eli was asking whether your whole response was a quote, since the whole thing is in block quote format.

Comment by richard_ngo on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-11T17:52:16.407Z · score: 3 (3 votes) · EA · GW

What's your position on people coming for only part of the workshop? I'd be interested in attending but would rather not miss more than one day of work.

Comment by richard_ngo on Salary Negotiation for Earning to Give · 2019-04-05T16:17:34.340Z · score: 5 (4 votes) · EA · GW

Strong +1 for the kalzumeus blog post, that was very helpful for me.

Comment by richard_ngo on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-05T15:17:33.359Z · score: 4 (3 votes) · EA · GW
In general, stocking programmes aim at supporting commercial fisheries.

I'm a little confused by this, since it seems hugely economically inefficient to go to all the effort of raising fish, only to release them and then recapture them. Am I missing something, or is this basically a make-work program for the fishing industry?

Comment by richard_ngo on The career and the community · 2019-03-29T17:09:04.219Z · score: 6 (4 votes) · EA · GW

Note that your argument here is roughly Ben Pace's position in this post, which we co-wrote. I argued against Ben's position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren't going far enough in that direction.

Comment by richard_ngo on EA is vetting-constrained · 2019-03-28T11:44:40.830Z · score: 6 (5 votes) · EA · GW

Excellent post, although I think about it using a slightly different framing. How vetting-constrained grantmakers are depends a lot on how high their standards are. In the limit of arbitrarily high standards, all the vetting in the world might not be enough. In the limit of arbitrarily low standards, no vetting is required.

If we find that there's not enough capability to vet, that suggests that either our standards are correct and we need more vetters, or that our standards are too high and we should lower them. I don't have much inside information, so this is mostly based on my overall worldview, but I broadly think it's more the latter: that standards are too high, and that worrying too much about protecting EA's reputation makes it harder for us to innovate.

I think it would be very valuable to have more granters publicly explaining how they make tradeoffs between potential risks, clear benefits, and low-probability extreme successes; if these explanations exist and I'm just not aware of them, I'd appreciate pointers.

Another startup contacted at least 4 grantmaking organisations. Three of them deferred to the fourth.

One "easy fix" would simply be to encourage grantmakers to defer to each other less. Imagine that only one venture capital fund was allowed in Silicon Valley. I claim that's one of the worst things you could do for entrepreneurship there.

Comment by richard_ngo on The career and the community · 2019-03-28T11:24:30.518Z · score: 5 (4 votes) · EA · GW

I agree that all of the things you listed are great. But note that almost all of them look like "convince already-successful people of EA ideas" rather than "talented young EAs doing exceptional things". For the purposes of this discussion, the main question isn't when we get the first EA senator, but whether the advice we're giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there's a strong selection bias here because obviously if you're young, you've had less time to do cool things. But I still think your argument weighs only weakly against Vishal's advocacy of what I'm tempted to call the "Silicon Valley mindset".

So the empirical question here is something like, if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that's true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal's "go for extreme growth" and the more standard EA advice to "go for the most important cause areas".

Comment by richard_ngo on The career and the community · 2019-03-25T17:30:51.731Z · score: 4 (3 votes) · EA · GW

Is this not explained by founder effects from Less Wrong?