Posts

Technical AGI safety research outside AI 2019-10-18T15:02:20.718Z · score: 61 (23 votes)
Does any thorough discussion of moral parliaments exist? 2019-09-06T15:33:02.478Z · score: 36 (14 votes)
How much EA analysis of AI safety as a cause area exists? 2019-09-06T11:15:48.665Z · score: 70 (27 votes)
How do most utilitarians feel about "replacement" thought experiments? 2019-09-06T11:14:20.764Z · score: 18 (15 votes)
Why has poverty worldwide fallen so little in recent decades outside China? 2019-08-07T22:24:11.239Z · score: 23 (10 votes)
Which scientific discovery was most ahead of its time? 2019-05-16T12:28:54.437Z · score: 30 (13 votes)
Why doesn't the EA forum have curated posts or sequences? 2019-03-21T13:52:58.807Z · score: 34 (16 votes)
The career and the community 2019-03-21T12:35:23.073Z · score: 77 (40 votes)
Arguments for moral indefinability 2019-02-08T11:09:25.547Z · score: 31 (12 votes)
Disentangling arguments for the importance of AI safety 2019-01-23T14:58:27.881Z · score: 54 (30 votes)
How democracy ends: a review and reevaluation 2018-11-24T17:41:53.594Z · score: 23 (11 votes)
Some cruxes on impactful alternatives to AI policy work 2018-11-22T13:43:40.684Z · score: 21 (12 votes)

Comments

Comment by richard_ngo on A conversation with Rohin Shah · 2019-11-13T01:51:43.888Z · score: 9 (4 votes) · EA · GW

For reference, here's the post on realism about rationality that Rohin mentioned several times.

Comment by richard_ngo on EA Hotel Fundraiser 5: Out of runway! · 2019-10-25T15:24:12.705Z · score: 31 (23 votes) · EA · GW

I'm planning to donate to the EA hotel. Given that it isn't a registered charity, I'm interested in doing donation swaps with EAs in countries where charitable donations aren't tax deductible (like Sweden) so that I can get tax deductions on my donations. Reach out or comment here if interested.

Comment by richard_ngo on Seeking EA experts interested in the evolutionary psychology of existential risks · 2019-10-24T09:11:06.677Z · score: 2 (2 votes) · EA · GW

Any of the authors of this paper: https://www.nature.com/articles/s41598-019-50145-9

Comment by richard_ngo on A single person decides about funding for community builders world-wide · 2019-10-23T01:56:45.558Z · score: 14 (10 votes) · EA · GW

This homogeneity might well be bad - in particular by excluding valuable but less standard types of community building. If so, this problem would be mitigated by having more funding sources.

Comment by richard_ngo on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-17T14:13:08.826Z · score: 14 (6 votes) · EA · GW

Agreed - in fact, maybe a better question is whether there are any ideologies where strong adherence doesn't lead you to make poor decisions.

Comment by richard_ngo on EA Handbook 3.0: What content should I include? · 2019-10-01T11:53:25.435Z · score: 13 (4 votes) · EA · GW

Here's my (in-progress) collation of important EA resources, organised by topic. Contributions welcome :)

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-13T09:52:03.451Z · score: 1 (1 votes) · EA · GW

Using those two different types of "should" makes your proposed sentence ("It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that's what utilitarianism implies is the right action in that situation.") unnecessarily confusing, for a couple of reasons.

1. Most moral anti-realists don't use "epistemic should" when talking about morality. Instead, I claim, they use my definition of moral should: "X should do Y means that I endorse/prefer some moral theory T and T endorses X doing Y". (We can test this by asking anti-realists who don't subscribe to negative utilitarianism whether a negative utilitarian should destroy the universe - I predict they will either say "no" or argue that the question is ambiguous.) And so introducing "epistemic should" makes moral talk more difficult.

2. Moral realists who are utilitarians and use "moral should" would agree with your proposed sentence, and moral anti-realists who aren't utilitarians and use "epistemic should" would also agree with your sentence, but for two totally different reasons. This makes follow-up discussions much more difficult.

How about "Utilitarianism endorses humans voluntarily replacing themselves with these new beings." That gets rid of (most of) the contractarianism. I don't think there's any clean, elegant phrasing which then rules out the moral uncertainty in a way that's satisfactory to both realists and anti-realists, unfortunately - because realists and anti-realists disagree on whether, if you prefer/endorse a theory, that makes it rational for you to act on that theory. (In other words, I don't know whether moral realists have terminology which distinguishes between people who act on false theories that they currently endorse, versus people who act on false theories they currently don't endorse).

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T11:50:33.241Z · score: 4 (2 votes) · EA · GW

I originally wrote a different response to Wei's comment, but it wasn't direct enough. I'm copying the first part here since it may be helpful in explaining what I mean by "moral preferences" vs "personal preferences":

Each person has a range of preferences, which it's often convenient to break down into "moral preferences" and "personal preferences". This isn't always a clear distinction, but the main differences are:

1. Moral preferences are much more universalisable and less person-specific (e.g. "I prefer that people aren't killed" vs "I prefer that I'm not killed").

2. Moral preferences are associated with a meta-preference that everyone has the same moral preferences. This is why we feel so strongly that we need to find a shared moral "truth". Fortunately, most people in our societies agree on the most basic moral questions.

3. Moral preferences are associated with a meta-preference that they are consistent, simple, and actionable. This is why we feel so strongly that we need to find coherent moral theories rather than just following our intuitions.

4. Moral preferences are usually phrased as "X is right/wrong" and "people should do right and not do wrong" rather than "I prefer X". This often misleads people into thinking that their moral preferences are just pointers to some aspect of reality, the "objective moral truth", which is what people "objectively should do".

When we reflect on our moral preferences and try to make them more consistent and actionable, we often end up condensing our initial moral preferences (aka moral intuitions) into moral theories like utilitarianism. Note that we could do this for other preferences as well (e.g. "my theory of food is that I prefer things which have more salt than sugar") but because I don't have strong meta-preferences about my food preferences, I don't bother doing so.

The relationship between moral preferences and personal preferences can be quite complicated. People act on both, but often have a meta-preference to pay more attention to their moral preferences than they currently do. I'd count someone as a utilitarian if they have moral preferences that favour utilitarianism, and these are a non-negligible component of their overall preferences.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T11:44:19.167Z · score: 1 (1 votes) · EA · GW

My first objection is that you're using a different form of "should" than what is standard. My preferred interpretation of "X should do Y" is that it's equivalent to "I endorse some moral theory T and T endorses X doing Y". (Or "according to utilitarianism, X should do Y" is more simply equivalent to "utilitarianism endorses X doing Y"). In this case, "should" feels like it's saying something morally normative.

Whereas you seem to be using "should" as in "a person who has a preference X should act on X". In this case, "should" feels like it's saying something epistemically normative. You may think these are the same thing, but I don't, and either way it's confusing to build that assumption into our language. I'd prefer to replace this latter meaning of "should" with "it is rational to". So then we get:

"it is rational for humans who are utilitarians to commit mass suicide in order to bring the new beings into existence, because that's what utilitarianism implies is the right action."

My second objection is that this is only the case if "being a utilitarian" is equivalent to "having only one preference, which is to follow utilitarianism". In practice people have both moral preferences and also personal preferences. I'd still count someone as being a utilitarian if they follow their personal preferences instead of their moral preferences some (or even most) of the time. So then it's not clear whether it's rational for a human who is a utilitarian to commit suicide in this case; it depends on the contents of their personal preferences.

I think we avoid all of this mess just by saying "Utilitarianism endorses replacing existing humans with these new beings." This is, as I mentioned earlier, a similar claim to "ZFC implies that 1 + 1 = 2", and it allows people to have fruitful discussions without agreeing on whether they should endorse utilitarianism. I'd also be happy with Simon's version above: "Utilitarianism seems to imply that humans should...", although I think it's slightly less precise than mine, because it introduces an unnecessary "should" that some people might take to be a meta-level claim rather than merely a claim about the content of the theory of utilitarianism (this is a minor quibble though. Analogously: "ZFC implies that 1 + 1 = 2 is true").

Anyway, we have pretty different meta-ethical views, and I'm not sure how much we're going to converge, but I will say that from my perspective, your conflation of epistemic and moral normativity (as I described earlier) is a key component of why your position seems confusing to me.

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-10T09:35:35.079Z · score: 2 (2 votes) · EA · GW
Are you aware of any surveys or any other evidence supporting this? (I'd accept "most people in AI safety that I know started working in it because EA investigative work convinced them that AI safety matters" or something of that nature.)

I'm endorsing this, and I'm confused about which part you're skeptical about. Is it the "many EAs" bit? Obviously the word "many" is pretty fuzzy, and I don't intend it to be a strong claim. Mentally the numbers I'm thinking of are something like >50 people or >25% of committed (or "core", whatever that means) EAs. Don't have a survey to back that up though. Oh, I guess I'm also including people currently studying ML with the intention of doing safety. Will edit to add that.

Why are you trying to answer this, instead of "How should I update, given the results of all available investigations into AI safety as a cause area?"

There are other questions that I would like answers to, not related to AI safety, and if I trusted EA consensus, then that would make the process much easier.

For this question then, it seems that Paul Christiano also needs to be discounted (and possibly others as well but I'm not as familiar with them).

Indeed, I agree.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T13:46:29.079Z · score: 3 (3 votes) · EA · GW

Okay, thanks. So I guess the thing I'm curious about now is: what heuristics do you have for deciding when to prioritise contractarian intuitions over consequentialist intuitions, or vice versa? In extreme cases where one side feels very strongly about it (like this one), that's relatively easy, but do you have any thoughts on how to extend those heuristics to more nuanced dilemmas?

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T12:55:09.324Z · score: 9 (5 votes) · EA · GW

I think that "utilitarianism seems to imply that humans who are utilitarians should..." is a type error regardless of whether you're a realist or an anti-realist, in the same way as "the ZFC axioms imply that humans who accept those axioms should believe 1+1=2". That's not what the ZFC axioms imply - actually, they just imply that 1+1 = 2, and it's our meta-theory of mathematics which determines how you respond to this fact. Similarly, utilitarianism is a theory which, given some actions (or maybe states of the world, or maybe policies) returns a metric for how "right" or "good" they are. And then how we relate to that theory depends on our meta-ethics.

Given how confusing talking about morality is, I think it's important to be able to separate the object-level moral theories from meta-ethical theories in this way. (For more along these lines, see my post here).

Comment by richard_ngo on Does any thorough discussion of moral parliaments exist? · 2019-09-09T10:51:42.815Z · score: 3 (2 votes) · EA · GW

I imagine so, but if that's the reason it seems out of place in a paper on theoretical ethics.

Comment by richard_ngo on Does any thorough discussion of moral parliaments exist? · 2019-09-09T02:22:46.718Z · score: 3 (2 votes) · EA · GW

Nice! Seems like a cool paper. One thing that confuses me, though, is why the authors think that their theory's "moral risk aversion with respect to empirically expected utility" is undesirable. People just have weird intuitions about expected utility all the time, and don't reason about it well in general. See, for instance, how people prefer (even when moral uncertainty isn't involved) to donate to many charities rather than donating only to the single charity with the highest expected utility. It seems reasonable to call that preference misguided, so why can't we just call the intuitive objection to "moral risk aversion with respect to empirically expected utility" misguided?
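
To make that expected-utility point concrete, here's a rough sketch (the per-dollar impact figures are hypothetical, and it assumes a donor small enough that their donation doesn't change each charity's marginal cost-effectiveness, so impact is roughly linear in dollars):

def expected_impact(allocation, impact_per_dollar):
    """Total expected impact of a donation allocation, assuming linearity."""
    return sum(dollars * impact_per_dollar[charity]
               for charity, dollars in allocation.items())

# Hypothetical, illustrative numbers only.
impact_per_dollar = {"charity_A": 10.0, "charity_B": 8.0}

split = {"charity_A": 50, "charity_B": 50}
concentrated = {"charity_A": 100, "charity_B": 0}

print(expected_impact(split, impact_per_dollar))         # 900.0
print(expected_impact(concentrated, impact_per_dollar))  # 1000.0

Under those assumptions, any split away from the highest expected-utility option strictly lowers total expected impact, which is why the many-charities preference looks misguided in expected-utility terms.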

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-09T01:36:21.757Z · score: 8 (8 votes) · EA · GW

Let me try to answer the latter question (and thanks for pushing me to flesh out my vague ideas more!). One very brief way you could describe the development of AI safety is something like "A few transhumanists came up with some key ideas and wrote many blog posts. The rationalist movement formed from those following these things online, and made further contributions. Then the EA movement formed, and while it was originally focused on causes like global poverty, over time did a bunch of investigative work which led many EAs to become convinced that AI safety matters, and to start working on it, directly or indirectly (or to gain skills with the intent of doing such work)."

The three questions I am ultimately trying to answer are: a) how valuable is it to build up the EA movement? b) how much should I update when I learn that a given belief is a consensus in EA? and c) how much evidence do the opinions of other people provide in favour of AI safety being important?

To answer the first question, assuming that analysis of AI safety as a cause area is valuable, I should focus on contributions by people who were motivated or instigated by the EA movement itself. Here Nick doesn't count (except insofar as EA made his book come out sooner or better).

To answer the second question, it helps to know whether the focus on AI safety in EA came about because many people did comprehensive due diligence and shared their findings, or whether there wasn't much investigation and the ubiquity of the belief was driven via an information cascade. For this purpose, I should count work by people to the extent that they or people like them are likely to critically investigate other beliefs that are or will become widespread in EA. Being motivated to investigate AI safety by membership in the EA movement is the best evidence, but for the purpose of answering this question I probably should have used "motivated by the EA movement or motivated by very similar things to what EAs are motivated by", and should partially count Nick.

To answer the third question, it helps to know whether the people who have become convinced that AI safety is important are a relatively homogenous group who might all have highly correlated biases and hidden motivations, or whether a wide range of people have become convinced. For this purpose, I should count work by people to the extent that they are dissimilar to the transhumanists and rationalists who came up with the original safety arguments, and also to the extent that they rederived the arguments for themselves rather than being influenced by the existing arguments. Here EAs who started off not being inclined towards transhumanism or rationalism at all count the most, and Nick counts very little.

Note that Nick is quite an outlier though, so while I'm using him as an illustrative example, I'd prefer engagement on the general points rather than this example in particular.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T18:26:53.929Z · score: 5 (5 votes) · EA · GW

I agree and didn't mean to imply that Knutsson endorses the argument in absolute terms; thanks for the clarification.

Comment by richard_ngo on How much EA analysis of AI safety as a cause area exists? · 2019-09-08T18:20:53.162Z · score: 6 (4 votes) · EA · GW

To my knowledge it doesn't meet the "Was motivated or instigated by EA" criterion, since Nick had been developing those ideas since well before the EA movement started. I guess he might have gotten EA money while writing the book, but even if that's the case it doesn't feel like a central example of what I'm interested in.

Comment by richard_ngo on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T14:37:48.467Z · score: 6 (5 votes) · EA · GW

Thanks for the informative reply! And also for writing the paper in the first place :)

"Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does."

I think we need to have high epistemic standards in this community, and would be dismayed if a significant number of people with strong moral views were hiding them in order to make a better impression on others. (See also https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists)

Comment by richard_ngo on Are we living at the most influential time in history? · 2019-09-05T21:54:20.914Z · score: 19 (5 votes) · EA · GW

Nice post :) A couple of comments:

even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

To me it seems that the biggest constraint on being able to invest in future centuries is the continuous existence of a trustworthy movement from now until then. I imagine that a lot of meta work implicitly contributes towards this; so the idea that the HoH is far in the future is an argument for more meta work (and more meta work targeted towards EA longevity in particular). But my prior on a given movement remaining trustworthy over long time periods is quite low, and becomes lower the more money it is entrusted with.
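
As a rough illustration of why that prior falls quickly with the time horizon, consider a toy model (the annual drift rates are assumed purely for illustration, and independence across years is a simplifying assumption):

def survival_probability(annual_drift_rate, years):
    """Chance of staying trustworthy for the whole period, assuming a
    constant, independent annual chance of value drift."""
    return (1 - annual_drift_rate) ** years

for rate in (0.01, 0.03, 0.05):
    print(rate, round(survival_probability(rate, 100), 3))
# 0.01 -> 0.366, 0.03 -> 0.048, 0.05 -> 0.006 over a century

Even a small annual chance of value drift compounds into a low probability of staying on track over a century, which is part of why entrusting a movement with large resources for the far future seems so demanding.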

But there are future scenarios that we can imagine now that would seem very influential:

To the ones you listed, I would add:

  • The time period during which we reach technological completion, since from then on stochasticity in the rate of technological advancement becomes a much less important factor.
  • As you mentioned previously, the time period during which we develop comprehensive techniques for engineering the motivations and values of the subsequent generation - if it actually happens to not be very close to us. (E.g. it might require a much more developed understanding of sociology than we currently have to carry out in practice).
Comment by richard_ngo on Why were people skeptical about RAISE? · 2019-09-05T21:11:41.724Z · score: 1 (1 votes) · EA · GW
RAISE was oriented toward producing people who become typical MIRI researchers... I expect that MIRI needs atypically good researchers.

Slightly odd phrasing here which I don't really understand, since I think the typical MIRI researcher is very good at what they do, and that most of them are atypically good researchers compared with the general population of researchers.

Do you mean instead "RAISE was oriented toward producing people who would be typical for an AI researcher in general"? Or do you mean that there are only minor benefits from additional researchers who are about as good as current MIRI researchers?

Comment by richard_ngo on Effective Altruism London Strategy 2019 · 2019-08-22T17:12:53.490Z · score: 16 (11 votes) · EA · GW

Nice document overall, makes a lot of sense. A few small (slightly nit-picky) comments:

Our vision is an optimal world.

This slogan feels a bit off to me. Most EA activities are aimed towards avoiding clearly bad things; the idea of aiming for any specific conception of utopia doesn't seem to me to represent that very well. There's a lot of disagreement over what sort of worlds would be optimal, or whether that concept even makes sense.

People for whom doing good is a goal in their life, who are open to changing their focus

I'm not sure either of these things is a crucial characteristic of the people you should be targeting. Consider someone working in an EA cause area who's not open to changing their focus, and who joined that area solely out of personal interest, but who nevertheless is interested in EA ideas and contributes a lot of useful things to the community (career guidance, support, etc).

We also will attempt to track the following metrics to inform strategy...

While I'm sure you'll have a holistic approach towards these metrics, they all fall into the broad bucket of "do more standard EA things". I have some concerns that this leads to people overfitting to ingroup incentives. So I'd suggest also prioritising something like "promoting the general competence and skills of group members". For example, there are a bunch of EA London people currently working in government. If they informally gave each other advice and mentorship and advanced to more senior roles more rapidly, that would be pretty valuable, but not show up in any of the metrics you mention.

Comment by richard_ngo on Ask Me Anything! · 2019-08-17T21:53:43.681Z · score: 6 (8 votes) · EA · GW

Wei's list focused on ethics and decision theory, but I think that it would be most valuable to have more good conceptual analysis of the arguments for why AI safety matters, and particularly the role of concepts like "agency", "intelligence", and "goal-directed behaviour". While it'd be easier to tackle these given some knowledge of machine learning, I don't think that background is necessary - clarity of thought is probably the most important thing.

Comment by richard_ngo on Could we solve this email mess if we all moved to paid emails? · 2019-08-14T15:24:43.975Z · score: 6 (5 votes) · EA · GW

This all makes sense, and it does seem that people who are launching big projects might benefit from paid emails as a norm. On the other hand, you seem unusually worried about "spamming" people by sending them things it's pretty plausible they'd be interested in. It would be fairly easy to put at the top of your email something like "If you're interested in doing AI forecasting, read on; otherwise feel free to ignore this email" which means the cost is something like ~10 seconds per uninterested recipient, which seems reasonable.

On a meta note, I think I felt less positively towards this post than I otherwise would have, because it felt like a call to action (which I hold to high standards) rather than an exploratory poll - e.g. I read the first few bullet points as rhetorical questions. Seems like it was just a phrasing issue; and as an exploratory poll, I think it's interesting and I'm glad to have had the issue brought to mind :)

Comment by richard_ngo on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T15:46:59.714Z · score: 31 (10 votes) · EA · GW

It's not clear to me that we are in a mess. The only actual example you gave was a spammy corporate newsletter, which seems irrelevant.

This might look as follows: Lots of people write to senior researchers asking for feedback on papers or ideas, yet they’re mostly crackpots or uninteresting, so most stuff is not worth reading. A promising young researcher without many connections would want their feedback (and the senior researcher would want to give it!), but it simply takes too much effort to figure out that the paper is promising, so it never gets read. In fact, expecting this, the junior researcher might not even send it in the first place.

Does this happen much? Have you received feedback from people saying that this has happened to them? I expect personal networks in EA to be pretty good at connecting people - and if a young researcher is promising they can often explain why in a sentence or two (even if it's just by name-dropping previous positions).

Currently, the signalling problem is solved by things like:
Spending lots of effort crafting interesting-sounding intros which signal that the thing is worth reading, instead of just getting to the point
Burning social capital -- adding tags like “[Urgent]” or “[Important]” to the subject line

Does the latter actually happen? I've never seen it. Also, why is the former bad? It seems like an even better costly signal than paying money to send emails because it also produces a short description of the work which helps the recipient evaluate it. And very few people have too little time to skim a paragraph-long summary.

Comment by richard_ngo on What book(s) would you want a gifted teenager to come across? · 2019-08-07T22:34:05.725Z · score: 9 (3 votes) · EA · GW

I think I enjoyed Diaspora more, and it seems a little more relevant to far-future considerations. What about Permutation City in particular did you like?

Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:43:30.817Z · score: 1 (1 votes) · EA · GW

Meta: your last link doesn't seem to point anywhere.

Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:35:52.058Z · score: 4 (3 votes) · EA · GW

Interesting post. I wanted to write a substantive response, but ran out of energy. However, I have written previously on why I'm skeptical of the relevance of formally defined utility functions to ethics. Here's one essay about the differences between people's preferences and the type of "utility" that's morally valuable. Here's one about why there's no good way to ground preferences in the real world. And here's one attacking the underlying mindset that makes it tempting to model humans as agents with coherent goals.

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-27T15:35:11.930Z · score: 1 (1 votes) · EA · GW

There are two functions I'm looking for: the "archive/index" function, and the "sequences" function. The former should store as much EA content as possible (including stuff that's not high-quality enough for us to want to direct newcomers to); it'd ideally also have enough structure to make it easily browsable. The latter should zoom in on a specific topic or person and showcase their ideas in a way that can be easily read and digested.

https://priority.wiki/ is somewhere in between those two, in a way that seems valuable, but that doesn't quite fit with the functions I outlined above. It doesn't seem like it's aiming to be an exhaustive repository of content. But the individual topic pages also don't seem well-curated enough that I could just point someone to them and say "read all the stuff on this page to learn about the topic". The latter might change as more work goes into it, but I'm more hopeful about the EA forum sequences feature for this purpose.

The list of syllabi on EAHub is also interesting, and fits with the sequences function, albeit only on one specific topic (introducing EA).

Are those what you were referring to, or are there other places on EAHub where (object-level) content is collected that I didn't spot?

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:16:49.387Z · score: 1 (1 votes) · EA · GW

I was particularly reminded of this by spending twenty minutes yesterday searching for an EA blog I wanted to cite, which has somehow vanished into the aether. EDIT: never mind, found it.


Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:15:59.111Z · score: 10 (4 votes) · EA · GW

Collection and curation of EA content from across the internet in a way that's accessible to newcomers, easily searchable, and will last for decades (at least). Seems like it wouldn't take that long to do a decent job, doesn't require uncommon skills, and could be pretty high-value.

I would be open to paying people to do this; message me if interested.

Comment by richard_ngo on Which scientific discovery was most ahead of its time? · 2019-05-22T00:01:25.035Z · score: 3 (2 votes) · EA · GW

I think Eli was asking whether your whole response was a quote, since the whole thing is in block quote format.

Comment by richard_ngo on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-11T17:52:16.407Z · score: 3 (3 votes) · EA · GW

What's your position on people coming for only part of the workshop? I'd be interested in attending but would rather not miss more than one day of work.

Comment by richard_ngo on Salary Negotiation for Earning to Give · 2019-04-05T16:17:34.340Z · score: 5 (4 votes) · EA · GW

Strong +1 for the kalzumeus blog post, that was very helpful for me.

Comment by richard_ngo on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-05T15:17:33.359Z · score: 4 (3 votes) · EA · GW
In general, stocking programmes aim at supporting commercial fisheries.

I'm a little confused by this, since it seems hugely economically inefficient to go to all the effort of raising fish, only to release them and then recapture them. Am I missing something, or is this basically a make-work program for the fishing industry?

Comment by richard_ngo on The career and the community · 2019-03-29T17:09:04.219Z · score: 6 (4 votes) · EA · GW

Note that your argument here is roughly Ben Pace's position in this post which we co-wrote. I argued against Ben's position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren't going far enough in that direction.

Comment by richard_ngo on EA is vetting-constrained · 2019-03-28T11:44:40.830Z · score: 6 (5 votes) · EA · GW

Excellent post, although I think about it using a slightly different framing. How vetting-constrained granters are depends a lot on how high their standards are. In the limit of arbitrarily high standards, all the vetting in the world might not be enough. In the limit of arbitrarily low standards, no vetting is required.

If we find that there's not enough capability to vet, that suggests that either our standards are correct and we need more vetters, or our standards are too high and we should lower them. I don't have much inside information, so this is mostly based on my overall worldview, but I broadly think it's more the latter: that standards are too high, and that worrying too much about protecting EA's reputation makes it harder for us to innovate.

I think it would be very valuable to have more granters publicly explaining how they make tradeoffs between potential risks, clear benefits, and low-probability extreme successes; if these explanations exist and I'm just not aware of them, I'd appreciate pointers.

Another startup contacted at least 4 grantmaking organisations. Three of them deferred to the fourth.

One "easy fix" would simply be to encourage grantmakers to defer to each other less. Imagine that only one venture capital fund was allowed in Silicon Valley. I claim that's one of the worst things you could do for entrepreneurship there.

Comment by richard_ngo on The career and the community · 2019-03-28T11:24:30.518Z · score: 4 (3 votes) · EA · GW

I agree that all of the things you listed are great. But note that almost all of them look like "convince already-successful people of EA ideas" rather than "talented young EAs doing exceptional things". For the purposes of this discussion, the main question isn't when we get the first EA senator, but whether the advice we're giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there's a strong selection bias here because obviously if you're young, you've had less time to do cool things. But I still think your argument weighs only weakly against Vishal's advocacy of what I'm tempted to call the "Silicon Valley mindset".

So the empirical question here is something like, if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that's true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal's "go for extreme growth" and the more standard EA advice to "go for the most important cause areas".

Comment by richard_ngo on The career and the community · 2019-03-25T17:30:51.731Z · score: 4 (3 votes) · EA · GW

Is this not explained by founder effects from Less Wrong?

Comment by richard_ngo on The career and the community · 2019-03-25T13:08:27.962Z · score: 19 (8 votes) · EA · GW

One other thing that I just noticed: looking at the list of 80k's 10 priority paths found here, the first 6 (and arguably also #8: China specialist) are all roles for which the majority of existing jobs are within an EA bubble. On one hand, this shows how well the EA community has done in creating important jobs, but it also highlights my concern about us steering people away from conventionally successful careers and engagement with non-EAs.

Comment by richard_ngo on The career and the community · 2019-03-24T18:47:19.821Z · score: 3 (3 votes) · EA · GW

This just seems like an unusually bad joke (as he also clarifies later). I think the phenomenon you're talking about is real (although I'm unsure as to the extent) but wouldn't use this as evidence.

Comment by richard_ngo on The career and the community · 2019-03-24T14:34:46.327Z · score: 13 (5 votes) · EA · GW

Hi Michelle, thanks for the thoughtful reply; I've responded below. Please don't feel obliged to respond in detail to my specific points if that's not a good use of your time; writing up a more general explanation of 80k's position might be more useful?

You're right that I'm positive about pretty broad capital building, but I'm not sure we disagree that much here. On a scale of breadth to narrowness of career capital, consulting is at one extreme because it's so generalist, and the other extreme is working at EA organisations or directly on EA causes straight out of university. I'm arguing against the current skew towards the latter extreme, but I'm not arguing that the former extreme is ideal. I think something like working at a top think tank (your example above) is a great first career step. (As a side note, I mention consulting twice in my post, but both times just as an illustrative example. Since this seems to have been misleading, I'll change one of those mentions to think tanks).

However, I do think that there are only a small number of jobs which are as good on so many axes as top think tanks, and it's usually quite difficult to get them as a new grad. Most new grads therefore face harsher tradeoffs between generality and narrowness.

More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs.

I guess my core argument is that in the past, EA has overfit to the jobs we thought were important at the time, both because of explicit career advice and because of implicit social pressure. So how do we avoid doing so going forward? I argue that given the social pressure which pushes people towards wanting to have a few very specific careers, it's better to have a community default which encourages people towards a broader range of jobs, for three reasons: to ameliorate the existing social bias, to allow a wider range of people to feel like they belong in EA, and to add a little bit of "epistemic modesty"-based deference towards existing non-EA career advice. I claim that if EA as a movement had been more epistemically modest about careers 5 years ago, we'd have a) more people with useful general career capital, b) more people in things which didn't use to be priorities, but now are, like politics, c) fewer current grads who (mistakenly/unsuccessfully) prioritised their career search specifically towards EA orgs, and maybe d) more information about a broader range of careers from people pursuing those paths. There would also have been costs to adding this epistemic modesty, of course, and I don't have a strong opinion on whether the costs outweigh the benefits, but I do think it's worth making a case for those benefits.

We’ve updated pretty substantially away from that in favour of taking a more directed approach to your career

Looking at this post on how you've changed your mind, I'm not strongly convinced by the reasons you cited. Summarised:

1. If you’re focused on our top problem areas, narrow career capital in those areas is usually more useful than flexible career capital.

Unless it turns out that there's a better form of narrow career which it would be useful to be able to shift towards (e.g. shifts in EA ideas, or unexpected doors opening as you get more senior).

2. You can get good career capital in positions with high immediate impact

I've argued that immediate impact is usually a fairly unimportant metric which is outweighed by the impact later on in your career.

3. Discount rates on aligned-talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.

I am personally not very convinced by this, but I appreciate that there's a broad range of opinions and so it's a reasonable concern.

It still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.

Re OpenPhil and GiveWell wanting to hire new grads: in general I don't place much weight on evidence of the form "organisation x thinks their own work is unusually impactful and worth the counterfactual tradeoffs".

I agree that you have a very difficult job in trying to convey key ideas to people who are coming from totally different positions in terms of background knowledge and experience with EA. My advice is primarily aimed at people who are already committed EAs, and who are subject to the social dynamics I discuss above - hence why this is a "community" post. I think you do amazing work in introducing a wider audience to EA ideas, especially with nuance via the podcast as you mentioned.

Comment by richard_ngo on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-24T12:33:48.568Z · score: 2 (2 votes) · EA · GW

I quite like this idea, and think that the unilateralist's curse is less important than others make it out to be (I'll elaborate on this in a forum post soon).

Just wanted to quickly mention https://lets-fund.org/ as a related project, in case you hadn't already heard of it.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-23T12:17:54.066Z · score: 8 (4 votes) · EA · GW
I also think there's a lot of value to publishing a really good collection the first time around

The EA handbook already exists, so this could be the basis for the first sequence basically immediately. Also EA concepts.

More generally, I think I disagree with the broad framing you're using, which feels like "we're going to get the definitive collection of essays on each topic, which we endorse". But even if CEA manages to put together a few such sequences, I predict that this will stagnate once people aren't working on it as hard. By contrast, a more scalable type of sequence could be something like: ask Brian Tomasik, Paul Christiano, Scott Alexander, and other prolific writers, to assemble a reading list of the top 5-10 essays they've written relating to EA (as well as allowing community members to propose lists of essays related to a given theme). It seems quite likely that at least some of those points have been made better elsewhere, and also that many of them are controversial topics within EA, but people should be aware of this sort of thing, and right now there's no good mechanism for that happening except vague word of mouth or spending lots of time scrolling through blogs.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-23T11:57:52.359Z · score: 9 (5 votes) · EA · GW
It crucially doesn't ensure that the rewarded content will continue to be read by newcomers 5 years after it was written... New EAs on the Forum are not reading the best EA content of the past 10 years, just the most recent content.

This sentence deserves a strong upvote all by itself; it is exactly the key issue. There is so much good stuff out there: I've read pretty widely on EA topics but continue to find excellent material that I've never seen before, scattered across a range of blogs. Gathering that together seems vital as the movement gets older and it gets harder and harder to actually find and read everything.

I can imagine this being an automatic process based on voting, but I have an intuition that it's good for humans to be in the loop. One reason is that when humans make decisions, you can ask why, but when 50 people vote, it's hard to interrogate that system as to the reason behind its decision, and improve its reasoning the next time.

I think that's true when there are moderators who are able to spend a lot of time and effort thinking about what to curate, like you do for Less Wrong. But right now it seems like the EA forum staff are very time-constrained, and in addition are worried about endorsing things. So in addition to the value of decentralising the work involved, there's an additional benefit of voting in that it's easier for CEA to disclaim endorsement.

Given that, I don't have a strong opinion about whether it's better for community members to be able to propose and vote on sequences, or whether it's better for CEA to take a strong stance that they're going to curate sequences with interesting content without necessarily endorsing it, and ensure that there's enough staff time available to do that. The former currently seems more plausible (although I have no inside knowledge about what CEA are planning).

The thing I would like not to happen is for the EA forum to remain a news site because CEA is too worried about endorsing the wrong things to put up the really good content that already exists, or sets such a high bar for doing so that in practice you get only a couple of sequences. EA is a question, not a set of fixed endorsed beliefs, and I think the ability to move fast and engage with a variety of material is the lifeblood of an intellectual community.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T00:46:36.910Z · score: 4 (3 votes) · EA · GW

It's very cool that you took the time to do so. I agree that preserving and showcasing great content is important in the long term, and am sad that this hasn't come to anything yet. Of course the EA forum is still quite new, but my intuition is that collating a broadly acceptable set of sequences (which can always be revised later) is the sort of thing that would take only one or two intern-weeks.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T00:41:52.770Z · score: 1 (1 votes) · EA · GW

Isn't all the code required for curation already implemented for Less Wrong? I guess adding functionality is rarely easy, but in this case I would have assumed that it was more work to remove it than to keep it.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T17:45:27.071Z · score: 9 (5 votes) · EA · GW

Agreed that subforums are a good idea, but the way they're done on facebook seems particularly bad for creating common knowledge, because (as you point out) they're so scattered. Also the advantage of people checking facebook more is countered, for me, by the disadvantage of facebook being a massive time sink, so that I don't want to encourage myself or others to go on it when I don't have to. So it would be ideal if the solution could be a modification or improvement to the EA forum - especially given that the code for curation already exists!

Comment by richard_ngo on The career and the community · 2019-03-21T17:36:23.895Z · score: 6 (2 votes) · EA · GW

Thanks for the comment! I find your last point particularly interesting, because while I and many of my friends assume that the community part is very important, there's an obvious selection effect which makes that assumption quite biased. I'll need to think about that more.

I think I disagree slightly that there needs to be a "task Y", it may be the case that some people will have an interest in EA but wont be able to contribute

Two problems with this. The first is that when people first encounter EA, they're usually not willing to totally change careers, and so if they get the impression that they need to either make a big shift or accept that there's no space for them in EA, they may well never start engaging. The second is that we want to encourage people to feel able to take risky (but high expected value) decisions, or commit to EA careers. But if failure at those things means that their career is in a worse place AND there's no clear place for them in the EA community (because they're now unable to contribute in ways that other EAs care about), they will (understandably) be more risk-averse.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T14:48:03.317Z · score: 2 (2 votes) · EA · GW

Ironically enough, I can't find the launch announcement to verify this.

Comment by richard_ngo on Why do you reject negative utilitarianism? · 2019-02-17T22:48:17.451Z · score: 11 (4 votes) · EA · GW

Toby Ord gives a good summary of a range of arguments against negative utilitarianism here.

Personally, I think that valuing positive experiences instrumentally is insufficient, given that the future has the potential to be fantastic.