Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:43:30.817Z · score: 1 (1 votes) · EA · GW

Meta: your last link doesn't seem to point anywhere.

Comment by richard_ngo on Some solutions to utilitarian problems · 2019-07-12T23:35:52.058Z · score: 4 (3 votes) · EA · GW

Interesting post. I wanted to write a substantive response, but ran out of energy. However, I have written previously on why I'm skeptical of the relevance of formally defined utility functions to ethics. Here's one essay about the differences between people's preferences and the type of "utility" that's morally valuable. Here's one about why there's no good way to ground preferences in the real world. And here's one attacking the underlying mindset that makes it tempting to model humans as agents with coherent goals.

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-27T15:35:11.930Z · score: 1 (1 votes) · EA · GW

There are two functions I'm looking for: the "archive/index" function, and the "sequences" function. The former should store as much EA content as possible (including stuff that's not high-quality enough for us to want to direct newcomers to); it'd ideally also have enough structure to make it easily browsable. The latter should zoom in on a specific topic or person and showcase their ideas in a way that can be easily read and digested.

https://priority.wiki/ is somewhere in between those two, in a way that seems valuable, but that doesn't quite fit with the functions I outlined above. It doesn't seem like it's aiming to be an exhaustive repository of content. But the individual topic pages also don't seem well-curated enough that I could just point someone to them and say "read all the stuff on this page to learn about the topic". The latter might change as more work goes into it, but I'm more hopeful about the EA forum sequences feature for this purpose.

The list of syllabi on EAHub is also interesting, and fits with the sequences function, albeit only on one specific topic (introducing EA).

Are those what you were referring to, or are there other places on EAHub where (object-level) content is collected that I didn't spot?

Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:16:49.387Z · score: 1 (1 votes) · EA · GW

I was particularly reminded of this by spending twenty minutes yesterday searching for an EA blog I wanted to cite, which has somehow vanished into the aether. EDIT: never mind, found it.


Comment by richard_ngo on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:15:59.111Z · score: 8 (2 votes) · EA · GW

Collection and curation of EA content from across the internet in a way that's accessible to newcomers, easily searchable, and will last for decades (at least). Seems like it wouldn't take that long to do a decent job, doesn't require uncommon skills, and could be pretty high-value.

I would be open to paying people to do this; message me if interested.

Comment by richard_ngo on Which scientific discovery was most ahead of its time? · 2019-05-22T00:01:25.035Z · score: 3 (2 votes) · EA · GW

I think Eli was asking whether your whole response was a quote, since the whole thing is in block quote format.

Which scientific discovery was most ahead of its time?

2019-05-16T12:28:54.437Z · score: 30 (13 votes)
Comment by richard_ngo on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-11T17:52:16.407Z · score: 3 (3 votes) · EA · GW

What's your position on people coming for only part of the workshop? I'd be interested in attending but would rather not miss more than one day of work.

Comment by richard_ngo on Salary Negotiation for Earning to Give · 2019-04-05T16:17:34.340Z · score: 5 (4 votes) · EA · GW

Strong +1 for the kalzumeus blog post; it was very helpful for me.

Comment by richard_ngo on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-05T15:17:33.359Z · score: 4 (3 votes) · EA · GW
In general, stocking programmes aim at supporting commercial fisheries.

I'm a little confused by this, since it seems hugely economically inefficient to go to all the effort of raising fish, only to release them and then recapture them. Am I missing something, or is this basically a make-work program for the fishing industry?

Comment by richard_ngo on The career and the community · 2019-03-29T17:09:04.219Z · score: 6 (4 votes) · EA · GW

Note that your argument here is roughly Ben Pace's position in this post, which we co-wrote. I argued against Ben's position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren't going far enough in that direction.

Comment by richard_ngo on EA is vetting-constrained · 2019-03-28T11:44:40.830Z · score: 6 (5 votes) · EA · GW

Excellent post, although I think about it using a slightly different framing. How vetting-constrained granters are depends a lot on how high their standards are. In the limit of arbitrarily high standards, all the vetting in the world might not be enough. In the limit of arbitrarily low standards, no vetting is required.

If we find that there's not enough capability to vet, that suggests either that our standards are correct and we need more vetters, or that our standards are too high and we should lower them. I don't have much inside information, so this is mostly based on my overall worldview, but I broadly think it's more the latter: that standards are too high, and that worrying too much about protecting EA's reputation makes it harder for us to innovate.

I think it would be very valuable to have more granters publicly explaining how they make tradeoffs between potential risks, clear benefits, and low-probability extreme successes; if these explanations exist and I'm just not aware of them, I'd appreciate pointers.

Another startup contacted at least 4 grantmaking organisations. Three of them deferred to the fourth.

One "easy fix" would simply be to encourage grantmakers to defer to each other less. Imagine that only one venture capital fund was allowed in Silicon Valley. I claim that's one of the worst things you could do for entrepreneurship there.

Comment by richard_ngo on The career and the community · 2019-03-28T11:24:30.518Z · score: 4 (3 votes) · EA · GW

I agree that all of the things you listed are great. But note that almost all of them look like "convince already-successful people of EA ideas" rather than "talented young EAs doing exceptional things". For the purposes of this discussion, the main question isn't when we get the first EA senator, but whether the advice we're giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there's a strong selection bias here because obviously if you're young, you've had less time to do cool things. But I still think your argument weighs only weakly against Vishal's advocacy of what I'm tempted to call the "Silicon Valley mindset".

So the empirical question here is something like, if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that's true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal's "go for extreme growth" and the more standard EA advice to "go for the most important cause areas".

Comment by richard_ngo on The career and the community · 2019-03-25T17:30:51.731Z · score: 4 (3 votes) · EA · GW

Is this not explained by founder effects from Less Wrong?

Comment by richard_ngo on The career and the community · 2019-03-25T13:08:27.962Z · score: 19 (8 votes) · EA · GW

One other thing that I just noticed: looking at the list of 80k's 10 priority paths found here, the first 6 (and arguably also #8: China specialist) are all roles for which the majority of existing jobs are within an EA bubble. On the one hand, this shows how well the EA community has done in creating important jobs; on the other, it highlights my concern about us steering people away from conventionally successful careers and engagement with non-EAs.

Comment by richard_ngo on The career and the community · 2019-03-24T18:47:19.821Z · score: 3 (3 votes) · EA · GW

This just seems like an unusually bad joke (as he also clarifies later). I think the phenomenon you're talking about is real (although I'm unsure as to the extent) but wouldn't use this as evidence.

Comment by richard_ngo on The career and the community · 2019-03-24T14:34:46.327Z · score: 13 (5 votes) · EA · GW

Hi Michelle, thanks for the thoughtful reply; I've responded below. Please don't feel obliged to respond in detail to my specific points if that's not a good use of your time; writing up a more general explanation of 80k's position might be more useful?

You're right that I'm positive about pretty broad capital building, but I'm not sure we disagree that much here. On a scale of breadth to narrowness of career capital, consulting is at one extreme because it's so generalist, and the other extreme is working at EA organisations or directly on EA causes straight out of university. I'm arguing against the current skew towards the latter extreme, but I'm not arguing that the former extreme is ideal. I think something like working at a top think tank (your example above) is a great first career step. (As a side note, I mention consulting twice in my post, but both times just as an illustrative example. Since this seems to have been misleading, I'll change one of those mentions to think tanks).

However, I do think that there are only a small number of jobs which are as good on so many axes as top think tanks, and it's usually quite difficult to get them as a new grad. Most new grads therefore face harsher tradeoffs between generality and narrowness.

More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs.

I guess my core argument is that in the past, EA has overfit to the jobs we thought were important at the time, both because of explicit career advice and because of implicit social pressure. So how do we avoid doing so going forward? I argue that given the social pressure which pushes people towards wanting to have a few very specific careers, it's better to have a community default which encourages people towards a broader range of jobs, for three reasons: to ameliorate the existing social bias, to allow a wider range of people to feel like they belong in EA, and to add a little bit of "epistemic modesty"-based deference towards existing non-EA career advice. I claim that if EA as a movement had been more epistemically modest about careers 5 years ago, we'd have a) more people with useful general career capital, b) more people in areas which weren't priorities then but are now, like politics, c) fewer current grads who (mistakenly/unsuccessfully) prioritised their career search specifically towards EA orgs, and maybe d) more information about a broader range of careers from people pursuing those paths. There would also have been costs to adding this epistemic modesty, of course, and I don't have a strong opinion on whether the costs outweigh the benefits, but I do think it's worth making a case for those benefits.

We’ve updated pretty substantially away from that in favour of taking a more directed approach to your career

Looking at this post on how you've changed your mind, I'm not strongly convinced by the reasons you cited. Summarised:

1. If you’re focused on our top problem areas, narrow career capital in those areas is usually more useful than flexible career capital.

Unless it turns out that there's a better form of narrow career capital which it would be useful to be able to shift towards (e.g. because EA ideas shift, or because unexpected doors open as you get more senior).

2. You can get good career capital in positions with high immediate impact

I've argued that immediate impact is usually a fairly unimportant metric which is outweighed by the impact later on in your career.

3. Discount rates on aligned-talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.

I am personally not very convinced by this, but I appreciate that there's a broad range of opinions and so it's a reasonable concern.

It still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.

Re OpenPhil and GiveWell wanting to hire new grads: in general I don't place much weight on evidence of the form "organisation x thinks their own work is unusually impactful and worth the counterfactual tradeoffs".

I agree that you have a very difficult job in trying to convey key ideas to people who are coming from totally different positions in terms of background knowledge and experience with EA. My advice is primarily aimed at people who are already committed EAs, and who are subject to the social dynamics I discuss above - which is why this is a "community" post. I think you do amazing work in introducing a wider audience to EA ideas, especially with nuance via the podcast as you mentioned.

Comment by richard_ngo on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-24T12:33:48.568Z · score: 2 (2 votes) · EA · GW

I quite like this idea, and think that the unilateralist's curse is less important than others make it out to be (I'll elaborate on this in a forum post soon).

Just wanted to quickly mention https://lets-fund.org/ as a related project, in case you hadn't already heard of it.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-23T12:17:54.066Z · score: 8 (4 votes) · EA · GW
I also think there's a lot of value to publishing a really good collection the first time around

The EA handbook already exists, so this could be the basis for the first sequence basically immediately. Also EA concepts.

More generally, I think I disagree with the broad framing you're using, which feels like "we're going to get the definitive collection of essays on each topic, which we endorse". But even if CEA manages to put together a few such sequences, I predict that this will stagnate once people aren't working on it as hard. By contrast, a more scalable type of sequence could be something like: ask Brian Tomasik, Paul Christiano, Scott Alexander, and other prolific writers to assemble a reading list of the top 5-10 essays they've written relating to EA (as well as allowing community members to propose lists of essays related to a given theme). It seems quite likely that at least some of the points in those essays have been made better elsewhere, and also that many of them cover controversial topics within EA, but people should be aware of this sort of thing, and right now there's no good mechanism for that happening except vague word of mouth or spending lots of time scrolling through blogs.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-23T11:57:52.359Z · score: 9 (5 votes) · EA · GW
It crucially doesn't ensure that the rewarded content will continue to be read by newcomers 5 years after it was written... New EAs on the Forum are not reading the best EA content of the past 10 years, just the most recent content.

This sentence deserves a strong upvote all by itself; it is exactly the key issue. There is so much good stuff out there: I've read pretty widely on EA topics, but I continue to find excellent material that I've never seen before, scattered across a range of blogs. Gathering that together seems vital as the movement gets older and it gets harder and harder to actually find and read everything.

I can imagine this being an automatic process based on voting, but I have an intuition that it's good for humans to be in the loop. One reason is that when humans make decisions, you can ask why, but when 50 people vote, it's hard to interrogate that system as to the reason behind its decision, and improve its reasoning the next time.

I think that's true when there are moderators who are able to spend a lot of time and effort thinking about what to curate, like you do for Less Wrong. But right now it seems like the EA forum staff are very time-constrained, and in addition are worried about endorsing things. So beyond the value of decentralising the work involved, voting has the additional benefit that it's easier for CEA to disclaim endorsement.

Given that, I don't have a strong opinion about whether it's better for community members to be able to propose and vote on sequences, or whether it's better for CEA to take a strong stance that they're going to curate sequences with interesting content without necessarily endorsing it, and ensure that there's enough staff time available to do that. The former currently seems more plausible (although I have no inside knowledge about what CEA are planning).

The thing I would like not to happen is for the EA forum to remain a news site because CEA is too worried about endorsing the wrong things to put up the really good content that already exists, or sets such a high bar for doing so that in practice you get only a couple of sequences. EA is a question, not a set of fixed endorsed beliefs, and I think the ability to move fast and engage with a variety of material is the lifeblood of an intellectual community.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T00:46:36.910Z · score: 4 (3 votes) · EA · GW

It's very cool that you took the time to do so. I agree that preserving and showcasing great content is important in the long term, and am sad that this hasn't come to anything yet. Of course the EA forum is still quite new, but my intuition is that collating a broadly acceptable set of sequences (which can always be revised later) is the sort of thing that would take only one or two intern-weeks.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T00:41:52.770Z · score: 1 (1 votes) · EA · GW

Isn't all the code required for curation already implemented for Less Wrong? I guess adding functionality is rarely easy, but in this case I would have assumed that it was more work to remove it than to keep it.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T17:45:27.071Z · score: 9 (5 votes) · EA · GW

Agreed that subforums are a good idea, but the way they're done on Facebook seems particularly bad for creating common knowledge, because (as you point out) they're so scattered. Also, the advantage of people checking Facebook more is countered, for me, by the disadvantage of Facebook being a massive time sink, so that I don't want to encourage myself or others to go on it when I don't have to. So it would be ideal if the solution could be a modification or improvement to the EA forum - especially given that the code for curation already exists!

Comment by richard_ngo on The career and the community · 2019-03-21T17:36:23.895Z · score: 6 (2 votes) · EA · GW

Thanks for the comment! I find your last point particularly interesting, because while I and many of my friends assume that the community part is very important, there's an obvious selection effect which makes that assumption quite biased. I'll need to think about that more.

I think I disagree slightly that there needs to be a "task Y", it may be the case that some people will have an interest in EA but wont be able to contribute

Two problems with this. The first is that when people first encounter EA, they're usually not willing to totally change careers, and so if they get the impression that either they make a big shift or there's no space for them in EA, they may well never start engaging. The second is that we want to encourage people to feel able to take risky (but high expected value) decisions, or commit to EA careers. But if failure at those things means that their career is in a worse place AND there's no clear place for them in the EA community (because they're now unable to contribute in ways that other EAs care about), they will (understandably) be more risk-averse.

Comment by richard_ngo on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T14:48:03.317Z · score: 2 (2 votes) · EA · GW

Ironically enough, I can't find the launch announcement to verify this.

Why doesn't the EA forum have curated posts or sequences?

2019-03-21T13:52:58.807Z · score: 34 (16 votes)

The career and the community

2019-03-21T12:35:23.073Z · score: 76 (39 votes)
Comment by richard_ngo on Why do you reject negative utilitarianism? · 2019-02-17T22:48:17.451Z · score: 4 (3 votes) · EA · GW

Toby Ord gives a good summary of a range of arguments against negative utilitarianism here.

Personally, I think that valuing positive experiences instrumentally is insufficient, given that the future has the potential to be fantastic.

Comment by richard_ngo on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-13T00:13:55.588Z · score: 4 (3 votes) · EA · GW
The argument for doom by default seems to rest on a default misunderstanding of human values as the programmer attempts to communicate them to the AI.

I don't think this is correct. The argument rests on AIs having any values which aren't human values (e.g. maximising paperclips), not just misunderstood human values.

Comment by richard_ngo on Arguments for moral indefinability · 2019-02-10T10:33:34.511Z · score: 2 (2 votes) · EA · GW
Multiple terminal values will always lead to irreconcilable conflicts.

This is not the case when there's a well-defined procedure for resolving such conflicts. For example, you can map several terminal values onto a numerical "utility" scale.
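
As a toy illustration (my own construction, with made-up value names and weights, not anything from the original post): one such procedure is a weighted sum that maps every terminal value onto the same numerical scale, so any conflict has a determinate resolution.

```python
# Minimal sketch of one conflict-resolution procedure: map several terminal
# values onto a single numerical "utility" scale via (hypothetical) weights.
WEIGHTS = {"welfare": 1.0, "fairness": 0.5}  # illustrative weights only

def total_utility(outcome):
    """Score an outcome by combining all terminal values on one scale."""
    return sum(WEIGHTS[value] * score for value, score in outcome.items())

option_a = {"welfare": 10.0, "fairness": 2.0}  # better on welfare
option_b = {"welfare": 7.0, "fairness": 9.0}   # better on fairness

# The shared scale makes the apparent conflict resolvable:
best = max([option_a, option_b], key=total_utility)
print(best)  # option_b (7.0 + 0.5 * 9.0 = 11.5 > 11.0)
```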

Comment by richard_ngo on Arguments for moral indefinability · 2019-02-09T14:01:54.518Z · score: 4 (3 votes) · EA · GW

Having skimmed the SEP article on pluralism, I don't think it's quite what I'm talking about. Pluralism + incomparability comes closer, but still seems like a subset of my position, since there are other ways that indefinability could be true (e.g. there's only one type of value, but it's intrinsically vague).

Arguments for moral indefinability

2019-02-08T11:09:25.547Z · score: 24 (10 votes)
Comment by richard_ngo on Simultaneous Shortage and Oversupply · 2019-02-01T13:20:28.282Z · score: 2 (2 votes) · EA · GW

This seems plausible, but also quite distinct from the claim that "roles for programmers in direct work tend to sit open for a long time", which I took the list of openings to be supporting evidence for.

Comment by richard_ngo on Simultaneous Shortage and Oversupply · 2019-01-27T15:37:29.219Z · score: 9 (4 votes) · EA · GW

The OpenAI and DeepMind posts you linked aren't necessarily relevant, e.g. the Software Engineer, Science role is not for DeepMind's safety team, and it's pretty unclear to me whether the OpenAI ML engineer role is safety-relevant.

Comment by richard_ngo on Request for input on multiverse-wide superrationality (MSR) · 2019-01-27T02:02:39.076Z · score: 1 (1 votes) · EA · GW

The example you've given me shows that agents which implement exactly the same (high-level) algorithm can cooperate with each other. What I'm looking for is a metric for deciding how similar two agents are when their algorithms are non-identical. Presumably we want a smoothness property for that metric, such that if our algorithms are very similar (e.g. they only differ with respect to some radically unlikely edge case) the reduction in cooperation is negligible. But it doesn't seem like anyone knows how to do this.
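
To gesture at the kind of metric and smoothness property I mean, here's a toy sketch (entirely my own construction, not an existing proposal): measure similarity as agreement over a distribution of situations, so two algorithms that differ only on a radically unlikely edge case come out almost identical, and cooperation would barely drop.

```python
# Toy sketch of a "smooth" similarity metric between decision algorithms:
# similarity = how often they choose the same action over sampled situations.
import random

def my_policy(situation: int) -> str:
    return "C" if situation % 2 == 0 else "D"

def their_policy(situation: int) -> str:
    if situation == 123_456_789:  # differs only on one radically unlikely edge case
        return "D"
    return "C" if situation % 2 == 0 else "D"

def similarity(policy_a, policy_b, samples: int = 10_000) -> float:
    situations = [random.randrange(10**6) for _ in range(samples)]
    return sum(policy_a(s) == policy_b(s) for s in situations) / samples

# Smoothness: a negligible algorithmic difference should give a negligible
# reduction in how much we cooperate with the other agent.
print(similarity(my_policy, their_policy))  # ~1.0
```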

Comment by richard_ngo on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T11:10:37.974Z · score: 2 (2 votes) · EA · GW

Can you give some examples of "more responsible" ways?

I agree that in general calculating your own random digits feels a lot like rolling your own crypto. (Edit: I misunderstood the method and thought there was an easy exploit, which I was wrong about. Nevertheless, at least 1/3 of the digits in the API response are predictable, maybe more, and the whole thing is quite small, so it might be possible to increase your probability of winning slightly by brute-force enumeration of the possibilities, assuming you get to pick your own contiguous ticket number range. My preliminary calculations suggest that this method would be too difficult, but I'm not an expert; there may be more sophisticated hacks.)
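
For concreteness, here's the kind of brute-force estimate I was gesturing at, using a toy model of the draw rather than the actual protocol (the ticket counts, hash rule, and split into predictable/unpredictable parts are all made up): enumerate the unpredictable part of the input, then check whether any contiguous ticket range covers noticeably more of the possible winners than the uniform baseline.

```python
# Toy model only -- NOT the real drawing protocol. Illustrates the brute-force idea:
# if most of the input to the draw is predictable, enumerate the unpredictable part
# and pick the contiguous ticket range that covers the most possible winners.
import hashlib
from collections import Counter

TOTAL_TICKETS = 10_000            # hypothetical lottery size
RANGE_SIZE = 100                  # hypothetical contiguous block you get to choose
PREDICTABLE_PART = "2019-01-25:"  # stand-in for the digits you can guess in advance

def winning_ticket(api_response: str) -> int:
    digest = hashlib.sha256(api_response.encode()).hexdigest()
    return int(digest, 16) % TOTAL_TICKETS

# Count where the winner lands for each possible value of the unpredictable part.
counts = Counter(
    winning_ticket(PREDICTABLE_PART + str(unknown)) for unknown in range(100_000)
)

# Compare the best contiguous range against the uniform baseline.
prefix = [0]
for ticket in range(TOTAL_TICKETS):
    prefix.append(prefix[-1] + counts[ticket])
best_hits = max(
    prefix[i + RANGE_SIZE] - prefix[i] for i in range(TOTAL_TICKETS - RANGE_SIZE + 1)
)
print("best range win prob:", best_hits / 100_000)
print("uniform baseline:   ", RANGE_SIZE / TOTAL_TICKETS)
```

With a well-mixed hash and a large space of unpredictable inputs the advantage comes out tiny, which is consistent with my conclusion above that the attack is probably too difficult to be worthwhile.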

Comment by richard_ngo on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T12:18:34.560Z · score: 2 (2 votes) · EA · GW

(edited) I just saw your link above about growth vs value investing. I don't think that's a helpful distinction in this case, and when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value. (Maybe that's less true for startups, but we're talking about already-listed companies here).

I do think the core claim of "if AGI will be as big a deal as we think it'll be, then the markets are systematically undervaluing AI companies" is a reasonable one, but the arguments you've given here aren't precise enough to justify confidence, especially given the aforementioned need for caution. For example, premise 4 doesn't actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments. I think you can shore that link up, but not without contradicting your other point:

I'm not claiming that investing in AI companies will generate higher-than-average returns in the long run.

Which means (under the definition I've been using) that you're not claiming that they're undervalued.

Comment by richard_ngo on Disentangling arguments for the importance of AI safety · 2019-01-24T11:24:12.483Z · score: 2 (2 votes) · EA · GW

I agree that the extent to which individual humans are rational agents is often overstated. Nevertheless, there are many examples of humans who spend decades striving towards distant and abstract goals, who learn whatever skills and perform whatever tasks are required to reach them, and who strategically plan around or manipulate the actions of other people. If AGI is anywhere near as agentlike as humans in the sense of possessing the long-term goal-directedness I just described, that's cause for significant concern.

Comment by richard_ngo on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T01:52:57.839Z · score: 1 (1 votes) · EA · GW

If AI research companies aren't currently undervalued, then your Premise 4 (being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI) is incorrect, because the market will have anticipated those outsized returns and priced them in to the current share price.

Comment by richard_ngo on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T01:50:03.363Z · score: 22 (12 votes) · EA · GW

"returns that can later be deployed to greater altruistic effect as AI research progresses"

This is hiding an important premise, which is that you'll actually be able to deploy those increased resources well enough to make up for the opportunities you forego now. E.g. Paul thinks that (as an operationalisation of slow takeoff) the economy will double in 4 years before the first 1-year doubling period starts. So after that 4-year period you might end up with twice as much money but only 1 or 2 years to spend it on AI safety.
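
A toy version of that tradeoff (all numbers are hypothetical, just to make the hidden premise explicit): doubling your money doesn't help if the remaining window is too short to usefully absorb it.

```python
# Toy comparison (hypothetical numbers): give now with a long deployment window,
# or give after the 4-year doubling with twice the money but only ~1-2 years left.
ABSORPTIVE_CAPACITY = 0.5  # hypothetical: money usefully spent per year on AI safety

def deployable_value(money: float, years_left: float) -> float:
    """Value is capped by how much the field can usefully absorb before it's too late."""
    return min(money, ABSORPTIVE_CAPACITY * years_left)

give_now = deployable_value(money=1.0, years_left=5.0)    # -> 1.0
give_later = deployable_value(money=2.0, years_left=1.5)  # -> 0.75: more money, less impact
print(give_now, give_later)
```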

Comment by richard_ngo on Disentangling arguments for the importance of AI safety · 2019-01-24T01:19:46.429Z · score: 1 (1 votes) · EA · GW

I've actually spent a fair while thinking about CAIS, and written up my thoughts here. Overall I'm skeptical about the framework, but if it turns out to be accurate I think that would heavily mitigate arguments 1 and 2, somewhat mitigate 3, and not affect the others very much. Insofar as 4 and 5 describe AGI as an agent, that's mostly because it's linguistically natural to do so - I've now edited some of those phrases. 6b does describe AI as a species, but it's unclear whether that conflicts with CAIS, insofar as the claim that AI will never be agentlike is a very strong one, and I'm not sure whether Drexler makes it explicitly (I discuss this point in the blog post I linked above).

Comment by richard_ngo on Disentangling arguments for the importance of AI safety · 2019-01-24T01:09:33.910Z · score: 2 (2 votes) · EA · GW

I agree that it's not too concerning, which is why I consider it weak evidence. Nevertheless, there are some changes which don't fit the patterns you described. For example, it seems to me that newer AI safety researchers tend to consider intelligence explosions less likely, despite them being a key component of argument 1. For more details along these lines, check out the exchange between me and Wei Dai in the comments on the version of this post on the alignment forum.

Disentangling arguments for the importance of AI safety

2019-01-23T14:58:27.881Z · score: 48 (28 votes)
Comment by richard_ngo on What Is Effective Altruism? · 2019-01-10T10:45:45.770Z · score: 8 (6 votes) · EA · GW

I like "science-aligned" better than "secular", since the former implies the latter as well as a bunch of other important concepts.

Also, it's worth noting that "everyone's welfare is to count equally" in Will's account is approximately equivalent to "effective altruism values all people equally" in Ozymandias' account, but neither of them implies the following paraphrase: "from the effective altruism perspective, saving the life of a baby in Africa is exactly as good as saving the life of a baby in America, which is exactly as good as saving the life of Ozy’s baby specifically." I understand the intention of that phrase, but actually I'd save whichever baby would grow up to have the best life. Is there any better concrete description of what impartiality actually implies?

How democracy ends: a review and reevaluation

2018-11-24T17:41:53.594Z · score: 23 (11 votes)
Comment by richard_ngo on Some cruxes on impactful alternatives to AI policy work · 2018-11-24T02:40:38.860Z · score: 3 (3 votes) · EA · GW

Your points seem plausible to me. While I don't remember exactly what I intended by the claim above, I think that one influence was some material I'd read referencing the original "productivity paradox" of the 70s and 80s. I wasn't aware that there was a significant uptick in the 90s, so I'll retract my claim (which, in any case, wasn't a great way to make the overall point I was trying to convey).

Some cruxes on impactful alternatives to AI policy work

2018-11-22T13:43:40.684Z · score: 21 (12 votes)
Comment by richard_ngo on Insomnia: a promising cure · 2018-11-20T15:34:14.648Z · score: 6 (2 votes) · EA · GW

CBT-I is also recommended in Why We Sleep (see my summary of the book).

Nitpick: "The former two have diminishing returns, but the latter does not." It definitely does - I think getting 12 or 13 hours sleep is actively worse for you than getting 9 hours.

Comment by richard_ngo on What's Changing With the New Forum? · 2018-11-08T10:45:51.092Z · score: 13 (6 votes) · EA · GW
Posts on the new Forum are split into two categories:
Frontpage posts are timeless content covering the ideas of effective altruism. They should be useful or interesting even to readers who only know the basic concepts of EA and aren’t very active within the community.

I'm a little confused about this description. I feel like intellectual progress often requires presupposition of fairly advanced ideas which build on each other, and which are therefore inaccessible to "readers who only know the basic concepts". Suppose that I wrote a post outlining views on AI safety aimed at people who already know the basics of machine learning, or a post discussing a particular counter-argument to an unusual philosophical position. Would those not qualify as frontpage posts? If not, where would they go? And where do personal blogs fit into this taxonomy?

Comment by richard_ngo on Why Do Small Donors Give Now, But Large Donors Give Later? · 2018-10-30T14:55:32.539Z · score: 2 (2 votes) · EA · GW

It's a clever explanation, but I'm not sure how much to believe it without analysing other hypotheses. E.g. maybe tax-deductibility is a major factor, or maybe it's just much harder to give away large amounts of money quickly.

Comment by richard_ngo on What is the Most Helpful Categorical Breakdown of Normative Ethics? · 2018-08-15T21:23:16.519Z · score: 4 (4 votes) · EA · GW

I think it's a mischaracterisation to think of virtue ethics in terms of choosing the most virtuous actions (in fact, one common objection to virtue ethics is that it doesn't help very much in choosing actions). I think virtue ethics is probably more about being the most virtuous, and making decisions for virtuous reasons. There's a difference: e.g. you're probably not virtuous if you choose normally-virtuous actions for the wrong reasons.

For similar reasons, I disagree with cole_haus that virtue ethicists choose actions to produce the most virtuous outcomes (although there is at least one school of virtue ethics which seems vaguely consequentialist, the eudaimonists; see https://plato.stanford.edu/entries/ethics-virtue). Note however that I haven't actually looked into virtue ethics in much detail.

Edit: contractarianism is a fourth approach which doesn't fit neatly into either division

Comment by richard_ngo on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-14T13:30:35.937Z · score: 1 (3 votes) · EA · GW

My default position would be that IKEA have an equal obligation, but that it's much more difficult and less efficient to try and make IKEA fulfill that obligation.

Comment by richard_ngo on Request for input on multiverse-wide superrationality (MSR) · 2018-08-14T13:15:21.842Z · score: 9 (9 votes) · EA · GW

A few doubts:

  1. It seems like MSR requires a multiverse large enough to have many well-correlated agents, but not large enough to run into the problems involved with infinite ethics. Most of my credence is on no multiverse or infinite multiverse, although I'm not particularly well-read on this issue.

  2. My broad intuition is something like "Insofar as we can know about the values of other civilisations, they're probably similar to our own. Insofar as we can't, MSR isn't relevant." There are probably exceptions, though (e.g. we could guess the direction in which an r-selected civilisation's values would vary from our own).

  3. I worry that MSR is susceptible to self-mugging of some sort. I don't have a particular example, but the general idea is that you're correlated with other agents even if you're being very irrational. And so you might end up doing things which seem arbitrarily irrational. But this is just a half-fledged thought, not a proper objection.

  4. And lastly, I would have much more confidence in FDT and superrationality in general if there were a sensible metric of similarity between agents, apart from correlation (because if you always cooperate in prisoner's dilemmas, then your choices are perfectly correlated with CooperateBot, but intuitively it'd still be more rational to defect against CooperateBot, because your decision algorithm isn't similar to CooperateBot in the same way that it's similar to your psychological twin). I guess this requires a solution to logical uncertainty, though.
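
A concrete version of the worry in point 4, using standard prisoner's dilemma payoffs (the framing here is mine, just to illustrate why raw correlation seems like the wrong similarity measure):

```python
# Standard prisoner's dilemma payoffs, illustrating point 4: perfect correlation
# with CooperateBot doesn't make defecting against it irrational.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cooperate_bot(_opponent) -> str:
    return "C"  # cooperates unconditionally, regardless of who it faces

# Against CooperateBot, my choice doesn't affect its move, so defection wins: 5 > 3.
print(PAYOFF[("D", cooperate_bot(None))], ">", PAYOFF[("C", cooperate_bot(None))])

# Against my psychological twin, my choice fixes its move too, so the live
# comparison is (C, C) vs (D, D): 3 > 1, and cooperation wins.
print(PAYOFF[("C", "C")], ">", PAYOFF[("D", "D")])
```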

Happy to discuss this more with you in person. Also, I suggest you cross-post to Less Wrong.

Comment by richard_ngo on Want to be more productive? · 2018-06-12T00:28:58.430Z · score: 2 (2 votes) · EA · GW

As a followup to byanyothername's questions: Could you say a little about what distinguishes your coaching from something like a CFAR workshop?

Comment by richard_ngo on EA Hotel with free accommodation and board for two years · 2018-06-05T13:13:07.219Z · score: 5 (5 votes) · EA · GW

Kudos for doing this. The main piece of advice which comes to mind is to make sure to push this via university EA groups. I don't think you explicitly identified students as a target demographic in your post, but current students and new grads have the three traits which make the hotel such an attractive proposition: they're unusually time-rich, cash-poor, and willing to relocate.