Posts

Some (Rough) Thoughts on the Value of Campaign Contributions 2020-02-10T04:53:21.350Z · score: 19 (8 votes)
[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel 2020-01-02T23:25:00.101Z · score: 7 (5 votes)
How Fungible Are Interests? 2019-12-16T08:54:33.904Z · score: 26 (17 votes)

Comments

Comment by rook on How to estimate the EV of general intellectual progress · 2020-01-28T05:23:34.215Z · score: 7 (2 votes) · EA · GW

Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):

  • My first inclination is something like "find the average output of the field per unit time, then find the average growth rate of the field, and then calculate the 'extra' output you'd get with a higher growth rate." In other words: (1) what is the field currently doing of value? (2) how much more value would the field produce if it did whatever it's currently doing, but faster? (A rough sketch of this model follows the list below.)
    • It would be interesting to see someone do a quantitative analysis of the history of progress in some particular field. However, because so much intellectual progress has happened in the last ~300 years, produced by relatively few people, my guess is that in many cases we might not have enough data.
  • The more something like the "great man theory" applies to a field (i.e. the more stochastic progress is), the more of a problem you have with this model. The first thing I thought of was philosophy: the median philosopher probably has an output close to 0, but the top 0.01% of philosophers contribute extraordinary value. You probably couldn't build a very helpful systematic model of philosophical discoveries. Maybe you could instead ask a question like "what's the output we'd get from solving (or making significant headway on) philosophical problem X, and how do we increase the chance that someone solves X?"
  • With regard to that latter question (also your second set-up), I wonder how reliably we could apply heuristics for determining the EV of particular contributions (i.e. how much value do we usually get from papers in field Y with ~X citations?).
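
To make the first bullet concrete, here's a minimal back-of-the-envelope sketch in Python. Every number below is a made-up placeholder, and the assumption that a field's output compounds at its growth rate is a simplification I'm adding for illustration, not something established above:

```python
# Sketch of the "extra output from a higher growth rate" model above.
# All values are hypothetical placeholders, not estimates for any real field.

annual_output = 100.0    # value the field currently produces per year (arbitrary units)
baseline_growth = 0.03   # the field's historical annual growth rate
boosted_growth = 0.04    # growth rate after some hypothetical speed-up
horizon = 30             # years over which we compare the two trajectories

def cumulative_output(initial: float, growth: float, years: int) -> float:
    """Total output over `years` if annual output compounds at rate `growth`."""
    return sum(initial * (1 + growth) ** t for t in range(years))

extra = (cumulative_output(annual_output, boosted_growth, horizon)
         - cumulative_output(annual_output, baseline_growth, horizon))
print(f"'Extra' output from the faster-growing field: {extra:.1f} units")
```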
Comment by rook on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T18:07:09.909Z · score: 2 (2 votes) · EA · GW

I dug up a few other places 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:

  • In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."
  • In this article for people with existing experience in a particular field, they write: “If you have experience as a lawyer in the U.S. that’s great because it’s among the best ways to get positions in government & policy, which is one of our top priority areas.”
  • It's also mentioned in this article that Congress has a lot of HLS graduates.

Comment by rook on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T15:59:28.556Z · score: 2 (2 votes) · EA · GW

You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)

Comment by rook on In praise of unhistoric heroism · 2020-01-08T02:47:48.570Z · score: 5 (4 votes) · EA · GW

I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while – so take the criticism below with that in mind! I apologize in advance if I’m totally missing the point.

I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I’m including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can’t quite make that feel emotionally true. We can’t help invidiously comparing ourselves with successful people like “Carl” (using the name as a label here, not saying we really do this when we look at Carl Shulman), even though we would consciously admit that the feeling doesn’t make much sense.

I’ve read a number of relevant discussions, and I still don’t think anyone has dealt with this problem satisfactorily. But I’ll say that, for now, I think we should separate questions about the moral integrity of our actions (how we should define the goodness/badness of our actions) from questions about how we should think of ourselves as people (whether we’re good/bad people). They’re related, but there might not be an easy mapping from one to the other. For instance, I think it’s very conceivable that a “Dorothea” may be a better person than a “Carl”, but a “Carl” does more good than a “Dorothea.” And, perhaps, while we should strive to do as much good as possible, our self-worth should track the kind of people we are much more closely than how much good we do.

Comment by rook on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-04T05:05:06.066Z · score: 1 (1 votes) · EA · GW

This is fair. I was trying to salvage his argument without running into the problems mentioned in the above comment. But if he means "aim" objectively, then it's tautologically true that people aim to be morally average; and if he means "aim" subjectively, then it contradicts the claim that most people subjectively aim to be slightly above average (which is what he seems to say in the B+ section).

The options are: (1) his central claim is uninteresting, (2) his central claim is wrong, or (3) I'm misunderstanding his central claim. I normally would feel like I should play it safe and default to (3), but it's probably (2).

Comment by rook on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-04T03:22:49.662Z · score: 3 (3 votes) · EA · GW

This was a good comment and very clarifying. I agree with most of what you say about the evidence – Schwitzgebel seems to be misinterpreting it (and I think I initially did too).

Just to be extra charitable to Schwitzgebel, however, I think we can assume his central claim is basically intelligible (even if it’s not supported by the evidence), and he’s just using some words in an inconsistent way. Some of the confusion in your comment may be caused by this inconsistency.

In most of his piece, by “aiming to be mediocre”, Schwitzgebel means that people’s behavior regresses to the actual moral middle of a reference class, even though they believe the moral middle is even lower. Imagine there’s a target where the bullseye is 5 feet above the ground, but some archer’s eyesight is off so they think it’s 3 feet above the ground. You could say that subjectively they’re aiming for the target, but objectively they're aiming below the target. When you write:

If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.

You’re understanding “aim” in the subjective sense, whereas Schwitzgebel usually understands it in the objective sense. Someone might believe themselves to be better than average (they believe they're aiming at the target), but be objectively aiming for mediocrity (they’re actually aiming below the target).

The problem is that he starts using “aim” in the subjective sense in the “aiming for a B+” section. It is literally not possible for a person to be aiming for a B+ and aiming for a C+ in the same sense of “aim.” It is, however, possible that they are subjectively aiming for a B+ but objectively aiming for a C+.

Comment by rook on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-03T15:04:34.544Z · score: 2 (4 votes) · EA · GW

Not to be pedantic, but

  • "People behave morally mediocre" and "People regard themselves as morally mediocre" are two different types of claims. I take Schwitzgebel as claiming the former, and I think he agrees with you that people regard themselves as slightly above average (e.g. section 6 titled "Aiming for a B+").
  • He also agrees with you that the evidence is unsatisfactory in many ways (see section 4, titled "The Gap Between the Evidence Above and the Thesis That Most People Aim for Moral Mediocrity"). Granted, he doesn't make the specific point that you do, but I think it's pretty safe to assume what he's assuming: people adjust towards the behavior of their peers (i.e. they regress towards the mean). It could be true that people are influenced in other ways (if they see others behaving poorly, they want to behave better), but I don't think the evidence points towards that.
Comment by rook on How Fungible Are Interests? · 2019-12-23T04:54:50.112Z · score: 1 (1 votes) · EA · GW

This is not to say that she couldn't and that she might use this as an excuse to avoid doing what she thinks is necessary to excuse doing what is convenient, but to say that we should have compassion for those who may find they agree with EA but find they cannot immediately make the changes they would like to due to life conditions, and we should not judge them as less good EAs even if they are less able to contribute to EA missions than if they were a different person in a different world that doesn't exist.

This is great, and I'd like to add some follow-up comments in light of it.

My main point was really that passion is a contingent, rather than an intrinsic, thing. If you’re into X instead of Y, that could be because you invested more time in X, not because you “fundamentally” don’t find Y interesting. This may seem uplifting to some EAs: it means that many people have vastly more potential to do good than they might have originally thought!

But I agree that there’s something about the “human experience” that my explanation is missing. This is because “contingent” doesn’t directly imply “fungible” or “interchangeable” – people (usually) can’t fluidly change what they’re interested in or passionate about, even if those interests or passions stem from “contingent” factors. I think, as a result, I described Sue’s case in a slightly unfair and judgmental way (in a way that’s probably not totally healthy, individually or as a community). Real people are subject to all sorts of cognitive and emotional constraints that the original post does not properly recognize.

On a personal note – this post was (on some level) an attempt to rationalize a decision I’m currently going through in my own life. I’m a recent college graduate trying to decide whether to apply to graduate programs in philosophy, or to do something else. I kind of feel like Sue – maybe I could do something in philosophy, but maybe I could do something even more significant elsewhere, if only I invested as much time elsewhere as I have in philosophy. I know my interest in philosophy is contingent, in a sense, but I wonder how fungible it is.

I add this personal note in part to say that I can empathize with the kind of EAs you describe.