Posts

Two Types of Average Utilitarianism 2022-09-04T17:15:14.890Z
The standard person-affecting view doesn't solve the Repugnant Conclusion. 2022-08-23T19:58:07.450Z
How much do EA grant organizations rely on experts? 2022-07-14T22:40:14.208Z
Why should we care about existential risk? 2022-04-08T23:43:37.308Z

Comments

Comment by RedStateBlueState on Two Types of Average Utilitarianism · 2022-09-06T03:44:33.191Z · EA · GW

In other domains, when we combine different metrics to yield one Frankenstein metric, it is because these different metrics are all partial indicators of some underlying measure we cannot directly observe. The whole point of ethics is that we are trying to directly describe this underlying measure of "good", and so it doesn't make sense to me to create some Frankenstein view.

The only instance where I could see this being OK is in the context of moral uncertainty, where we're saying "I believe there is some underlying view but I don't know what it is, so I will give some weight to a bunch of these plausible theories". Which maybe is what you're getting at? But in that case, I think it's necessary to believe that each of the views you are averaging over could be approximately true on its own, which IMO really isn't the case with a complicated utilitarianism formula, especially since we know there is no formula out there that will give us everything we desire. Though this is another long philosophical rabbit hole, I'm sure.
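
For what it's worth, one standard way to formalize that kind of weighting is maximizing expected choiceworthiness across theories (this is my gloss, not anything from the thread):

$$V(A) = \sum_i p_i \, V_i(A),$$

where $p_i$ is your credence in moral theory $i$ and $V_i(A)$ is how good action $A$ is by that theory's lights. My objection above is that each $V_i$ in the sum still has to be a candidate for the true view on its own.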

Comment by RedStateBlueState on Two Types of Average Utilitarianism · 2022-09-05T22:07:14.631Z · EA · GW

There's a proof showing that any utilitarian axiology runs into either the Repugnant Conclusion or the Sadistic Conclusion (or else anti-egalitarianism, preferring an unequal society), so you can't cleverly avoid these two conclusions with some fancy math. To add, any fancy view you create will be in some sense unmotivated: you just came up with a formula that you like, but why would such a formula be true? Totalism and averagism seem to be the two most interpretable utilitarian views, with totalism caring only about total pain/pleasure (and not about who experiences it) and averagism being the same except population-neutral, not favoring a larger population unless it has higher average net pleasure. Anything else is kind of an arbitrary view invented by someone who is too into math.
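
To spell out the two views I mean (standard formulations, nothing novel): for a population of $n$ people with welfare levels $u_1, \dots, u_n$,

$$W_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad W_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} u_i.$$

Totalism always rewards adding a person with positive welfare; averagism rewards adding a person only if their welfare is above the existing average.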

Comment by RedStateBlueState on Open EA Global · 2022-09-03T21:50:17.251Z · EA · GW

Somewhat of a tangential question, but what is the point of making EAGx region-specific? If these are the only events with a relatively low bar to entry, why do we make people wait until one happens to come along near where they live instead of letting them attend any of them? Without this restriction I could easily see EAGx solving most of the problems Scott is bringing up with EAG.

Comment by RedStateBlueState on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-09T02:16:28.242Z · EA · GW

To be clear, the reason I unendorsed this comment is that I think it was covered in the comment I was replying to, and it was very bad on my part not to read that comment fully and carefully before replying. Checking the EA Forum when I'm dead tired isn't the best idea.

Comment by RedStateBlueState on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-08T19:04:36.395Z · EA · GW

The smartest people will always gravitate toward the EA group, you don’t actually have to sift through all the students to find them.

Comment by RedStateBlueState on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T03:54:38.824Z · EA · GW

Right, this is always a spectrum, but I don't think "Olympian" is the best cutoff here; I think "been doing this thing since I was five and am now nationally ranked" or "started some local volunteer (not school) organization" is a better description. Honestly I would expect your rock climbing thing to reflect pretty well on you, and I'm somewhat surprised that it didn't get you anywhere, though I think getting into Georgetown is not exactly a rejection of your qualifications :)

Comment by RedStateBlueState on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T03:44:16.757Z · EA · GW

I mean, I'm happy to see data that proves me wrong, but I wouldn't extrapolate from your individual case. Obviously extracurriculars + a 1560 SAT aren't a free ticket to Harvard, but I still think the primary differentiator between smart kids who get into Ivies and those who don't (beyond affirmative action) is how involved they are in extracurriculars.

Comment by RedStateBlueState on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T03:12:28.835Z · EA · GW

One other thing I think is somewhat relevant here is that among really smart students, Ivies primarily select for well-roundedness, i.e. doing a lot of volunteer work, being really committed to a hobby, etc., rather than simply being good at and focused on schoolwork. There is some argument that these things yield better EAs (such students may be less likely to feel like their path points directly toward academia), but I think well-roundedness is in general a disadvantage in EA. Dedicating your life to an intellectual community requires a high degree of commitment to intellectualism, rather than simply seeing intellectual work as a part of your life but not where you derive most of your purpose or joy.

Comment by RedStateBlueState on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-22T23:36:08.867Z · EA · GW

If you donate 10% of your income to an EA organization, you are an EA. No matter how much you make. No exceptions.

This should be (and I think is?) our current message.

Comment by RedStateBlueState on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-22T02:02:33.067Z · EA · GW

I'm repeating myself, which I guess is a sign I'm not writing clearly. I think the way you're looking at it is this:

  • Interest group lobbies -> politician pushes for the pork -> pork gets included.

My claim is that the true mechanism is this:

  • Politician wants some funding to show constituents -> may turn to interest groups in the district to see what projects need funding (or may just know about projects in their home district) -> politician pushes for pork -> pork gets included

Comment by RedStateBlueState on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-21T20:12:47.140Z · EA · GW

I don't think this is an accurate view of pork. Pork is pushed for by legislators, not by interest groups. These projects have some support from within the district, sure, but it's really the legislators who want them to happen so they can advertise to their constituents. Similarly, EA would be much more likely to make its way into legislation if it were pushed for by a devoted legislator than by an outside interest group.

As for the articles in the press, I think Yglesias makes a pretty convincing case that these can do quite a bit of good as well; in my mind they're probably net good, but I understand the concern.

Comment by RedStateBlueState on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-21T19:38:54.260Z · EA · GW

EDIT: after reflecting on this comment I think I was too dismissive of the risk of association between EA and Democrats, particularly because I think we're headed toward a period of Republican domination of US politics, and the risk of being associated with Democrats may plausibly outweigh the reward of potential policy. Anyway, below are my original thoughts.

Interesting post. I lean toward disagreeing, for a couple of reasons.

I think you would agree that Congress can, if it adopts EA legislation, be greatly helpful to the EA cause. It just has way more money and influence than EA can dream of at the moment. The questions are then:

  1. Is having a self-proclaimed EA in Congress helpful to getting legislation passed?
  2. Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?

On (1), I think the answer is a resounding YES, and you have to try really hard to overthink your way to a different conclusion. Congresspeople deal all the time with a ton of different interest groups trying to get their preferred policies into legislation. We are competing with all of them, which makes our odds of success quite low, especially since politicians generally consult the interest groups they already agree with rather than interest groups actually persuading politicians. Having a congressman tirelessly devoted to the singular cause of getting EA legislation enacted, on the other hand - that can be powerful. I think the Tea Party/Squad comparison is quite a bad one, given that those groups focus on hyper-partisan legislation, which as you say EA is not. A more apt comparison in my eyes is pork. Politicians get funding all the time for projects in their districts so they can report back to their constituents; hundreds of these projects get included in omnibus bills in order to secure the vote of every single legislator. If Carrick Flynn is unwilling to vote for legislation without AI safety funding, and Democrats in the House need his vote due to a narrow majority, that boosts our odds significantly (especially since AI safety is pretty non-partisan and unlikely to be a sticking point in the Senate). I think you focus too much on the short term in this analysis - politicians stick around for a long time, and Flynn could very realistically have a lot of influence in future sessions. The upside here seems massive.

For question (2), first let me comment on negative press. I'm pretty skeptical. Flynn really didn't advertise EA at all during his campaign, and his opponents did not attack him for it (aside from crypto, due to its association with wealth/corruption) - and for good reason: it's really hard to get voters to care about esoteric ideas one way or the other. Voters at large hold pretty authoritarian values and don't care about Republicans' attacks on democracy. Very liberal whites are basically the only ones who care about climate change. And those are issues that are already quite politicized - the idea that AI safety or animal welfare would become a campaign point is in my view laughable. To add, if it becomes too much of an issue, we (as a movement) can always decide it's not worth it and stop running candidates. Flynn was a nice trial run that showed us crypto is a weakness, and it's worth having more tests.

Now onto association with Democrats. EA will always be a left-dominated movement due to the extreme left lean of highly educated people. I do share your concern about Republicans being unwilling to pass EA legislation if it's associated with Democrats. But I think you vastly exaggerate how much more association EA would have with Democrats if there were a couple of Democratic EA-associated legislators in office, especially if they never really talk about EA in public. And besides, going back to question (1), I still think there is a (much) higher chance of getting legislation passed if we have an EA in Congress.

Comment by RedStateBlueState on Community Builders Spend Too Much Time Community Building · 2022-07-12T04:48:23.320Z · EA · GW

It seems like a poor division of labor to have the president doing so much of the outreach. Have you considered having a dedicated outreach coordinator? This would be a good job for a communications-focused member to have. Unlike the president, who as you said is likely wasting some opportunity to skill up, this role would be useful to the communications-focused person later in life as well.

Comment by RedStateBlueState on EA for dumb people? · 2022-07-12T02:09:34.038Z · EA · GW

Vox’s Future Perfect is pretty good for this!

Comment by RedStateBlueState on Person-affecting intuitions can often be money pumped · 2022-07-07T17:15:04.296Z · EA · GW

Yes, I should have thought more about Slider's reply before posting; I take back my agreement. Still, I don't find Dutch booking convincing in Christiano's case.

The reason to reject a theory based on Dutch booking is that it leaves you with no coherent choice to commit to, in this case maximizing EV. I don't think this applies to the Paul Christiano case, because the second lottery does not have higher EV than the first. Yes, once you play the first lottery and find out what finite value it realized, the second one will have higher EV, but until then the first one has higher EV (in an infinite way) and you should choose it.
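
To make the structure concrete (a stylized St. Petersburg-style sketch of my own, not necessarily the exact construction from the linked discussion): let lottery $A$ pay $2^k$ utilons with probability $2^{-k}$ for $k = 1, 2, \dots$, so

$$\mathbb{E}[A] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty,$$

and let lottery $B$ be "whatever $A$ just paid, plus 1". Before $A$ resolves, its EV is infinite and $B$ has nothing to improve on; only after $A$ resolves to some finite $2^k$ does $B = 2^k + 1$ look strictly better.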

But again, I think there can be reasonable disagreement about this; I just think equating Dutch booking for the person-affecting view and for the total utilitarian view is misleading. These are substantially different philosophical claims.

Comment by RedStateBlueState on Person-affecting intuitions can often be money pumped · 2022-07-07T16:31:44.036Z · EA · GW

After reading the linked comment, I think the view that total utilitarianism can be Dutch booked is fairly controversial (there is another unaddressed comment I quite agree with), and on a page like this one I think it's misleading to state as fact that total utilitarianism can be Dutch booked in the same way that person-affecting views can.

Comment by RedStateBlueState on Person-affecting intuitions can often be money pumped · 2022-07-07T14:05:36.756Z · EA · GW

Right, the “default” critique is why people (myself included) are consequentialists. But I think the view outlined in this post is patently absurd and nobody actually believes it. Trade 3 means that you would have no reservations about killing a (very) happy person for a couple utilons!

Comment by RedStateBlueState on Person-affecting intuitions can often be money pumped · 2022-07-07T12:46:23.075Z · EA · GW

Maybe I have the wrong idea about what "person-affecting view" refers to, but I thought a person-affecting view was a non-consequentialist ideology that would not take trade 3, i.e. it is neutral about moving from no person to a happy person but actively dislikes moving from a happy person to no person.

Comment by RedStateBlueState on Michael Nielsen's "Notes on effective altruism" · 2022-06-03T18:59:48.538Z · EA · GW

(Crossposting)

This is a wonderful critique - I agreed with it much more than I thought I would.

Fundamentally, EA is about two things. The first is a belief in utilitarianism or a utilitarian-esque moral system: that there exists an optimal world we should aspire to. This is a belief I take to be pretty universal, whether people want to admit it or not.

The second part of EA is the belief that we should try to do as much good as possible. Emphasis on "try" - there is a subtle distinction between "hope to do the most good" (the previous paragraph) and "actively try to do the most good". This piece points out many ways in which doing the latter does not actually lead to the former: the focus on quantifying impact leads to a male/white community, to a reliance on nonprofits that tend to be less sustainable, to the outsourcing of intellectual work to individual decision-makers, etc.

But the question of "does trying to optimize impact actually lead to optimal outcomes?" is just an epistemic one. The critiques mentioned are simply counter-arguments, and there are numerous arguments in favor that many others have made. And this is a question on which we have some actual evidence; I feel that this piece understates the substantial work that EA has already done. We have very good evidence that GiveWell charities have an order of magnitude higher impact than the average charity. We are supporting animal welfare policy that has had some major victories in state referenda. We have good reason to believe AI safety is a horribly neglected issue that we need to work on.

This isn’t just a theoretical debate. We know we are doing better work than the average altruistic person outside the community. Effective Altruism is working.

Comment by RedStateBlueState on EA can sound less weird, if we want it to · 2022-05-25T03:16:53.846Z · EA · GW

It's kind of funny for me to hear about people arguing that weirdness is a necessary part of EA. To me, EA concepts are so blindingly straightforward ("we should try to do as much good with donations as possible", "long-term impacts are more important than short-term impacts", "even things that have a small probability of happening are worth tackling if they are impactful enough") that you have to actively modify your rhetoric to make them seem weird. 

Strongly agree with all of the points you brought up - especially on AI Safety. I was quite skeptical for a while until someone gave me an example of AI risk that didn't sound like it was exaggerated for effect, to which my immediate reaction was "Yeah, that seems... really scarily plausible".

Comment by RedStateBlueState on How many people have heard of effective altruism? · 2022-05-21T21:32:06.141Z · EA · GW

I think something like 0.1% of the population is a more accurate figure for your most strictly coded category, and 0.3% for the share I would consider to have actually heard of the movement. These are the figures I would have given before seeing the study, anyway.

It's hard for me to point to specific numbers that have shaped my thinking, but I'll lay out a bit of my thought process. Of the people I know in person through non-EA means, I'm pretty sure no more than a low-single-digit percent know about EA, and this is a demographic that is way more likely to have heard of EA than the general public. Additionally, as someone who looks a lot at political polls, I am constantly shocked at how little the public knows about pretty much everything. Given that e.g. EA Forum participation numbers are measured in the thousands, I highly doubt 6 million Americans have heard of EA.
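
As a rough sanity check (my own back-of-the-envelope figures, assuming a US adult population of roughly $2.6 \times 10^8$): my 0.1% estimate works out to

$$0.001 \times 2.6 \times 10^8 \approx 260{,}000$$

people, versus the roughly 6 million the survey would imply - about a factor-of-20 gap.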

Comment by RedStateBlueState on How many people have heard of effective altruism? · 2022-05-21T04:19:52.241Z · EA · GW

Really good write-up!

I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.

Moreover, given that I expect EA knowledge to be extremely low in the general population, I'm not sure what the point of doing these surveys is. It seems to me you're always fighting against various forms of survey bias that are going to swamp any real signal. Doing surveys of specific populations seems a more productive way of measuring knowledge.

I'll update my priors a bit, but I remain skeptical.

Comment by RedStateBlueState on "Big tent" effective altruism is very important (particularly right now) · 2022-05-20T04:09:23.671Z · EA · GW

I think we can use the EA/Rationality divide to give the philosophy-oriented people a home in Rationality without having them dominate EA culture. Rationality used to totally dominate EA, something that has, I think, become less true over time, even if it's still pretty prevalent. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for creating a thriving community.

Comment by RedStateBlueState on Choosing causes re Flynn for Oregon · 2022-05-18T04:31:24.554Z · EA · GW

Shouldn’t we know better than to update in retrospect based on one highly uncertain datapoint?

We have a number of political data people in EA (e.g. David Shor) who thought donating to Flynn was a good investment early in the campaign cycle (later on, I was hearing, they thought it was no longer worth it). There was also good reason to believe Flynn could be high-impact if elected. Let's not overthink this.

Comment by RedStateBlueState on EA and the current funding situation · 2022-05-11T06:03:09.606Z · EA · GW

If you want to get a lot of money for your project, EA grants are not the way to do it. Because of the strong philosophical principles of the EA community, we are more skeptical and rigorous than just about any funding source out there. Granted, I don't actually know much about the nonprofit grant space as a whole: if it comes to the point that EA grants are basically the only game in town for nonprofit funding, then maybe it could become an issue. But if that becomes the case I think we are in a very good position and I believe we could come up with some solutions.

Comment by RedStateBlueState on If you had an hour with a political leader, what would you focus on? · 2022-05-09T02:54:34.359Z · EA · GW

I do think that it's very important not to make animal welfare a partisan issue, so if you do bring it up, be careful. The same probably goes for a lot of these other issues, I just know animal welfare in particular is able to make a lot of headway in public referenda because it is relatively nonpartisan.

Comment by RedStateBlueState on Effective altruism’s odd attitude to mental health · 2022-04-30T03:20:25.146Z · EA · GW

For me, mental health is a notable topic because it is one of the few downsides of modernization. I have a pretty grim view of humanity, and I've talked to a lot of people about how I think the median human living on $5 a day probably has a terrible life.

The response is always something along these lines: they've never experienced anything else, so for them it's really not that bad of a life. That is, people have some underlying intuition that there is always hedonic adaptation to a new "quality of life", and that someone's perspective on their own life matters maybe even more than their actual circumstances.

In rich countries, this looks like mental health issues. People get so used to their physical needs being taken care of that any emotional struggles in their life feel amplified, leading to anxiety and depression.

So I think it is accurate that the most important issues in this world are global health, poverty, etc., simply because so much of the world is underdeveloped. However, if we want to get to a really great world, a world approaching perfection, we will have to tackle mental health issues.

Comment by RedStateBlueState on My bargain with the EA machine · 2022-04-28T01:33:17.707Z · EA · GW

In that case I'm going to blame Google for defining volition as "the faculty or power of using one's will."  Or maybe that does mean "endorse"? Honestly I'm very confused, feel free to ignore my original comment.

Comment by RedStateBlueState on My bargain with the EA machine · 2022-04-28T01:19:26.114Z · EA · GW

I appreciate the honesty and thoughtfulness of the post, and I think the diagram illustrates your point beautifully. I do worry, however, that thinking of human will in this diagrammatic sense understates our ability to affect our own will. None of us act in ways that are 100% EA; this point is obvious and need not be rationalized. All we can do is constantly strive to be more like true EAs. My psychological intuition is that this has to be a gradual, in-the-moment process, where we take EA opportunities when they come up instead of planning out which opportunities we think are consistent with our will. Taking your will as a given that you then need to act around could in this way be counterproductive.

Comment by RedStateBlueState on Go Republican, Young EA! · 2022-04-13T05:12:52.935Z · EA · GW

There is a big difference between working in policy institutions and working in politics/campaigning directly. By working in Republican policy institutions (e.g. think tanks), you can have enormous impact that you couldn't have while working under Democrats. By working on Republican campaigns, you are contributing (non-negligibly, given the labor shortage you describe!) to the fall of US democracy and to a party that has much worse views on almost every subject under most moral frameworks.

For someone with a reasonably clear picture of the moral impacts of policy, working under Republicans is also enormously emotionally difficult. Valuable, yes, but not for the faint of heart.

Comment by RedStateBlueState on Why should we care about existential risk? · 2022-04-09T02:16:08.421Z · EA · GW

Thank you for the response!

Yeah, I think my biggest problem is with (4), something I probably should have expressed more in the post.

It's true that humans are in theory trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, there are in my view equally good reasons for utility to diverge to negative infinity - namely, that the world is not designed for humans. We are inherently fragile creatures, suited only to live in a world with a specific temperature, air composition, etc. There are a lot of large-scale phenomena causing these factors to change - s-risks - that could send utility plunging. This, plus the fact that current utility is below 0, means that I think existential risk is probably a moral benefit.

I also agree that this whole thing is pretty pedantic, especially in cases like AI domination.