Posts

Optimisation-focused introduction to EA podcast episode 2021-01-15T09:59:29.416Z
Retrospective on Teaching Rationality Workshops 2021-01-03T17:15:06.154Z
Local Group Event Idea: EA Community Talks 2020-12-20T17:12:29.251Z
Make a Public Commitment to Writing EA Forum Posts 2020-11-18T18:23:11.468Z
Helping each other become more effective 2020-10-30T21:33:47.382Z
What altruism means to me 2020-08-15T08:25:28.386Z
The world is full of wasted motion 2020-08-05T20:41:23.710Z

Comments

Comment by Neel Nanda on Local Group Event Idea: EA Community Talks · 2021-08-14T17:09:09.593Z · EA · GW

I'm curious whether you ended up trying these out?

Comment by Neel Nanda on How students, groups, and community members can use funding · 2021-08-11T12:35:23.293Z · EA · GW

Great post! I'd be very excited to see somewhere like the Infrastructure Fund funding any of these.

Another point: Generally converting money to time and productivity - affording healthy ready meals, a fast and functional laptop and phone, not needing to stress about meeting rent, being able to get Ubers rather than walking, etc. I think it's often awkward to ask for money for things like this for yourself, but I want anyone doing good community building to not need to worry about things like this! Or if anyone is doing a part-time job to support themselves while doing community building, I'd LOVE for them to be paid for the community building instead, and be able to focus on that more.

I think people in EA are often averse to things like this, because that money could be donated instead. But I think this often leads to bad norms around this stuff - if you're doing high impact work, your time is valuable, and saving time means you can do more good work!

Comment by Neel Nanda on [PR FAQ] Improving tag notifications · 2021-08-09T20:21:31.793Z · EA · GW

I had absolutely no idea you could subscribe to a tag! Thanks (as a result, I have no real views on this feature)

Comment by Neel Nanda on [PR FAQ] Adding profile pictures to the Forum · 2021-08-09T20:20:02.813Z · EA · GW

I feel mildly negative about this idea, though find it hard to articulate why

Comment by Neel Nanda on [PR FAQ] Sharing readership data with Forum authors · 2021-08-09T20:17:40.020Z · EA · GW

I would find this extremely motivating (though also obsessively check it in a way that is somewhat unhealthy)

Comment by Neel Nanda on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T20:22:12.016Z · EA · GW

Interesting idea! I think this works much better when supply is constrained (eg land), and not when supply is elastic (eg GPUs). I'm curious whether anyone has actually tried this

Comment by Neel Nanda on Most research/advocacy charities are not scalable · 2021-08-07T20:13:32.671Z · EA · GW

I think AI research on large models is quite different to the kind of research meant by this post, because it requires large amounts of compute, which is physical (though I guess not exactly a product)

Similarly, biotech research or high energy physics research is really expensive, and mostly because of physical world stuff

Comment by Neel Nanda on (Video) How to be a less crappy person · 2021-08-03T13:30:22.431Z · EA · GW

Lastly, I may be alone here, but I am concerned with EA community becoming a little too quickly bound to norms and rules. I would be afraid we could quickly become a dogmatic and siloed group. I would argue the approach in the video above is unique/diverse in the community, and that there is strong value in that 

I agree with the principle of being pro-diversity and anti-dogma in general, but I disagree when it comes to public communications. If someone communicates badly about EA, that harms the movement, can negatively change perceptions, and makes it harder for everyone else doing communication. Eg, 80K over-emphasising earning to give early on. 

I think that divisive and argumentative approaches like this one, as Harrison says, can put a lot of people off and give them a more negative image of EA, and I think this can be harmful to the movement. This doesn't mean that public communication needs to be homogenous, but I do think it's valuable to push back on public communication that we think may be harmful. 

Comment by Neel Nanda on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T21:56:13.964Z · EA · GW

Thanks a lot for the thorough post! I found it really helpful how you put rough numbers on everything, and made things concrete, and I feel like I have clearer intuitions for these questions now.

My understanding is that these considerations only apply to longtermists, and that for people who prioritise global health and well-being or animal welfare this is all much less clear - would you agree with that? My read is that those cause areas have much more high-quality work by non-EAs, and high-quality, shovel-ready interventions.

I think that nuance can often get lost in discussions like this, and I imagine a good chunk of 80K readers are not longtermists, so if this only applies to longtermists I think that would be good to make clear in a prominent place.

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

Comment by Neel Nanda on A Twitter bot that tweets high impact jobs · 2021-07-26T20:02:13.025Z · EA · GW

The best way for this is to create an issue on github

Fyi this link is broken

Comment by Neel Nanda on Apply to the new Open Philanthropy Technology Policy Fellowship! · 2021-07-22T12:19:18.723Z · EA · GW

This seems like a great initiative, I'm excited to see where this goes!

Do people need to be US citizens (or green card holders etc) to apply for this?

Comment by Neel Nanda on What would you ask a policymaker about existential risks? · 2021-07-08T12:14:38.684Z · EA · GW

Have you spoken at all with the Centre for Long-term Resilience? They work with UK policy makers on issues related to catastrophic and existential risk, and I imagine would be pretty interested in this project.

Comment by Neel Nanda on Inspiring others to do good · 2021-07-08T12:11:01.284Z · EA · GW

Interesting idea! I'm curious to see where this goes. I'm unsure whether I expect most people to perceive this as pretentious, or as admirable/norm-setting

One thing that would significantly put me off using this as-is is that I can only choose 3 cause areas (none of which are the ones I most highly prioritise), and can't choose specific charities within each cause area. But if this website isn't aimed at longtermists/highly engaged EAs, maybe this is fine! I believe One for the World do something similar.

Comment by Neel Nanda on What should CEEALAR be called? · 2021-06-24T11:06:41.052Z · EA · GW

The other primary advantage is that the name is quite self-explanatory.

When I hear the name, I picture a hotel chain trying to provide excellent and efficient service. It doesn't feel like it gets to the heart of the EA Hotel for me.

Comment by Neel Nanda on What effectively altruistic inducement prize contest would you like to be funded? · 2021-06-23T20:43:06.537Z · EA · GW

Why is "iterated embryo selection" desirable on EA grounds?

I can see the argument that this lets us improve human intelligence, which eg leads to more technological progress. But it seems unclear whether this is good from an x-risk perspective. And I can see many ways that better control over human genetics can lead to super bad outcomes, eg stable dictatorships.

Comment by Neel Nanda on What are the 'PlayPumps' of cause prioritisation? · 2021-06-23T20:40:48.682Z · EA · GW

This seems like an awesome project!

I'm curious why you're emphasising 'it needs to be obvious, after some thought, that this cause is not worth pursuing at all' as a criterion here. To me, it doesn't really feel like cause prioritisation to first check whether your cause is even helpful. I feel that the harder but more important insight is that 'even if your cause is GOOD, some other causes can be better. Resources are scarce, and so you should focus on the causes that are MORE good'.

To me, one of the core ideas of EA is trying to maximise the good you do, not just settling for good enough. And that's something I'd want to come across in an introductory work. Though it's much harder to make this intuitive, obviously!

Comment by Neel Nanda on What are some key numbers that (almost) every EA should know? · 2021-06-18T13:05:21.157Z · EA · GW

I'd love to use such an Anki deck!

Comment by Neel Nanda on New skilled volunteering board for effective animal advocacy · 2021-06-18T13:03:15.175Z · EA · GW

Nice initiative! I'd have found the post title more informative if you replaced 'high priority EA cause area' with 'Animal Advocacy'/'Animal Welfare'. Is there a reason you went with the first one?

Comment by Neel Nanda on Event-driven mission hedging and the 2020 US election · 2021-06-15T11:21:22.040Z · EA · GW

Interesting idea, and thought-provoking post, thanks! 

I find it odd to call this mission hedging though. It feels more like mission anti-hedging - I want to be maximally risk-seeking and go all in to have more money in the world where my cause is doing better.

Comment by Neel Nanda on How Should Free Will Theories Impact Effective Altruism? · 2021-06-15T11:18:22.106Z · EA · GW

If free will doesn't exist, does that ruin/render void the EA endeavour?

Can you say more about why free will not existing is relevant to morality? 

My personal take is that free will seems like a pretty meaningless and confused concept, and probably doesn't exist (whatever that means). But I want to do what I can to make the world a better place anyway, in the same way that I clearly want and value things in my normal life, regardless of whether I'm doing this with free will.

Comment by Neel Nanda on Are there any 'maximum egoism' pledges? · 2021-06-15T11:09:27.043Z · EA · GW

Ah, interesting! I didn't know this was the original conception of GWWC. I'm glad that got changed! 

Comment by Neel Nanda on Are there any 'maximum egoism' pledges? · 2021-06-13T19:50:25.099Z · EA · GW

Huh, that is not what I thought you meant by maximal egoism

Giving What We Can have the Further Pledge, to donate everything above a certain threshold.

Comment by Neel Nanda on Which non-EA-funded organisations did well on Covid? · 2021-06-10T23:43:28.738Z · EA · GW

To me the shining example of this is Jacob Falkovich's Seeing the Smoke, which somehow helped convince the UK to lock down in March 2020.

Comment by Neel Nanda on Which non-EA-funded organisations did well on Covid? · 2021-06-10T23:41:32.330Z · EA · GW

even if he has been somewhat (in my opinion unfairly) shunned by the EA community

What's this referring to? I know he consumes a bunch of rationalist content, but wasn't aware of much interaction with EA, or of any action of the community towards him.

Comment by Neel Nanda on EA Infrastructure Fund: Ask us anything! · 2021-06-07T00:14:54.072Z · EA · GW

If I know an organisation is applying to EAIF, and have an inside view that the org is important, how valuable is donating $1000 to the org compared to donating $1000 to EAIF? More generally, how should medium sized but risk-neutral donors coordinate with the fund?

Comment by Neel Nanda on EA Infrastructure Fund: Ask us anything! · 2021-06-06T21:37:48.638Z · EA · GW

What were the most important practices you transferred?

Comment by Neel Nanda on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T07:19:48.787Z · EA · GW

Thanks a lot for the write-up! Seems like there's a bunch of extremely promising grants in here. And I'm really happy to see that the EAIF is scaling up grantmaking so much. I'm particularly excited about the grants to HIA, CLTR and to James Aung & Emma Abele's project.

And thanks for putting so much effort into the write-up - it's really valuable to see the detailed thought process behind grants, and it makes me feel much more comfortable with future donations to EAIF. I particularly appreciated this for the children's book grant; I went from being strongly skeptical to tentatively excited by the write-up.

Comment by Neel Nanda on Long-Term Future Fund: May 2021 grant recommendations · 2021-05-31T18:16:50.347Z · EA · GW

Yes, agreed. My argument is that if cases are sufficiently low in the US, then deploying it now won't get much data, and the app likely won't get much uptake

Comment by Neel Nanda on Introducing Rational Animations · 2021-05-31T11:19:08.488Z · EA · GW

As a single point of anecdata, I got interested in EA via being part of the rationality community, and think it's plausible I would not have gotten involved in EA if there wasn't that link

Comment by Neel Nanda on Long-Term Future Fund: May 2021 grant recommendations · 2021-05-31T10:51:12.258Z · EA · GW

Po-Shen Loh – $100,000, with an expected reimbursement of up to $100,000

Awesome! I think NOVID is a really clever idea, and I'm excited to see it getting funding.

One concern I have about the value proposition, which I didn't see addressed here: It seems that this funding might be coming too late in the pandemic to be useful? It seems that NOVID will only really help in future pandemics if it clearly demonstrates value now. But as far as I'm aware, it's mainly being developed and deployed in the US, which seems to be most of the way to herd immunity. So it seems plausible that there won't be enough transmission for NOVID to really demonstrate value.

Comment by Neel Nanda on Why should we be effective in our altruism? · 2021-05-31T10:36:55.596Z · EA · GW

There is enormous spread in how much good some interventions do. For example, money spent helping the world's poorest people can be 100x more effective than money spent helping the typical person in the West. 100x differences are a really big deal, but feel unintuitive and hard to think about - these don't often come up in everyday life. And caring about evidence and effectiveness is our main tool to identify these differences in spread, and focus on the best interventions. So we need to care about effectiveness, because we happen to live in a world where caring about it makes a massive difference in how much good we do
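
To make the arithmetic explicit (a minimal sketch, assuming logarithmic utility of income - an assumption I'm adding here, though it's the standard one behind this claim): if wellbeing is U(c) = ln(c), then U(2c) - U(c) = ln(2) for any income c, so doubling an income gives the same wellbeing boost whatever the starting point. But doubling a $50,000 income costs 100x as much as doubling a $500 income, so each dollar spent doubling the incomes of the world's poorest buys roughly 100x the wellbeing gain.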

Comment by Neel Nanda on Concerns with ACE's Recent Behavior · 2021-04-26T08:44:53.690Z · EA · GW

 Evidence-based reasoning, with the understanding that the burden of proof lies with those who deny that the EA movement must make strenuous efforts to eliminate all forms of discrimination in its midst.

I feel somewhat skeptical of this, given that you also say:

This may include in some contexts behaviour consisting in denying that such discrimination exists or that it needs to be addressed.

It feels like 'trying to provide empirical evidence that the EA movement should not make overcoming discrimination an overwhelming priority' could easily come across as denying that discrimination exists, and could feel harmful to people. I'm somewhat skeptical that such a discussion would happen in a healthy and constructive way under prevailing social justice discussion norms. Have you ever come across good examples of such discussions?

Comment by Neel Nanda on Concerns with ACE's Recent Behavior · 2021-04-25T20:06:42.927Z · EA · GW

The key part of running feedback by an org isn't to inform the org of the criticism, it's to hear their point of view, and see whether any events have been misrepresented (from their point of view). And, ideally, to give them a heads up to give a response shortly after the criticism goes up

Comment by Neel Nanda on Cash Transfers as a Simple First Argument · 2021-04-18T22:10:56.832Z · EA · GW

I really like this example! I used it in an interview I gave about EA and thought it went down pretty well. My main concern with using it is that I don't personally fund direct cash transfers (or think they're anywhere near the highest impact thing), so I both think it can misrepresent the movement, and think that it's disingenuous to imply that EA is about robustly good things like this, when I actually care most about things like AI Safety.

As a result, I frame the example like this (if I can have a high-context conversation):

  • Effectiveness, and identifying the highest impact interventions, is a cornerstone of EA. I think this is super important, because there's really big spread between how much good different interventions do, much more than feels intuitive
  • Direct cash transfers are a proof of concept: There's good evidence that doubling your income increases your wellbeing by the same amount, no matter how wealthy you were to start with. We can roughly think of helping someone as just giving them money, and so increasing their income. The average person in the US has income about 100x the income of the world's poorest people, and so with the resources you'd need to double the income of an average American, you could help 100x as many of the world's poorest people!
    • Contextualise, and emphasise just how weird 100x differences are - these don't come up in normal life. It'd be like you were considering buying a laptop for $1000, shopped around for a bit, and found one just as good for $10! (Pick an example that I expect to resonate, based on big expenses the person faces, eg a laptop, car, rent, etc)
    • Emphasise that this is just a robust example as a proof of concept, and that in practice I think we can do way better - this just makes us confident that spread is out there, and worth looking for. Depending on the audience, maybe explain the idea of hits-based giving, and risk neutrality.

Comment by Neel Nanda on Concerns with ACE's Recent Behavior · 2021-04-17T09:24:26.097Z · EA · GW

Thanks for sharing, that part updated me a lot away from Ben's view and towards Hypatia's view. 

An aspect I found particularly interesting was that Anima International seems to do a lot of work in Eastern European countries, which tend to be much more racially homogenous, and I presume have fairly different internal politics around race to the US. And that ACE's review emphasises concerns, not about their ability to do good work in their countries, but about their ability to participate in international spaces with other organisations.

They work in: 

Denmark, Poland, Lithuania, Belarus, Estonia, Norway, Ukraine, the United Kingdom, Russia, and France

It seems even less justifiable to me to judge an organisation according to US views around racial justice, when they operate in such a different context.

EDIT: This point applies less than I thought. Looks like Connor Jackson, the person in question, is a director of their UK branch, which I'd consider much closer to the US on this topic. 

Comment by Neel Nanda on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T09:09:25.569Z · EA · GW

Thanks for the clarification. I'm glad that's in there, and I'll feel better about this once the 'Top 10 problem areas' feed exists, but I still feel somewhat dissatisfied. I think that 'some EAs prioritise longtermism, some prioritise neartermism or are neutral. 80K personally prioritises longtermism, and does so in this podcast feed, but doesn't claim to speak on behalf of the movement and will point you elsewhere if you're specifically interested in global health or animal welfare' is a complex and nuanced point. I generally think it's bad to try making complex and nuanced points in introductory material like this, and expect that most listeners who are actually new to EA wouldn't pick up on that nuance. 

I would feel better about this if the outro episode covered the same point, I think it's easier to convey at the end of all this when they have some EA context, rather than at the start.

A concrete scenario to sketch out my concern:

Alice is interested in EA, and somewhat involved. Her friend Bob is interested in learning more, and Alice looks for intro materials. Because 80K is so prominent, Alice comes across 'Effective Altruism: An Introduction' first, and recommends this to Bob. Bob listens to the feed, and learns a lot, but because there's so much content and Bob isn't always paying close attention, Bob doesn't remember all of it. Bob only has a vague memory of Episode 0 by the end, and leaves with a vague sense that EA is an interesting movement, but only cares about weird, abstract things rather than suffering happening today, and concludes that the movement has got a bit too caught up in clever arguments. And as a result, Bob decides not to engage further.

Comment by Neel Nanda on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T08:54:45.001Z · EA · GW

Ah, thanks for the clarification! That makes me feel less strongly about the lack of diversity. I interpreted it as prioritising ALLFED over global health stuff as representative of the work of the EA movement, which felt glaringly wrong

Comment by Neel Nanda on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-16T15:45:38.705Z · EA · GW

I strongly second all of this. I think 80K represents quite a lot of EA's public-facing outreach, and that it's important to either be explicit that this is longtermism-focused, or to try to be representative of what happens in the movement as a whole. I think this especially holds for something explicitly framed as an introductory resource, since I expect many people get grabbed by global health/animal welfare angles who don't get grabbed by longtermist angles.

Though I do see the countervailing concern that 80K is strongly longtermism focused, and that it'd be disingenuous for an introduction to 80K to give disproportionate time to neartermist causes, if those are explicitly de-prioritised 

Comment by Neel Nanda on Concerns with ACE's Recent Behavior · 2021-04-16T12:37:35.593Z · EA · GW

Thanks a lot for writing this up and sharing this. I have little context beyond following the story around CARE and reading this post, but based on the information I have, these seem like highly concerning allegations, and ones I would like to see more discussion around. And I think writing up plausible concerns like this clearly is a valuable public service.

Out of all these, I feel most concerned about the aspects that reflect on ACE as an organisation, rather than those that reflect the views of ACE employees. If ACE employees didn't feel comfortable going to CARE, I think it is correct for ACE to let them withdraw. But I feel concerned about ACE as an organisation making a public statement against the conference. And I feel incredibly concerned if ACE really did downgrade the rating of Anima International as a result.

That said, I feel like I have fairly limited information about all this, and have an existing bias towards your position. I'm sad that a draft of this wasn't run by ACE beforehand, and I'd be keen to hear their perspective. Though, given the content and your desire to remain anonymous, I can imagine it being unusually difficult to hear ACE's thoughts before publishing.

Personally, I consider the epistemic culture of EA to be one of its most valuable aspects, and think it's incredibly important to preserve the focus on truth-seeking, people being free to express weird and controversial ideas, etc. I think this is an important part of EA finding neglected ways to improve the world, identifying and fixing its mistakes, and keeping a focus on effectiveness. To the degree that the allegations in this post are true, and that this represents an overall trend in the movement, I find this extremely concerning, and expect this to majorly harm the movement's ability to improve the world.

Comment by Neel Nanda on Concerns with ACE's Recent Behavior · 2021-04-16T12:27:51.299Z · EA · GW

I interpret it as 'the subgroup of the Effective Altruist movement predominantly focused on animal welfare'

Comment by Neel Nanda on "Insider giving" - An unfortunate donation strategy used by corporate insiders to avoid losses · 2021-04-14T12:09:56.042Z · EA · GW

Interesting! It's not that obvious to me that this is bad. Eg, if this gets people donating stock rather than donating nothing at all, this feels like a cash transfer from the government to charities?

Of course, WHICH charities receive the stock matters a lot here

inflates donation figures.

From the article linked:

And what they find is that "large shareholders’ gifts are suspiciously well timed. Stock prices rise abnormally about 6% during the one-year period before the gift date and they fall abnormally by about 4% during the one year after the gift date, meaning that large shareholders tend to find the perfect day on which to give."

A 4% inflation really doesn't seem that bad? Especially since, as Larks says, charities can sell stock themselves much sooner than a year.

Comment by Neel Nanda on Some quick notes on "effective altruism" · 2021-03-27T08:13:25.029Z · EA · GW

I also find that a bit cringy. To me, the issue is saying "I have SUCCEEDED at being effective at altruism", which feels like a high bar and somewhat arrogant to explicitly admit to

Comment by Neel Nanda on Long-Term Future Fund: Ask Us Anything! · 2021-03-25T14:57:33.965Z · EA · GW

Do you mean this as distinct from Jonas's suggestion of:

Nah, I think Jonas' suggestion would be a good implementation of what I'm suggesting. Though as part of this, I'd want the LTFF to be less public facing and obvious - if someone googled 'effective altruism longtermism donate' I'd want them to be pointed to this new fund.

Hmm, I agree that a version of this fund could be implemented pretty easily - eg just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. And I'm not sure how to do it well and ethically.

Comment by Neel Nanda on EA Funds has appointed new fund managers · 2021-03-24T21:33:35.716Z · EA · GW

Huh, I find this surprising. I'd thought the Global Health and Development Fund was already intended to focus on hits-based giving in global health. Can you elaborate a bit more on what the middle ground being hit here is, by the current fund?

Comment by Neel Nanda on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-11T11:23:25.660Z · EA · GW

What would your advice be for talking to the media about EA? (And how to figure out whether to do it at all!)

How would you frame the message of EA to go down well with a large audience? (Eg, in an article in a major newspaper). How would this change with the demographics/political bias of that audience? Do you think it's possible to convey longtermist ideas in such a setting?

Being ahead of the curve on COVID-19/pandemics seems like a major win for EA, but it has also been a major global tragedy. How do you think we can best talk about COVID when selling EA, that is both tactful and reflects well on EA?

Comment by Neel Nanda on Don't Be Bycatch · 2021-03-11T11:21:04.020Z · EA · GW

Thanks a lot for writing this! I think this is a really common trap to fall into, and I both see this a lot in others, and in myself.

To me, this feels pretty related to the trap of guilt-based motivation - taking the goals that I care about, and thinking of them as 'I should do this' or as obligations, and feeling bad and guilty when I don't meet them. Combined with having unrealistically high standards, based on a warped and perfectionist view of what I 'should' be capable of, on hindsight bias and the planning fallacy, and on what I think the people around me are capable of. These combine to mean that I set myself standards I can never really meet, feel guilty for failing to meet them, and ultimately build up aversions that stop me caring about whatever I'm working on, and make me flinch away from it.

This is particularly insidious, because I find the intention behind this is often pure and important to me. It comes from a place of striving to be better, of caring about things, and wanting to live in consistency with my values. But in practice, this intention, plus those biases and failure modes, combine in me doing far worse than I could.

I find a similar mindset to your first piece of advice useful: I imagine a future version of myself that is doing far better than I am today, and ask how I could have gotten there. And I find that I'd be really surprised and confused if I suddenly got way better one day. But that it's plausible to me that each day I do a little bit better than before, and that, on average, this compounds over time. Which means it's important to calibrate my standards so that I expect myself to do a bit better than what I have been  realistically capable of before.

If you resonate with that, I wrote a blog post called Your Standards Are Too High on how I (try to) deal with this problem. And the Replacing Guilt series by Nate Soares is phenomenally good, and probably one of the most useful things I've ever read re my own mental health

Comment by Neel Nanda on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-02-27T22:10:32.322Z · EA · GW

I think what you've written is not an argument against consequentialism, it's about trying to put numbers on things in order to rank the consequences?

Regardless, that wasn't how I interpreted her case. It doesn't feel like she cares about the total amount of systemic equality and justice in the world. She fundamentally cares about this from the perspective of the individual doing the act, rather than the state of the world, which seems importantly different. And to me, THIS part breaks consequentialism

Comment by Neel Nanda on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-02-27T07:59:54.785Z · EA · GW

Thanks for sharing! One thing I didn't notice in the summary: The talk seemed specifically focused on the impact of EA on the animal advocacy space (which I found mildly surprising and interesting, since these critiques pattern match much more to global health/equity/justice concerns)

This article seems to basically boil down to "take a specific view of morality that the author endorses, which heavily emphasises virtue, justice, systemic change and individual obligations, and is importantly not consequentialist, yet also demanding enough to be hard to satisfice on".

Then, once you have taken this alternate view, observe that this wildly changes your moral conclusions and opinions on how to act, and much of what EA stands for.

You can quibble about "the article claims to be challenging the fundamental idea of EA, yet EA is compatible with any notion of the good and capable of doing this effectively". But I personally think that EA DOES have a bunch of common moral beliefs, eg the importance of consequentialism, impartial views of welfare, the importance of scope and numbers, and to some degree utilitarianism. And that EA beliefs are robust to people not sharing all of these views, and to pluralistic views like others in this thread have argued (eg, put in the effort to be a basically decent person according to common sense morality and then ruthlessly optimise for your notion of the good with your spare resources). But I think you also need to make some decisions about what you do and do not value, especially for a moral view that's demanding rather than just "be a basically decent person", and her view seems fairly demanding?

I'm a bit confused about EXACTLY what the view of morality described here is - it pattern matches onto virtue ethics, plus views on the importance of justice and systemic change? But I definitely think it's quite different from any system that I subscribe to. And it doesn't feel like the article is really trying to convince me to take up this view, just taking it as implicit. And it seems fine to note that most EAs have some specific moral beliefs, and that if you substantially disagree with those then you reach different conclusions? But it's hardly a knock-down critique of EA, it's just a point that tradeoffs are hard and you need to pick your values to make decisions.

The paragraph of the talk that felt most confusing/relevant:

This philosophical critique brings into question effective altruists’ very notion of doing the “most good.” As effective altruists use it, this phrase presupposes that the rightness of a social intervention is a function of its consequences and that the outcome involving the best consequences counts as doing most good. This idea has no place within an ethical stance that underlies the philosophical critique. Adopting this stance is a matter of seeing the real fabric of the world as endowed with values that reveal themselves only to a developed sensibility. To see the world this way is to leave room for an intuitively appealing conception of actions as right insofar as they exhibit just sensitivity to the worldly circumstances at hand. Accepting this appealing conception of action doesn’t commit one to denying that right actions frequently aim at ends. Here acting rightly includes acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others. With reference to the benevolent pursuit of others’ well-being, it certainly makes sense to talk about good states of affairs. But it is important, as Philippa Foot once put it, “that we have found this end within morality, forming part of it, not standing outside it as a good state of affairs by which moral action in general is to be judged” (Foot 1985, 205). Right action also includes acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence. If we are responsive to circumstances, sometimes we will act with an eye to others’ well-being, and sometimes with an eye to other ends. In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality. If we keep our heads, we will say that the result we face is, in the only sense that is meaningful, the best one. There is here simply no room for EA-style talk of “most good.”

Comment by Neel Nanda on Some EA Forum Posts I'd like to write · 2021-02-23T13:13:39.822Z · EA · GW

I love the idea of this post! I'd be extremely excited to read the forecasting post and I think making that would be highly valuable. I'm not that interested in the others

Comment by Neel Nanda on Local Group Event Idea: EA Community Talks · 2021-01-23T20:51:53.777Z · EA · GW

Ah, awesome! I'd love to hear how it goes