Posts

Public Spreadsheet of Effective Altruism Resources by Career Type 2019-06-03T18:43:06.199Z · score: 35 (15 votes)
What exactly is the system EA's critics are seeking to change? 2019-05-27T03:46:45.290Z · score: 7 (8 votes)
Update on the Vancouver Effective Altruism Community 2019-05-17T06:10:14.053Z · score: 24 (7 votes)
EA Still Needs an Updated and Representative Introductory Guidebook 2019-05-12T07:33:46.183Z · score: 33 (19 votes)
What caused EA movement growth to slow down? 2019-05-12T05:48:44.184Z · score: 14 (9 votes)
Does the status of 'co-founder of effective altruism' actually matter? 2019-05-12T04:34:32.667Z · score: 9 (10 votes)
Announcement: Join the EA Careers Advising Network! 2019-03-17T20:40:04.956Z · score: 30 (30 votes)
Neglected Goals for Local EA Groups 2019-03-02T02:17:12.624Z · score: 35 (19 votes)
Radicalism, Pragmatism, and Rationality 2019-03-01T08:18:22.136Z · score: 14 (9 votes)
Building Support for Wild Animal Suffering [Transcript] 2019-02-24T11:56:33.548Z · score: 14 (8 votes)
Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? 2019-02-24T00:12:58.780Z · score: 6 (1 votes)
How Can Each Cause Area in EA Become Well-Represented? 2019-02-22T21:24:08.377Z · score: 14 (8 votes)
What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? 2019-01-30T02:52:25.471Z · score: 22 (10 votes)
Effective Altruism Making Waves 2018-11-15T20:20:08.959Z · score: 6 (7 votes)
Reducing Wild Animal Suffering Ecosystem & Directory 2018-10-31T18:26:52.476Z · score: 12 (8 votes)
Reducing Wild Animal Suffering Literature Library: Original Research and Cause Prioritization 2018-10-15T20:28:10.896Z · score: 8 (8 votes)
Reducing Wild Animal Suffering Literature Library: Consciousness and Ecology 2018-10-15T20:24:57.674Z · score: 6 (6 votes)
The EA Community and Long-Term Future Funds Lack Transparency and Accountability 2018-07-23T00:39:10.742Z · score: 62 (63 votes)
Effective Altruism as Global Catastrophe Mitigation 2018-06-08T04:35:16.582Z · score: 7 (9 votes)
Remote Volunteering Opportunities in Effective Altruism 2018-05-13T07:43:10.705Z · score: 34 (26 votes)
Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations 2018-05-05T03:23:15.858Z · score: 10 (12 votes)
Wild Animal Welfare Project Discussion: A One-Year Strategic Review 2018-05-05T00:56:04.991Z · score: 8 (10 votes)
Ten Commandments for Aspiring Superforecasters 2018-04-25T05:07:39.734Z · score: 10 (10 votes)
Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply 2018-04-13T22:10:16.460Z · score: 11 (11 votes)
Lessons for Building Up a Cause 2018-02-10T08:25:53.644Z · score: 13 (15 votes)
Room For More Funding In AI Safety Is Highly Uncertain 2016-05-12T13:52:37.487Z · score: 6 (6 votes)
Effective Altruism Is Exploring Climate Change Action, and You Can Be Part of It 2016-04-22T16:39:30.688Z · score: 10 (10 votes)
Why You Should Visit Vancouver 2016-04-07T01:57:28.627Z · score: 9 (9 votes)
Effective Altruism, Environmentalism, and Climate Change: An Introduction 2016-03-10T11:49:45.914Z · score: 17 (17 votes)
Consider Applying to Organize an EAGx Event, And An Offer To Help Apply 2016-01-22T20:14:07.121Z · score: 4 (4 votes)
[LINK] Will MacAskill AMA on Reddit 2015-08-03T20:45:42.530Z · score: 3 (3 votes)
Effective Altruism Quotes 2015-08-01T13:49:23.484Z · score: 1 (1 votes)
2015 Summer Welcome Thread 2015-06-16T20:29:36.185Z · score: 2 (2 votes)
[Announcement] The Effective Altruism Course on Coursera is Now Open 2015-06-16T20:20:00.044Z · score: 4 (4 votes)
Don't Be Discouraged In Reaching Out: An Open Letter 2015-05-21T22:26:50.906Z · score: 5 (5 votes)
What Cause(s) Do You Support? And Why? 2015-03-22T00:13:37.886Z · score: 2 (2 votes)
Announcing the Effective Altruism Newsletter 2015-03-11T06:05:51.545Z · score: 10 (10 votes)
March Open Thread 2015-03-01T17:14:59.382Z · score: 1 (1 votes)
Does It Make Sense to Make Multi-Year Donation Commitments to One Organization? 2015-01-27T19:37:30.175Z · score: 2 (2 votes)
Learning From Less Wrong: Special Threads, and Making This Forum More Useful 2014-09-24T10:59:20.874Z · score: 6 (6 votes)

Comments

Comment by evan_gaensbauer on Did Geoff Anders ever write a post about the performance of Leverage Research and their recent disbanding? · 2019-08-05T01:23:06.822Z · score: 6 (3 votes) · EA · GW

Frankly, I'm unsure how much there is to learn from or about Leverage Research at this point. I've been in the effective altruism movement for almost as long as Leverage Research, an organization which has had some kind of association with effective altruism since soon after it was founded, has been around. Its history is one of failed projects, many linked to the mismanagement of Leverage Research as an ecosystem of projects. In effective altruism, one of our goals is to learn from mistakes, including the mistakes of others, so we don't make the same kinds of mistakes ourselves. It's usually more prudent to judge mistakes on a case-by-case basis, as opposed to judging the actor or agency that perpetrates them. Yet other times there is a common thread. When there is evidence of repeated failures born of systematic errors in an organization's operations and worldview, often the most prudent lesson we can learn from that organization is why it repeatedly and consistently failed, and about its environment, and why that environment enabled a culture in which the organization barely ever course-corrected or was receptive to feedback. What we might be able to learn from Leverage Research is how EA(-adjacent) organizations should not operate, and how effective altruism as a community can learn to interact with them better.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-08-05T01:10:06.994Z · score: 2 (1 votes) · EA · GW

Alright, thanks for letting me know. I'll remember that for the future.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-08-05T01:09:32.508Z · score: 2 (1 votes) · EA · GW

Hi. I'm just revisiting this comment now. I don't have any more questions. Thanks for your detailed response.

Comment by evan_gaensbauer on Support AMF in tab 4 a cause so it reaches its goal. · 2019-07-29T05:13:45.610Z · score: 6 (4 votes) · EA · GW

I saw this post had negative karma, and I upvoted it back to positive karma. I'm making this comment to signal-boost that I believe this article belongs on the EA Forum, and that if one is going to downvote articles like this, which by all appearances are appropriate for the EA Forum, it would be helpful to provide a constructive explanation or criticism of them.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T02:01:38.983Z · score: 2 (3 votes) · EA · GW

I've been in the EA community since 2012. As someone who has been in EA for that long, I entered the community taking to heart the intentional stance of 'doing the most good'. Back then, a greater proportion of the community wanted EA to primarily be about a culture of effective, personal, charitable giving. The operative word of that phrase is 'personal': even though there are foundations behind the EA movement, like the Open Philanthropy Project, with a greater endowment than the rest of the EA community combined might ever hope to earn to give, for different reasons a lot of EAs still think it's important that EA emphasize a culture of personal giving regardless. I understand and respect that stance, and respect its continued presence in EA. I wouldn't even mind if it became a much bigger part of EA once again. This is a culture within EA that frames effective altruism as more of an obligation. Yet personally I believe EA is more effective, and does more good, by encompassing a more diverse array of framings than obligation alone. I am glad EA has evolved in that direction, and so I think it's fitting this definition of EA reflects that.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T01:52:18.485Z · score: 1 (4 votes) · EA · GW

I've adopted an exclusion criterion for entryists into EA whom EA, as a community, by definition, would see as bad actors, e.g., white supremacists. One set of philosophical debates within EA, and with other communities, is how far, and how fast, the circle of moral concern should expand. That common denominator seems to imply a baseline agreement across all of EA that we would be opposed to people who seek to rapidly and dramatically shrink the circle of moral concern of the current human generation. So, to the extent someone:

1. shrinks the circle of moral concern;

2. does so to a great degree/magnitude;

3. does so very swiftly;

EA as a community should be wary of uncritically tolerating them as members of the community.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T01:37:40.029Z · score: 5 (4 votes) · EA · GW
I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.

People have talked about 'splitting up' EA in the past to streamline things, while other people worry about how that might needlessly balkanize the community. My own past observation of attempts to 'split up' EA into specialized compartments is that, more than being good or bad, they don't have much consequence at all. So, I wouldn't recommend EAs make another uncritical try at doing so, if for no other reason than that it strikes me as a waste of time and effort.

As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.

The heuristic I use to think about this is to leave the management of the relationship between the EA community and "Group X" to members of the EA community who are part of Group X. That heuristic could break down in some places, but it seems to have worked okay so far for different industry groups. For EA to think of 'academia' as an industry like 'the software industry' is probably not the most accurate thing to do. I just think the heuristic fits because EAs in academia will, presumably, know how to navigate academia on behalf of EA better than the rest of us will.

I think what has worked best is for different kinds of academics in EA to lead the effort to build relationships with their respective specializations, within both the public and private sectors (there is also the non-profit sector, but that is something EA is basically built out of to begin with). To streamline this process, I've created different Facebook groups for networking and discussion among EAs in different profession/career streams, as part of an EA careers public resource sheet. It is a public resource, so please feel free to share and use it however you like.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-21T09:55:48.182Z · score: 4 (2 votes) · EA · GW

This is similar to how I describe effective altruism to those whom I introduce to the idea. I'm not in academia, and so I mostly introduce it to people who aren't intellectuals. However, I can trace some of the features of your more rigorous definition in the one I've been using lately: "'Effective altruism' is a community and movement focused on using science, evidence, and reason to try solving the world's biggest/most important problems." It's kind of clunky, and it's imperfect, but it's what I've replaced "to do the most good" with, which, stated that generically, presents the understandable problems you went over above.

Comment by evan_gaensbauer on GiveWell's Top Charities Are Increasingly Hard to Beat · 2019-07-14T18:29:45.415Z · score: 1 (1 votes) · EA · GW

This is a recent criticism of GiveWell that I didn't see responded to or accounted for in any clear way in the linked post. I haven't read the whole thing closely yet, but no section appears to go over the considerations raised in that post. If they were sound, these criticisms, incorporated into the analysis, might make GiveWell's top-recommended charities look more 'beatable'. I was wondering if I was missing something in the post, and whether Open Phil's analysis either accounts for or incorporates that possibility.

Comment by evan_gaensbauer on GiveWell's Top Charities Are Increasingly Hard to Beat · 2019-07-11T06:54:18.402Z · score: 1 (5 votes) · EA · GW

Do you know if these take into account criticisms of GiveWell's methodology for estimating the effectiveness of their recommended charities?

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-06-21T01:05:23.275Z · score: 1 (6 votes) · EA · GW

Thanks for the response, Aaron. Had I been aware this post would have received Frontpage status, I would not have made my above comment. I notice my above comment has many votes, but not a lot of karma, which means it was a controversial comment. Presumably, at least several people disagree with me.

1. I believe the launch of new EA-aligned organizations should be considered of interest to people who browse the Frontpage.

2. It's not clear to me that it's only people who are relatively new to EA who primarily browse the Frontpage instead of the Community page. While I'm aware the Frontpage is intended primarily for people relatively new to EA, it's not clear to me that the usage of the EA Forum is such that only newcomers to EA primarily browse the Frontpage. Ergo, it seems quite possible there are a lot of committed EA community members who are not casually interested in each update from every one of dozens of EA-aligned organizations. They may skip the 'Community' page, while there are nonetheless major updates like these that are more 'community-related' than 'general' EA content but deserve a place on the Frontpage, where people who do not browse the Community tab often, and who are also not newcomers to EA, will see them.

3. I understand why there would be some hesitance to move posts announcing the launch of new EA-aligned projects/organizations to the Frontpage. The problem is that there aren't really hard barriers preventing just anyone from declaring a new project/organization aimed at 'doing good' and gaming EA by paying lip service to EA principles and practices while, behind the scenes, the organization is not (intending or trying to be) as effective or altruistic as it claimed to be. One reason this problem intersects with moving posts to the Frontpage of the EA Forum is that promoting just any self-declared EA-aligned project/organization to a place of prominence in EA sends the signal, intentionally or not, that the project/org has received a kind of 'official EA stamp of approval'. Why I brought up Michael Plant's reputation is not because I thought anyone's reputation alone should dictate what assignment their posts receive on the EA Forum. I mentioned it because, on the chance Aaron or the administration of the EA Forum was on the fence about whether to promote this post to the Frontpage, I wanted to vouch for Michael Plant as an EA community member whose reputation for fidelity to EA principles and practices in the projects he is involved with is such that, on priors, I would expect the new project/org he is launching, and its announcement, to be something the EA Forum should be willing to put its confidence behind.

4. I agree that ideally the reputation of an individual EA community member should not impact what we think of the content of their EA Forum posts. I also agree that in practice we should aspire to live by this principle as much as possible. I just also believe it's realistic to acknowledge EA is a community of biased humans like any other, and so forms of social influence like individual reputation still impact how we behave. For example, if William MacAskill or Peter Singer were to announce the launch of a new EA-aligned project/org, then, not exclusively but largely based on their prior reputation, and barring their post to the EA Forum reading like patent nonsense, which is virtually guaranteed not to happen, I expect it would be promoted to the Frontpage. My goal in vouching for Michael Plant, while he isn't as well-known in EA as Profs. MacAskill or Singer, was to indicate I believe he deserves a similar level of credit in the EA community as a philosopher who practices EA with impeccable fidelity.

5. I also made my above comment while perceiving the norms for assigning posts to the 'Community' or 'Frontpage' sections to be ambiguous. For the purposes of what kinds of posts announcing the launch of a new EA-aligned project/org will be assigned to the Frontpage, I find the following from Aaron sufficient and satisfactory clarification of my prior concerns:

I think detailed posts that explain a specific approach to doing the most good make sense for this category, and this post does that while also happening to be about a new organization. Some but not all posts about new organizations are likely to be assigned Frontpage status.

6. Aaron dislikes my use of the word 'relegate' to describe the assignment of posts on the EA Forum to the Frontpage or the Community page, respectively. I used the word 'relegate' because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong. Since the EA Forum 2.0 is based on the same codebase as LW 2.0, and the same team that built LW 2.0 was also crucial in the development of the EA Forum 2.0, I was acting under the assumption the EA Forum admin team significantly borrowed admin norms from the LW 2.0 team from which they inherited administration of the EA Forum 2.0. In his above comment, Aaron has clarified the distinction between the 'Frontpage' and other tabs on the EA Forum is not the same as the distinction between the 'Frontpage' and other tabs on LW.

7. While the distinctions between the Frontpage and Community sections are intended to serve different purposes, and not as a measure of quality, because of the availability heuristic I worry one default outcome of 'Frontpage' posts being, well, on the frontpage of the EA Forum, and receiving more attention, is that they will be assumed to be of higher quality.

These are the reasons that motivated me to make my above comment. Some but not all of these concerns are entirely assuaged by Aaron's response. All my concerns specifically regarding EA Forum posts that are announcements for new orgs/projects are assuaged. Some of my concerns about ambiguity in which posts will be assigned to the Frontpage or Community tabs respectively remain. However, they hinge upon disputable facts of the matter that could be resolved by EA Forum usage statistics alone, specifically comparative usage stats between the Community and Frontpage tabs. I don't know if the EA Forum moderation team has access to that kind of data, but I believe access to such usage stats could greatly aid in resolving my concerns regarding how much traffic each tab, and its respective posts, receive.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-06-19T19:03:15.707Z · score: 4 (12 votes) · EA · GW

While updates from individual EA-aligned organizations are typically relegated to the 'Community' page on the EA Forum, I believe an exception should be made for the public announcement of the launch of a new EA-aligned organization, especially one that takes on a focus area that doesn't already have major professional representation in EA. I believe such announcements are of interest to people who browse the EA Forum, including newcomers to the community, and are not what I would call just a 'niche' interest in EA. Also, specifically in the case of Michael D. Plant, I believe he is someone whose reputation in EA precedes him such that we should credit the announcement of this project launch as being of significant interest to the EA community, and as one of the things coming out of EA that are of interest to the broader public.

Comment by evan_gaensbauer on Public Spreadsheet of Effective Altruism Resources by Career Type · 2019-06-08T21:15:41.817Z · score: 2 (1 votes) · EA · GW

It isn't meant to mean software engineering, but all engineering. Unfortunately, aside from the FB group I made, I'm not aware of any other EA materials and resources for engineers beyond those specifically for software engineering.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-06-05T17:33:34.743Z · score: 3 (2 votes) · EA · GW
I'm sure lots of lefties would not like how market-friendly EA tends to be

It's unclear to me how representative this is of either EA or leftists. Year over year, the EA survey has shown the vast majority of EA to be "left-of-centre", which includes a significant portion of the community whose politics might very well be described as 'far-left'. So while some leftists might take one EA-aligned organization, or a subset of the community, being market-friendly as representative of how market-friendly all of EA is, that's an unsound inference. Additionally, even among leftist movements in the U.S. to the left of the Democratic establishment, there is enough ideological diversity that I would say many of them appreciate markets enough not to be 'unfriendly' to them. Of course there are leftists who aren't friendly to markets, but I'm aware of a phenomenon of some factions on the Left claiming to speak on behalf of the whole Left, when in the vast majority of these cases there is no reason to conclude the bulk of the Left is hostile to markets. So, while 'a lot' of leftists may be hostile to markets, and 'a lot' of EA may be market-friendly, without being substantiated with more empirical evidence and logical qualification, those claims don't provide useful info we can meaningfully work with.

Current Affairs overall is fairly amenable to EA and has a large platform within the left. I don't think "they are a political movement that seeks attention and power" is a fair or complete characterization of the left. The people I know on the left genuinely believe that their preferred policies will improve people's lives (e.g. single payer, increase minimum wage, more worker coops, etc.).

I think you're misinterpreting. I never said that was a complete characterization, and fairness has nothing to do with it. Leftist movements are political movements, and I would say they're seeking attention and power like any and every other political movement. I'm on the Left as well, and the fact that I and other leftists genuinely believe our preferred policies will improve people's lives doesn't change the fact that acquiring political power, and the requisite public attention to gain that power, is necessary to achieve those goals. Publicly acknowledging this can be fraught, because such language can easily, often through motivated reasoning, be interpreted by leftists or their sympathizers as describing a political movement covetous of power for its own sake. If one is too sheepish to explain otherwise, and stand up for one's convictions, it's a problem. Yet it shouldn't be a problem. I've read articles by no less than Current Affairs' editor-in-chief Nathan Robinson arguing that talking about power is something all leftists need to do more of.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T19:28:53.284Z · score: 4 (2 votes) · EA · GW

Strongly upvoted. I don't have anything else to add right now, other than that I now understand why you're asking this question as you have, and that I agree it makes sense as a first step given the background assumptions you're coming in with.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T19:27:47.140Z · score: 2 (1 votes) · EA · GW

I think your suggestion of Good Ventures making more grants to the EA Funds would be a better alternative, though before that I'd like to be confident the kinks have been worked out of the EA Funds management system. I was speaking more generally, though: all kinds of generic structures that merely decentralized grantmaking in EA further would be better. That it could be almost any structure with that feature was my point. I'm aware there are reasons people might behave as though so much decision-making being concentrated in Open Phil is optimal. If you have knowledge that a significant portion of the EA community indeed sincerely believes the current structure for capital allocation, as concentrated as it is, is optimal, please let me know. I would act on that, as I would see it as a ludicrous and dangerous notion for all of EA, one I wouldn't think even Open Phil or Good Ventures would condone.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:43:43.356Z · score: 1 (2 votes) · EA · GW

Like I said in my above comment, asking interesting questions to avoid stating inconvenient if valuable opinions doesn't go far in EA. If you think so much centralization of decision-making in Open Phil, in the person of Holden Karnofsky, is suboptimal, and there are better alternatives, why not just say so?

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:42:12.031Z · score: 2 (1 votes) · EA · GW

I think it's unimportant. I would hope everyone is already aware we've arrived where we're at for contingent reasons. I think it's more than plausible we could have an alternative structure for capital allocation than the one we have now. I think this first step should have been combined with the next couple of steps into a single first step.

Michael Dickens took the opposite route and said Open Phil should prioritize wild animal welfare. I also remember last year there were lots of people just asking questions about whether the EA Funds should be managed differently, and nothing happened; then I made a statement rather than a question, and the EA Funds changed a lot.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:26:58.725Z · score: 1 (2 votes) · EA · GW

Right, but if all he is doing is signing off, then you're attributing to him only the final part of the decision, and treating that as if it's the whole decision.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:26:15.169Z · score: 2 (1 votes) · EA · GW

Right, I guess I was asking why you're exploring it.

I don't think we will get a better structure through the route you're going, which is just asking questions about Open Phil. I figure at the least one would try figuring out what structure one considers best, and then explain why Good Ventures should switch to that structure.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:10:59.163Z · score: 2 (1 votes) · EA · GW

Why are you asking this question? I'm asking because it seems more like an academic exercise in what would be a better capital allocation structure if one were to be had, when in practice it doesn't seem like we will get there.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:09:38.706Z · score: 13 (8 votes) · EA · GW
Rough estimate: if ~60% of Open Phil grantmaking decisioning is attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.
It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).

This estimate seems to have a lot of problems. Attributing so much credit to Holden in that way not only treats him, in practice, as too unique an actor without substantiation, if not playing fast and loose in principle with how causation works; it also looks like a wild overestimate from nowhere. This is a pretty jarring turn in what up until that point I had been reading as a reasonable question post.
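
For reference, here is the arithmetic the quoted figures imply (a back-of-the-envelope reconstruction; the ~78.7% share and ~$333.5M total below are implied by the quoted numbers, not stated anywhere in the quote):

0.472 / 0.60 ≈ 0.787 → Open Phil's implied share of all 2017 EA capital allocation
$157.4M / 0.472 ≈ $333.5M → implied total 2017 EA capital allocation

Neither implied figure is substantiated in the quote, which is part of why the 60% attribution to Holden reads as coming from nowhere.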

Comment by evan_gaensbauer on Is EA Growing? EA Growth Metrics for 2018 · 2019-06-03T06:06:38.347Z · score: 15 (9 votes) · EA · GW

When I asked about what has caused EA movement growth to slow down, people answered that it seemed likeliest EA made the choice to focus on fidelity instead of rapid growth. That is a vague response. What I took it to mean is:

  • EA as a community, led by EA-aligned organizations, chose to switch from prioritizing rapid growth to fidelity.
  • That this was a choice means that, presumably, EA could have continued to rapidly grow.
  • No limiting factor has been identified that is outside the control of the EA community regarding its growth.
  • EA could make the choice to grow once again.

Given this, nobody has identified a reason why EA can't just grow again if we so choose as a community by redirecting resources at our disposal. Since the outcome is under our control and subject to change, I think it's unwarranted to characterize the future as 'bleak'.

Comment by evan_gaensbauer on On the margin, should EA focus on outreach or retention? · 2019-06-02T23:45:34.992Z · score: 9 (5 votes) · EA · GW

I think we should focus on retention. When I asked what has caused EA movement growth to slow down, the consensus answer in the comments was that EA chose to focus on fidelity rather than movement growth.

I choose to interpret this as the EA community not knowing how to make good use of the same high growth rates with fidelity to our goals, since several years ago, when the community was much smaller, pursuing our goals with fidelity entailed growing the community large enough to have a chance of achieving them. So, it's not an either/or scenario. Suffice to say growth has been necessary to EA in the past, but we're at a stage where:

  • we have some resources in excess of what we are making use of.
  • we may not know how to make use of greater growth.
  • there are multiple other factors to progress on EA's goals more limiting than growth.

So, EA is currently at a point where growth doesn't appear as crucial. I imagine in the future, as EA solves the factors limiting our rate of progress on our goals that are unrelated to growth, success will free up more opportunity to make good use of growth. The growth rate of EA is slowing. It may plateau or decline in the future. It doesn't appear to be a problem right now.

Meanwhile, there was a relatively popular article a couple months ago about how EA doesn't focus enough on climate change, and there were many comments from people who would otherwise be much more involved in EA but are put off by the apparent neglect of climate change. As a professional community, if we keep demanding high talent but keep lacking the capacity to match talented people to jobs, then lots of the people we initially attracted mostly for the jobs will exit when there are none.

So, in the last few months, there are multiple examples of how retention failure may pose a serious problem to EA in the future. Meanwhile, a lack of growth doesn't pose a current serious threat to EA. EA as a community seems to understand our own growth much better than we understand retention, since we haven't studied retention as much. I've been impressed by how much understanding of the movement's own growth was demonstrated in the answers I've gotten. So, I'm confident that if EA thought it needed to sustain high growth rates once again, we could rise to such a pressing challenge. I'm not confident we know how to solve retention problems.

To solve retention problems, EA would need to learn what the potential sources of retention problems are. Were EA to solve such problems, we would solve not only the problems causing existing effective altruists to leave the movement, but also the problems newcomers see in EA that repel them. So, focusing on retention solves problems in the present that will help with movement growth in the future, if ever EA tries to grow at a faster rate again.

Finally, I'd say the third option of 'upskilling' Aaron suggested in his comment isn't totally mutually exclusive with retention either, since I think increasing opportunities for upskilling, especially making them more inclusive and widely available, would do a lot for retention in EA as well.

Comment by evan_gaensbauer on What caused EA movement growth to slow down? · 2019-06-02T23:17:41.119Z · score: 2 (1 votes) · EA · GW

Yeah, that looks interesting. Thanks for letting me know. I'll check it out.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-01T18:21:56.249Z · score: 12 (6 votes) · EA · GW

My article criticizing the EA Funds last year was both more cogent than Ben's post here and the recipient of a much greater number of upvotes. I do in fact think this post is receiving downvotes because of its factual errors. Yet neither is this entirely separate from the issue of people downvoting the post simply because they don't like it as a criticism of EA: that people don't like the post is confounded by the fact that the reason they don't like it could be that they think it's very erroneous.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T13:20:18.992Z · score: 5 (3 votes) · EA · GW

Okay, so, what has happened is:

  • khorton said she is a centrist, who for the sake of argument, was putting on her 'leftist' hat.
  • By "leftist", I thought she meant she was being the devil's advocate for anti-capitalism, when she was actually being an advocate for progressive/left-liberal reform.
  • She assumed that you, like me, assumed she was playing the role of devil's advocate for anti-capitalism, when you did not, i.e., you did not read her as anti-capitalist.
  • While khorton's original comment didn't mention reform and regulation of global markets, she made clear in her next response to me that is what she intended as the subject of her comment even though she didn't make it explicit.
  • I got mixed up, and as the subject changed, I forgot market reform was never even implied by khorton's original comment.

While I disagreed with how rude your original response to her was, I did agree with your point. Now that you've edited it, and this mix-up is sorted, I've upvoted your comment, as I agree with you.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T13:10:06.711Z · score: 5 (5 votes) · EA · GW

Point of correction: khorton is a 'she', not a 'he'.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T03:20:43.599Z · score: 4 (2 votes) · EA · GW

He is claiming that the following idea is ridiculous: that the cost of saving a life being $100k instead of $5k is a sufficient condition to logically conclude one is not obliged to save a life, given the assumptions that one would otherwise be obliged to save a life, and that one believes in obligations in the first place.

Comment by evan_gaensbauer on My state allows for a 1 member nonprofit board and I like that idea in order to keep my vision. However I want to have a "board of directors", but have them as a body to give me advice, as opposed to the traditional governing board? How can I actually apply this and what non misleading title can I give to the "board of directors"? · 2019-05-30T02:38:00.715Z · score: 0 (2 votes) · EA · GW

What state are you in? Based on what I know, being able to set up a non-profit with a one-member board of directors is not common. So, I would be curious to know what state you are living in.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T01:51:35.370Z · score: 2 (3 votes) · EA · GW

Right, I wasn't assuming communism on your part. I was just sharing thoughts of my own that I thought better represented the frustration kbog was trying to express. I did this because I thought he was making a valid point with the comment of his you downvoted: the kind of question you're asking would lead EA to prioritize a route for public dialogue that it doesn't actually make sense to prioritize, since it is one you posed from a leftist viewpoint as a thought exercise, even though you clarified you yourself are a centrist, and as a criticism of EA it is unsound.

My above comment was also addressing the premise that you thought the historical origins of wealth, as seen from an anti-capitalist perspective, constitute a very relevant criticism of EA. I of course assumed by 'leftist' you meant 'anti-capitalist', which you did not. So, my last comment doesn't apply. I was aware that you yourself were just wearing a leftist hat for the sake of argument, and I did not assume communism on your part.

Of course, regarding your point about questions of reform of contemporary global markets, I agree with you, and disagree with kbog, that that is a legitimate criticism of EA the community should think more about.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T21:13:42.369Z · score: 2 (1 votes) · EA · GW

Yeah, I just meant it's a funny coincidence. I don't think there is any issue citing him here.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T21:03:23.709Z · score: 2 (1 votes) · EA · GW

While I didn't upvote kbog's comment, given its rudeness, and I agree with you he didn't need to be that rude, I didn't downvote it either, because I think he is reaching for a valid point. While I express it differently, I share kbog's frustration with how effective altruists sometimes say we should extend so much charity to anti-capitalist critics of EA when, even if they may not be a majority, there are lots of kinds of anti-capitalism it seems EA should not actually want to reconcile with. I expressed that point without the rudeness of kbog's comment in another comment reply I'll excerpt here:

All variety of leftist ideologies from history are on the upswing today, as politics becomes more polarized and more people shift leftward (and, of course, rightward as well) away from the centre. This has impelled some radical anti-capitalists to spread, in the last few years, the propaganda meme "liberals get the bullet too".
If this was inspired by the ideology of, say, Leninism, then even if EA shouldn't moralize in asserting ourselves as "better", this would be sufficient grounds for EA to deny a positive association with them, even if the line is meant only rhetorically or symbolically. This would be justified even if we would at the same time build bridges to other leftist movements that have shown themselves more conducive to cooperation with EA, such as those Marxists who would be willing to seek common ground with EA. Of course, as there are many ideologies on the Left, including whole families of ideologies totally incompatible with EA, I believe we must be clear about how we're going to toe this line. Like you yourself said, this isn't unique to leftists. With regards to the Right, EA could build bridges to conservatism, while nonetheless totally rejecting any notion we might ally ourselves with the family of rightist ideologies we could call "supremacism".
[...]
If EA is to become part of humanity's fabric of moral common sense, we must recognize there are ideologies that don't operate under that fabric in the perpetuation of their goals, and go against the grain of both EA and the fabric of common sense. For EA to be worth anything, we must on principle be willing to engage against those ideologies. Of course, EA can and should be willing to ally itself with those leftists who'd seek to expand the circle of moral concern against those who would seek to shrink it to get ahead, no matter what their original ideals were.
This is with regards to political ideologies where either the disagreement over fundamental values, or at least over the basic facts that inform our moral judgements, is irreconcilable. Yet there will also be political movements with which EA can reconcile, as we would share the same fundamental values, but which EA will nonetheless be responsible for criticizing or challenging, on the grounds that those movements are, in practice, using means or pursuing ends that put them in opposition to those of EA.
[...]
I believe our willingness to live up to that responsibility is one of the few things that distinguishes EA at all from any other community predicated on doing good.
Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T20:49:46.315Z · score: 2 (1 votes) · EA · GW

It's kind of funny to me that the post on the DSA you've just linked is written by the same author as the Current Affairs article I linked on your post about socialism and EA the other day, which you ripped apart.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T17:24:19.743Z · score: 2 (1 votes) · EA · GW

Strongly upvoted. This highlights the difference between criticisms that EA doesn't focus enough on systemic change which come from a particularly left-wing perspective, and others which are based on empirical or ethical disagreements as opposed to political ones. This is a distinction I should have made clear in the OP, and I didn't. Thanks for the clarification.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T17:16:23.443Z · score: 2 (1 votes) · EA · GW

Strongly upvoted. Thanks for the detailed and thoughtful response.

Utilitarianism / widening of the moral circle is very similar to ordinary lefty egalitarianism. Don't lose sight of that just because some branches of the left don't think some particular EA method are the best possible way to save the world, and cite Failure to Challenge the System as the reason.

At least one leftist critique of EA has made the case that, while leftist political movements and EA can find common ground in the ideals shared between egalitarianism and utilitarianism, through an egalitarian lens the framing of altruism should be seen by all leftists as problematic. From "5 Problems with Effective Altruism", by Connor Woodman, published in Novara Media in June 2016:

4. Solidarity is a better moral framework than altruism.
‘Aid’ has paternalistic undertones. Instead, we should be looking to support and join in transnational solidarity with movements in the west and Global South: indigenous peoples, landless peasants, precarious garment workers. As Monique Deveaux puts it: “By failing to see the poor as actual or prospective agents of justice [EA’s approaches] risk ignoring the root political causes of, and best remedies for, entrenched poverty.”
The best way to show solidarity is to strike at the heart of global inequality in our own land. There are an array of solidarity groups that seek to change western foreign policy and support modern-day national liberation movements. There are also various western NGOs which seek to injure the production of structural injustice in the west.
Words like solidarity – along with class, imperialism and exploitation – are scrubbed from the EA lexicon. Perhaps they should relaunch as Effective Solidarity.

"The System" is the power structures that emerged thereby. It includes concepts of private property, slavery, marriage (which was generally a form of slavery), social control of reproduction, social control of sex, caste, class, racism, etc - all mechanisms ultimately meant to justify the power held by the powerful. [...]
Despite resource scarcity declining due to tech advance, the bulk of human societies are still operating off those neolithic power hierarchies, and the attending harmful structures and concepts are still in place.

I'm aware of this. Some leftist critics of EA come from an essentially Marxist perspective (i.e., "The history of all hitherto existing society is the history of class struggles."). While not all contemporary Marxists do, some leftists take this to a logical conclusion known as class reductionism: the idea that other apparent kinds of oppression, like racism and sexism, are entirely functions of classism, and so this is the only kind of anti-oppression politics leftists need or should prioritize (in spite of Urban Dictionary's contention, I'm aware this is a position in fact advanced by some people, although it's true the accusation as often levelled is unsound). Obviously, there are disagreements over class reductionism within leftism that have nothing to do with EA.

It's ambiguous whether EA's leftist critics are primarily talking about 'systemic change' in terms of economic class, or through an intersectional lens that sees race, sex, sexuality, and other dimensions of oppression as just as important as economics in determining what should change about EA's approaches. So, my original question could have been formulated as: to what extent is leftist criticism of EA class reductionist, or intersectionalist?

2) Corporate environments maximize profit. Effective altruists maximize impact. As both these things are ultimately geared towards maximizing something that ultimately boils down to a number, effective altruist language often sounds an awful lot like corporate language, and people who "succeed" in effective altruism look and sound an awful lot like people who "succeed" in corporate environments. This breeds a sense of distrust. There's a long history within leftism of groups of people "selling out" - claiming to try to change the system from inside, but then turning their backs on the powerless once they got power. To some degree, this similarity may create distasteful perceptions of a person's "value" within effective altruism that is analogous to the distasteful perception of a person's "value" in a capitalist society. (E.g. capitalist society treats people who are good at earning money as sort of morally superior. Changing "earning money" to "causing impact" can cause similarly wrong thinking)

This is a major source of the implicit distrust some leftists would have upon being introduced to EA, one that has been just below the surface of my thinking on this subject, but I've never seen anyone in EA articulate this point so well.

3) EAs to some extent come off as viewing the global poor as "people to help" rather than "people to empower". The effective altruist themself is viewed as the hero and agent of change, not the people they are helping. There is not that much discussion of the people we are helping as agents of change who might play an important part in their own liberation.

This is essentially the criticism of EA I quoted above from the Novara Media article. You're right this is a criticism of EA that isn't inherently leftist, but it is one leftists tend to make most often. It's one I agree with. I look forward to your writing on it. Please feel free to reach out to me for help in writing it, or for proofreading, editing, or feedback on the draft.

I would strongly recommend not creating a false dichotomy between "EA" and "Leftists", and setting up these things as somehow opposed or at odds.

I'm aware of this, especially because criticisms of EA by leftists outside EA are confounded by the fact that most of the EA community already is leftist, and critics often lack awareness of this. I was just utilizing this frame because it's the one the debate has historically been situated in, given how leftist critics of EA have imposed this dichotomy on the conversation between themselves and the EA community.

Well, I think this is an unhelpful tone. It is, again, setting up EA as something different and better than leftism, rather than a way for us to fulfill our values - even if our values aren't all exactly the same as each others. This isn't particular to leftism.

I should have been more specific above. If I didn't think there was room for cooperation or collaboration between EA and any leftist political movements, I would have said 'most' or 'all' of them are ineffective, or countereffective, by the lights of the overlapping principles of both EA and leftist politics. However, while it may not be most, there are at least some leftist factions I do think EA is qualified in asserting we are better than at providing people with opportunities to pursue their own autonomy and liberation. EA should be, and thus rightly is, open to an earnest and ongoing dialogue with leftist political movements, even some of the most radical among them. Nobody has to take it from me. No less than William MacAskill has said, in a closing address at EAG, that EA should be open-minded to diverse intellectual, political, and ideological perspectives, and thus should not assume something like Marxism is wrong on principle, in response to what he presumably saw as an insufficient degree of open-mindedness in the very movement he co-founded. Yet EA can't take that to the conclusion of undermining its own principles.

All variety of leftist ideologies from history are on the upswing today, as politics becomes more polarized and more people shift leftward (and, of course, rightward as well) away from the centre. This has impelled some radical anti-capitalists to spread, in the last few years, the propaganda meme "liberals get the bullet too". If this was inspired by the ideology of, say, Leninism, then even if EA shouldn't moralize in asserting ourselves as "better", this would be sufficient grounds for EA to deny a positive association with them, even if the line is meant only rhetorically or symbolically. This would be justified even if we would at the same time build bridges to other leftist movements that have shown themselves more conducive to cooperation with EA, such as those Marxists who would be willing to seek common ground with EA. Of course, as there are many ideologies on the Left, including whole families of ideologies totally incompatible with EA, I believe we must be clear about this. Like you yourself said, this isn't unique to leftists. With regards to the Right, EA could build bridges to conservatism, while nonetheless totally rejecting any notion we might ally ourselves with the family of rightist ideologies we could call "supremacism".

The goal for EA is not to engage against other ideologies, the goal (to the extent that EA ideas are good and true, which obviously they may not all be) is to become part of the fabric of common sense by which other ideologies operate and try to perpetuate their goals.

To reframe my last point in the context of your words, if EA is to become part of humanity's fabric of moral common sense, we must recognize there are ideologies that don't operate under that fabric in the perpetuation of their goals, and go against the grain of both EA and the fabric of common sense. For EA to be worth anything, we must on principle be willing to engage against those ideologies. Of course, EA can and should be willing to ally itself with those leftists who'd seek to expand the circle of moral concern against those who would seek to shrink it to get ahead, no matter what their original ideals were.

This is with regards to political ideologies where either the disagreement over fundamental values, or at least over the basic facts that inform our moral judgements, is irreconcilable. Yet there will also be political movements with which EA can reconcile, as we would share the same fundamental values, but which EA will nonetheless be responsible for criticizing or challenging, on the grounds that those movements are, in practice, using means or pursuing ends that put them in opposition to those of EA. Current Affairs is a socialist/radical leftist magazine that I believe represents the kinds of leftist movements with which EA can find common ground. Yet when I was seeking the same kind of conciliation you're seeking in this thread, in another discussion of socialism and effective altruism, kbog impressed upon me the importance of EA's willingness to push back against policy prescriptions that would fail to increase well-being as much as could be done, simply because of a failure of effectiveness, if not of altruistic intent. This conclusion is an unfortunate one even to me, as I believe most of EA, like myself, wouldn't want to have to engage others in this way, but it may be necessary. I believe our willingness to live up to that responsibility is one of the few things that distinguishes EA at all from any other community predicated on doing good.

What's more, I'm pretty sure that the widespread acceptance of the basic building block concepts of effective altruism (such as, all people are equally important) are largely due to these leftist social movements. I don't think it's a stretch to say that EA itself is at least in part among the products of these social movements.

Agreed.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T12:57:57.368Z · score: 2 (1 votes) · EA · GW

I'm aware there are at least some critics of EA who by "systemic change" do indeed bother with the critical theory stuff, in addition to capitalism, as they see the other structures of oppression they're trying to point at as part and parcel with capitalism. I recall an article or two like this, but I don't specifically remember which ones right now. I will try finding them, and when I do, I'll respond with another comment.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T12:54:30.708Z · score: 4 (2 votes) · EA · GW
I'm in favor of more work on figuring out policy strategy from an effectiveness perspective, but I don't know that "EA" is responsible for that work

I agree. It's my habit, for the sake of argument in casual and generic discussions in EA, to treat "EA" as a unitary blob of resources. I agree that if we're seriously trying to get policy-specific, it doesn't make sense to talk about EA as a whole unit, but rather about the individual actions of particular actors in and around the EA ecosystem.

Are there specific actors within EA who ought to be doing more, but aren't?

I haven't thought about this enough to name specific organizations. There appear to be blocs within EA who support policy reform in particular areas, which may or may not be shared with the Open Philanthropy Project. However, unlike Open Phil, the most a bloc of supporters for a particular kind of policy reform in EA appears to organize itself into is an informal association that is all talk, no action. When I think of EA-connected policy work, the following comes to mind:

  • Open Phil, through their grants.
  • The NGOs Open Phil grants to, which usually either predate EA, or are largely independent of the community aside from their relationship with Open Phil.
  • A number of academic/research policy institutes focused on global coordination, AI alignment, and other x-risks, launched in tandem with some of the world's leading research universities, such as UC Berkeley, Harvard, Oxford, and Cambridge.

In other words, these are all orgs that probably would have gotten off the ground, and could achieve their goals, without the support of EA, except for Open Phil as an EA-aligned org. And by "Open Phil", it's more like just Good Ventures and a handful of program officers. So if we subtract their efforts from the rest of the policy work the EA community can take credit for, there isn't much left.

Combined, the rest of the EA community is several thousand people, with a decade of experience across dozens of independently launched NGOs/NPOs and tens of millions of dollars at their disposal, who, for all we talk about public policy, haven't done much about it. I believe some EA associations in Europe have done some policy consulting; yet, for example, in the United States, the most significant policy work I'm aware of ever having been tried in EA independent of Open Phil was EA Policy Analytics, which didn't go very far.

What would this due diligence look like? Is there a certain thing you wish someone had created that no one has? Have people created the kinds of things you want, but in a low-quality fashion?

I'd like to see more comprehensive responses, both to individual critiques of EA over its history and to the body of criticism of EA in general. I think the series of more informal blog posts different EAs have written in response to such critiques over the years have been okay, but they haven't really moved the dial. My impression is that EA and our leftist critics have reached a standstill, an impasse, but this is unnecessary. A systematic review of leftist criticism of EA is something I'm working on myself, though finishing it in the near future isn't at the top of my priority list.

Also, I expect GiveWell's upcoming policy change work (and ongoing work by orgs like J-PAL that have GiveWell funding) to generate a lot of systematic change per dollar spent. Have you looked at J-PAL's Innovation in Government Initiative at all?

I haven't. I'll check them out. I wasn't aware of these developments, so thanks for bringing them to my attention.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T12:28:28.053Z · score: 3 (2 votes) · EA · GW

I expect EA hasn't publicly acknowledged this as much as we maybe should have in the past because:

1. Even if we were to assume the worst, and that all the gains of the Western world EA is giving away were originally ill-gotten, it wouldn't change how we think that wealth is best redistributed to improve the world, including to do justice by the very people from whom it would allegedly have been expropriated;

2. Acknowledging this can give opportunistic critics of EA the chance to back EA into a corner and pillory us as too ignorant of issues of justice to accomplish any good;

3. Even if we did acknowledge this, it's unclear we would reach a better conclusion about what EA should do than the one we have now, since this is a question of the origins of wealth, a fundamental question of politics as hotly disputed in the world today as any, and not one I expect EA would be able to resolve to anyone's satisfaction.

This isn't to say EA shouldn't do better on this issue. It's just that, in my experience, the conditions under which people debate these questions in public, including with regard to EA, aren't set up to give EA a chance to learn, respond, update, improve, or change. I.e., most instances when this subject is broached, it is a political debate staged for rhetorical purposes by two sides EA is caught between, each of which exploits EA's receptiveness to criticism as a springboard to advance its own agenda.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-28T13:57:03.847Z · score: 4 (5 votes) · EA · GW

From my viewpoint, to the extent the systemic change criticism of EA is correct, EA should internalize this criticism, and should change socioeconomic systems more effectively than leftists ever expected of us, and perhaps better than leftist political movements themselves (lots of which don't appear to be active, or at least effective, in actually changing "the system" they criticize EA for neglecting). If that's the case, I think what we've been doing is mostly lip service, or bending the activities we currently support out of shape to look like the systemic change leftists criticize us for not engaging in enough. It's plausible something like EA donations to GiveWell-recommended charities like GiveDirectly or the AMF, taken to their logical conclusion, leads to the best systemic change we could reliably seek to enact. Yet I don't think we've done our due diligence to check that is in fact the case, or whether there is a kind of effective systemic socio-political/socio-economic change we should participate in.

While EA is quickly moving toward policy work, a comprehensive and legible slate of the global EA efforts in this regard doesn't exist. I could tell you most of what's going on in EA on the fronts of global poverty alleviation, mental health, animal welfare, AI alignment, and other x-risks. I'm not confident I could map out even a minority of the policy work going on in EA if someone asked me.

To the extent the systemic change criticism of EA is incorrect, as EA enters the policy arena more and more, we will once again come into friction with leftist (and other) political movements, as EA has since its inception. The difference this time is that we would be asserting the systemic change we're pursuing is more effective (and/or in other ways better) than the systemic change other movements are engaging in. And if that's the case, I think EA needs to engage the communities of our critics just as critically as they have engaged us. This is something I've begun working on myself.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-28T10:06:36.596Z · score: 1 (3 votes) · EA · GW

Yeah, but I already acknowledged what they're usually doing. I want to know what the unusual leftist critics of EA think.

Comment by evan_gaensbauer on Structure EA organizations as WSDNs? · 2019-05-27T04:34:16.800Z · score: 6 (3 votes) · EA · GW

It's been my experience that, while people in EA-aligned orgs usually share a common goal, disagreements about how things should be run, especially between a board of directors and an org's paid employees, are generally enough of a problem to be a point in favour of structuring NPOs as worker cooperatives, to reduce conflict between different vested interests. I believe this would be true of the non-profit sector in general, and not limited to EA. I'm not convinced EA tends to be dramatically better or worse on this front than other movements professionally based in the non-profit sector, so I wouldn't put much stock in the testimony of any one individual on this subject.

Comment by evan_gaensbauer on EA Still Needs an Updated and Representative Introductory Guidebook · 2019-05-24T06:47:42.653Z · score: 4 (2 votes) · EA · GW

Thanks for the feedback. That sounds reasonable. I wrote the OP because this was a resolvable issue that lots of people in EA disputed over, and that appeared to be left unfinished. There are other introductory guidebooks to effective altruism for different causes, etc. So the fact that there isn't a general guidebook right now that satisfies the different relevant parties in EA doesn't seem like a huge problem. Michael Chen pointed out multiple major problems with one article in the current EA Handbook 2.0. They're significant mistakes that need to be fixed for the article to hold up. I figure for the EA Handbook to deliver its message with integrity, it has to do that for all of its articles. Since most of the articles were initially written as blog posts, I expect there are other holes in each that we could point out with hindsight. It's just that the articles in the EA Handbook 2.0 may not have been written as professionally as the published books or scholarly articles by effective altruists, a quality we should ostensibly aspire to if an introductory book is about EA putting its best foot forward to people new to the movement.

Siebe Rozendal suggested an updated version of Doing Good Better. I initially thought this would be too much work, but it now seems like it might be less work to update DGB than to update the EA Handbook, since the latter poses multiple difficulties. I also thought updating DGB would require Will to do most or all of the work himself, but The Life You Can Save (the organization) has worked with Singer to update the book of the same name. Jon Behar, who works for The Life You Can Save, explains it here. It's a new edition 8 years later, so there must have been a lot to change. So CEA could do something similar, where Will works with them to update DGB. CEA could consult with TLYCS (the organization), or work with them in some capacity, to replicate the process they've used with Singer to update TLYCS (the book).

I honestly think it might be more tractable and more effective to update DGB than the EA Handbook 2.0. If that's the case, given that DGB is also written as an intro to EA, and is more popular, I imagine some EAs would be willing to donate time and/or money to see an updated version of DGB happen.

Is that something you think Will and/or the CEA would consider?

Comment by evan_gaensbauer on Cost-Effectiveness of RC Forward · 2019-05-21T02:30:39.355Z · score: 18 (8 votes) · EA · GW

I have for a long time thought it would be valuable for RC's approach to be replicated in many other countries. So I am glad there is now an article like this with which to show effective altruists in other countries what the value of tax-deductible donation portals like RC really is. There is a good chance I will cite this article in the future if I discuss the topic again on the EA Forum or elsewhere.

Comment by evan_gaensbauer on Why do EA events attract more men than women? Focus group data · 2019-05-19T01:09:00.348Z · score: 9 (6 votes) · EA · GW

While there hasn't been a survey or focus group, I am casually aware that common problems for EA event attendance in Vancouver, Canada, are travel time/distance; busyness and opportunity costs; and a feeling of not fitting in with high-context topics while finding intro EA topics/conversations too repetitive.

Comment by evan_gaensbauer on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T19:22:20.846Z · score: 2 (1 votes) · EA · GW

Yesterday I read an article from Current Affairs called How The Left Should Think About Trade. It wasn't terrible. It made some good points. After reading this review and the CA article, one possibility that comes to mind is establishing worker cooperatives in both the developed and developing worlds, or some similar way for workers in countries of vastly different economic strata to still benefit from trade agreements. Did you come across anything in your research that went over that consideration?

Comment by evan_gaensbauer on Update on the Vancouver Effective Altruism Community · 2019-05-17T18:57:14.396Z · score: 2 (1 votes) · EA · GW

Thanks, fixed.

Comment by evan_gaensbauer on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T08:46:04.892Z · score: 6 (4 votes) · EA · GW

I have been writing my own overview of the surfaces of disagreement between EA and anticapitalist/leftist politics, and this review covers a lot of what leftist critics of EA refer to as economic "systemic change" that I wouldn't have known how to research myself. So, thanks for writing this. I will probably cite you when I publish my own article, and I'll let you know about it.

Comment by evan_gaensbauer on How does one live/do community as an Effective Altruist? · 2019-05-17T08:27:03.914Z · score: 7 (2 votes) · EA · GW
Seeing a core group of people often allows you to follow their lives

A comment on a recent article I posted to the EA Forum, on neglected goals for local EA groups, was that one thing I and others don't emphasize enough is that local EA communities give people a connection to EA as a community, which keeps them invested and engaged with EA, as opposed to communities serving merely utilitarian purposes. I agree, so I am glad you raise this similar point.

Group singing/eating is really fun - are there any studies on this?

I am sure there are studies on this you could find on Google Scholar or some other search engine for research publications, though I have no idea what the best way would be to identify the most relevant research on these subjects. In addition to Michelle's sources, the rationality and effective altruism communities in Berkeley, California have engaged in these kinds of rituals for a while now. So you may be able to learn more about how that has gone for community-building if you run across some of them. Raemon might be the best person on the EA Forum to ask about this.

Support around illness, births, marriages, deaths is great - my friends all used to make meals for each other at these times. Not having to worry about food at a stressful time is a big plus.

Different local EA communities around the world vary widely in how big or close-knit they are, so there isn't a standard way in EA communities to address these issues. However, there is a confidential Facebook group called EA Peer Support for virtual support from the EA community during these kinds of personal times. Anyone can contact Julia Wise to be added to the group if they have a Facebook account.

Knowing people of different generations helps loneliness, particularly in the old and young.

The few people of an older generation I've known who have participated in EA appear to have mostly enjoyed and benefited from it. Since EA historically has been a movement with such a high concentration of younger people, it's difficult to say if or how EA will diversify along generational lines in the future.

Is there any good EA wedding liturgy? Liturgy ("We are gathered here today..."), if written well, is a great way to be clear about what you believe and say it in a beautiful, poetic way. I make no defence of some concepts in the wedding service, but it's a great service.
Are there suggestions for ways a wedding could convey EA concepts through form?

I know some folks in the Berkeley rationality and effective altruism communities have material for wedding services that contains EA-like themes or messages. If you ask Raemon, he should be able to point you in the direction of some of that as well.

Comment by evan_gaensbauer on Is this a valid argument against clean meat? · 2019-05-17T08:13:15.690Z · score: 3 (2 votes) · EA · GW

I am not sure if there has been a study or survey on this subject so far, but it seems like the kind of thing effective altruists would create one to assess.

People I know who are concerned about factory farming, but who still eat meat and are aware of clean meat, are looking forward to progress in clean meat, and intend to buy it as it becomes able to replace demand for meat from animals. In the meantime, they intend to still eat meat from animals, to varying degrees. I don't think most of them are using clean meat's imminent arrival as an excuse not to worry about climate change, or to keep eating meat now, since they were probably going to keep eating meat in the present either way. They have just expressed a willingness to switch to clean meat, in part out of significant concern with factory farming, when clean meat becomes as convenient as animal meat.