Posts

Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization? 2020-06-30T19:43:52.432Z · score: 13 (6 votes)
Expert Communities and Public Revolt 2020-03-28T19:00:54.616Z · score: 2 (4 votes)
Free E-Book: Social Movements: An Introduction, 2nd Edition 2020-03-21T23:50:36.520Z · score: 11 (4 votes)
AMA: "The Oxford Handbook of Social Movements" 2020-03-18T03:34:20.452Z · score: 26 (14 votes)
Public Spreadsheet of Effective Altruism Resources by Career Type 2019-06-03T18:43:06.199Z · score: 36 (16 votes)
What exactly is the system EA's critics are seeking to change? 2019-05-27T03:46:45.290Z · score: 12 (10 votes)
Update on the Vancouver Effective Altruism Community 2019-05-17T06:10:14.053Z · score: 25 (8 votes)
EA Still Needs an Updated and Representative Introductory Guidebook 2019-05-12T07:33:46.183Z · score: 34 (20 votes)
What caused EA movement growth to slow down? 2019-05-12T05:48:44.184Z · score: 14 (9 votes)
Does the status of 'co-founder of effective altruism' actually matter? 2019-05-12T04:34:32.667Z · score: 9 (10 votes)
Announcement: Join the EA Careers Advising Network! 2019-03-17T20:40:04.956Z · score: 30 (30 votes)
Neglected Goals for Local EA Groups 2019-03-02T02:17:12.624Z · score: 35 (19 votes)
Radicalism, Pragmatism, and Rationality 2019-03-01T08:18:22.136Z · score: 14 (9 votes)
Building Support for Wild Animal Suffering [Transcript] 2019-02-24T11:56:33.548Z · score: 14 (8 votes)
Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? 2019-02-24T00:12:58.780Z · score: 6 (1 votes)
How Can Each Cause Area in EA Become Well-Represented? 2019-02-22T21:24:08.377Z · score: 10 (7 votes)
What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? 2019-01-30T02:52:25.471Z · score: 23 (11 votes)
Effective Altruism Making Waves 2018-11-15T20:20:08.959Z · score: 7 (7 votes)
Reducing Wild Animal Suffering Ecosystem & Directory 2018-10-31T18:26:52.476Z · score: 12 (8 votes)
Reducing Wild Animal Suffering Literature Library: Original Research and Cause Prioritization 2018-10-15T20:28:10.896Z · score: 13 (10 votes)
Reducing Wild Animal Suffering Literature Library: Consciousness and Ecology 2018-10-15T20:24:57.674Z · score: 11 (8 votes)
The EA Community and Long-Term Future Funds Lack Transparency and Accountability 2018-07-23T00:39:10.742Z · score: 67 (63 votes)
Effective Altruism as Global Catastrophe Mitigation 2018-06-08T04:35:16.582Z · score: 7 (9 votes)
Remote Volunteering Opportunities in Effective Altruism 2018-05-13T07:43:10.705Z · score: 42 (32 votes)
Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations 2018-05-05T03:23:15.858Z · score: 18 (15 votes)
Wild Animal Welfare Project Discussion: A One-Year Strategic Review 2018-05-05T00:56:04.991Z · score: 8 (10 votes)
Ten Commandments for Aspiring Superforecasters 2018-04-25T05:07:39.734Z · score: 19 (12 votes)
Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply 2018-04-13T22:10:16.460Z · score: 11 (11 votes)
Lessons for Building Up a Cause 2018-02-10T08:25:53.644Z · score: 13 (15 votes)
Room For More Funding In AI Safety Is Highly Uncertain 2016-05-12T13:52:37.487Z · score: 6 (6 votes)
Effective Altruism Is Exploring Climate Change Action, and You Can Be Part of It 2016-04-22T16:39:30.688Z · score: 10 (10 votes)
Why You Should Visit Vancouver 2016-04-07T01:57:28.627Z · score: 9 (9 votes)
Effective Altruism, Environmentalism, and Climate Change: An Introduction 2016-03-10T11:49:45.914Z · score: 17 (17 votes)
Consider Applying to Organize an EAGx Event, And An Offer To Help Apply 2016-01-22T20:14:07.121Z · score: 5 (5 votes)
[LINK] Will MacAskill AMA on Reddit 2015-08-03T20:45:42.530Z · score: 3 (3 votes)
Effective Altruism Quotes 2015-08-01T13:49:23.484Z · score: 1 (1 votes)
2015 Summer Welcome Thread 2015-06-16T20:29:36.185Z · score: 2 (2 votes)
[Announcement] The Effective Altruism Course on Coursera is Now Open 2015-06-16T20:20:00.044Z · score: 4 (4 votes)
Don't Be Discouraged In Reaching Out: An Open Letter 2015-05-21T22:26:50.906Z · score: 5 (5 votes)
What Cause(s) Do You Support? And Why? 2015-03-22T00:13:37.886Z · score: 2 (2 votes)
Announcing the Effective Altruism Newsletter 2015-03-11T06:05:51.545Z · score: 10 (10 votes)
March Open Thread 2015-03-01T17:14:59.382Z · score: 1 (1 votes)
Does It Make Sense to Make Multi-Year Donation Commitments to One Organization? 2015-01-27T19:37:30.175Z · score: 2 (2 votes)
Learning From Less Wrong: Special Threads, and Making This Forum More Useful 2014-09-24T10:59:20.874Z · score: 6 (6 votes)

Comments

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-08-05T01:05:50.463Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-08-04T19:29:17.238Z · score: 2 (6 votes) · EA · GW

The value of n is so high that Peter wouldn't want to embarrass the rest of us with how smooth he is by disclosing that information. Yet I've got access to it! It's a well-kept secret that 8% of all historical growth of the EA movement is due to Peter bringing cute girls into the movement by telling every one of them he passes about the drowning-child thought experiment[citation needed].

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-08-04T19:22:51.864Z · score: 2 (1 votes) · EA · GW

Would you mind sharing a link to this startup 'Roam'? It sounds interesting, but I haven't heard of it. I'd look it up myself, but I doubt I'd find the right website just by searching the word "roam."

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-08-04T09:15:54.022Z · score: 2 (1 votes) · EA · GW

Summary: There are multiple reasons why, in my opinion, we in EA should not encourage intra-community dating beyond how it arises organically in the community. Yet that's not the same thing as not thinking about it. A modicum of public discussion about intra-community dating is probably not 'culty' compared to much of what the EA community already engages in regardless. One solution may be for those of us who are personal friends in EA to make a greater effort to support each other in our mutual pursuit of romantic partners amenable to an EA lifestyle, including outside the EA community.

 

I agree the EA community should not systematically think about us dating each other. By "systematically," I mean that I don't think the EA community ought to seek a programmatic way for us to date each other. There are multiple reasons I expect doing so would be a poor choice for the EA community. The concern we've discussed in this thread is that it could make EA look 'culty,' which I agree is a legitimate concern. One issue I've got with how the EA community tends to think about brand management and public relations, or whatever the social-movement equivalents of those concepts are, is that we tend to reflexively care about them only when they come up at random, as opposed to thinking about them systematically.

That's relevant because, relative to much more significant aspects of EA, whether we openly "think about dating each other" is not that 'culty.' There's the occasional op-ed in a semi-popular magazine, print and/or online, about how communities concerned about AI alignment as an existential risk amount to doomsday cults. Much of the population perceives veganism as a cult. I've met a lot of people over the years who have told me that the widespread adoption of common lifestyle changes among community members still gives EA 'culty' vibes. Meanwhile, plenty of cultures within global society publicly and systematically encourage dating within their cultures. It seems like doing this along lines of national or religious identity is more publicly acceptable than doing so along racial lines. And, like the form it would likely take in EA, plenty of subcultures and movements that lend themselves to particular ways of life have online dating websites dedicated to their communities.
 

Thus, I think the other downsides to systematically encouraging dating within the EA community, such as the skewed gender ratio perhaps quickly resulting in the system failing to satisfy the needs of most involved individuals, are greater than EA appearing 'culty.' It's important to distinguish the reasons I think we shouldn't systematically encourage intra-community dating, because I also expect it would be wrong for us to "not think about" each other's dating needs at all. For example, I don't think it's a negative thing that this post and all these discussions in the comments are publicly taking place on the EA Forum. It seems to me the majority of community members never check the EA Forum with anything approaching regularity, never mind the millions of people who hear about EA but never become part of the movement. I think the solution is for us to extend private offers, as peers in the same community, to talk about each other's efforts to find romantic partners with whom spending our lives also fits the EA-inspired lives we each want to live.

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-07-27T01:57:20.559Z · score: 2 (1 votes) · EA · GW

Strongly upvoted. This is the approach I've taken: most of my dating is outside the EA community. I've not found success in long-term romance, but I'm pretty confident that's due to factors in my private life unrelated to this specific approach. I'd recommend more people in EA try it as well.

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-07-27T01:54:16.815Z · score: 5 (3 votes) · EA · GW

I responded to Marisa with this comment which pushes back on the notion that inter-EA dating is a particularly culty and insular phenomenon. Upshots:

  • Some public accusations of cultishness should be taken seriously, but EA should respond to them by doing what we do best: evaluating such allegations ourselves against the scientific research on cults. This is a more sensible approach than hand-wringing about hypothetical accusations of cultishness that haven't been levelled yet, which only plays into the hands of moral panics over cults in public discourse, panics that don't themselves typically lessen the harms of cults, real or perceived.
  • Dozens if not hundreds in EA have dated, formed relationships, gotten married or started families in ways that have benefited themselves personally and also their capacity to do good. This is similarly true in its own ways of tens of millions of people who marry and start families within their own religions, cultures or ethnic groups, including in more diverse and pluralistic societies. While EA ought to be worried about ways in which it could be cult-like, the common human tendency to spend our lives with those who share our own respective ways of life doesn't appear to be high on that list.
  • One could argue that that's a problematic tendency within societies at large and EA should aspire to more than that. Given my perception that those in EA who've formed flourishing relationships within the community have done so organically as individuals, there doesn't seem to me to be a reason to encourage intra-community dating. Yet to discourage it based on a concern it may appear cult-like would be to impel community members to a kind of romantic asceticism for nobody's benefit.

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-07-27T01:42:45.303Z · score: 2 (1 votes) · EA · GW

Summary: Concerns about apparent or actual cultishness are serious but ought to be worked through in a more rational way than is typical of popular discourse about cults. EA pattern-matches to being a small, niche community on the fringe of mainstream society, which is also a common characteristic and tell of a cult. Yet there is widespread cognitive dissonance in society at large about how social structures involving tens of millions of people can have harmful, cult-like aspects as well. Perhaps a majority of people, even in more diverse societies, marry and raise families within their own religion, culture or ethnic group.

That many of us in EA are strongly inclined to spend our lives with those who share our way of life doesn't distinguish us as problematic compared to the rest of society. One could argue that almost all cultures are cult-like and EA should aspire to be(come) a super-rational community free of the social problems plaguing all others. That strikes me as molehill mountaineering that can be disregarded as a vain attempt to impel EA to be(come) quixotically perfect.

Regarding 'culty-ness,' I feel like too many subcultures or countercultures play into the hands of the paranoid accusations of a generic and faceless public. Several years ago, when I was both aware of evidence-based definitions of cults and in extreme disagreement with mainstream society, I thought accusations of being a cult levelled at movements that weren't unambiguously cults ought to be disregarded. I no longer feel this way, as I now recognize that cultishness in an organization or community exists on a spectrum. Ergo, some public accusations of appearing to be, or actually being, a cult ought to be taken very seriously.

EA is a small, niche community on the fringes of society. Putting it that way may seem to stigmatize EA as pattern-matching to those fringe movements that pose a serious threat to society at large. That's not what I meant. I just mean that this position is the crucial juncture between society at large and subcultures, the point at which a subculture may begin alienating itself to the point of falling down the rabbit hole of becoming a cult.

Yet it seems there are mass groups in all mainstream societies that, were they small fringe groups, would be labelled cults; they avoid the label only because they've been normalized over several decades. Such groups can constitute tens or even hundreds of millions of people. I believe such groups are often whole religions, or social structures similar to religions, which as they transform into mainstream institutions are sanitized in a way that makes them less harmful per individual than small, niche cults like the Church of Scientology.

Nonetheless, they often cause significant harm. So, much of humanity has severe cognitive dissonance about what is and isn't a cult, and about why all kinds of mainstream institutions shouldn't be considered just as harmful as cults. This should cause us to take concerns about being culty with a grain of salt when they come from a source that is selective in its opposition to cult-like groups. What I've never understood is, if some in EA are concerned that EA may seem like, or take on, actual cult-like tendencies, why none of us try assessing this for ourselves. As a movement that aspires to be scientific, we in EA ought to be able to assess to what extent our community is like a cult by reviewing the scientific research and literature on cults.

With all this in mind, we can put in context the concern that some features of EA might make it appear like a cult to the rest of the world. While optics matter, they aren't everything. Of all the things EA has been accused of being a cult for, the fact that those in EA tend to form relationships with one another isn't a frequent one. Perhaps the majority of people in diverse societies date, marry or start families with those from their own ethnic, religious and cultural background. Most people don't call those groups cults, because there's a common understanding that individuals are drawn to spend their lives with those who share a common way of life. Outsiders to most ways of life understand that, even if they don't totally understand a given way of life itself.

One lingering concern for some in EA might be that we ought to aspire to be far better in how we conceive of and do good than the rest of the world. That might include being less cult-like than even entire cultures which themselves aren't technically cults. There are freethinking, cosmopolitan atheists who would call all religions and most cultures cults. Such accusations may claim that marriage within a culture, merely because it is the same culture, occurs only to irrationally preserve and perpetuate that culture and its traditions and institutions.

I don't totally disagree with such freethinkers myself but I wouldn't take that criticism to heart to the point of discouraging relationships among those in EA. Relationships within the EA community are imperfect in their own ways, as is the case with all kinds of relationships inspired by a particular way of life. Yet I've seen dozens if not hundreds in EA personally flourish and enhance the good they're doing by being in relationships with other community members. Taking every naysayer to heart won't free us of problems. After all, we in EA are only human (and, I'll postulate, will be imperfect even in light of potentially becoming post-human super-beings in the future).

Comment by evan_gaensbauer on Should we think more about EA dating? · 2020-07-27T01:20:05.587Z · score: 2 (1 votes) · EA · GW

I appreciate this informative comment. I've got a couple of relevant points to add.

1. As a community coordinator for EA, a few years ago I was aware that more people in EA were interested in dating others in the community. I shared a link to reciprocity.io around EA Facebook groups like EA Hangout. This got a few hundred more people onto reciprocity. I talked to Katja Grace, who originally had the idea.

Reciprocity.io was written to support the much smaller Bay Area rationality community, which at the time had over 100 people but not too many more than that. So many people in EA getting on reciprocity.io caused it to crash. The code wasn't particularly worth saving, and at the time Katja suggested that if someone wanted, it might be better to make a newer, better site from scratch.

2. As far as I'm aware, LGBTQ+ people are significantly overrepresented in the EA community relative to the background population. I don't know how much of this is determined by feeder communities for EA, i.e., how much the communities people find EA from are themselves disproportionately LGBTQ+. Feeder communities for EA include:

  • animal advocacy movements
  • organizations focused on particular causes in the non-profit sector
  • startup culture
  • transhumanism
  • rationality
  • etc.

Caveats: I don't know more specifically than that how the representation of LGBTQ+ folks in EA skews. By representation I mean statistical representation, not representation of LGBTQ+ identities. Neither am I suggesting that anyone ought to infer anything else about the experiences and status of LGBTQ+ folks in the EA community based just on the fact that they're overrepresented.

I haven't put any thought into how this otherwise impacts the gender ratio of the EA community or the dating prospects of individual community members therein. I just offer the info in case it inspires others' insights about intra-community dating and relationships.

Comment by evan_gaensbauer on Expert Communities and Public Revolt · 2020-03-30T01:43:58.309Z · score: 1 (1 votes) · EA · GW

This still neglects the possibility that if governments across the world are acting suboptimally in a matter, then their cooperating with each other, and a close and cozy relationship between expert communities and governments, may come at the cost of a negative relationship with broad sections of the public. Who and what 'the public' is should usually be unpacked, but suffice to say there are sections of civil society that are closer than governments to correctly diagnosing problems and solutions regarding social crises, as far as expert communities are concerned. For example, expert communities sometimes have more success in achieving their goals by working with the many environmental movements around the world to indirectly move government policy than by working with governments directly. This is sometimes observed today in progress made in tackling the climate crisis. Similarly, during the Cold War, social movements (anti-war, anti-nuclear, environmental) in countries on both sides played a crucial role in moving governments towards policies that deescalated nuclear tensions, like the SALT treaties, the kind of policies an expert organization like the Bulletin of the Atomic Scientists (BAS) would advocate for. It's not clear that movements within the scientific community to deescalate nuclear tensions between governments would have succeeded without broader movements in society pursuing the same goals.

Obviously such movements can be a hindrance to the goals for improving the world pursued by expert communities, when governments are otherwise the institutions that would advance progress towards these goals better than those movements would. A key example: while environmental movements played a positive role in combating pollution and deescalating nuclear tensions during the Cold War, they've been counterproductive in decreasing public acceptance, and the political pursuit, of the safest forms of nuclear energy. Many governments around the world that would otherwise build more nuclear reactors to produce energy and electricity to replace fossil fuels don't do so because they rightly fear the public backlash that would be whipped up by environmental movements. Some sections of the global environmental movement have become quite effective at freezing the progress on climate change that could be made by governments around the world building more nuclear reactors.

There are trade-offs expert communities face in building relationships with sections of the public, like social movements, versus governments. I haven't done enough research to know if there is a super-effective strategy, as an expert community, for knowing what to do under any conditions. Suffice to say, there aren't easy answers for effective altruism as a social and intellectual movement, or for the expert communities to which we're connected, in resolving these issues.

While we are on this topic, I thought it would be fitting to acknowledge the similar issues effective altruism faces as a movement. Effective altruism as a global community has been crucial to the growing acceptance of AI alignment as a global priority among some institutions in Silicon Valley and other influential research institutions across the world, both academic and corporate. We've also influenced some NGOs involved in policymaking, and world governments, to take seriously transformative AI and the risks it poses. Yet our influence has mostly been indirect, has had little visible impact and hasn't produced a better, ongoing relationship between EA as a set of institutions and governments.

We're now in a position where, as much as EA might be integrated with efforts in AI security in Silicon Valley and universities around the world, the governments of countries like Russia, China and South Korea, the European Union, and at least the military and intelligence institutions of the American government are focused on it. Those governments focusing more on AI security is in part a consequence of EA perpetuating greater public consciousness regarding AI alignment (the far bigger factor being the corporate and academic sectors achieving major research progress in AI, as recognized through significant milestones and breakthroughs). There are good reasons why some EA-aligned organizations would keep private the fact that they've developed working relationships with the research arms of world governments on the subject of AI security. Yet from what we can observe publicly, it's not clear that at present the perspectives of EA and the expert communities we work with would have more than a middling influence on the choices world governments make regarding security in AI R&D.

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-25T04:41:50.880Z · score: 2 (1 votes) · EA · GW

I've identified the chapters in the OHSM where, if an answer to these questions is to be found in the book, it will be. There are five chapters, totaling roughly 100 pages. Half of them focus on ties to other social movements, and half focus on political parties/ideologies. I can and will read them, but to give a complete answer to your questions, I'd have to read most of at least a couple of chapters. That will take time. Maybe I can provide specific answers to more pointed questions. If you've read this comment, pick one goal from one cause area, and decide whether you think the achievement of that goal depends more on EA's relationship to another social movement, or to a political ideology. At that level of specificity, I expect I can give one or two academic citations that should answer the question. Again, I will answer the question at the highest level if you want, but at that point I'm writing a mini-book review on the EA Forum that will take a couple of weeks to complete.

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-25T04:30:43.958Z · score: 3 (2 votes) · EA · GW

I'm aware of a practical framework that social movements, along with other kinds of organizations, can use. There are different versions of this framework, for example, in start-up culture. I'm going to use the version I'm familiar with from social movements. I haven't yet taken the time to look up in the OHSM whether this framework is widely and effectively employed by social movements overall.

A mission is what a movement seeks to ultimately accomplish. It's usually the very thing that inspires the creation of a movement. It's so vast it often goes unstated. For example, the global climate change movement has a mission of 'stopping the catastrophic impact of climate change'. Yet that's so obvious that environmentalists don't need to establish at meetings that the reason they've gathered is to stop climate change. It's common knowledge.

The mission of effective altruism is, more or less, "to do the most good". Cause areas exist in other movements as broad as effective altruism, but a cause area is not the same thing as a mission. The cause area someone focuses on will be due to their perception of how to do the most good, or their evaluation of how they can personally do the most good. So each cause area in EA represents a different interpretation of how to do the most good, as opposed to being a mission or goal in and of itself.

Goals are the milestones a movement believes must be reached to complete its mission. The movement believes each goal by itself is a necessary condition for completing the mission, and that the full set of goals combined is sufficient. So for the examples you gave, the setup would be as follows:

Cause: Global poverty alleviation

Mission: End extreme global poverty.

Goals: Improve trade and foreign aid.

Cause: Factory Farming

Mission: End factory farming.

Goals: Gain popular support for legal and corporate reforms.

Cause: Existential risk reduction

Mission: Avoid extinction.

Goals: Mitigate extinction risk from AI, pandemics, and nuclear weapons.

Cause: Climate Change

Mission: Address climate change.

Goals: Pursue cap-and-trade, carbon taxes and clean tech.

Cause: Wild Animal Welfare

Mission: Improve the welfare of wild animals.

Goals: Do research to figure out how to do that.

Having laid it out like this, it’s easier to see (1), why a “cause” isn’t a “mission” or “goal”; and, (2), how this framework can be crucial for clarifying what a movement is about at the highest level of abstraction. For example, while the mission of the cause of ‘global poverty alleviation’ is ‘eliminate extreme global poverty’, the goals of systemic international policy reform don’t match up to what EA primarily focuses on to alleviate global poverty, which is a lot of fundraising, philanthropy, research and field activity, focused on global health, not public policy. Your framing assumes ‘existential risk reduction’ refers to ‘extinction risk’, but ‘existential risk’ has been defined as long-term outcomes that permanently and irreversibly alter the trajectory of life, humanity, intelligence and civilization on Earth or in the universe. That includes extinction risks but can also include risks of astronomical suffering. If nitpicking the difference between missions and goals seems like needless semantics, remember that because EA as a community doesn’t have a clear and common framework for defining these things, we’ve been debating and discussing them for years.

Below goals are strategy and tactics. The strategy is the framework a movement employs for how to achieve its goals. Tactics are the set of concrete, action-oriented steps the movement takes to implement the strategy. The mission is to the goals as the strategy is to the tactics. There is more to get into about strategy and tactics, but this discussion is too abstract for that. For figuring out what an effective social movement is, and how it becomes effective, it's enough to start thinking in terms of missions and goals.
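
For anyone who finds it easier to see the hierarchy all at once, here's a minimal sketch in Python (illustrative only; the class and field names are my own, not from the OHSM):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Movement:
    cause: str                # an interpretation of how to do the most good
    mission: str              # the vast, often-unstated end state
    goals: List[str]          # milestones: each necessary, jointly sufficient, for the mission
    strategy: str = ""        # the framework for achieving the goals
    tactics: List[str] = field(default_factory=list)  # concrete steps implementing the strategy

# One of the examples above, encoded:
factory_farming = Movement(
    cause="Factory Farming",
    mission="End factory farming.",
    goals=["Gain popular support for legal and corporate reforms."],
)
```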

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-22T02:13:03.185Z · score: 2 (1 votes) · EA · GW

This isn't from the OHSM, but two resources to learn more about this topic are the Wikipedia article on 'satisficing', a commonly suggested strategy for adapting utilitarianism in response to the demandingness criticism, and this section of the 'consequentialism' article on the Stanford Encyclopedia of Philosophy focused on the demandingness criticism.

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-21T23:37:58.394Z · score: 2 (1 votes) · EA · GW

Why have you found it underwhelming?

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-21T23:37:30.003Z · score: 2 (1 votes) · EA · GW

Done

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T07:26:41.282Z · score: 2 (1 votes) · EA · GW

Same as with my response to your other questions in your other comment: it's easier to operationalize 'success', 'failure', and 'support' with missions, goals and objectives in mind. I believe I can find answers to the other questions more easily, but these ones aren't answerable without specified goals.

Comment by evan_gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-18T07:23:39.121Z · score: 5 (3 votes) · EA · GW

These questions seem too general to provide a satisfying answer. I'd have to quote a few whole chapters to give a complete answer. An answer applicable to effective altruism depends on making assumptions about what the community's goals are. I think it's safe to make some assumptions here for the sake of argument. To start off, it's safe to say effective altruism is in practice a reformist as opposed to revolutionary movement. Beyond that, it'd be helpful to specify what kind of goals you have in mind, and what means of achieving them are either preferred and/or believed to be most effective.

Comment by evan_gaensbauer on What are the key ongoing debates in EA? · 2020-03-18T07:02:49.195Z · score: 4 (3 votes) · EA · GW

Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I'm also not sure questions of sanitization should be reduced to just either doing weird things openly, or not doing them at all. That framing ignores how something can be changed to be less 'weird', as has been done with AI alignment, or, to a lesser extent, wild animal welfare. Someone could figure out how to make it so betting on pandemics or whatever can be done without it becoming a liability for the reputation of effective altruism.

Comment by evan_gaensbauer on Did Geoff Anders ever write a post about the performance of Leverage Research and their recent disbanding? · 2019-08-05T01:23:06.822Z · score: 6 (3 votes) · EA · GW

Frankly, I'm unsure how much there is to learn from or about Leverage Research at this point. Having been in the effective altruism movement for almost as long as Leverage Research has been around, an organization which has had some kind of association with effective altruism since soon after it was founded, I'd say Leverage Research's history is one of failed projects, many linked to the mismanagement of Leverage Research as an ecosystem of projects. In effective altruism, one of our goals in learning from mistakes, including the mistakes of others, is to avoid making the same kind of mistakes ourselves. It's usually more prudent to judge mistakes on a case-by-case basis, as opposed to judging the actor or agency that perpetrates them. Yet other times there is a common thread. When there is evidence of repeated failures borne of systematic errors in an organization's operations and worldview, often the most prudent lesson we can learn is why that organization repeatedly and consistently failed, and what it was about its environment that enabled a culture of barely ever course-correcting or being receptive to feedback. What we might be able to learn from Leverage Research is how EA(-adjacent) organizations should not operate, and how effective altruism as a community can learn to interact with them better.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-08-05T01:10:06.994Z · score: 2 (1 votes) · EA · GW

Alright, thanks for letting me know. I'll remember that for the future.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-08-05T01:09:32.508Z · score: 2 (1 votes) · EA · GW

Hi. I'm just revisiting this comment now. I don't have any more questions. Thanks for your detailed response.

Comment by evan_gaensbauer on Support AMF in tab 4 a cause so it reaches its goal. · 2019-07-29T05:13:45.610Z · score: 6 (4 votes) · EA · GW

I saw this post had negative karma, and I upvoted it back to positive karma. I'm making this comment to signal-boost that I believe this article belongs on the EA Forum, and that, if one is going to downvote articles like this that by all appearances are appropriate for the EA Forum, it would be helpful to provide constructive criticism of them.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T02:01:38.983Z · score: 2 (3 votes) · EA · GW

I've been in the EA community since 2012. As someone who has been in EA for that long, I entered the community taking to heart the intentional stance of 'doing the most good'. Back then, a greater proportion of the community wanted EA to primarily be about a culture of effective, personal, charitable giving. The operative word is 'personal': even though there are foundations behind the EA movement, like the Open Philanthropy Project, with a greater endowment than the rest of the EA community combined might ever hope to earn to give, for different reasons a lot of EAs still think it's important that EA emphasize a culture of personal giving. I understand and respect that stance, and respect its continued presence in EA. I wouldn't even mind if it became a much bigger part of EA once again. This is a culture within EA that frames effective altruism as more of an obligation. Yet I personally believe EA is more effective, and does more good, with a more diverse array of framings. I'm glad EA has evolved in that direction, and so I think it's fitting that this definition of EA reflects that.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T01:52:18.485Z · score: 1 (4 votes) · EA · GW

I've come to favour exclusion criteria for entryists into EA whom EA, as a community, by definition would see as bad actors, e.g., white supremacists. One set of philosophical debates within EA, and with other communities, is how far, and how fast, the circle of moral concern should expand. That common denominator seems to imply a baseline agreement across all of EA that we would be opposed to people who seek to rapidly and dramatically shrink the circle of moral concern of the current human generation. So, to the extent someone:

1. shrinks the circle of moral concern;

2. does so to a great degree/magnitude;

3. does so very swiftly;

EA as a community should beware uncritically tolerating them as members of the community.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-22T01:37:40.029Z · score: 5 (4 votes) · EA · GW
I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.

People have talked about "splitting up" EA in the past to streamline things, while other people worry about how that might needlessly balkanize the community. My own past observations of attempts to 'split up' EA into specialized compartments are that, more than being good or bad, they don't have much consequence at all. So, I wouldn't recommend more EAs make another uncritical try at doing so, if for no other reason than it strikes me as a waste of time and effort.

As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.

The heuristic I use to think about this is to let members of the EA community who are part of "Group X" manage EA's relationship with Group X. That heuristic could break down in some places, but it seems to have worked okay so far for different industry groups. For EA to think of 'academia' as an industry like 'the software industry' is probably not the most accurate thing to do. I just think the heuristic fits because EAs in academia will, presumably, know how to navigate academia on behalf of EA better than the rest of us will.

I think what has worked best is for different kinds of academics in EA to lead the effort to build relationships with their respective specializations, within both the public and private sectors (there is also the non-profit sector, but that is what EA is basically built out of to begin with). To streamline this process, I've created different Facebook groups for networking and discussion among EAs in different profession/career streams, as part of an EA careers public resource sheet. It is a public resource, so please feel free to share and use it however you like.

Comment by evan_gaensbauer on Defining Effective Altruism · 2019-07-21T09:55:48.182Z · score: 4 (2 votes) · EA · GW

This is similar to how I describe effective altruism to those whom I introduce to the idea. I'm not in academia, so I mostly introduce it to people who aren't intellectuals. However, I can trace some features of your more rigorous definition in the one I've been using lately: "'effective altruism' is a community and movement focused on using science, evidence, and reason to try to solve the world's biggest/most important problems". It's kind of clunky, and it's imperfect, but it's what I've replaced "to do the most good" with, which, stated that generically, presents the understandable problems you went over above.

Comment by evan_gaensbauer on GiveWell's Top Charities Are Increasingly Hard to Beat · 2019-07-14T18:29:45.415Z · score: 1 (1 votes) · EA · GW

This is a recent criticism of GiveWell that I didn't see responded to or accounted for in any clear way in the linked post. I haven't read the whole thing closely yet, but no section appears to cover the considerations raised in that post. If they were sound, incorporating these criticisms into the analysis might make GiveWell's top-recommended charities look more 'beatable'. I was wondering if I was missing something in the post, and whether Open Phil's analysis either accounts for or incorporates that possibility.

Comment by evan_gaensbauer on GiveWell's Top Charities Are Increasingly Hard to Beat · 2019-07-11T06:54:18.402Z · score: 1 (5 votes) · EA · GW

Do you know if these take into account criticisms of GiveWell's methodology for estimating the effectiveness of their recommended charities?

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-06-21T01:05:23.275Z · score: 1 (6 votes) · EA · GW

Thanks for the response, Aaron. Had I been aware this post would have received Frontpage status, I would not have made my above comment. I notice my above comment has many votes, but not a lot of karma, which means it was a controversial comment. Presumably, at least several people disagree with me.

1. I believe the launch of new EA-aligned organizations should be considered of interest to people who browse the Frontpage.

2. It's not clear to me that it's only people who are 'relatively new to EA' who primarily browse the Frontpage instead of the Community page. While I'm aware the Frontpage is intended primarily for people relatively new to EA, it's not clear to me that actual usage of the EA Forum works that way. It seems quite possible there are a lot of committed EA community members who are not casually interested in each update from every one of dozens of EA-aligned organizations. They may skip the 'Community' page, while major updates like these, which are more 'community-related' than 'general' EA content, nonetheless deserve a place on the Frontpage, where people who don't often browse the Community tab, and who are also not newcomers to EA, will see them.

3. I understand why there would be some hesitance to move posts announcing the launch of new EA-aligned projects/organizations to the Frontpage. The problem is there aren't really hard barriers to anyone declaring a new project/organization aimed at 'doing good' and gaming EA by paying lip service to EA principles and practices while, behind the scenes, the organization is not (intending/trying to be) as effective or altruistic as claimed. One reason this problem intersects with moving posts to the Frontpage of the EA Forum is that promoting just any self-declared EA-aligned project/organization to a place of prominence in EA sends the signal, intentionally or not, that the project/org has received a kind of 'official EA stamp of approval'. I brought up Michael Plant's reputation not because I think anyone's reputation alone should dictate where their posts are assigned on the EA Forum. I mentioned it because, on the chance Aaron or the EA Forum administration was on the fence about whether to promote this post to the Frontpage, I wanted to vouch for Michael Plant as a community member whose reputation for fidelity to EA principles and practices in the projects he's involved with is such that, on priors, I would expect the new project/org he is launching, and its announcement, to be something the EA Forum should be willing to put its confidence behind.

4. I agree that ideally the reputation of an individual EA community member should not impact what we think of the content of their EA Forum posts. I also agree that we should aspire to live by this principle in practice as much as possible. I just also believe it's realistic to acknowledge EA is a community of biased humans like any other, and so forms of social influence like individual reputation still impact how we behave. For example, if William MacAskill or Peter Singer were to announce the launch of a new EA-aligned project/org, then, based in no small part on their prior reputation, and barring the post reading like patent nonsense, which is virtually guaranteed not to happen, I expect it would be promoted to the Frontpage. My goal in vouching for Michael Plant, while he isn't as well-known in EA as Profs. MacAskill or Singer, was to indicate I believe he deserves a similar level of credit in the EA community as a philosopher who practices EA with impeccable fidelity.

5. I also made my above comment while perceiving the norms for which posts are assigned to the 'Community' or 'Frontpage' sections to be ambiguous. For the purposes of what kinds of posts announcing the launch of a new EA-aligned project/org will be assigned to the Frontpage, I find the following from Aaron sufficient and satisfactory clarification of my prior concerns:

I think detailed posts that explain a specific approach to doing the most good make sense for this category, and this post does that while also happening to be about a new organization. Some but not all posts about new organizations are likely to be assigned Frontpage status.

6. Aaron dislikes my use of the word 'relegate' to describe the assignment of posts on the EA Forum to the Frontpage or the Community page, respectively. I used the word 'relegate' because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong. Since the EA Forum 2.0 is based on the same codebase as LW 2.0, and the same team that built LW 2.0 was also crucial in the development of the EA Forum 2.0, I was acting under the assumption the EA Forum admin team significantly borrowed admin norms from the LW 2.0 team from which they inherited administration of the EA Forum 2.0. In his above comment, Aaron has clarified that the distinction between the 'Frontpage' and other tabs on the EA Forum is not the same as the distinction between the 'Frontpage' and other tabs on LW.

7. While the distinction between the Frontpage and Community sections is intended to serve different purposes, and not as a measure of quality, because of the availability heuristic I worry one default outcome of 'Frontpage' posts being on the front page of the EA Forum, and receiving more attention, is that they will be assumed to be of higher quality.

These are the reasons that motivated me to make my above comment. Some but not all of these concerns are entirely assuaged by Aaron's response. All my concerns specifically regarding EA Forum posts that are announcements for new orgs/projects are assuaged. Some of my concerns with the ambiguity of which posts will be assigned to the Frontpage or Community tabs remain. However, they hinge upon disputable facts of the matter that could be resolved by EA Forum usage statistics alone, specifically comparative usage stats between the Community and Frontpage tabs. I don't know if the EA Forum moderation team has access to that kind of data, but I believe such usage stats could greatly aid in resolving my concerns regarding how much traffic each tab, and its respective posts, receive.

Comment by evan_gaensbauer on Announcing the launch of the Happier Lives Institute · 2019-06-19T19:03:15.707Z · score: 4 (12 votes) · EA · GW

While updates from individual EA-aligned organizations are typically relegated to the 'community' page on the EA Forum, I believe an exception should be made for the public announcement of the launch of a new EA-aligned organization, especially one that takes up a focus area that doesn't already have major professional representation in EA. I believe such announcements are of interest to people who browse the EA Forum, including newcomers to the community, and are not what I would call just a 'niche' interest in EA. Also, specifically in the case of Michael D. Plant, I believe he is someone whose reputation in EA precedes him such that we should credit the announcement of this project launch as being of significant interest to the EA community, and as one of the things coming out of EA that are of interest to the broader public.

Comment by evan_gaensbauer on Public Spreadsheet of Effective Altruism Resources by Career Type · 2019-06-08T21:15:41.817Z · score: 2 (1 votes) · EA · GW

It isn't meant to mean software engineering, but all engineering. Unfortunately, aside from the FB group I made, I wasn't aware of any EA materials and resources for engineers other than those specifically for software engineering.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-06-05T17:33:34.743Z · score: 3 (2 votes) · EA · GW
I'm sure lots of lefties would not like how market-friendly EA tends to be

It's unclear to me how representative this is of either EA or leftists. Year over year, the EA survey has shown the vast majority of EA to be "left-of-centre", which includes a significant portion of the community whose politics might very well be described as 'far-left'. So while some leftists might surmise, from one EA-aligned organization or a subset of the community being market-friendly, that all of EA is equally market-friendly, that's an unsound inference. Additionally, even among leftist movements in the U.S. to the left of the Democratic establishment, there is enough ideological diversity that I would say many of them appreciate markets enough not to be 'unfriendly' to them. Of course there are leftists who aren't friendly to markets, but I'm aware of a phenomenon of some factions on the Left claiming to speak on behalf of the whole Left, when in the vast majority of these cases there is no reason to conclude that the bulk of the Left is hostile to markets. So, while 'a lot' of leftists may be hostile to markets, and 'a lot' of EA may be market-friendly, without more empirical evidence and logical qualification those claims don't provide useful info we can meaningfully work with.

Current Affairs overall is fairly amenable to EA and has a large platform within the left. I don't think "they are a political movement that seeks attention and power" is a fair or complete characterization of the left. The people I know on the left genuinely believe that their preferred policies will improve people's lives (e.g. single payer, increase minimum wage, more worker coops, etc.).

I think you're misinterpreting. I never said that was a complete characterization, and fairness has nothing to do with it. Leftist movements are political movements, and I would say they're seeking attention and power like any and every other political movement. I'm on the Left as well, and the fact that I and other leftists genuinely believe our preferred policies will improve people's lives doesn't change the fact that acquiring political power to achieve those goals, and acquiring the requisite public attention to achieve that political power, is necessary to achieve them. Publicly acknowledging this can be fraught, because such language is easily, and often through motivated reasoning, interpreted by leftists or their sympathizers as describing a political movement covetous of power for its own sake. If one is too sheepish to explain otherwise, and stand up for one's convictions, it's a problem. Yet it shouldn't be. I've read articles written by no less than Current Affairs' editor-in-chief Nathan Robinson arguing that talking about power is something all leftists need to do more of.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T19:28:53.284Z · score: 4 (2 votes) · EA · GW

Strongly upvoted. I don't have anything else to add right now other than that I now understand why you're asking this question as you have, and that I agree it makes sense as a first step given the background assumptions you're coming in with.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T19:27:47.140Z · score: 2 (1 votes) · EA · GW

I think your suggestion of Good Ventures making more grants to the EA Funds would be a better alternative, though before that I'd like to be confident the kinks have been worked out of the EA Funds management system. I was speaking more generally, though: all kinds of generic structures that merely decentralized grantmaking in EA more would be better. That it could be almost any structure with that feature was my point. I'm aware there are reasons people might behave as though so much decision-making being concentrated in Open Phil is optimal. If you have knowledge that a significant portion of the EA community indeed sincerely believes the current structure for capital allocation, as concentrated as it is, is optimal, please let me know. I would act on that, as I would see it as a ludicrous and dangerous notion for all of EA, one I wouldn't think even Open Phil or Good Ventures would condone.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:43:43.356Z · score: 1 (2 votes) · EA · GW

Like I said in my above comment, asking interesting questions to avoid stating inconvenient if valuable opinions doesn't go far in EA. If you think so much centralization of decision-making in Open Phil, in the person of Holden Karnofsky, is suboptimal, and there are better alternatives, why not just say so?

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:42:12.031Z · score: 2 (1 votes) · EA · GW

I think it's unimportant. I would hope everyone is already aware we've arrived where we are for contingent reasons. I think it's more than plausible we could have an alternative structure for capital allocation to the one we have now. I think this first step should have been combined with the next couple of steps into a single first step.

Michael Dickens took the opposite route and said Open Phil should prioritize wild animal welfare. I also remember last year there were lots of people just asking questions about whether the EA Funds should be managed differently, and nothing happened, and then I made a statement more than a question, and then the EA Funds changed a lot.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:26:58.725Z · score: 1 (2 votes) · EA · GW

Right, but if all he is doing is signing off, then you're attributing to him only the final part of the decision, and treating that as if it's the whole decision.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:26:15.169Z · score: 2 (1 votes) · EA · GW

Right, I guess I was asking why you're exploring it.

I don't think we'll get a better structure through the route you're going, which is just asking questions about Open Phil. I figure at the least one would try figuring out what structure they consider best, and then explain why Good Ventures should switch to that structure.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:10:59.163Z · score: 2 (1 votes) · EA · GW

Why are you asking this question? I ask because it seems more like an academic exercise in what would be a better capital allocation structure if one were to be had, when in practice it doesn't seem like we will get there.

Comment by evan_gaensbauer on What's the best structure for optimal allocation of EA capital? · 2019-06-04T17:09:38.706Z · score: 14 (9 votes) · EA · GW
Rough estimate: if ~60% of Open Phil grantmaking decisioning is attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.
It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).

This estimate seems to have a lot of problems. Attributing so much credit to Holden that way not only treats him in practice as too unique an actor, without substantiation, if not playing fast and loose in principle with how causation works; it also reads as a wild overestimate from nowhere. This is a pretty jarring turn in what up until that point I had been reading as a reasonable question post.
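
To make concrete what the quoted figures imply, here's the back-solve (my own arithmetic, using only the numbers in the quote itself):

```python
# The quote says $157.4M is both 47.2% of all 2017 EA capital allocation
# and ~60% of Open Phil's grantmaking, attributed to Holden.
holden_decided = 157.4e6
total_ea = holden_decided / 0.472   # ≈ $333.5M implied total EA capital allocation
open_phil = holden_decided / 0.60   # ≈ $262.3M implied Open Phil grantmaking
print(open_phil / total_ea)         # ≈ 0.79: the estimate assumes Open Phil is ~79% of all EA capital
```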

Comment by evan_gaensbauer on Is EA Growing? EA Growth Metrics for 2018 · 2019-06-03T06:06:38.347Z · score: 17 (11 votes) · EA · GW

When I asked what has caused EA movement growth to slow down, the likeliest explanation people offered was that EA chose to focus on fidelity instead of rapid growth. That is a vague response. What I took it to mean is:

  • EA as a community, led by EA-aligned organizations, chose to switch from prioritizing rapid growth to fidelity.
  • That this was a choice means that, presumably, EA could have continued to grow rapidly.
  • No limiting factor outside the EA community's control has been identified regarding its growth.
  • EA could make the choice to grow once again.

Given this, nobody has identified a reason why EA can't simply grow again, should we so choose as a community, by redirecting the resources at our disposal. Since the outcome is under our control and subject to change, I think it's unwarranted to characterize the future as 'bleak'.

Comment by evan_gaensbauer on On the margin, should EA focus on outreach or retention? · 2019-06-02T23:45:34.992Z · score: 9 (5 votes) · EA · GW

I think we should focus on retention. When I asked what has caused EA movement growth to slow down, the consensus answer in the comments was that EA chose to focus on fidelity rather than movement growth.

I interpret this as the EA community not knowing how to make good use of the same high growth rates while maintaining fidelity to our goals; several years ago, when the community was much smaller, pursuing our goals with fidelity entailed growing the community large enough to have a chance of achieving them. So, it's not an either/or scenario. Suffice to say growth was necessary to EA in the past, but we're now at a stage where:

  • we have some resources in excess of what we are making use of.
  • we may not know how to make use of greater growth.
  • there are multiple other factors more limiting than growth to progress on EA's goals.

So, EA is currently at a point where growth doesn't appear crucial. I imagine that in the future, as EA solves the non-growth factors limiting our rate of progress on our goals, success will free up more opportunity to make good use of growth. The growth rate of EA is slowing, and it may plateau or decline in the future, but it doesn't appear to be a problem right now.

Meanwhile, there was a relatively popular article a couple of months ago about how EA doesn't focus enough on climate change, and many comments from people who said they would otherwise be much more involved in EA but were put off by its apparent neglect. As a professional community, if we keep demanding top talent but keep lacking the capacity to match talented people to jobs, then many of the people we initially attracted mostly with the prospect of jobs will exit when there are none.

So, the last few months have provided multiple examples of how retention failure may pose a serious problem to EA in the future, while a lack of growth doesn't pose a serious current threat. EA as a community seems to understand its own growth much better than it understands retention, since we haven't studied retention as much. I've been impressed by how much understanding of the movement's own growth was demonstrated in the answers I've gotten, so I'm confident that if EA thought it needed to sustain high growth rates once again, we could rise to that challenge. I'm not confident we know how to solve retention problems.

To solve retention problems, EA would need to learn what their potential sources are. Were EA to solve them, we would solve not only the problems causing existing effective altruists to leave the movement, but also the problems newcomers see in EA that repel them. So, focusing on retention solves problems in the present that will also help with movement growth, if EA ever tries to grow at a faster rate again.

Finally, I'd say the third option of 'upskilling' Aaron suggested in his comment isn't mutually exclusive with retention either, since increasing opportunities for upskilling, especially making them more inclusive and widely available, would do a lot for retention in EA as well.

Comment by evan_gaensbauer on What caused EA movement growth to slow down? · 2019-06-02T23:17:41.119Z · score: 2 (1 votes) · EA · GW

Yeah, that looks interesting. Thanks for letting me know. I'll check it out.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-01T18:21:56.249Z · score: 12 (6 votes) · EA · GW

My article criticizing the EA Funds last year was both more cogent than this post of Ben's and received far more upvotes. I do in fact think this post is receiving downvotes because of its factual errors. Yet that isn't entirely separable from people downvoting the post simply because they don't like it as a criticism of EA: that people don't like the post is confounded by the possibility that they don't like it because they think it's very erroneous.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T13:20:18.992Z · score: 5 (3 votes) · EA · GW

Okay, so, what has happened is:

  • khorton said she is a centrist, who for the sake of argument, was putting on her 'leftist' hat.
  • By "leftist", I thought she meant she was being the devil's advocate for anti-capitalism, when she was actually being an advocate for progressive/left-liberal reform.
  • She assumed that you, like me, assumed she was playing devil's advocate for anti-capitalism, when you did not, i.e., you didn't read her as anti-capitalist.
  • While khorton's original comment didn't mention reform and regulation of global markets, she made clear in her next response to me that that is what she intended as the subject of her comment, even though she didn't make it explicit.
  • I got mixed up, and as the subject changed, I forgot market reform was never even implied by khorton's original comment.

While I disagreed with how rude your original response to her was, I did agree with your point. Now that you've edited it and this is sorted out, I've upvoted your comment, as I agree with you.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T13:10:06.711Z · score: 5 (5 votes) · EA · GW

Point of correction: khorton is a 'she', not a 'he'.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T03:20:43.599Z · score: 4 (2 votes) · EA · GW

He is claiming that it's ridiculous to treat the cost of saving a life being $100k instead of $5k as a sufficient condition to conclude one is not obliged to save a life, given the assumptions that one would otherwise be obliged to save a life and that one believes in obligations in the first place.

Comment by evan_gaensbauer on My state allows for a 1 member nonprofit board and I like that idea in order to keep my vision. However I want to have a "board of directors", but have them as a body to give me advice, as opposed to the traditional governing board? How can I actually apply this and what non misleading title can I give to the "board of directors"? · 2019-05-30T02:38:00.715Z · score: 0 (2 votes) · EA · GW

What state are you in? Based on what I know, being able to set up a non-profit with a one-member board of directors is not common, so I would be curious to know which state you live in.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-30T01:51:35.370Z · score: 2 (3 votes) · EA · GW

Right, I wasn't assuming communism on your part. I was just sharing thoughts of my own that I thought better represented the frustration kbog was trying to express. I did this because I thought he was making a valid point in the comment of his you downvoted: the kind of question you're asking would lead EA to prioritize a route for public dialogue that it doesn't actually make sense to prioritize, since you posed it from a leftist viewpoint as a thought exercise, even though you clarified that you yourself are a centrist, and as a criticism of EA it is unsound.

My above comment was also addressing the premise that the historical origins of wealth, as seen from an anti-capitalist perspective, constitute a very relevant criticism of EA. I had assumed that by 'leftist' you meant 'anti-capitalist', which you did not, so my last comment doesn't apply. I was aware that you yourself were just wearing a leftist hat for the sake of argument, and I did not assume communism on your part.

Of course, regarding your point about questions of reform of contemporary global markets, I agree with you, and disagree with kbog, that that is a legitimate criticism of EA the community should think more about.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T21:13:42.369Z · score: 2 (1 votes) · EA · GW

Yeah, I just meant it's a funny coincidence. I don't think there is any issue citing him here.

Comment by evan_gaensbauer on What exactly is the system EA's critics are seeking to change? · 2019-05-29T21:03:23.709Z · score: 2 (1 votes) · EA · GW

While I didn't upvote kbog's comment, given its rudeness, and I agree with you that he didn't need to be that rude, I didn't downvote it either, because I think he is reaching for a valid point. Though I express it differently, I share kbog's frustration with how effective altruists sometimes say we should extend so much charity to anti-capitalist critics of EA when, even if they may not be a majority, there are many kinds of anti-capitalism EA should not actually want to reconcile with. I expressed that point, without the rudeness of kbog's comment, in another reply, which I'll excerpt here:

  All varieties of leftist ideology from history are on the upswing today, as politics becomes more polarized and more people shift leftward (and, of course, rightward as well) away from the centre. This has impelled some radical anti-capitalists, in the last few years, to spread as propaganda the meme "liberals get the bullet too".
  If this meme was inspired by the ideology of, say, Leninism, then even if EA shouldn't moralize by asserting ourselves as "better", this would be sufficient grounds for EA to deny a positive association with them, even if the line is meant only rhetorically or symbolically. This would be justified even if we would at the same time build bridges to other leftist movements that have shown themselves more conducive to cooperation with EA, such as those Marxists willing to seek common ground with EA. Of course, as there are many ideologies on the Left, including whole families of ideologies totally incompatible with EA, I believe we must be clear about how we're going to toe this line. Like you yourself said, this isn't unique to leftists. With regard to the Right, EA could build bridges to conservatism while nonetheless totally rejecting any notion we might ally ourselves with the family of rightist ideologies we could call "supremacism".
  [...]
  If EA is to become part of humanity's fabric of moral common sense, we must recognize there are ideologies that, in pursuing their goals, don't operate under that fabric, and that go against the grain of both EA and moral common sense. For EA to be worth anything, we must on principle be willing to engage against those ideologies. Of course, EA can and should be willing to ally itself with those leftists who'd seek to expand the circle of moral concern, against those who would shrink it to get ahead, no matter what their original ideals were.
  This concerns political ideologies where the disagreement, whether over fundamental values or at least over the basic facts that inform our moral judgements, is irreconcilable. Yet there will also be political movements with which EA can reconcile, since we would share the same fundamental values, but which EA will nonetheless be responsible for criticizing or challenging, on the grounds that those movements are, in practice, using means or pursuing ends that put them in opposition to those of EA.
  [...]
  I believe our willingness to live up to that responsibility is one of the few things that distinguishes EA at all from any other community predicated on doing good.