How Can Each Cause Area in EA Become Well-Represented?

2019-02-22T21:24:08.377Z · score: 19 (6 votes)
Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T19:16:50.656Z · score: 2 (1 votes) · EA · GW

Summary: One social-scientific and historical theory for the pattern of moral shifts over the last few hundred years is that apparent moral shifts have followed a transition from more traditionalist and religious worldviews to more liberal ones, driven largely by the economic and political changes produced by industrialization and modernization. While this narrative model has limitations, it seems significant enough to change how EA thinks about moral circle expansion.

One possibility is that the pattern of moral shifts observed in the last couple hundred years in the Western world, and to a lesser extent elsewhere, is driven by modernization. With the modernization, industrialization, urbanization, and rationalization (i.e., the integration of society with advanced science and technology) of societies, popular consideration of different populations of moral patients has shifted along common lines. The upshot is that EA should consider the possibility that moral shifts are driven more by the influence of a changing material and technological environment, and less by whole societies intentionally shifting the exercise of their moral agency.

Modernization has given rise to the modern nation-state and greater political centralization, which in turn gave rise to various forms of liberal political ideology. While liberalism started with the Enlightenment, its popular spread followed the Scientific and Industrial Revolutions, and greater urbanization. Increased contact between different groups, such as differing ethnic groups and the sexes in the workplace, made societal prejudices more apparent by showing how superficial and arbitrary the material deprivation of different groups of people was amid historically unprecedented growth in global material wealth. This has a lot of power to explain civil rights and greater moral consideration being extended to ethnic, religious, and sexual minorities, as well as women and children.

At the same time, the decline of more agrarian and religious society alienated more people from traditional communities and religion. This is consistent with the analysis of why moral consideration has declined for elders, ancestors, deities, and other groups that traditional local communities and religion gave people more moral exposure to.

On one hand, a single narrative explaining apparent moral progress across societies as a natural political and social progression driven almost exclusively by technological and economic changes seems too convenient in the absence of overwhelming evidence. It also seems intuitively unlikely to me that the apparent moral circle expansions would necessarily have happened in the course of history. On the other hand, EA could come to recognize the idea that moral circle expansion is an apt evidence-based theory for explaining historical moral progress as a largely confused notion, and we could spend less time trying to frame past moral shifts through a flawed lens. From there, we could view moral circle expansion as more of a prospective model for thinking about how various societies' moral circles may expand in the present and near future.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:47:20.790Z · score: 2 (1 votes) · EA · GW

Neat post! Feedback:

  • One population neglected in a lot of conversation on moral circle expansion, and in Gwern's consideration, is children after infancy, and how their treatment has changed over time. I'm only knowledgeable about the history of the last couple hundred years as it relates to legal treatment, such as child labour laws. Studying how the treatment of children has changed will be complicated by the changing definition of 'children' over time; adulthood in different societies has been treated as beginning anywhere from the onset of puberty up to twenty years of age. That said, people older than two or three and younger than the historical lower bound for the age of adulthood seem to have been stably regarded as 'children' throughout history.
  • Another kind of potential moral patient neglected in this conversation is abstract entities, such as concern for the overall health of a tribe, local community, or society; and, more recently in history, cultures and nations, the environment, and biodiversity. One thing all these entities have in common is that there appears to be a common moral intuition that their overall moral well-being can be evaluated as something greater than the sum of the well-being of their individual members (such as humans or other animals). This differs from how EA typically approaches such entities, which is more often to conflate their moral well-being with the aggregate well-being of their individual members. I'm guessing there are ways moral psychology regarding these entities differs significantly from how people think morally about individual moral patients. I don't know enough about what those differences might be to comment on them, but understanding them better seems crucial to thinking about this topic.
Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:26:14.090Z · score: 3 (2 votes) · EA · GW

My impression is the West hasn't traditionally revered elders as highly as some other societies, but in the distant past the West revered elders more than we do now.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:14:03.542Z · score: 3 (2 votes) · EA · GW

I agree. While the absolute size of the moral catastrophe that is wrongful treatment of prisoners is brought up a lot, that's a different issue than either the proportion of the population presently in prison, or the amount of harm inflicted on each individual prisoner, relative to the past.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:11:38.036Z · score: 4 (3 votes) · EA · GW

One argument for why people don't proportionally care about future generations is that they're such a distant concern. A pattern I notice with the moral shifts you describe is that most people have become more distant from the relevant populations over time, such as prisoners and animals. We're also more "distant" from our ancestors and deities, in the sense that we may care about them much less in large part because we're exposed much less frequently in everyday life to memes promoting caring about them.

Comment by evan_gaensbauer on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? · 2019-02-03T02:17:46.270Z · score: 2 (1 votes) · EA · GW

That makes sense. I'm not angling for a civil service career myself, but it makes sense. At least in the past, 80,000 Hours has recommended entering the U.K. civil service as more impactful in expectation than trying to win in electoral politics (mostly because the expected value for generic or randomly selected candidates of winning and achieving their goals is so low; individuals with reason to think they'd have a decisive edge in electoral politics should consider it more).

Comment by evan_gaensbauer on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? · 2019-01-30T23:03:28.944Z · score: 3 (2 votes) · EA · GW

Yeah, I've seen EA community members talk about impacting politics on a national scale, and also on a municipal scale. Nobody talks much about the state or province level, so I don't know much about it. I imagine the ease with which one can get things done there is somewhere between the national and municipal levels, but I've yet to check it out.

What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy?

2019-01-30T02:52:25.471Z · score: 22 (10 votes)
Comment by evan_gaensbauer on Combination Existential Risks · 2019-01-14T22:46:53.497Z · score: 7 (6 votes) · EA · GW

The Global Catastrophic Risks Institute (GCRI) has a webpage up with their research on this topic under the heading 'cross-risk evaluation and prioritization.' Alexey Turchin also made this map of 'double scenarios' for global catastrophic risk, which maps out the pairwise possibilities for how two global catastrophic risks could interact.

Comment by evan_gaensbauer on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-26T01:41:22.403Z · score: 2 (1 votes) · EA · GW

What strikes me as odd is that, based on what I know of them, this organization doesn't appear to operate in a way considered effective or respectable by the standards of Christian international aid either, let alone EA standards. Most Christian organizations working in the developing world may have a hand in evangelism, yes, but they partly do so by materially benefiting the charitable recipients as well, such as teaching children how to read, or building Christian schools and then teaching in them. It's not clear from the website that this org does any of that.

This creates an issue: if the Pay It Forward Foundation, or its staff or supporters, identify as both Christian and EA, then there are in fact some Christian EAs who believe evangelism in this manner is the most good they can do. Most EAs might not be comfortable with that, but the Pay It Forward Foundation might not take us seriously if we tell them they're not effective, because they're obviously going by their own standards of what 'effective altruism' means. If they weren't, they wouldn't bother associating with EA in the first place while being so different from the rest of the movement.

While they are a minority, there are a significant number of Christian effective altruists. How to approach the Pay It Forward Foundation seems awkward (at least to me), so I think the next best step might be to ask some Christian community members what they think of the Foundation, and how they believe the community should approach them, if any of us decides that approaching rather than ignoring them is worthwhile.

Comment by evan_gaensbauer on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-25T23:00:24.121Z · score: 6 (4 votes) · EA · GW

I agree, and I was going to say something about this as well. As a Canadian, I notice the tacit America-centrism in EA discourse even more than the assumption, which Ozy rightly notices, that in much EA discourse we're all left-of-centre. At the same time, going by the 2018 EA Survey, at least one third of EA community members are in the U.S. A factor the EA Survey would miss is that the majority of resources EA commands are also in the U.S.:

  • Between the Open Philanthropy Project and perhaps the majority of earners-to-give being in the U.S., the vast majority of funding/donations driven through EA comes through the U.S.
  • I haven't definitively checked, but I'd expect at least half the NPOs/NGOs that identify as part of, or are aligned with, EA are in the U.S. This includes the flagship organizations in major EA cause areas, such as virtually all x-risk organizations outside Cambridge and Oxford universities; GiveWell in global poverty alleviation; and ACE and the Good Food Institute working in farm animal welfare.
  • In terms of political/policy goals, the U.S. will still be of more interest to EA than any other country for the foreseeable future, because it seems to be one of the countries where EA is likeliest to impact public policy; where EA-impacted policy shifts may have the greatest humanitarian/philanthropic impact, due to the sheer population and economic size of the U.S.; and where EA-impacted policy gains can best serve as a model/template for how EA could replicate such successes in other countries.

As long as EAs writing about EA from an American perspective note in their articles/posts that that's what they're doing, I think the realistic thing for non-Americans among us is to expect that, for the foreseeable future, a seemingly disproportionate focus on American culture/politics will continue to dominate EA discussions.

Comment by evan_gaensbauer on Long-Term Future Fund AMA · 2018-12-25T04:30:41.623Z · score: 14 (6 votes) · EA · GW

What do you mean by 'expert team' in this regard? In particular, if you consider yourself or the other fund managers to be experts, would you be willing to qualify or operationalize that expertise?

I ask because when the EA Funds management teams were first announced, there was a question about why there weren't 'experts' in the traditional sense on the team, i.e., what makes you think you'd be as good at managing the Long-Term Future Fund as a Ph.D. in AI, biosecurity, or nuclear security (assuming that when we talk about the 'long-term future' we mostly in practice mean 'existential risk reduction')?

When the new EA Funds management teams were announced, someone asked that same question, and I couldn't think of a very good answer. So I figure it'd be best to get the answer from you, in case it gets asked of any of us again, which seems likely.

Comment by evan_gaensbauer on Long-Term Future Fund AMA · 2018-12-25T01:55:35.968Z · score: 10 (5 votes) · EA · GW

Is there anything the EA community can do to make it easier for you and the other fund managers to spend time on grantmaking decisions as you'd like to, especially executive time spent on the decision-making?

I'm thinking of stuff like the CEA allocating more staff or volunteer time to helping the EA Funds managers take care of lower-level, 'boring logistical tasks' that are part of their responsibilities, outsourcing some of the questions you might have to EA Facebook groups so you don't have to waste time doing internet searches anyone could do, etc. Stuff like that.

Comment by evan_gaensbauer on Announcing EA Funds management AMAs: 20th December · 2018-12-24T17:46:05.673Z · score: 8 (5 votes) · EA · GW

In the future, I think it'd make more sense to announce these kinds of AMAs with more advance notice. Most community members wouldn't notice or be prepared for an AMA announced a day in advance. I've noticed in the last few months that many community members, in particular those who'd otherwise be inclined to donate to the EA Funds, are still quite cynical about the EA Funds being worth their money. Having said as much, I appreciate the changes that have been made to the EA Funds, and I am fully satisfied with the changes made in light of my requests that such changes be made. So I thought that if there was anyone in the EA community whose opinion on how much the EA Funds appear to have improved in the last several months would be worth something, it'd be mine. There is a lot of cynicism in spite of that. So I'd encourage the CEA and the EA Funds management teams to take their roles very seriously.

On another note, I want to apologize if it comes across as if I'm being too demanding of Marek in particular, who I am grateful to for the singularly superb responsibility he has taken in making sure the EA Funds are functioning to the satisfaction of donors as much as is feasible.

Comment by evan_gaensbauer on Announcing EA Funds management AMAs: 20th December · 2018-12-24T17:18:32.410Z · score: 3 (2 votes) · EA · GW

Is there any chance there will be an AMA for the Global Health & Development EA Fund?

Comment by evan_gaensbauer on Effective Altruism Making Waves · 2018-11-16T07:21:47.544Z · score: 2 (1 votes) · EA · GW

I didn't know about that. That's incredible!

Comment by evan_gaensbauer on Effective Altruism Making Waves · 2018-11-16T07:19:41.834Z · score: 6 (4 votes) · EA · GW

In the examples I was talking about, the ads were in one of the biggest fast food franchises in the country, and the random people I talk to about AI safety are at bus stops and airports. This isn't just from my social network. Like I said, it's only within my social network that a lot of people have heard the words 'effective altruism,' or know what they refer to. I was mostly talking about things EA has impacted, like AI safety and the Beyond Burger, receiving a lot of public attention, even if EA doesn't receive credit. I took those outcomes receiving attention to be a sign of steps toward the movement's goals, and a good thing regardless of whether people have heard of EA.

Effective Altruism Making Waves

2018-11-15T20:20:08.959Z · score: 6 (7 votes)
Comment by evan_gaensbauer on Reducing Wild Animal Suffering Ecosystem & Directory · 2018-10-31T23:20:42.888Z · score: 1 (1 votes) · EA · GW

I made the changes regarding RP in points 1 through 5. I'll add the arrows as well.

Regarding point 6:

I don't think Rethink Priorities has any volunteer partnerships with any other organizations yet.

Sentience Institute and ACE have also received grants from the EA Animal Welfare Fund.

I'm aware, but SI and ACE received grants funding activity related to farm animal welfare, not wild animal welfare. I'm also going to do an ecosystem map and directory for effective animal advocacy/farm animal welfare in EA as well. So I was going to include references to the Animal Welfare Fund's grants to ACE and SI in that post. It's ambiguous to me whether I should include them in this post as well. What do you think?

Reducing Wild Animal Suffering Ecosystem & Directory

2018-10-31T18:26:52.476Z · score: 11 (7 votes)
Comment by evan_gaensbauer on Announcing new EA Funds management teams · 2018-10-31T17:04:22.606Z · score: 2 (4 votes) · EA · GW

What would you say qualifies as expertise in these fields? It's ambiguous, because it's not like universities are offering Ph.D.'s in 'Safeguarding the Long-Term Future.'

Comment by evan_gaensbauer on Announcing new EA Funds management teams · 2018-10-28T10:46:31.733Z · score: 1 (9 votes) · EA · GW

Thank you for this. This satisfies virtually all the changes I suggested to the EA Funds in my post from July. I think the EA Funds in their prior form would have benefited from major donors to the funds being more proactive in informing the fund managers what kinds of projects they'd generally like to see receive grants. That is something that is up to donors themselves, and not something the CEA can directly change. But it appears the CEA is facilitating that as much as they can.

While the Funds were predicated on the notion that it's inefficient for many donors to independently try to evaluate the best projects or organizations within entire focus areas, this neglects the fact that, in the history of EA, some of the biggest donors to various causes are themselves also the best evaluators of those causes. However, it's clear the CEA understands this, having put individuals like Matt Wage and Luke Ding on the new fund management teams. Across the teams of each of the funds, it appears the fund managers will be in frequent contact with a large and diverse pool of donors to each of these focus areas.

Comment by evan_gaensbauer on Reducing Wild Animal Suffering Literature Library: Consciousness and Ecology · 2018-10-28T10:13:07.196Z · score: 0 (0 votes) · EA · GW

Hi Michael. Thanks for your comment. The suffering of animals that become vermin, and what we as suffering reducers can do about it, is a difficult issue. For example, using poisonous chemicals to exterminate animals may cause them more harm in death than other potentially affordable and overlooked interventions would. However, vermin don't attract as much attention in this field right now. I expect that's because, while vermin are populous, the even bigger, and hence more important, populations of animals effective altruism focuses its research on are in the wilderness proper (e.g., a forest or marsh, as opposed to urban/suburban areas). However, since there is such a dearth of thinking in this field, suggestions to improve or initiate better interventions for improving wild animal welfare in any domain are always welcome.

Comment by evan_gaensbauer on Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations · 2018-10-25T03:34:07.194Z · score: 0 (0 votes) · EA · GW

This is meant as an introductory reading list providing basic knowledge for effective altruists who are just learning about reducing wild animal suffering (RWAS) as a field and want to get up to speed. While philosophers have written many of these essays and articles, others have been written by researchers with an academic background in the life sciences. There aren't firm conclusions in this area yet. Nobody is taking RWAS research coming out of small EA non-profits with limited research experience as better than the conclusions academia would produce.

Multiple EA organizations are spending thousands of dollars per year on multiple staff to build bridges to life scientists in academia to transform welfare biology into a genuine academic discipline. These organizations are also comfortable with the fact that academic research may quickly overturn many of the tenuous impressions effective altruists have themselves formed of wild animal suffering. I'm confident EAs will change their minds when contradicted by novel academic research. It seems you think we are taking these findings much more seriously than we actually do.

Comment by evan_gaensbauer on Double Crux prompts for Effective Altruists · 2018-10-25T03:25:22.552Z · score: 0 (0 votes) · EA · GW

Yeah, reading your comments has assuaged my concerns, since based on your observations the sign of the consequences of double-cruxing on EA example questions seems more unclear than clearly negative, and likely slightly positive. In general it seems like a neat exercise that is interesting, but just doesn't provide enough time to leave EAs with any impression of these issues much stronger than the one they came in with. I am still thinking of making a Google Form with my version of the questions, and then posing them to EAs, to see what kind of responses are generated as an (uncontrolled) experiment. I'll let you know if I do so.

Comment by evan_gaensbauer on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-17T23:48:56.205Z · score: 2 (2 votes) · EA · GW

Upvoted.

Questions:

  1. What's the definition of expertise in x-risk? Unless someone has an academic background in a field where expertise is well-defined by credentials, there doesn't appear to be any qualified definition for expertise in x-risk reduction.

  2. What are considered the signs of a value-misaligned actor?

  3. What are the qualities indicating "exceptionally good judgement and decision-making skills" in terms of x-risk reduction orgs?

  4. Where can we find these numerous public lists of project ideas produced by x-risk experts?

Comments:

  1. While 'x-risk' as a term is apparently novel in large parts of academia, and may have always been obscure, I don't believe the concept is unprecedented in academia or in intellectual circles as a whole. The prevention of nuclear war and of once-looming environmental catastrophes like the ozone hole arguably posed existential risks that were academically studied. The development of game theory was largely motivated by a need for better analysis of war scenarios between the U.S. and the Soviet Union during the Cold War.

  2. An example of a major funder for small projects in x-risk reduction would be the Long-Term Future EA Fund. For a year its management was characterized by Nick Beckstead, a central node in the trust network of funding for x-risk reduction, not providing much justification for grants made mostly to x-risk projects the average x-risk donor could've very easily identified themselves. The way the issue of the 'funding gap' is framed seems to imply patches to the existing trust network may be sufficient to solve the problem, when it appears the existing trust network may be fundamentally inadequate.

Comment by evan_gaensbauer on Double Crux prompts for Effective Altruists · 2018-10-17T22:49:39.110Z · score: 1 (1 votes) · EA · GW

I made different points, but in this comment I'm generally concerned that doing something like this at big EA events could publicly misrepresent and oversimplify a lot of the issues EA deals with.

Comment by evan_gaensbauer on Double Crux prompts for Effective Altruists · 2018-10-17T22:47:08.271Z · score: 8 (8 votes) · EA · GW

I think the double crux game can be good for dispute resolution. But I think generating disagreement even in a sandbox environment can be counterproductive. It's similar to how holding a public debate on its face seems like it can better resolve a dispute, but if one party isn't willing to debate entirely in good faith, they can ruin the debate to the point it shouldn't have happened in the first place. Even if a disagreement isn't socially bad, in that it won't persist as a conflict after a failed double crux game, it could limit effective altruists to black-and-white thinking after the fact. This lends itself to an absence of the creative problem-solving EA needs.

Perhaps even more than collaborative truth-seeking, the EA community needs individual EAs to learn to think for themselves more, to generate possible solutions to problems the community's core can't solve by itself. There are a lot of EAs who have spare time on their hands without something to put it towards. I think starting independent projects can be a valuable use of that time. Here are some of these questions reframed to prompt effective altruists to generate creative solutions.

Imagine you've been given discretion of 10% of the Open Philanthropy Project's annual grantmaking budget. How would you distribute it?

How would you solve what you see as the biggest cultural problem in EA?

Under what conditions do you think the EA movement would be justified in deliberately deceiving or misleading the public?

How should EA address our outreach blindspots?

At what rate should EA be growing? How should that be managed?

These questions are reframed to be more challenging. But that's my goal. I think many individual EAs should be challenged to generate less confused models on these topics, and it's from there, between models, that deliberation like double crux should start. Especially if participants start from a place of ignorance about current thinking on these issues in EA[1], I don't think either side of a double crux game will generate, in the span of only a couple of minutes, an excellent but controversial hypothesis worth challenging.

The examples in the questions provided are open questions in EA that EA organizations don't themselves have good answers to, and I'm sure they'd appreciate additional thinking and support building off their ideas. These aren't binary questions with just one of two possible solutions. I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should. There is no problem with the double crux game itself, but maybe EAs should learn it without using EA examples.

[1] This sounds callous, but I think it's a common coordination problem we need to fix. It isn't hard to end up in that position, as it's actually quite easy to miss important theoretical developments that make the rounds among EA orgs but aren't broadcast to the broader movement.

Reducing Wild Animal Suffering Literature Library: Original Research and Cause Prioritization

2018-10-15T20:28:10.896Z · score: 8 (8 votes)

Reducing Wild Animal Suffering Literature Library: Consciousness and Ecology

2018-10-15T20:24:57.674Z · score: 6 (6 votes)
Comment by evan_gaensbauer on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-13T00:08:54.584Z · score: 0 (0 votes) · EA · GW

Oh, no, that all makes sense. I was just raising questions I had about the post as I came across them. But I guess I should've read the whole post first. I haven't finished it yet. Thanks.

Comment by evan_gaensbauer on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T22:01:02.330Z · score: 1 (1 votes) · EA · GW

Yeah, I'm still left with more questions than answers.

Comment by evan_gaensbauer on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T21:39:09.392Z · score: 8 (4 votes) · EA · GW

I've volunteered to submit a comment to the EA Forum from a couple of anonymous observers, which I believe deserves to be engaged with.

The model this survey is based on implicitly creates something of an 'ideal EA,' which is somebody young, quantitative, elite, who has the means and opportunities to go to an elite university, and has the personality to hack very high-pressure jobs. In other words, it paints a picture of EA that is quite exclusive.

Comment by evan_gaensbauer on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T21:28:02.771Z · score: 1 (1 votes) · EA · GW

We surveyed managers at organisations in the community to find out their views. These results help to inform our recommendations about the highest impact career paths available.

How much weight does 80,000 Hours give to these survey results relative to the other factors which together form 80k's career recommendations?

I ask because I'm not sure managers at EA organizations know what their focus area as a whole will need in the near future, and I think 80k might be able to exercise better independent judgement than the aggregate opinion of EA organization leaders. For example, there was an ops bottleneck in EA that is a lot better now. It seemed like orgs like 80k and the CEA spotted this problem and drove operations talent to a variety of EA orgs. But I don't recall the other EA orgs which benefited from this push doing much, independently of one another, to help solve this coordination problem in the first place.

In general, I'm impressed with 80k's more formal research. I imagine there might be pressure for 80k to give more weight to softer impressions like what different EA org managers think the EA movement needs. But I think 80k's career recommendations will remain better if they're built off a harder research methodology.

Comment by evan_gaensbauer on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T20:55:31.024Z · score: -2 (1 votes) · EA · GW

One possibility is that, because the EA organizations you hire for are focused on causes which also have a lot of representation in the non-profit sector outside the EA movement, like global health and animal welfare, it's easier to attract talent that is both very skilled and very dedicated. Since a focus on the far future is more limited to EA and adjacent communities, there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from.

Far-future-focused EA orgs could be constantly suffering from this problem of a limited talent pool, to the point they'd be willing to pay hundreds of thousands of dollars to find an extremely talented hire. In AI safety/alignment, this wouldn't be weird as AI researchers can easily take a salary of hundreds of thousands at companies like OpenAI or Google. But this should only apply to orgs like MIRI or maybe FHI, which are far from the only orgs 80k surveyed.

So the data seems to imply leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire. So it still doesn't make sense that year after year a lot of EA orgs apparently need talent so badly they'll spend money they don't have to get it.

Comment by evan_gaensbauer on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-06T06:24:11.472Z · score: -1 (1 votes) · EA · GW

Your arguments seem to be based on the assumption that EAs can do EA-related topics more effectively and efficiently than a non-explicitly EA-affiliated academics (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant for EA than non-explicit-EAs. That would mean that, say, Christians are a priori more qualified to do research that goes towards some Christian values. I think this is a non sequitur.

I think it's a common perception in EA that effective altruists can often do work as efficiently and effectively as academics not explicitly affiliated with EA. Often EAs also think academics can do some, if not most, EA work better than a random non-academic EA could. AI safety is more populated by, and stems from, the rationality community, which on average is more ambivalent towards academia than EA is. It's my personal opinion that there are a number of reasons why EA may often have a comparative advantage in doing the research in-house.

One is practical. Academics would often have to divide their time between EA-relevant research and teaching duties. EA tends to focus on unsexy research topics, so academics may be likelier to get grants for focusing on irrelevant research. Depending on the field, the politics of research can distort the epistemology of academia so that it won't work for EA's purposes. These are constraints effective altruists working full-time at NPOs funded by other effective altruists don't face, allowing them to dedicate all their attention to their organization's mission.

Personally, my confidence in EA's ability to make progress on research and other projects for a wide variety of goals is bolstered by some original research in multiple causes being lauded by academics as some of the best on its subject they've seen. Of course, these are NPOs focused on addressing neglected problems in global poverty, animal advocacy campaigns, and other niche areas. Some of the biggest successes in EA have come from close collaborations with academia. I think most EAs would encourage more cooperation between academia and EA. I've pushed in the past for EA to make more grants to academics doing sympathetic research. Attracting talent with an academic research background to EA can be difficult. I agree with you that, overall, EA's current approach doesn't make sense.

I think you've got a lot of good points. I'd encourage you to make a post out of some of the comments I made here. I think one reason your posts might be poorly received is that some causes in EA, especially AI safety/alignment, have received a lot of poor criticism in the past merely for trying to do formal research outside of academia. I could review a post before you post it to the EA Forum and suggest edits so it would be better received. Either way, I think EA integrating more with academia is a great idea.

Comment by evan_gaensbauer on Problems with EA representativeness and how to solve it · 2018-08-06T04:10:52.915Z · score: 0 (2 votes) · EA · GW

I'm working on a project to scale up volunteer work opportunities with all kinds of EA organizations. Part of what I wanted to do is develop a system for EA organizations to delegate tasks to volunteers, including writing blog posts. This could help EA orgs like New Incentives get more of their content up on the EA Forum, such as research summaries and progress updates. Do you think orgs would find this valuable?

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-05T19:01:29.520Z · score: 1 (1 votes) · EA · GW

I agree with kbog: while this is unusual discourse for the EA Forum, it's still far above the bar where I think it's practical to be worried about controversy. If someone thinks the content of a post on the EA Forum might trigger some reader(s), I don't see anything wrong with including content warnings on posts. I'm unsure what you mean by "flagging" potentially controversial content.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-05T18:56:07.625Z · score: 1 (9 votes) · EA · GW

I admit I'm coming from a place of not entirely trusting all other users here. That may be a factor in why my comments are longer in this thread than they need to be. I tend to write more than is necessary in general. For what it's worth, I treat the EA Forum not as an internal space but as how I'd ideally like to see it used: as a primary platform for EA discourse, with a level of activity more akin to the 'Effective Altruism' Facebook group or LessWrong.

I admit I've been wasting time. I've stopped responding directly to the OP because, if I'm coming across as implicitly signaling this issue is a drama mine, I should come out and say what I actually believe. I may make a top-level post about it. I haven't decided yet.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-05T18:49:39.990Z · score: 0 (8 votes) · EA · GW

[epistemic status: meta]

Summary: Reading comments in this thread, which are similar to reactions I've seen you and other rationality bloggers receive from effective altruists on critical posts regarding EA, I think there is a pattern to how rationalists tend to write on important topics that doesn't gel with the typical EA mindset. Consequently, it seems the pragmatic thing for us to do would be to figure out how to alter how we write to get our message across to a broader audience.

"Compared to a Ponzi scheme" seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.

Upvoted.

I don't know if you've read some of the other comments in this thread, but some of the most upvoted ones are about how I need to change up my writing style. So unfortunate compressions of what I actually write aren't new to me, either. I'm sorry I compressed what you actually wrote. But even an accurate compression of what you wrote might make my comments too long for what most users prefer on the EA Forum, and if I just linked to your original post, it would be too long for most to read.

I spend more of my time on EA projects. If there were more promising projects coming out of the rationality community, I'd spend more time on them relative to how much time I dedicate to EA projects. But I go where the action is. Socially, I'm as involved, if not more involved, with the rationality community than I am with EA.

From my inside view, here is how I'd describe the common problem with my writing on the EA Forum: I came here from LessWrong. Relative to LW, I haven't found how or what I write on the EA Forum to be too long. But that's because I'm anchoring off an expectation that EA discourse looks like SSC 100% of the time. Since the majority of EAs don't self-identify as rationalists, and the movement is so intellectually diverse, the expectation is that the EA Forum won't be formatted around any discourse style common to the rationalist diaspora.

I've touched upon this issue with Ray Arnold before. Zvi has touched on it too in some of his blog posts about EA. A crude rationalist impression might be that the problem with discourse on the EA Forum is that it isn't LW. In terms of genres of creative non-fiction writing, the EA Forum is less tolerant of diversity than LW. That's fine. Thinking about this consequentially, I think rationalists who want their message heard more by EA don't need to learn to write better, but to write differently.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-05T00:24:15.964Z · score: 1 (3 votes) · EA · GW

Thanks. I wasn't aware of that. I'll redact that part of my comment.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T23:47:16.381Z · score: 3 (7 votes) · EA · GW

Yeah, that has become abundantly clear to me from how many upvotes these comments have been receiving. I've received feedback on this before, but never with such a strong signal. Sometimes I have different goals with my public writing at different times, so it's not always my intention for how I write to be maximally accessible to everyone. I usually know who reads my posts, and why they appreciate them, as I receive a lot of positive feedback as well. It's evident I've generalized that in this thread to the point it's hurting the general impact of spreading my message. So I completely agree. Thanks for the feedback :)

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T23:38:59.602Z · score: 1 (3 votes) · EA · GW

Would it help if I included a summary of my posts at the top of them?

Often I write for a specific audience, which is more limited and exclusive. I don't think there is anything necessarily wrong with taking this approach to discourse in EA. Top-level posts on the EA Forum are made specific to a single cause, written in an academic style for a niche audience. I've mentally generalized this to how I write about anything on the internet.

It turns out not writing in a more inclusive way is harming the impact of my messages more than I thought. I'll make more effort to change this. Thanks for the feedback.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T23:21:25.997Z · score: 3 (5 votes) · EA · GW

Of course. What I was trying to explain is when there is a time crunch, I've habituated myself to use more words. Obviously it's a habit worth changing. Thanks for the feedback :)

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T23:15:20.319Z · score: 1 (3 votes) · EA · GW

Upvoted. I'm sorry for the ambiguity of my comment. I meant that the posts here under the usernames "throwaway," "throwaway2," and "anonymous" are each consistently being made by the same three people, respectively. I was just clarifying up front, as I was addressing you, so others reading would know it's almost certainly the same anonymous individual making the comments under any one account. I wouldn't expect you to forgo your anonymity.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T06:39:07.185Z · score: 0 (20 votes) · EA · GW

Leverage Research has now existed for over 7.5 years.[1] Since 2011, it has consumed over 100 person-years of human capital.

Given that, by their own admission in a comment response to their original post, the author of this post is providing these facts so effective altruists can make an informed decision regarding potentially attending the 2018 EA Summit, with the expectation that these facts can or will discourage EAs from attending, it’s unclear how these facts are relevant information.

  • In particular, no calculation or citation is provided for the estimate that Leverage has consumed over 100 person-years of human capital. Numbers from nowhere aren’t facts, so this isn’t even a fact.

  • Regardless, no context or reference is given for why these numbers matter, e.g., by contrasting Leverage with what popular EA organizations have accomplished over similar timeframes or with similar person-years of human capital consumed.

From 2012-16, Leverage Research spent $2.02 million, and the associated Institute for Philosophical Research spent $310k.[2][3]

As comments from myself; Tara MacAulay, former CEO of the CEA; and Geoff Anders, executive director of Leverage, have made clear, Leverage:

  • has never solicited, and does not intend to solicit, donations from individuals in the EA community at large.

  • has in the past identified as part of the EA movement, and was formative to the movement in its earlier years, but now identifies as distinct from EA, while still respecting EA and collaborating with EA organizations where their goals overlap with Leverage's.

  • does not present itself as effective or impactful using the evaluation criteria most typical of EA, and shouldn’t be evaluated on those grounds, as has been corroborated by EA organizations which have collaborated with Leverage in the past.

Based on this, the ~$2 million Leverage spent from 2012-16 shouldn’t be regarded, as a lump sum, as having been spent under an EA framework or on EA grounds, nor evaluated as a means to discourage individual effective altruists from forming independent associations with Leverage distinct from EA as a community. Both EA and Leverage confirm that Leverage identified as an EA organization in the past, but for the present and the last few years it should not be thought of as one. Thus, arguing Leverage is deceiving the EA movement on the grounds that they stake a claim on EA without being effective is invalid, because Leverage does no such thing.

Leverage Research previously organized the Pareto Fellowship in collaboration with another effective altruism organization. According to one attendee, Leverage staff were secretly discussing attendees using an individual Slack channel for each.

While, like the facts in the above section, this is a fact, I fail to see how it’s notable regarding Leverage's recruitment transparency. I’ve also in the past criticized double standards regarding transparency in the EA movement, arguing that organizations in EA should not form secret fora to the exclusion of others. That’s because necessary privacy among and between EA organizations should be achievable using things like private email, Slack channels, etc. What’s more, every EA organization I or others I’ve talked to have volunteered for has something like a Slack channel. When digital communications internal to an organization are necessary to its operation, it has become standard practice for every organization in that boat to use something like an internal mailing list or Slack channel exclusive to their staff. That the Pareto Fellowship or Leverage Research would have Slack channels for evaluating potential fellows for recruitment on an individual basis may be unusual among EA organizations, but it’s not unheard of in how competent organizations operate. Also, it has no bearing on whether Leverage appeals to transparency while being opaque in a way other organizations associated with EA aren’t.

Also, since you’re seeking as much transparency about Leverage as possible, I expect your presentation will be transparent in kind. Thus, would you mind identifying the EA organization in question which was part of the collaboration with Leverage and the Pareto Fellowship you’re referring to?

Leverage Research sends staff to effective altruism organizations to recruit specific lists of people from the effective altruism community, as is apparent from discussions with, and observations of, Leverage Research staff at these events.

As with the last statement, this may be unusual among EA organizations, but it is from Leverage’s past, when they identified as an EA organization, which they no longer do. There is nothing about this which is inherently a counter-effective organizational or community practice inside or outside of the EA movement, nor does it have direct relevance to transparency, nor to the author’s goal with this post.

Leverage Research has spread negative information about organisations and leaders that would compete for EA talent.

Who?

Leverage Research has had a strategy of using multiple organizations to tailor conversations to the topics of interest to different donors.

As with other statements, I don’t understand how transparently exposing this practice is meant, as a fact, to back the author’s goal with this post, nor how it should move readers’ impression of Leverage one way or the other.

Leverage Research had longstanding plans to replace Leverage Research with one or more new organizations if the reputational costs of the name Leverage Research ever become too severe.

Given that a number of the claims in this post and in the comments from the same author presented as facts aren’t indeed facts, and that so many of the facts stated are presented without context or relevance to the author’s goal, I’d like to see this claim substantiated by any evidence whatsoever. Otherwise, I won’t find it credible enough to be believable.

In short, regarding the assorted facts, the author of this post (by their own admission in a comment response) is trying to prove something, and I can’t perceive how these facts and other claims advance that goal. So my question to the author is: what is your point?

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T05:46:42.034Z · score: -1 (13 votes) · EA · GW

Thanks. This is useful feedback :)

Yeah, to be fair, I was writing these comments in rapid succession, based on information unique to me, to quickly prevent the mischaracterization of the EA Summit next week. I am both attending the EA Summit and significantly personally invested in it as representing efforts in EA I'd like to see greatly advanced. I also have EA projects I've been working on that I intend to talk about there. (In spite of acknowledging my own motive here, I still made all my previous comments with as much fidelity as I could muster.)

All this made me write these comments hastily enough that I wrote in long sentences. Mentally, when writing quickly, that's how I condense as much information into as few clauses as possible when making arguments. You're not the first person to tell me that writing shorter and simpler sentences would be easier to read. In general, when I'm making public comments without a time crunch, these days I'm making more of a conscious effort to be comprehensible :)

But I may be totally ideosyncratic here (English isn't my first language), so do ignore this if it doesn't strike you as useful.

This is useful feedback, but English not being your first language is a factor too, because that isn't how "idiosyncratic" is spelled. :P

I also would not expect effective altruists not fluent in English to be able to follow a lot of what I write (or a lot of posts written in EA, for that matter). Often, because the continually complicated discourse in EA happens exclusively in English, I forget to write for a readership which largely doesn't speak English as a first language. I'll keep this more in mind for how I write my posts in the future.

Comment by evan_gaensbauer on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-04T05:38:43.294Z · score: 1 (3 votes) · EA · GW

[Part II of II]

In the week since I made my original post, Joey Savoie of Charity Science made this post, itself rapidly upvoted, suggesting there are significant misgivings within EA about how the movement is represented, and about how, and whom, EA as a whole community ought to trust to represent us. Whether it's part of a genuine pattern or not, the perception that the CEA (or any other organization representing EA) is failing to represent EA in accordance with what the EA movement, as its supporters, thinks tears at the fabric of EA as a movement.

Indeed, two early grants from these funds were to emerging orgs: BERI and EA Sweden, so I think it's good that some warning was here. That said, even at the time this was written, I think “likely” was too strong a word, and “may” would have been more appropriate. It’s just an error that I failed to catch. In a panel discussion at EA Global in 2017, my answer to a related question about funding new vs. established orgs was more tentative, and better reflects what I think the page should have said.

I also think there are a couple of other statements like this on the page that I think could have been misinterpreted in similar ways, and I have regrets about them as well.

In my follow-up, I'll clarify that the misunderstanding, among both donors to the Funds and other effective altruists, about how the Long-Term Future and EA Community Funds would be allocated is the result of misinterpretation of ambiguous communications that, in hindsight, should have been handled differently. To summarize my feelings here: if ultimately this much confusion resulted from some minor errors in diction, one would hope that in an EA organization there would be enough oversight to ensure its own accountability, such that minor errors in word choice would not lead to such confusion in the first place.

Ultimately, it was the responsibility of the CEA's Tech Team to help you ensure these regretted communications never led to this, and looking at the organization online, nobody but the CEA as a whole organization is responsible for ensuring the Tech Team prioritizes that well. And if the CEA got so wrong which of its own activities the rest of the EA community considered most important to building and leading the movement, that also leads me to conclude the CEA as a whole needs to be more in touch with the EA movement as a whole. I don't know if there is any more to ask about what's happened with not only the two EA Funds you manage, but also the CEA's continued lag behind the community's and donors' realistic expectations of being updated, even as the fund managers themselves had answers to provide. But one theme of my follow-up will be asking how the CEA, including its leadership, and the EA movement can work together to ensure outcomes like this don't happen again.

Comment by evan_gaensbauer on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-04T05:38:08.295Z · score: -1 (1 votes) · EA · GW

[Part I of II]

Thank you for your thoughtful response.

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, the things that changed were that EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than expected, and Open Phil funding to established organizations grew more than I expected.
  • The three factors contributed to having fewer grants to make that couldn’t be made in other ways than was expected.
  • The former two factors contributed to a desire to focus primarily on established organizations.
  • The third opposes this, but I still see the balance of considerations favoring me focusing on established organizations.

As far as I'm concerned, these factors combined more than exonerate you from aspersions that you were acting in bad faith in the management of either of these funds. For what it's worth, I apologize that you've had to face such accusations in the comments below as a result of my post. I hoped for the contrary, as I consider such aspersions at best counterproductive. I expect I'll do a follow-up as a top-level post to the EA Forum, in which case I'll make abundantly clear I don't believe you were acting in bad faith, and that, if anything, it's as I expected: what's happened is a result of the CEA failing to ensure you as a fund manager and the EA Funds were in sufficiently transparent and regular communication with the EA community and/or donors to these funds.

Personally, I disagree with the perspective that the Long-Term Future and EA Community Funds should be operated differently from the other two funds, i.e., seeking to fund well-established as opposed to nascent EA projects/organizations. I do so while also agreeing it is a much better use of your personal time to focus on making grants to established organizations, and to follow the cause prioritization/evaluation model you've helped develop and implement at Open Phil.

I think one answer is for the CEA to hire or appoint new/additional fund managers for one or both of the Long-Term Future and EA Community Funds, to relieve the pressure on you to do everything, so that you divide your time between the Funds and your important work at Open Phil less than you do now, and to foster more regular communication with the community regarding these Funds. While I know you and Benito commented that it's difficult to identify someone to manage the funds whom both the CEA and the EA community at large would consider qualified, I explained my conclusion in this comment as to why I think it's both important and tractable for us as a community to pursue the improvement of the EA Funds by seeking more qualified fund managers.

What I've learned from the responses to my original post in the last week, more than I expected, is that many effective altruists, not as a superficial preference but out of an earnest conviction, think it would be more effective for the EA Funds to focus on funding smaller, newer EA projects/organizations at a stage of development prior to when Open Phil might fund them. This appears true among EAs regardless of cause, and it happens to be the Long-Term Future and EA Community Funds being managed differently that brought this to the fore.

At first glance, among both existing and potential donors to the Long-Term Future and EA Community Funds, the grantees being MIRI, CFAR, 80k, CEA, and the Founders Pledge leaves the community nonplussed (example), because those are exactly the charities EA donors could and would have guessed are default targets for movement-building and long-term future donations. The premise of the EA Funds was that the fund managers, based on their track records, could and would identify targets for donations within these focus areas with time and analysis the donors could not themselves afford. This was an attempt to increase the efficiency of donation in EA, and to reduce potentially redundant cause prioritization efforts in EA.

But it's become apparent to many effective altruists in the wake of my post, beyond any intention I had, and combined with other dissatisfaction with the EA Funds in the last year, that this didn't happen. Given that donors to the Long-Term Future and EA Community Funds would likely not have identified donation targets like EA Sweden and BERI that you mentioned, I consider it unlikely the money from the two funds you manage would have ended up this year at charities much different than the ones you're disbursing the EA Funds to as of August 8th.

So I don't think the Long-Term Future and EA Community Funds were a waste of money. What they did quantifiably waste was a lot of time, as: (i) the donors to the EA Funds could've donated to one of the Funds' new grantees earlier, presumably benefiting the organization in question more; or (ii) they could have taken a bit of time to do their own analysis which, however inadequate compared to what they at one point expected from the EA Funds, would leave them more satisfied than the current outcome.

Although it's qualitative and symbolic, I maintain the most consequential outcome of the differences the EA community at large has had with how the EA Funds are being administered as a project of the CEA is the shock it causes to the well of trust and goodwill between EA organizations and effective altruists, as individuals and as a whole.

I understand how this is confusing, and I regret the way that we worded it. I can see that this could give someone the impression that the fund would focus primarily on emerging organizations, and that isn’t what I intended to communicate.

What I wanted to communicate was that I might fund many emerging organizations, if that seemed like the best idea, and I wanted to warn donors about the risks involved with funding emerging organizations.

I no longer in any sense hold you personally responsible for the mismatch between how you thought you would manage the Long-Term Future and EA Community Funds and how much of the EA community, including donors to the EA Funds, thought you would manage them. Unfortunately, that does not, to me, excuse the failure to ensure the fidelity of this communication. Again, I believe the fidelity model of spreading EA is one of the best ideas to come out of EA movement-building in years. But just as miscommunication on the CEA's part has apparently undermined their ability, as a representative agency of the EA movement, to pursue their own mission and goals, it's very concerning when the CEA can't adhere to the movement-building model they prescribe for themselves, and which they would hope the rest of EA might also follow.

I don't even hold Sam Deere, Marek Duda, or JP Addison themselves particularly responsible for the failure to update donors and the EA community, or to check the fidelity of your updates and thinking on how to manage the EA Funds. While that was their responsibility, given the delays in the email responses and their preoccupation with the important tasks of updating the EA Forum 2.0 and all the other tech projects under the CEA umbrella, it would appear the CEA tech team wasn't afforded the opportunity, or led to believe they should prioritize, clear and regular maintenance of the EA Funds' online communications/updates relative to their other tasks. Obviously, this is in even starker contrast than I expected when I made this post to how much of a priority many effective altruists think the CEA should have made of the EA Funds.

The difference between these outcomes and other mistakes the CEA has made in the past, or between the EA Funds and other big funds in EA, is that these Funds were made from the donations of individual effective altruists, either modestly or as a major, conscious shift among those earning to give, and faced skepticism from the beginning that they would be more effective than how those donors would counterfactually donate their own money. The CEA assured EA community members that wouldn't be the case. And those community members who went on to donate to the EA Funds are now learning the pessimistic forecasts about the potential greater effectiveness of the Long-Term Future and EA Community Funds were correct. And this by the lights of the CEA as an organization lacking the self-awareness to know they were failing the expectations they had set for themselves, and on which grounds they asked for the trust of the whole EA movement.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T02:13:45.683Z · score: 1 (1 votes) · EA · GW

I meant the EA Foundation, who I was under the impression received incubation from the CEA. Since my admittedly hazy perception of those events might be wrong, I've switched the example of one of CEA's incubees to ACE.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T02:11:52.630Z · score: -4 (20 votes) · EA · GW

By this I expect Tara means that Leverage Research has historically solicited all their funding from major private donors, such as Peter Thiel as of a few years ago, and, I assume, other philanthropists in the intervening years. Leverage both associates with EA and appreciates what EA as a movement has done for Leverage, just as what Leverage has done to help build up EA is appreciated, as others have expressed in the other comments on the original post.

As Geoff Anders pointed out in his own comment response, Leverage works in the same spirit as EA but with higher variance than the EA movement, so as an organization Leverage works on projects other EA organizations don't, while signaling that their stark difference from the rest of EA is non-threatening by not soliciting donations from the EA community at large. When I met Geoff Anders in person in 2014, he explained to me this is the basis for Leverage's profile within EA, and this is part of the rationale Leverage also uses to privately court funding for their operations. As of 2014, the donor in question was Peter Thiel, who I presume provided enough funding at the time that Leverage didn't need to seek other donors. Since then, I haven't been in direct communication with Geoff or Leverage. So I don't know who, whether Peter Thiel or someone else, is funding Leverage Research. But between my own impressions and the anecdata provided in this thread, I presume Leverage continues to privately secure all the funding they need, while occasionally partnering with EA(-adjacent) organizations on projects related to startups and the long-term future, as they have in the past.

Before Paradigm was officially a distinct organization from Leverage, as Leverage was incubating Paradigm at the time, they received their funding from the same source. I'm aware that, for clients who aren't effective altruists, Paradigm charges for some of their workshops and does consultancy for for-profit startups and their founders in the Bay Area. This is a source of income I understand Paradigm uses for their other projects, including providing free or discounted workshops to effective altruists. Between these things, I assume Paradigm doesn't intend for the indefinite future to publicly solicit funding from the EA community at large, either.

I assume this is what Tara meant by Leverage, Paradigm, and related projects not being a good use of EA money. This reaffirms the impression that Leverage doesn't seek donations from individual effective altruists, not in an attempt to deceive the community in any way, but to signal respect for the epistemic differences between Leverage and the EA movement at large, while collaboration between Leverage and EA organizations continues.

I don't know what Tara means by Leverage, Paradigm, or related projects not being a good use of EA time. I'm assuming she is reaffirming the public impression Leverage's executive director, Geoff Anders, provided in his own comment response to the original post. That is, while individual effective altruists who staff Leverage or Paradigm may work on EA projects in their free time (similar to how Google famously provides their software engineers with 20% free time to develop projects as they see fit, resulting in products like Gmail), effective altruists who don't, independent of their association with EA, consider Leverage in the same range of effectiveness as the charities EAs typically donate to should not presume Leverage promises or solicits to use EA time and money along EA lines. This is consistent with much the same Geoff mentioned in his own comment.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-04T01:35:56.486Z · score: 3 (11 votes) · EA · GW

I've long been confused about the reputation Leverage has in the EA community. After hearing lots of conflicting reports, both extremely positive and negative, I decided to investigate a little myself. As a result, I've had multiple conversations with Geoff, and attended a training weekend run by Paradigm. I can understand why many people get a poor impression, and question the validity of their early stage research. I think that in the past, Leverage has done a poor job communicating their mission, and relationship to the EA movement. I'd like to see Leverage continue to improve transparency, and am pleased with Geoff's comments below.

As someone whose experience as an outsider to Leverage, who has not done paid work for any EA organizations in the past, is similar to Tara's, I can corroborate her impression. I've not been in the Bay Area or had a volunteer or personal association with any EA organizations located there since 2014. Thus, my own investigation was from afar, following the scattered info on Leverage available online, including past posts regarding Leverage on LW and the EA Forum, and online conversations with former staff, interns, and visitors to Leverage Research. The impression I got from what is probably a very different data-set than Tara's is virtually identical. Thus, I endorse hers as a robust yet fair characterization of Leverage Research.

Despite some initial hesitation, I found the Paradigm training I attended surprisingly useful, perhaps even more so than the CFAR workshop I attended. The workshop was competently run, and content was delivered in a polished fashion. I didn't go in expecting the content to be scientifically rigorous, most self improvement content isn't. It was fun, engaging, and useful enough to justify the time spent.

I've also heard from several CFAR workshop alumni that they found the Paradigm training they received more useful than the CFAR workshop they attended. A couple of them also noted their surprise at this, given their trepidation knowing Paradigm sprouted from Leverage, what with its past reputation. A confounding factor in these anecdotes is that the CFAR workshops my friends and acquaintances had attended were from a few years ago, and in that time those same people revisiting CFAR, along with more recent CFAR workshop alumni, have remarked how different and superior CFAR's more recent workshops are to their earlier ones. Nonetheless, the impression I've received is of nearly unanimous positive experiences at Paradigm workshops among attendees who are part of the EA movement, competitive in quality with CFAR workshops, even though CFAR has years of troubleshooting and experience on Paradigm.

I've been wanting to see new and more movement building focused activities in EA. CEA can't do it all alone, and I generally support people in the EA community attempting ambitious movement building projects. Given this, and my positive experience attending an event put on by Paradigm, I decided to provide some funding for the EA Summit personally.

I want to clarify that the CEA has not been alone in movement-building activities; the CEA itself has ongoing associations on movement-building with the Local Effective Altruism Network (LEAN) and with the Effective Altruism Foundation out of the German-speaking EA world. Paradigm Academy's staff, in seeking to kickstart grassroots movement-building efforts in EA, are aware of this, as LEAN is a participating organization in the EA Summit as well. Additionally, while Charity Science (CS) has typically streamlined their focus on direct global poverty interventions, their initial incubation and association with Rethink Charity and LEAN, as well as their recent foray into cause-neutral effective charity incubation, could arguably qualify them as focused on EA movement-building as well.

This is my conjecture based on where it seems CS is headed. I haven't asked them, and I recommend anyone curious ask CS themselves if they identify movement-building as part of their current activities in EA. I bring this up as relevant because CS is also officially participating in the EA Summit.

Also, Tara, thanks for providing funding for this event :)

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-03T23:20:48.885Z · score: 0 (24 votes) · EA · GW

The reason for posting these facts now is that as of the time of writing, Leverage's successor, the Paradigm Academy is seeking to host the EA Summit in one week. The hope is that these facts would firstly help to inform effective altruists on the matter of whether they would be well-advised to attend, and secondly, what approach they may want to take if they do attend.

I've provided my explanations for the following in this comment:

  • No evidence has been provided that Paradigm Academy is Leverage's successor. While the OP stated facts about Leverage, all the comments declaring more facts about Leverage Research are merely casting spurious associations between Leverage Research and the EA Summit. Along with the facts, you've smuggled in an assumption amounting to nothing more than a conspiracy theory: that Leverage has rebranded themselves as Paradigm Academy and is organizing the 2018 EA Summit for some unclear and ominous reason. In addition to no logical reason or sound evidence being provided for how Leverage's negative reputation in EA should be transferred to the upcoming Summit, my interlocutors have admitted themselves, or revealed through their accounts, that their evidence from personal experience is weak. I've provided my direct personal experience knowing the parties involved in organizing the EA Summit, and having paid close attention from afar to Leverage's trajectory in and around EA, contrary to the unsubstantiated thesis that the 2018 EA Summit is some opaque machination by Leverage Research.

  • There is no logical connection between the facts about Leverage Research and the purpose of the upcoming EA Summit. Further, the claims presented as facts about the upcoming Summit aren't actually facts.

Leverage Research has recruited from the EA community using mind-maps and other psychological techniques, obtaining dozens of years of work, but doing little apparent good. As a result, the author views it as inadvisable for EAs to engage with Leverage Research and its successor, Paradigm Academy.

At this point, I'll just point out that the idea Paradigm is necessarily, in any sense, Leverage's successor is based on no apparent evidence. So the author's advice doesn't logically follow from the claims made about Leverage Research. What's more, as I demonstrated in my other comments, this event isn't some unilateral attempt by Paradigm Academy to steer EA in some unknown direction.

Rather, they should seek the advice of mentors outside of the Leverage orbit before deciding to attend such an event.

As one of the primary organizers for the EA community in Vancouver, Canada; the primary organizer for the rationality community in Vancouver; a liaison representing these communities to adjacent communities; and an organizer for many novel efforts to coordinate effective altruists, including the EA Newsletter, I don't know if I'd describe myself as a "mentor." But I know others who see me that way, and it wouldn't be unfair of me to say that, both digitally and geographically, on the west coast, in Vancouver, and in Canada, I am someone who creates opportunities for many individuals to connect with EA.

Also, if it wasn't clear, I'm well outside the Leverage orbit. If someone wants to accuse me of being a hack for Leverage, I can make some effort to prove I'm not part of their orbit (though I'd like to state that I would still see that as unnecessarily poor faith in this conversation). Anyway, as an outsider and veteran EA community organizer, I'm willing to provide earnest and individuated answers to questions about why I'm going to the 2018 EA Summit, or why and what kinds of other effective altruists should also attend. I am not speaking for anyone but myself. I'm willing to do this in-thread as replies to this comment; or, if others would prefer, on social media or in another EA Forum post. Because I don't have much time, and I'd like to answer such questions transparently, I will only answer questions publicly asked of me.

Based on past events such as the Pareto Fellowship, invitees who ultimately decide to attend would be well-advised to be cautious about recruitment, by keeping in touch with friends and mentors throughout.

Contrary to what the author of this post and comment stated, it doesn't follow that this event will be anything like the Pareto Fellowship, as there aren't any facts linking Leverage Research's past track record as an organization to the 2018 EA Summit.

For what it's worth to anyone, I intend to attend the 2018 EA Summit, and I offer as a friend my support and contact regarding any concerns other attendees may have.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-03T22:38:38.305Z · score: 8 (26 votes) · EA · GW

helps them recruit people

Do you mind clarifying what you mean by "recruits people"? I.e., do you mean they recruit people to attend the workshops, or to join the organizational staff?

I have spoken with four former interns/staff who pointed out that Leverage Research (and its affiliated organizations) resembles a cult according to the criteria listed here.

In this comment, I laid out the threat to EA as a cohesive community when those within it, like the worst detractors of EA and adjacent communities, level blanket accusations that an organization is a cult. Also, that comment only mentioned a handful of people describing Leverage as like a cult, admitting they could not recall any specific details. I already explained that such a report doesn't qualify as a fact, nor even an anecdote, but as hearsay, especially since further details aren't being provided.

I'm disinclined to take seriously more hearsay of a vague impression of Leverage as cultish, given the poor faith in which my other interlocutor was acting. Since none of the former interns or staff behind this hearsay of Leverage being like a cult are coming forward to corroborate which features of a cult from the linked Lifehacker article Leverage shares, I'm unconvinced that your reports, or the others, of Leverage being like a cult aren't being taken out of context from the individuals you originally heard them from, nor that this post and the comments aren't a deliberate attempt to do nothing but tarnish Leverage.

The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as "participating organizations," implying they're equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)

Paradigm Academy was incubated by Leverage Research, as many organizations in and around EA have been by others (e.g., MIRI incubated CFAR; CEA incubated ACE, etc.). As far as I can tell, like those other organizations, Paradigm and Leverage should now be viewed as two distinct organizations. So that itself is not a fact about Leverage, which I also went over in this comment.

The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as "participating organizations," implying they're equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)

As I stated in that comment as well, there is a double standard at play here. EA Global each year is organized by the CEA. They aren't even the only organization in EA with the letters "EA" in their name, nor are they the only EA organization considered able to wield the EA brand. And yet despite all this, nobody objects on priors to the CEA as a single organization branding these events each year. Nor should we. Of course, none of this is necessary to invalidate the point you're trying to make. Julia Wise, as the Community Liaison for the CEA, has already clarified that the CEA themselves support the Summit.

So the EA Summit has already been legitimized by multiple EA organizations as a genuine EA event, including the one which is seen as the default legitimate representation for the whole movement.

(low confidence) I've heard through the grapevine that the EA Summit 2018 wasn't coordinated with other EA organizations except for LEAN and Charity Science.

As above, that the EA Summit wasn't coordinated by more than one organization means nothing. There are already EA retreat- and conference-like events organized by local university groups and national foundations all over the world which have gone well, such as the Czech EA Retreat in 2017. So the idea that EA should be so centralized that only registered non-profits with some given caliber of prestige in the EA movement, or those they approve of, can organize events viewed as legitimate by the community is unfounded. Not even the CEA wants EA that centralized. Nobody does. So whatever point you're trying to prove about the EA Summit using facts about Leverage Research is still invalid.

For what it's worth, while no other organizations are officially participating, here are some effective altruists who will be speaking at the EA Summit, and the organizations they're associated with. Their presence would be sufficient to warrant identifying those organizations as, in spirit, welcome and included at EAG, so the same standard should apply to the EA Summit.

  • Ben Pace, Ray Arnold and Oliver Habryka: LessWrong isn't an organization, but it's played a formative role in EA, and with LW's new codebase being the kernel of the next version of the EA Forum, Ben and Oliver, as admins and architects of the new LW, are as important representatives of this online community as any in EA's history.

  • Rob Mather is the ED of the AMF. AMF isn't typically regarded as an "EA organization" because they're not a metacharity dependent directly on the EA movement. But for GiveWell's top-recommended charity since EA began, which continues to receive more donations from effective altruists than any other, not to be given consideration would be senseless.

  • Sarah Spikes runs the Berkeley REACH.

  • Holly Morgan is a staffer for the EA London organization.

In reviewing these speakers, and seeing so many from LEAN and Rethink Charity, with Kerry Vaughan being a director for individual outreach at CEA, I see what the EA Summit is trying to do. They're trying to use speakers at the event to rally local EA group organizers from around the world to more coordinated action and spirited projects. Which is exactly what the organizers of the EA Summit have been saying the whole time. This is also why I was invited to attend the EA Summit, as an organizer for rationality and EA projects in Vancouver, Canada, working on a project to scale a system for organizing local groups to do direct work both here and in cities everywhere, and as a very involved volunteer online community organizer in EA. It's also why one of the event organizers consulted with me, before they announced the EA Summit, on how they thought it should be presented to the EA community.

This isn't counterevidence to skepticism of Leverage itself. This is evidence counter to the thesis, advanced in these facts about Leverage Research, that the EA Summit is nothing but a launchpad for Leverage's rebranding within the EA community as "Paradigm Academy." No logical evidence has been presented that the tenuous links between Leverage and the organization of the 2018 EA Summit entail that the negative reputation Leverage has acquired over the years should be transferred onto the upcoming Summit.

Comment by evan_gaensbauer on Leverage Research: reviewing the basic facts · 2018-08-03T22:03:28.166Z · score: 4 (26 votes) · EA · GW
  1. The CEA, the very organization you juxtaposed with Leverage and Paradigm in this comment, has in the past been compared to a Ponzi scheme. Effective altruists who otherwise appreciated that criticism thought much of its value was lost in the comparison to a Ponzi scheme, and that without it, the criticism may have been better received. Additionally, LessWrong and the rationality community; CFAR and MIRI; and all of AI safety have for years been smeared as a cult by their detractors. The rationality community isn't perfect. There is no guarantee interactions with a self-identified (aspiring) rationality community will go as "rationally" as an individual or small group of people interacting with the community, online or in person, hopes or expects. But the vast majority of effective altruists, even those who are cynical about these organizations or sub-communities within EA, disagree with how these organizations have been treated, for it poisons the well of goodwill in EA for everyone. In this comment, you stated your past experience with the Pareto Fellowship and Leverage left you feeling humiliated and manipulated. I've also been a vocal critic in person throughout the EA community of both Leverage Research and how Geoff Anders has led the organization. But to elevate personal opposition to them into a public exposure of opposition research, in an attempt to tarnish an event they're supporting alongside many other parties in EA, is not something I ever did, or will do. My contacts in EA and I have followed Leverage. I've desisted from making posts like this myself, because in digging for context I found Leverage has changed from the impressions I'd gotten of them. That's also why at first I was skeptical of attending the EA Summit. But upon reflection, I realized the evidence didn't support concluding Leverage is so incapable of change that anything they're associated with should be distrusted. What you're trying to do with Leverage Research is no different than what EA's worst critics do, not in an effort to change EA or its members, but to tarnish them. From within or outside of EA, to criticize any EA organization in such a fashion is below any acceptable epistemic standard in this movement.

  2. If the post and comments here are stating facts about Leverage Research, and you're reporting impressions that Leverage is like a cult with no ability to remember specific details, those are barely facts. The only fact is that some people perceived Leverage to be like a cult in the past, and those perceptions are only anecdotes. Without details, they're only hearsay. Combined with the severity of the consequences if this hearsay were borne out, being unable to produce actual facts invalidates the point you're trying to make.

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

2018-07-23T00:39:10.742Z · score: 62 (63 votes)

Effective Altruism as Global Catastrophe Mitigation

2018-06-08T04:35:16.582Z · score: 7 (9 votes)

Remote Volunteering Opportunities in Effective Altruism

2018-05-13T07:43:10.705Z · score: 24 (24 votes)

Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations

2018-05-05T03:23:15.858Z · score: 10 (12 votes)

Wild Animal Welfare Project Discussion: A One-Year Strategic Review

2018-05-05T00:56:04.991Z · score: 8 (10 votes)

Ten Commandments for Aspiring Superforecasters

2018-04-25T05:07:39.734Z · score: 10 (10 votes)

Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply

2018-04-13T22:10:16.460Z · score: 11 (11 votes)

Lessons for Building Up a Cause

2018-02-10T08:25:53.644Z · score: 13 (15 votes)

Room For More Funding In AI Safety Is Highly Uncertain

2016-05-12T13:52:37.487Z · score: 6 (6 votes)

Effective Altruism Is Exploring Climate Change Action, and You Can Be Part of It

2016-04-22T16:39:30.688Z · score: 9 (9 votes)

Why You Should Visit Vancouver

2016-04-07T01:57:28.627Z · score: 9 (9 votes)

Effective Altruism, Environmentalism, and Climate Change: An Introduction

2016-03-10T11:49:45.914Z · score: 17 (17 votes)

Consider Applying to Organize an EAGx Event, And An Offer To Help Apply

2016-01-22T20:14:07.121Z · score: 4 (4 votes)

[LINK] Will MacAskill AMA on Reddit

2015-08-03T20:45:42.530Z · score: 3 (3 votes)

Effective Altruism Quotes

2015-08-01T13:49:23.484Z · score: 1 (1 votes)

2015 Summer Welcome Thread

2015-06-16T20:29:36.185Z · score: 2 (2 votes)

[Announcement] The Effective Altruism Course on Coursera is Now Open

2015-06-16T20:20:00.044Z · score: 4 (4 votes)

Don't Be Discouraged In Reaching Out: An Open Letter

2015-05-21T22:26:50.906Z · score: 5 (5 votes)

What Cause(s) Do You Support? And Why?

2015-03-22T00:13:37.886Z · score: 2 (2 votes)

Announcing the Effective Altruism Newsletter

2015-03-11T06:05:51.545Z · score: 10 (10 votes)

March Open Thread

2015-03-01T17:14:59.382Z · score: 1 (1 votes)

Does It Make Sense to Make Multi-Year Donation Commitments to One Organization?

2015-01-27T19:37:30.175Z · score: 2 (2 votes)

Learning From Less Wrong: Special Threads, and Making This Forum More Useful

2014-09-24T10:59:20.874Z · score: 6 (6 votes)