Comment by evan_gaensbauer on Who in EA enjoys managing people? · 2019-04-21T05:29:30.672Z · score: 4 (2 votes) · EA · GW

FWIW, I would enjoy more opportunities to organize events and conferences, and manage operations teams.

Comment by evan_gaensbauer on Political culture at the edges of Effective Altruism · 2019-04-21T05:03:44.638Z · score: 7 (4 votes) · EA · GW

My understanding of how EA typically responds to anti-capitalist critiques of EA:

  • EAs are very split on capitalism, but a significant minority aren't fans of it, and the majority think (very) significant reforms/regulations of the free market in some form(s) are justified.
  • The biggest difference on economics between EA and left-wing political movements is that EA sees worldwide market liberalization over the last several decades as a, or the, main source of rising quality of life and material standards of living, and of a historically unprecedented decrease in absolute global poverty. So EAs are likelier than most other left-leaning crowds to have confidence in free(r) market principles as fundamentally good.
  • Lots of EAs see their participation in EA as the most good they can do with their private/personal efforts, and often they're quite active in politics, often left-wing politics, as part of the good they do as their public/political efforts. So, while effective giving/altruism is the most good one can do with some resources, like one's money, other resources, like one's time, can be put towards efforts aimed at systemic change. Whenever I've seen this pointed out, the distinction has mysteriously always been lost on anti-capitalist critics of EA. If there is a more important and different point they're trying to make, I'm missing it.
  • A lot of EAs make the case that the kind of systemic change they are pursuing is what they think is best. This includes typical EA efforts, like donating to Givewell-recommended charities. The argument is that these interventions are based on robust empirical evidence and are demonstrably cost-effective, such that they improve the well-being of people in undeveloped or developing countries, and thus their ability to autonomously pursue systemic change in their own societies. There are also a lot of EAs focused on farm animal welfare, which they believe is the most radically important form of systemic change they can work on. As far as I'm aware, there are no significant or prominent public responses to these arguments from a left-wing perspective. Any such sources would be appreciated.
  • A lot of anti-capitalist criticism of EA concerns how it approaches the eradication of extreme global poverty. In addition to not addressing EA's arguments for how its current efforts aim at effecting systemic change in the world's poorer/poorest countries, anti-capitalist critics haven't offered up much in the way of concrete, fleshed-out, evidence-based approaches to systemic change that would motivate EA to adopt them.
  • Anti-capitalist critics are much likelier than EAs to see the wealth redistributed through private philanthropy as having been accumulated unjustly and/or through exploitative means. Further, they're likelier than most of the EA community to see relative wealth inequality within a society as a fundamentally more important problem, and thus to see directly redressing it as a fundamentally higher priority. Because of these different background assumptions, they're likelier to perceive EA's typical approaches to doing the most good as insufficiently supportive of democracy and egalitarianism. As a social movement, EA is much more like a voluntary community of people who contribute resources privately available to them than it is a collective political effort. A lot of EAs are active in politics aimed at systemic change, publicly do so as part and parcel of their EA motivations, and not only permit but actively encourage public organization and coordination of these efforts among EAs and other advocates/activists. That anti-capitalist critics haven't responded to these points seems to hinge on how they haven't accepted the distinction between the use of personal/private resources and public/political resources.

There isn't much more EA can do to respond to anti-capitalist critics until anti-capitalist critics broach these subjects. The ball is in their court.

Comment by evan_gaensbauer on Political culture at the edges of Effective Altruism · 2019-04-21T03:56:43.950Z · score: 4 (3 votes) · EA · GW

Anecdotally, I'd say I know several EAs who have shifted in the last few years from libertarianism or liberalism to conservatism, and some of them have been willing to be vocal about this in EA spaces. However, just as many of them have exited EA because they were fed up with not being taken seriously. I'd estimate that, of the dozens of EAs I know personally quite well and the hundreds I'm more casually familiar with, 10-20% would count as 'conservative,' or at least 'right-of-centre.' Of course, this is a change from what was before apparently zero representation for conservatives in EA. Unfortunately, I can't provide more info, as conservatives in EA are not wont to publicly discuss their political differences with other EAs, because they don't feel like their opinions are taken seriously or respected.

Comment by evan_gaensbauer on Political culture at the edges of Effective Altruism · 2019-04-21T03:53:07.373Z · score: 4 (2 votes) · EA · GW

Upvoted for starting an interesting and probing conversation. I do have several nitpicks.

Perhaps the most common criticism of EA is that the movement does not collectively align with radical anticapitalist politics

Maybe I've just stopped paying attention to basic criticisms of EA along these lines, because every time EA's best responses to these criticisms have been produced in an attempt at good-faith debate, the critics apparently weren't interested in a genuinely serious dialogue that could change EA. Yet in the last couple of years, while the absolute amount of anticapitalist sentiment has increased, I've noticed less criticism of EA on the grounds that it's not anticapitalist enough. I think EA has begun to cement a reputation as a community that is primarily left-leaning, and certainly welcomes anticapitalist thought, but won't on the whole mobilize towards anticapitalist activism, at least until anticapitalist movements themselves produce effective means of 'systemic change.'

An autistic rights activist condemned EA by alleging incompatibility between cost-benefit analysis and disability rights

I'm skeptical that friction between EA and actors who misunderstand it this much has consequences bad enough to worry about, since I don't expect the criticism to be taken seriously enough by anyone else for it to have much of an impact at all.

Key EA philosopher Peter Singer has been viewed negatively by left-wing academia after taking several steps to promote freedom of speech (Journal of Controversial Ideas, op-ed in defense of Damore)
Key EA philosopher Peter Singer was treated with hostility by left-wing people for his argument on sex with severely cognitively disabled adults
Peter Singer has been treated with hostility by traditional conservatives for his arguments on after-birth abortion and zoophilia

I'm also concerned about the impact of Singer's actions on EA itself, but I'd like to see more focused analysis exploring what the probable impacts of controversies around Singer are.

MacAskill's interview with Joe Rogan provoked hostility from viewers because of an offhand comment/joke he made about Britain deserving punishment for Brexit
William MacAskill received pushback from right-wing people for his argument in favor of taking refugees

Ditto my concerns about controversies surrounding Singer for Will as well, although I am generally much less concerned with Will than Singer.

Useful x-risk researchers, organizations and ideas are frequently viewed negatively by leftists inside and outside academia

I know some x-risk reducers who think a lot of left-wing op-eds are beginning to create a sentiment in some relevant circles that a focus on 'AI alignment as an existential risk' is a pie-in-the-sky, rich techie white guy concern about AI safety, and that more concern should be put on how advances in AI will affect issues of social justice. The worry is that diverting the focus of AI safety efforts away from AGI as an existential risk, towards what are perceived as more parochial concerns, could be grossly net negative.

Impacts on existential risk:
None yet, that I can think of

Depending on what one considers an x-risk, popular support for right-wing politicians who pursue counterproductive climate or other anti-environmental policies, or who tend to be more hawkish, jingoistic, and nationalistic in ways that increase the chances of great-power conflict, negatively impacts x-risk reduction efforts. It's not clear that this has a direct impact on any EA work focused on x-risks, though, which is the kind of impact you meant to assess.

Left-wing political culture seems to be a deeper, more pressing source of harm.

I understand you provided a caveat, but I think this take still misses a lot.

  • If you asked a lot of EAs, I think most of them would say right-wing political culture poses a deeper potential source of harm to EA than left-wing political culture. Left-wing political culture is only a more pressing source of harm because EA is disproportionately left-leaning, so the social networks EAs run in, and thus decision-making in EA, are more likely to be currently impacted by left-wing political culture.
  • It misses what counts as 'left-wing political culture,' especially in Anglo-American discourse, as the left-wing landscape is rapidly and dramatically shifting. While most EAs are left-leaning, and a significant minority would identify with the socialist/radical/anti-capitalist/far-left basket, a greater number, perhaps a plurality, would identify as centre-left/liberal/neoliberal. From the political right, and from other angles, both these camps are 'left-wing.' Yet they're sufficiently different that when accuracy matters, as it does regarding EA, we should use more precise language to differentiate between centre-left/liberal and radical/anticapitalist/far-left 'left-wing political culture.' For example, in the U.S., it currently seems the 'progressive' political identity can apply to everyone from a neoliberal to a social democrat to a radical anticapitalist. On leftist forums I frequent, liberals are often labelled 'centrists' or 'right-wing,' and are perceived as having more in common with conservatives and moderates than they do with anti-capitalists.
  • Anecdotally, I would say the grassroots membership of the EA movement is more politically divergent, less moderate, and generally "to the left" of flagship EA organizations/institutions, in that I talk to a lot of EAs who feel EA is still too far to the right for their liking, and who actually agree with the changes left-wing critics would demand of us and wish EA were much more in line with them.
Comment by evan_gaensbauer on Who is working on finding "Cause X"? · 2019-04-19T04:32:04.519Z · score: 4 (2 votes) · EA · GW

The concerns you raise in your linked post are actually concerns a lot of other people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.

Comment by evan_gaensbauer on Who is working on finding "Cause X"? · 2019-04-19T04:27:32.280Z · score: 4 (2 votes) · EA · GW

Givewell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would've started before 'Cause X' was a common term in EA. They definitely qualify. One thing is that Givewell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations to do still holds up. It may be falsified in the near future, though. Aside from Givewell and Open Phil, the organizations that can permanently focus on cause prioritization are:

  • institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
  • small, private non-profit organizations like Rethink Priorities.

Honestly, I am impressed and pleasantly surprised organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause unique to EA, I didn't know if there was hope for it to keep sustainably growing. So far, the growth of the field has proven sustainable. I hope it keeps up.

Comment by evan_gaensbauer on Should EA grantmaking be subject to independent audit? · 2019-04-19T02:33:21.988Z · score: 7 (4 votes) · EA · GW

I just wanted to channel Aaron's comment in clarifying the following:

  • While I don't mind the characterization, I didn't originally intend my comment as a kind of audit.
  • I was under the impression the money had not been disbursed yet, and it was never my intention to criticize grantmaking decisions after disbursement, or to evaluate individual grant recommendations from this round in particular, only a general trend in the LTF Fund.
Comment by evan_gaensbauer on Should EA grantmaking be subject to independent audit? · 2019-04-19T02:28:42.073Z · score: 15 (6 votes) · EA · GW

A lot of this is the private sensitivity many community members feel about publicly criticizing the Open Philanthropy Project. I'd chalk it up to the relative power Open Phil wields having complicated impacts on all our thinking on this subject: given how little the EA community comments on it, the lack of public feedback Open Phil receives seems out of sync with the idea that they are the sort of organization that would welcome it. Another thing is that the quality of both criticism and defense of grantmaking decisions is quite low. It seems to me EA has overgeneralized its conflict avoidance to exclude scenarios where adversarial debate or communication is fruitful for a community overall, and so when adversarial debate is instrumental, EA is poor at it, to the point that it doesn't recognize good debate.

A pattern I've seen is for critics of something in EA to parse disagreement with some aspect(s) of their criticism as a wholesale political rejection of everything they're saying, or to take it as a personal attack on them in retaliation for attacking a shibboleth of EA. These reactions are usually patently false, but this hasn't stopped EA from garnering a reputation for being hypocritically closed to criticism, and impossible to effect change in.

While I wouldn't say I generally agree with all of Open Phil's grants, and simply by chance most EAs or other people wouldn't, because there are so many, the impression I've gotten is that the EA community and Good Ventures don't have identical priorities. EA is primarily concerned with global poverty alleviation, AI alignment, and animal welfare. An example of something Open Phil or Good Ventures prioritizes more than EA does is criminal justice reform. While EA agrees criminal justice reform is one of the more promising areas in public policy to do good, it's not literally one of EA's top priorities. So, criminal justice reform is a top priority more particular to Dustin Moskovitz and Cari Tuna.

My impression is that as long as the motivations in Open Phil's grantmaking don't pull away from effectiveness and other EA values in the cause areas the community cares most about, EAs don't mind as much what Open Phil does. A good example of the EA community being willing to strongly criticize Open Phil, when it sees ineffective grantmaking infringing on a cause area EA is more passionate about, is the criticism Open Phil received from multiple directions over how they made their grant to OpenAI.

Comment by evan_gaensbauer on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T08:14:25.450Z · score: 34 (13 votes) · EA · GW

Summary: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling 'counterfactually unique' grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.

Based on that measure, 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. Having read all the comments, I saw multiple concerns come up with a few specific grants, based on uncertainty or controversy in the estimation of their value. Even if we exclude those grants from the estimate of counterfactually unique grant recommendations to make a 'conservative' estimate, 16 of 23 grants (69.5%), worth $535,150 of $923,150 (~58% of the money to be disbursed), are counterfactually unique and fit into a more conservative, risk-averse approach that would have ruled out the more uncertain or controversial successful grant applicants.

These numbers represent an extremely significant improvement over a year ago in the quality and quantity of unique grantmaking opportunities the Long-Term Future Fund has identified. This grant report generally succeeds at achieving the goal of coordinating donations through the EA Funds to unique recipients who would otherwise have been overlooked for funding by individual donors and larger grantmakers. This report is also the most detailed of its kind, and creates an opportunity for a detailed assessment of the Long-Term Future Fund's track record going forward. I hope the other EA Funds emulate and build on this approach.

General Assessment

In his 2018 AI Alignment Literature Review and Charity Comparison, Ben Hoskins had the following to say about changes in the management structure of the EA Funds.

I’m skeptical this will solve the underlying problem. Presumably they organically came across plenty of possible grants – if this was truly a ‘lower barrier to giving’ vehicle than OpenPhil they would have just made those grants. It is possible, however, that more managers will help them find more non-controversial ideas to fund.

To clarify, the purpose of the EA Funds has been to allow individual donors relatively smaller than grantmakers like the Open Philanthropy Project (i.e., all donors in EA except other professional, private, non-profit grantmaking organizations) to identify higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for its respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund in this round of grantmaking, all but $250,000 of it went to projects or organizations unlike those the Open Philanthropy Project tends to fund. To clarify, there isn't a rule or practice of the EA Funds not making those kinds of grants. It's at the discretion of the fund managers to decide whether to recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects/organizations. In this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement.

20 (~87% of total number) grant recommendations totalling $673,150 = ~73%

+ 3 (~13% of total number) grant recommendations totalling $250,000 = ~27%

= 23 (total number) grant recommendations totalling $923,150 = 100%
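
To make these figures easy to check, here is a minimal Python sketch of the arithmetic, using only the aggregate totals quoted in this comment (the per-grant amounts aren't reproduced here):

```python
# Liberal estimate: counterfactually unique grants vs. Open Phil-style grants,
# using the aggregate totals quoted above.
total_grants = 23
total_dollars = 923_150

unique_grants = 20        # recommendations unlikely to be found by individual donors or Open Phil
unique_dollars = 673_150  # total minus the $250,000 to established organizations (MIRI, CFAR, Ought)

print(f"{unique_grants}/{total_grants} grants = {unique_grants / total_grants:.0%}")
print(f"${unique_dollars:,} / ${total_dollars:,} = {unique_dollars / total_dollars:.0%}")
# 20/23 grants = 87%
# $673,150 / $923,150 = 73%
```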

Since this is the most extensive round of grant recommendations from the Long-Term Future Fund to date under the EA Funds' new management structure, it is the best apparent opportunity for evaluating the success of the changes made to how the EA Funds are managed. In this round of grantmaking, 87% of the grant recommendations, totalling 73% of the money to be disbursed, were for efforts that would otherwise have been missed by individual donors or larger grantmaking bodies.

In other words, the Long-Term Future (LTF) Fund is directly responsible for 87% of the 23 grant recommendations made, totalling 73% of the $923.15K, worth of unique grants that, presumably, would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund's current rate of efficiency in fulfilling the purposes it was set up for.

Criticisms and Conservative Estimates

Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as a measure of how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants that donors to the EA Funds very probably could not have identified by themselves. Yet another question is whether they would opt to donate to the grant recommendations that have just been made by the LTF fund managers. Part of the basis for the EA Funds thus far is to trust the fund managers' individual discretion, based on their years of expertise or professional experience working in the respective cause area. My above estimates are based on the assumption that all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a 'liberal' estimate.

I've at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence on which some grant recommendations were made wasn't sufficient to justify the grant, i.e., that they were 'too risky.' If we exclude grant recommendations that are subject to multiple unresolved concerns, we can make a 'conservative' estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund.

  • Concerns with 1 grant recommendation worth $28,000 to hand out printed copies of the fanfiction HPMoR to international math competition medalists.
  • Concerns with 2 grant recommendations worth $40,000 for individuals who are not currently pursuing one or more specific, concrete projects, but rather are pursuing independent research or self-development. The concern is that the grant is based on the fund manager's (or managers') personal confidence in the individual, and even the write-ups for the grant recommendations expressed concern with the uncertainty in the value of grants like these.
  • Concerns that, with multiple grants made to similar forecasting-based projects, there would be redundancy; in particular, concern with 1 grant recommendation worth $70,000 to the forecasting company Metaculus, which might be better suited to an equity investment in a startup than to a grant from a non-profit foundation.

In total, these are 4 grants worth $138,000 that multiple commenters have raised concerns with, on the basis that the uncertainty around these grants means the grant recommendations don't seem justified. To clarify, I am not making an assumption about what the value of these grants is. All I would say about these particular grants is that they are unconventional, but that insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up in the future with grant recipients and publish their takeaways from the grants.

Of the 20 recommendations made for unique grant recipients worth $673,150, if we exclude these 4 recommendations worth $138,000, that leaves 16 of 23 recommendations (69.5%), worth $535,150 of $923,150 (~58% of the total), uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this 'conservative' estimate are ruled out based on the uncertainty or lack of confidence in them from commenters, not necessarily from the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients for which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the fund managers' best hopes for the value of each of these grants will be borne out is something to follow up on in the future.
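
As a sanity check on the conservative estimate, here is a minimal Python sketch that removes the four flagged grants from the liberal estimate above; the grouping of flagged amounts simply follows the bullet list in this comment, not any official LTF Fund categorization:

```python
# Conservative estimate: drop the 4 grants ($138,000) that drew multiple
# unresolved concerns in the comments, then recompute the unique share.
total_grants, total_dollars = 23, 923_150
unique_grants, unique_dollars = 20, 673_150  # liberal estimate from above

flagged = {
    "printed HPMoR copies": 28_000,
    "independent individuals (2 grants)": 40_000,
    "Metaculus": 70_000,
}
flagged_grants = 4
flagged_dollars = sum(flagged.values())  # $138,000

conservative_grants = unique_grants - flagged_grants      # 16
conservative_dollars = unique_dollars - flagged_dollars   # $535,150

print(f"{conservative_grants}/{total_grants} grants = {conservative_grants / total_grants:.1%}")
print(f"${conservative_dollars:,} / ${total_dollars:,} = {conservative_dollars / total_dollars:.0%}")
# 16/23 grants = 69.6% (quoted as 69.5% in the text)
# $535,150 / $923,150 = 58%
```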

Conclusion

While these numbers don't address suggestions for how the management of the Long-Term Future Fund can still be improved, overall I would say they show the Long-Term Future Fund has made an extremely significant improvement since last year at achieving a high rate of counterfactually unique grants to more nascent or experimental projects that are typically missed by EA donations. With some suggested improvements, like hiring professional clerical assistance with managing the Fund, I think the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring their track record of success with their grants remains to be done, but this report provides a great foundation to start doing so.

Comment by evan_gaensbauer on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T07:05:54.197Z · score: 3 (2 votes) · EA · GW

If you don't mind me asking, what goal did you intend to achieve or accomplish with this comment?

Comment by evan_gaensbauer on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T04:28:15.999Z · score: 5 (3 votes) · EA · GW

This strikes me as a great, concrete suggestion. As I tell a lot of people, great suggestions in EA only go somewhere if someone does something with them. I would strongly encourage you to develop this suggestion into its own article on the EA Forum about how the EA Funds can be improved. Please let me know if you are interested in doing so, and I can help out. If you don't think you'll have time to develop this suggestion, please let me know, as I would be interested in doing that myself.

Comment by evan_gaensbauer on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T03:57:09.103Z · score: 2 (1 votes) · EA · GW

The way the management of the EA Funds is structured makes sense to me within the goals set for the EA Funds. So I think the only situation in which it makes sense for 2 people to be paid full-time for one month to evaluate EA Funds applications is one where 2 of the 4 volunteer fund managers take a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it'd be too difficult to accomplish effectively in the timeframe of a few months.

In general, one issue the EA Funds face that other granting bodies in EA don't is that their donations come from many different donors. This means how much the EA Funds receive and distribute, and how it's distributed, is much more complicated than what the CEA or a similar organization typically faces.

Comment by evan_gaensbauer on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T03:47:27.296Z · score: 4 (2 votes) · EA · GW

One issue with this is that the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn't a "job" in the most typical sense. Of course, a lot of people think it should be treated like one. When this came up in past discussions regarding how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran up against trade-offs with other priorities for the EA Funds, like not spending too much overhead on them, or having the diversity of perspectives that comes with multiple volunteer fund managers.

Comment by evan_gaensbauer on Who is working on finding "Cause X"? · 2019-04-17T01:33:22.491Z · score: 8 (4 votes) · EA · GW

I've always thought of "Cause X" as a theme for events like EAG that are meant to prompt thinking in EA, and wasn't ever intended as something to take seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been. I don't think it should be treated as such either. I don't see how it makes sense to anyone as a practical pursuit.

There have been some cause prioritization efforts that took 'Cause X' seriously. Yet given the presence of x-risk reduction in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, due to its nature, and based on the overall soundness of the fundamental assumptions behind x-risk, it's basically binary whether it is or isn't the top priority. For prioritizers willing to work on the assumption that the premises determining x-risk as the top moral priority are all true, cause prioritization has focused on how actors should work on x-risk reduction.

Since the question became reformulated as "Is x-risk reduction Cause X?," much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

In general, I've never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.

While they're disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.

It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into different kinds of extant movements that align with their perspective better, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'

They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes the following forms:

1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. An example of this work is the work of any EA-aligned organization permanently dedicated to work in one or more specific causes, and efforts to support them.

2. If one is confident one's current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn't know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.

3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what that option will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.

Comment by evan_gaensbauer on EA Forum Prize: Winners for February 2019 · 2019-04-07T08:16:23.705Z · score: 6 (3 votes) · EA · GW

EA Forum content generally considered most valuable tends to be the kind that advances the objectives of one or more of EA's cause areas, or the philosophy of the movement in general. Content focused on EA itself as a social community is a different kind of content that is typically regarded as less valuable. I think this judgement can be inferred from which articles tend to win the EA Forum Prizes. The sticking point is that this post is perceived as a particularly valuable example (perhaps the most valuable example) of a kind of post that is generally regarded as less valuable.

Of course the post in question advances the objectives of EA. At least in the evaluation of the judges, a handful of other posts this month were more valuable still. It wasn't disqualified.

Whether by coincidence of typically being on the topic of 'community,' or for another reason, I agree we should neither shy away from incentivizing posts that reflect disagreements in EA, or that are critical of EA as it is, nor directly disincentivize disagreement. I do believe there is a tendency towards that. While I am wary of incentivizing discussion of disagreement for its own sake, since that could introduce the perverse incentive of people posting articles that don't do the disagreement justice, overall I believe it's fairly achievable.

I've got a lot on my plate, and it is also not as much a personal priority for me in EA, so I wouldn't do it, but I would recommend you (or someone else concerned) write an EA Forum article discussing what you think the criteria or priorities should be for the EA Forum Prizes, relative to the kinds of articles that win the prize now, and in particular why it is important they should include incentivizing high-quality treatments of critical disagreements in EA. I would be willing to proofread or otherwise aid in writing the article.

Comment by evan_gaensbauer on EA Forum Prize: Winners for February 2019 · 2019-04-07T07:51:25.780Z · score: 2 (1 votes) · EA · GW

Thanks for your response. I was under a false impression. My apologies for the mistake.

Comment by evan_gaensbauer on EA Forum Prize: Winners for February 2019 · 2019-04-01T04:59:38.431Z · score: 4 (2 votes) · EA · GW

Edit: The original text of this comment below remains unedited, but I made the mistake of stating the CEA sets the conditions of the EA Forum Prizes, when they only provide the funding for them.

Summary: It makes sense that the EA Forum is currently set up to promote or incentivize content that clearly advances one or more of EA's current objectives, framed so it's generally accessible. That content is prioritized based on the view that this is the most important role or function the EA Forum serves as a platform. This is different from the priority of promoting and incentivizing popular content, which raises awareness of, and starts a conversation about, what is a top priority for the greatest number of community members (active on the EA Forum). This post advances the latter rather than the former goal, which is probably why it wouldn't receive an EA Forum Prize. It seems that starting a conversation about what the priorities for promotion and incentives on the EA Forum should be, and what the criteria for selecting those priorities should be, would be the best way to broach this subject.

Why different posts receive the reward, and why this post didn't receive the reward, is a matter of what kind of posts people want to reward and incentivize, and why. It also makes sense to keep in mind the rewards are given and the EA Forum maintained by the Centre for Effective Altruism (CEA) as an institution. I'm aware with the current strategy for the EA Forum, the goal is to promote content that is:

  • generally accessible.
  • content that is more basic, and doesn't assume advanced background knowledge of one or more particular cause areas.
  • makes intellectual and/or material progress on the general goals of effective altruism, or successfully appeals to a wide audience about why and how a particular means can be applied to achieve those goals.

This is based on the ultimate goal of having the EA Forum be a platform primarily focused on community-building, both in terms of growing the effective altruism movement and enhancing the level of involvement from people who only relate to EA in a more casual way (e.g., inducing those who merely 'subscribe' to EA as a philosophy to personally 'identify' with it, and to change what they themselves personally do to align with EA values).

This contrasts with how the current EA community tends to use Facebook groups, which are used for conversations that tend to either be more specialized and technical, e.g., about a specific cause area or career, or social and informal. For a bulk of the current active EA community, their use of the EA Forum is based on prioritizing conversation of affairs in EA that are both official, and general, in that the conversation is, at least in theory, relevant to everyone in EA. It makes sense to a lot of the EA community this should be a primary purpose for the EA Forum, and they've gotten accustomed to using it that way.

The problem is what much of the EA community sees as a primary priority for the EA Forum's role/function is not the top priority for what is promoted or incentivized as part of the EA Forum's moderation strategy. The EA Forum serves as a public square for whatever topics and subjects are a priority for the EA community at large. The content being incentivized through rewards, or promoted to the frontpage, is content that advances EA's objectives, as opposed to discussions themed on grievances with the EA community's social dynamics, what a lot of people in EA call a more 'meta-level' discussion or issue. The dedicated space for this on the EA Forum 2.0 so far has been the 'Community' section.

One obvious factor here is that promoting or incentivizing content that raises awareness of disagreements and controversies within EA could be off-putting to a general readership, or get them more involved in ways that distract from rather than advance progress on EA's objectives. For what it's worth, I think this was an unusually fruitful hashing out in public of a common grievance in EA. I also don't believe the CEA is declining to reward posts critical of community dynamics out of a desire to starve these discussions of awareness and attention. They consider these conversations important; it's just that they consider posts that directly advance the objectives of EA as a movement in various ways more valuable.

So, based on the moderation strategy of the EA Forum, there are criteria for awarding EA Forum Prizes that are not aligned with the content that tends to be most popular, for whatever reasons. It's similar to how the Academy Awards don't usually go to the films that earn the most money at the box office. The next step seems to be having a conversation aiming to reconcile what the EA Forum's moderation strategy prioritizes with why the community at large thinks the most upvoted EA Forum posts are the most important and should be incentivized.

Comment by evan_gaensbauer on EA Forum Prize: Winners for February 2019 · 2019-04-01T04:07:11.374Z · score: 5 (3 votes) · EA · GW

Dovetailing Milan, I remember from a discussion in the comments of that post itself that it was reckoned, even taking into account changes to the karma system in the EA Forum 2.0, that post received the highest absolute number of upvotes of any post in the history of the EA Forum.

Comment by evan_gaensbauer on Apology · 2019-03-30T00:22:59.786Z · score: -4 (5 votes) · EA · GW

For posterity, to reiterate what Habryka said, I am familiar with the case to which he is referring.

Comment by evan_gaensbauer on Announcement: Join the EA Careers Advising Network! · 2019-03-21T17:24:54.550Z · score: 2 (1 votes) · EA · GW

If you've been working in the field you're currently in for several years, and have a good handle on how to make career transitions, you're probably good. A lot of this will be students asking what major they should select, which grad school they should go to, or which first jobs out of school they should apply to. I'll also match advisees and advisors up based on their needs and advantages, so as long as you fill out the survey in detail, I should be able to match you up with someone you can help well.

Comment by evan_gaensbauer on Announcement: Join the EA Careers Advising Network! · 2019-03-20T02:07:53.594Z · score: 14 (5 votes) · EA · GW

I made a separate comment for my thoughts on worst-case scenarios, because I have a lot to say on the subject.

I imagine the worst-case scenario is something like an advisor giving radically bad career advice to numerous advisees based on idiosyncratic priorities or beliefs about their own field, and then advisees waste significant amounts of their own time or money acting on that feedback, when they could have easily spent those same resources better. Of course that already happens in EA. So there already isn't enough quality control in EA for this kind of thing. That isn't to say I shouldn't try to ensure greater quality control in my own project, but it's important to know the pre-existing context in EA.

I should say one reason I haven't thought about worst-case scenarios you've brought up so far is because I've taken for granted they're unlikely to occur. It seems obvious to me people would tend to act in good faith if they were bothering to participate in this network, but even if they were to act in bad faith, anyone saying anything like your suggestions would disqualify them (in my eyes, at least) from participating as an advisor, for the simple reason none of those things have anything to do with careers.

If I include a survey, I definitely should include a feedback survey. I have intended to talk to 80,000 Hours to ask them questions about how they set up career coaching, and that will inform how I develop this network too. In the feedback survey for advisees, I'll include a question about whether their coach did anything inappropriate, especially trying to push the conversation in a direction that had nothing to do with trying to figure out their careers. If a career coach recommended someone donate a kidney, invest in a dubious crypto startup, or try saving the world by taking a bunch of psychedelics, that would get flagged, and they would be removed from the pool of prospective advisors.

At the same time, effective altruists have written blog posts on the EA Forum about how to donate kidneys, or recommending people do so. Getting recruited for weird projects can happen at EA events, including official ones like EAG. I can definitely ask others how they've minimized the risk of strange things happening. Yet all throughout, the small risk of these adverse experiences persists. I know the point you were making wasn't about these specific examples, but my point is that there is already a small risk in EA of things like this happening that is hard to eliminate. So I don't know why someone would single out a career advising network to exploit, or why this of all things is likelier to produce viral headlines about how bad it is. It just seems so unlikely that I would feel strange introducing a quality control measure like having advisors click a box or sign a digital form saying they were aware they were only doing career advising, and not scamming advisees or something.

Again, I will include a quality feedback survey, so anything like this should get caught.

I do take seriously concerns about possible sexual harassment. It also seems strange to me that it would be as likely to happen over an online session, but I will ask other EA groups if there is anything I should do to minimize these kinds of risks in the advising network. That would also get included in a quality feedback survey. I'm unsure if I should include a separate question about sexual harassment. This is something I will definitely think about a lot more before I set up any in-person advising sessions. In general, it seems like there's a lot more risk with in-person advising sessions, so I will take longer to develop quality control measures before I set those up. By count, at most 18/71 possible pairings I could make now would result in in-person advising sessions. Chances are the number of in-person sessions it would make sense to set up at this point would be even lower still.

Comment by evan_gaensbauer on Announcement: Join the EA Careers Advising Network! · 2019-03-20T01:47:32.271Z · score: 9 (4 votes) · EA · GW

Maybe 'advisor' was the wrong word, and I should go with something like 'mentor.' Does 'mentor' connote something more casual, which doesn't claim to be the height of professionalism, while still aspiring to maintain more quality than 'advisor'?

At any rate to respond to your points, I intend to implement the following:

  • A quality feedback survey for people's experience with this system.
  • A short guide pointing advisors and advisees to relevant, pre-existing EA career resources as a primer before calls begin (e.g., an email recommending potential advisors read the 80,000 Hours career profile relevant to the professional or research field they wish to advise on).

Re: coordinating with local groups: to my knowledge, most university and local EA groups don't have an ongoing system for careers advising, but only do one-off workshops. In trying to implement this system, I've already run into a couple of local groups who do already have an ongoing system for careers workshops, and who believe their infrastructure is sufficient in place of the one I'm trying to build. Otherwise, the goal is to coordinate with local groups to build such infrastructure insofar as it's effective to do so. I have already sent these surveys to dozens of Facebook groups for local EA groups, so I am in contact with any local EA groups that are doing something similar. Since effective altruists tend to be disproportionately concentrated in places like the Bay Area, there was a significant chance advisors and advisees would be able to meet in person. A couple of people suggested I include an option for meeting in person. That introduces a dimension of matching people I haven't thought through as much. Thus far, it appears the vast majority of matches will be online, which is something I feel more prepared for. So I've included local match-ups as an option, but I'm unsure if they'll turn out to be much of a factor.

Announcement: Join the EA Careers Advising Network!

2019-03-17T20:40:04.956Z · score: 29 (29 votes)
Comment by evan_gaensbauer on Neglected Goals for Local EA Groups · 2019-03-10T04:28:36.818Z · score: 2 (1 votes) · EA · GW

Thanks for the feedback. It's been my impression that the CEA, through community-building grants, has become clearer than it was before. I think these things either aren't talked about as much, or don't permeate through the community as fast, so much of the community isn't aware of these issues. I think clearly and consistently updating the community on changes is underrated, as it seems to have a dramatic impact on how many actors in EA make choices or allocate resources.

Comment by evan_gaensbauer on Neglected Goals for Local EA Groups · 2019-03-10T04:26:21.596Z · score: 2 (1 votes) · EA · GW

I agree more independent local EA groups need to define success and its consequences for themselves. Using proxy metrics is also just a way of getting local EA groups to share some common ground so we can evaluate across them, e.g., for grant-making purposes, or so local EA groups have a template for what success looks like.

Comment by evan_gaensbauer on Neglected Goals for Local EA Groups · 2019-03-10T04:24:49.609Z · score: 2 (1 votes) · EA · GW

I would be interested in collaborating on this, and perhaps doing most of the actual writing, or at least quite a lot of it, as I don't find writing to be as slow and painful a process.

Comment by evan_gaensbauer on How Can Each Cause Area in EA Become Well-Represented? · 2019-03-08T01:21:39.893Z · score: 2 (1 votes) · EA · GW
  • Recently, there has been a lot of talk about how the talent pipeline in EA is inefficiently managed. One part of this is that central EA organizations have focused on developing the talent pipeline for a narrow selection of metacharities and NPOs focused on AI alignment/x-risk reduction. Developing infrastructure for the ecosystems of other cause areas in EA, to optimize their talent pipelines, could make the overall problem of talent allocation in EA less of a problem.
  • Cause areas within EA could develop their own materials and handbooks to circulate among supporters, and organize events or conferences that allow for better, more specialized networking than more cause-neutral events like EAG can offer.
Comment by evan_gaensbauer on Radicalism, Pragmatism, and Rationality · 2019-03-04T01:05:39.004Z · score: 2 (1 votes) · EA · GW

I think the work ALLFED does definitely qualifies for the kind of effective altruism I'm talking about.

Neglected Goals for Local EA Groups

2019-03-02T02:17:12.624Z · score: 35 (19 votes)
Comment by evan_gaensbauer on Is EA a community of elites? · 2019-03-01T09:29:54.022Z · score: 7 (4 votes) · EA · GW

"Coastal elite" brings to mind often mostly white, politically progressive people from well-to-do backgrounds. Most EAs are white and liberal, and I know a lot of them have gone to elite colleges or top universities. Demographically, they're similar. There are aspects of EA characterized as elitist, but I can't think of much else that comes to mind that makes EA clearly the same as the "coastal elite" stereotype. At least, while people may have anecdotes, I can't think of any more data from the EA community that's directly relevant.

Radicalism, Pragmatism, and Rationality

2019-03-01T08:18:22.136Z · score: 14 (9 votes)
Comment by evan_gaensbauer on How Can Each Cause Area in EA Become Well-Represented? · 2019-02-28T07:49:55.623Z · score: 4 (2 votes) · EA · GW

Right, in organizing events like EAG, the CEA may optimize for matching up labour supply and demand for x-risk. They may not have the capacity or know-how to do this for every cause area at every event. This could create the impression that there are only jobs at x-risk orgs, or that only people with the respective credentials are welcome in EA. So the appearance that EA only focuses on one cause or another is due to an artificial as opposed to a real problem. People are likely to blame or point fingers, when I think that misunderstands the nature of the problem, which requires a different kind of solution.

Comment by evan_gaensbauer on How Can Each Cause Area in EA Become Well-Represented? · 2019-02-28T07:46:01.451Z · score: 2 (1 votes) · EA · GW

Yeah, that's great. I think the next step would be to find a way to translate and integrate this info into other forms, like ensuring it gets out at EAG, or that university EA groups are made aware of it, but that's a more complex process.

Comment by evan_gaensbauer on How Can Each Cause Area in EA Become Well-Represented? · 2019-02-27T01:03:37.118Z · score: 2 (3 votes) · EA · GW

I'm happy to write this up. One thing I think is driving these considerations is a mismatch of priorities, leading people not to communicate about the right things to get on the same page. For example, central EA orgs like 80k and the CEA, with their priority on x-risk reduction, may pay most attention to helping x-risk orgs find hires. This comes out in what they do. There is nothing wrong with this, because I don't think events like EAG necessarily have to trade off between focusing on different causes. It's just that there is more to EA than x-risk, in terms of both supply of and demand for labour. If things like the EA Facebook groups directory, which includes different groups for different professional fields within EA for people to network within, are something neither people working at EA organizations nor many community members at large are aware of, nobody will bring them up. So it can create a mutual impression that there is less opportunity in EA for different kinds of people to work on different kinds of things than there actually is. A lot of this is the self-fulfilling prophecy of confidence: believing there is room to create new opportunities in EA is the very thing that drives people to create those opportunities. If nobody is pointing out how possible these opportunities are, nobody will believe they're possible.

Admittedly, since I know more about these resources, making them more accessible to the community is something I'd like to see. The Local EA Network (LEAN), a project of Rethink Charity (RC), has been revamping the EA Hub this year, an online portal that could make accessing these things much easier for all EAs. I don't know if the EA Hub is back up and running for anyone to access, or when that would be. This post itself was more my preliminary thoughts on how people could better reframe disagreements within EA.

Comment by evan_gaensbauer on Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? · 2019-02-24T20:17:46.884Z · score: 2 (1 votes) · EA · GW

Thanks.

Building Support for Wild Animal Suffering [Transcript]

2019-02-24T11:56:33.548Z · score: 14 (8 votes)

Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA?

2019-02-24T00:12:58.780Z · score: 6 (1 votes)

How Can Each Cause Area in EA Become Well-Represented?

2019-02-22T21:24:08.377Z · score: 14 (8 votes)
Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T19:16:50.656Z · score: 2 (1 votes) · EA · GW

Summary: One theory from social science and history for the pattern of moral shifts over the last few hundred years is that the apparent moral shifts have followed a transition from more traditionalist and religious worldviews to more liberal ones, largely driven by the economic and political changes produced by industrialization and modernization. While this narrative model has limitations, it seems significant enough to change how EA thinks about moral circle expansion.

One possibility is that the pattern of moral shifts observed in the last couple hundred years in the Western world, and to a lesser extent in other parts of the world, is driven by modernization. With the modernization, industrialization, urbanization, and rationalization (i.e., integration of society with advanced science and technology) of societies, popular consideration of different populations of moral patients has shifted along common lines. The upshot is that EA should consider the possibility that moral shifts are driven more by the influence of a changing material and technological environment, and less by whole societies intentionally shifting the exercise of their moral agency.

Modernization gave rise to the modern nation-state and greater political centralization, which in turn gave rise to various forms of liberal political ideologies. While liberalism started with the Enlightenment, its popular spread followed the Scientific and Industrial Revolutions, and greater urbanization. Increased contact between different groups, such as differing ethnic groups and the sexes in the workplace, accentuated societal prejudices by making apparent how superficial and arbitrary the material deprivation between different groups of people was, at a time of historically unprecedented growth in global material wealth. This has a lot of power to explain civil rights and more moral consideration being extended to ethnic, religious, and sexual minorities, and to women and children.

At the same time, the decline of more agrarian and religious societies alienated more people from traditional communities and religion. This is consistent with the analysis of why moral consideration has declined for elders, ancestors, deities, and other groups that traditional local communities and religion gave people more moral exposure to.

On one hand, a single narrative explaining apparent moral progress across societies as a natural political and social progression driven almost exclusively by technological and economic changes seems too convenient in the absence of overwhelming evidence. It seems intuitively unlikely to me that the apparent moral circle expansions would necessarily have happened in the course of history. On the other hand, EA could recognize the idea that moral circle expansion is an apt, evidence-based theory for explaining historical moral progress as a largely confused notion, and spend less time trying to frame moral shifts through a flawed lens. From there, we could view the theory of moral circle expansion as more of a prospective model for thinking about how various societies' moral circles may expand in the present and near future.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:47:20.790Z · score: 2 (1 votes) · EA · GW

Neat post! Feedback:

  • One population neglected in a lot of the conversation on moral circle expansion, and in Gwern's consideration, is children after infancy, and how their treatment has changed over time. I'm only knowledgeable about the history of the last couple hundred years as it relates to legal treatment, such as child labour laws. The study of how the treatment of children has changed will be complicated by the changing definition of 'children' over time; adulthood in different societies has been treated as beginning anywhere from the onset of puberty up to twenty years of age. That stated, people older than two or three and younger than the historical lower bound for the age of adulthood seem to have been stably regarded as 'children' throughout history.
  • Another kind of potential moral patient neglected in this conversation is abstract entities, such as the overall health of a tribe, local community, or society; and, more recently in history, cultures and nations, the environment, and biodiversity. One thing all these entities have in common is that there appears to be a common moral intuition that their overall moral well-being can be evaluated as something greater than the sum of the well-being of their individual members (such as humans or other animals). This differs from how EA typically approaches similar entities, more often conflating their moral well-being with the aggregate well-being of their individual members. I'm guessing there are ways moral psychology regarding these entities differs significantly from how people think morally about individual moral patients. I don't know enough about what those differences might be to comment on them, but understanding them better seems crucial to thinking about this topic.
Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:26:14.090Z · score: 3 (2 votes) · EA · GW

My impression is that the West hasn't traditionally revered elders as highly as some other societies have, but that in the distant past the West revered elders more than we do now.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:14:03.542Z · score: 3 (2 votes) · EA · GW

I agree. The absolute size of the moral catastrophe that is the wrongful treatment of prisoners is brought up a lot, but that's a different issue from either the proportion of the population presently in prison, or the amount of harm inflicted on each individual prisoner, relative to the past.

Comment by evan_gaensbauer on The Narrowing Circle (Gwern) · 2019-02-13T18:11:38.036Z · score: 4 (3 votes) · EA · GW

One argument for why people don't proportionally care about future generations is that they're such a distant concern. A pattern I notice with the moral shifts you describe is that most people have become more distant from the relevant populations over time, such as prisoners and animals. We're also more "distant" from our ancestors and deities, in the sense that we may care about them much less in large part because we're exposed to memes promoting caring about them much less frequently in our everyday lives.

Comment by evan_gaensbauer on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? · 2019-02-03T02:17:46.270Z · score: 2 (1 votes) · EA · GW

That makes sense. I'm not angling for a civil service career myself, but I see the logic. At least in the past for the U.K., 80,000 Hours has recommended entering the civil service as more impactful in expectation than trying to win in electoral politics (mostly because the expected value, for generic/randomly selected candidates, of winning and achieving their goals is so low; individuals with reason to think they'd have a decisive edge in electoral politics should consider it more).

Comment by evan_gaensbauer on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? · 2019-01-30T23:03:28.944Z · score: 3 (2 votes) · EA · GW

Yeah, I've seen EA community members talk about impacting politics on a national scale, and also on a municipal scale. Nobody talks much about the state or province level, so I don't know much about it. I imagine the ease with which one can get things done there is somewhere between the national level and the municipal level, but I've yet to check it out.

What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy?

2019-01-30T02:52:25.471Z · score: 22 (10 votes)
Comment by evan_gaensbauer on Combination Existential Risks · 2019-01-14T22:46:53.497Z · score: 7 (6 votes) · EA · GW

The Global Catastrophic Risk Institute (GCRI) has a webpage up with its research on this topic under the heading 'cross-risk evaluation and prioritization.' Alexey Turchin also made this map of 'double scenarios' for global catastrophic risk, which maps out the pairwise possibilities for how two global catastrophic risks could interact.

Comment by evan_gaensbauer on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-26T01:41:22.403Z · score: 2 (1 votes) · EA · GW

What strikes me as odd is that this organization doesn't appear to operate in a way that would be considered effective or respectable even by the standards of Christian international aid, let alone EA standards, based on what I know of them. Most Christian organizations working in the developing world may have a hand in evangelism, yes, but they partially do so by materially benefiting the charitable recipients as well, such as teaching children how to read, or building Christian schools and then teaching children in them. It's not clear from the website this org does any of that.

This creates an issue: if the Pay It Forward Foundation, or its staff or supporters, identify as both Christian and EA, then there are in fact some Christian EAs who believe evangelism in this manner is the most good they can do. Most EAs might not be comfortable with that, but the Pay It Forward Foundation might not take us seriously if we tell them they're not effective, because obviously they're going by their own standards of what they think 'effective altruism' means. If they weren't, they wouldn't bother associating with EA in the first place while being so different from the rest of EA.

While they are a minority, there are a significant number of Christian effective altruists. Since how to approach the Pay It Forward Foundation seems awkward (at least to me), I think the next best step might be to ask some Christian community members what they think of the organization, and how they believe the community should approach them, if approaching rather than ignoring them is something any of us decides is worthwhile.

Comment by evan_gaensbauer on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-25T23:00:24.121Z · score: 6 (4 votes) · EA · GW

I agree, and I was going to say something about this as well. As a Canadian, I notice the tacit America-centrism in EA discourse even more than the assumption, which Ozy rightly notes, that in much EA discourse we're all left-of-centre. At the same time, going by the 2018 EA Survey, at least one third of EA community members are in the U.S. A factor the EA Survey would miss is that the majority of the resources EA commands are also in the U.S.:

  • Between the Open Philanthropy Project and perhaps the majority of earners-to-give being in the U.S., the vast majority of funding/donations driven through EA comes through the U.S.
  • I haven't definitively checked, but I'd expect at least half the NPOs/NGOs that identify as part of, or are aligned with, EA are in the U.S. This includes the flagship organizations in major EA cause areas, such as virtually all x-risk organizations outside Cambridge and Oxford universities; GiveWell in global poverty alleviation; and ACE and the Good Food Institute working in farm animal welfare.
  • In terms of political/policy goals in the populations of different countries, the U.S. will still be of more interest to EA than any other country for the foreseeable future, because it seems one of the countries where EA is likeliest to impact public policy; where EA-impacted policy shifts may have the greatest humanitarian/philanthropic impact, due to the sheer population and economic size of the U.S.; and a country where EA-impacted policy gains can best serve as a model/template for how EA could replicate such successes in other countries.

As long as EAs writing about EA from an American perspective qualify in their articles/posts that that's what they're doing, I think the realistic thing for non-Americans among us to do is to expect that, for the foreseeable future, a seemingly disproportionate focus on American culture/politics will still dominate EA discussions.

Comment by evan_gaensbauer on Long-Term Future Fund AMA · 2018-12-25T04:30:41.623Z · score: 14 (6 votes) · EA · GW

What do you mean by 'expert team' in this regard? In particular, if you consider yourself or the other fund managers to be experts, would you be willing to qualify or operationalize that expertise?

I ask because when the EA Funds management teams were first announced, there was a question about why there weren't 'experts' in the traditional sense on the team, i.e., what makes you think you'd be as good at managing the Long-Term Future Fund as a Ph.D. in AI, biosecurity, or nuclear security (assuming that when we talk about the 'long-term future' we mostly in practice mean 'existential risk reduction')?

When someone asked that question at the time, I couldn't think of a very good answer, so I figure it'd be best to get the answer from you, in case it gets asked of any of us again, which seems likely.

Comment by evan_gaensbauer on Long-Term Future Fund AMA · 2018-12-25T01:55:35.968Z · score: 11 (6 votes) · EA · GW

Is there anything the EA community can do to make it easier for you and the other fund managers to spend time on grantmaking decisions as you'd like to, especially executive time spent on the decision-making itself?

I'm thinking of things like the CEA allocating more staff or volunteer time to helping the EA Funds managers take care of the lower-level, 'boring logistical tasks' that are part of their responsibilities, or outsourcing some of the questions you might have to EA Facebook groups so you don't have to waste time doing internet searches anyone could do. Stuff like that.

Comment by evan_gaensbauer on Announcing EA Funds management AMAs: 20th December · 2018-12-24T17:46:05.673Z · score: 8 (5 votes) · EA · GW

In the future, I think it'd make more sense to announce these kinds of AMAs with more advance notice. Most community members wouldn't notice or be prepared for an AMA announced only a day ahead. I've noticed in the last few months that many community members, in particular those who'd otherwise be inclined to donate to the EA Funds, are still quite cynical about the EA Funds being worth their money. I appreciate the changes that have been made to the EA Funds, have said as much, and am fully satisfied with the changes made in light of my requests that such changes be made. So I thought that if there was anyone in the EA community whose opinion on how much the EA Funds appear to have improved in the last several months would be worth something, it'd be mine. There is a lot of cynicism in spite of that. So I'd encourage the CEA and the EA Funds management teams to take their roles very seriously.

On another note, I want to apologize if it comes across as if I'm being too demanding of Marek in particular, to whom I am grateful for the singularly superb responsibility he has taken in making sure the EA Funds function to the satisfaction of donors as much as is feasible.

Comment by evan_gaensbauer on Announcing EA Funds management AMAs: 20th December · 2018-12-24T17:18:32.410Z · score: 3 (2 votes) · EA · GW

Is there any chance there will be an AMA for the Global Health & Development EA Fund?

Comment by evan_gaensbauer on Effective Altruism Making Waves · 2018-11-16T07:21:47.544Z · score: 2 (1 votes) · EA · GW

I didn't know about that. That's incredible!

Comment by evan_gaensbauer on Effective Altruism Making Waves · 2018-11-16T07:19:41.834Z · score: 6 (4 votes) · EA · GW

In the examples I was talking about, it was ads in one of the biggest fast food franchises in the country, and the random people I talk to about AI safety are at bus stops and airports. This isn't just from my social network. Like I said, it's only within my social network that a lot of people have heard the words 'effective altruism,' or know what they refer to. I was mostly talking about the things EA has impacted, like AI safety and the Beyond Burger, receiving a lot of public attention, even if EA doesn't receive credit. I took those outcomes receiving attention to be a sign of steps toward the movement's goals, which is a good thing regardless of whether people have heard of EA.

Effective Altruism Making Waves

2018-11-15T20:20:08.959Z · score: 6 (7 votes)

Reducing Wild Animal Suffering Ecosystem & Directory

2018-10-31T18:26:52.476Z · score: 12 (8 votes)

Reducing Wild Animal Suffering Literature Library: Original Research and Cause Prioritization

2018-10-15T20:28:10.896Z · score: 8 (8 votes)

Reducing Wild Animal Suffering Literature Library: Consciousness and Ecology

2018-10-15T20:24:57.674Z · score: 6 (6 votes)

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

2018-07-23T00:39:10.742Z · score: 62 (63 votes)

Effective Altruism as Global Catastrophe Mitigation

2018-06-08T04:35:16.582Z · score: 7 (9 votes)

Remote Volunteering Opportunities in Effective Altruism

2018-05-13T07:43:10.705Z · score: 31 (26 votes)

Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations

2018-05-05T03:23:15.858Z · score: 10 (12 votes)

Wild Animal Welfare Project Discussion: A One-Year Strategic Review

2018-05-05T00:56:04.991Z · score: 8 (10 votes)

Ten Commandments for Aspiring Superforecasters

2018-04-25T05:07:39.734Z · score: 10 (10 votes)

Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply

2018-04-13T22:10:16.460Z · score: 11 (11 votes)

Lessons for Building Up a Cause

2018-02-10T08:25:53.644Z · score: 13 (15 votes)

Room For More Funding In AI Safety Is Highly Uncertain

2016-05-12T13:52:37.487Z · score: 6 (6 votes)

Effective Altruism Is Exploring Climate Change Action, and You Can Be Part of It

2016-04-22T16:39:30.688Z · score: 10 (10 votes)

Why You Should Visit Vancouver

2016-04-07T01:57:28.627Z · score: 9 (9 votes)

Effective Altruism, Environmentalism, and Climate Change: An Introduction

2016-03-10T11:49:45.914Z · score: 17 (17 votes)

Consider Applying to Organize an EAGx Event, And An Offer To Help Apply

2016-01-22T20:14:07.121Z · score: 4 (4 votes)

[LINK] Will MacAskill AMA on Reddit

2015-08-03T20:45:42.530Z · score: 3 (3 votes)

Effective Altruism Quotes

2015-08-01T13:49:23.484Z · score: 1 (1 votes)

2015 Summer Welcome Thread

2015-06-16T20:29:36.185Z · score: 2 (2 votes)

[Announcement] The Effective Altruism Course on Coursera is Now Open

2015-06-16T20:20:00.044Z · score: 4 (4 votes)

Don't Be Discouraged In Reaching Out: An Open Letter

2015-05-21T22:26:50.906Z · score: 5 (5 votes)

What Cause(s) Do You Support? And Why?

2015-03-22T00:13:37.886Z · score: 2 (2 votes)

Announcing the Effective Altruism Newsletter

2015-03-11T06:05:51.545Z · score: 10 (10 votes)

March Open Thread

2015-03-01T17:14:59.382Z · score: 1 (1 votes)

Does It Make Sense to Make Multi-Year Donation Commitments to One Organization?

2015-01-27T19:37:30.175Z · score: 2 (2 votes)

Learning From Less Wrong: Special Threads, and Making This Forum More Useful

2014-09-24T10:59:20.874Z · score: 6 (6 votes)