Posts

How Big a Problem is Status Quo Bias in the EA Community? 2022-01-10T04:48:33.245Z
Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism 2021-12-29T20:20:54.901Z
Vancouver Winter Solstice Meetup 2021-12-15T00:15:16.812Z
How Do We Make Nuclear Energy Tractable? 2021-11-11T04:48:38.536Z
What is your perspective on the ongoing farmer protests and strikes in India over the dramatic changes the government has introduced into the economy? 2021-05-03T00:35:24.391Z
Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization? 2020-06-30T19:43:52.432Z
Expert Communities and Public Revolt 2020-03-28T19:00:54.616Z
Free E-Book: Social Movements: An Introduction, 2nd Edition 2020-03-21T23:50:36.520Z
AMA: "The Oxford Handbook of Social Movements" 2020-03-18T03:34:20.452Z
Public Spreadsheet of Effective Altruism Resources by Career Type 2019-06-03T18:43:06.199Z
What exactly is the system EA's critics are seeking to change? 2019-05-27T03:46:45.290Z
Update on the Vancouver Effective Altruism Community 2019-05-17T06:10:14.053Z
EA Still Needs an Updated and Representative Introductory Guidebook 2019-05-12T07:33:46.183Z
What caused EA movement growth to slow down? 2019-05-12T05:48:44.184Z
Does the status of 'co-founder of effective altruism' actually matter? 2019-05-12T04:34:32.667Z
Announcement: Join the EA Careers Advising Network! 2019-03-17T20:40:04.956Z
Neglected Goals for Local EA Groups 2019-03-02T02:17:12.624Z
Radicalism, Pragmatism, and Rationality 2019-03-01T08:18:22.136Z
Building Support for Wild Animal Suffering [Transcript] 2019-02-24T11:56:33.548Z
Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? 2019-02-24T00:12:58.780Z
How Can Each Cause Area in EA Become Well-Represented? 2019-02-22T21:24:08.377Z
What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? 2019-01-30T02:52:25.471Z
Effective Altruism Making Waves 2018-11-15T20:20:08.959Z
Wild Animal Welfare Ecosystem & Directory 2018-10-31T18:26:52.476Z
Wild Animal Welfare Literature Library: Original Research and Cause Prioritization 2018-10-15T20:28:10.896Z
Wild Animal Welfare Literature Library: Consciousness and Ecology 2018-10-15T20:24:57.674Z
The EA Community and Long-Term Future Funds Lack Transparency and Accountability 2018-07-23T00:39:10.742Z
Effective Altruism as Global Catastrophe Mitigation 2018-06-08T04:35:16.582Z
Remote Volunteering Opportunities in Effective Altruism 2018-05-13T07:43:10.705Z
Wild Animal Welfare Literature Library: Introductory Materials, Philosophical & Empirical Foundations 2018-05-05T03:23:15.858Z
Wild Animal Welfare Project Discussion: A One-Year Strategic Review 2018-05-05T00:56:04.991Z
Ten Commandments for Aspiring Superforecasters 2018-04-25T05:07:39.734Z
Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply 2018-04-13T22:10:16.460Z
Lessons for Building Up a Cause 2018-02-10T08:25:53.644Z
Room For More Funding In AI Safety Is Highly Uncertain 2016-05-12T13:52:37.487Z
Effective Altruism Is Exploring Climate Change Action, and You Can Be Part of It 2016-04-22T16:39:30.688Z
Why You Should Visit Vancouver 2016-04-07T01:57:28.627Z
Effective Altruism, Environmentalism, and Climate Change: An Introduction 2016-03-10T11:49:45.914Z
Consider Applying to Organize an EAGx Event, And An Offer To Help Apply 2016-01-22T20:14:07.121Z
[LINK] Will MacAskill AMA on Reddit 2015-08-03T20:45:42.530Z
Effective Altruism Quotes 2015-08-01T13:49:23.484Z
2015 Summer Welcome Thread 2015-06-16T20:29:36.185Z
[Announcement] The Effective Altruism Course on Coursera is Now Open 2015-06-16T20:20:00.044Z
Don't Be Discouraged In Reaching Out: An Open Letter 2015-05-21T22:26:50.906Z
What Cause(s) Do You Support? And Why? 2015-03-22T00:13:37.886Z
Announcing the Effective Altruism Newsletter 2015-03-11T06:05:51.545Z
March Open Thread 2015-03-01T17:14:59.382Z
Does It Make Sense to Make Multi-Year Donation Commitments to One Organization? 2015-01-27T19:37:30.175Z
Learning From Less Wrong: Special Threads, and Making This Forum More Useful 2014-09-24T10:59:20.874Z

Comments

Comment by Evan_Gaensbauer on Democratising Risk - or how EA deals with critics · 2022-01-15T10:07:14.277Z · EA · GW

Strongly upvoted, and me too. Which sources do you have in mind? We can compare lists if you like. I'd be willing to have that conversation in private but for the record I expect it'd be better to have it in public, even if you'd only be vague about it.

Comment by Evan_Gaensbauer on How Big a Problem is Status Quo Bias in the EA Community? · 2022-01-14T04:16:30.317Z · EA · GW

It wasn't a private group; it's only that people need to request to join if they're on Facebook. I agree with you though.

Comment by Evan_Gaensbauer on How Big a Problem is Status Quo Bias in the EA Community? · 2022-01-13T16:31:44.566Z · EA · GW

That's a good idea but the post was in a private group, so I figured that might complicate things if people aren't on Facebook or they have to join a whole other group anyway before they join the conversation. I'll do it next time though. Thanks for the suggestion.

Comment by Evan_Gaensbauer on Wild Animal Welfare Literature Library: Original Research and Cause Prioritization · 2022-01-13T16:30:33.859Z · EA · GW

Yeah, I've been thinking of updating the library but that would take enough effort I haven't gotten around to it yet. I could get started on it whenever if I had some help. Please let me know if you or someone else you know would like to help. I might also make an EA Forum post requesting help, if you think that'd be a better idea.

Comment by Evan_Gaensbauer on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T22:11:12.362Z · EA · GW

Strongly upvoted. Which organizations are those?

Comment by Evan_Gaensbauer on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T19:05:46.648Z · EA · GW

Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it's easy to make the mistake of saying "We'll never get to point X" and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we'll end up?

Asking that question as a stopping point doesn't resolve the ambiguity of how much of this is theoretical vs. practical.

If the increasing prominence of long-termism like that, in terms of different kinds of resources consumed relative to short-termist efforts, is only theoretical, then the issue is one worth keeping in mind for the future. If it's a practical concern, then, other things being equal, it could be enough of a priority that determining which specific organizations should distinguish themselves as long-termist may need to begin right now. 

The decisions different parties in EA make on this subject will be the main factor determining 'where we end up' anyway.

I can generate a rough assessment, for resources other than money, of what near-termism vs. long-termism is receiving and can anticipate receiving for at least the near future. I can draft an EA Forum post for that by myself but I could co-author it with you and one or more others if you'd like.

Comment by Evan_Gaensbauer on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T18:32:53.315Z · EA · GW

Strongly upvoted. As I was hitting the upvote button, there was a little change in the existing karma from '4' to '3', which meant someone downvoted it. I don't know why, and I consider it responsible of downvoters to leave a comment as to why they're downvoting, but it doesn't matter because I gave this comment more karma than can be taken away so easily.

Comment by Evan_Gaensbauer on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T14:15:10.247Z · EA · GW

Is there an assessment of how big this problem really is? How many people distributed across how many local EA groups are talking about this? Is there a proxy/measure for what impact these disputes are having?

Comment by Evan_Gaensbauer on How Big a Problem is Status Quo Bias in the EA Community? · 2022-01-10T22:55:17.966Z · EA · GW

He asked about a status quo bias favouring the world the way it is. He noticed much of the EA community appears to favour the status quo for politics, economics, etc. He presumably meant mainstream liberal/centrist positions in Western countries. To paraphrase, he is new to EA and his intuition is that if EA is supposed to do good on the margin that traditional institutions aren't already covering, advocating for what they're already doing might defeat the purpose of maximizing marginal utility. He thought he might be missing something, which is why he asked.

Comment by Evan_Gaensbauer on [deleted post] 2021-12-30T02:47:00.732Z

Yeah, I've added it as an embedded link in the post now. Thanks for catching that for me. I don't know why I forgot that.

Comment by Evan_Gaensbauer on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-12-21T04:15:11.184Z · EA · GW

I haven't really thought this through in any detail, but I wonder if the EA/rationalist obsession with deeply analyzing and debating everything makes them bad at memes.

This seems really plausible to me. I think I'm above average among EAs at memes.

It's more than really plausible. It's definitely true. In general, effective altruists tend to suck at making memes. More than a humble opinion, it is a fact that you're in the top half of the top decile for making memes in EA. I wouldn't be surprised if you're in the top percentile. It's not hard. Most effective altruists are just not that good at making memes. 

So after releasing a tentative summary of research done by a coworker and I, I thought it'd be really cool to summarize our (very long) post in a few quick memes. But every time I try to do this (and seriously, I've spent ~30 minutes by now across multiple false starts), I get stuck because I worry too much about the memes not conveying the appropriate level of nuance or whatever, plus I worry about seeming too irreverent and accidentally making light of some people's life's work, plus... :/

You should have come to me. You could have messaged any major meme-maker in dank EA memes. This wouldn't be hard. Even with concerns with being too irreverent, the solution is to run the memes by whoever did the work first. We've done that in dank EA memes before with Brian Tomasik or David Denkenberger. 

Comment by Evan_Gaensbauer on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-12-21T04:06:59.223Z · EA · GW

I've been reading the comments in this thread but this one convinces me it'd be worthwhile to do a review of the impact of dank effective altruism memes

Comment by Evan_Gaensbauer on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-12-21T04:02:42.664Z · EA · GW

EA as a community is pretty publicly sober-minded. I imagine that turns some people off. 

It might be better for EA to turn some people off, if they're not the kind who cares to have sufficient standards for effectiveness. Also, as an admin for Dank EA Memes, I attest that EA could easily have a more vibrant meme culture if things changed to make that the better option. It's not clear to me which way is the better way to go. I'm not going to dignify the notion that r/neoliberal memes are danker than EA memes with a response unless neoliberals start insisting on it, in which case I will likely respond with hostility. 

Comment by Evan_Gaensbauer on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-12-21T03:44:45.657Z · EA · GW

 I love it, but I always figured it was private for a reason -- EA is full of lots of counterintuitive philosophical ideas that people find off-putting (like... utilitarianism alone is already off-putting to most normies), and EA seems to be very obsessed with having a good/prestigious reputation as a responsible, serious movement. Our jokes are mostly about how weird EA is, so we might want to keep our jokes to ourselves if we are desperately trying to seem normal to everyone else.

As an admin for that group, I can confirm that's why the group has been private.

Our jokes are mostly about how weird EA is, so we might want to keep our jokes to ourselves if we are desperately trying to seem normal to everyone else.

We aren't trying to desperately seem normal to everyone else. We shouldn't try to be weird, and we should probably try to fit in with mainstream society in some crucial ways, but if our attempts to appear normal can be described as "desperate," they're probably an over-correction. 

We could start making fun of ordinary charities like the Red Cross and Salvation Army, but I doubt that would go over well

One of the Salvation Army's slogans is also "doing the most good," and yes, that is really true, so that's made for some great memes. Otherwise, yes, memes like this have mostly been taken to have been made in poor taste. 

I'm not sure about that theory; hopefully there is some way we can figure out how to harness meme magic. 

This has already been accomplished in multiple ways. Since it was launched almost seven years ago, among other achievements, a few hundred thousand dollars have been counterfactually donated to EA-prioritized causes through that group. I've thought of doing a write-up about it but I've not gotten around to it. I'd do that write-up if enough people thought it'd be valuable. 

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-21T02:20:15.965Z · EA · GW

That has always been a strawman of what earning to give is about. My opinion is that at this point it's better for EA to assert itself in the face of misrepresentations instead of trying to defuse them through conciliatory dialogue that has never worked. 

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-21T02:15:41.979Z · EA · GW

It doesn't seem necessary to do it. In this comment, I went over how major mistakes in how EA branded itself in its first few years were, in hindsight, very bad optics, because they resulted in major public misconceptions about what EA is about and about what it's really effective for those in EA to do, e.g., with their careers.

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-21T01:22:52.826Z · EA · GW

Summary: It was bad optics for EA to associate itself with memes that misrepresent what the movement is really about. It was mistaken branding efforts in the first few years of EA that have gotten EA stuck with these inaccurate interpretations of the movement.

It's both. Common misconceptions about EA are not only inaccurate presentations of what EA is about. They're the consequence of EA misrepresenting itself. That's why it was bad optics. 

The impression I've gotten from other comments on this post is that people aren't very aware that these misconceptions about EA were caused by EA branding itself with memes like the one Hauke Hillebrandt references in this comment

I don't know if it's because most people have joined the EA movement after it got stuck with these misconceptions. 

Yet I've participated in EA for a decade and I remember for the first few years we associated ourselves with earning to give and overly simplistic utilitarian (pseudo-utilitarian?) approaches. 

I made that mistake a lot. It's hard to overstate how much we, the first 'cohort' of EA, made that mistake. (Linch, I'm aware you've been in EA for a long time too but I don't mean to imply you're part of that first cohort or whatever.) It took only a few years for us to fully recognize we were making these mistakes and attempt to rectify so many misconceptions about EA. Yet a decade later we're still stuck with them.

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-21T01:03:22.153Z · EA · GW

I'm aware a problem with "AI risk" or "AI safety" is that it doesn't distinguish the AI alignment problem, the EA community's primary concern about advanced AI, from other AI-related ethics or security concerns. I got interesting answers to a question I recently asked on LessWrong about who else has this same attitude towards this kind of conceptual language. 

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-21T00:52:49.368Z · EA · GW

The question wasn't about misconceptions, i.e., memes others spread that misrepresent what EA is really about, but about how EA itself has spread memes that misrepresent what it's really about. In other words, the question is about which better memes EA should choose to send its messages and represent itself better.

EA was never explicitly about mitigating global poverty, earning to give, ignoring change or utilitarianism. Yet for its first few years, EA disproportionately focused its public messaging on those memes. That's caused these misconceptions about EA to persist for years longer, and the point is to recognize that the cause of that is a mistake EA itself made.

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-20T23:09:46.797Z · EA · GW

Why do they object to it? 

My experience has been that those who don't participate in EA at all have a better reception of "AI risk" in general than near-termists in EA. 

I expect long-termists care as much if not more about what others outside of EA think of AI risk as a concept than near-termists in EA. 

I also recently asked a related question on LessWrong about the distinction between AI risk and AI alignment as concepts.

Comment by Evan_Gaensbauer on What are the bad EA memes? How could we reframe them? · 2021-12-20T23:03:22.794Z · EA · GW

Summary: It's common knowledge that the movement which has grown in the aftermath of the George Floyd protests brands itself as seeking to defund rather than abolish the police. Making the same, very literal mistake one criticizes another movement for making signals that EA is too sloppy and careless to be really effective or taken seriously. 

This of course rightly identifies the kind of problem but misrepresents its content. The word used is not "abolish" but "defund." 

This is common knowledge. I don't mean this personally, as I sympathize with one not considering it necessary to be so tedious, but there is technically no excuse for making this mistake.

It might seem like a trivial fact. Yet if it's trivial, it also takes no effort to acknowledge it. It's important for participants in effective altruism to indicate their earnest effort to be impartial by taking enough care to not make the same mistake(s) other movements are being criticized for making.

The claim is "defund" means something like: 

  1. Dramatically reduce the annual budgets of police.
  2. Reallocate that funding to public services and social programs that address the systemic causes of crime and to reduce crime rates by other means.

This of course isn't a sufficient defence of the slogan "defund the police." It neglects the fact that almost everyone who isn't involved in social justice movements will interpret defund as a synonym for abolish.  

Yet rebranding with the term "police reform" would also pose a problem. It's an over-correction that fails to distinguish how one movement seeks to reform the police from the ways anyone else would reform the police. 

The open borders movement faces the same challenge. Rebranding "open borders" as "immigration reform" would be pointless. 

The best term I've seen to replace "defund the police" is "divest from the police" because it more accurately represents the goal of reallocating funding from policing to other public services. I only saw it embraced by the local movement where I live in Vancouver for a few months in 2020. That movement now mostly brands itself with "defund" instead of "divest." I haven't asked why but I presume it's because association with the better-known brand brings them more attention and recognition. 

I'm aware this comment is probably annoying. I almost didn't want to write it because I don't want to annoy others. 

Yet misrepresenting another movement like this isn't even strawmanning. It indicates an erroneous understanding of that movement. The criticism doesn't apply, because it targets something that movement isn't actually doing.

The need I feel to do this annoys me too. It's annoying because it puts EA in a position of always having to steelman other movements. 

It raises the question of whether it's really necessary for EA to steelman other movements when they only ever strawman EA. The answer to that question is yes. 

It's not about validating those other movements. It's about reinforcing the habit of being effective so EA can succeed when other movements fail. 

Other than the major focus areas in EA, there are efforts to effectively make progress in achieving the goals for causes prioritized by other movements. For example, Open Philanthropy focuses on criminal justice reform too. By trying to be the most effective for every cause EA pursues, EA can outperform other movements in ways that will move the public to care more about effectiveness and trust EA more. 
 

(Full disclosure: I support the general effort to dramatically decrease police funding and reallocate that money to public services and social programs that will better and more systemically serve the goals of public safety and crime reduction. I know multiple core activists and organizers in the local 'defund the police' movement.)

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-07T00:08:13.232Z · EA · GW

You're welcome :)

Comment by Evan_Gaensbauer on Effective Slacktivism: why somebody should do prioritization research on slacktivism · 2021-12-07T00:03:13.221Z · EA · GW

Calling someone on the phone isn't slacktivism. I've been involved in efforts outside effective altruism, and phone-call or other campaigns that are really effective would not succeed if anyone were "slacking." I went into how we need better language than slacktivism to clarify this subject matter in this comment. Fast Action Network might be between slacktivism and effortful online activism.

Comment by Evan_Gaensbauer on Effective Slacktivism: why somebody should do prioritization research on slacktivism · 2021-12-06T23:59:11.100Z · EA · GW

Someone I know in the community pinged me about this article, so I presume I was meant to provide feedback. I'm going to ask what kind of feedback they had in mind, but here is the first thing that comes to mind for "effective slacktivism" and a starting point for researching it. 

There needs to be a better word than "slacktivism" to describe what any of us are really trying to discuss. Slacktivism doesn't only mean easy, simple, quick tasks one can do with almost no effort. Another meaning is that it's a kind of lazy virtue signaling someone will do so they can receive status for being a good person without really having to do anything to earn it. This ambiguity, and the meanings laden with stigma, don't lend themselves to thinking about how to make online activism super cost-effective. The next step is to think about how to operationalize what you mean by effective and/or ineffective online activism, or whatever. That can start with determining the qualities or features of effective online activism being sought:

  • Simple
  • Easy
  • Quick
  • Cost-Effective
  • Scalable
  • Neglected, Tractable, etc

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-06T21:49:09.924Z · EA · GW

Summary: Slavery is only used as a rough analogy for either of these scenarios because there aren't real precedents for these kinds of scenarios in human history. To understand how a machine superintelligence could do something like torturing everyone until the end of time while still being superintelligent, check out:


"Enslavement" is a rough analogy for the first scenario only because there isn't a simple, singular concept that characterizes such a course of events without precedent in human history. The second scenario is closer to enslavement but the context is different than human slavery (or even the human 'enslavement' of non-human animals, such as in industrial farming). It's more similar to the MSI being like an ant queen, but as an exponentially more rational agent, and the sub-agents are drones. 

Another nitpick: actually, I haven't heard about (a) as described here--anything you'd suggest I look at?

A classic example from the rationality community is of an AGI programmed to maximize human happiness and trained to recognize such on a dataset of smiling human faces. In theory, a failure mode therein could be the AGI producing endless copies of humans whose facial muscles it stimulates so that they are always smiling their entire lives. 

That's an example so reductive as to be maybe too absurd for anyone to expect something like that would happen. Yet it was meant to establish proof of concept. In terms of who is making a "mistake," it's hard to describe without getting more into the theory of AI alignment. To clarify, what I should have said is that while such an outcome could appear to be an error on the part of the AGI, it would really be a human error for having programmed it wrong, and the AGI would be properly executing on its goal as it was programmed to do.

Complexity of value is a concept that gets at part of that kind of problem. Eliezer Yudkowsky of the Machine Intelligence Research Institute (MIRI) expanded on it in a paper he authored called "Artificial Intelligence as a Positive and Negative Factor in Global Risk" for the Global Catastrophic Risks handbook, edited by Nick Bostrom of the Future of Humanity Institute (FHI), and originally published by Oxford University Press in 2008. Bostrom's own book from 2014, Superintelligence, comprehensively reviewed potential, anticipated failure modes for AI alignment. Bostrom would also have extensively covered this kind of failure mode but I forget in what part of the book that was.

I'm guessing there have been updates to these concepts in the several years since those works were published but I haven't kept up to date with that research literature in the last few years. Reading one or more of those works should give you the basics/fundamentals for understanding the subject. You could use those as a jumping-off point to ask further questions on the EA Forum, LessWrong or the Alignment Forum if you want to learn more after. 

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-06T19:32:03.392Z · EA · GW

Summary: The difference between early Christianity and modern movements focused on reducing prospective existential risks is that to publicly and boldly speak beliefs that went against the ruling ideology was considered against common sense morality during the Cold War. Modern x-risk movements can't defend themselves from suppression as well because their small communities are subject to severe conditions in modern police/surveillance states.


Some scientists and whistleblowers in the Soviet Union and the United States not only lost their jobs but were imprisoned for a number of years, or were otherwise legally punished or politically persecuted in ways that had severe consequences beyond the professional. As far as I'm aware, none of them were killed and I'd be very surprised if any of them were. 

Please don't concern yourself to write more on this subject on my behalf. I'm satisfied with the conclusion that the difference between early Christians and the modern whistleblowers in question is that for the whistleblowers to publicly and boldly express their honest beliefs was perceived as a betrayal of good citizenship. The two major conditions that come to mind that determined these different outcomes are:

1. The Authoritarianism on Both Sides of the Iron Curtain During the Cold War. 

Stalinist Russia is of course recognized as being totalitarian but history has been mythologized to downplay how much liberal democracy in the United States was at risk of failing during the same period. I watched a couple documentaries on that subject produced to clarify the record about the facts of the matter during the McCarthyist era. The anti-communism of the time was becoming extreme in a way well-characterized in a speech Harry S. Truman addressed to Congress. I forget the exact quote but to paraphrase it, it went something like: "we didn't finish beating fascism only for us to descend into fascism ourselves." 

2. The Absence of an Authoritative Organization on the Part of the Defectors

(Note: In this case, I don't mean "defector" to be pejorative but only to indicate that members of the respective communities took actions defying rules established by the reigning political authority.)

As I understand it, Christianity began dramatically expanding even within a few years of Jesus' crucifixion. Over the next few decades, it became a social/religious organization that grew enough that it became harder and harder for the Roman Empire to simply quash. There was not really an organization for Cold War whistleblowers that had enough resources to meaningfully defend its members from being suppressed or persecuted.

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-06T19:18:28.654Z · EA · GW

Mainstream Christianity is guilty of this, though so are many other social movements.

All sects of any organized religion ultimately originate from what's likely to have been a singular, unified version from when the religion began. Unless any sect has acknowledged what original prophecies in the religion were wrong, they've all made the same mistakes. As far as I'm aware, almost no minor sects of any organized religion acknowledge those mistakes any more than the mainstream sects.


EA as a whole should seek to understand why we got it so wrong

There isn't anything like a consensus; it's not even evident that a majority of the EA/x-risk community has short timelines for artificial general intelligence (AGI). There have been one or more surveys of the AI safety/alignment community on this subject but I'm not aware if there are one or more sets of data cataloguing the timelines of specific agencies in the field.


Also, I'd like to see more concrete testable short term predictions from those we trust with AI predictions. Are they good forecasters in general? Are they well calibrated or insightful in ways we can test?

Improving forecasting has become relevant to multiple focus areas in EA, so it's become something of a focus area in itself.  There are multiple forecasting organizations that specifically focus on existential risks (x-risks) in general and also AI timelines. 

As far as I'm aware, "short timelines" for such predictions range from a few months to a few years out. I'm not aware either if whole organizations making AI timeline predictions are logging their predictions the way individual forecasters are. The relevant data may not yet be organized in a way that directly provides a summary track record for the different forecasters in question. Yet much of that data does exist and should be accessible. It wouldn't be too hard to track and catalogue it to get those answers. 
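
For illustration, here is a minimal sketch, in Python, of how a summary track record could be computed once such predictions are catalogued. The data format and the example numbers are hypothetical, and the Brier score below is just one standard calibration measure, not a claim about how any existing forecasting organization actually logs its predictions.

    # Minimal sketch with a hypothetical data format: each record is a
    # (stated probability, resolved outcome) pair, where the outcome is 0 or 1.
    def brier_score(predictions):
        # Mean squared error between stated probabilities and outcomes; lower is better.
        return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

    # Hypothetical log of three resolved forecasting questions.
    example_log = [(0.7, 1), (0.2, 0), (0.9, 0)]
    print(round(brier_score(example_log), 2))  # 0.31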

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-06T18:18:28.607Z · EA · GW

A significant minority of utilitarians and fellow travelers in EA, mostly negative(-leaning) utilitarians but others as well, are concerned machine superintelligence (MSI) may be programmed wrong and for indefinite/arbitrary periods of time potentially either:

a. retain humans, their descendants or simulations of them as hostages and subject them to endless torture in the mistaken conception that it's helping instead of harming them.

b. generate artificial (sub-)agents with morally relevant sentience/experiences but program those agents to act in ways that conflict with their own well-being.

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-06T17:59:53.749Z · EA · GW

I recognize there is some ambiguity in my comment. I also read your article again and I noticed some ambiguity I perceived on my part. That seems to be the source of confusion.

To clarify, it was not only the Bulletin of the Atomic Scientists (BAS) who took those personal and professional risks in question. Other scientists and individuals who were not 'leaders' took those risks too. Albert Einstein did so personally outside the BAS but he called on any and all scientists to be willing to blow the whistle if necessary, even if they risked going to jail.

For such leading scientists to call on others to also be (tentatively) willing to take such risks if necessary contradicts the advice of early church leaders to the laity to "not quit their day jobs."

Nobody was advising scientists in positions to reduce x-risks or whatnot to embrace a value system so different they'd personally spurn those who didn't share it. Yet my impression is that during the Cold War, "common sense morality" would be loyalty to the authorities in the United States or Soviet Union, including to not challenge their Cold War policies. In that case, scientists and other whistleblowers would have been defying commonly accepted public morality.

Comment by Evan_Gaensbauer on Is the doing the most good really that simple? · 2021-12-06T16:45:34.247Z · EA · GW

Summary: It's not that simple and this kind of analysis on earning to give has been outdated for years.

I'm not aware to what extent you mean for this to apply to you personally or to anyone in general. I'm aware it's been several years since an organization like 80,000 Hours (80k) has recommended earning to give (ETG) as a default option. Across many EA-affiliated research organizations, for dozens if not hundreds of roles, it has been harder to identify the kinds of candidates they want than to spend the donations they receive on hiring candidates. This article on the EA Forum seems to be the most recent analysis reaching the conclusion that EA is more constrained by talent than money.

Comment by Evan_Gaensbauer on Biblical advice for people with short AI timelines · 2021-12-05T22:12:52.329Z · EA · GW

Summary

  1. While some narratives about AI alignment bear a conspicuous resemblance to the apocalyptic thinking and eschatology of some Christians in history, there isn't much that fundamentally distinguishes that mindset towards AI alignment from similar mindsets towards other ostensible existential risks.
  2. This has been true at times during the last century and remains true today. It was at times crucial, if not necessary, for some of those involved in other communities similar to long-termist effective altruism to make decisions and take actions that contradicted much of this advice.


This advice is in one way applicable to other potential global catastrophic or existential risks as well but in another way may not be applicable to any of them. Even before the advent of nuclear weapons, World War II (WWII) was feared to potentially destroy civilization. Between the Cold War that began a few years later and different kinds of global ecological catastrophe, there are hundreds of millions of people across several generations who have, for more than half a century, experienced life in a way that had them convinced they were living at the hinge of history. While such concerns may have been alleged to be fears too similar to religious eschatology, almost all of them were rooted in secular phenomena examined from a naturalistic and materialist perspective.

This isn't limited to generic populations and includes communities that are so similar to the existential risk (x-risk) reduction community of today that they serve as a direct inspiration for our present efforts. After the Manhattan Project, Albert Einstein and other scientists who had contributed to the effort but weren't aware of the full intentions of the government of the United States for nuclear weapons wanted to do something about their complicity in such destruction. For the record, while they weren't certain either way, at the time many of those scientists feared a sufficiently large-scale nuclear war could indeed cause human extinction. Among others, those scientists founded the Bulletin of the Atomic Scientists, likely the first ever 'x-risk reduction' organization in history.

In both the United States and the Soviet Union, scientists and others well-placed to warn the public about the cataclysmic threat posed by the struggle for more power by both superpowers took personal and professional risks. Some of those who did so were censured, fired and/or permanently lost their careers. Some were even criminally convicted or jailed. Had they not, perhaps none of us would have ever been born to try reducing x-risks or talk about how to think about that today.

To some extent, the same likely remains true in multiple countries today. The same is also true for the climate crisis. Employees of Amazon who have made tweets advocating for greater efforts to combat the climate crisis have been fired because their affiliation with Amazon in that way risks bringing too much attention to how Amazon itself contributes to the crisis. There are also more and more people who have been arrested for their participation in civil disobedience to combat the climate crisis or other global catastrophic risks. 

I've known many in effective altruism who've changed their careers so as to focus on x-risk reduction not limited to AI alignment. There are millions of young people around the world who are pursuing careers intended to do the same because they believe both that it's more important than anything else they could do and that it's futile to pursue anything else in the face of looming catastrophe. All of this is anticipated to be critical in their lifetimes, often in the next 20-30 years. All of those people have also been presumed to be delusional in a way akin to the apocalyptic delusions of religious fanatics in history.

While for the other risks there isn't the same expected potential for transhumanism, indefinite life extension and utopian conditions, the future of humankind and perhaps all life is considered to be under threat. Beyond effective altruism, I've got more and more friends, and friends of friends, who are embracing a mindset entailing much of the above. Perhaps what should surprise us is that more people we don't know from in and around effective altruism aren't doing the same. 

Comment by Evan_Gaensbauer on How Do We Make Nuclear Energy Tractable? · 2021-11-12T21:00:22.375Z · EA · GW

Thanks, that helps too. I still intend to read everything in full later. It's not that I don't know when I'd ever get to it. It's only that I've got other tasks I need to complete and prioritize before I get to this, and I'm not sure how long those other tasks will take. Please feel free to ping me by the end of November if I've not followed up by then.

Comment by Evan_Gaensbauer on How Do We Make Nuclear Energy Tractable? · 2021-11-12T03:12:58.389Z · EA · GW

I've not taken the time yet to read either your write-up or that paper. That's on top of another ten that have been cited in the answers I've received so far. I'm busy enough that I don't know when I'll have time to read them all, but it could be weeks before I finish. Yet what I was seeking was better sources and now I've got more, in quality and quantity, than I was prepared to receive, so thanks!

Also, Matthew Dalhausen's answer reaches some conclusions that are somewhat the opposite of the ones you've presented, so I'd be interested in what conclusion(s) the two of you would reach by discussing the matter. 

Comment by Evan_Gaensbauer on How Do We Make Nuclear Energy Tractable? · 2021-11-12T03:01:37.497Z · EA · GW

Edit: Hauke Hillebrandt left a couple of different answers representing different viewpoints but his answer here reached some conclusions that somewhat contrast with your own. It'd be interesting to learn what other conclusions might be reached if you two were to discuss the points where you might disagree.

I'll reply to this comment at greater length later once I've read it more closely but I wanted to thank you now for taking the time to write it, as it's very informative. 

Comment by Evan_Gaensbauer on How Do We Make Nuclear Energy Tractable? · 2021-11-12T02:57:57.478Z · EA · GW

My loose impression is that some recent excitement is driven by Small Modular Reactors, and of course, climate change.

I'm aware of SMRs but most of the info I'm exposed to about them is from mainstream media. That tends to be more sensational and focuses on how exciting they are instead of on data. Mainstream science reporting is better but such articles commonly rehash the basics, like simply describing what SMRs are, instead of getting into details like projected development timelines. 

I can check whether anyone in the forecasting community, either in or outside of EA, is asking those questions. Others in EA who've studied or worked in a relevant field may also know more or know someone who does. Please let me know if you otherwise know of sources providing more specific information on the future of SMRs.

Nuclear in contrast is actually getting more expensive. Possibly due to increased regulatory/safety overhead.

In the tweet you've linked to, Patrick Collison's comment on the subject frames the matter as though the dramatic increase in regulation that has slowed down the construction of new nuclear power plants is irrational in general. That may be the case but it was only after the 1950s that the problems nuclear power plants may pose if not managed properly became apparent. 

It's because of nuclear meltdowns that more regulations were introduced. It shouldn't be surprising if constructing new nuclear power plants now takes even years longer than it took decades ago. They should take significantly longer to construct at present if they're to be built and managed safely in perpetuity. 

One could argue that at this point it's been an over-correction and now the construction and maintenance of new nuclear power plants is over-regulated. The case for that specific argument must be made on its own but it wasn't made by either Patrick or the person who posted the original tweet he quote-tweeted/retweeted. I of course appreciate you providing that link to get the point across, and it's not your fault, but the tweet itself is useless. 

The Foreign Policy article they're quoting is behind a paywall on their website I don't have access to right now but I'll try getting access to it. If I do, I can copy-paste its contents into a document I can share privately upon request.

I've not taken the time to read in full the other articles to which you've linked. Once I have, I'll reply letting you know what if any more comments I have. 

Comment by Evan_Gaensbauer on How Do We Make Nuclear Energy Tractable? · 2021-11-12T02:33:07.529Z · EA · GW

(I've got a longer response to the part of your comment comparing the rate of development of nuclear energy in different countries, so I'm posting it as its own comment. I'll respond to the other points you've made in a separate comment.)

Some are even plant closures (mostly US, Canada, Germany), but China has a ton of new plants. Other countries with new plants and planned plants include Finland, Egypt, France, Poland, Russia, Turkey, even the US!

The primary motivations for plant closures I'm aware of are concerns about health, safety, pollution and potential catastrophe. That's the case in North America after the meltdown at Three Mile Island and also in Japan after the Fukushima Daiichi reactor meltdown. A difference with Germany is that Germany has had an exceptionally strong Green movement, as a social and political movement. That's resulted in Germany shutting down more nuclear power plants over environmental concerns but also in a greater proportionate development of renewable energy compared to many other Western countries. 

One pattern is that the countries where nuclear power plants tend to either be shut down at greater rates or built at lower rates are liberal democracies. It's easy to presume that because liberal-democratic governments are more subject to the pressure(s) of public opinion, (relatively more) authoritarian governments face fewer political hurdles to building nuclear power plants. Yet in the case of China, the country that has built the most nuclear power plants the fastest, I would expect the greater factor is not necessarily that its government is authoritarian but that it's a more technocratic government that's able to more easily overcome what would otherwise be political barriers. 

Egypt, Russia, Turkey and Poland are all countries that are rated as having become more authoritarian over the last several years. Yet the development of nuclear power plants takes as many if not even more years, so the increasing rate of development of nuclear energy in those countries could easily precede their more authoritarian political pivots. All of those other countries are neither building nuclear power plants as fast as China is nor are their governments particularly technocratic. 

Like yourself, I've not studied this subject as closely but either that Wikipedia article or other, related sources may make clearer which of these hypotheses do or don't bear out. Thank you for sharing that useful resource. Depending on the other feedback I get and whether I find the time later, I may author an article for the EA Forum evaluating that data and these hypotheses. Please let me know any other hypotheses you might have and I would assess those as well. 

Comment by Evan_Gaensbauer on What is your perspective on the ongoing farmer protests and strikes in India over the dramatic changes the government has introduced into the economy? · 2021-05-06T00:30:38.569Z · EA · GW

Summary: You're right that I should clarify what I'm trying to ask to get a better answer. I was just being lazy and was hoping someone would have a confident expert opinion to assuage my ignorance. To reform my understanding so I can ask (a) better question(s) on the subject, I will need to do some of the research I was trying to avoid in the first place when I asked this question.

I asked because whether tens of millions of people in India continue to languish in poverty for at least several more years seems to depend on what the ultimate outcome(s) will be in the aftermath. That seemed important enough on a global level and from an EA viewpoint that I wanted to have a better understanding. I was hoping to receive an 'expert opinion,' but I felt like 'opinion' was too subjective when in EA discourse the goal is to keep things as evidence- and fact-based as we can. I couldn't think of a better word to replace 'opinion' with than 'perspective.' Yet it's another word generic and clunky enough that it doesn't leave anyone who would answer this question much to work with.

Yeah, I'm not much more familiar with how things have progressed in India than a random layperson who still happens to occasionally consume international news. I've only followed it through reading one or two short articles on new developments every month. This has left me with only a shallow and superficial understanding of the matter. When I asked this question, I was taking a shot in the dark that there might be some expert/specialist in EA who had been following this closely and could give me a confident perspective I could just take as my own instead of forming an original one. That hasn't happened.

Of course, one of our community peers with relevant expertise would be able to analyze the subject matter better than I can. Yet you're right that to get to that point I'm going to have to form enough of my own initial perspective and ask sub-questions to give someone else enough to work with. That just means I'm going to do a lot of research on my own that I was trying to avoid in the first place by asking this question.

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-08-05T01:05:50.463Z · EA · GW

Thanks!

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-08-04T19:29:17.238Z · EA · GW

The value of n is so high that Peter wouldn't want to embarrass the rest of us with how smooth he is by disclosing that information. Yet I've got access to it! It's a well-kept secret that 8% of all historical growth of the EA movement is due to Peter bringing cute girls into the movement by way of telling all of them he passes by about the thought experiment about the drowning child[citation needed].

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-08-04T19:22:51.864Z · EA · GW

Would you mind please sharing a link to this startup 'Roam'? It sounds interesting but I've not heard of it. I'd look it up myself but I doubt I'd know how to find the right website just by searching the word "roam." 

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-08-04T09:15:54.022Z · EA · GW

Summary: There are multiple reasons why, in my opinion, we in EA should not encourage intra-community dating beyond how it arises organically in the community. Yet that's not the same thing as not thinking about it. A modicum of public discussion about intra-community dating is probably not 'culty' compared to much of what the EA community already engages in regardless. One solution may be for those of us who are personal friends with each other in EA to make greater effort to provide support to each other in our mutual pursuits of romantic partners amenable to an EA lifestyle, especially including outside the EA community as well. 

 

I agree the EA community should not systematically think about us dating each other. By "systematically," I mean that I don't think the EA community ought to try seeking a programmatic way for us to date each other. There are multiple reasons I expect doing so would be a poor choice for the EA community. The concern we've discussed in this thread is that it could make EA look 'culty,' which I agree is a legitimate concern. One issue I've got with how the EA community tends to think about brand management and public relations, or whatever the social movement equivalents for those concepts are, is that we tend to reflexively care about it only when it comes up at random, as opposed to thinking about it systematically.

That's relevant because, relative to much more significant aspects of EA, whether we openly "think about dating each other" is not that 'culty.' There is some op-ed in a semi-popular magazine, print and/or online, about how communities concerned about AI alignment as an existential risk amount to doomsday cults. Much of the population perceives veganism as a cult. I've met a lot of people over the years who have told me that the phenomenon of widespread adoption of common lifestyle changes among community members still makes EA give off 'culty' vibes. Meanwhile, plenty of cultures within global society publicly and systematically encourage dating within their cultures. It seems like doing this along lines of national or religious identity is more publicly acceptable than doing so along racial lines. Like with what form it would likely take in EA, plenty of subcultures and movements that lend themselves to particular ways of life have online dating websites dedicated to their communities. 
 

Thus, I think the other downsides to systematically encouraging dating within the EA community, such as the skewed gender ratio perhaps quickly resulting in the system failing to satisfy the needs of most involved individuals, are greater than EA appearing 'culty.' It's important to distinguish why I think we shouldn't systematically encourage intra-community dating because I also expect it would be wrong for us to "not think about" each other's dating needs at all. For example, I don't think it's a negative thing that this EA Forum post and all these discussions in the comments are publicly taking place on the EA Forum. It seems to me the majority of community members never check the EA Forum with a frequency approaching a regular basis, never mind the millions of people who hear about EA but never become part of the movement. I think the solution is for us to extend private offers as peers in the same community to talk about each other's efforts to find romantic partners with whom spending our lives also fits with living the EA-inspired lives we each want to live out. 

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-07-27T01:57:20.559Z · EA · GW

Strongly upvoted. This is an approach I've taken to dating outside the EA community. Most of my dating is typically outside the EA community. I've not found success in long-term romance. I'm pretty confident that's due to factors in my private life unrelated to this specific approach to dating that EA community members can take. I'd recommend more in EA try it as well.

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-07-27T01:54:16.815Z · EA · GW

I responded to Marisa with this comment which pushes back on the notion that inter-EA dating is a particularly culty and insular phenomenon. Upshots:

  • Some public accusations of cultishness should be taken seriously, but EA should respond to them by doing what we do best: looking into scientific research, specifically about cults, in evaluating such allegations to ourselves. This is a more sensible approach than hand-wringing about hypothetical accusations of cultishness that haven't been levelled yet. To do so only plays into the hands of moral panics over cults in public discourse that don't themselves typically lessen the harms of cults, real or perceived.
  • Dozens if not hundreds in EA have dated, formed relationships, gotten married or started families in ways that have benefited themselves personally and also their capacity to do good. This is similarly true in its own ways of tens of millions of people who marry and start families within their own religions, cultures or ethnic groups, including in more diverse and pluralistic societies. While EA ought to be worried about ways in which it could be cult-like, the common human tendency to spend our lives with those who share our own respective ways of life doesn't appear to be high on that list.
  • One could argue that that's a problematic tendency within societies at large and EA should aspire to more than that. Given my perception that those in EA who've formed flourishing relationships within the community have done so organically as individuals, there doesn't seem to me to be a reason to encourage intra-community dating. Yet to discourage it based on a concern it may appear cult-like would be to impel community members to a kind of romantic asceticism for nobody's benefit.

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-07-27T01:42:45.303Z · EA · GW

Summary: Concerns about apparent or actual cultishness are serious but ought to be worked through in a more rational way than is typical of popular discourse about cults. EA pattern-matches to being a small, niche community on the fringe of mainstream society, which is also a common characteristic and tell of a cult. Yet there is widespread cognitive dissonance in society at large about how social structures involving tens of millions of people also have harmful, cult-like aspects to them as well. It's perhaps the majority of people in even more diverse societies that marry and raise families within their own religion, culture or ethnic group.

That many of us in EA are strongly inclined to spend our lives with those who share our own way of life doesn't distinguish us as problematic relative to the rest of society. One could argue that almost all cultures are cult-like and that EA should aspire to be(come) a super-rational community free of the social problems plaguing all others. That seems to me like molehill mountaineering, a vain attempt to impel EA to be(come) quixotically perfect, and can be disregarded.

Regarding 'culty-ness,' I feel like too many subcultures or countercultures play into the hands of the paranoid accusations of a generic and faceless public. Several years ago, when I was aware of evidence-based definitions of cults and in extreme disagreement with mainstream society, I thought accusations of being a cult levelled at movements that weren't unambiguously cults ought to be disregarded. I no longer feel this way, as I now recognize that cultishness in an organization or community exists on a spectrum. Ergo, some public accusations of appearing to be, or actually being, a cult ought to be taken very seriously.

EA is a small, niche community on the fringes of society. Putting it that way may seem to stigmatize EA as pattern-matching to those fringe movements that pose a serious threat to society at large. That's not what I meant. I'm just pointing out that this is the crucial juncture between society at large and subcultures, where a subculture may begin alienating itself to the point of falling down the rabbit hole of becoming a cult.

Yet it seems there are mass groups in all mainstream societies that would be labelled cults if they were small, fringe groups, and aren't only because they've been normalized over several decades. Such groups can constitute tens or even hundreds of millions of people. I believe such groups are often whole religions, or social structures similar to religions, which as they transform into mainstream institutions are sanitized in a way that makes them less harmful per individual than small, niche cults like the Church of Scientology.

Nonetheless, they often cause significant harm. So, much of humanity has severe cognitive dissonance about what is and isn't a cult, and about why all kinds of mainstream institutions shouldn't be considered just as harmful as cults. This should cause us to take concerns about being culty with a grain of salt when they come from a source that is selective in its opposition to cult-like groups. What I've never understood is, if some in EA are concerned that EA may seem cult-like or take on actual cult-like tendencies, why none of us try assessing this for ourselves. As a movement that aspires to be scientific, we in EA ought to be able to assess to what extent our community is like a cult by reviewing the scientific research and literature on the subject of cults.

With all this in mind, we can put in context the concern that some features of EA might make it appear like a cult to the rest of the world. While optics matter, they aren't everything. Of all the things EA has been accused of being a cult for, the fact that those in EA tend to form relationships with one another isn't a frequent one. Perhaps the majority of people in diverse societies tend to date, marry or start families with those from their own ethnic, religious and cultural background. Most people don't call them cults. That's because there's a common understanding that individuals are drawn to spend their lives with those who share a common way of life. Outsiders to most ways of life understand that, even if they don't totally understand a respective way of life itself.

One lingering concern for some in EA might be that we ought to aspire to be far better in how we conceive of and do good things than the rest of the world. That might include being less cult-like than even entire cultures which themselves aren't technically cults. There are freethinking, cosmopolitan atheists who would call all religions and most cultures cults. Such accusations may claim that marriage within a culture, precisely because its members share that culture, occurs only to irrationally preserve and perpetuate the culture, along with its traditions and institutions.

I don't totally disagree with such freethinkers myself, but I wouldn't take that criticism to heart to the point of discouraging relationships among those in EA. Relationships within the EA community are imperfect in their own ways, as is the case with all kinds of relationships inspired by a particular way of life. Yet I've seen dozens if not hundreds of people in EA personally flourish and enhance the good they're doing by being in relationships with other community members. Taking every naysayer to heart won't free us of problems. After all, we in EA are only human (and, I'll postulate, will remain imperfect even if we someday become post-human super-beings).

Comment by Evan_Gaensbauer on Should we think more about EA dating? · 2020-07-27T01:20:05.587Z · EA · GW

I appreciate this informative comment. I've got a couple of relevant points to add.

1. As a community coordinator for EA, a few years ago I became aware that more people in EA were interested in dating others in the community. I shared a link to reciprocity.io around in EA Facebook groups like EA Hangout. This got a few hundred more people onto reciprocity. I talked to Katja Grace, who originally had the idea.

Reciprocity.io was written to support the much smaller Bay Area rationality community, which at the time had over 100 people but not too many more than that. So many people in EA getting on reciprocity.io caused it to crash. The code wasn't particularly worth saving, and at the time Katja suggested that if someone wanted, it might be better to make a newer, better site from scratch.

2. As far as I'm aware, LGBTQ+ people are significantly overrepresented in the EA community relative to the background population. I don't know how much of this is determined by feeder communities for EA, i.e., how much the communities people find EA through are themselves disproportionately representative of the LGBTQ+ community. Feeder communities for EA include:

  • animal advocacy movements
  • organizations focused on particular causes in the non-profit sector
  • startup culture
  • transhumanism
  • rationality
  • etc.

Caveats: I don't know more specifically than that how the representation of LGBTQ+ folks in EA skews. By representation I mean statistical representation, not representation of LGBTQ+ identities. Neither am I suggesting that anyone ought to infer anything else about the experiences and status of LGBTQ+ folks in the EA community based just on the fact that they're overrepresented.

I haven't put any thought into how this otherwise impacts the gender ratio of the EA community or the dating prospects of individual community members therein. I just offer the info in case it inspires others' insights about intra-community dating and relationships.

Comment by Evan_Gaensbauer on Expert Communities and Public Revolt · 2020-03-30T01:43:58.309Z · EA · GW

This still neglects the possibility that if governments across the world are acting suboptimally on a given matter, then their cooperating with each other, and a close and cozy relationship between expert communities and governments, may come at the cost of a negative relationship with broad sections of the public. Who and what 'the public' is should usually be unpacked, but suffice to say there are sections of civil society that are closer than governments to correctly diagnosing problems and solutions regarding social crises, as far as expert communities are concerned. For example, expert communities sometimes have more success achieving their goals by working with the many environmental movements around the world to indirectly move government policy than by working with governments directly. This is sometimes observed today in progress made in tackling the climate crisis. Similarly, during the Cold War, social movements (anti-war, anti-nuclear, environmental movements) in countries on both sides played a crucial role in moving governments towards policies that deescalated nuclear tensions, like the SALT treaties, which an expert organization like the Bulletin of the Atomic Scientists (BAS) would advocate for. It's not clear that movements within the scientific community to deescalate nuclear tensions between governments would have succeeded without broader movements in society pursuing the same goals.

Obviously such movements can be a hindrance to the world-improving goals pursued by expert communities when governments are otherwise the institutions that would advance progress towards those goals better than the movements would. A key example: while environmental movements played a positive role in combating pollution and deescalating nuclear tensions during the Cold War, they've been counterproductive in decreasing public acceptance and the political pursuit of the safest forms of nuclear energy. Many governments around the world which would otherwise build more nuclear reactors to produce energy and electricity to replace fossil fuels don't do so because they rightly fear the public backlash that would be whipped up by environmental movements. Some sections of the global environmental movement have become quite effective at freezing the progress on climate change that could be made by governments around the world building more nuclear reactors.

There are trade-offs expert communities face in building relationships with sections of the public, like social movements, versus governments. I haven't done enough research to know if there is a super-effective strategy an expert community can use to decide what to do under any given conditions. Suffice to say, there aren't easy answers for effective altruism as a social and intellectual movement, or for the expert communities to which we're connected, in resolving these issues.

While we're on this topic, I thought it would be fitting to acknowledge the similar issues effective altruism faces as a movement. Effective altruism as a global community has been crucial to the growing acceptance of AI alignment as a global priority among some institutions in Silicon Valley and other influential research institutions across the world, both academic and corporate. We've also influenced some policymaking NGOs and world governments to take seriously transformative AI and the risks it poses. Yet that influence has mostly been indirect, has had little visible impact and hasn't produced a better, ongoing relationship between EA as a set of institutions and governments.

We're now in a position where, as much as EA might be integrated with efforts in AI security in Silicon Valley and at universities around the world, the governments of countries like Russia, China and South Korea, the European Union, and at least the military and intelligence institutions of the American government are focused on it. Those governments focusing more on AI security is in part a consequence of EA fostering greater public consciousness regarding AI alignment (the far bigger factor being the corporate and academic sectors achieving major research progress in AI, as recognized through significant milestones and breakthroughs). There are good reasons why some EA-aligned organizations would keep private the fact that they've developed working relationships with the research arms of world governments on the subject of AI security. Yet from what we can observe publicly, it's not clear that at present perspectives from EA and the expert communities we work with would have more than a middling influence on the choices world governments make regarding matters of security in AI R&D.

Comment by Evan_Gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-25T04:41:50.880Z · EA · GW

I've identified the chapters in the OHSM where, if there is an answer to these questions to be found in the book, it will be. There are 5 chapters, totaling roughly 100 pages. Half of them focus on ties to other social movements, and half focus on political parties/ideologies. I can and will read them, but to give a complete answer to your questions, I'd have to read most of at least a couple of chapters. That will take time. Maybe I can provide specific answers to more pointed questions. If you've read this comment, pick one goal from one cause area, and decide whether you think the achievement of that goal depends more on EA's relationship to another social movement or to a political ideology. At that level of specificity, I expect I can give one or two academic citations that should answer that question. I will also answer the question at the highest level, but at that point I'm writing a mini-book review on the EA Forum that will take a couple of weeks to complete.

Comment by Evan_Gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-25T04:30:43.958Z · EA · GW

I'm aware of a practical framework that social movements, along with other kinds of organizations, can use. There are different versions of this framework, for example in start-up culture. I'm going to use the version I'm familiar with from social movements. I haven't taken the time yet to look up in the OHSM whether this is a framework widely and effectively employed by social movements overall.

A mission is what a movement seeks to ultimately accomplish. It's usually the very thing that inspires the creation of a movement. It's so vast it often goes unstated. For example, the global climate change movement has a mission of 'stopping the catastrophic impact of climate change'. Yet that's so obvious that environmentalists don't need to establish at meetings that the reason they've gathered is to stop climate change. It's common knowledge.

The mission of effective altruism is, more or less, "to do the most good". Cause areas exist in other movements similarly broad to effective altruism, but they're not the same thing as a mission. The cause area someone focuses on will be due to their perception of how to do the most good, or their evaluation of how they can personally do the most good. So each cause area in EA represents a different interpretation of how to do the most good, as opposed to being a mission or goal in and of itself.

Goals are the milestones a movement believes must be reached to complete its mission. The movement believes each goal by itself is a necessary condition for completing the mission, and that the full set of goals combined is sufficient to complete it. So for the examples you gave, the setup would be as follows:

Cause: Global poverty alleviation
Mission: End extreme global poverty.
Goals: Improve trade and foreign aid.

Cause: Factory farming
Mission: End factory farming.
Goals: Gain popular support for legal and corporate reforms.

Cause: Existential risk reduction
Mission: Avoid extinction.
Goals: Mitigate extinction risk from AI, pandemics, and nuclear weapons.

Cause: Climate change
Mission: Address climate change.
Goals: Pursue cap-and-trade, carbon taxes and clean tech.

Cause: Wild animal welfare
Mission: Improve the welfare of wild animals.
Goals: Do research to figure out how to do that.

Having laid it out like this, it's easier to see (1) why a "cause" isn't a "mission" or "goal", and (2) how this framework can be crucial for clarifying what a movement is about at the highest level of abstraction. For example, while the mission of the cause of 'global poverty alleviation' is 'eliminate extreme global poverty', the goals of systemic international policy reform don't match up with what EA primarily focuses on to alleviate global poverty, which is a lot of fundraising, philanthropy, research and field activity focused on global health, not public policy. Your framing assumes 'existential risk reduction' refers to 'extinction risk', but 'existential risk' has been defined in terms of long-term outcomes that permanently and irreversibly alter the trajectory of life, humanity, intelligence and civilization on Earth or in the universe. That includes extinction risks but can also include risks of astronomical suffering. If nitpicking the difference between missions and goals seems like needless semantics, remember that because EA as a community doesn't have a clear and common framework for defining these things, we've been debating and discussing them for years.

Below goals are strategy and tactics. The strategy is the framework a movement employs for how to achieve its goals. Tactics are the concrete, action-oriented steps the movement takes to implement the strategy. The mission is to the goals as the strategy is to the tactics. There is more to say about strategy and tactics, but this discussion is already too abstract to go further into them. For figuring out what an effective social movement is, and how it becomes effective, it's enough to start thinking in terms of missions and goals.

Comment by Evan_Gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-22T02:13:03.185Z · EA · GW

This isn't from the OHSM, but two resources to learn more about this topic are the Wikipedia article on 'satisficing', a commonly suggested strategy for adapting utilitarianism in response to the demandingness criticism, and this section of the 'consequentialism' article on the Stanford Encyclopedia of Philosophy focused on the demandingness criticism.

Comment by Evan_Gaensbauer on AMA: "The Oxford Handbook of Social Movements" · 2020-03-21T23:37:58.394Z · EA · GW

Why have you found it underwhelming?