What do you think the effective altruism movement sucks at? 2023-01-30T06:38:12.649Z
Why does Elon Musk suck so much at calibration? 2022-11-06T18:10:36.318Z
Going Too Meta and Avoiding Controversy Carries Risks Too: A Case Study 2022-07-28T09:38:25.772Z
It's Time to Fix How Effective Altruism Is Perceived 2022-07-26T02:17:05.859Z
Who Wants to Collaborate on an Effective Altruism for Dummies Sequence? 2022-07-23T03:21:38.488Z
EA Blog Recommendation: Thing of Things, by Ozy Brennan 2022-07-23T00:04:35.964Z
The Term "EA-Aligned" Carries Baggage 2022-07-15T01:57:19.099Z
What are some techniques to save time in the EA career search process? 2022-07-14T22:14:02.399Z
How do employers not affiliated with effective altruism regard experience at EA-affiliated organizations? 2022-07-14T22:01:52.097Z
(Self-)Criticism Doesn't Solve Problems. We Do. 2022-07-04T08:05:42.841Z
[Resolved] When will "EA Forum Docs" be taken out of the beta stage and replace Markdown as the default formatting option? 2022-05-27T19:24:25.227Z
Does anyone know about an EA group leader retreat this summer? 2022-05-26T02:02:16.447Z
What are your recommendations for technical AI alignment podcasts? 2022-05-11T21:52:13.665Z
How Do You Get People to Really Show Up to Local Group Meetups? 2022-05-06T19:19:02.682Z
What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about? 2022-04-19T21:09:36.085Z
Where are the actors who are trying to advance the development of nuclear energy in effective ways? Why don't most people trying to advance the development of nuclear energy care about effectiveness? 2022-04-10T23:59:57.399Z
Evan_Gaensbauer's Shortform 2022-03-05T09:50:06.300Z
Is Anyone Else Seeking to Help Community Members in Ukraine Who Make Refugee Claims? 2022-02-27T00:52:10.193Z
The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized 2022-02-06T23:30:00.127Z
Should there be clearer disclosures of when individual staffers at EA-affiliated organizations are posting in a professional capacity? 2022-01-26T00:35:15.105Z
How Big a Problem is Status Quo Bias in the EA Community? 2022-01-10T04:48:33.245Z
Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism 2021-12-29T20:20:54.901Z
Vancouver Winter Solstice Meetup 2021-12-15T00:15:16.812Z
How Do We Make Nuclear Energy Tractable? 2021-11-11T04:48:38.536Z
What is your perspective on the ongoing farmer protests and strikes in India over the dramatic changes the government has introduced into the economy? 2021-05-03T00:35:24.391Z
Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization? 2020-06-30T19:43:52.432Z
Expert Communities and Public Revolt 2020-03-28T19:00:54.616Z
Free E-Book: Social Movements: An Introduction, 2nd Edition 2020-03-21T23:50:36.520Z
AMA: "The Oxford Handbook of Social Movements" 2020-03-18T03:34:20.452Z
Public Spreadsheet of Effective Altruism Resources by Career Type 2019-06-03T18:43:06.199Z
What exactly is the system EA's critics are seeking to change? 2019-05-27T03:46:45.290Z
Update on the Vancouver Effective Altruism Community 2019-05-17T06:10:14.053Z
EA Still Needs an Updated and Representative Introductory Guidebook 2019-05-12T07:33:46.183Z
What caused EA movement growth to slow down? 2019-05-12T05:48:44.184Z
Does the status of 'co-founder of effective altruism' actually matter? 2019-05-12T04:34:32.667Z
Announcement: Join the EA Careers Advising Network! 2019-03-17T20:40:04.956Z
Neglected Goals for Local EA Groups 2019-03-02T02:17:12.624Z
Radicalism, Pragmatism, and Rationality 2019-03-01T08:18:22.136Z
Building Support for Wild Animal Suffering [Transcript] 2019-02-24T11:56:33.548Z
Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? 2019-02-24T00:12:58.780Z
How Can Each Cause Area in EA Become Well-Represented? 2019-02-22T21:24:08.377Z
What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? 2019-01-30T02:52:25.471Z
Effective Altruism Making Waves 2018-11-15T20:20:08.959Z
Wild Animal Welfare Ecosystem & Directory 2018-10-31T18:26:52.476Z
Wild Animal Welfare Literature Library: Original Research and Cause Prioritization 2018-10-15T20:28:10.896Z
Wild Animal Welfare Literature Library: Consciousness and Ecology 2018-10-15T20:24:57.674Z
The EA Community and Long-Term Future Funds Lack Transparency and Accountability 2018-07-23T00:39:10.742Z
Effective Altruism as Global Catastrophe Mitigation 2018-06-08T04:35:16.582Z
Remote Volunteering Opportunities in Effective Altruism 2018-05-13T07:43:10.705Z
Wild Animal Welfare Literature Library: Introductory Materials, Philosophical & Empirical Foundations 2018-05-05T03:23:15.858Z


Comment by Evan_Gaensbauer on Taking a leave of absence from Open Philanthropy to work on AI safety · 2023-03-26T04:26:12.524Z · EA · GW

A lot of people in and around effective altruism I've talked to have offered different takes on how there is a crisis of leadership in the movement/community.

Those takes hold that the crisis has only been getting worse during the last few months, especially in light of recent scandals and media coverage that have left various community leaders more stigmatized in public perception than ever before.

Comment by Evan_Gaensbauer on EA, 30 + 14 Rapes, and My Not-So-Good Experience with EA. · 2023-02-21T21:47:30.753Z · EA · GW

I'm not super familiar with the practices of call-in culture, though I'm aware of it. While I'm sure there are some communities that have practiced methods similar to call-in culture well for a long time, they've been uncommon, and I understand call-in culture has in general only been spreading across different movements for a few years now. I expect this community would benefit from learning more about call-in culture, though it'd be helpful if you could make some recommendations for effective altruists to check out.

Comment by Evan_Gaensbauer on Kaleem's Shortform · 2023-02-18T23:57:25.730Z · EA · GW

Yeah, I was assuming your regular job wasn't related to EA, and that was one of my less thoughtful comments, so I'm sorry and please pardon my mistake.

Comment by Evan_Gaensbauer on Why EA Will Be Anti-Woke or Die · 2023-02-17T04:18:32.079Z · EA · GW

Strongly downvoted.

This post quotes Scott Alexander on a tangent about as much as it quotes Richard Hanania, bolstering minor points made in Hanania's post by appealing to the bias in favour of Scott Alexander among effective altruists.

By linking to and selectively quoting Hanania so prominently, you're trying to create an impression that the post should be trustworthy to effective altruists, in spite of its errors and falsehoods about effective altruism in particular and in general. Assuming you've made this post in service of a truth-seeking agenda, you've failed by propagating an abysmal perspective.

There are anti-woke viewpoints that have been well-received on the EA Forum, but this isn't one of them. Some of them haven't even been anonymous, so worrying about your reputation more than "truth-seeking" isn't an excuse.

You would, could, and should have done better if you had shared an original viewpoint from someone genuinely more familiar with effective altruism than Hanania is. May you take heed of this lesson the next time you try to resolve disputes.

Comment by Evan_Gaensbauer on Nathan Young's Shortform · 2023-02-17T03:26:47.332Z · EA · GW

Strongly upvoted

Comment by Evan_Gaensbauer on Kaleem's Shortform · 2023-02-17T03:26:02.409Z · EA · GW

Downvoted. If you're procrastinating on EA work, you shouldn't be so bold as to encourage your peers to do more of the same.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-02-17T03:23:40.378Z · EA · GW

I'm going to be explicit in my controversial rejection, or even condemnation, of both the criticisms of the perceived status quo in EA from multiple angles and the defences of it. I want to preregister my intentions before any pending allegations that I'm biased in favour of a particular side. If anything, I'll be biased against all parties involved for their common failure to do right by the matters at hand.

Comment by Evan_Gaensbauer on In (mild) defence of the social/professional overlap in EA · 2023-02-14T23:45:12.256Z · EA · GW

In terms of what's considered appropriate in "regular western culture," a lot of this is not true enough to justify the generalizations you're making:

  • There are variations within cultures in any country, never mind between western and all other countries, in the extent to which crude sexism is considered appropriate. I've met many men from different walks of life just in Canada whose sense of what's normal is such that they'll look down on other men who don't toe the line with their chauvinistic attitudes and misogynistic comments.

  • While it's far from being all of them, there are a lot of sections of the upper class where bragging about how much money one makes is considered respectable, and this influences other aspects of culture too, especially in North America.

  • Making intense eye contact during a normal interaction is considered inappropriate in most cultures, though there is nuance here. Spending longer making direct eye contact as part of the back-and-forth of conversation is much more accepted in western cultures, for example, compared to Russia or China, to the point that avoiding too much eye contact during conversation in western cultures is often considered rude.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-02-14T03:07:05.349Z · EA · GW

I'll probably make a link post with a proper summary later but here is a follow-up from Simon Knutsson on recent events related to longtermism and the EA school of thought out of Oxford.

Comment by Evan_Gaensbauer on How to be introduced to the EA community? · 2023-02-08T23:44:16.594Z · EA · GW

I can introduce you more. What drew you in? The more general and philosophical side of it all? Or a particular cause, like existential risk reduction, animal protection, or global poverty alleviation? 80,000 Hours or LessWrong? Or something else?

Also, do you have profiles on any kind of social media? Twitter? Facebook? LinkedIn? Discord? Slack? I can send you some invitation links in a private message to EA groups or whatever on any of those platforms.

Comment by Evan_Gaensbauer on The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized · 2023-02-08T15:49:51.144Z · EA · GW

This post received a mixed reception a year ago when I initially posted it, though I guess I really called it. If it seems like I'm kind of boasting here, you're right. I've been vindicated, and while I feel tempted to say "I told you so," I'll note instead that I was more right than I was given credit for, in solidarity with those who spoke up before me, as I alluded to in this post, and those who've spoken up since, in the wake of the FTX crisis and after.

Comment by Evan_Gaensbauer on The number of burner accounts is too damn high · 2023-02-08T08:27:48.964Z · EA · GW

Since before the last few days' posts about why there are burner accounts on the forum, I've been trying out this thing where I post lots of stuff most effective altruists may not be willing to post. I cover some of the how and why of that here.

I've been meaning to ask if there is anything anyone else thinks I should do with that, so I'm open to suggestions.

Comment by Evan_Gaensbauer on Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding · 2023-02-08T08:19:51.775Z · EA · GW

Since before the last few days' posts about why there are burner accounts on the forum, I've been trying out this thing where I post lots of stuff most effective altruists may not be willing to post. I cover some of the how and why of that here.

I've been meaning to ask if there is anything anyone else thinks I should do with that, so I'm open to suggestions.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-02-06T04:18:38.812Z · EA · GW

I know multiple victims/survivors/whatever who were interviewed by TIME, not only one of the named individuals but some of the anonymous interviewees as well.

The first time I cried because of everything that has happened in EA during the last few months was when I learned for the fifth or sixth time that some of my closer friends in EA lost everything because of the FTX collapse.

The second time I cried about it all was today. 

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-24T08:57:38.736Z · EA · GW

People complained about how the Centre for Effective Altruism (CEA) had said they were trying not to be like the "government of Effective Altruism" but then they kept acting exactly like they were the Government of EA for years and years.

Yet that's wrong. The CEA was more like the police force of effective altruism. The de facto government of effective altruism was for the longest time, maybe from 2014-2020, Good Ventures/Open Philanthropy. All of that changed with the rise of FTX. All of that changed again with the fall of FTX. 

I've put everything above in the past tense because that was the state of things before 2022. There's no such thing as a "government of effective altruism" anymore. Neither the CEA, Open Philanthropy, nor Good Ventures could fill that role, regardless of whether anyone would want them to.

 We can't go back. We can only go forward. There is no backup plan anyone in effective altruism had waiting in the wings to roll out in case of a movement-wide leadership crisis. It's just us. It's just you. It's just me. It's just left to everyone who is still sticking around in this movement together. We only have each other.


Comment by Evan_Gaensbauer on FLI FAQ on the rejected grant proposal controversy · 2023-01-20T17:37:53.913Z · EA · GW

I understand that Tegmark's brother was affiliated with them but wasn't part of the leadership.

Comment by Evan_Gaensbauer on FLI FAQ on the rejected grant proposal controversy · 2023-01-19T20:31:41.999Z · EA · GW

Thank you for sharing this.

It took courage to publish this, when this state of affairs seems to have become more trying with every week, even as FLI has spent months trying to show how they've been learning and doing better after course-correcting from the initial mistake.

I have one main question with a set of a few related questions, though I understand if you'd only answer the first one.

While the FAQ mentions how Nya Dagbladet is one of few publications critical of decisions to put nuclear weapons near the Swedish border, it doesn't directly address why Nya Dagbladet was considered for a grant in the first place.

  1. What was the specific work Nya Dagbladet had done, or was expected to do, that led them to be considered for a grant in the first place? Would it just have been funding Nya Dagbladet to publish more media in favour of nuclear deescalation? Or was it something else?

  2. As there are apparently few but still some other publications doing the kind of work in Sweden FLI might have been prospectively interested in funding, what was it specifically about Nya Dagbladet that made FLI consider them more? How much did the following factors matter:

  • perception that Nya Dagbladet published higher quality or more impactful content?

  • Nya Dagbladet having less funding than other publications and thus being considered a marginally more scalable and neglected publication?

  • the apparent independence of Nya Dagbladet's journalism, such that increased independence might have been thought to give them more leeway to criticize government policies that risked increasing nuclear risk?

  3. It appears Nya Dagbladet significantly deceived FLI, such as by keeping details about its political affiliations hidden, when the grant applicants presumably would have known that disclosure might get their application rejected. Yet does FLI consider Nya Dagbladet to have lied about everything? I.e., is it thought the publication would have used the money only for what the grant permitted, or that it might have totally scammed FLI and used the money to push other propaganda and conspiracy theories?

Having finally published answers amid continued doubt, and with business to take care of as this scandal winds down, I don't expect anyone from FLI to immediately answer these questions. Please take your time with them, or just decline to answer at this time.

Comment by Evan_Gaensbauer on Doing EA Better · 2023-01-19T06:18:31.755Z · EA · GW

The Effective Altruism movement is not above conflicts of interest

If anyone still thinks effective altruism is above conflicts of interest, I have an NFT of the Brooklyn Bridge to sell u, hmu on FTX if u r interested.

Comment by Evan_Gaensbauer on Doing EA Better · 2023-01-19T04:52:11.146Z · EA · GW

We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots

If anything, we should be afraid of any tendency to stigmatize consulting outside experts; it'd be preferable if all effective altruists were afraid of discouraging consultation with outside experts.

Comment by Evan_Gaensbauer on Doing EA Better · 2023-01-19T03:45:48.947Z · EA · GW

This is not at all to say that Google Docs and blogposts are inherently “bad”

Indeed, only the Sith deal in absolutes.

Comment by Evan_Gaensbauer on Any state / local politicians in this community? (state legislators / city council members / etc) · 2023-01-19T03:33:31.105Z · EA · GW

I haven't run for office, but I've been very involved in electoral/parliamentary/party politics for the last few years. I don't have a lot of time to write up my experience right now, though it'd be easier for me to just answer questions organically over a video or voice call, so send me a private message if you'd be interested in that.

Comment by Evan_Gaensbauer on david_reinstein's Shortform · 2023-01-19T01:43:19.311Z · EA · GW

This is such a predictable and unsurprising set of results that it's adorable.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-18T16:56:43.357Z · EA · GW

I wrote my other reply yesterday from my smartphone and it was hard to tell which one of my short form posts you were replying to, so I thought it was a different one and that's why my comment from yesterday may not have seemed so relevant. I'm sorry for any confusion.

Anyway, the reason I'm posting short forms like this is that they're thoughts on my mind I want at least some effective altruists to notice, though I'm not prepared right now to contend with the feedback and potential controversy that making these top-level posts would provoke.

Comment by Evan_Gaensbauer on Does EA understand how to apologize for things? · 2023-01-17T23:20:13.673Z · EA · GW

I tend to agree, though I answered the question this way because I've noticed a pattern over the last few years of actors in the EA community tending to apologize poorly much of the time.

I didn't have time to provide more examples when I wrote the comment, though I figure offering simple takes is okay for open-ended question posts like this anyway. It seems from how my comment has been received that most people who've bothered checking it have in mind the kind of trend in EA I'm alluding to and not only agree, but appreciate at least one person, like me, putting it plainly.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-17T19:53:33.178Z · EA · GW

It's long enough to be a top-level post, though around the days these thoughts were on my mind I didn't have time to flesh it out more, with links or more details, or to address what I'm sure would be a lot of good questions I'd receive. I wouldn't want to post it before it could be of better quality.

I've started using my short form to draft stubs or snippets of top-level posts. I'd appreciate any comments or feedback encouraging me to turn them into top-level posts, or, alternatively, feedback discouraging me from doing so, if someone thinks that's worthwhile.

Comment by Evan_Gaensbauer on The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized · 2023-01-17T02:33:27.903Z · EA · GW

Coming back to this almost a year later and having thought about this a few times, that post gets across most of what I wanted to express in this post better than I could have, so thanks for sharing it!

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-16T01:11:22.604Z · EA · GW

It should be noted that for most of the period the Centre for Effective Altruism itself acknowledges as its longest continuous pattern of mistakes, from 2016-2020, according to the Mistakes page on the CEA's website, the only three members of the board of directors were Nick Beckstead, Toby Ord, and William MacAskill.

(Note, January 15th: as I'm initially writing this and as of right now, I want to be clear and correct about this enough that I'll be running it by someone from the CEA. If someone from CEA reads this before I contact any of you, please feel free to either reply here or send me a private message for any mistakes/errors I've made here.)

(Note, Jan. 16th: I previously stated that Holden Karnofsky was a board member, not Toby. I also stated that this was the board of the CEA in the UK, that was my mistake. I've now been corrected by a staffer at the CEA, as I mentioned before that I'd be in contact with. I apologize for my previous errors.)

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-16T01:03:10.683Z · EA · GW

Any formal conflict of interest I ever had in effective altruism, I shed myself of almost five years ago. I've been a local and online group organizer in EA for a decade, so I've got lots of personal friends who work at or with support from EA-affiliated organizations. Those might be called more informal conflicts of interest, though I don't know how much they count as conflicts of interest at all.

I haven't had any greater social conflicts of interest, like being in a romantic relationship with anyone else in EA, for about that long either.

I've never signed a non-disclosure agreement for any EA-affiliated organization I might have had a role at or contracted with for any period of time. Most of what I'm referring to here is nothing that should worry anyone who is aware of the specific details of my personal history in effective altruism. My having dated someone for a few months who wasn't a public figure or a staffer at any EA-affiliated organization, or me having been a board member in name only for a few months to help get off the ground a budding EA organization that has now been defunct for years anyway, are of almost no relevance or significance to anything happening in EA in 2023.

In 2018, I was a recipient of an Effective Altruism Grant, one of the kinds of alternative funding programs administered by the Centre for Effective Altruism (CEA), like the current Effective Altruism Funds or the Community Building Grants program, though the EA Grants program was discontinued a few years ago.

I was also contracted for a couple months in 2018 with the organization then known as the Effective Altruism Foundation, as a part-time researcher for one of the EA Foundation's projects, the Foundational Research Institute (FRI), which has for a few years now been succeeded by a newer effort launched by many of the same effective altruists who operated FRI, called the Center for Long-Term Risk (CLTR).

Most of what I intend to focus on posting about on this forum in the coming months won't be at all about CLTR as it exists today or its background, though there will be some. Much of what I intend to write will technically entail referencing some of the CEA's various activities, past and present, though that's almost impossible to avoid when trying to address the dynamics of the effective altruism community as a whole anyway. Most of what I intend to write that will touch upon the CEA will have nothing to do with my past conflict of interest of having been a grant recipient in 2018.

Much of the above is technically me doing due diligence, though that's not my reason for writing this post.

I'm writing this post because everyone else should understand that I indeed have zero conflicts of interest, that I've never signed a non-disclosure agreement, and that for years and still into the present, I've had no active desire to work up to netting a job or career within most facets of EA. (Note, Jan. 17: Some of that could change but I don't expect any of it to change for at least the next year.)

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-15T23:34:59.030Z · EA · GW

Events of the last few months have shown that in the last few years many whistleblowers weren't taken seriously enough. If they had been, a lot of problems in EA that have come to pass might have been avoided or prevented entirely. They at least could have been resolved much sooner and before the damage became so great.

As much as more effective altruists have come to recognize this in the last year, one case I think deserves to be revisited but hasn't been is this review of problems in EA and related research communities originally written by Simon Knutsson in 2019, based on his own experiences working in the field.

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-15T23:23:56.222Z · EA · GW

As of June 2022, Holden Karnofsky said he was "currently on 4 boards in addition to Open Philanthropy's."

If that's still the case, that's too many organizations for a single individual in effective altruism to hold board positions at.

Comment by Evan_Gaensbauer on The writing style here is bad · 2023-01-15T22:58:32.288Z · EA · GW

Strongly upvoted

Comment by Evan_Gaensbauer on Does EA understand how to apologize for things? · 2023-01-15T22:53:33.609Z · EA · GW

Beyond any matter related to Nick Bostrom's recent apology, my two cents is that the answer is that, generally, no, most of the effective altruism community doesn't know how to apologize well.

Comment by Evan_Gaensbauer on A short thought on supporting EA despite recent issues · 2023-01-14T02:07:13.660Z · EA · GW

It's complicated because effective altruism is an institution, as are the organizations in it you're referencing. Yet those organizations are indeed also nodes in the effective altruism movement as a system. Ultimately, failing all else, those nodes could be replaced. The institution of effective altruism could survive without them. 

Comment by Evan_Gaensbauer on Should Founders Pledge focus more on recruiting EAs to become entrepreneurs? · 2023-01-11T03:09:21.601Z · EA · GW

My two cents is yes. I agree with most of the recent arguments in favour of more entrepreneurship in EA. One of the exceptions is that I'm strongly opposed to that happening through attempts to earn to give in ethically shady industries right after the FTX collapse. I trust The Founders Pledge about as much as any organization in EA to facilitate the former while preventing the latter.

Comment by Evan_Gaensbauer on RobBensinger's Shortform · 2023-01-11T02:56:33.258Z · EA · GW

I haven't checked if you've posted this in the Dank EA Memes Facebook group yet, though you should if you haven't. This meme would be incredibly popular in that group. It would get hundreds of likes. It would be the discourse premise that would launch one thousand threads. This is one of the rare occasions when posting in Dank EA Memes might net you the kind of serious feedback you want better than posting on this forum or LessWrong or, really, anywhere else on the internet. 

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2023-01-11T02:44:47.753Z · EA · GW

After the collapse of FTX, any predictions that the effective altruism movement will die with it are greatly exaggerated. Effective altruism will change in ways maybe none of us can even predict, but it won't die.

There are countless haters of so many movements on the internet who will themselves into believing that what happens to a movement when it falters is whatever they wish would happen, i.e., that the movement will die. Sensationalist polemicists and internet trolls don't understand history or the world well enough to know what they're talking about when they gleefully celebrate the end of whatever cultural forces they hate.

This isn't just true for effective altruism. This is true for every such movement towards which anyone takes such a shallow interpretation. If movements like socialism, communism, and fascism can make a worldwide comeback in the 2010s and 2020s in spite of their histories, effective altruism isn't going to just up and die, not by a longshot. 

Comment by Evan_Gaensbauer on Evan_Gaensbauer's Shortform · 2022-12-11T09:09:19.072Z · EA · GW

The FTX bankruptcy broke something in the heart of effective altruism, but in the process, I'm astonished by how dank it has become. This community was never supposed to be this dank and has never been danker. I never would've expected this. It's absurd.

Comment by Evan_Gaensbauer on AI Safety Seems Hard to Measure · 2022-12-11T08:46:36.888Z · EA · GW

I look forward to the next posts in this sequence, "It's Harder to Eliminate Global Poverty Than You'd Think," and "The Hard Problem of Consciousness Remains Unsolved."

Comment by Evan_Gaensbauer on [Cause Exploration Prizes] Dynamic democracy to guard against authoritarian lock-in · 2022-12-10T08:46:23.568Z · EA · GW

While we don't like putting labels but in the interest of upfront disclosure, our politics can be thought of as somewhere in the anarcho-communist/syndicalist corner, with sympathies for Bookchin
Authors background: ML/game theory PhD students who've been thinking about politics, economics and ways to achieve some scalable, feasible approximation to anarchocommunism

The authors had my interest when I saw the title of this post, though after reading the sections quoted above and that entire section of the essay, now they have my attention.

Comment by Evan_Gaensbauer on What is the relationship between impact and EA Forum karma? · 2022-12-08T02:46:17.195Z · EA · GW

Even if someone considers the impact of a post to be high, regardless of how much karma it has, that doesn't necessarily mean much, because it doesn't discern the worth of the post.

"Impact" needs to be operationalized and it remains infeasible to achieve a consensus on how it should be operationalized. It might be possible in theory to achieve agreement on what counts as an objectively valuable post, though in practice the criteria anyone applies for judging the value of any posts might as well be completely arbitrary.

For example, if my preferred cause is shared as a top priority only by a small minority of the effective altruism community, like invertebrate suffering, every post about that cause will receive way less karma than most other posts. That doesn't change the facts that:

  1. I'll consider the most valuable posts to be ones that have a below-average karma score among all posts.

  2. the posts most effective altruists consider the most impactful may not matter to me much at all.

Once someone becomes a more independent-minded effective altruist, how much karma a post receives matters way less.

Comment by Evan_Gaensbauer on Kelsey Piper's recent interview of SBF · 2022-11-17T10:01:24.807Z · EA · GW

Ok, either SBF is actually a complete moron, or this was a very calculated ploy.

he comes off as bizarre and incompetent, rather than as a evil super-villain

Why not both?

Comment by Evan_Gaensbauer on In favour of compassion, and against bandwagons of outrage · 2022-11-13T08:45:11.825Z · EA · GW

To be fair, you have to have a very high IQ to understand dank EA memes. The humor is extremely subtle, and without a solid grasp of theoretical memetics most of the jokes will go over a typical effective altruist's head. There's also the nihilistic outlook, which is deftly woven into their characterisation - the aesthetic draws heavily from Narodnaya Volya literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realize that they're not just funny- they say something deep about LIFE. As a consequence EAs who don't understand dank memes truly ARE normies- of course they wouldn't appreciate, for instance, the humour in the catchphrase "thank machine doggo," which itself is a cryptic reference to Apinis' original classic. I'm smirking right now just imagining one of those guileless geeks scratching their heads in confusion as genius trolley problems reveal themselves on their smartphone screens. How I pity them. 😂 And yes by the way, I DO have a dank EA memes tattoo. And no, you cannot see it. It's for the ladies' eyes only- And even they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand.

Comment by Evan_Gaensbauer on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-09T06:30:45.240Z · EA · GW

The question of to what extent more effective altruists should return to earning to give, now that the value of companies like Meta and FTX has declined over the last year, has me pondering whether that's worthwhile, given that nobody in EA seems to know how to usefully spend much more money per year on multiple EA causes.

I've been meaning to write a post about the mixed messaging around what to do about AI alignment. There has been an increased urgency to onboard new talent and to launch and expand projects, yet there is an apparently growing consensus that almost everything everyone is doing is either pointless or making things worse. Meanwhile, setbacks facing the clean meat industry have been mounting over the last couple of years, and there aren't clear or obvious ways to make significant progress on overcoming them mainly by throwing more money at them in some way.

I'm not as familiar with how much room for more funding other EA priority areas have before diminishing marginal returns set in. I expect that, other than a few clear-cut cases like some of GiveWell's top recommended charities, there isn't a strong sense of how to usefully spend more money per year than many causes are already receiving from the EA community.

It's one thing for smaller numbers of people returning to earning to give to know the best targets for increased marginal funding that might fall through after the decline of FTX. It seems shortsighted, though, to send droves of people rushing back into earning to give when there wouldn't be any consensus on which interventions they should be earning to give to.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-09T04:44:14.202Z · EA · GW

None of Musk's projects are by themselves bad ideas. None of them are obviously a waste of effort either. I agree the impacts of his businesses are mostly greater than the impact of his philanthropy, while the opposite is presumably the case for most philanthropists in EA. 

I agree his takeover of Twitter so far doesn't strongly indicate whether Twitter will be ruined. He has made it much harder for himself to achieve his goals with Twitter, though, through the many mistakes he made over the last year in the course of buying it.

The problem is that he is someone able to have an impact that's based strictly neither in business nor in philanthropy. A hits-based approach built on low-probability, high-consequence bets will sometimes carry a low risk of highly negative consequences. The kind of risk tolerance associated with a hits-based approach doesn't work when misses could be catastrophic:

  • His attempts in the last month to intervene in the war in Ukraine and disputes over Taiwan's sovereignty seem to speak for themselves as at least a yellow flag. That's enough of a concern even ignoring whatever impacts he has on domestic politics in the United States. 
  • The debacle over whether OpenAI will be a net positive for AI alignment, and effective altruism's involvement in the organization's founding, is regarded by some as one of the worst mistakes in the history of AI safety/alignment. Elon Musk played a crucial role in OpenAI's founding and has acknowledged he made mistakes with OpenAI, having since distanced himself from the organization. In general, the overall impact he has had on AI alignment is ambiguous. Other than world leaders, he remains one of the small number of individuals most capable of impacting public responses to advancing AI, though it's not clear whether or how much he can be relied on to have a positive impact on AI safety/alignment in the future.

These are only a couple of examples of the potential impact and risks of his decisions, which are unlike anything any individual in EA has done before. An actor in his position should feel a greater degree of fear and uncertainty, enough at least to inspire more caution. My assumption is that he isn't cautious enough. I asked my initial question in the hope that the causes of his recklessness can be identified, to aid in formulating adequate protocols for responding to any potentially catastrophic errors he commits in the future.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-09T01:26:52.168Z · EA · GW

I agree that Musk should have more epistemic guardrails, but also that EA should be more ambitious and less timid, though more tactful. Always trying to please everyone, staying apolitical, and flying under the radar can constitute extreme risk aversion, which is a risk in itself.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-09T00:58:41.901Z · EA · GW

I acknowledged in some other comments that I wrote this post sloppily, so I'm sorry for the ambiguity. Musk's recent purchase of Twitter and its ongoing consequences are part of why I've made this post. It's not about it being bad that he bought Twitter, but about the series of mistakes that has followed.

It's not about him being outspoken and controversial. The problem is that Musk isn't sufficiently risk-averse and may have blindspots that could have a significant negative impact on his EA-related/longtermist efforts.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-09T00:31:28.477Z · EA · GW

I'm thinking of asking people like that what they're doing, but I also intend to request feedback from them and others in EA on how to communicate related ideas better. I've asked this question to check whether there are major factors I might be missing, as a prelude to a post with my own views. That'd be high-stakes enough that I'd put in the effort to write it well, effort I didn't put into this question post. I might title it something like "Effective Altruism Should Proactively Help Allied/Aligned Philanthropists Optimize Their Marginal Impact."

Other than at the Centre for Effective Altruism, who are the new/senior communications staff it'd be good to contact?

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-09T00:17:54.564Z · EA · GW

I meant the initial question literally and sought an answer. I listed some general kinds of answers and clarified that I'm seeking potential factors shaping Musk's approaches that wouldn't be so obvious. I acknowledge I could have written that better, and that the tone makes it ambiguous whether I was trying to slag him under the guise of asking a sincere question.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-08T22:54:19.465Z · EA · GW

Strongly upvoted. You've put my main concern better than I knew how to put it myself.

Comment by Evan_Gaensbauer on Why does Elon Musk suck so much at calibration? · 2022-11-08T22:45:10.820Z · EA · GW

Musk has for years identified ensuring that civilization is preserved as one of the major motivators for most of his endeavours.

From EA convincing Elon Musk to take existential threats from transformative AI seriously almost a decade ago, to his recent endorsement of longtermism and William MacAskill's What We Owe the Future on Twitter for millions to see, the public will perceive a strong association between him and EA.

He also continues to influence the public response to potential existential threats like unaligned AI and the climate crisis, among others. Even if Musk has more hits than misses, his track record is mixed enough that it's worth trying to notice any real patterns across his mistakes so their negative impact can be mitigated. Given Musk's enduring respect for EA, the community may be better placed than most to inspire him to make better decisions in the future as they relate to having a positive social impact, i.e., to become better at calibration.