Post results

Have The Effective Altruists And Rationalists Brainwashed Me? by UtilityMonster (utilitymonster) · 2022-06-19T16:05:15.348Z
In the [second chapter](https://www.hpmor.com/chapter/2) of [Harry Potter and the Methods of Rationality](https://hpmorpodcast.com/?page_id=56)^[\[12\]](#fnag53fk88v8q)^, by [Eliezer Yudkowsky](https://en.wikipedia.org/wiki/Eliezer_Yudkowsky), Professor McGonagall turns into a cat in front of Harry. Harry freaks out.
Interviews with former EA teens by Kirsten (kirsten) · 2023-04-25T16:58:02.234Z
I first heard of EA ideas in HPMOR when I was 13 - Arun Jose
Recovering from Rejection (written for the In-Depth EA Program) by Aaron Gertler (aaron-gertler) · 2023-07-03T09:42:50.097Z
Then, as a college freshman, I read [Harry Potter and the Methods of Rationality](http://www.hpmor.com/). It showed me a new way of thinking about the world: it could actually be changed, drastically, for the better, if you focused on the right problems. I became less cynical. But I still read the same news sites as I had before, and got a similar perspective from my college coursework, and kept recording issues in my journal…
I scraped all public "Effective Altruists" Goodreads reading lists by MaxRa (maxra) · 2021-03-23T20:28:30.476Z
Here are the books that our community has already explored a bunch. I would not have expected *1984* and *Superintelligence* to make it to the Top 5. HPMOR being the least read Harry Potter novel is a slight disappointment.
10 years of Earning to Give by AGB (agb) · 2023-11-07T23:35:53.463Z
1.  IIRC, from £50k / yr to £200k / yr.

2.  Please note I'm taking some liberties with the rounding here; this is intended as indicative only.

3.  This is also the standard for [UK](https://www.ons.gov.uk/economy/inflationandpriceindices/articles/consumerpricesindexincludingowneroccupiershousingcostshistoricalseries/1988to2004#introduction) and [US](https://www.bls.gov/cpi/factsheets/owners-equivalent-rent-and-rent.htm) inflation indices. In short, I substitute my actual housing costs (mortgage interest, maintenance, etc.) for an estimate of what I would have paid to rent equivalent accommodation. Overall this comes out within 10% of my experienced costs but is much smoother year-to-year.

4.  More precisely, events I would have previously found very stressful. My actual felt reaction is highly reminiscent of [this quote](https://hpmor.com/chapter/74):

    > Going through Azkaban had recalibrated his scale of emotional disturbances; and losing a House point, which had formerly rated five out of ten, now lay somewhere around zero point three.

5.  More sad for the world than for me personally; my immediate circles formed in earlier years aren't going anywhere, but it does seem like the ladder has been pulled up behind me.

6.  I wasn't aware of the [FIRE](https://www.investopedia.com/terms/f/financial-independence-retire-early-fire.asp) movement, but it seems likely I would have come across it sooner or later, and that would have helped provide some structure and planning on the objective. As it was, I think I first heard the acronym in 2014.

7.  Again, some liberties with GBP:USD conversion rates and rounding. It's about right though; GBP was higher back in the mid-2010s...
Sharing Information About Nonlinear by Ben Pace (ben-pace) · 2023-09-07T06:51:26.290Z
*   I, Emerson, have had a lot of exceedingly harsh and cruel business experience, including getting tricked or stabbed-in-the-back. Nonetheless, I have often prevailed in these difficult situations, and learned a lot of hard lessons about how to act in the world.
*   The skills required to do so seem to me lacking in many of the earnest-but-naive EAs that I meet, and I would really like them to learn how to be strong in this way. As such, I often tell EAs these stories, selecting for the most cut-throat ones, and sometimes I try to play up the harshness of how you have to respond to the threats. I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell^[\[8\]](#fn03ez0hl92kmc)^ toughens up the students in HPMOR through teaching them Defense Against the Dark Arts, to deal with real monsters in the world.
*   For instance, I tell people about my negotiations with Adorian Deck about the OMGFacts brand and Twitter account. We signed a good deal, but a California technicality meant he could pull from it and take my whole company, which is a really illegitimate claim. They wouldn't talk with me, so I was working with top YouTubers to make some videos publicizing and exposing his bad behavior. This got him back to the negotiation table and we worked out a deal where he got $10k/month for seven years, which is not a shabby deal, and meant that I got to keep my company!
*   It had been reported to Ben that Emerson said he would be willing to go into legal gray areas in order to "crush his enemies" (if they were acting in very reprehensible and norm-violating ways). Emerson thinks this has got to be a misunderstanding, that he was talking about what other people might do to you, which is a crucial thing to discuss and model.
GiveWell's updated estimate of deworming and decay by GiveWell (givewell) · 2023-04-03T21:07:04.920Z
Imposing a Lifestyle: A New Argument for Antinatalism by Oldphan (oldphan) · 2023-08-23T22:23:14.080Z
EA & LW Forum Weekly Summary (30th Jan - 5th Feb 2023) by Zoe Williams (greyarea) · 2023-02-07T02:13:12.255Z
EA Oslo: AI-pils by Group Organizer (group-organizer) · 2023-07-24T06:51:31.576Z
EA Oslo: Sommerfest by Group Organizer (group-organizer) · 2023-06-14T04:59:07.745Z
HLI’s Giving Season 2023 Research Overview by Happier Lives Institute (happier-lives-institute) · 2023-11-28T14:03:18.901Z
AI Risk is like Terminator; Stop Saying it's Not by skluug · 2022-03-08T19:17:16.932Z
Book Review: Oryx and Crake by Benny Smith (benny-smith) · 2023-07-13T14:10:09.186Z
Uncertainty over time and Bayesian updating by David Rhys Bernard (davidbernard) · 2023-10-25T15:51:54.955Z
What do we really know about growth in LMICs? (Part 1: sectoral transformation) by Karthik Tadepalli (karthik-tadepalli) · 2023-12-02T17:04:22.295Z
Takeaways on US Policy Careers (Part 2): Career Advice by Mau (mau) · 2021-11-08T08:42:57.801Z
Book Review: Going Infinite by Zvi (zvi-1) · 2023-10-25T19:08:21.643Z
Positive roles of life and experience in suffering-focused ethics by Teo Ajantaival (teo-ajantaival) · 2021-05-22T16:05:35.885Z
Cash and FX management for EA organizations by JueYan (jueyan) · 2023-10-14T00:19:01.225Z
So you want to do operations [Part two] - how to acquire and test for relevant skills by eirine · 2018-12-17T11:09:06.691Z
Why the Threat of AI to Humanity Should Be Taken Seriously by EA Japan (ea-japan) · 2023-08-16T14:45:21.428Z

Comment results

comment by trevor1 on trevor1's Quick takes · 2023-08-21T02:18:30.496Z
HPMOR technically isn't built to be time-efficient; the [highlights of the sequences](https://www.lesswrong.com/highlights) are better for that. HPMOR is meant to replace other things you do for fun, like novels, TV shows, or social media, with material that offers passive upskilling. In that sense, it is profoundly time-efficient, because it replaces fun time spent not upskilling at all with fun time spent upskilling.

A very large proportion of EA-adjacent people in the Bay Area swear by it as a way to become more competent in a very broad and significant way, but I'm not sure how it compares with other books like Discworld, which are also intended for slack/leisure time. AFAIK CEA has not even done a survey explicitly asking about the self-improvement caused by HPMOR, let alone a study measuring the benefits of having different kinds of people read it.
comment by burner on Sharing Information About Nonlinear · 2023-09-07T15:15:23.692Z
Influencing the creation of Professor Quirrell in HPMOR and being influenced by Professor Quirrell in HPMOR both seem to correlate with being a bad actor in EA - a potential red flag to watch out for.
comment by trevor1 on trevor1's Quick takes · 2023-08-20T16:13:04.087Z
I strongly think that people are sleeping on massive EV from encouraging others to read [HPMOR](https://hpmor.com/), or to reread it if it's been years. It costs nothing to read it, and it likely offers broad and intense upskilling.
comment by robirahman on trevor1's Quick takes · 2023-08-21T20:36:01.473Z
I think you're committing a typical mind fallacy if you think most people would benefit from reading HPMOR as much as you did.
comment by trevor1 on trevor1's Quick takes · 2023-08-21T02:03:28.272Z
I totally agree, but like [the sequences](https://www.lesswrong.com/highlights) those books consume energy that is normally spent on work, or at least hobbies, whereas HPMOR is optimized to replace time that would otherwise have been spent on videos, social media, socializing, other novels, etc. and is therefore the best bet I know of to boost EA as a whole.
comment by jlemien on trevor1's Quick takes · 2023-08-21T11:41:06.433Z
Additional thought: if we assume that people can gain skills from reading fiction (or from otherwise engaging in imaginary worlds, such as via films or games), does HPMOR give the best “return on investment” per hour spent? Is it better than reading War and Peace, or watching John Green videos, or playing Life is Strange? I’m skeptical; I suspect EAs tend to be biased in favor of it, and that we therefore neglect other options.

(I’m not really expecting anyone to have actual data on this, but I’d be curious to see people bounce the idea around a bit)
comment by quadratic-reciprocity on Read The Sequences · 2023-11-20T12:09:39.928Z
This post got some flak and I am not sure if it actually led to more EAs seriously considering engaging with the Sequences. However, I stand by the recommendation even more strongly now. If I were in a position to give reading recommendations to smart young people who wanted to do big, impactful things, I would recommend the Sequences (or HPMOR) over any of the EA writing.
comment by ben-pace on Sharing Information About Nonlinear · 2023-09-07T20:38:06.878Z
The answer to many of your questions is no, I have little prior professional experience with this sort of investigation! (I had also never run an office before Lightcone Office, never run a web forum before LessWrong, and never run a conference before EAGxOxford 2016.)

My general attitude to doing new projects that I think should be done and nobody else is doing them is captured in this [quote](https://hpmor.com/notes/101/) by Eliezer Yudkowsky that I think about often:

> But if there’s one thing I’ve learned in life, it’s that the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.
comment by jackson-wagner on Announcing the AI Fables Writing Contest! · 2023-07-12T21:09:56.512Z
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story [here](https://forum.effectivealtruism.org/posts/SGG2j9zm2icRsXofa/creative-nonfiction-the-toba-supervolcanic-eruption)) and of Future of Life Institute's AI Worldbuilding contest ([here](https://forum.effectivealtruism.org/posts/LLfaikCmysmdxussN/fiction-improved-governance-on-the-critical-path-to-ai)), I agree that it seems like the default outcome is that even the winning stories don't get a huge amount of circulation.  The real impact would come from writing the one story that actually does go viral beyond the EA community.  But this seems pretty hard to do; perhaps better to pick something that has already gone viral (perhaps an existing story like one of the Yudkowsky essays, or perhaps expanding on something like a very popular tweet to turn it into a story), and try to improve its presentation by polishing it, and perhaps adding illustrations or porting to other mediums like [video](https://www.youtube.com/watch?v=cZYNADOHhVY) / [audio](https://hpmorpodcast.com/) / etc.  
  
That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA.  (Suggestions welcome for things we might consider adapting!)  
  
Could also be a cool mini-project of somebody's, to go through the archive of existing rationalist/EA stories, and try and spruce them up with midjourney-style AI artwork; you might even be able to create some passable, relatively low-effort youtube videos just by doing a dramatic reading of the story and matching it up with panning imagery of midjourney / stock art?  
  
On the other hand, writing stories is fun, and a $3000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story!  I guess my concrete advice would be to put more emphasis on starting from a seed of something that's already shown some viral potential (like a popular tweet making some point about AI safety, or a fanfic-style spinoff of a well-known story that is tweaked to contain an AI-relevant lesson, or etc).
comment by ariel-simnegar on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T22:08:19.234Z
I apologize for phrasing my comment in a way that made you feel like that. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically" -- I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer's writings.

I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:

1.  Caring about animal welfare is important (99% confidence): [Here's](https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously?commentId=7s6jL3dMp4s8zdHvf) the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
2.  Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated [here](https://forum.effectivealtruism.org/posts/FjSxNiNRqgFG6By8Q/) by many disinterested parties.
3.  Eliezer's views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
    1.  A fair critique is that sure, the sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views *in domains that have nothing to do with rationality* (like animal welfare) have had outsize influence on rationalist culture is much less clear.
    2.  My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
4.  Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
    1.  [NYT](https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html): "two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement...Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a [Rationalist thought experiment](https://www.nytimes.com/2020/07/25/style/elon-musk-maureen-dowd.html) — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
    2.  [Sam Altman](https://twitter.com/sama/status/1621621724507938816?lang=en): "certainly \[Eliezer\] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".

On whether aligned TAI would create a utopia for humans and animals, I think the [arguments](https://longtermrisk.org/against-wishful-thinking/) for [pessimism](https://reducing-suffering.org/utopia/)--*especially* about the prospects for animals--are serious enough that having TAI steerers care about animals is very important.
comment by evan_gaensbauer on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-17T08:14:25.450Z
_Summary_: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling 'counterfactually unique' grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.

Based on that measure, **20** of **23** grant recommendations, or **87%**, worth **$673,150** of **$923,150**, or ~**73%** of the money to be disbursed, are counterfactually unique. Having read all the comments, I saw multiple concerns raised about a few specific grants, based on uncertainty or controversy in the estimation of the value of those recommendations. Even if we exclude those grants from the estimate of counterfactually unique grant recommendations to make a 'conservative' estimate, **16** of **23** grants, or **69.5%**, worth **$535,150** of **$923,150**, or **~58%** of the money to be disbursed, are counterfactually unique **and** fit into a more conservative, risk-averse approach that would have ruled out more uncertain or controversial successful grant applicants.

These numbers represent an extremely significant improvement over a year ago in the quality and quantity of unique grantmaking opportunities the Long-Term Future Fund has identified. This grant report generally succeeds at achieving a goal of coordinating donations through the EA Funds to unique recipients who otherwise would have been overlooked for funding by individual donors and larger grantmakers. This report is also the most detailed of its kind, and it creates an opportunity to build a detailed assessment of the Long-Term Future Fund's track record going forward. I hope the other EA Funds emulate and build on this approach.

_General Assessment_

In his [2018 AI Alignment Literature Review and Charity Comparison](https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison), Larks had the following to say about changes in the management structure of the EA Funds.

> I’m skeptical this will solve the underlying problem. Presumably they organically came across plenty of possible grants – if this was truly a ‘lower barrier to giving’ vehicle than OpenPhil they would have just made those grants. It is possible, however, that more managers will help them find more non-controversial ideas to fund.

To clarify, the purpose of the EA Funds has been to allow individual donors who are relatively smaller than grantmakers like the Open Philanthropy Project (this includes all donors in EA except other professional, private, non-profit grantmaking organizations) to identify higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund in this round of grantmaking, only $250,000 went to the kind of projects or organizations that the Open Philanthropy Project tends to fund. To clarify, there isn't a rule or practice of the EA Funds not making those kinds of grants. It's at the discretion of the fund managers to decide if they should recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects/organizations. At the time of this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement.

20 grant recommendations (**~87%** of the total number) totalling $673,150 (**~73%** of the money)

\+ 3 grant recommendations (**~13%** of the total number) totalling $250,000 (**~27%** of the money)

= 23 grant recommendations in total, totalling $923,150 (**100%**)

  

Since this is the most extensive round of grant recommendations from the Long-Term Future Fund to date under the EA Funds' new management structure, this is the best apparent opportunity for evaluating the success of the changes made to how the EA Funds are managed. In this round of grantmaking, **87%** of the total number of grant recommendations, totalling **73%** of the total amount of money to be disbursed, were for efforts of individuals that would otherwise have been missed by individual donors or larger grantmaking bodies.

In other words, the Long-Term Future (LTF) Fund is directly responsible for **87%** of the **23** grant recommendations made, totalling **73%** of the **$923.15K** to be disbursed, in unique grants that presumably would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund's current rate of efficiency in fulfilling the purpose it was set up for.

_Criticisms and Conservative Estimates_

Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as a measure of how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants donors to the EA Funds very probably _could_ not have identified by themselves. Yet another question is whether they _would_ opt to donate to the grant recommendations that have just been made by the LTF fund managers. Part of the basis for the EA Funds thus far is to trust the fund managers' individual discretion, based on their years of expertise or professional experience working in the respective cause area. My above estimates are based on the assumption that all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a 'liberal' estimate.

I've at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence basis on which some grant recommendations were made wasn't sufficient to justify the grant, i.e., they were 'too risky.' If we exclude grant recommendations that are subject to multiple, unresolved concerns, we can make a 'conservative' estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund.

*   Concerns with **1** grant recommendation worth **$28,000** to hand out printed copies of the fanfiction _HPMoR_ to international math competition medalists.
*   Concerns with **2** grant recommendations worth **$40,000** for individuals who are not currently pursuing one or more specific, concrete projects, but rather are pursuing independent research or self-development. The concern is that the grant is based on the fund manager's (managers' ?) personal confidence in the individual, and even the explanation for the grant recommendations expressed concern about the uncertainty in the value of grants like these.
*   Concerns that, with multiple grants made to similar forecasting-based projects, there would be redundancy; in particular, concern with **1** grant recommendation worth **$70,000** to the forecasting company Metaculus, which might be better suited to an equity investment in a startup than to a grant from a non-profit foundation.

In total, these are **4** grants worth **$138,000** that multiple commenters have raised concerns with, on the basis that the uncertainty around these grants means the recommendations don't seem justified. To clarify, I am not making an assumption about the value of these grants. All I would say about these particular grants is that they are unconventional, but that insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up with grant recipients in the future and publish their takeaways from the grants.

Of the **20** recommendations made for unique grant recipients worth **$673,150**, if we exclude these **4** recommendations worth **$138,000**, that leaves **16** of **23**, or **69.5%** of total recommendations, worth **$535,150** of **$923,150**, or **~58%** of the total value of grant recommendations, uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this 'conservative' estimate are ruled out based on the uncertainty or lack of confidence expressed by commenters, not necessarily by the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients about which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the best hopes of the fund managers for the value of each of these grants will be borne out is something to follow up on in the future.
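For anyone who wants to check the figures, here is a minimal sketch (Python, with hypothetical variable names of my own) that simply re-derives the 'liberal' and 'conservative' percentages from the dollar amounts quoted in this comment:

```python
# Figures quoted in the grant analysis above; variable names are mine.
TOTAL_GRANTS = 23
TOTAL_USD = 923_150

unique_grants, unique_usd = 20, 673_150            # 'liberal' estimate of counterfactually unique grants

contested_grants = 4                                # grants with multiple unresolved concerns
contested_usd = 28_000 + 40_000 + 70_000            # = $138,000

conservative_grants = unique_grants - contested_grants   # 16
conservative_usd = unique_usd - contested_usd             # $535,150

def share(part, whole):
    """Percentage of `whole` that `part` represents, to one decimal place."""
    return round(100 * part / whole, 1)

print(share(unique_grants, TOTAL_GRANTS), share(unique_usd, TOTAL_USD))
# 87.0 72.9  -> the "87%" and "~73%" liberal figures
print(share(conservative_grants, TOTAL_GRANTS), share(conservative_usd, TOTAL_USD))
# 69.6 58.0  -> the "69.5%" and "~58%" conservative figures (small rounding difference)
```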

_Conclusion_

While these numbers don't address suggestions for how the management of the Long-Term Future Fund can still be improved, overall I would say they show the Long-Term Future Fund has improved extremely significantly since last year at achieving a high rate of counterfactually unique grants to more nascent or experimental projects that are typically missed in EA donations. I think that with some suggested improvements, like hiring professional clerical assistance with managing the Long-Term Future Fund, the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring their track record of success with their grants remains to be done, but this report provides a great foundation for starting to do so.
comment by nicklaing on Open Philanthropy’s newest focus area: Global Public Health Policy · 2023-11-26T14:51:58.901Z
The way people downvote jokes on this forum lol. Really discourages it, I feel we need a decent chunk more humor here!
comment by jeffrey-kursonis on Announcing CEA’s Interim Managing Director · 2023-04-06T00:49:56.501Z
Wonderful, Ben! I have been in your exact position before, over a struggling organization at the heart of a movement similar to EA - and your post is exactly what is needed. One of the oldest wisdoms known is that humor is the best medicine. EA is full of amazing people, and a few missteps have happened. If the desire to do good in the world sometimes gets a little mixed up with personal ego ambitions, so what? We are all human. The key is to learn and not repeat the mistakes. I'm pretty sure that will be the result. I have tremendous confidence in all the leaders of EA and feel sure they will make wise moves soon.
  
It's not quite time to say the crisis is finished; there are still some decisions to be made, and some leaders who might relinquish responsibilities. But soon after that it will be time for EA to move on, get out of this difficult time, and carry on with being effective at altruism -- probably the most important thing humans can do.

It's my belief that EA needs more art and humanities people at every level, and this kind of post and attitude is exactly the thing to make those people feel comfortable and welcome.
comment by jeffrey-kursonis on Announcing CEA’s Interim Managing Director · 2023-04-08T17:09:51.679Z
And not only was it highly appropriate it was very funny, I LOL’d more than once. Anybody working with you is very lucky. 

I’ve spent my life in very serious pursuits, where passion and morals and ethics are very much in your face all the time, and I’ve found the only sustainable way to survive in that environment is with humor; I've seen that widely practiced by many such leaders.
comment by oagr on Ozzie Gooen's Quick takes · 2023-08-01T21:28:38.222Z
I recently noticed it here:  
[https://forum.effectivealtruism.org/posts/WJGsb3yyNprAsDNBd/ea-orgs-need-to-tabletop-more](https://forum.effectivealtruism.org/posts/WJGsb3yyNprAsDNBd/ea-orgs-need-to-tabletop-more)  
  
Looking back, it seems like there weren't many more very recently. Historically, there have been some.  
  
[EA needs consultancies](https://forum.effectivealtruism.org/posts/CwFyTacABbWuzdYwB/ea-needs-consultancies)  
[EA needs to understand its “failures” better](https://forum.effectivealtruism.org/posts/Nwut6L6eAGmrFSaT4/ea-needs-to-understand-its-failures-better)  
[EA needs more humor](https://forum.effectivealtruism.org/posts/qmh9bWAthovqoew8z/ea-needs-more-humor)  
[EA needs Life-Veterans and "Less Smart" people](https://forum.effectivealtruism.org/posts/2JXLxKbSsbicnt9N9/ea-needs-life-veterans-and-less-smart-people)  
[EA needs outsiders with a greater diversity of skills](https://forum.effectivealtruism.org/posts/G7nraJyjxCfiWEjkz/ea-needs-outsiders-with-a-greater-diversity-of-skills)  
[EA needs a hiring agency and Nonlinear will fund you to start one](https://forum.effectivealtruism.org/posts/4yRHpnaE3Ho8TgATZ/ea-needs-a-hiring-agency-and-nonlinear-will-fund-you-to)  
[EA needs a cause prioritization journal](https://forum.effectivealtruism.org/posts/tSCnGyP5wpSuEmRDQ/ea-needs-a-cause-prioritization-journal)  
[Why EA needs to be more conservative](https://forum.effectivealtruism.org/posts/nQ2dbp2bxe2mxE54j/why-ea-needs-to-be-more-conservative)  
  
Looking above, many of those seem like "nice to haves". The word "need" seems over-the-top to me.
comment by nicklaing on Donation Election rewards · 2023-11-22T19:33:22.992Z
Love it!  
  
But why not make the bad drawings NFTs? It's an amazing idea: we can resell them tomorrow for tens of thousands of dollars, and make orders of magnitude more money, which we can then re-gift, spawning more NFTs and creating a perpetual motion money-making scheme for good!
  
This is sure to be a sustainable ~~ponzi scheme~~ business model to drive the success of the Effective Altruism movement into our utopian perfectly aligned AI driven future!

Too soon?

(A little unsure if this humor is ok here or not, will happily delete if karma quickly slides negative ;)
comment by quintin-pope-1 on Quick takes on "AI is easy to control" · 2023-12-03T00:19:03.796Z
(Didn't consult Nora on this; I speak for myself)

  
I only briefly skimmed this response, and will respond even more briefly.

Re "Re: "AIs are white boxes""

You apparently completely misunderstood the point we were making with the white box thing. It has ~nothing to do with mech interp. It's entirely about whitebox optimization being better at controlling stuff than blackbox optimization. This is true even if the person using the optimizers has no idea how the system functions internally.   
  
Re: "Re: "Black box methods are sufficient"" (and the other stuff about evolution)

Evolution analogies are bad. There are many specific differences between ML optimization processes and biological evolution that predictably result in very different high level dynamics. You should not rely on one to predict the other, as I have [argued](https://optimists.ai/2023/04/11/evolution-provides-no-evidence-for-the-sharp-left-turn/) [extensively](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky?commentId=x5gvAzg5mBKEaywuD#Edit__Why_evolution_is_not_like_AI_training) [elsewhere](https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment).   
 

Trying to draw inferences about ML from bio evolution is only slightly less absurd than trying to draw inferences about cheesy humor from actual dairy products. Regardless of the fact that they can both be called "optimization processes", they're completely different things, with different causal structures responsible for their different outcomes, and **crucially**, those differences in causal structure explain their different outcomes. There's thus no valid inference from "X happened in biological evolution" to "X will eventually happen in ML", because X happening in biological evolution is explained by evolution-specific details that don't appear in ML (at least for most alignment-relevant Xs that I see MIRI people reference often, like the [sharp left turn](https://optimists.ai/2023/04/11/evolution-provides-no-evidence-for-the-sharp-left-turn/)).

Re: "Re: Values are easy to learn, this mostly seems to me like it makes the incredibly-common conflation between "AI will be able to figure out what humans want" (yes; obviously; this was never under dispute) and "AI will care""

This wasn't the point we were making in that section at all. We were arguing about concept learning order and the ease of internalizing human values versus other features for basing decisions on. We were arguing that human values are easy features to learn / internalize / hook up to decision making, so on any natural progression up the learning capacity ladder, you end up with an AI that's aligned before you end up with one that's so capable it can destroy the entirety of human civilization by itself.   
 

Re "Even though this was just a quick take, it seemed worth posting in the absence of a more polished response from me, so, here we are."

I think you badly misunderstood the post (e.g., multiple times assuming we're making an argument we're not, based on shallow pattern matching of the words used: interpreting "whitebox" as meaning mech interp and "values are easy to learn" as "it will know human values"), and I wish you'd either take the time to actually read / engage with the post in sufficient depth to not make these sorts of mistakes, or not engage at all (or at least not be so rude when you do it).   
 

*(Note that this next paragraph is speculation, but a possibility worth bringing up, IMO):*

As it is, your response feels like you skimmed just long enough to pattern match our content to arguments you've previously dismissed, then regurgitated your cached responses to those arguments. Without further commenting on the merits of our specific arguments, I'll just note that this is a very bad habit to have if you want to actually change your mind in response to new evidence/arguments about the feasibility of alignment.

  
Re: "Overall take: unimpressed."

I'm more frustrated and annoyed than "unimpressed". But I also did not find this response impressive.
comment by linch on The Seeker’s Game – Vignettes from the Bay · 2023-07-11T09:28:30.275Z
Now I obviously can't speak about or for every Bay Area EA subculture or individual interaction, but the following sentence leapt out to me.

> So better lie low, occasionally drop sophisticated remarks – and don't be caught asking dumb questions.

I would be *very* surprised if this is a good long-term strategy. I mean, this is close to the opposite of the advice I've received as a newcomer in (I think) literally every formal organization I've worked at (both in and outside of EA), and also the opposite of the advice I give to newcomers. I don't see why things should be very different between quasi-formal and formal settings in this case.

I also think there's an intuitive conceptual model for asking many questions:

*   You learn and grow more by asking dumb questions than by nodding wisely and asking questions that you consider smart.
*   On the object-level, there's no sense in which asking dumb questions can plausibly have very significant downsides *for the world* (other than opportunity costs)
    *   Whereas bad research and bad decisions *could* plausibly be significantly harmful
    *   Failure to ask questions may lead to bad inferences, and bad inferences could lead to bad decisions
*   Nobody's going to ever remember dumb questions you ask, unless it's *really* out there, like "Why are Americans afraid of [dragons](https://w3.ric.edu/faculty/rpotter/temp/waaaod.pdf)?"
    *   And even then they'll only remember it humorously
*   For better or worse, some people (hi!) really like explaining/mansplaining and hearing the sound of their own voice. 
    *   So if you ask loads of dumb questions to different people, you might make a new friend!

So please, feel free to ask dumb questions, y'all! :)