Have The Effective Altruists And Rationalists Brainwashed Me?

post by Matt Goldwater (Matt_Goldwater) · 2022-06-19T16:05:15.348Z · EA Forum
TLDR: No. But I cautiously trust many common EA/rationalist opinions.
When I’m searching for help online, I start some of my search queries with prefixes such as site:lesswrong.com. That means Google will only return search results from LessWrong.
I’ve searched site:lesswrong.com cold shower, site:lesswrong.com optimal tooth brushing, site:lesswrong.com wirecutter, site:astralcodexten.substack.com aromatherapy, and site:forum.effectivealtruism.org where should I live.
LessWrong is the site of the rationalist community. They imply they’re less wrong than everyone else. Astral Codex Ten is a blog by the prominent rationalist Scott Alexander. His rationalist fame suggests he’s especially less wrong. And the EA Forum is the forum of the effective altruism (EA) movement. Effective altruism is about “doing good better.”
But are the rationalists really less wrong? Are the effective altruists truly doing good better? How can I evaluate them in as unbiased a way as possible? After all, I don’t only read rationalist content for general (i.e., Lifehacker-esque) productivity advice. I read what rationalists say about rationality. They influence how I think.
What I’ve Learned From The Rationalists
I think the following list contains the most important lessons about rationality that I’ve learned from rationalists.
Always Be Rational
Plenty of people dream of starting a company, so there’s no shortage of advice for wannabe entrepreneurs. I’d hear tropes like “founders should be overconfident,” “fake it till you make it,” and “move fast and break things.”
I agree with the spirit of those statements. If someone doesn’t believe in themselves enough, or they’re not willing to take enough risks, I wouldn’t bet on their startup succeeding. I think it makes sense to “fake it” (i.e., pretend to be confident and/or lie) when appropriate too.
But I wouldn’t take those tropes too seriously. Sometimes I’ve been too confident in my ability. I’ve faked it by telling people I’d complete a task by a certain time and failed to do it. And sometimes, I don’t think it’s worth it to risk breaking something. Meta (Facebook) changed its motto from “Move fast and break things” to “Move fast with stable infrastructure.” That seems fair if they still lose close to $163,565 every minute the app goes down.
I refer to those tropes as reversible advice. Scott Alexander suggests considering the opposite of the advice you’re receiving if 1) there are plausibly near-equal groups of people who need this advice versus the opposite advice, or 2) you’ve self-selected into the group of people receiving this advice by, for example, being a fan of the blog / magazine / TV channel / political party / self-help-movement offering it.
And The Scout Mindset, by Julia Galef, implies that nobody should be overconfident. It describes how when Elon Musk founded SpaceX, he thought there was a 10% chance that a SpaceX craft would make it into orbit. It states he thought there was a 10% chance Tesla would succeed too. And that Jeff Bezos thought there was a 30% chance Amazon would succeed.
Musk said, “If something's important enough, you should try. Even if the probable outcome is failure.” I think that suggests he makes bets that maximize his expected utility.
Maximize Expected Utility
I define being rational as making decisions that maximize expected utility.
Imagine someone is about to roll a traditional six-sided die. You have the opportunity to bet $1 million that the die will land on 1. If you win, you get another $7 million. Otherwise, you lose everything.
The expected value of this bet is the amount of money you’d expect to make from it on average. That would be $333,333.33.
And if you don’t make this bet, greedy, filthy-rich Joe would get the chance to make it instead. So should you make this bet? If you could make it an unlimited number of times, and you’d rather the money go to you than to greedy, filthy-rich Joe, almost definitely.
But what if you have exactly $1 million, no source of income, and you’re only allowed to make this bet once? An additional $7 million sounds great, but you don’t know what you’d do with it. And you think you’d really dislike starving, being homeless, or whatever else would happen if you had no money.
You can use utility points that reflect what you fundamentally value to make this decision. You could fundamentally value anything, such as how long you’ll live, your dignity, or your happiness. Let’s pretend you fundamentally value happiness. You may decide losing $1 million would decrease your happiness by 100 hypothetical happiness points. And winning $7 million would increase your happiness by 200 points. In this case, your expected utility is -50 happiness points.
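The two calculations above (expected dollars and expected happiness points) can be sketched in a few lines of Python. The helper function is mine; the figures just restate the example:

```python
def expected_value(p_win, win_payoff, lose_payoff):
    """Probability-weighted average of the two possible outcomes."""
    return p_win * win_payoff + (1 - p_win) * lose_payoff

# In dollars: 1/6 * 7,000,000 + 5/6 * -1,000,000 ≈ $333,333.33
ev_dollars = expected_value(1 / 6, 7_000_000, -1_000_000)

# In hypothetical happiness points: 1/6 * 200 + 5/6 * -100 = -50
ev_happiness = expected_value(1 / 6, 200, -100)

print(round(ev_dollars, 2))    # 333333.33
print(round(ev_happiness, 2))  # -50.0
```

The same formula gives opposite recommendations depending on the units: the bet is positive in expected dollars but negative in expected happiness points, which is the whole point of thinking in utility rather than money.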
As someone who thinks I’d really dislike starving, would I have made this bet before I’d read about maximizing expected utility? I don’t think so.
Plus, it’s tough to precisely quantify how I think my utility would change. I rarely write out utility point calculations like I just did. I’m still doing the math implicitly.
So was Holden Karnofsky’s article explaining expected utility valuable to me? I think it helps me more consciously think about what gives me utility.
Values And Probabilities Answer Everything
The above utility calculation compared how much you valued losing $1 million versus gaining $7 million. Winning $7 million, +200 happiness points, was seen as twice as beneficial as losing $1 million, -100 happiness points. But since there was an 83.33% (⅚) probability of losing $1 million, that outweighed the 16.67% (⅙) probability of winning $7 million.
Karnofsky discusses the idea that if people directly stated their values (i.e., what they care about) and probabilities (i.e., their odds something is true), they’d always understand why they disagree with someone. This makes sense to me. Since, as Karnofsky says, people don’t always communicate clearly, I’d say almost every disagreement comes down to at least one of values, probabilities, or semantics (i.e., what people mean by what they say).
I don’t think there’s a foolproof way to resolve any disagreement. Especially one over values. Who am I to tell you that you’d actually gain 300 happiness points from winning $7 million?
But hopefully, talking things over can resolve semantic debates. And disagreements over probabilities can be tested.
The Importance Of The Experimental Method
Harry was breathing in short gasps. His voice came out choked. "You can't DO that!"
"It's only a Transfiguration," said Professor McGonagall. "An Animagus transformation, to be exact."
"You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?"
Professor McGonagall's lips were twitching harder now. "Magic."
"Magic isn't enough to do that! You'd have to be a god!"
And then Harry collected himself. He thought “The March of Reason would just have to start over, that was all; they still had the experimental method and that was the important thing.”
I learned about the scientific method in elementary school. But I never appreciated it until I read this passage. Even if everything I think I know turns out to be wrong, I can always find the truth through experimentation.
But I think I ultimately believe what I want to believe.
Confirmation Bias Is Everywhere
I’d heard of confirmation bias before I’d heard of the rationalist community. I would’ve defined it as believing what you want to believe. And I’d still use that definition. But I’d narrowly thought of confirmation bias as a reason I’d look for evidence to justify why I’d be successful (e.g., why my startup will succeed) or why I should feel intelligent (e.g., why my political opinion is correct).
I appreciate how The Scout Mindset showed me that confirmation bias could also lead me to believe something negative about myself. For example, I remember the first time, on April 20, 2020, that I was exposed to someone I thought might have coronavirus. My gut instinct was that if my roommate had covid, he’d probably already spread it to me and our other roommates. So there was nothing I could do. I believe I specifically said something like “we’re all fucked” or “screwed.”
My assumption that my roommate could’ve already given me covid still feels reasonable. It was convenient and incorrect to assume there was nothing I could do. I could’ve started wearing a mask, socially distanced, and encouraged my roommates to do the same. I could’ve left my house. My personal coronavirus risk tolerance has changed over time. The point is that I didn’t have to assume I was fucked or screwed. I had a choice.
Similarly, from 2016 to 2021, I generally felt 100% certain that I should focus my self-improvement efforts on becoming a better software engineer. After all, it was too late to switch careers. That belief motivated me to code.
However, I shouldn’t have been so certain. I didn’t have to code. I told myself that so I could believe I didn’t have a decision to make. That made me happy immediately. Yes, thinking about what to do can be stressful. But it’s often worth it.
I don’t think the rationalists have fundamentally reshaped me. Before finding the rationalist community, I wouldn’t have suggested being irrational, ignoring the experimental method, or succumbing to confirmation bias. The rationalists gained my trust by telling me things that I already believed, or was open to believing, in ways that helped me with self-introspection.
Granted, I suppose any cult member believes what they’re open to believing. And my trust in EAs/rationalists has shaped my opinions on important issues. I just told my roommate that I leaned against funding gain-of-function research. Until writing that sentence, I thought the EA/rationalist stance was against it, while the “expert,” Anthony Fauci, currently supported it. I now see he hasn’t publicly stated that he supports gain-of-function research since at least 2018.
Most significantly, I lean towards believing the EAs/rationalists are right that there’s at least a 1% chance that an artificial intelligence will cause human extinction over the next century!
But I don’t believe what I said about gain-of-function research or AI as much as I believe things I actually understand.
Ultimately, I think limiting some of my search queries to EA/rationalist websites is a statement about Google’s competence. I believe EAs/rationalists are generally rational and have values similar to mine. So I’d rather search site:lesswrong.com exercise than think up a search query that helps Google understand my values, such as efficient exercise to maximize longevity and mental health.
However, while searching EA/rationalist sites is sometimes a useful heuristic, the rationalists have helped me appreciate how easy it is to believe that the convenient answer is the true one. If a question is important enough, I’ll do whatever it takes to find the answer.
(cross-posted from my blog: https://utilitymonster.substack.com/p/brainwashed)
This post explains how I got into EA. And I found the rationalist community through EA. My impression is that most rationalists are also members of the EA community. So a lot of my trust in the EA community carried over to the rationalist community.
Throughout this post, I use whichever term out of EA or rationalist that feels more appropriate. Or I use both terms.
In case this would’ve been considered plagiarism, I noticed that Chapter 8 of The Scout Mindset (pg 105) starts with a similar story and uses the term “theater bug.”
Although, I could’ve misinterpreted the intended spirit of those statements.
It doesn’t say what Musk specifically means by Tesla would succeed. And all the comments where Musk says this are after Tesla and SpaceX have had some success (i.e., they’re worth billions). The earliest statement cited in The Scout Mindset where Musk says he thought one of them would fail is from 2014. I lean towards believing that Musk isn’t trying to appear humble. My impression is that Tesla and SpaceX both nearly went bankrupt in 2008. I imagine he thought it wouldn’t be practical to say he thought they’d fail to the public before they were successful enough.
Likewise, the earliest statement I could find where Jeff Bezos said he thought Amazon had a 30% chance of success was in 1999, after Amazon was already a public company.
Page 113 of The Scout Mindset may have inspired me to use this example.
The die is equally likely to land on 1, 2, 3, 4, 5, and 6. You’d win on 1, one of the six possible outcomes, and earn an additional $7 million. 1/6 × 7,000,000 = 1,166,666.67. And you’d lose on 2, 3, 4, 5, and 6, five of the six possible outcomes. 5/6 × -1,000,000 = -833,333.33. 1,166,666.67 + -833,333.33 = $333,333.33.
Once again, you have a 1/6 chance of winning the bet. In that case, you’d get 200 utility points. And you have a 5/6 chance of losing the bet and losing 100 utility points. 1/6 * 200 + 5/6 * -100 = -50.
Galef uses the term “motivated reasoning” instead of confirmation bias. In the book (pg 6), she says they mean the same thing.
I imagine The Scout Mindset isn’t the only resource which demonstrates that confirmation bias could lead someone to believe something negative about themself. But, as of May 16, 2022, I could only find one example of that from the first page of Google results when I search “confirmation bias” or “motivated reasoning.” That example is how someone who believes the world will end will only believe the end has been delayed when an apocalypse doesn’t happen. I don’t know if reading that example earlier would’ve helped me recognize scenarios like the ones about coronavirus and software engineering above. (And the world might end at some point.)
The Google results I looked at for confirmation bias are: Wikipedia, Encyclopedia Britannica, VeryWellMind, the abstract of this article, Farnam Street, SimplyPsychology, The Decision Lab, Psychology Today, and Investopedia. For motivated reasoning, I looked at Wikipedia, Psychology Today, Discover Magazine, Oxford Bibliographies, iResearchNet, Forbes, APA, this paper's abstract, and this paper's summary. I didn’t watch any videos from the results.
And I vaguely remember reading that con artists initially tell you stuff that’s true to earn your trust. Plus, there have been large charity scams before. Although, the entirety of EA being a scam would have to be a massive conspiracy. It’s more likely that some organizations/initiatives associated with the EA and/or rationalist communities are deemed ineffective (e.g., Raising For Effective Giving, No Lean Season, more examples here and here), or have serious issues (e.g., Leverage Research, The Monastic Academy). I also don’t know how I’d measure the effectiveness of many organizations focused on preventing existential risks, and I’d understand if someone felt EA nonprofits were spending too much on overhead. I’d bet some EA nonprofits (e.g., Redwood Research, Open Philanthropy) pay their average employee over six figures. There’s no formal definition of what makes an organization an EA/rationalist organization.
The linked article’s author, Kelsey Piper, is a member of the EA/rationalist communities.
He wrote an op-ed calling for gain-of-function research in 2011. And he apparently praised the lifting of the U.S. ban on gain-of-function research in 2018. I haven’t watched the video posted citing that claim. I think I had the impression Fauci clearly currently supports gain-of-function research because I didn’t notice the date on a screenshot of his 2011 op-ed in this article.
And Google may be promoting the values of the Fellowship of Friends.
If I just search “exercise” on Google, I get articles that state general reasons why exercise is good or exercises that are good for everyone. Here’s my first page of results: Mayo Clinic, Wikipedia, Healthline, WebMD, NHS, NHS again, and Harvard Health. The only result I might go back to at some point is the Wikipedia page. It seemed fairly thorough. I didn’t look at videos, podcasts, and articles labeled as news from my results.
For example, here’s my first page of article results from googling “efficient exercise to maximize longevity and mental health”: AARP, Time, Longevity.Technology, Blue Zones, Mental Health Foundation, Medical News Today, Andrew Merle, Harvard Health, Washington Post, Amherst College. My overall takeaway was that there’s a lot of conflicting advice, and no source stood out as great. And here’s a link to LessWrong posts on exercise. This post acknowledges some of the questions I have, but doesn’t answer them. And the author’s statement, “you are now as knowledgeable as any personal trainer I've spoken with,” made me feel he was overconfident.
I also searched site:astralcodexten.substack.com exercise. And I found this comment and this comment. They were similar to this LessWrong comment. So because I cautiously trust rationalists, and because I didn’t think anything Google showed before seemed better, I’d lean towards looking to those sources if I wanted to learn more about fitness. Not that I ever expect to have much confidence that I’m exercising optimally.
There isn’t much on LessWrong about cold showers or optimal tooth brushing.