Sam Harris takes Giving What We Can pledge for himself and for his meditation company "Waking Up"
Harris references MacAskill and Ord as having been central to his thinking and talks about effective altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profits from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I'd guess over 1 million overall.
Harris is a Marmite figure: in my experience, people either love him or hate him.
It is good that he has done this.
News-wise, it seems to me this is most likely to affect the behavior of his listeners, who are likely to be well-disposed to him. It is a significant but currently low-profile announcement, as the courses on his app will be.
I don't think I'd go spreading this around more generally: many people don't like Harris, and for those who don't, it could be easy to see EA as more of the same (callous, superior progressivism).
In the low-probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web - don't blame me, I don't make the rules), I would urge caution for EA speakers who might be pulled into polarising discussions that leave some groups feeling EA ideas are "not for them".
This seems quite likely given EA Survey data: amongst people who indicated they first heard of EA from a podcast and specified which one, Sam Harris's podcast strongly dominated all others.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
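To make the lower-bound reasoning concrete, here is a minimal sketch with purely hypothetical counts (the real survey figures are not reproduced here; the variable values are placeholders for illustration only):

```python
# Hypothetical survey counts, for illustration only - not real EA Survey data.
selected_podcast = 100        # respondents who selected "Podcast"
mentioned_harris = 40         # of those, comments naming Sam Harris specifically
selected_future_perfect = 10  # respondents who selected Vox's Future Perfect

# Lower bound: only comments identifiably naming Harris count toward him.
lower_bound_ratio = mentioned_harris / selected_future_perfect
print(f"Harris at least {lower_bound_ratio:.0f}x Future Perfect")

# The true ratio is likely higher, since many "Podcast" respondents left
# no comment that could be matched to a specific podcast.
identifiable_rate = mentioned_harris / selected_podcast
print(f"Only {identifiable_rate:.0%} of podcast respondents were identifiable")
```

The point of the sketch is just that the 4x figure is a floor: the numerator counts only identifiable mentions, while the denominator is a complete category count.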
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I'm not sure why MacAskill so heavily dominated the Podcast category while Singer heavily dominated the TED Talk category.
I strongly dislike the following sentence on effectivealtruism.org:
"Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on."
It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (as someone might who struggled with the movement and was looking to do research): "Rather than just doing what feels right..."
I suggest it gets changed to one of the following:
"We use evidence and careful analysis to find the very best causes to work on."
"It's great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on."
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
I also thought this when I first read that sentence on the site, but I find it difficult (as I'm sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:
"Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That's pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?"
The problem IMHO is that without the contrast, the sentiment doesn't land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it's only in contrast with the way things are typically done that the EA argument is convincing.
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned-down statement. The movement is called "effective altruism". In-groups are often worried that out-groups won't get their core differences, when generally that's all out-groups know about them.
I don't think anyone who visits that website will fail to see that effectiveness is a core feature. And we don't need to be patronising (as EAs are caricatured as being in conversations I have) in order to make known something everyone already knows.
Has anyone ever run a competition for EA related short stories?
Why would this be a good idea?

* Narratives resonate with people and have been used to convey ideas for thousands of years
* It would be low cost and fun
* With voting on this forum, there is the same risk of "bad posts" as for any other post
How could it work?

* Stories submitted under a tag on the EA Forum
* Rated by upvotes
* Max 5,000 words (I made this up; dispute it in the comments)
* If someone wants to offer a reward, there could be a prize for the highest-rated story
* If there is a lot of interest/quality, they could be collated and even published
* Since it would be measured by upvotes, a destructive story seems unlikely to be highly rated (or no more likely than any other destructive post on the forum)
Upvote if you think it's a good idea. If it gets more than 40 karma I'll write one.
tl;dr EA books have a positive externality. The response should be to subsidise them.

If EA thinks that certain books (Doing Good Better, The Precipice) have greater benefits than they seem to, it could subsidise them.
There could be an EA website with Amazon coupons for EA books, so that you can get them more cheaply if buying for a friend, or advertise the coupon to your friends to encourage them to buy the book.
From 5 minutes of research, the current best way would be for a group to buy EA books and sell them at the list price while providing coupons, as described here: https://www.passionintopaychecks.com/how-to-create-single-use-amazon-coupons-promo-codes/
Alternatively, you could just sell them at the coupon price.
I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty).
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
I like this idea and think it's worth you taking further. My initial reactions are:
Getting more EA books into people's hands seems great and worth much more per book than the cost of a book.
I don't know how much of a bottleneck the price of a book is to buying them for friends/club members. I know EA Oxford has given away many books; I've also bought several for friends (and one famous person I contacted on Instagram as a long shot, who actually replied).
I'd therefore be interested in something which aimed to establish whether making books cheaper was a better or worse idea than just encouraging people to gift them.
John Behar/TLYCS probably have good thoughts on this.
This perception-gap site would be a good format for learning and could be applied to altruism. It reframes correcting biases as a fun prediction game.
It's a site which gets you to guess what other political groups (Republicans and Democrats) think about issues.
Why is it good:
1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it.

2) It updates views in a non-patronising way. It turns out Democrats and Republicans are much less polarised than most people think (the stat they give is that people predict 50% of Republicans hold extreme views, when actually it's 30%). But rather than yelling this, or presenting an annoying listicle, it gets people's consent and teaches them something.

3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them.

4) It gives quick feedback. People learn best when given feedback close to the action. Here, people are rapidly rewarded for thoughts like "probably most of group X are more similar to me than I first thought".
What percentage of neocons want institutional reform? What % of libertarians want an end to factory farming? What % of socialists want an increase in foreign direct aid?
If you want to change people's minds, don't tell them stuff, get them to guess trustworthy values as a cutesy game.
Does anyone know people working on reforming the academic publishing process?
Coronavirus has caused journalists to look for scientific sources. There are no journal articles yet because of the lag time, so they have gone to preprint servers like bioRxiv (pronounced "bio-archive"). These servers are not peer reviewed, so some articles are of low quality. As a result, people have gone to Twitter asking for experts to review the papers.