"Hinge of History" Refuted (April Fools' Day) 2021-04-01T07:00:03.864Z
Thomas Kwa's Shortform 2020-09-23T19:25:09.159Z


Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2021-02-10T02:09:35.650Z · EA · GW

I want to skill up in pandas/numpy/data science over the next few months. Where can I find a data science project that is relevant to EA? Some rough requirements:

  • Takes between 1 and 3 months of full-time work
  • Helps me (a pretty strong CS undergrad) become fluent in pandas quickly, and maybe use some machine learning techniques I've studied in class
  • About as open-ended as a research internship
  • Feels meaningful
    • Should be important enough that I enjoy doing it, but it's okay if it has e.g. 5% as much direct benefit as the highest-impact thing I could be doing
    • I'm interested in AI safety and other long-term cause areas
  • Bonus: working with time-series data, because I'm particularly confused about how it works.

I've already looked at the top datasets on kaggle and other places, and don't feel inclined to work on them because they don't seem relevant and have probably been analyzed to death. Also, I've only taken a few ML and no data science classes, so I might not be asking the right questions.
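(Editor's aside: for the time-series confusion specifically, most of pandas' time-series machinery boils down to a datetime index plus `resample` and `rolling`. A minimal sketch on synthetic data -- every number here is made up for illustration:)

```python
import numpy as np
import pandas as pd

# Synthetic daily series -- a stand-in for any real dataset.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=365, freq="D")
df = pd.DataFrame({"value": rng.normal(10, 2, size=365)}, index=dates)

# Two core time-series operations:
weekly_mean = df["value"].resample("W").mean()     # aggregate to calendar weeks
rolling_7d = df["value"].rolling(window=7).mean()  # 7-day moving average

print(len(weekly_mean))                 # ~53 weekly bins in one year
print(int(rolling_7d.isna().sum()))     # first 6 entries lack a full window
```

Once data has a `DatetimeIndex`, slicing by date strings (`df["2020-03"]`) and joining series at different frequencies also work out of the box.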

Comment by Thomas Kwa (tkwa) on Spirituality & Science Policy and Infrastructure · 2020-12-22T04:03:01.770Z · EA · GW

I downvoted this because it contains large claims which are vague and probably false, and also because I don't see any relevance to the EA movement. To single one out, "The skeptical movement seems to be involved to some extent with regards to its branding and possibly research interference" sounds like how pseudoscientists claim that controlled experiments interfere with their supernatural powers. I'll reverse this vote if there's evidence I'm wrong.

There are efforts to promote geographic diversity in EA, as well as translate and integrate EA ideas to other cultures and do cross-cultural moral research. Furthering any one of these would reduce the effect of any Eurocentric bias the EA community has inherited, and I think they're all better places to look than alternative medicine.

Comment by Thomas Kwa (tkwa) on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-12T03:33:45.027Z · EA · GW

Fixed, thanks.

Comment by Thomas Kwa (tkwa) on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-11T21:50:15.633Z · EA · GW

Some EA-aligned charities listed (use the search function in the bottom right corner):

  • Center for Long-Term Risk (listed as Effective Altruism Foundation)
  • Founders Pledge
  • 80,000 Hours
  • Center for Effective Altruism
  • Future of Humanity Institute
  • Machine Intelligence Research Institute
  • AMF
  • GiveDirectly
  • Animal Ethics

I'm probably missing a ton of global health and animal charities, because I don't know them.

Comment by Thomas Kwa (tkwa) on andrewleeke's Shortform · 2020-11-21T22:19:55.858Z · EA · GW

You might find it helpful to look at this ethnography of an EA group. Also relevant is this analysis of the Big Five personality traits of respondents to the Rethink Charity community survey. It has statistical flaws, but one takeaway is that most EAs are high in openness. Finally, there's this Global Optimum Podcast episode on the personality of EAs.

Justification and signaling explanations don't seem especially compelling to me because in some sense, everything is justification and signaling. Also, I'm not sure if you're hinting at this, but it's unlikely that you'll be diagnosed with a mental illness just for being drawn to / believing in EA, unless it significantly impedes your everyday functioning. Since I'm not a therapist, I don't think I can comment further on what a therapist would say.

Comment by Thomas Kwa (tkwa) on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-11-21T22:12:47.241Z · EA · GW

The link to the survey data is now broken.

Comment by Thomas Kwa (tkwa) on Please Take the 2020 EA Survey · 2020-11-13T02:23:38.974Z · EA · GW

To add to that, if there are concerns about data being de-anonymized, there are statistical techniques to mitigate it.
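(Editor's aside: one such technique is differential privacy, which adds calibrated noise to released aggregates so no individual respondent's answer can be reverse-engineered. A toy sketch -- the numbers are illustrative, not from any actual survey:)

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_mean(values, lower, upper, epsilon):
    """Release a differentially private mean of bounded values.

    Clamping to [lower, upper] bounds any one respondent's influence;
    Laplace noise with scale sensitivity/epsilon then gives epsilon-DP.
    """
    values = np.clip(values, lower, upper)
    true_mean = values.mean()
    sensitivity = (upper - lower) / len(values)  # max change from one respondent
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Toy example: ages of 1000 hypothetical survey respondents.
ages = rng.integers(18, 80, size=1000)
private_mean = dp_mean(ages, lower=18, upper=80, epsilon=1.0)
print(round(private_mean, 2))  # close to the true mean, but deniable
```

With 1000 respondents the noise is tiny (scale ≈ 0.06 here), so utility of the aggregate is preserved while any single row stays protected.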

Comment by Thomas Kwa (tkwa) on What are some quick, easy, repeatable ways to do good? · 2020-11-13T00:25:28.596Z · EA · GW

This is a bit of a frame challenge, but I think it's OK to feed stray cats. Most of us are built to empathize with the people around us, not with the total sum of global utility, so it's hard to beat the emotional high of a simple random act of kindness. (Conversely, for most people, the vast majority of good you can do comes from your career choice, and it's hard to approach this with small-scale actions.) So my advice is to pick someone close to you, do something nice for them, and not worry about the magnitude of the altruistic payoff. You could also reflect on the positive long-term impact of some action (mentally follow the chain all the way from "finish project" -> "gain career capital" -> "get hired by <EA org>" -> "be able to work on <cause area>" -> reduce suffering) and use that to motivate yourself, but that only works for some people.

This is a classic idea in EA circles going back to 2009, and it absolutely still applies.

Comment by Thomas Kwa (tkwa) on Desperation Hamster Wheels · 2020-11-01T19:36:46.818Z · EA · GW

As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.

One possible reason is completely rational: if we're estimating expected value of an intervention with a 1% chance to be highly valuable, then 99% of the time we realize the moonshot won't work and revise the expected value downward.
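(Editor's aside: the arithmetic behind this, with made-up numbers for a stylized moonshot:)

```python
# A moonshot: 1% chance of creating 1000 units of value, else 0.
p_success = 0.01
value_if_success = 1000

ev_before = p_success * value_if_success  # prior expected value: 10

# Further investigation usually reveals which branch we're in:
# 99% of the time we learn the moonshot won't work (EV drops to 0),
# 1% of the time we learn it will (EV jumps to 1000).
ev_if_fails = 0.0
ev_if_succeeds = float(value_if_success)

# The update is downward 99% of the time, yet the average
# post-investigation EV equals the prior EV, as conservation
# of expected evidence requires.
avg_posterior_ev = 0.99 * ev_if_fails + 0.01 * ev_if_succeeds
print(ev_before, avg_posterior_ev)
```

So frequent downward revisions aren't evidence of bias by themselves; bias shows up only if the revisions are downward *on average*.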

Comment by Thomas Kwa (tkwa) on When you shouldn't use EA jargon and how to avoid it · 2020-10-26T18:59:50.079Z · EA · GW

Sometimes I catch myself using jargon even knowing it's a bad communication strategy, because I just like feeling clever, or signaling that I'm an insider, or obscuring my ideas so people can't challenge them. OP says these are "naughty reasons to use jargon" (slide 9), but I think that in some cases they fulfill some real social need for people, and if these motivations are still there, we need better ways to satisfy them.

Some ideas:

  • Instead of associating jargon with cleverness, mentally reframe things. Someone who uses jargon isn't necessarily clever, especially if they're misusing it. Feynman said "If you can’t explain something in simple terms, you don’t understand it", so pat yourself on the back for translating something into straightforward language when appropriate.
  • Instead of using jargon to feel connected to the in-group, build a group identity that doesn't rely on jargon. I'm not really sure how to do this.
  • Instead of using jargon to prevent people from understanding your ideas well enough to challenge them, keep your identity small so you don't feel personally attacked when challenged. When you have low confidence in a belief, qualify it with an "I think" or "I have a lot of confusing intuitions here, but..."
    • Perhaps also doing exposure therapy to practice losing debates without feeling like you've been slapped down
    • This is actually one of the reasons I like the "epistemic status" header; it helps me qualify my statements much more efficiently. From now on I'll be dropping the "epistemic status" terminology but keeping the header.

I'm sure there are more and better ideas in this direction.

Comment by Thomas Kwa (tkwa) on Making More Sequences · 2020-10-20T23:58:37.503Z · EA · GW

So far, I’ve produced one of what I hope will be several sections of the Handbook. The topic is “Motivation”: What are the major ideas and principles of effective altruism, and how do they inspire people to take action? (You could also think of this as a general introduction to EA.)

If this material is received well enough, I’ll keep releasing additional material on a variety of topics, following a similar format. If people aren’t satisfied with the content, style, or format, I may switch things up in the future.

Comment by Thomas Kwa (tkwa) on Making More Sequences · 2020-10-20T18:04:47.166Z · EA · GW

Can someone create an “introduction to EA” sequence? I would love to do it, but I think that this should be done by an actual mod or someone from an official EA institution.


The EA handbook is being turned into a sequence.

Comment by Thomas Kwa (tkwa) on Which is better for animal welfare, terraforming planets or space habitats? And by how much? · 2020-10-18T17:44:49.277Z · EA · GW

I may write up an answer because the question is interesting, but I think the premise of this question-- that we have a meaningful choice between planets and habitats-- is unlikely to hold.

1. Assuming space colonization and terraforming get here before AI or other transformative technologies like whole brain emulation, it seems very unlikely that the terraformed planet will be "unmanaged wilderness". First, the Earth is already over 35% of the land area of the inner planets, so it's not like there will be a large amount of free space. Second, without the benefit of natural water and nutrient sources, not to mention hundreds of thousands of years of evolution to reach a stable equilibrium, wilderness will necessarily be managed to maintain ecosystem balances.

2. In the long run, planets are extremely inefficient as space colonies. It would take just a few years to disassemble Mercury into solar panels and habitats, creating thousands of times as much economic value as anything that could exist on the planet. Asteroids don't even need to be lifted out of a gravity well to be turned into habitats. So economic incentives will be strongly against planets, making the question moot. (Unless we turn them into planet-sized computers or something, which would again be out of scope of this question.)

Comment by Thomas Kwa (tkwa) on Getting money out of politics and into charity · 2020-10-18T01:12:27.936Z · EA · GW

An idea very similar to this was mentioned on the EA forum in 2015.

Comment by Thomas Kwa (tkwa) on Objections to Value-Alignment between Effective Altruists · 2020-10-17T04:00:11.284Z · EA · GW

A non-exhaustive subset of admired individuals I believe includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnowsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

Although various metrics do show that the EA community has room to grow in diversity, I don't think the fandom culture has nearly that much gender imbalance. Some EA women who consistently produce very high-quality content include Arden Koehler, Anna Salamon, Kelsey Piper, and Elizabeth Van Nostrand. I have also heard others revere Julia Wise, Michelle Hutchinson, and Julia Galef, whose writing I don't follow. I think that among EAs, I have only slightly below median tendency to revere men over women, and these women EA thinkers feel about as "intimidating" or "important" to me as the men on your list.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2020-10-17T03:28:03.805Z · EA · GW

Hmm, that's what I suspected. Maybe it's possible to estimate anyway though-- quick and dirty method would be to identify the most effective interventions a large charity has, estimate that the rest follow a power law, take the average and add error bars upwards for the possibility we underestimated an intervention's effectiveness?
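(Editor's aside: a sketch of that quick-and-dirty method, where every number -- the top intervention's effectiveness, the program count, the power-law exponent -- is a placeholder guess:)

```python
import numpy as np

# Suppose the charity's single most effective intervention delivers
# 50 QALYs per $1000, and assume its other programs fall off from
# that as a power law in rank (exponent is a guess).
top_effectiveness = 50.0
n_programs = 20
alpha = 1.5  # assumed power-law exponent

ranks = np.arange(1, n_programs + 1)
effectiveness = top_effectiveness / ranks**alpha  # rank-k program

# Dollar-weighted average, if spending is spread evenly across programs.
point_estimate = effectiveness.mean()

# Error bars upward: maybe we underestimated the top intervention 2x.
optimistic = (2 * effectiveness).mean()

print(round(point_estimate, 1), round(optimistic, 1))
```

The point estimate is dominated by the top few programs (a generic feature of power laws with alpha > 1), which is why identifying the best intervention carefully matters more than surveying the long tail.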

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2020-10-16T19:46:00.720Z · EA · GW

Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.

Comment by Thomas Kwa (tkwa) on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T03:40:36.339Z · EA · GW

First off, welcome to the EA community! If you haven't already, you might want to read the Introduction to Effective Altruism. I don't have time to write up a full answer, so here are a few of my thoughts.

Usually in the effective altruism community, we are cause-neutral; that is, we try to address whichever charitable cause area maximizes impact. While it's intuitively compelling that the most cost-effective effort is to eliminate the root cause of a problem, this could be a suboptimal choice for a few reasons.

  • Most things have multiple causes, and it's not obvious which one to spend the most resources on without an in-depth analysis; one could just as easily say that the root cause of poverty-related problems is a lack of caring about the poor, or inability to coordinate to fix large problems, or the high cost of basic necessities like medicine and clean water.
  • Even if systemic change would fix wealth inequality, actually finding and implementing such change could be difficult or expensive enough that it's more impactful to address the needs of the extreme poor first.
  • It could be tractable to research, say, government structures that incentivize redistribution of wealth if you have a political science PhD, but there might be no good way for the average person to spend money on the cause area.

I haven't looked in depth at the arguments for systemic change being cost-effective, partly because global health isn't my specialty. If you have a strong argument for it that isn't already addressed in a literature review, I encourage posting it here as an article or shortform post.

Comment by Thomas Kwa (tkwa) on What types of charity will be the most effective for creating a more equal society? · 2020-10-12T19:27:29.227Z · EA · GW

In the interest of being helpful and welcoming to this new user, could any downvoters give feedback or explain their votes?

Edit: Someone is trying to join, or at least interface with, the EA community by asking a question that we can answer. The question is well-formed, represents an hour or more of thought, and addresses a popular idea among the altruistically-minded. The only concrete thing I don't like about this post is that the OP is slightly rude in saying "Please, if you disagree with me, carry your precious opinion elsewhere."

I think that people are downvoting this because the OP is not impartial, and has a preferred way to improve the world. I think that in general, automatically downvoting posts by such people is wrong, and if we have good epistemic hygiene, the benefits (being more welcoming and intellectually diverse, helping future people understand EA by addressing popular misconceptions and mistakes) by engaging with the question will far outweigh risks of dilution. This is because dilution only becomes a big problem when people start to misunderstand or misappropriate EA ideas, and we address such misunderstandings precisely through high-fidelity communication. Engaging here is one of the highest-fidelity forms of text-based communication possible.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2020-10-11T04:33:25.441Z · EA · GW

To clarify, you mean a donor-advised fund I have an account with (say Fidelity, Vanguard, etc.) which I manage myself?

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2020-10-10T19:01:57.021Z · EA · GW

Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.

I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.

edit: DAF = donor-advised fund

Comment by Thomas Kwa (tkwa) on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T17:32:25.975Z · EA · GW

Thanks for the elaboration! I'm just glad to hear that the researchers didn't make any obvious mistakes.

Comment by Thomas Kwa (tkwa) on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T00:18:41.996Z · EA · GW

Epistemic status: have not read the paper

The conclusion seems reasonable, but I have some concerns about taking this at face value. The large number of dependent variables also makes me a bit skeptical. How do we know they weren't p-hacking by, say, choosing the best 14 of 25 possible dependent variables? More importantly, it doesn't seem to robustly establish causation. What if Latin America, the US and Africa have worse outcomes due to lack of trade or something?

Furthermore no subsequent article (afaik) has found evidence supporting presidentialism.

How many such articles have there been?

Comment by Thomas Kwa (tkwa) on [Linkpost] Some Thoughts on Effective Altruism · 2020-10-09T00:02:13.327Z · EA · GW

Two subcategories of idea 3 that I see, and my steelman of each:

3a. To maximize good, it's incorrect to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or, most of the QALYs that we can create result from other difficult-to-quantitatively-maximize things like ripple effects from others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern.

3b. "Good" cannot be quantified even in theory, except in the nitpicky sense that mathematically, an agent with coherent preferences acts as if it's maximizing expected utility. Such a utility function is meaningless. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only if you have a certain psychological framing. Even though this doesn't make sense, the decisions are still morally correct.

Comment by Thomas Kwa (tkwa) on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-08T18:22:49.809Z · EA · GW

This answer would be strengthened by one or two examples of his careful thinking, or especially by a counterpoint to the claim that DxE uses psychological manipulation techniques on its members.

Comment by Thomas Kwa (tkwa) on Open and Welcome Thread: October 2020 · 2020-10-06T18:57:30.076Z · EA · GW

I strong-upvote when I feel like my comment is underappreciated, and don't think of it as too different from strong-upvoting someone else's comment. The existence of the strong-upvote already allows someone to strong-upvote whatever they want, which doesn't seem to be a problem.

Comment by Thomas Kwa (tkwa) on Denise_Melchin's Shortform · 2020-10-02T01:27:44.977Z · EA · GW

I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.

Comment by Thomas Kwa (tkwa) on How do you decide between upvoting and strong upvoting? · 2020-09-25T02:35:30.684Z · EA · GW

Karma should be awarded when a post or comment is high-quality, especially when it's hard for others to notice the high quality. So I strong-upvote a comment when it has an important non-obvious insight and I needed to think for a while before understanding it.

Comment by Thomas Kwa (tkwa) on Ramiro's Shortform · 2020-09-25T02:12:38.061Z · EA · GW

Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn't responded yet. Here's the email I sent him:

Someone posted an article you co-authored in 2018 in the Stanford Arete Fellowship mentors group, and the conclusion that wild chimps had a higher welfare score than humans in India seemed off to me. I had the intuition that chimps can control their environment less well than human hunter-gatherers, plus have a less egalitarian social structure, plus lack humans' huge amount of food infrastructure. This seemed like it could reveal either a surprising truth, or a methodological flaw or biases in the evaluators; I read through the full report and have some thoughts which I hope are constructive.
- The way humans are compared to non-humans seems too superficial. I think 6 points to humans in India vs 9 points in wild chimpanzees based on the high level of diagnosed disability among people in India is misleading, because we've spent billions more on diagnosing human diseases than chimps.
- Giving 0 points to humans in India for thirst/hunger/malnutrition, while chimps get 11, seems absurd for similar reasons. If we put as much effort into the diet of chimps as in the diets of wealthy humans to get a true reference point for health, I wouldn't be surprised if more than 15% of chimps were considered malnourished. Also, the untreated drinking water consumed in India is used to support this rating, but though untreated water causes harm through disease, it shouldn't be in the "thirst/hunger/malnutrition" category. [name of mentor] from the chat sums this up as there not being a 'wealthy industrialized chimps' group to contrast with.
I'm wondering if you see these as important criticisms. Do you still endorse the overall results of the report enough that you think we should share it with mentees, and if so, should we add caveats?

Comment by Thomas Kwa (tkwa) on Effective strategy and an overlooked area of research? · 2020-09-25T02:01:04.284Z · EA · GW

Here are my thoughts, which may sound overly critical, but are an honest attempt to communicate my ideas clearly.

When I start reading, I immediately notice two red flags:

  • The argument is formatted as a long manifesto by someone without a known track record of good epistemics. The manifesto claims to solve global cooperation, something many competent people have tried hard to solve.
  • The idea of a type of transformative knowledge that causes people to suddenly ignore their current incentives and start cooperating sounds fantastical.

Because of these red flags, I decide that the claim is extraordinary and you need to provide extraordinary evidence. From reading further, I notice further problems. To be clear, I don't think patching these problems will save the thesis: I would still be skeptical due to the prior implausibility and lack of a clear, plausible plan for increasing the world's empathy levels 10%.

  • Aligning everyone's beliefs won't solve conflict; you need to fix structural problems too.
  • If you could communicate obvious true beliefs and get people to internalize them properly, everyone would be an EA. A general method of communicating non-obvious true beliefs about the nature of reality to people, and getting them to act on it, sounds implausible.
  • You say "At some critical point a positive feedback loop will emerge so that every human becomes supersapient over time." If this is the natural result of some small critical mass of people becoming supersapient, why has Buddhism not taken over the world with its millions of enlightened people over thousands of years of existence?

The version of this idea that is scaled back to be plausible to me sounds something like "Scientists should study the benefits of meditation more; with a LOT of funding and rigor this could possibly get past 'does meditation work' to identifying specific benefits and best practices. People should also practice meditation and, if they can safely, experiment with psychedelics, to better understand themselves and possibly become more rational and empathic." That's something I believe, but interventions may not be cost-effective enough to be an EA cause area. (There are EA-adjacent efforts to improve mental health in the developing world, but not many stand out as highly leveraged.)

Comment by Thomas Kwa (tkwa) on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-25T00:27:29.169Z · EA · GW

Can you give an example of such a conversation, as well as the thought process towards bringing them up? I hear about conversational principles like these, but I don't know how to get from "vague feeling that something is wrong with the conversation" to "I think you're confusing me with excess information".

Comment by Thomas Kwa (tkwa) on Halffull's Shortform · 2020-09-23T23:16:19.744Z · EA · GW

It's not on the 80k list of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough-- it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.

Comment by Thomas Kwa (tkwa) on Buck's Shortform · 2020-09-23T22:46:27.416Z · EA · GW

I've upvoted some low quality criticism of EA. Some of this is due to emotional biases or whatever, but a reason I still endorse is that I haven't read strong responses to some obvious criticism.

Example: I currently believe that an important reason EA is slightly uninclusive and moderately undiverse is because EA community-building was targeted at people with a lot of power as a necessary strategic move. Rich people, top university students, etc. It feels like it's worked, but I haven't seen a good writeup of the effects of this.

I think the same low-quality criticisms keep popping up because there's no quick rebuttal. I wish there were a post of "fallacies about problems with EA" that one could quickly link to.

Comment by Thomas Kwa (tkwa) on Denise_Melchin's Shortform · 2020-09-23T22:00:55.340Z · EA · GW

I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be taking beliefs too seriously when those beliefs only work in an environment of epistemic learned helplessness.

One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another is checking that they're consistent with known facts, e.g. the lack of evidence for supernatural entities, or the best knowledge on the conscious experience of animals.

Comment by Thomas Kwa (tkwa) on Thomas Kwa's Shortform · 2020-09-23T19:25:09.517Z · EA · GW

I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:

  • There is no single "conventional morality", and it seems very difficult to compile a list of what every human culture thinks of as good, and not obvious how one would form a "weighted average" between these.
  • Most people don't think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. cost of saving lives in the developing world) or be absurd (placing higher moral weight on beings that are physically closer to you).
  • Human cultures have gone through millennia of cultural evolution, such that values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says "each age gets the thought it needs".

However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with philosophers to cross-reference between these while fixing inconsistencies and removing values that seem to have an "unfair" competitive edge in the battle between ideas (whatever that means!).

The potential payoff seems huge, as it would expand the basis of EA moral reasoning from the intuitions of a tiny fraction of humanity to that of thousands of human cultures, and allow us to be more confident about our actions. Is there a reason this isn't being done? Is it just too expensive?

Comment by Thomas Kwa (tkwa) on Announcing the EA donation swap system · 2020-09-17T03:29:23.389Z · EA · GW

Is this still actively in use in September 2020?

Comment by Thomas Kwa (tkwa) on How do i know a charity is actually effective · 2020-07-17T20:29:56.245Z · EA · GW

The person who broke down in tears during an interview is actually Derek Parfit, also an effective altruist. Source:

As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract.

Comment by Thomas Kwa (tkwa) on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-01T04:02:29.086Z · EA · GW

Say an expert (or a prediction market median) is much stronger than you, but you have a strong inside view. What's your thought process for validating it? What's your thought process if you choose to defer?

Comment by Thomas Kwa (tkwa) on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-28T23:56:36.161Z · EA · GW

I thought this talk was brilliant, not least in the specific terms you mentioned. I often talk to my EA friends about "counterfactual impact", leverage, and "comparative advantage" and often have a hard time switching gears to talk to non-EAs, but I can imagine this slight shift in terminology to "cause-and-effect evidence", leverage, and "personal advantage" to hit close to the core ideas and sound much friendlier. Most of the talk was immediately actionable as well. Thank you for making it.

Comment by Thomas Kwa (tkwa) on Asymmetric altruism · 2020-06-27T19:32:27.807Z · EA · GW

I notice that I meant to link to this different episode on the non-identity problem but found it didn't really fit and rationalized that away, so my comment may not be relevant.

Comment by Thomas Kwa (tkwa) on EA Forum feature suggestion thread · 2020-06-27T18:34:02.034Z · EA · GW

I think this could be more useful for people who are slightly downvoted, or whose posts just don't get much attention. I remember a few recent highly-downvoted posts and comments (below -10 or so), and all of them seem to have well-written feedback; sometimes more thought was put into the feedback than the original post (not necessarily a bad thing, but going even further could be a massive waste of energy).

People who provide feedback also have to want to engage. On Stack Exchange, closing a question requires a reason, but mods and high-rep users are known to close poorly-written questions for vague reasons without providing much feedback. An even worse failure mode I see is if users are disincentivized from downvoting because they don't want to be added to the feedback list.

Comment by Thomas Kwa (tkwa) on Asymmetric altruism · 2020-06-27T18:18:03.872Z · EA · GW

Have you heard the 80,000 Hours podcast episode with Will MacAskill? The first hour has a decent exploration of asymmetries and similar deontological concerns, and MacAskill's paralysis argument is a fairly good argument against them.

Comment by Thomas Kwa (tkwa) on EA could benefit from a general-purpose nonprofit entity that offers donor-advised funds and fiscal sponsorship · 2020-06-27T02:52:26.177Z · EA · GW

Related: 80k podcast on patient philanthropy.

Comment by Thomas Kwa (tkwa) on Matt_Lerner's Shortform · 2020-06-17T00:12:09.450Z · EA · GW

Proportional representation?

Comment by Thomas Kwa (tkwa) on Are All Existential Risks Equivalent to a Lack of General Collective Intelligence? And is GCI therefore the Most Important Human Innovation in the History and Immediate Future of Mankind? · 2020-06-16T18:41:15.256Z · EA · GW

This type of content might be more suited to LessWrong, and you might get better feedback/engagement there.

Comment by Thomas Kwa (tkwa) on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-16T18:34:26.142Z · EA · GW

Glad I could help. By the way, it came to my attention that GiveWell is investigating the cause area of providing glasses in developing countries.

This is promising, but I still endorse the general stance that B1G1-type programs have obstacles to overcome to reach effectiveness.

Comment by Thomas Kwa (tkwa) on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-09T01:25:07.937Z · EA · GW

It looks like you're fairly new to effective altruism, so you might want to see my other comment or read the EA Handbook for more of the reasoning behind these answers.

1) I'm not affiliated with the CEA (nor are most of the people on this forum), but there are certainly forms of philanthropy more in line with the principles of effective altruism.

2) Effectiveness is often estimated as importance x neglectedness x tractability. There are good reasons for this: when correctly formalized, the product estimates the good done per additional unit of resources; see below. I think most consumers are better off either buying from socially responsible non-B1G1 companies, or buying from any company and donating the money saved, either to GiveWell top charities (which rate much better on importance and neglectedness) or to high-impact existential risk, farm animal welfare, or wild animal welfare causes, which can rate even better depending on your value system.
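As a sketch of how the framework multiplies out (all numbers below are my own made-up illustrations, not real estimates): importance can be read as good done if the whole problem were solved, tractability as the fraction solved per doubling of resources, and neglectedness as roughly the inverse of current resources, so the units cancel to good done per marginal dollar.

```python
# Toy importance x tractability x neglectedness comparison.
# All inputs are hypothetical; real cause prioritization requires research.
def itn_score(importance, tractability, neglectedness):
    # importance:    good done if the entire problem were solved
    # tractability:  fraction of the problem solved by doubling resources
    # neglectedness: 1 / (current resources), so marginal dollars go further
    return importance * tractability * neglectedness

causes = {
    "hypothetical cause A": itn_score(importance=1000, tractability=0.3,
                                      neglectedness=1 / 50),
    "hypothetical cause B": itn_score(importance=100, tractability=0.5,
                                      neglectedness=1 / 2),
}
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Note that the more neglected, moderately tractable cause B outranks the "bigger" cause A here, which is the intuition behind preferring neglected cause areas.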

3, 4) The incentives of B1G1 companies seem to push them towards relatively ineffective causes, and they might indirectly be causing net harm.

5) I would be happy if P&G switched from MNT to bednets. It's possible the marketing could be equally good since malaria affects so many children under 5.

6) This is a valid criticism. Since highly effective causes are rare, any restriction makes it hard to find one.

7) Not sure.

8) I don't think this is fair. Neonatal tetanus causes infant mortality, and the MNT vaccine reduces it, even if there are more effective causes to address. In general, addressing institutional/systemic issues can sometimes be more complicated and costly than directly attacking the problem.

9) Given that these companies aren't currently giving to EA causes themselves, it's hard for me to imagine such companies recommending them to consumers.

10) I'm skeptical of claims that millennials cause this or that trend, because they're such a broad group. But you could look at data for this one, such as polls that ask about inclination toward buying B1G1 products, broken down by age range.

11) EA has grown over the last decade, but total donations as a percentage of GDP are more or less flat. If you mean the growth in EA, that's too complex a question for me to answer.

12) Not sure.

Comment by Thomas Kwa (tkwa) on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-08T23:45:47.637Z · EA · GW

Epistemic status: I am a university student who has read a lot of EA material but has little knowledge about B1G1 programs. I thought carefully about this post for a few hours.

I think there's a wide spectrum of possible effectiveness depending on implementation, but in practice B1G1 programs seem unlikely to be much more effective than the average non-EA charity, and at least a factor of 10 behind many EA causes.

Overall, the strictest forms of B1G1, where a company gives away the exact same product it sells, seem gimmicky to me. The reason is that the needs of people in the developing world are vastly different from those of the wealthy people buying the products. I think market forces might even dictate that these programs are not much more effective than direct cash transfers: if they were much more effective, the target population would be willing to buy the products themselves, which would cannibalize the company's sales. [1] None of the three companies you list is so naive-- they mostly outsource their work to charities. But this comes with its own problems: they don't apply their own domain knowledge to their interventions.

Warby Parker works with Pupils Project and VisionSpring. Pupils Project operates in the US, so it's unlikely they are cost-effective. VisionSpring at least works in Bangladesh. According to a [GiveWell interview][2], they do undercut commercial prices by a factor of 2 by selling glasses at cost for 150 taka ($1.77) [3], but I doubt that glasses are a leveraged intervention in the developing world. GiveWell does not currently recommend VisionSpring as a top or standout charity, instead recommending charities that can beat cash by a factor of 5-60 and are supported by very strong evidence.

TOMS has stopped distributing shoes in favor of donating a third of their profits to a fund managed by their giving team. Their 2019 impact report is basically a marketing document full of infographics; it appears they make some attempt at evaluating the impact of charities, but don't follow effective altruist principles. For example, they fund projects in the US, as well as clean water programs (the Gates Foundation has studied the water, sanitation, and hygiene sector extensively and finds better opportunities in sanitation).

P&G's MNT vaccine program is through UNICEF, which is massively overfunded by comparison to charities recommended by GW and the Open Philanthropy Project.

There are more fundamental problems. The B1G1 website says they primarily evaluate causes by "progress of the project activity" and financial records; it's likely they're falling for the overhead myth and vastly underemphasizing the effectiveness of the cause area, which is left up to the company. EA has at least three branches where effective cause areas are found: global health/poverty, farm/wild animal welfare, and existential risk. It would be ideal if companies' B1G1 programs either supported effective programs in one of these areas, or found a unique niche.

B1G1 programs need to yield good PR, and sometimes have the additional constraint of providing a tangible product, so it appears they're limited to a small subset of global health interventions, which in these three examples look no better than the average charity in terms of effectiveness. I don't see any companies with B1G1 programs in farm or wild animal welfare, probably because it is politically contentious. Existential risk causes seem even less likely to yield good PR because they're the exact opposite of the tangible transaction at the heart of B1G1. And B1G1 seems unlikely to let companies find a unique niche given that they're outsourcing to nonprofits.

Finally, I have other concerns. B1G1 companies could be decreasing the amount given to more effective charities, which given that some charities are hundreds or thousands of times more effective than others, might cause net harm. They also might be using such programs to cover up being socially irresponsible (e.g. poor treatment of factory workers, or contributing to high-suffering animal agriculture).

Since this comment is rather long, I've split it into two, with the second comment directly answering the 12 questions.

[1]: See for why. Other GiveWell charities manage to outperform cash because they don't sell commodities-- individual families can't buy a school deworming program.


[3]: Strangely, they sell glasses for $0.85 each on their website. Perhaps they have high distribution costs.

Comment by Thomas Kwa (tkwa) on How to promote widespread usage of high quality, reusable masks · 2020-04-20T18:54:45.710Z · EA · GW

80000 Hours says the ~4% with greatest comparative advantage should work on COVID-19.

Comment by Thomas Kwa (tkwa) on Why I'm Not Vegan · 2020-04-09T23:18:17.418Z · EA · GW

> This is primarily the instrumental value of your enjoyment, right? Otherwise, you should compare your going vegan directly to the suffering of animals by not going vegan

I think you're drawing the line in an unfair place between instrumental and inherent value. Most EAs I know are not so morally demanding on themselves as to have no self-interest. If someone is well-off in a non-EA job and donates 40% of their income to GiveWell or x-risk charities, they're a fairly dedicated EA. But donating "only" 40% still implies a >10:1 income disparity between oneself and the global poor, and thus that one values one's own enjoyment >50x more than that of an arbitrary human. I think the norm of being less than maximally demanding is beneficial to the EA community and protects against unproductive asceticism. So self-interest that looks inherent can actually be instrumental.
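The >10:1 figure can be sanity-checked with back-of-the-envelope arithmetic; the salary and poverty-line numbers below are my own illustrative assumptions, not from the comment.

```python
# Rough check of the income-disparity claim; all inputs are hypothetical.
donor_income = 60_000      # assumed developed-world salary, USD/year
donation_rate = 0.40       # donating "only" 40%
kept_income = donor_income * (1 - donation_rate)

poverty_line = 700         # ~ $1.90/day extreme-poverty line, annualized
ratio = kept_income / poverty_line
print(f"post-donation income ratio: {ratio:.0f}:1")
```

Even after donating 40%, the ratio comes out far above 10:1 under these assumptions, which is the sense in which a very dedicated donor still implicitly weights their own consumption heavily.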