Posts

Thomas Kwa's Shortform 2020-09-23T19:25:09.159Z · score: 2 (1 votes)

Comments

Comment by tkwa on When you shouldn't use EA jargon and how to avoid it · 2020-10-26T18:59:50.079Z · score: 6 (5 votes) · EA · GW

Sometimes I catch myself using jargon even when I know it's a bad communication strategy, because I just like feeling clever, or signaling that I'm an insider, or obscuring my ideas so people can't challenge them. OP says these are "naughty reasons to use jargon" (slide 9), but I think that in some cases they fulfill a real social need, and if these motivations are still there, we need better ways to satisfy them.

Some ideas:

  • Instead of associating jargon with cleverness, mentally reframe things. Someone who uses jargon isn't necessarily clever, especially if they're misusing it. Feynman reportedly said "If you can’t explain something in simple terms, you don’t understand it", so pat yourself on the back for translating something into straightforward language when appropriate.
  • Instead of using jargon to feel connected to the in-group, build a group identity that doesn't rely on jargon. I'm not really sure how to do this.
  • Instead of using jargon to prevent people from understanding your ideas well enough to challenge them, keep your identity small so you don't feel personally attacked when challenged. When you have low confidence in a belief, qualify it with an "I think" or "I have a lot of confusing intuitions here, but..."
    • Perhaps also doing exposure therapy to practice losing debates without feeling like you've been slapped down
    • This is actually one of the reasons I like the "epistemic status" header; it helps me qualify my statements much more efficiently. From now on I'll be dropping the "epistemic status" terminology but keeping the header.

I'm sure there are more and better ideas in this direction.

Comment by tkwa on Making More Sequences · 2020-10-20T23:58:37.503Z · score: 1 (1 votes) · EA · GW

> So far, I’ve produced one of what I hope will be several sections of the Handbook. The topic is “Motivation”: What are the major ideas and principles of effective altruism, and how do they inspire people to take action? (You could also think of this as a general introduction to EA.)

> If this material is received well enough, I’ll keep releasing additional material on a variety of topics, following a similar format. If people aren’t satisfied with the content, style, or format, I may switch things up in the future.

Comment by tkwa on Making More Sequences · 2020-10-20T18:04:47.166Z · score: 3 (2 votes) · EA · GW

> Can someone create an “introduction to EA” sequence? I would love to do it, but I think that this should be done by an actual mod or someone from an official EA institution.

The EA handbook is being turned into a sequence.

Comment by tkwa on Which is better for animal welfare, terraforming planets or space habitats? And by how much? · 2020-10-18T17:44:49.277Z · score: 11 (7 votes) · EA · GW

I may write up an answer because the question is interesting, but I think the premise of this question-- that we will face a meaningful choice between planets and habitats-- is unlikely to hold.

1. Assuming space colonization and terraforming arrive before AI or other transformative technologies like whole brain emulation, it seems very unlikely that a terraformed planet would be "unmanaged wilderness". First, the Earth is already over 35% of the land area of the inner planets, so it's not as if there will be a large amount of free space. Second, without the benefit of natural water and nutrient sources, not to mention hundreds of thousands of years of evolution to reach a stable equilibrium, wilderness will necessarily have to be managed to maintain ecosystem balances.

2. In the long run, planets are extremely inefficient as space colonies. By some estimates, Mercury could be disassembled into solar panels and habitats within a few decades, creating thousands of times as much economic value as anything that could exist on the planet's surface. Asteroids don't even need to be lifted out of a gravity well to be turned into habitats. So economic incentives will be strongly against planets, making the question moot. (Unless we turn them into planet-sized computers or something, which would again be out of scope of this question.)

Comment by tkwa on Getting money out of politics and into charity · 2020-10-18T01:12:27.936Z · score: 2 (2 votes) · EA · GW

An idea very similar to this was mentioned on the EA forum in 2015.

Comment by tkwa on Objections to Value-Alignment between Effective Altruists · 2020-10-17T04:00:11.284Z · score: 7 (4 votes) · EA · GW

> A non-exhaustive subset of admired individuals I believe includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnowsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

Although various metrics do show that the EA community has room to grow in diversity, I don't think the fandom culture has nearly that much gender imbalance. Some EA women who consistently produce very high-quality content include Arden Koehler, Anna Salamon, Kelsey Piper, and Elizabeth Van Nostrand. I have also heard others revere Julia Wise, Michelle Hutchinson, and Julia Galef, whose writing I don't follow. I think that among EAs, I have only a slightly below-median tendency to revere men over women, and these women EA thinkers feel about as "intimidating" or "important" to me as the men on your list.

Comment by tkwa on Thomas Kwa's Shortform · 2020-10-17T03:28:03.805Z · score: 3 (2 votes) · EA · GW

Hmm, that's what I suspected. Maybe it's possible to estimate anyway though-- a quick and dirty method would be to identify the most effective interventions a large charity runs, estimate that the rest follow a power law, take the average, and add error bars upwards for the possibility that we underestimated an intervention's effectiveness. A rough sketch of what I mean is below.
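
For concreteness, here's a minimal sketch of that back-of-the-envelope method. Every number is made up for illustration-- the three top-intervention estimates, the total program count, and the equal-spending assumption are all mine:

```python
import numpy as np

# Quick-and-dirty sketch: suppose we've carefully estimated a charity's
# 3 best interventions, and assume the rest follow a power law in rank.
top_estimates = [20.0, 8.0, 5.0]   # value per dollar of best-studied programs
n_interventions = 50               # assumed total number of programs

# Fit e(r) = c * r^(-alpha) by least squares in log-log space.
ranks = np.arange(1, len(top_estimates) + 1)
slope, log_c = np.polyfit(np.log(ranks), np.log(top_estimates), 1)
alpha = -slope

# Extrapolate to all ranks and average, assuming equal spending per program.
all_ranks = np.arange(1, n_interventions + 1)
effectiveness = np.exp(log_c) * all_ranks ** (-alpha)
mean_eff = effectiveness.mean()

# Crude upward error bar: what if we underestimated the best program by 2x?
upper = (effectiveness.sum() + top_estimates[0]) / n_interventions
print(f"alpha ~ {alpha:.2f}, mean ~ {mean_eff:.2f}, upper ~ {upper:.2f} per $")
```

Because the fitted distribution is heavy-tailed, most of the average comes from the top few programs, so the upward error bars on those estimates are what matter most.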

Comment by tkwa on Thomas Kwa's Shortform · 2020-10-16T19:46:00.720Z · score: 3 (2 votes) · EA · GW

Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.

Comment by tkwa on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T03:40:36.339Z · score: 8 (6 votes) · EA · GW

First off, welcome to the EA community! If you haven't already, you might want to read the Introduction to Effective Altruism. I don't have time to write up a full answer, so here are a few of my thoughts.

Usually in the effective altruism community, we are cause-neutral; that is, we try to address whichever charitable cause area maximizes impact. While it's intuitively compelling that the most cost-effective effort is to eliminate the root cause of a problem, this could be a suboptimal choice for a few reasons.

  • Most things have multiple causes, and it's not obvious which one to spend the most resources on without an in-depth analysis; one could just as easily say that the root cause of poverty-related problems is a lack of caring about the poor, or inability to coordinate to fix large problems, or the high cost of basic necessities like medicine and clean water.
  • Even if systemic change would fix wealth inequality, actually finding and implementing such change could be difficult or expensive enough that it's more impactful to address the needs of the extreme poor first.
  • It could be tractable to research, say, government structures that incentivize redistribution of wealth if you have a political science PhD, but there might be no good way for the average person to spend money on the cause area.

I haven't looked in depth at the arguments for systemic change being cost-effective, partly because global health isn't my specialty. If you have a strong argument for it that isn't already addressed in a literature review, I encourage you to post it here as an article or shortform post.

Comment by tkwa on What types of charity will be the most effective for creating a more equal society? · 2020-10-12T19:27:29.227Z · score: 22 (11 votes) · EA · GW

In the interest of being helpful and welcoming to this new user, could any downvoters give feedback or explain their votes?

Edit: Someone is trying to join, or at least interface with, the EA community by asking a question that we can answer. The question is well-formed, represents an hour or more of thought, and addresses a popular idea among the altruistically-minded. The only concrete thing I don't like about this post is that the OP is slightly rude in saying "Please, if you disagree with me, carry your precious opinion elsewhere."

I think that people are downvoting this because the OP is not impartial, and has a preferred way to improve the world. I think that automatically downvoting posts by such people is generally wrong, and that if we have good epistemic hygiene, the benefits of engaging with the question (being more welcoming and intellectually diverse, and helping future people understand EA by addressing popular misconceptions and mistakes) will far outweigh the risks of dilution. This is because dilution only becomes a big problem when people start to misunderstand or misappropriate EA ideas, and we address such misunderstandings precisely through high-fidelity communication. Engaging here is one of the highest-fidelity forms of text-based communication possible.

Comment by tkwa on Thomas Kwa's Shortform · 2020-10-11T04:33:25.441Z · score: 1 (1 votes) · EA · GW

To clarify, you mean a donor-advised fund I have an account with (say Fidelity, Vanguard, etc.) which I manage myself?

Comment by tkwa on Thomas Kwa's Shortform · 2020-10-10T19:01:57.021Z · score: 8 (5 votes) · EA · GW

Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.

I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.

edit: DAF = donor-advised fund

Comment by tkwa on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T17:32:25.975Z · score: 2 (2 votes) · EA · GW

Thanks for the elaboration! I'm just glad to hear that the researchers didn't make any obvious mistakes.

Comment by tkwa on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-09T00:18:41.996Z · score: 3 (3 votes) · EA · GW

Epistemic status: have not read the paper

The conclusion seems reasonable, but I have some concerns about taking it at face value. The large number of dependent variables also makes me a bit skeptical: how do we know the authors weren't p-hacking by, say, choosing the best 14 of 25 possible dependent variables? More importantly, the study doesn't seem to robustly establish causation. What if Latin America, the US, and Africa have worse outcomes due to lack of trade or something? (A toy simulation of the multiple-comparisons worry is below.)
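
To put a rough number on the p-hacking worry, here's a toy simulation-- mine, not from the paper-- with two fictional groups of 100 "countries" and 25 pure-noise outcome variables:

```python
import numpy as np

# If a study tests 25 outcome variables that are all pure noise, how often
# does at least one come out "significant" at p < 0.05?
rng = np.random.default_rng(0)
n_sims, n_outcomes, n = 10_000, 25, 100

hits = 0
for _ in range(n_sims):
    a = rng.normal(size=(n_outcomes, n))   # e.g. parliamentary countries
    b = rng.normal(size=(n_outcomes, n))   # e.g. presidential countries
    # Two-sample z-statistic per outcome; |z| > 1.96 corresponds to p < 0.05.
    z = (a.mean(1) - b.mean(1)) / np.sqrt(a.var(1, ddof=1)/n + b.var(1, ddof=1)/n)
    hits += (np.abs(z) > 1.96).any()

print(hits / n_sims)  # ~0.72, matching 1 - 0.95**25
```

With 25 independent null outcomes, the chance that at least one clears p < 0.05 is 1 - 0.95^25 ≈ 72%; 14 of 25 would still be very surprising under pure noise, but correlated outcomes and selective reporting could narrow that gap.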

> Furthermore no subsequent article (afaik) has found evidence supporting presidentialism.

How many such articles have there been?

Comment by tkwa on [Linkpost] Some Thoughts on Effective Altruism · 2020-10-09T00:02:13.327Z · score: 13 (5 votes) · EA · GW

Two subcategories of idea 3 that I see, and my steelman of each:

3a. To maximize good, it's incorrect to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or, most of the QALYs that we can create result from other difficult-to-quantitatively-maximize things like ripple effects from others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern.

3b. "Good" cannot be quantified even in theory, except in the nitpicky sense that mathematically, an agent with coherent preferences acts as if it's maximizing expected utility. Such a utility function carries no moral meaning. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only under a certain psychological framing. Even though this seems incoherent, the decisions can still be morally correct.

Comment by tkwa on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-08T18:22:49.809Z · score: 10 (7 votes) · EA · GW

This answer would be strengthened by one or two examples of his careful thinking, or especially by a counterpoint to the claim that DxE uses psychological manipulation techniques on its members.

Comment by tkwa on Open and Welcome Thread: October 2020 · 2020-10-06T18:57:30.076Z · score: 1 (1 votes) · EA · GW

I strong-upvote when I feel like my comment is underappreciated, and don't think of it as too different from strong-upvoting someone else's comment. The existence of the strong-upvote already allows someone to strong-upvote whatever they want, which doesn't seem to be a problem.

Comment by tkwa on Denise_Melchin's Shortform · 2020-10-02T01:27:44.977Z · score: 3 (2 votes) · EA · GW

I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.

Comment by tkwa on How do you decide between upvoting and strong upvoting? · 2020-09-25T02:35:30.684Z · score: 1 (1 votes) · EA · GW

Karma should be awarded when a post or comment is high-quality, especially when it's hard for others to notice the high quality. So I strong-upvote a comment when it has an important non-obvious insight and I needed to think for a while before understanding it.

Comment by tkwa on Ramiro's Shortform · 2020-09-25T02:12:38.061Z · score: 8 (5 votes) · EA · GW

Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn't responded yet. Here's the email I sent him:

> Someone posted an article you co-authored in 2018 in the Stanford Arete Fellowship mentors group, and the conclusion that wild chimps had a higher welfare score than humans in India seemed off to me. I had the intuition that chimps can control their environment less well than human hunter-gatherers, have a less egalitarian social structure, and lack humans' huge amount of food infrastructure. This seemed like it could reveal either a surprising truth, or a methodological flaw or biases in the evaluators; I read through the full report and have some thoughts which I hope are constructive.
>
> - The way humans are compared to non-humans seems too superficial. I think giving 6 points to humans in India vs. 9 points to wild chimpanzees based on the high level of diagnosed disability among people in India is misleading, because we've spent billions more on diagnosing human diseases than chimp diseases.
> - Giving 0 points to humans in India for thirst/hunger/malnutrition, while chimps get 11, seems absurd for similar reasons. If we put as much effort into the diets of chimps as into the diets of wealthy humans, to get a true reference point for health, I wouldn't be surprised if more than 15% of chimps were considered malnourished. Also, the untreated drinking water consumed in India is used to support this rating, but though untreated water causes harm through disease, it shouldn't fall under the "thirst/hunger/malnutrition" category. [name of mentor] from the chat sums this up as there not being a 'wealthy industrialized chimps' group to contrast with.
>
> I'm wondering if you see these as important criticisms. Do you still endorse the overall results of the report enough that you think we should share it with mentees, and if so, should we add caveats?

Comment by tkwa on Effective strategy and an overlooked area of research? · 2020-09-25T02:01:04.284Z · score: 14 (4 votes) · EA · GW

Here are my thoughts, which may sound overly critical, but are an honest attempt to communicate my ideas clearly.

When I start reading, I immediately notice two red flags:

  • The argument is formatted as a long manifesto by someone without a known track record of good epistemics. The manifesto claims to solve global cooperation, something many competent people have tried hard to solve.
  • The idea of a type of transformative knowledge that causes people to suddenly ignore their current incentives and start cooperating sounds fantastical.

Because of these red flags, I decide that the claim is extraordinary and that you need to provide extraordinary evidence. Reading further, I notice more problems. To be clear, I don't think patching these problems will save the thesis: I would still be skeptical due to the prior implausibility and the lack of a clear, plausible plan for increasing the world's empathy levels by 10%.

  • Aligning everyone's beliefs won't solve conflict; you need to fix structural problems too.
  • If you could communicate obvious true beliefs and get people to internalize them properly, everyone would be an EA. A general method of communicating non-obvious true beliefs about the nature of reality to people, and getting them to act on them, sounds implausible.
  • You say "At some critical point a positive feedback loop will emerge so that every human becomes supersapient over time." If this is the natural result of some small critical mass of people becoming supersapient, why has Buddhism not taken over the world with its millions of enlightened people over thousands of years of existence?

The version of this idea that is scaled back to be plausible to me sounds something like "Scientists should study the benefits of meditation more; with a LOT of funding and rigor this could possibly get past 'does meditation work' to identifying specific benefits and best practices. People should also practice meditation and, if they can safely, experiment with psychedelics, to better understand themselves and possibly become more rational and empathic." That's something I believe, but interventions may not be cost-effective enough to be an EA cause area. (There are EA-adjacent efforts to improve mental health in the developing world, but not many stand out as highly leveraged.)

Comment by tkwa on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-25T00:27:29.169Z · score: 1 (1 votes) · EA · GW

Can you give an example of such a conversation, as well as the thought process towards bringing them up? I hear about conversational principles like these, but I don't know how to get from "vague feeling that something is wrong with the conversation" to "I think you're confusing me with excess information".

Comment by tkwa on Halffull's Shortform · 2020-09-23T23:16:19.744Z · score: 3 (2 votes) · EA · GW

It's not on the 80k list of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough-- it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.

Comment by tkwa on Buck's Shortform · 2020-09-23T22:46:27.416Z · score: 8 (3 votes) · EA · GW

I've upvoted some low-quality criticism of EA. Some of this is due to emotional biases or whatever, but one reason I still endorse is that I haven't read strong responses to some obvious criticisms.

Example: I currently believe that an important reason EA is slightly uninclusive and moderately undiverse is that EA community-building was targeted at people with a lot of power-- rich people, top university students, etc.-- as a necessary strategic move. It feels like it's worked, but I haven't seen a good writeup of the effects of this.

I think the same low-quality criticisms keep popping up because there's no quick rebuttal. I wish there were a post of "fallacies about problems with EA" that one could quickly link to.

Comment by tkwa on Denise_Melchin's Shortform · 2020-09-23T22:00:55.340Z · score: 3 (2 votes) · EA · GW

I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be taking beliefs too seriously when those beliefs only work in an environment of epistemic learned helplessness.

One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another way could be checking that they're consistent with facts e.g. lack of evidence for supernatural entities, or the best knowledge on conscious experience of animals.

Comment by tkwa on Thomas Kwa's Shortform · 2020-09-23T19:25:09.517Z · score: 30 (12 votes) · EA · GW

I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:

  • There is no single "conventional morality"; it seems very difficult to compile a list of what every human culture thinks of as good, and it's not obvious how one would form a "weighted average" between these.
  • Most people don't think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. the low cost of saving lives in the developing world) or be absurd (placing higher moral weight on beings that are physically closer to you).
  • Human cultures have gone through millennia of cultural evolution, such that values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says "each age gets the thought it needs".

However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with philosophers to cross-reference between these while fixing inconsistencies and removing values that seem to have an "unfair" competitive edge in the battle between ideas (whatever that means!).

The potential payoff seems huge, as it would expand the basis of EA moral reasoning from the intuitions of a tiny fraction of humanity to those of thousands of human cultures, and allow us to be more confident about our actions. Is there a reason this isn't being done? Is it just too expensive?

Comment by tkwa on Announcing the EA donation swap system · 2020-09-17T03:29:23.389Z · score: 1 (1 votes) · EA · GW

Is this still actively in use in September 2020?

Comment by tkwa on How do i know a charity is actually effective · 2020-07-17T20:29:56.245Z · score: 2 (2 votes) · EA · GW

The person who broke down in tears during an interview is actually Derek Parfit, also an effective altruist. Source: http://bostonreview.net/books-ideas-mccoy-family-center-ethics-society-stanford-university/lives-moral-saints

> As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract.

Comment by tkwa on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-01T04:02:29.086Z · score: 12 (8 votes) · EA · GW

Say an expert (or a prediction market median) is much stronger than you, but you have a strong inside view. What's your thought process for validating it? What's your thought process if you choose to defer?

Comment by tkwa on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-28T23:56:36.161Z · score: 2 (2 votes) · EA · GW

I thought this talk was brilliant, not least in the specific terms you mentioned. I often talk to my EA friends about "counterfactual impact", leverage, and "comparative advantage", and I often have a hard time switching gears to talk to non-EAs, but I can imagine this slight shift in terminology to "cause-and-effect evidence", leverage, and "personal advantage" hitting close to the core ideas while sounding much friendlier. Most of the talk was immediately actionable as well. Thank you for making it.

Comment by tkwa on Asymmetric altruism · 2020-06-27T19:32:27.807Z · score: 1 (1 votes) · EA · GW

I notice that I meant to link to this different episode on the non-identity problem but found it didn't really fit and rationalized that away, so my comment may not be relevant.

Comment by tkwa on EA Forum feature suggestion thread · 2020-06-27T18:34:02.034Z · score: 14 (6 votes) · EA · GW

I think this could be more useful for people who are slightly downvoted, or whose posts just don't get much attention. I remember a few recent highly-downvoted posts and comments (below -10 or so), and all of them seem to have well-written feedback; sometimes more thought was put into the feedback than the original post (not necessarily a bad thing, but going even further could be a massive waste of energy).

People who provide feedback also have to want to engage. On Stack Exchange, closing a question requires a reason, but mods and high-rep users are known to close poorly-written questions for vague reasons without providing much feedback. An even worse failure mode I see is if users are disincentivized from downvoting because they don't want to be added to the feedback list.

Comment by tkwa on Asymmetric altruism · 2020-06-27T18:18:03.872Z · score: 1 (1 votes) · EA · GW

Have you heard the 80000 Hours podcast episode with Will MacAskill? The first hour has a decent exploration of asymmetries and similar deontological concerns, and MacAskill's paralysis argument is a fairly good argument against them.

Comment by tkwa on EA could benefit from a general-purpose nonprofit entity that offers donor-advised funds and fiscal sponsorship · 2020-06-27T02:52:26.177Z · score: 4 (3 votes) · EA · GW

Related: the 80k podcast episode on patient philanthropy.

Comment by tkwa on Matt_Lerner's Shortform · 2020-06-17T00:12:09.450Z · score: 1 (1 votes) · EA · GW

Proportional representation?

Comment by tkwa on Are All Existential Risks Equivalent to a Lack of General Collective Intelligence? And is GCI therefore the Most Important Human Innovation in the History and Immediate Future of Mankind? · 2020-06-16T18:41:15.256Z · score: 2 (2 votes) · EA · GW

This type of content might be more suited to LessWrong, and you might get better feedback/engagement there.

Comment by tkwa on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-16T18:34:26.142Z · score: 3 (2 votes) · EA · GW

Glad I could help. By the way, it came to my attention that GiveWell is investigating the cause area of providing glasses in developing countries: https://www.givewell.org/international/technical/programs/eyeglasses#How_cost-effective_is_the_program

This is promising, but I still endorse the general stance that B1G1-type programs have obstacles to overcome to reach effectiveness.

Comment by tkwa on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-09T01:25:07.937Z · score: 5 (3 votes) · EA · GW

It looks like you're fairly new to effective altruism, so you might want to see my other comment or read the EA Handbook for more of the reasoning behind these answers.

1) I'm not affiliated with the CEA (nor are most of the people on this forum), but there are certainly forms of philanthropy more in line with the principles of effective altruism.

2) Effectiveness is often estimated as importance x neglectedness x tractability. There are good reasons for this: when correctly formalized, the product is an estimate of the marginal good one can do in the world (see the sketch below). I think most consumers are better off either buying from socially responsible non-B1G1 companies, or buying from any company and donating the money saved to either GW top charities (which rate much better in importance and neglectedness) or high-impact existential risk, farm animal welfare, or wild animal welfare causes, which can rate even better depending on your value system. https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/
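
As I understand the formalization on that page, the three factors are ratios whose intermediate terms cancel, so their product is the good done per unit of extra resources:

```latex
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
```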

3, 4) The incentives of B1G1 companies seem to push them towards relatively ineffective causes, and they might indirectly be causing net harm.

5) I would be happy if P&G switched from MNT to bednets. It's possible the marketing could be equally good since malaria affects so many children under 5.

6) This is a valid criticism. Since highly effective causes are rare, any restriction makes it hard to find one.

7) Not sure.

8) I don't think this is fair. Neonatal tetanus causes infant mortality, and the MNT vaccine reduces it, even if there are more effective causes to address. In general, addressing institutional/systemic issues can sometimes be more complicated and costly than directly attacking the problem.

9) Given that these companies aren't currently giving to EA causes themselves, it's hard for me to imagine such companies recommending them to consumers.

10) I'm skeptical of claims that millennials cause this or that trend because they're such a broad group. But you could look at data for this one. For example, polls that ask about inclination towards buying B1G1 products, broken down by age range.

11) EA has grown over the last decade, but total donations as a percentage of GDP are more or less flat. If you mean the growth in EA, that's too complex a question for me to answer.

12) Not sure.

Comment by tkwa on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-08T23:45:47.637Z · score: 24 (8 votes) · EA · GW

Epistemic status: I am a university student who has read a lot of EA material but has little knowledge about B1G1 programs. I thought carefully about this post for a few hours.

I think there's a wide spectrum of possible effectiveness depending on implementation, but in practice B1G1 programs seem unlikely to be much more effective than the average non-EA charity, and a factor of at least 10 behind many EA causes.

Overall, the strictest forms of B1G1, where a company gives away the exact same product it sells, seem gimmicky to me. The reason is that the needs of people in the developing world are vastly different from those of the wealthy people buying the products. I think market forces might even dictate that these programs are not much more effective than direct cash transfers: if the donated products were much more effective, the target population would be willing to buy them, which would cannibalize the company's sales. [1] None of the 3 companies you list is so naive-- they mostly outsource their giving to charities. But this comes with its own problems: they don't apply their own domain knowledge to their interventions.

Warby Parker works with Pupils Project and VisionSpring. Pupils Project operates in the US, so it's unlikely they are cost-effective. VisionSpring at least works in Bangladesh. According to a [GiveWell interview][2], they do undercut commercial prices by a factor of 2 by selling glasses at cost for 150 taka ($1.77) [3], but I doubt that glasses are a leveraged intervention in the developing world. GiveWell does not currently recommend VisionSpring as a top or standout charity, instead recommending charities that can beat cash by a factor of 5-60 and are supported by very strong evidence.

TOMS has stopped distributing shoes in favor of donating 1/3 of their profits to a fund managed by their giving team. Their 2019 impact report is basically a marketing document full of infographics; it appears they make some attempt at evaluating the impact of charities, but don't follow effective altruist principles. For example, they fund projects in the US, and clean water programs (the Gates Foundation has studied the water, sanitation, and hygiene sector extensively and finds better opportunities in sanitation).

P&G's MNT vaccine program runs through UNICEF, which is massively overfunded compared to the charities recommended by GW and the Open Philanthropy Project.

There are more fundamental problems. The B1G1 website says they primarily evaluate causes by "progress of the project activity" and financial records; it's likely they're falling for the overhead myth and vastly underemphasizing the effectiveness of the cause area, which is left up to the company. EA has at least three branches where effective cause areas are found: global health/poverty, farm/wild animal welfare, and existential risk. It would be ideal if companies' B1G1 programs either supported effective programs in one of these areas, or found a unique niche. B1G1 programs need to yield good PR, and sometimes have the additional constraint of providing a tangible product, so it appears they're limited to a small subset of global health interventions, which in these three examples look no better than the average charity in terms of effectiveness. I don't see any companies with B1G1 programs in farm or wild animal welfare, probably because it is politically contentious. Existential risk causes seem even less likely to yield good PR because they're the exact opposite of the tangible transaction at the heart of B1G1. And B1G1 seems unlikely to let companies find a unique niche given that they're outsourcing to nonprofits.

Finally, I have other concerns. B1G1 companies could be decreasing the amount given to more effective charities, which given that some charities are hundreds or thousands of times more effective than others, might cause net harm. They also might be using such programs to cover up being socially irresponsible (e.g. poor treatment of factory workers, or contributing to high-suffering animal agriculture).

Since this comment is rather long, I've split it into two, with the second comment directly answering the 12 questions.

[1]: See https://www.givewell.org/international/charities/income-raising-goods for why. Other GiveWell charities manage to outperform cash because they don't sell commodities-- individual families can't buy a school deworming program.

[2]: https://files.givewell.org/files/conversations/VisionSpring_05-17-19_(public).pdf

[3]: Strangely, they sell glasses for $0.85 each on their website. Perhaps they have high distribution costs.

Comment by tkwa on How to promote widespread usage of high quality, reusable masks · 2020-04-20T18:54:45.710Z · score: 2 (2 votes) · EA · GW

80000 Hours says the ~4% with greatest comparative advantage should work on COVID-19.

Comment by tkwa on Why I'm Not Vegan · 2020-04-09T23:18:17.418Z · score: 10 (5 votes) · EA · GW

> This is primarily the instrumental value of your enjoyment, right? Otherwise, you should compare your going vegan directly to the suffering of animals by not going vegan

I think you're drawing the line in an unfair place between instrumental and inherent value. Most EAs I know are not so morally demanding on themselves as to have no self-interest. If someone is well-off in a non-EA job and donates 40% of their income to GiveWell or x-risk charities, they're a fairly dedicated EA. But donating "only" 40% still implies a >10:1 income disparity between oneself and the global poor, and thus, given diminishing marginal utility, that one values one's own enjoyment >50x more than that of an arbitrary human (rough arithmetic below). I think the norm of being less than maximally demanding is beneficial to the EA community and protects against unproductive asceticism. So self-interest that looks inherent can actually be instrumental.
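
To make that arithmetic explicit-- a rough sketch with illustrative numbers and a log-utility assumption, both mine rather than anything from the post:

```latex
\[
c_{\text{self}} \approx 0.6 \times \$70{,}000 = \$42{,}000/\text{yr},
\qquad
c_{\text{poor}} \approx \$800/\text{yr}
\]
\[
\text{With } u(c) = \log c:\qquad
\frac{u'(c_{\text{poor}})}{u'(c_{\text{self}})}
= \frac{c_{\text{self}}}{c_{\text{poor}}} \approx 50
\]
```

So keeping a marginal dollar instead of transferring it implies weighting one's own enjoyment roughly 50x that of the recipient; steeper-than-log utility or effective-charity multipliers would push the ratio even higher.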

Comment by tkwa on Empirical data on value drift · 2020-04-05T00:04:45.866Z · score: 9 (3 votes) · EA · GW

The CEA founding team seems like close to a best case for resisting value drift: to found CEA, one must have had a much higher baseline inclination towards EA than the average person, and probably also a lot of power, which helps them control their environment while many EAs are forced into non-EA lifestyles by factors beyond their control. So 25% of the original CEA team drifting feels scarier to me than 40-70% of average EAs drifting.