Posts

Why and how to start a for-profit company serving emerging markets 2019-11-06T01:00:08.621Z · score: 66 (32 votes)
"Why Nations Fail" and the long-termist view of global poverty 2019-07-16T07:59:04.566Z · score: 84 (39 votes)
Has your "EA worldview" changed over time? How and why? 2019-02-23T14:22:30.855Z · score: 28 (13 votes)
Where I gave and why in 2016 2017-01-06T05:10:45.783Z · score: 6 (6 votes)
[link] GiveWell's 2015 recommendations are out! 2015-11-21T02:49:59.680Z · score: 2 (2 votes)
Solving donation coordination problems 2015-05-28T01:04:42.999Z · score: 8 (8 votes)
How important is marginal earning to give? 2015-05-19T20:41:47.098Z · score: 16 (18 votes)
Explaining impact purchases 2015-04-16T00:53:20.126Z · score: 6 (6 votes)
Tech job Q&A 2015-03-19T17:52:17.386Z · score: 6 (6 votes)
Results from a survey of people's views on donation matching 2015-03-01T07:30:55.575Z · score: 5 (5 votes)
[cross-post] Does donation matching work? 2015-01-11T03:49:56.538Z · score: 9 (9 votes)
Where are you giving and why? 2014-12-12T03:32:17.417Z · score: 4 (4 votes)
Spitballing EA career ideas 2014-11-30T00:02:32.521Z · score: 11 (11 votes)
Career choice: Evaluate opportunities, not just fields 2014-09-28T21:37:38.685Z · score: 16 (16 votes)
Brainstorming thread: ideas for large EA funders 2014-09-28T19:15:59.373Z · score: 7 (7 votes)
Lessons from running Harvard Effective Altruism 2014-09-19T04:05:01.573Z · score: 15 (15 votes)
A critique of effective altruism 2013-12-03T01:39:43.000Z · score: 0 (0 votes)
Replaceability in altruism 2013-08-29T04:00:52.000Z · score: 0 (0 votes)
Effective altruists and outsiders 2013-08-05T04:00:57.000Z · score: 1 (1 votes)
Spending on yourself vs. charity 2013-07-05T04:00:59.000Z · score: 1 (1 votes)
Common responses to earning to give 2013-06-10T04:00:05.000Z · score: 1 (1 votes)

Comments

Comment by ben_kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-11T14:27:19.289Z · score: 16 (5 votes) · EA · GW

Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):

  • Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
  • When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
  • In the US, we have fully transparent salaries (everyone at the company can look up anyone else's salary in a spreadsheet). We weren't able to extend this norm to our Senegalese subsidiary because it caused too much interpersonal conflict. (This was at least partly the result of us not putting enough investment into making the salary scale work for everyone, but my understanding is that my Senegalese coworkers were pessimistic about bringing back salary transparency even if we fixed that.)
  • In Senegal, people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who, when I asked them a yes-or-no question, would answer "Yes" followed by an explanation of why the answer is no.)

Exporting different norms is quite hard at scale. You need to hire people who are closest to the norms that you want, but they'll still probably be far away, so you'll also have to invest a lot in propagating those norms, which only really works well 1-on-1. When we needed to scale our local Senegal team quickly, we ended up having to compromise on some norms to do so (e.g. salary transparency, amount of paperwork).

Comment by ben_kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-07T20:29:53.862Z · score: 12 (5 votes) · EA · GW

Broadly agree, but:

You might end up making more impact if you started a startup in your own country, and just earned-to-give your earnings to GiveWell / EA organizations. This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.

Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously didn't have smartphones. Providing people cell service is hard (if you're not a telecom), but if an area has cell service but no internet you can still make useful information products with USSD, SMS, etc., or physical shops.

(I do think that many good startup ideas in the developing world involve providing relatively "basic" needs! But it seems to me like there's decent opportunity there.)

Comment by ben_kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-06T11:32:19.158Z · score: 3 (2 votes) · EA · GW

Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!

Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.

Comment by ben_kuhn on The Future of Earning to Give · 2019-10-14T00:09:32.292Z · score: 25 (14 votes) · EA · GW
I imagine that there are a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent.

The "one charity" argument is only true on the margin. It would be incorrect to conclude from this that nobody should start additional charities—for instance, even though GiveWell's current highest-priority gap is AMF, I'm still glad that Malaria Consortium exists so that it could absorb $25m from them earlier this year. Similarly, it's incorrect to conclude from this style of argument that the social returns to talent should be concentrated in specific fields. While there may be a small number of "most important tasks" on the margin, the EA community is now big enough that we might expect to see margins changing over time.

Also, the majority of people who are earning to give would probably be able to fund less than one full-time person doing direct work. If your direct work would be mostly non-replaceable, then earning to give compares unfavorably to doing that work yourself. (Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.)

Comment by ben_kuhn on Long-term Donation Bunching? · 2019-09-27T15:44:28.261Z · score: 11 (7 votes) · EA · GW

If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?

I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.

Comment by ben_kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-23T08:57:48.517Z · score: 4 (2 votes) · EA · GW

Whoops, sorry about the quotes--I was writing quickly and intended them to denote that I was using "solve" in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.

Comment by ben_kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-23T00:32:43.265Z · score: 2 (1 votes) · EA · GW

These theoretical claims seem quite weak/incomplete.

  • In practice, autocrats' time horizons are highly finite, so I don't think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
  • All your suggestions about oligarchy mitigating the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy's interests. You haven't made any case that the important instances of these problems are in an oligarchy's interest to solve, and it doesn't seem likely to me.
Comment by ben_kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-18T22:46:57.259Z · score: 11 (4 votes) · EA · GW

What's the shift you think it would imply in animal advocacy?

Comment by ben_kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-17T10:05:11.512Z · score: 6 (3 votes) · EA · GW

I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!

Comment by ben_kuhn on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-17T11:29:45.889Z · score: 16 (7 votes) · EA · GW

Yikes; this is pretty concerning data. Great find!

I'd be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their "realistic calculation" of their cost-effectiveness, which assumes 5% annualized attrition. (That's not an apples-to-apples comparison, so their estimate isn't necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)

Comment by ben_kuhn on Please use art to convey EA! · 2019-05-26T03:02:00.624Z · score: 27 (12 votes) · EA · GW

I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I'd be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.

For instance, a lot of today's fiction seems cynical and pessimistic about human nature; the characters frequently don't seem to have goals related to anything other than their immediate social environment; and they often don't pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.

Comment by ben_kuhn on Structure EA organizations as WSDNs? · 2019-05-13T02:50:29.767Z · score: 4 (2 votes) · EA · GW
worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership

This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I'm very skeptical of our ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?

Comment by ben_kuhn on Is preventing child abuse a plausible Cause X? · 2019-05-07T12:21:20.124Z · score: 4 (2 votes) · EA · GW

(PS: if you're interested in posting but unsure about content, I'd be excited to help answer any q's or read a draft! My email is in my profile.)

Comment by ben_kuhn on Is EA unscalable central planning? · 2019-05-07T12:07:52.083Z · score: 31 (10 votes) · EA · GW

What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from only 0.01% of the population!

In general, we should do things that don't scale but are optimal right now, rather than things that do scale but aren't optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.

Comment by ben_kuhn on Is preventing child abuse a plausible Cause X? · 2019-05-05T18:08:14.694Z · score: 27 (13 votes) · EA · GW

I would be extremely interested if you were to hypothetically write an "intro to child protection/welfare for EAs" post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment show that other people agree :)

Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.

"Cause X" usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I'd expect a cause which the EA community decided was "cause X" to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person's take.)

Comment by ben_kuhn on Does climate change deserve more attention within EA? · 2019-04-17T20:53:21.427Z · score: 41 (21 votes) · EA · GW

While climate change doesn't immediately appear to be neglected, it seems possible that many people/orgs "working on climate change" aren't doing so particularly effectively.

Historically, it seems like the environmental movement has an extremely poor track record of applying an "optimizing mindset" to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought, which blame almost anything except the actual biggest contributor (agriculture).

Of course, I have no idea how much this consideration increases the "effective neglectedness" of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only on par with global health rather than massively less neglected, as you might guess from news coverage?

Comment by ben_kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T13:27:42.442Z · score: 13 (7 votes) · EA · GW

If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
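To make the arithmetic explicit, here is a minimal sketch of the back-of-envelope calculation (the ~$170k per person-year and ~$1k-2k per marginal grant inputs are my reconstructions from the $85/hour and 12-24 hour figures above, not numbers quoted from the grant report):

```python
# Back-of-envelope reconstruction; the inputs below are assumed, not CEA's published figures.
person_year_hours = 2000                   # assumed hours in a full-time person-year
person_year_cost = 170_000                 # assumed annual cost, consistent with ~$85/hour
marginal_cost_per_grant = (1_000, 2_000)   # assumed marginal cost range per grant

hourly_rate = person_year_cost / person_year_hours
hours_per_grant = tuple(cost / hourly_rate for cost in marginal_cost_per_grant)

print(f"Implied value of staff time: ${hourly_rate:.0f}/hour")  # ~$85/hour
print(f"Implied time per marginal grant: {hours_per_grant[0]:.0f}-{hours_per_grant[1]:.0f} hours")  # ~12-24 hours
```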

This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't know about, but I'm curious if you (or someone from CEA) knows what they are?

[Not trying to imply that CEA is failing to optimize here or anything—I'm mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]

Comment by ben_kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T09:45:18.611Z · score: 124 (65 votes) · EA · GW

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.

If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:

  • Apply even more principle of charity than usual
  • Take time to phrase your question in the way that's easiest to answer
  • Apply some filter and don't ask unimportant questions
  • Use a tone that minimizes stress for the person you're questioning
Comment by ben_kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T23:28:15.675Z · score: 18 (10 votes) · EA · GW

Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?

Comment by ben_kuhn on My new article on EA and the systemic change objection · 2019-04-07T23:44:42.310Z · score: 4 (2 votes) · EA · GW

This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that "social movements are the [or at least a] key drivers of change in human history." It seems perverse to assume otherwise on a forum whose entire point is to help the progress of a social movement that claims to e.g. help participants have 100x more positive impact in the world.

More generally, it's true that your chance of convincing "constitutionally disinclined" people with two papers is low. But your chance of convincing anyone is zero with either (1) a bare assertion that there's some good stuff there somewhere, or (2) the claim that they will understand you after spending 20 hours reading some very long books.

Also, I think your chance of convincing non-constitutionally-disinclined people with the right two papers is higher than you think. Although you're correct that two papers directly arguing "you should use paradigm x instead of paradigm y" may not be super helpful, two pointers to "here are some interesting conclusions that you'll come to if you apply paradigm x" can easily be enough to pique someone's interest.

Comment by ben_kuhn on EA is vetting-constrained · 2019-03-09T14:40:55.040Z · score: 44 (22 votes) · EA · GW

I'm very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which seems to have been involved in most of the biggest initiatives to scale out EA's vetting, through EA Grants and EA Funds).

  • What % of grant applicants are in the "definitely good enough" vs "definitely (or reasonably confidently) not good enough" vs "uncertain + not enough time/expertise to evaluate" buckets?
  • (Are these the right buckets to be looking at?)
  • What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
  • Do you have any upcoming plans to address them?

Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of "less established" organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.

Comment by ben_kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T09:26:26.458Z · score: 7 (5 votes) · EA · GW
It seems easier to increase the efficiency of your work than the quality.

In software engineering, I've found the exact opposite. It's relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I'm using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.

(Of course there are many reasons why other types of work might be different from software eng!)

Comment by ben_kuhn on Review of Education Interventions and Charities in Sub-Saharan Africa · 2019-02-27T01:25:05.938Z · score: 7 (5 votes) · EA · GW

In addition to Khorton's points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but for transparency/replicability of reasoning according to certain standards of evidence. If your donors are willing to be "highly engaged" or trust you a lot, or if they have different epistemics from GiveWell (e.g., if they put relatively more weight on models of root-level causes of poverty/underdevelopment, compared to RCTs), I bet there's something else out there that they would think is higher expected value.

Of course, finding and vetting that thing is still a problem, so it's possible that the thoroughness and quality of GW's research outweighs these points, but it's worth considering.

Comment by ben_kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T01:00:10.930Z · score: 4 (2 votes) · EA · GW

This is why I think Wave's two-work-test approach is useful; even if someone "looks good on paper" and makes it through the early filters, it's often immediately obvious from even a small work sample that they won't be at the top of the applicant pool, so there's no need for the larger sample.

Comment by ben_kuhn on My new article on EA and the systemic change objection · 2019-02-27T00:51:25.017Z · score: 3 (2 votes) · EA · GW

Downvoted for not being at least two of true, necessary or kind. If you're going to be snide, I think you should do a much better job of defending your claims rather than merely gesturing at a vague appeal to "holistic and historically extended nature."

You've left zero pointers to the justifications for your beliefs that could be followed by a good-faith interlocutor in under ~20h of reading. Nor have you made an actual case for why a 20-hour investment is required for someone to even be qualified to dismiss the field (an incredible claim given the number of scholars who are willing to engage with arguments based on far less than 20 hours of background reading).

Your comment could be rewritten mutatis mutandis with "scientology" instead of "social movement studies," with practically no change to the argument structure. I think an argument for why a field is worth looking into should strive for more rigor and fewer vaguely insulting pot-shots.

(EDIT: ps, I'm not the downvoter on your other two responses. Wish they'd explained.)

Comment by ben_kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T00:17:12.105Z · score: 13 (9 votes) · EA · GW
1. Un-timed work test (e.g. OPP research analyst)

Huh. I'm really surprised that they find this useful. One of the main ways that Wave employees' productivity has varied is in how quickly they can accomplish a task at a given level of quality, which varies by an order of magnitude between our best and worst candidates. (Or equivalently, how good of a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.

Comment by ben_kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T00:07:12.475Z · score: 7 (4 votes) · EA · GW

It's much more understandable to me for the grants to have labor-intensive processes, since they can't fire bad performers later so the effective commitment they're making is much higher. (A proposal that takes weeks to write is still a questionable format IMO in terms of information density/ease of evaluation, but I don't know much about grant-making, so this is weakly held.)

Comment by ben_kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T12:25:43.570Z · score: 29 (18 votes) · EA · GW

I'm sorry to see so many orgs take 10+ hours to get you only partway through the process, let alone multiple 40+ hour processes. This is especially glaring compared to the very low number of orgs that rejected you in under 5 hours.

It sounds like many of these orgs would benefit (both you and themselves!) from improving their evaluations to reject people earlier in the process.

The current technical interview process for my team at Wave is under 10 hours over 4 stages (assuming you spend 1 hour on your cover letter and resume); the majority of rejections happen after less than 5 hours. The non-technical interview process is somewhat longer, but I would guess still not more than 15 hours, with the majority of applications being rejected in under 5 hours (the final interview is a full day).

Notably, we do two work samples, a 2hr one (where most applicants are rejected) and a 4-5hr one for the final interview. If I were interviewing for a non-technical role I'd insert a behavioral interview after the first work sample as well. These shorter interviews help us screen out many candidates before we waste a ton of their time. It's hard for me to imagine needing 8+ hours for a work sample unless the role is extremely complex and requires many different skills.

Comment by ben_kuhn on Has your "EA worldview" changed over time? How and why? · 2019-02-24T21:53:05.211Z · score: 6 (4 votes) · EA · GW

Wow, thanks for the great in-depth reply!

now weight purchasing fuzzies much more highly than I used to.

Do you mean charitable fuzzies specifically? What kinds of fuzzies do you purchase more of? Do you think this generalizes to more EAs?

What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.

Once upon a time, I read a Douglas Hofstadter book that convinced me that the answer was "nothing" (basically because determinism works at the level of basic physics, and morality / your perception of having free will operates about a gazillion levels of abstraction higher, such that applying the model "deterministic" to your own behavior is kind of like saying that no person is more than 7 years old because that's the point where all the cells in their body get replaced).

I was in high school at the time so I don't know if it would have the same effect on me, or you, today though.

Comment by ben_kuhn on My new article on EA and the systemic change objection · 2019-02-17T19:37:16.707Z · score: 6 (5 votes) · EA · GW

This was very interesting food for thought, thanks!

Taking systemic change seriously would require EA to embrace a much wider range of methods and forms of evidence, embracing the inevitably uncertain judgments involved in the holistic interpretation of social systems and analysis of the dynamics of social change.

This is definitely correct, but I'd guess that where I (and many EAs) part ways with you is not in being in principle unwilling to make commitments to other methods/forms of evidence, but rather, not finding any other existing paradigms compelling or not agreeing on which ones we find compelling.

You can't separate the question of "should I take systemic change seriously" from the question of "how compelling is the most compelling paradigm for thinking about systemic change", so I think you would have a stronger chance of convincing EAs to take systemic change seriously by arguing why EAs should find a specific paradigm compelling.

Here are some features that might make a paradigm compelling to me. I think the current EA paradigm for addressing global poverty exhibits all of them, but it seems to me that one or more is lacking from (my stereotype of) any current paradigm for addressing systemic change:

  • Tolerance of uncertainty and ability to course-correct
  • Compatibility with our understanding of human behavior (e.g. the tendency of people to follow local incentives)
  • Global impartiality
  • Scope sensitivity (i.e. trying to reason about the relative sizes of different things)
  • Grounding in consequentialism
  • Not having its internal discourse co-opted by status seeking or "mood affiliation"
Comment by ben_kuhn on My new article on EA and the systemic change objection · 2019-02-17T17:14:07.004Z · score: 17 (8 votes) · EA · GW
Defenders of EA chide critics for not setting up organizations to evaluate potential systemic changes and for their vague critiques of capitalism. They ignore the entire academic discipline of Social Movement Studies, which focuses on the processes and dynamics of large-scale social change as well as vast quantities of analysis by social movements themselves. The failure within EA to even acknowledge the existence of this evidence, let alone engage with it, suggests status-quo bias.

I had never heard of this field, and (I suspect) neither have many of your readers. Because of this, if you aim to persuade EAs, I think you would do well to follow Noah Smith's "Two Paper Rule" here. Can you recommend some papers that are good exemplars of the "vast quantities of analysis"?

If you want me to read the vast literature, cite me two papers that are exemplars and paragons of that literature. Foundational papers, key recent innovations - whatever you like (but no review papers or summaries). Just two. I will read them.
Comment by ben_kuhn on Talent gaps from the perspective of a talent limited organization. · 2017-11-06T22:45:20.931Z · score: 1 (1 votes) · EA · GW

But you did find that 20k (above min wage in most places) was not appreciably different from 50k in terms of "talent pool in the traits we would like to see more of"? I'm still extremely surprised. While above the minimum wage, 20k would require many EAs I know to make large sacrifices in housing location, quality and/or savings buffer.

Are you only looking for people who are willing to move to India, or do you think the traits you care about are strongly correlated with being willing to make large sacrifices on those dimensions, or what?

Comment by ben_kuhn on Talent gaps from the perspective of a talent limited organization. · 2017-11-06T09:58:11.325Z · score: 6 (6 votes) · EA · GW

We have experimented with different levels of salaries between 10k and 50k USD and have not found increasing the salary increases the talent pool in the traits we would like to see more of

Is this really true throughout the whole range? It seems extraordinary to claim that moving from a salary of 10k to 20k truly had no effect. Most EAs live on much more than US$10k, and I think this is the right call for most of them.

Comment by ben_kuhn on Introducing the EA Funds · 2017-02-09T04:15:39.115Z · score: 10 (10 votes) · EA · GW

Awesome!

Is there a difference between donating to the Global Health and Development fund and donating to GiveWell top charities (as Elie has done with his personal donation for each of the last four years)?

Comment by ben_kuhn on Estimating the Value of Mobile Money · 2017-01-05T21:34:51.194Z · score: 2 (2 votes) · EA · GW

I think their profit is 20% of their revenue (for a money transfer company, revenue is the total fees brought in, not total money paid into the network).

Comment by ben_kuhn on Estimating the Value of Mobile Money · 2017-01-05T21:33:11.043Z · score: 1 (1 votes) · EA · GW

Re 1, it's worth noting that M-Pesa was administered by very different teams in different countries. In Kenya it was allowed to operate mostly as a startup with limited oversight from Safaricom (or anyone else), whereas in other countries (to varying degrees) the people running M-Pesa were constrained by stricter management from the telecom's country head. This means that there are obvious ways in which M-Pesa was executed less well in other countries. For instance, Wave integrates with M-Pesa in both Kenya and Tanzania, and despite running on exactly the same technology, the Tanzanian system's uptime is substantially worse. Similarly, the quality of their agents and their customer support staff in Tanzania is noticeably lower.

Since Wave isn't hamstrung by oversight from a stodgy and risk-averse telecom, I think you should give less weight to examples from countries other than Kenya as a base rate.

Comment by ben_kuhn on Matching-donation fundraisers can be harmfully dishonest · 2016-11-16T04:42:17.611Z · score: 9 (9 votes) · EA · GW

The difference between matching and challenge grants was not statistically significant, actually. More generally, that study's evidence is suggestive at best; it was underpowered (couldn't have distinguished a 30% increase in donations from noise) and didn't correct for multiple (12 in the field, 10 in the lab) hypothesis tests. They also mis-described what a p-value means, which doesn't directly invalidate their results but makes me pretty generally worried.
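To illustrate why the lack of correction matters, here's a minimal sketch (my own illustration under standard assumptions, not the study's analysis): with 12 field tests and 10 lab tests, each run at the conventional 0.05 threshold, the chance of at least one spurious "significant" result is large, and a Bonferroni correction would demand a much stricter per-test threshold.

```python
# Illustration of the multiple-comparisons issue; assumes independent tests and true nulls.
alpha = 0.05
n_tests = 12 + 10  # field + lab hypothesis tests mentioned above

family_wise_error = 1 - (1 - alpha) ** n_tests  # P(at least one false positive)
bonferroni_threshold = alpha / n_tests          # per-test threshold after correction

print(f"Chance of >=1 spurious significant result: {family_wise_error:.0%}")   # ~68%
print(f"Bonferroni-corrected per-test threshold: {bonferroni_threshold:.4f}")  # ~0.0023
```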

Comment by ben_kuhn on Students for High Impact Charity: Review and $10K Grant · 2016-10-08T19:14:15.379Z · score: 2 (2 votes) · EA · GW

Sorry, I'm seeing this late, but this is an area I have some experience in, so I'll ask anyway:

Tee believes that THINK content was written more for people who already know EA.

This was definitely not my impression. For instance, THINK's Resources page includes, as its first workshop, an "introduction to effective altruism," plus other standard intro-to-EA exercises like "guess which charity is actually effective" and intro-level modules about a number of favorite EA causes.

Has Tee spoken to anyone at THINK about why it went dormant to confirm this impression? Or about any of the other numerous failed student-outreach initiatives?

Additionally, THINK was launched at a time when the EA community was smaller. Now that the EA community has dramatically increased in size, it may be easier to attain critical mass.

Why does the size of the EA community matter if SHIC is not associating themselves with the EA community at all?


As someone who ran a student group that was at one point nominally a THINK group: I suspect the real reason THINK failed was that they didn't actually do very much. I think I met up with a THINK representative once or twice and tried one of their modules once, but other than that, they were completely non-proactive. Maybe they helped other groups more, but if not, the default state of college students is to flake out on everything, and it seemed like THINK was not doing much to avoid having their group leaders fall into that failure mode.

Of course, that's just the proximate cause of failure; there were probably deeper underlying reasons. For instance, maybe THINK stopped being proactive because they got discouraged by how little initiative most of their group leaders took. And maybe the group leaders didn't take much initiative because THINK's resources were too focused on more speculative arguments and cause areas (e.g., their sequence of workshops goes right from "charity assessment" to "intelligence amplification").


To get more speculative, my guess is that the core problem of student group outreach is finding motivated group leaders and keeping them motivated, and I haven't noticed any of the student-group-outreach efforts doing particularly well at this. There's some base rate of highly motivated student group leaders popping up basically by reading things on the Internet and deciding to run a student group, and I haven't yet noticed any focused outreach effort improving on that base rate very much. Although of course I might be missing some.

Comment by ben_kuhn on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-29T04:45:50.759Z · score: 1 (1 votes) · EA · GW

Awesome--please put me on the list when those updates start happening :)

The idea that's currently in my head, but not (yet) a policy, is that we to a first approximation only accept unrestricted donations, but that every donor is asked to 'vote' by telling us how, ideally, they would want their donation to be used.

I really like this idea. Hopefully donors are happy with it (I know I personally would be).

Comment by ben_kuhn on 50% every 5 years > 10% every year · 2016-07-24T17:25:48.051Z · score: 2 (2 votes) · EA · GW

Also, three of the largest EA states (CA, NY and MA) have high enough state taxes that itemizing deductions becomes worthwhile at around $100k of income for that alone.
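A rough check of that threshold, under assumed figures (roughly the 2016 single-filer standard deduction and a ballpark effective state income tax rate; actual numbers depend on year, state, and filing status):

```python
# Rough check of when state income tax alone exceeds the standard deduction.
# Both inputs are assumptions for illustration, not tax advice.
standard_deduction = 6_300         # assumed single-filer federal standard deduction (~2016)
effective_state_tax_rate = 0.065   # assumed effective rate in a high-tax state (CA/NY/MA)

breakeven_income = standard_deduction / effective_state_tax_rate
print(f"Itemizing beats the standard deduction above ~${breakeven_income:,.0f} of income")  # ~$97k
```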

Comment by ben_kuhn on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-24T17:18:43.912Z · score: 10 (10 votes) · EA · GW

Awesome! Really great to see the move towards consolidating the many overlapping projects, something that's made me skeptical of a number of them in the past. (Also excited that you'll be more directly involved!) This makes me a lot more excited about CEA.

How will fundraising work under the new structure?

  • Will different projects still fundraise on their own or will CEA fundraise for all projects?
  • Do you expect most donors will earmark their funds for one project or another, or will you try to raise most funding unrestricted?
  • How do you plan to communicate with funders (or other people interest in following CEA's progress) about the status of various projects?
Comment by ben_kuhn on Thoughts about organizing an EAGx Conference · 2016-06-28T02:05:58.936Z · score: 4 (4 votes) · EA · GW

Amazing writeup! What an awesome amount of detail. Wish I could have been there. Thanks so much for the hard work you all put in!

Comment by ben_kuhn on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-25T02:17:35.324Z · score: 3 (3 votes) · EA · GW

Good find. Should have known better than to trust well-established psych findings. (sob) Thanks for the correction, I'll edit the OP.

Comment by ben_kuhn on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-23T20:50:26.360Z · score: 6 (6 votes) · EA · GW

Great point about the cross-cultural validity of depression diagnosis.

For that matter, I'd be awfully concerned about the cross-cultural (or cross-socioeconomic-group!) validity of life-satisfaction measures. Often respondents are asked something like this:

  • Please imagine a ladder with steps numbered from zero at the bottom to 10 at the top.
  • The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you.
  • On which step of the ladder would you say you personally feel you stand at this time? (ladder-present)
  • On which step do you think you will stand about five years from now? (ladder-future)

There are obvious ways in which this question might cause someone to give, say, their life satisfaction as a percentile compared to people around them, rather than an absolutely comparable number, which would bias it up a lot for poor countries.

Comment by ben_kuhn on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-23T20:44:05.226Z · score: 2 (2 votes) · EA · GW

I wouldn't expect people to be able to adapt to severe pain, not when you consider the evolutionary advantages of always taking your hand out of the fire. I'd expect people to die before they got used to pain.

Sorry. Severe pain may have been a bad example. Other high-DALY-weight conditions do seem to show hedonic adaptation though, e.g. paraplegia (see my response to Lila for sources).

Comment by ben_kuhn on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-23T20:41:17.683Z · score: 2 (2 votes) · EA · GW

Sorry. Severe pain may have been a bad example. However, for instance, paraplegia does exhibit hedonic adaptation (source) despite having a disability weight of 0.57 (source).

Comment by ben_kuhn on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-22T18:57:03.757Z · score: 7 (7 votes) · EA · GW

[Has anyone from GiveWell looked into mental health interventions? I couldn't find an intervention report on their website but I'd be interested to know whether they have any informal take on it.]

At first blush this is pretty intriguing, especially the following points:

  • Apparently, people's predictions about how bad depression is compared to, e.g., severe pain are off by a factor of about 10, because hedonic adaptation applies to severe pain but not to depression. This biases DALY burden and cost-effectiveness statistics against mental health interventions.* (EDIT: not sure I buy this anymore; the "established" psych research is more questionable than I thought. See convo with Lila below.)
  • Note that despite this bias, unipolar depressive disorders incur the 9th biggest DALY burden of any disease according to the Global Burden of Disease 2012 update.
  • Most developing countries spend ~nothing on mental health and there is only one large charity working on it.

Other things this makes me wonder:

  • Where (geographically/demographically) is the DALY burden of depression/unhappiness concentrated? This would seem to have strong implications for where work should be focused. E.g., anti-depression smartphone apps developed in the US are unlikely to transfer well to India.
  • What is the actual effect size of CBT run "in the wild" via a scalable delivery mechanism like an app? How much of depression can we expect it to mitigate? Is the main problem to solve here finding a good intervention, or distributing it (i.e. getting people to use the CBT app or whatever)?
  • Have other (non-depression-related) interventions aimed directly at developed-world quality of life been tested? For instance, people notoriously neglect the effect of having a long daily commute on their happiness, and I suspect something similar applies to exercise and to food quality (at least, it does for me).

BTW, one note on the paper: you remark that "[a billionaire] should also run randomised controlled trials to assess how much happiness is increased by anti-poverty and anti-malarial interventions"--in fact, you can establish a lower bound on the happiness increase of anti-malarial interventions because the main mechanism by which they reduce DALY burden (at least in GiveWell's cost-effectiveness analysis) is by reducing mortality. Unlike severe pain, one cannot hedonically adapt to being dead, so anti-malarial interventions (and other mortality-reducing interventions) should have less of the 10x bias than e.g. cash transfers.

*I'm not incredibly confident in this argument; determining the actual quality of life burden here seems like a pretty subtle measurement problem of which I'd love to see a more thorough treatment than the paper provides, since it's really the crux of the quantitative argument.

Comment by ben_kuhn on Against segregating EAs · 2016-01-21T19:07:37.624Z · score: 3 (5 votes) · EA · GW

I'm interested to see people phrasing their arguments in terms of distinguishing how much sacrifice people make.

Personally, I'm sympathetic to distinguishing between how much impact people have, but thinking too hard about who sacrifices the most (except inasmuch as it's correlated with the former) seems like it's against the spirit of EA. It's about how much good you do, not how much you give up to do it!

If you're living on $10k and donating $90k, then donating your marginal $10k is WAY more of a sacrifice than if you're living on $90k and donating $10k. But it doesn't do any more good! I have a lot of respect for people who donate/sacrifice up to that margin, but it's the same kind of respect I have for, like, Wim Hof.*

(Of course, a lot of those people are also doing really important/awesome things, and I have EA-respect for them because of that. But the EA-respect isn't because they live on small amounts of money or spend every waking hour thinking about EA. It's what they actually get done!)

*the man who holds the world record for longest time spent immersed in an ice bath.

Comment by ben_kuhn on [link] GiveWell's 2015 recommendations are out! · 2015-11-21T08:16:47.517Z · score: 2 (2 votes) · EA · GW

I know. I'm surprised it took 8 hours :)

Comment by ben_kuhn on The term "Vegan" needs to evolve · 2015-09-12T20:08:18.697Z · score: 5 (5 votes) · EA · GW

It seems like trying to change the definition of the term "vegan" invites a huge amount of confusion and blowback from people who don't use it in the same way.

Why not just start calling your diet "cruelty-free" instead?