EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
What I learned from working at GiveWell 2021-02-22T06:31:21.960Z
We're Lincoln Quirk & Ben Kuhn from Wave, AMA! 2020-10-27T17:34:21.538Z
Why and how to start a for-profit company serving emerging markets 2019-11-06T01:00:08.621Z
"Why Nations Fail" and the long-termist view of global poverty 2019-07-16T07:59:04.566Z
Has your "EA worldview" changed over time? How and why? 2019-02-23T14:22:30.855Z
Where I gave and why in 2016 2017-01-06T05:10:45.783Z
[link] GiveWell's 2015 recommendations are out! 2015-11-21T02:49:59.680Z
Solving donation coordination problems 2015-05-28T01:04:42.999Z
How important is marginal earning to give? 2015-05-19T20:41:47.098Z
Explaining impact purchases 2015-04-16T00:53:20.126Z
Tech job Q&A 2015-03-19T17:52:17.386Z
Results from a survey of people's views on donation matching 2015-03-01T07:30:55.575Z
[cross-post] Does donation matching work? 2015-01-11T03:49:56.538Z
Where are you giving and why? 2014-12-12T03:32:17.417Z
Spitballing EA career ideas 2014-11-30T00:02:32.521Z
Career choice: Evaluate opportunities, not just fields 2014-09-28T21:37:38.685Z
Brainstorming thread: ideas for large EA funders 2014-09-28T19:15:59.373Z
Lessons from running Harvard Effective Altruism 2014-09-19T04:05:01.573Z
A critique of effective altruism 2013-12-03T01:39:43.000Z
Replaceability in altruism 2013-08-29T04:00:52.000Z
Effective altruists and outsiders 2013-08-05T04:00:57.000Z
Spending on yourself vs. charity 2013-07-05T04:00:59.000Z
Common responses to earning to give 2013-06-10T04:00:05.000Z


Comment by Ben_Kuhn on Announcing "Naming What We Can"! · 2021-04-04T01:24:48.446Z · EA · GW

Looks like if this doesn't work out, I should at least update my surname...

Comment by Ben_Kuhn on My mistakes on the path to impact · 2020-12-09T14:04:47.208Z · EA · GW

I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial disagreement, based on your overall assessment of that particular person's credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.

If you are staking $5m on something, it's hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is "opinions diverge on this but the people I think are smartest tend to believe p." The reason I think this is usually bad is that (a) it's actually impossible to know how much weight it's rational to give someone else's opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.

As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent. The prior is (P(A) = 0.5, P(B) = 0.5). Alice and Bob have observed evidence with a 9:1 odds ratio in favor of A, so think (P(A) = 0.9, P(B) = 0.5). Carol has observed evidence with a 9:1 odds ratio in favor of B. Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob's "view" is much less positive than the rational aggregation of Bob and Carol's.
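The arithmetic in this toy example can be checked directly. A minimal sketch (only the numbers above are from the comment; the helper function and independence assumption are mine) using odds-form Bayesian updating:

```python
def posterior(prior_odds, *likelihood_ratios):
    """Multiply prior odds by each independent likelihood ratio,
    then convert the resulting odds to a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Everyone starts from even prior odds (1:1) on both A and B.
# Alice and Bob each saw 9:1 evidence for A; Carol saw 9:1 for B.
alice = bob = posterior(1, 9) * posterior(1)   # 0.9 * 0.5 = 0.45
carol = posterior(1) * posterior(1, 9)         # 0.5 * 0.9 = 0.45

# Identical top-line views -- but aggregating the underlying evidence
# (assuming the three observations are independent) diverges sharply:
alice_and_bob = posterior(1, 9, 9) * posterior(1)   # ~0.494
bob_and_carol = posterior(1, 9) * posterior(1, 9)   # 0.81
```

So Bob and Carol's evidence jointly supports the grant far more strongly than Alice and Bob's does, even though the three top-line opinions are indistinguishable; only inspecting the sub-claims reveals the difference.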

It's interesting that you mention hierarchical organizations, because I think they usually follow a better process for dividing up epistemic labor: assigning different sub-problems to different people rather than averaging a large number of people's beliefs on a single question. This works better because the sub-problems are more likely to be independent of each other, so they don't require as much communication / model-sharing to aggregate their results.

In fact, when hierarchical organizations do the other thing—"brute force" aggregate others' beliefs in situations of disagreement—it usually indicates an organizational failure. My own experience is that I often see people do something a particular way, even though they disagree with it, because they think that's my preference; but it turns out they had a bad model of my preferences (often because they observed a contextual preference in a different context) and would have been better off using their own judgment.

Comment by Ben_Kuhn on My mistakes on the path to impact · 2020-12-08T01:24:21.548Z · EA · GW

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

Comment by Ben_Kuhn on My mistakes on the path to impact · 2020-12-07T01:09:39.618Z · EA · GW

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giving other moral systems weight "because other smart people believe them" rather than because they seem object-level reasonable
  • Lots of emphasis on avoiding accidentally doing harm by being uninformed
  • People bring up "intelligent people disagree with this" as a reason against something rather than going through the object-level arguments

Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink and herding.

In retrospect, it's not surprising that this has ended up with numerous people being scarred and seriously demoralized by applying for massively oversubscribed EA jobs.

I guess it's ironic that 80,000 Hours—one of the most frequent repeaters of the "don't accidentally cause harm" meme—seems to have accidentally caused you quite a bit of harm with this advice (and/or its misinterpretations being repeated by others)!

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-11-16T00:08:33.256Z · EA · GW

I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:

Not to mention various high-impact roles at companies that don't involve formal management at all.

If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"—with a menu of options/archetypes that lean on different skillsets—then you're more likely to end up with people optimizing for the right thing, as best they know how.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-11-07T22:37:32.419Z · EA · GW

I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.

Similarly, management is also not one-dimensional: different management roles need different skill sets which overlap with individual-contributor roles in different ways. Not to mention various high-impact roles at companies that don't involve formal management at all. So I think my tl;dr answer would be "you should try to figure out how your current highest performers on various axes can have more leveraged impact on your company, which is often some flavor of management, but it depends a lot on the people and roles involved."

For example, take engineering at Wave. Our teams are actually organized in such a way that most engineers are on a team led by (i.e. whose task queue is prioritized by) a product manager. Each engineer also has an engineering mentor who gives them feedback, conducts 1:1s with them, contributes to their performance reviews, etc.

Product managers don't have to be technical at all, and some of the best ones aren't, but some of the best engineers also move laterally into product management because the ways in which they are good engineers overlap a lot with that role. For engineering mentors, they usually need to be more technically skilled than their mentees, but they don't necessarily have to be the best engineers in the company; skill at teaching and resonance with the role of mentor is more important.

We also have a "platform" team which works on engineer-facing tooling and infrastructure. Currently, I'm leading this team, but in the end state I expect it to have a more traditional engineering manager. For this person, some dimensions of engineering competence will be quite important, others won't, and they'll need extra skills that are not nearly as important to individual contributors (prioritization, communication, organization...). I expect they would probably be one of our "best performers" by some metrics, but not by others.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-30T12:34:46.241Z · EA · GW

I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:

  1. We've found our bimonthly in-person "offsites" to be extremely important. For new hires, I often see their happiness and productivity increase a lot after their first retreat because it becomes easier and more fun for them to work with their coworkers.
  2. Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in "hallway" conversations.
  3. We try to make it really easy for people to upgrade conversations to video calls, both by frequently encouraging them to do so, and by making sure that every new hire has a "get to know you" call with as many coworkers as possible in their first few weeks.

(Your mileage may vary with these, of course! In particular, one relevant difference between Wave and other remote organizations is that I think Wave leans more heavily on "synchronous" calls relative to "asynchronous" Slack/email messages. This is important for us since 80%+ of us speak English as a third-plus language—it's easier to clear up misunderstandings on a call!)

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T20:28:33.628Z · EA · GW

Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T15:57:05.284Z · EA · GW

2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a monopoly state-run telecom that provides among the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it's not terrible.

It is a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while I was in Senegal. But those are mostly pathologically un-optimized blogs—e.g., their page weight was larger than the page weight of the web-based IDE (Glitch) that I used to write the proxy.

3. Network latency has been a major bottleneck for our programming; for instance, we wrote a custom UDP-based transport layer protocol to speed up our app because TCP handshakes were too slow (I gave a talk on this if you're curious). We also adopted GraphQL relatively early in part because it helped us reduce request/response sizes and number of roundtrips.
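The latency cost of extra round trips is easy to quantify. Here's a hedged back-of-the-envelope sketch—the RTT figure and handshake counts are my illustrative assumptions, not Wave's measured numbers or the actual design of their protocol:

```python
# Time to first response is roughly (number of round trips) * RTT.
RTT = 0.6  # seconds; illustrative figure for a congested mobile link

def time_to_response(round_trips, rtt=RTT):
    return round_trips * rtt

# Fresh TCP + TLS 1.2 connection: TCP handshake (1 RTT)
# + TLS handshake (2 RTTs) + request/response (1 RTT).
tcp_tls = time_to_response(1 + 2 + 1)   # 2.4 s

# A UDP-based transport that resumes sessions with 0-RTT can carry
# the request in its first packet:
udp_0rtt = time_to_response(1)          # 0.6 s
```

On a high-latency link, cutting three round trips per request is the difference between an app that feels broken and one that feels responsive; QUIC/HTTP/3 attacks the same problem.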

On the UX design side, a major obstacle is that many of our users aren't particularly literate (let alone tech-literate). For instance, we often communicate with users via (in-app) voice recordings instead of the more traditional text announcements. More generally, it's a strong forcing function to keep our app simple so that the UI can be easily memorized and reading is as optional as possible. It also pushes us towards having more in-person touch points with our users—for instance, agents often help new users download the app and learn how to use it, and pre-COVID we had large teams of distributors who would go to busy markets and sign people up for the app in person.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T15:07:38.779Z · EA · GW

The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of similar sizes that I know of.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-29T14:46:24.116Z · EA · GW

On EA providing for-profit funding: hard to say. Considerations against:

  • Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn't have affected our fundraising very much (not sure how much this generalizes to other companies)
  • At later stages, this is very capital-intensive, so probably wouldn't make sense except as a thing for eg Open Phil to do with its endowment
  • Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that's not particularly compatible with typical EA epistemology. (Notably, Wave gets most of this trait from Drew, the CEO, who, while value-aligned with EA, finds it hard to engage with standard EA-style reasoning for this reason.)

Considerations in favor:

  • Helps keep the company controlled by value-aligned people (not sure how important this is, I think the founders of Wave will end up retaining full control)
  • If the companies are good, it doesn't actually cost anything except tying up capital for a while

Overall, I think it could make sense at early stages, where people matter more and metrics matter less (and capital goes further), but even at early stages there's probably much more of a talent constraint than a funding constraint.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-28T17:01:07.188Z · EA · GW

Cool! With the understanding that these aren't your opinions, I'm going to engage with them anyway bc I think they're interesting. I think for all four of these I agree that they directionally push toward for-profits being less good, but that people overestimate the magnitude of the effect.

For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportunities that are similar in how promising they are

Despite the built-in incentives, I think "which companies get built" is still pretty contingent and random based on which people try to do things. For instance, it's been obvious since ~2012 that M-Pesa had an amazing business in Kenya, but it still hasn't had equally successful copycats, let alone people trying to improve on it, in other countries. If the market were really efficient here, I think something like Wave would be 4+ years further along in its trajectory.

The specific cause areas that the EA movement currently sees as the most promising - including global poverty and health, animal welfare, and the longterm future - all serve recipients who (to different degrees) are incapable of significantly funding such work

Similarly, this is directionally correct but easy to overweight—there are still for-profit companies working in all of these spaces that seem likely to have very large impacts (Wave, Impossible Foods, Beyond Meat, SpaceX, OpenAI...)

For-profit organizations may produce incentives that make it unlikely to make the decisions that will end up producing enormous impact (in the EA sense of that term).

This is definitely a risk, and something that we worry about at Wave. That said:

  1. In many cases, revenue/growth and impact are highly correlated. In the examples I can think of where they aren't, it mostly involves monopolies doing anticompetitive or user-hostile things.
  2. In the monopoly case, many monopolies seem to have wide freedom of action and are still controlled by founders (e.g. Google, Facebook), and their decisions are often driven as much by internal dynamics as external incentives. I'm uncertain here, but it seems likely that if these companies thought more like EAs, they would produce more impact.

Finally, I've also heard from several people the claim that today EA has an immense amount of funding, and if you're a competent person founding a charity that works according to EA principles it is incredibly easy to get non-trivial amounts of funding

I think "nontrivial" for a nonprofit is trivial for a successful for-profit :) Wave has raised tens of millions of dollars in equity and hundreds of millions in debt, and we're likely to raise 10x+ more in success cases. We definitely could not have raised nearly this much as a nonprofit. Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-28T14:05:56.553Z · EA · GW

Hmm. This argument seems like it only works if there are no market failures (i.e. if every idea lets its implementer capture a decent fraction of the value created), and it seems like most nonprofits address some sort of market failure? (e.g. "people do not understand the benefits of vitamin-fortified food," "vaccination has strong positive externalities"...)

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-28T14:00:57.974Z · EA · GW

I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven't read it yet :)

Can you elaborate on the "various reasons" that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven't run across these arguments.

Comment by Ben_Kuhn on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-28T13:35:00.907Z · EA · GW

Great questions!

What are common failure cases/traps to avoid

I don't know about "most common" as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.

How much should I be directly coding vs "architecting" vs process management

Related to the above, you should never be coding anything that's even remotely urgent (because it'll distract you too much from non-coding problems). For the first while, you should probably try not to code at all because learning how not to suck as a manager will be more than full-time. Later, it's reasonable to work in "important but not urgent" stuff in your slack time, as long as you have the discipline not to get distracted by it.

Architecting vs process management depends on what your problems are, what kind of leader you want to be and what you can delegate to other people.

How do I approach hiring?

If you are hiring, hiring is your #1 priority and you should spend as much time and attention on it as is practical. Hiring better people has a magical way of solving many of your other problems.

Hiring can also be really demoralizing (because you are constantly rejecting people and/or being rejected), so it's hard to have the conviction to put more effort into it until you've seen firsthand how much of a difference it makes.

For me, the biggest hiring improvement was getting our final interview to a point where I was quite confident that anyone who passed it would be a good engineer at Wave. This took many iterations, but lowering the risk of a bad hire meant that (a) I wasn't distracted by stressing out about tricky hire/no-hire decisions, (b) we could indiscriminately put people through our hiring funnel and trust that the process would come to a reasonable verdict. After this change, our 10th-percentile hire has been about as good as our 50th-percentile hire previously, and we went from 4 engineers to 25 in a bit over a year.

I expect the exact same thing goes for investing in people once you've hired them, but I'm not as good at that yet so don't have concrete advice.

Just generally, what would you have imparted on past-you?

  1. You suck at hiring, get better.
  2. If you're worried that someone is sad about something (especially something you did), ask them!
  3. Org structure matters a lot; friction, bad execution, etc. is often downstream of a bad division of responsibility between teams, teams having the wrong goals, etc. (Matters more once you are responsible for multiple teams)
  4. Accept that you hate telling people what to do, and manage in such a way that you don't have to. (Perhaps specific to me.)
  5. Hiring.

Comment by Ben_Kuhn on Weird Wealth Creation ideas - Mobile Money · 2020-10-26T13:46:45.174Z · EA · GW

Sorry for the minimalist website :) A couple clarifications:

  • We indeed split our businesses into Sendwave (international money transfer) and Wave (mobile money); the website you found is for the latter.
  • The latter currently operates only in Senegal and Cote d'Ivoire (stay tuned though).
  • In addition to charging no fees for deposits or withdrawals, we charge a flat 1% to send. All in, I believe we're about 80% cheaper than Orange Money for typical transaction sizes.
  • We don't provide services to Orange—if you saw the logo on the website it's just because we let our customers use their Wave balance to purchase Orange airtime.

For the focus of this concept, I am more concerned with providing Mobile Money from the most relevant and fair company available (whoever that is) to areas and people that so far did not have that service, rather than promoting movements from one company to the other which might be more efficient but will have a much smaller effect in poverty reduction. 

This is our goal as well; to quote myself in another comment:

Despite the fact that M-Pesa started in 2008, mobile money in most other countries in sub-Saharan Africa is kind of crap by comparison (much more expensive, worse service, smaller agent network, etc.) because most telecoms have not even been able to copycat M-Pesa effectively. By executing better, you can speed up the adoption of mobile money.

Even Orange (which is fairly widespread in Senegal) has only gotten 25% of their own userbase onto mobile money (source) because they, like most mobile money systems, are executing really badly compared to what's possible. There is a lot of room to make mobile money more accessible even in countries with already-existing mobile money. (Which at this point is nearly all countries AFAIK—it's easy for a telecom to buy an off-the-shelf mobile money service from something like Ericsson or Huawei—much harder for them to actually execute well on rolling it out.)

Comment by Ben_Kuhn on Weird Wealth Creation ideas - Mobile Money · 2020-10-26T13:37:46.847Z · EA · GW

Hey Marc, cool that you're thinking about this!

I work for Wave; we build mobile money systems in Senegal, Cote d'Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave's experience:

Interventions 1-2 (creating accounts): I think for most people that don't use mobile money, in countries where mobile money is available, "not having an account" is not the main blocker. It's more likely to be something like

  • They don't live near enough to an agent
  • Mobile money charges fees that are too high given the typical amounts the person wants to send
  • They don't trust the service
  • They don't trust the agent they live near
  • They can't read, so it's hard for them to use the app (they would have to memorize the UI)

Intervention 3 (lower restrictions): In countries that don't have a way of using mobile money without an ID, that's an extremely valuable thing to advocate for. Also, at least for Wave, our users constantly ask for higher transaction limits than the central bank allows us to give them. Both of these policies are probably at least somewhat based on FUD spread by established players (banks?) that don't want mobile money to succeed. However, you're probably right that mobile money companies already have the best incentive to accomplish this change; it's also hard to get the ear of a central bank as a random foreigner. But there may be something interesting in this space.

Intervention 4 (accounts for ID-less people): this is interesting, although I believe that at least in WAEMU, it's already possible to use mobile money without an ID with low transaction limits (you can receive at most ~$400/mo). Still, a lot of people want to send/receive more than that, and helping people with paperwork to get a replacement ID is likely to be very helpful in other ways too :)

Intervention 5 (starting agencies): In Wave's experience, better access to agents is the #1 driver of mobile money growth (at least until a system is so big it hits geographic saturation). Most mobile money systems also end up working with third-party providers of agent services because they don't have the organizational capacity to manage a huge number of agents themselves. There's probably room for an org that's a third-party agent network focused on the poorest areas in a given country, which would otherwise be last on the mobile money system's priority list for expansion.

Intervention 6 (more research): We've found good research on other mobile money systems to be hard to come by, but incredibly useful, even just basics like "here is how M-Pesa expanded over time" or "here are some statistics on ZAAD" (these help us a lot with our own expansion strategy). Although the type of research we want is probably somewhat different from the type of research that would be most useful to other consumers of mobile money research.

I would also add another:

Intervention 7—build a better mobile money system: 

  1. Despite the fact that M-Pesa started in 2008, mobile money in most other countries in sub-Saharan Africa is kind of crap by comparison (much more expensive, worse service, smaller agent network, etc.) because most telecoms have not even been able to copycat M-Pesa effectively. By executing better, you can speed up the adoption of mobile money.
  2. Mobile money systems have network effects, meaning that it is somewhat path-dependent which one "wins the market" in a country. Most current mobile money systems that win are the ones offered by monopoly telecoms, so they end up both charging a lot themselves, and also entrenching the telecom's monopoly. If you were to, say, start an EA mobile money system that wasn't telco-affiliated, and preferred to lower prices rather than raise them at scale, you could generate a lot more surplus.

If anyone is excited about that, Wave is hiring for many roles, especially engineers—you can contact me here or at :)

Comment by Ben_Kuhn on The case for investing to give later · 2020-07-05T12:29:06.020Z · EA · GW

Some of your "conservative" parameter estimates are surprising to me.

For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.

You also wrote

we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing

but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.

Comment by Ben_Kuhn on CEA's Plans for 2020 · 2020-04-24T00:26:30.226Z · EA · GW

I'm looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!

I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.

This is weakly held since I don't have any context on what's going on internally with CEA right now.

That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, strategy), and 20% (3.3, 4.1-5) involve achieving concrete outcomes that affect things outside of CEA. The report on progress from last year also emphasized internal process improvements rather than external outcomes.

Of course, it makes sense that after a period of rapid leadership churn, it's necessary to devote some time to rebuilding and improving the organization. And if you don't have a strategy yet, I suppose it makes sense to put "develop a strategy" as your top goal and not to have very many other concrete action items.

As a bystander, though, I'll be way more excited to read about whatever you end up deciding your strategy is than about the management improvements that currently seem to be absorbing the bulk of CEA's focus.

Comment by Ben_Kuhn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-29T23:32:56.256Z · EA · GW

Hmm. You're betting based on whether the fatalities exceed the mean of Justin's implied prior, but the prior is really heavy-tailed, so it's not actually clear that your bet is positive EV for him. (e.g., "1:1 odds that you're off by an order of magnitude" would be a terrible bet for Justin because he has 2/3 credence that there will be no pandemic at all).

Justin's credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attachment process. If (roughly, I think) the median of this distribution is 1/10 of the mean, then this bet is negative EV for Justin despite seeming generous.

In the future you could avoid this trickiness by writing a contract whose payoff is proportional to the number of deaths, rather than binary :)
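To illustrate the heavy-tail point numerically, here's a quick sketch. The lognormal prior below is made up for illustration (chosen so the median is about 1/10 of the mean, per the comment above), not Justin's actual distribution:

```python
import math
import random

random.seed(0)

# Hypothetical heavy-tailed prior over deaths given a pandemic: a lognormal
# whose median is ~1/10 of its mean. For a lognormal, mean/median
# = exp(sigma^2 / 2), so sigma = sqrt(2 * ln(10)).
sigma = math.sqrt(2 * math.log(10))
deaths = [random.lognormvariate(0.0, sigma) for _ in range(200_000)]
mean_deaths = sum(deaths) / len(deaths)

# A 1:1 binary bet that deaths exceed the prior mean: win $1 if they do,
# lose $1 otherwise.
p_win = sum(d > mean_deaths for d in deaths) / len(deaths)
binary_ev = p_win - (1 - p_win)

# Most of the probability mass sits below the mean, so the "fair-looking"
# 1:1 bet is substantially negative EV for the person taking the "over" side.
print(f"P(deaths > mean) ~ {p_win:.2f}, EV of 1:1 binary bet ~ {binary_ev:.2f}")
```

A contract paying out proportionally to the death count avoids this problem, since its expected payout depends only on the mean, not on where the mean sits relative to the median.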

Comment by Ben_Kuhn on [deleted post] 2020-01-04T00:06:36.026Z

Oops. I searched for the title of the link before posting, but didn't read the titles carefully enough to find duplicates that edited the title. Should have put more weight on my prior that this would already have been posted :)

Comment by Ben_Kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-15T22:41:07.864Z · EA · GW

I'm guessing that they assumed we were exaggerating the numbers in order to make them more interested in working with us. The fact that you're so ready to call anyone who lies about user numbers a "scammer" may itself be part of the cultural difference here :)

Comment by Ben_Kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-11T14:27:19.289Z · EA · GW

Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):

  • Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
  • When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
  • In the US, we have fully transparent salaries (everyone at the company can look up anyone else's salary in a spreadsheet). We weren't able to extend this norm to our Senegalese subsidiary because it caused too much interpersonal conflict. (This was at least partly the result of us not putting enough investment into making the salary scale work for everyone, but my understanding is that my Senegalese coworkers were pessimistic about bringing back salary transparency even if we fixed that.)
  • In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who I would ask yes-or-no questions and they would answer "Yes" followed by an explanation of why the answer is no.)

Exporting different norms is quite hard at scale. You need to hire people who are the closest to the norms that you want, but they'll still probably be far away, so you'll also have to invest a lot in propagating the norms you want, which only really works well 1-on-1. When we needed to scale our local Senegal team quickly, we ended up having to compromise on some norms to do so (e.g. salary transparency, amount of paperwork).

Comment by Ben_Kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-07T20:29:53.862Z · EA · GW

Broadly agree, but:

You might end up making more impact if you started a startup in your own country, and just earned-to-give your earnings to GiveWell / EA organizations. This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.

Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously didn't have smartphones. Providing people cell service is hard (if you're not a telecom), but if an area has cell service but no internet you can still make useful information products with USSD, SMS, etc., or physical shops.

(I do think that many good startup ideas in the developing world involve providing relatively "basic" needs! But it seems to me like there's decent opportunity there.)

Comment by Ben_Kuhn on Why and how to start a for-profit company serving emerging markets · 2019-11-06T11:32:19.158Z · EA · GW

Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!

Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.

Comment by Ben_Kuhn on The Future of Earning to Give · 2019-10-14T00:09:32.292Z · EA · GW
I imagine that there is a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent.

The "one charity" argument is only true on the margin. It would be incorrect to conclude from this that nobody should start additional charities—for instance, even though GiveWell's current highest-priority gap is AMF, I'm still glad that Malaria Consortium exists so that it could absorb $25m from them earlier this year. Similarly, it's incorrect to conclude from this style of argument that the social returns to talent should be concentrated in specific fields. While there may be a small number of "most important tasks" on the margin, the EA community is now big enough that we might expect to see margins changing over time.

Also, the majority of people who are earning to give would probably be able to fund less than one person doing direct work. If that direct work would be mostly non-replaceable, then earning to give compares unfavorably to doing direct work yourself. (Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.)

Comment by Ben_Kuhn on Long-term Donation Bunching? · 2019-09-27T15:44:28.261Z · EA · GW

If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?

I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.

Comment by Ben_Kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-23T08:57:48.517Z · EA · GW

Whoops, sorry about the quotes--I was writing quickly and intended them to denote that I was using "solve" in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.

Comment by Ben_Kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-23T00:32:43.265Z · EA · GW

These theoretical claims seem quite weak/incomplete.

  • In practice, autocrats' time horizons are highly finite, so I don't think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
  • All your suggestions about oligarchy improving the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy's interests. You haven't made any case that the important instances of these problems are in an oligarchy's interests to solve, and it doesn't seem likely to me.
Comment by Ben_Kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-18T22:46:57.259Z · EA · GW

What's the shift you think it would imply in animal advocacy?

Comment by Ben_Kuhn on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-17T10:05:11.512Z · EA · GW

I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!

Comment by Ben_Kuhn on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-17T11:29:45.889Z · EA · GW

Yikes; this is pretty concerning data. Great find!

I'd be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their "realistic calculation" of their cost effectiveness, which assumes 5% annualized attrition. (That's not an apples to apples comparison, so their estimate isn't necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
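For reference, here's what a 5% annualized attrition assumption implies about pledge retention over time (a quick back-of-envelope, not GWWC's or the survey's actual methodology):

```python
# Under 5% annualized attrition, the share of pledge-takers still keeping
# the pledge after n years is 0.95**n.
for years in (1, 5, 10):
    retention = 0.95 ** years
    print(f"after {years:>2} years: {retention:.0%} retained")
```

So GWWC's assumption implies roughly 77% retention at 5 years and 60% at 10; comparing those figures against the survey's observed rates is the relevant (if imperfect) check.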

Comment by Ben_Kuhn on Please use art to convey EA! · 2019-05-26T03:02:00.624Z · EA · GW

I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I'd be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.

For instance, a lot of today's fiction seems cynical and pessimistic about human nature; the characters frequently don't seem to have goals related to anything other than their immediate social environment; and they often don't pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.

Comment by Ben_Kuhn on Structure EA organizations as WSDNs? · 2019-05-13T02:50:29.767Z · EA · GW
worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership

This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I'm very skeptical of ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?

Comment by Ben_Kuhn on Is preventing child abuse a plausible Cause X? · 2019-05-07T12:21:20.124Z · EA · GW

(PS: if you're interested in posting but unsure about content, I'd be excited to help answer any q's or read a draft! My email is in my profile.)

Comment by Ben_Kuhn on Is EA unscalable central planning? · 2019-05-07T12:07:52.083Z · EA · GW

What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from 0.01% of the population!

In general, we should do things that don't scale but are optimal right now, rather than things that do scale but aren't optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.

Comment by Ben_Kuhn on Is preventing child abuse a plausible Cause X? · 2019-05-05T18:08:14.694Z · EA · GW

I would be extremely interested if you were to hypothetically write an "intro to child protection/welfare for EAs" post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment show that other people agree :)

Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.

"Cause X" usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I'd expect a cause which the EA community decided was "cause X" to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person's take.)

Comment by Ben_Kuhn on Does climate change deserve more attention within EA? · 2019-04-17T20:53:21.427Z · EA · GW

While climate change doesn't immediately appear to be neglected, it seems possible that many people/orgs "working on climate change" aren't doing so particularly effectively.

Historically, it seems like the environmental movement has an extremely poor track record at applying an "optimizing mindset" to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest problem (agriculture).

Of course, I have no idea how much this consideration increases the "effective neglectedness" of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only on par with global health rather than massively less neglected, as you might guess from news coverage?

Comment by Ben_Kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T13:27:42.442Z · EA · GW

If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.

This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't know about, but I'm curious if you (or someone from CEA) knows what they are?

[Not trying to imply that CEA is failing to optimize here or anything—I'm mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
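Spelling out the arithmetic above: the $85/hour follows directly from a $170k person-year over 2,000 hours, while the $1,000–$2,000 marginal cost per grant is my assumed reading of the figure from the parent thread, not a number CEA has confirmed:

```python
# Back-of-envelope for the grant-overhead estimate. $85/hour follows from
# $170,000/person-year over 2,000 hours; the $1,000-$2,000 marginal cost
# per grant is an assumed figure for illustration.
hourly = 170_000 / 2_000
print(f"implied staff time value: ${hourly:.0f}/hour")

for marginal_cost in (1_000, 2_000):
    hours = marginal_cost / hourly
    print(f"${marginal_cost:,}/grant -> ~{hours:.0f} person-hours")
```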

Comment by Ben_Kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T09:45:18.611Z · EA · GW

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.

If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:

  • Apply even more principle of charity than usual
  • Take time to phrase your question in the way that's easiest to answer
  • Apply some filter and don't ask unimportant questions
  • Use a tone that minimizes stress for the person you're questioning
Comment by Ben_Kuhn on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T23:28:15.675Z · EA · GW

Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?

Comment by Ben_Kuhn on My new article on EA and the systemic change objection · 2019-04-07T23:44:42.310Z · EA · GW

This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that "social movements are the [or at least a] key driver of change in human history." It seems perverse to assume otherwise on a forum whose entire point is to help the progress of a social movement that claims to e.g. help participants have 100x more positive impact in the world.

More generally, it's true that your chance of convincing "constitutionally disinclined" people with two papers is low. But your chance is zero of convincing anyone with either (1) a bare assertion that there's some good stuff there somewhere, or (2) the claim that they will understand you after spending 20 hours reading some very long books.

Also, I think your chance of convincing non-constitutionally-disinclined people with the right two papers is higher than you think. Although you're correct that two papers directly arguing "you should use paradigm x instead of paradigm y" may not be super helpful, two pointers to "here are some interesting conclusions that you'll come to if you apply paradigm x" can easily be enough to pique someone's interest.

Comment by Ben_Kuhn on EA is vetting-constrained · 2019-03-09T14:40:55.040Z · EA · GW

I'm very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which it seems like has been involved in most of the biggest initiatives to scale out EA's vetting, through EA Grants and EA Funds).

  • What % of grant applicants are in the "definitely good enough" vs "definitely (or reasonably confidently) not good enough" vs "uncertain + not enough time/expertise to evaluate" buckets?
  • (Are these the right buckets to be looking at?)
  • What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
  • Do you have any upcoming plans to address them?

Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of "less established" organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.

Comment by Ben_Kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T09:26:26.458Z · EA · GW
It seems easier to increase the efficiency of your work than the quality.

In software engineering, I've found the exact opposite. It's relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I'm using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.

(Of course there are many reasons why other types of work might be different from software eng!)

Comment by Ben_Kuhn on Review of Education Interventions and Charities in Sub-Saharan Africa · 2019-02-27T01:25:05.938Z · EA · GW

In addition to Khorton's points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but for transparency/replicability of reasoning according to certain standards of evidence. If your donors are willing to be "highly engaged" or trust you a lot, or if they have different epistemics from GiveWell (e.g., if they put relatively more weight on models of root-level causes of poverty/underdevelopment, compared to RCTs), I bet there's something else out there that they would think is higher expected value.

Of course, finding and vetting that thing is still a problem, so it's possible that the thoroughness and quality of GW's research outweighs these points, but it's worth considering.

Comment by Ben_Kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T01:00:10.930Z · EA · GW

This is why I think Wave's two-work-test approach is useful; even if someone "looks good on paper" and makes it through the early filters, it's often immediately obvious from even a small work sample that they won't be at the top of the applicant pool, so there's no need for the larger sample.

Comment by Ben_Kuhn on My new article on EA and the systemic change objection · 2019-02-27T00:51:25.017Z · EA · GW

Downvoted for not being at least two of true, necessary or kind. If you're going to be snide, I think you should do a much better job of defending your claims rather than merely gesturing at a vague appeal to "holistic and historically extended nature."

You've left zero pointers to the justifications for your beliefs that could be followed by a good-faith interlocutor in under ~20h of reading. Nor have you made an actual case for why a 20-hour investment is required for someone to even be qualified to dismiss the field (an incredible claim given the number of scholars who are willing to engage with arguments based on far less than 20 hours of background reading).

Your comment could be rewritten mutatis mutandis with "scientology" instead of "social movement studies," with practically no change to the argument structure. I think an argument for why a field is worth looking into should strive for more rigor and fewer vaguely insulting pot-shots.

(EDIT: ps, I'm not the downvoter on your other two responses. Wish they'd explained.)

Comment by Ben_Kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T00:17:12.105Z · EA · GW
1. Un-timed work test (e.g. OPP research analyst)

Huh. I'm really surprised that they find this useful. One of the main ways that Wave employees' productivity has varied is in how quickly they can accomplish a task at a given level of quality, which varies by an order of magnitude between our best and worst candidates. (Or equivalently, how good of a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.

Comment by Ben_Kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T00:07:12.475Z · EA · GW

It's much more understandable to me for the grants to have labor-intensive processes, since they can't fire bad performers later so the effective commitment they're making is much higher. (A proposal that takes weeks to write is still a questionable format IMO in terms of information density/ease of evaluation, but I don't know much about grant-making, so this is weakly held.)

Comment by Ben_Kuhn on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T12:25:43.570Z · EA · GW

I'm sorry to see so many orgs take 10+ hours to get you only partway through the process, let alone multiple 40+ hour processes. This is especially glaring compared to the very low number of orgs that rejected you in under 5 hours.

It sounds like many of these orgs would benefit (both you and themselves!) from improving their evaluations to reject people earlier in the process.

At Wave, my team's current technical interview process takes under 10 hours over 4 stages (assuming you spend 1 hour on your cover letter and resume); the majority of rejections happen after less than 5 hours. The non-technical interview process is somewhat longer, but I would guess still not more than 15 hours, with the majority of applications being rejected in under 5 hours (the final interview is a full day).

Notably, we do two work samples, a 2hr one (where most applicants are rejected) and a 4-5hr one for the final interview. If I were interviewing for a non-technical role I'd insert a behavioral interview after the first work sample as well. These shorter interviews help us screen out many candidates before we waste a ton of their time. It's hard for me to imagine needing 8+ hours for a work sample unless the role is extremely complex and requires many different skills.