Brendan Eappen: Lessons from an EA-Aligned Charity Startup — post by EA Global Transcripts (The Centre for Effective Altruism) · 2020-02-06
When GiveWell wrote that they were looking for charities to work on micronutrient fortification, Fortify Health rose to the challenge. With help from a $300,000 GiveWell grant, they began to work on wheat flour fortification, hoping to reduce India’s rate of iron deficiency. In this talk, co-founder Brendan Eappen discusses the charity’s story and crucial decisions they faced along the way. He also offers advice to members of the effective altruism community interested in pursuing similar work in the field of global development.
Below is a transcript of Brendan’s talk, which has been lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
I want to talk about what we're doing at Fortify Health, and then, more broadly, about some of the central tensions [I’ve experienced] as someone who started an effective altruism [EA]-aligned charity startup in a world of other global health actors.
Our goal at Fortify Health is to improve population health by addressing widespread iron-deficiency anemia and neural tube defects in India. We're doing that through fortification (i.e., adding vitamins and minerals like iron, folic acid, and vitamin B12 to the foods that people already eat).
The main problem we address is anemia. Half of the women and children in India suffer from anemia. It is generally characterized by the blood's inability to carry enough oxygen to the brain and muscles, which leads to fatigue, exhaustion, stunted cognitive development, economic loss, pregnancy complications, and other problems.
Neural tube defects are the most common birth defect in India. Nearly four out of every one thousand children born have this defect. Essentially, it is a malformation of the spine or brain, which can lead to severe physical and mental impairment. It is most often caused by folic acid deficiency in the first month of pregnancy — in some cases, before people even know they're pregnant.
Fortification, or adding vitamins and minerals to food, is an evidence-based, cost-effective strategy to prevent these problems. But why did we start working on fortification?
Here is some backstory: As someone who has been interested in effective altruism for quite a while, I was excited to see Charity Science Health launch. They looked at a number of interventions that they thought members of the EA community could successfully deploy as new startups. They looked at GiveWell's “Charities we’d like to see” blog post. They took suggestions from around the community and came up with their top five that they thought a non-expert could implement.
Charity Science Health had been launched in-house [at GiveWell] to send text messages reminding new moms to keep their kids healthy and safe by vaccinating them. We, as you may have guessed, started Fortify Health as part of the iron and folic acid fortification group.
One of the central questions that we had to consider was whether young, foreign, non-experts could responsibly have an impact. This was not a question we took lightly. The EA movement is very enthusiastic. There are a lot of resources and young people who are trying to do good. But the question is: Is it always good to deploy [an intervention] without the relevant expertise?
I'm curious if anyone in the audience has any insight into what some key concerns would be for someone like me — who doesn't necessarily have a background in the area that we're working in, the geographic region, or a wealth of organizational experience — to get involved.
Audience member: Why launch a startup when there are other people who are already established [in the region]?
Brendan: Absolutely. Do we need to do our own thing when there's already an infrastructure, a framework, a wealth of expertise, and some true experts [working on this problem]? Why not work together? I'll touch on this.
Audience member: You could be doing more harm than good because you’re actually displacing state activity.
Brendan: Absolutely. We would be devastated to find out that we jumped into this in a flaky or transient way. Other actors within the country — even government actors — may have decided not to work on these problems. Or maybe they would bring a different wealth of expertise, resources, power, and credibility — and perhaps a better sense of the best approaches to these issues.
Audience member: And what's the sustainability of your approach?
Brendan: Right. What happens when we go away, or when the effective altruism movement changes its mind about the most important priorities to fund? Are we able to sustain the work we're doing? Is someone else able to sustain that work? That is a really important question.
I'll add a few more things. [Let me address] the consideration of whether to join the existing infrastructure and the experts who are already aligned on these issues, rather than [starting a venture] that could perhaps antagonize them or [drain] their resources. That is a difficult question. A lot of EAs would think that the counterfactual value lies in creating something new that wouldn't be done otherwise.
But that relies on an assumption that existing organizations couldn't use you or the resources you could bring to the table — and that the kind of work you're doing in isolation is adding value in a way that won’t hurt other organizations. I think that’s an assumption that needs to be tested on a case-by-case basis, and with great humility.
Another related concern is gauging neglect. Is there really a gap? If there are great organizations already working on a problem, do we really need one more? And if one more organization could add value, [who should run it]? After all, I'm not an expert. Couldn't we find someone who could start an organization [more effectively] than we could? If those people are already busy doing other good work, are we good enough?
Also, what does the world look like when we start Fortify Health? What does our launch mean for the pool of resources that are going toward [the issue of fortification] and others like it? Could we potentially cause harm in some way to the movement, to the particular branding of this intervention, to the other actors, or to the ability of government to invest in these kinds of interventions? Could we even cause direct harm through a short-sighted or superficial approach to the intervention itself? Could we implement something that hurts people?
Then, from our perspective, could we [gain skills]? Could we, as individuals, become better able to have an impact on the world if we took on this project?
Resolving these key considerations was nontrivial. We sought guidance within and beyond the effective altruism community, which has a number of interesting ideas about the moral [implications] of this kind of work. We asked people who are very critical of these moral frameworks whether it made sense for us to get involved. We talked to experts within the fortification space, as well as people who have related expertise in nutrition or public health and maybe don’t think that fortification is the best solution. And we talked to other people who have a sense of our competencies and could help us gauge whether we were the right people to try to do something like this — or what we would need to do in order to become the right people to do it. We assessed ourselves to understand what a team would look like that complements our strengths and weaknesses.
We realized that, to some extent, we could rely on external evaluation to support these kinds of judgments. We considered: Are other organizations willing to put skin in the game? And could we get the kind of funding that would validate this effort? We knew that if we applied for a GiveWell Incubation Grant, they would conduct a thorough review of the team as well as the intervention.
We also thought about alternative career paths. What would we be doing otherwise? If we didn't work on this project, what would it look like to work in a great organization that already existed, or in support of a government project [that could serve as an] advisory and support system and help us build expertise?
(I want to specifically recognize Charity Science Health, which encouraged us to take on the project initially, gave us the seed funding to get started, and then provided mentorship that continues to this day, but has taken different forms.)
So how do you actually start? That is a nontrivial question.
We took some early steps to gain expertise, asking: What is fortification? How are other people doing this? Do we really believe it works as well as we think it does? Then, we filled in our knowledge gaps by talking to experts. We sought to identify the best possible targets. If we were going to start a new project or bring new resources to the table, where could we best put those to use?
Then, we talked to local organizations — in our case, in India — and we were invited to visit. They suggested we come to India, see the work that they were doing, and determine whether [it made sense] for us to get involved.
We were particularly concerned about whether we would be welcomed to the table. We wondered if local organizations would be interested in the kinds of funding and additional support that we could offer. We asked: What strategies are (or are not) being employed? Can we learn from those? And are there actually gaps? Do we need to exist in order for those gaps to be filled, or could other organizations perhaps do the work better — either without us, or with [us playing a supporting role]?
As we were meeting with these organizations, we were anticipating applying for a GiveWell Incubation Grant.
One of the things we wanted to do was connect [the other fortification organizations] to the same potential funding streams [in the EA community], so that they could continue their work at a larger scale. But there are some ideological differences, as well as organizational constraints, that created barriers to [doing that].
As we readied ourselves to apply for an incubation grant, we presented GiveWell with:
- Information about India and why we thought that was the best place to start a new project.
- Conversation notes from the various organizations in India that we had consulted with, and a synthesis of the strategies that we could employ.
- Proposals [centered on] how we thought we could add the most value to the existing ecosystem.
- [The projected cost of our proposals] and how they applied to the cost-effectiveness analysis that GiveWell had developed in-house for iron fortification.
Long story short: We were awarded a GiveWell Incubation Grant. We were asked to refine our strategy, build our dream team, and implement our approach. This is a step that I think often doesn't happen in EA circles, because we spend a lot of time on abstract questions in the process of setting priorities, which is fun and important. But [implementation requires] an entirely different skill set. [There's a lot involved in making] that transition responsibly.
I want to spend the rest of this talk discussing some key clashes between what I'll call the hyperbolized effective altruist (which doesn't describe the movement as a whole or any particular people, but rather serves as an extreme example) and what I'll call our typical global health actor. These are people who are thoughtfully and ambitiously doing good work in the field. They perhaps have a different moral frame and different intuitions about some of the best strategies to deploy. What I'll argue is that these are central tensions that we faced as an EA-funded, EA-aligned organization that saw a lot of value in the criticism [directed at us] by the global health community. And we sought to reconcile these two camps.
An EA might think that leaders should be strong advocates and defenders of the EA movement’s approach. If you are running an EA charity, then perhaps you should believe that the EA movement’s moral framework takes the cake. But someone critical of the movement might suggest that leaders need to humbly engage with local actors and their local moral worlds. What really matters to the people who are working on these issues and who are affected by these kinds of interventions?
This was critically important, and could have been a substantial failure early on. When we were talking to local NGOs and government officials, we had to be humble enough to learn from their approaches. We had to understand what they were already doing and why. And we had to be open-minded enough to consider what might, at first, seem like less cost-effective or more difficult-to-measure approaches. We also needed to be very receptive, and even proactive, about some of the weaknesses and blind spots of our own strategies (and even some of the weaknesses and blind spots associated with how EAs [operate]).
Also, as EAs, we might want to hire other EAs and build a hierarchy under their leadership. We know that “value drift” in an organization can be very dangerous. But someone critical of the EA movement might say, “Wait a second — we need to develop a high-level local team that has a voice.” Collaboration across a team is strongest when it's participatory and non-hierarchical — when local voices representing what's possible and ideal are actively involved in setting goals for the organization.
This was hard to do. Everyone we sought to hire was older than I was, more informed than I was, and had a better sense of the local context. And that was exactly what we wanted. I would encourage other EAs to do the same. Don’t just aggregate other effective altruists who think [the way you do]. Instead, bring in people who might have very different opinions about what's important and how to accomplish the organization’s goals.
This was particularly important when we thought about our core strategy: What were we willing to do, and what were we unwilling to do? It resulted in us considering some of the less cost-effective approaches to resolving the [iron deficiency] problem, because we thought they might [better fit the goals of the people we’d brought in]. This took a degree of flexibility for us.
EAs may often want to independently execute a consequentialist strategy for a few reasons. One is measurability and credit. EAs are very interested in causal attribution and knowing, for example, that Fortify Health actually made a difference. But that can be very limiting. We don't want to isolate ourselves from other organizations that either can make our work stronger or benefit from our work. Collaboration is key. Even if it muddies the waters on causal attribution, I would really encourage anyone who's working in these spaces to think about where the most productive collaborations could lie.
Others who are critical [of the EA movement] might encourage us to engage other actors, and learn from and respect their approaches. These could include strategic approaches, but also the moral frames used to motivate people in doing this work. This took a great deal of proactivity. We had to recognize the EA blind spots. We had to recognize our naivete and the extent to which we didn't have the expertise that some of these organizations and people had from their decades of work in the field.
As EAs, we also might dismiss justified trade-offs. We might do a cost-benefit analysis to recognize the benefits our work might have and the harm it could cause. We might say what seems best for the world and where we can optimize the difference [between taking action and reaching an ideal outcome]. That's inadequate for a lot of folks who are critical of effective altruism. They would suggest we be wary of any intended or unintended negative consequences that we could [inflict] in the course of doing the work, and encourage us not to treat the harms as negligible. They would remind us that someone suffering as a result of our work [is an effect that] really matters. Even if you're doing something that seems net beneficial, that doesn't excuse you from considering the importance of mitigating other risks. This is something that I think is often missed in effective altruism, or at least missed in the discourse around these ideas. That's harmful to the people you're leaving behind and, from the perspective of other organizations, very alienating. I think this is something EAs need to be quite cautious of.
For us, this included considering the risk of how fortification could be harmful to a subset of the population. It included considering how shifting the grinding of wheat from a local level to a larger, centralized level could hurt local millers and their businesses. That weighed into our decision to focus on some of the already-centralized processes. We even considered the potential risks of various dosing paradigms that we could use when modeling the effectiveness of the intervention.
Effective altruists want to focus on scale and cost-effectiveness. These guide us as a community. But others in global health would have us focus on the vulnerable, prioritizing those who would not be reached otherwise and using that as motivation to innovate for greater benefit. We don't have to stick to what we know in terms of what works and what doesn't and how much it costs. We can challenge the social structures that define those costs and constrain the work that we do. I think the EA community has gotten better at stepping back like this and thinking bigger — and maybe even accepting greater uncertainty than we have [in the past]. But we still haven't become as flexible as some of the other radical and wonderful global health actors.
One of the reasons why we've fallen into the trap of doing this is our focus on scale. We decided to work in India in large part because of how much we thought we could grow. But as a community, we may be systematically neglecting smaller countries for which scale of this proportion just isn't possible. I commend organizations like Project Healthy Children that are working on fortification in some of the countries that have been systematically neglected.
[We also may fall into this trap when] considering whether to do what's easier or put our heads together and affect the more challenging-to-reach populations. This can mean the difference between [focusing on] centralized fortification via the mills catering to the most well-off people and [focusing on] decentralization, which might be harder or more expensive to monitor, but necessary to reach the people who are poorest or the bulk of the population.
EAs may want to focus on abstract problems and rational responses. That can be fun and good. But others want to focus on solidarity, compassion, and caregiving — the humanistic side that, at its core, focuses on individual people rather than the scale of a problem. I invite folks to integrate the two in the way we think about our work and [align it] with the people we serve, as well as in how we talk about this. [We risk] losing people by talking about the massive impact we can have on countless unknowns. But we're aggregating all of this data because we care about every single individual who's affected. And most people who are working in this field [pay a different kind of attention] than most EAs do to the people they're serving, and for whom they are trying to improve some aspect of life.
EAs often want to deploy and measure vertical interventions. It's cleaner and easier to implement. The evidence base may be more robust. But others who are critical of effective altruism may really push us to strengthen existing systems and focus on long-term impact and sustainability. As we try to work at scale, recognize that there are others [with many more resources] than the EA community who are involved in this game. If we're going to be able to work together, we should be thinking about how our work corresponds with and strengthens the work that other substantial actors — particularly government actors — may be doing in the field.
Although I've highlighted a few central tensions that I think characterize this work and have been important to some of the operational and strategic decisions that we've made, I do want to [acknowledge] that others in the field want a lot of the same things that the EA community wants.
We're trying to thoughtfully, creatively, and enthusiastically take action to serve the needs of others. So instead of siloing ourselves based on a strategy or a consequentialist worldview that [runs counter] to mainstream approaches, I think we need to find ways to integrate. We need to humbly ask ourselves, “How can we learn from the people who are doing this good work, and how do we work together?”
To close, the themes of this talk have been fortitude, collaboration, and humility.
I want to provide you with the fortitude or boldness to take well-guided actions to overcome these barriers to doing something rather than nothing — and to surround yourself with the right kind of support to do your work responsibly and in alignment with other actors.
I want you to collaborate with others, to have meaningful, close engagement with an empowered team. Think of it not just as your EA vision being implemented by agent-less actors, but rather something that is collaborative. That collaboration extends beyond your team into the space where other actors are working.
I also encourage you to be more humble, to embrace this humility, to prepare to be wrong, and to change course [when implementing] your strategy. Prepare to stop when what you set out to do doesn't work, or work well enough, to support the work of others. Be eager to learn from others rather than [force] an agenda to advance an EA cause or an EA-aligned strategy.
I want to put up a few pictures of some of our awesome team members in India, who have been at the core of our strategic decisions in terms of how we implement our work.
Our strategy has changed course, and we've learned quite a bit by having a really strong group of individuals putting their heads together on how to use the awesome resources that EA can provide in the most effective — but maybe not always the most intuitive — ways for an EA. Thank you.
Moderator: Thank you. That was a scintillating talk. The many ways in which we have natural inclinations — and how they may butt heads with the global health community — are not obvious.
I’m hoping you can contextualize your talk with some of the specific work that you do. Could you share how long it took before you were doing something on the ground, and what some of the biggest trials were in that initial startup [phase]?
Brendan: Yeah, absolutely. I think that one of the early [questions] was: How far do we need to go in order to know that this is a good idea? I think it wasn't until we were sitting down face-to-face with other people who are working on these kinds of projects in India that we felt we had a strong enough invitation to join them and be a productive actor in this space. [This work also enabled us to see that] the gap was large enough for there to be something meaningful we could do, something that wouldn't happen without us. That happened after we spent about four months on this project, conceptualizing these ideas.
The most essential work happened over the course of last summer, when we were trying to assemble the dream team. We were figuring out what it took to do the work that we didn’t know how to do ourselves. GiveWell is reviewing us now, and I hope we can get some more money to do this work. But the money doesn't speak for itself. You have to implement a strategy that is thoughtful and effective, and those strategies have really come from our partners and teammates in India.
Moderator: It’s a difficult research problem in and of itself, let alone figuring out how to execute in a totally unknown environment. How can somebody determine whether or not this is something that they would be a good fit for?
Brendan: I think that the critical considerations are:
- How flexible can you be?
- What kind of attitude can you set, and what kind of culture can you build, within the team?
- How willing are you to be wrong?
- How willing are you to defer to the judgments of other people and their expertise? Can you set yourself up to rely on the knowledge of others to determine the best possible path forward?
Moderator: Thank you for your presentation.