We have thought about this, but we are not confident that weaker charities would not crowd out stronger ones with funders, leading to less overall impact.
I think tautological measurement is a real concern for basically every meta charity, although I'm not sure I agree with your solution. I think the better solution is external evaluation, someone like GiveWell or Founders Pledge who does not have any reason to value CE charities. Typically, these organizations do their own independent research and compare it across their current portfolio of projects. If CE can, for example, fairly consistently incubate charities that GW/FP/etc. rank as best in the world, I think that is at least not organizationally tautological (it is assuming that these charity evaluators are in fact identifying the best areas/charities, and replicating any flaws they have though).
In terms of success rate, I agree 40% is high but I would expect many NGO incubators to be considerably higher than in the for-profit space, for a few reasons (a couple listed below):
General competition: There are just not that many charities aiming for pure impact (in an EA way), unlike the for-profit market. The general efficiency of the charity market is pretty low, and thus there are lots of fairly easy wins.
Scale sensitivity: Generally, successful for-profits are seen as really large-scale ventures (e.g., unicorns) and the market is consistently hunting for that. Debatably, the only charity currently seen as highly impactful that can get to that sort of scale is GiveDirectly. Thus the bar for success in the charity sector is significantly lower in terms of money-spent scale. For example, if we founded a charity running on a $1m-a-year budget that was 2x as effective as top GW ones, we could count that as a success, but an organization of the same size would be considered a rounding error by YC. If we take size expectations into account, it might be more like 1 in 25 charities we incubate that has any significant chance of getting to unicorn-level size.
Sadly, my circumstances have changed such that this was no longer possible without significant work-productivity trade-offs. Specifically, I moved to London, UK (due to work) and have only intermittently been living with a partner. I now live on between £20k and £30k, depending on the year. I still hold the view that a higher salary would not significantly increase my productivity beyond that, and have, if anything, more concerns about the current spending habits of EA, for reasons described pretty well here.
I would definitely expect some of those 1000 ideas to have been researched by Open Philanthropy or Rethink; a long list like that would include both researched and un-researched areas. I think new nonprofits often come at things with a different angle, e.g., ways of weighting evidence, or tweaks in ethical views or baseline assumptions. For example, GiveWell is both highly well-run and huge, but they would not come to the same considerations that HLI has come to by looking at subjective well-being. I think the same thing will happen with CEARCH; there are lots of areas that might be missed by other actors but that would be picked up by a more systematic search done at a lower level of depth per area.
Currently: Currently we have a backend CEA that evaluates the possible scenarios and impact outcomes for each of the charities. It starts out with pretty wide confidence intervals but tends to narrow as the charities get older (e.g., 2nd or 3rd year). We also write up more narrative reviews that go to a set of external advisors.
Long term plan: Longer term we want to hire an external evaluation organization to evaluate every charity we found two years after founding, and use those numbers instead of internal ones.
Compared to other movements it seems pretty good; relative to the ideal, we of course could do better. In general, I think encouraging more critical thinking and debate is likely a step in the right direction. Right now I think disagreements can be handled a bit indirectly (e.g., I would love to see even more open cause area debates instead of just funding of outreach in one area and not another).
Our policy regarding salaries has not changed as much as other meta charities; leanness tends to attract a different sort of applicant. We have a range ($40-$60k) but would consider applications from candidates who need higher than that range. In practice, we have often found the most talented candidates are less concerned with salary and more concerned about other factors (impact of the role, culture, flexibility, etc.). We are a bit skeptical about the perception that talent increases from offering higher salaries (instead of attracting new talent, we typically see the same EA people getting job roles but just for a higher cost).
This in many ways is the default path for how many NGOs grow. I think there are quite a few reasons why CE overperforms relative to this. Decentralization broadens the risk profile that each charity is able to take, and smaller organizations move far, far quicker. I suspect the biggest factor, though, is not structural but social. The caliber of founders we get applying is really strong relative to what an organization like CE could hire for program director roles. Due to the psychology of ownership, they work far more effectively on their own project than they would as employees of a larger organization.
I think something talking about the concept of cause X, or an area we think is a top contender that many EAs have not yet considered deeply (e.g., family planning). Even with the recent challenge prize on this, I think EA is way over-indexed on exploit vs. explore when it comes to cause areas.
I think there are a few things that fit into this category; how much deference there is in the EA space would be one. Another would be the relative importance of high-absorbency career paths. Some things we have not written about but that also fit would be how EA deals with low-evidence-base/weak-feedback-loop spaces, or how little skepticism is applied to EA meta charities.
We try to keep a page with information (including room for funding numbers) for the organisations that get founded through Charity Entrepreneurship. Many of them are in a situation where marginal, small donors could make an impact.
Right now the door is pretty open. The projects we would consider are ones that can make a case for being highly impactful relative to other options in the space. I suspect projects with large funding gaps would be less of a good fit (e.g., people seeking over $500k).
So I think this conversation might be more productive if we clarified some terminology/dove into the specifics. There are a lot of different ways to set salaries in general.
- Needs of the employee
- Resources the organization has
- Market rate including benefits (how desirable the job is - e.g. hedge funds pay loads but are stressful so need to pay more to make up for that)
- Amount for the employee to be psychologically content
- Amount that creates the best incentives for the organization/EA movement
- Market rate replacement (if someone left, what you’d have to pay to get someone equally talented)
- Pure market-rate earnings (what would be the highest-salary job rate, not taking into account non-salary benefits - e.g. a hedge fund salary)
- Value in impact to the organization
These varying approaches produce a dramatically wide spectrum of possible salaries. There is a case for using basically any of them. Ballpark numbers might range from $40k to $400k depending on which system you use.
I think a lot of people are conflating things a bit. There seem to be two central questions: 1) which of the systems (or index of systems) is best to use, and 2) pragmatically, what do these systems look like when cashed out?
For example, Josh’s comment is getting at number 1; maybe we should be using “pure market rate earnings” or “value in impact to the organization” instead of “amount that creates best incentives”.
Ryan’s comment, on the other hand, is basically that "the ideal incentives" might in fact correlate quite a lot with the resources the organization has.
I think splitting these out can make it easier to discuss each possibility.
Thanks for the comment; I think this describes a pretty common view in EA that I want to push back against.
Let's start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations and people actually recommended against founding many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem insanely high value relative to alternatives, for example, a chapter leader who basically follows the same script (a pattern I definitely could have fallen into personally). I think something that is often forgotten is the extremely high upside value of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g. if EA had deferred only to GiveWell or only to more traditional philanthropic actors, how impactful would this have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty positive when deferring based on expertise (e.g. “this doctor knows what a CAT scan looks like better than me”, or “GiveWell has considered the impact effects of malaria much more than me”). I think these sorts of situations lend themselves to higher deference. Questions like “how much ethical value do I ascribe to animals” or “what is my tradeoff of income to health” are 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferrals based on this sort of thing, e.g. assumptions that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others) but is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people with a specific set of viewpoints coming in) or communication effects (e.g. I engaged considerably less in EA when I thought direct work was the most impactful thing, compared to when I thought meta was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g. many people assume more careful, broad-based cause consideration has been done than really has been; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
As someone who has been concerned about insects as an area for years, I think the aspect that stops animal-focused people I speak to from engaging with insects as a cause area is not really to do with scale or neglectedness. Many vegans do not eat honey, suggesting a concern for the bees creating it, and SWP (https://www.shrimpwelfareproject.org/) has gotten quite a lot of support from the animal movement. The issue is pretty directly tied to tractability and concrete actions that can be taken. If the current interventions focused on insects are research-oriented with unclear pathways for how insects do in fact get helped, that will be a blocking factor for many EA animal advocates. I think in many cases right now, people see insect welfare much like wild animal suffering: an interesting, high-scale area with no clear significant actions that can be taken.
I quite like this idea, and many of the most frugal people I know also do a ton of these things as well. I think a bunch of them pretty clearly signal altruism. Interestingly, I would say that things that make EA soft and cushy financially seem to cross-apply to non-financial areas as well. E.g. I am not sure the average EA is working more hours than they worked 5 years ago, even with the increases in salary, PAs, and time-to-money tradeoffs.
I also agree there are a lot more that could be listed. I think "leave a fun and/or high-status job for an unpleasant and/or low-status one" hints at the idea of decisions that need to be made with competing values. I think this is maybe the biggest way more dedicated EAs have really different impacts vs. less dedicated ones. It may not be the biggest part of someone's impact if they work 5% more or take 5% less salary, but it correlates (due to selection effects) with what happens when hard choices come up with impact on one side and personal benefit on the other. The person is more likely to pick impact, and this can lead to huge differences in impact. E.g. the charity research I find most fun to do might have ~0 impact, whereas research I think is the highest impact might be considerably less fun but significantly more valuable.
https://www.ppf.org/ or https://rethink.charity/fiscal-sponsorship are the most common organizations used for new EA projects.
Indeed this is only considering nonprofit funding sources. I think the data would be quite different if also considering for-profit options.
Keen to hear about any data on this topic. James is right that it is the number of ~EA funders with unique perspectives that matters.
"Organisations should be open about where they stand in relation to long-termism."
Agree strongly with this. One of the most common reasons I hear for people reacting negatively to EA is feeling tricked by self-described "cause open organizations" that really just focus on a single issue (normally AI).
"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" I have heard this multiple times from different sources in EA.
We (Charity Entrepreneurship) have considered doing something like this. Would love to see the results and to know what locations you are considering. We are in west London.
1) Where do you see untapped opportunities for nonprofit entrepreneurs in the space of mental health?
2) What role do you see entrepreneurs (vs. established organizations) playing in this field, including incubation programs like CharityEntrepreneurship.com, which has incubated mental health charities before?
3) How do you assess the potential of new mental health treatments for the Global South? Is this sufficiently prioritized and do you see particular roadblocks to rapid adoption?
Hey Larks, thanks for the great comment. I think it gets at some key assumptions one has to consider when evaluating this as an intervention. We didn’t end up going into that in this post, but happy to cover it below.
I both see the scenario in which the benefits outweigh the costs (the one in which we are happy to incubate this charity), and I also see scenarios where the costs are higher than the benefits (in that case we wouldn't recommend it). Specifically:
Existing people get the benefit of building relationships with these new people.
When you consider the context of the families that an intervention such as this would be impacting, I think the benefits you laid out are a lot smaller (to the point that they do not largely change the calculation). These are typically families with many children (my expectation is that the 4th child or grandchild does not carry the same weight as the first, particularly when it comes to long-term support of the family).
Division of Labour - whereby people specialise in one specific area they become more efficient at it. The larger the population, the more specialisation it can support.
They are also typically in low-income jobs with limited specialization (often family planning is most needed in families earning income from primary agriculture). I expect that averting unwanted pregnancies frees up household income to spend on the current family, e.g. on more education opportunities or a more nutritious diet, which has further positive flow-through effects on the family. I think this same education confounder also cross-applies to creating more artists and scientists: it's not at all clear to me that a net higher population would produce more of them than smaller families with higher average education would.
Many things have increasing returns to scale, and so are more efficient with larger populations
Although I have some sympathy for the economies-of-scale arguments, I think, depending on the country, the efficiency effects of having a very young or rapidly growing population trade off against this in quite an unfavourable way. I also think there are fewer economies of scale in less connected and more rural settings (e.g. things like electricity or water have limited scale in these locations). I also expect these benefits to be quite small relative to the current factors we consider.
It is of course possible that these benefits might be outweighed by the costs outlined in the report. But we cannot simply assume that this is the case.
When we are modelling cost-effectiveness on that sheet we are not aiming to take into account all of the externalities, but rather compare between interventions within family-planning, so you probably won’t find them there. We would use a different methodology to take them into account. But I take your point about the broader cost-benefit considerations.
As life is good for most people, this is a major advantage. They get to experience the joys of playing and growing and love and all the other good things in the world.
I do think you have hit on the really key assumption that can change one’s model of family planning, though: “Life is good for most people”. We spend a considerable amount of time and work thinking about it, and I agree that there is a lot of moral and epistemic uncertainty around the issue. It is probably the hardest thing to take into account when assessing the moral weights of various outcomes. Depending on how one takes it, it can result in either 60 years’ equivalent of utility or of disutility.

However, I think again we have to look at the population very closely. Populations that do not have access to family planning information or counselling are more likely to have lower happiness levels. The country our last family planning charity chose to work in is Nigeria, where average happiness moves between 5 and 6 out of 10. Another country we recommend is Senegal, where the numbers are even lower. But I would say even this data is not precise enough, as even within countries, populations without access to family planning are typically far lower income than average. Also, the child whose existence would be prevented would be a child the family would prefer not to have, and this seems likely to have an effect on the average happiness of both the child and the family. We know the SD of happiness in Nigeria is pretty large, ~2.5 (this variation is also typical across other locations). It’s hard to know exactly what happiness that person would have over their life; it could easily be in the 3-4/10 range.

If you think a year lived at 3-4 is net positive and something you would want to create more of, then indeed this is a huge factor against family planning. If you think it’s net negative, then it’s a huge factor in favour. I think this is one of the key ethical questions. It comes down a lot more to positive- vs. negative-leaning utilitarianism and how you weight various levels of subjective well-being.
This is a factor we considered a lot when thinking about it and although I think there are defendable different perspectives our team generally came down on the side of this effect being a net positive for family planning (some more info here).
I do think we could have made improvements to the report to make some of these judgement calls clearer and bring people's attention to the factors that affect the analysis significantly. We do tend to discuss these considerations, and outline when the general judgement about family planning may differ according to ethical or empirical differences, in much greater depth with incubatees who are considering working in these areas. It is indeed a complex issue, and because of this we have typically found it easier to discuss in conversation rather than in writing. I agree that the report could have been better written to take that into account.
- Your intuitions are right here that these skills are not unique to EA, and I am generally thinking of skills that are not exclusive to EA. I would expect this training organization not to create a ton of original content so much as to compile, organize, and prioritize existing content. For example, the org might speak to ten people in EA operations roles and, based on that information, find the best existing book and online course that, if absorbed, would set someone up for that role. So I see the advantage as being more time to select and combine existing resources than an individual would have. I also think that pretty small barriers (e.g. the price of a professional course, not having peers who share the same goals, lack of confidence that the content is useful for the specific jobs they are aiming for) currently stop people from doing professional training. And many common paths to professional training (e.g. PhD programs) are too slow to readily adapt to the needs of EA. I would generally expect the gaps in EA to move around quite a bit year to year.
- I think certification or proof of ability is a non-trivial part. The second half of our Incubation Program puts the earlier training into action through working on projects that are publicly shareable and immediately useful for the charity. I would guess that a training focused organization would also have a component like a capstone project at the end of each course.
I would also note that I think just giving EAs the ability to coordinate and connect with each other while learning seems pretty valuable. A lot of EAs are currently ruled out of top jobs in the space due to not being “trusted” or known by others in the EA movement. I think providing more ins for people to get connected seems quite valuable and would not happen with e.g. a local Toastmasters.
I think the majority of unusual empirical beliefs that came to mind were more in the longtermist space. In some ways these are unusual at even a deeper level than the suggested beliefs e.g. I think EAs generally give more credence epistemically to philosophical/a priori evidence, Bayesian reasoning, sequence thinking, etc.
If I think about unusual empirical beliefs Charity Entrepreneurship has as well, it would likely be something like the importance of equal rigor, focusing on methodology in general, or the ability to beat the charity market using research.
In both cases these are just a couple that came to mind – I suspect there are a bunch more.
"I now believe that less work is being done by these moral claims than by our unusual empirical beliefs, such as the hinge of history hypothesis, or a belief in the efficacy of hits-based giving. "
This is also a view I have moved pretty strongly towards.
Thanks for this – I checked out the full list when the post went up.
We will be researching increasing development aid and possibly researching getting money out of politics and into charity as our focus moves to more policy-focused research for our 2022 recommendations.
We also might research epistemic progress in the future, but likely from a meta science-focused perspective.
We definitely considered non-Western EA when thinking through EA meta options, but ended up with a different idea for how to best make progress on it (see here).
For-profit companies serving emerging markets I see as a very interesting space but a whole different research year from EA meta. Maybe even outside of CE’s scope indefinitely.
I do not expect us to research Patient Philanthropy, Institutions for Future Generations, Counter-Cyclical Donation Timing or Effective Informational Lobbying in the near future.
In general, I do not expect our research on EA meta to be exhaustive given the scope. I would be excited to see more ideas for EA meta projects, particularly ones with quick and clear feedback loops.
Indeed I have seen that post. I would be keen for more than one group to research this sort of area. I can also imagine different groups coming at it from different epistemic and value perspectives. I expect this research could be more than a full-time job for 5+ people.
Good question. We keep the information updated on room for funding on this page.
Indeed these sorts of issues will be covered in the deeper reports but it’s still valuable to raise them!
A really short answer to an important question: I would expect the research to be quite a bit deeper than the typical proposal – more along the lines of what Brian Tomasik did for wild animal suffering or Michael Plant did for happiness. But not to the point where the researchers found an organization themselves (as with Happier Lives Institute or Wild Animal Initiative). E.g. spending ~4 FT researcher months on a given cause area.
I agree that a big risk would be that this org closes off areas or gives people the idea that “EA has already looked into that and it should no longer be considered”. In many ways, this would be the opposite of the goal of the org, so I think it would be important to consider when it’s being structured. I am not inherently opposed to researching and then ruling out ideas or cause areas, but I do think the EA movement currently tends to quickly rule out an area without thorough research, and I would not want to increase that trend. I would want an org in this space to be really clear about what ground they have covered vs. not. For example, I like how GiveWell lays out and describes their priority intervention reports.
Our current plan is to publish a short description but not a full report of the top ideas we plan to recommend in the first week of Jan so possible applicants can get a sense before the deadline (Jan 15th).
Glad you found it interesting!
- It tended to come from people focused on that area but the concerns were not exclusive to technologies (or even xrisk more broadly).
- To put it another way, people were concerned that “EAs tended to help their friends and others in their adjacent peer group disproportionately to the impact it would cause.”
- Regarding polarization in "Intercommunity coordination and connection," my sense is this came from different perceptions about how past projects had gone. There was no clear trend as to why for “community member improvement.”
- I think the way this is reconciled is the view that “current organizations are highly capable and do prioritization research themselves when determining where they should work.” But prioritization of that type is hard to do right, and others would struggle to do an equally good job.
P.s. reminder these are not my or CE’s views just describing what some interviewees thought.
I agree, I was expecting a much stronger consensus as well. Sorry to say, I told the folks I interviewed that the data would remain at this level of anonymity; many were fine with sharing their results, but some preferred it to be pretty anonymous.
Number scores are based on people ranking the option above or below average with 3 being average.
Sadly I am not able to share that data, but I can say it tended to be bigger organizations and bigger chapters.
Happy to have my posts used for this. One thing I would love to see integrated would be a willingness-to-pay metric, as we have been experimenting with this a bit in our research process and have found it quite useful.
Great question. Keen to see other people’s recommendations. We have a list of some of our team’s favorites organized into categories – can be seen on the website here or below. My personal top 5 are Principles, Made to Stick, The Life You Can Save, Algorithms to Live By, and The Lean Startup.
Values and ethics
Getting things done
A few examples:
- Introduction of new cause areas (e.g. mental health, WAS)
- Debates about key issues (e.g. INT framework issues, flaws of the movement)
- More concrete issues vs philosophical ones (e.g. how important is outreach, what % of EAs should earn to give)
I think the bar I generally compare EA to is, do I learn more from reading the EA forum per minute than from reading a good nonfiction book? Some years this has definitely been true but it has been less true in recent years.
This could be turned into one quite quickly https://forum.effectivealtruism.org/posts/kFmFLcdSFKo2GFJkc/cause-x-guide
Hey Ramiro and Thomas,
Thanks for your engagement with this system. I think in general our system has lots of room for improvement - we are in fact working on refining it right now. However, I am pretty strongly in favor of having evaluation systems even if the numbers are not based on all the data we would like them to be or even if they come to surprising results.
Cross-species comparison is of course very complex when it comes to welfare. Some factors are fairly easy to measure across species (such as death rates) while others are much more difficult (disease rates are a good example of where it's hard to find good data for wild animals). I can imagine researchers coming to different conclusions given the same initial data.
It’s worth underlining that our system does not aim to evaluate the moral weight of a given species, but merely to assess a plausible state of welfare. (Thomas: this would be one caveat to add when sharing.) In regards to moral weight (e.g. what moral weight do we accord a honey bee relative to a chicken etc.) – that is not really covered by our system. We included the estimates of probability of consciousness per Open Phil’s and Rethink Priorities’ reports on the subject, but the moral weight of conscious human and non-human animals is a heavily debated topic that the system does not go into. Generally I recommend Rethink Priorities’ work on the subject.
In regards to welfare, I think it's conceptually possible that e.g. a well treated pet dog in a happy family may be happier and their life more positive than a prisoner in a North Korean concentration camp. This may seem unintuitive, but I also find the inverse conclusion unintuitive. As mentioned above, that doesn’t mean that we should be prioritizing our efforts on improving the welfare of pet dogs vs. humans in North Korea. Prioritizing between different species is a complex issue, of which welfare comparisons like this index may form one facet without being the only tool we use.
To cover some of the specific claims.
- Generally, I think there is some confusion here between the species having control vs the individual. For example, North Korea as a country has a very high level of control over their environment, and can shape it dramatically more than a tribe of chimps can. However, each individual in North Korea has extremely limited personal control over their life – often having less free time and less scope for action than a wild chimp would practically (due to the constraints of the political regime) if not theoretically (given humanity’s capabilities as a species).
- We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
- Humanity has indeed spent a great deal more on diagnosing humans than chimps. However, there is some data on health that is comparable, particularly when it comes to issues that are clearer to observe such as physical disability.
- There is in fact some research on hunger and malnutrition in wild chimps, so this was not based on intuitions but on best estimates of primatologists. Malnourishment in chimps can be measured in some similar ways to human malnourishment, e.g. stunting of growth. I do think you’re right that concerns with unsafe drinking water could be factored into the disease category instead of the thirst one.
I would be keen for more research to be done on this topic but I would expect it to take a few hours of research into chimp welfare and a decent amount of research into human welfare to get a stronger sense than our reports currently offer. I think these sorts of issues are worth thinking about and we would like to see more research being done using such a system that aims to evaluate and compare the welfare of different species. Thank you again for engaging with the system - we’ll bear your comments in mind as we work on improvements.
Equally or more focused on doing good, but less involved with the EA movement. Broadly, I am less sold that engaging with the EA movement is the best way to increase knowledge or impact. This is due to a bit of an intellectual slowdown in EA, with fewer impact-relevant concepts being generated, and some perceived hostility towards near-term causes (which I think are the most impactful).
Hey Charles, we don’t prioritize longtermist projects as we do not think they are the highest impact (for epistemic, not ethical, reasons). This view is pretty common in EA, but most people who hold this perspective do not engage much on the EA Forum. In the future we might write more on it.
We have recommended meta charities in the past (e.g., animal advocacy careers) and expect to recommend more in the future. There are some people considering a longtermist/AI-focused incubator, so this might be a project that happens at some point.
Sadly I don’t have time to go into much depth on this, but we strongly recommend it to all charities that run through our CE program (including all the research orgs) and create a theory of change (ToC) for each idea we research.
Here are a few different areas that look promising. Some of these are taken from other organizations’ lists of promising areas, but I expect more research on each of them to be high expected value.
- Donors solely focused on high-income country problems:
  - Mental health research (that could help both high- and low-income countries)
  - Alcohol control
  - Sugar control
  - Salt control
  - Trans-fat control
  - Air pollution regulation
  - Medical research
  - Lifestyle changes, including "nudges" (e.g., more exercise, shorter commutes, behaviour change, education)
  - Mindfulness education
  - Sleep quality improvement
- Donors focused on animal welfare:
  - Wild animal suffering interventions (non-meta, non-habitat destruction)
  - Animal governmental policy, particularly in locations outside the USA and EU
  - Treating diseases that affect wild animals
  - Banning live baitfish
  - Improving the transport and slaughter of turkeys
  - Pre-hatch sexing
  - Brexit-related preservation of animal policy
- Donors focused on improving the welfare of the current generation of humans:
  - Pain relief in poor countries
  - Tobacco control
  - Lead paint regulation
  - Road traffic safety
  - Micronutrient fortification and biofortification
  - Sleep quality improvement
  - Immigration reform
  - Mosquito gene drives, advocacy, and research
  - Voluntary male circumcision
  - Research to increase crop yields
Slight correction: The Charity Entrepreneurship program will be based in London, UK this year.
When I was writing this, I was mostly comparing it to other highly time-consuming activities (e.g., many people are getting a degree hoping it will help them acquire an EA job). In terms of being the optimal thing for EA organizations to look for, I do not really have a view on that. I was more hoping to level the understanding between people who have a pretty good sense that this sort of information is what you need, and people who might think it would be worth far less than, say, a degree from a prestigious university.
OK, given that multiple people think this is off, I have changed it to 3 hours to account for variation in application time.
My sense is they already had a CV that required very minimal customization and spent almost all the time on the cover letter.
It came from asking ~4 successful employees who were hired.