[Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? 2021-03-24T15:45:58.043Z
Some quick notes on "effective altruism" 2021-03-24T15:30:48.240Z
EA Funds has appointed new fund managers 2021-03-23T10:55:29.535Z
EA Funds is more flexible than you might think 2021-03-05T09:35:11.737Z
Apply to EA Funds now 2021-02-13T13:36:10.840Z
Giving What We Can & EA Funds now operate independently of CEA 2020-12-22T03:47:48.140Z
How to best address Repetitive Strain Injury (RSI)? 2020-11-19T09:15:27.271Z
Why you should give to a donor lottery this Giving Season 2020-11-17T12:40:02.134Z
Apply to EA Funds now 2020-09-15T19:23:38.668Z
The EA Meta Fund is now the EA Infrastructure Fund 2020-08-20T12:46:31.556Z
EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z


Comment by Jonas Vollmer on [Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? · 2021-04-16T17:25:20.348Z · EA · GW

Sensitivity is relative to PCR tests, and tends to be reported quite incorrectly. So, contrary to what I suggested in the OP, I think the adjustment should probably be ~4x rather than ~12x.

Comment by Jonas Vollmer on Announcing "Naming What We Can"! · 2021-04-10T12:35:04.097Z · EA · GW

If someone writes another book about EA, it should be titled How to Be Great at Doing The Most Good You Can Do Better

Comment by Jonas Vollmer on A Comparison of Donor-Advised Fund Providers · 2021-04-06T18:29:59.898Z · EA · GW

You said it in the "My process" section, but not earlier.

Comment by Jonas Vollmer on A Comparison of Donor-Advised Fund Providers · 2021-04-06T10:52:32.017Z · EA · GW

Also, if anyone is up for it, I think a resource on DAF providers in other countries would be useful as well.

Comment by Jonas Vollmer on A Comparison of Donor-Advised Fund Providers · 2021-04-06T10:52:06.294Z · EA · GW

Does this apply to the US only? If so, it could be good to say so at the very top.

(I haven't read the post, but I'm very excited that such a resource exists!)

Comment by Jonas Vollmer on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-30T15:22:19.407Z · EA · GW

Thanks, that makes sense!

Comment by Jonas Vollmer on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T13:50:30.705Z · EA · GW

I don't really think there's a difference between the two: 

  • The LTFF can encourage anyone to apply. Several of the grants of the current round are a result of proactive outreach to specific individuals. (This still involves filling in the application form, but that's just because it's slightly lower-effort than exchanging the same information via email.) 
  • A donor lottery winner can only grant to individuals who submit due diligence materials to CEA, which also involves filling in some forms.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-30T13:45:17.903Z · EA · GW

I specifically wrote:

Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists.

For further clarification, see also the comment I just left here.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-30T13:42:31.154Z · EA · GW

Not yet, but I hope to publish it soon. (Sometime this year, ideally within the next few weeks.)

Comment by Jonas Vollmer on What are your main reservations about identifying as an effective altruist? · 2021-03-30T13:41:40.027Z · EA · GW

It's worth mentioning that during that session, we realized that some people want to keep their identity small as a general rule. For this reason, someone specifically asked the following question (paraphrased): "Who 1) has some labels ('-isms') they identify with (e.g. atheist, feminist, utilitarian) and 2) does NOT identify as 'effective altruist'?" And in response to that particular question, around half of people raised their hands. (I didn't count them – might also have been just 30% or so, but definitely a significant percentage. You might think "okay, probably those were the participants who were mainly into AI safety or rationality rather than EA" but that wasn't the case.)

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-29T16:44:53.958Z · EA · GW

Thanks for clarifying – I basically agree with all of this. I particularly agree that the "government job" idea needs a lot more careful thinking and may not turn out to be as great as one might think.

Our main disagreement might be that I think donating large amounts effectively requires an understanding of EA ideas and an altruistic dedication that only a small number of people are ever likely to develop, so I don't see the "impact through donations" route as an unusually strong argument for steering EA messaging in a particular direction or for having a very large movement. And I consider the fact that some people can have very impactful careers a pretty strong argument for emphasizing the careers angle a bit more than the donation angle (though we should keep communicating both).

(Disclaimer: Written very quickly.)

I also edited my original comment (added a paragraph at the top) to make this clearer; I think my previous comment kind of missed the point.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-29T16:30:56.363Z · EA · GW

Yeah, these are great points. I agree that with enough structure, larger-scale growth seems possible. Basically, I agree with everything you said. I'd perhaps add that in such a world, "EA" would have a quite different meaning from how we use the term now. I also don't quite buy the point about Ramanujan – I think "spreading the ideas widely" is different from "making the community huge".

(Small meta nitpick: I find it confusing to call a community of 2 million people "small" – really wish we were using "very large" for 2 million and "insanely huge" for 1% of the population, or similar. Like, if someone said "Jonas wants to keep EA small", I would feel like they were misrepresenting my opinion.)

Comment by Jonas Vollmer on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:29:10.069Z · EA · GW

We also did a lot more promotion and encouraged more people to submit promising applications, and this plausibly also caused more people to apply – so it may be faster-than-organic growth.

Comment by Jonas Vollmer on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:28:07.000Z · EA · GW

I wonder whether this alters the calculus for whether to give to donor lotteries (as opposed to EA Funds)? 

I personally think it doesn't change it much. As you previously mentioned, there's a risk of donors being biased against giving to funds and instead wanting to do their "own thing"; I hope that donor lottery winners will be able to overcome that.

It seemed at the time that the response was fairly sanguine about the possibility that individual donors (e.g. lottery winners) might make better allocations than the fund managers. 

It's worth noting that I only believe this under the assumption that the individual donors know about some specific opportunities that the fund managers are unaware of, or perhaps have significant worldview differences with the fund managers.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-29T10:16:28.985Z · EA · GW

Edit: I think my below comment kind of misses the point – my main response is simply: Some people could probably do a huge amount of good by, e.g., helping increase meat alternatives R&D budgets, this seems a much bigger opportunity than increasing donations and similarly tractable, so we should focus more on that (while continuing to also increase donations).


Some quick thoughts:

  • I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population, primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, and not so old that one is already too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.
  • Low donation rates indeed feel concerning. To me, the lack of discussion of "how can we best make ODA budgets more effective" and similar questions feels even more concerning, as the latter seems a much bigger missed opportunity.
  • I think lots of people can get government jobs where they can have a significant positive impact in a relevant area at some point in their career, or otherwise contribute to making governments more effective. I tentatively agree that personal donations seem more impactful than the career impact in many cases, but I don't think it's clear that we should overall aim to maximize donations. It could be worth doing some more research into this.
  • I would feel excited about a project that tries to find out why donation rates are low (lack of money? lack of room for more funding? saving to give later and make donations more well-reasoned by giving lump sums? a false perception that money won't do much good anymore? something else?) and how we might increase them. (What's your guess for the reasons? I'd be very interested in more discussion about this, it might be worth a separate EA Forum post if that doesn't exist already.)
  • As you suggest, if the EA community doesn't have the expertise to have a positive impact on developing-world policy, perhaps it should develop more of it. I don't really know, but some of these jobs might not be very competitive or difficult, yet disproportionately impactful. Even if you just try to help discontinue funding programs that don't work, prevent budget cuts for the ones that do, and generally encourage better resource prioritization, that could be very helpful.

Comment by Jonas Vollmer on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T11:06:36.682Z · EA · GW

Jonas Vollmer and others have a good argument that we should change the name of our movement from EA to Global Priorities.

Ugh, I really want to strongly object to that characterization of my post! I was mostly trying to share some concerns that I wasn't sure what to make of, and my key recommendation was that we "might want to consider de-emphasizing the EA brand". Rebranding the EA community was more of a tentative personal opinion, and "global priorities" was just a very tentative example for what the name could be in such a case.

I would appreciate it if you could edit the post to make this clearer. I only discovered your post a day after it was posted, and I'm worried that people will now read my piece as saying something I tried to avoid saying.

Otherwise, I think these are great points, and I agree with them. A lot ultimately comes down to empirical testing.

Comment by Jonas Vollmer on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T11:05:08.513Z · EA · GW

(moved comment elsewhere)

Comment by Jonas Vollmer on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T10:56:12.327Z · EA · GW

(moved comment elsewhere)

Comment by Jonas Vollmer on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T10:52:03.552Z · EA · GW

I think it's almost certainly possible to change the name of the movement if we want to – this would take an organization taking ownership of the project, hosting a well-organized Coordination Forum for the main stakeholders, and some good naming suggestions that lots of people can get behind. Doing something ambitious like this might also improve the EA community's ability to coordinate around larger projects, which seems a useful capacity to develop.

That said, it would be a very effortful project, and should be carefully traded off against other priorities that might have a better benefit/cost ratio. It seems pretty likely to me that other priorities should be higher up on the list. This is why I also emphasized the "use of different labels in different contexts" more than the suggestion that we should rebrand EA in my original post.

(Perhaps that's what you meant with "not an option on the table"? I felt sad when reading that because I understood it as pessimism about EA's ability to coordinate, which I think hasn't really been attempted very well yet.)

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-26T10:21:35.305Z · EA · GW

New post that's related to this (just discovered it now):

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T21:20:04.569Z · EA · GW

Thanks, I find that helpful, and agree that's a dangerous dynamic, and could be exacerbated by such a name change.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-25T21:17:00.228Z · EA · GW

Thanks, that makes sense! Before we can do this, we'd probably need a more reliable and faster way to screen through applications. If we develop that, it could be a good idea.

I think our current process would definitely include candidates who are as well-networked in the EA community as you are. I personally was aware of some of your work, and I believe some of our informal advisors have talked to you before.

Comment by Jonas Vollmer on Long-Term Future Fund: Ask Us Anything! · 2021-03-25T16:33:31.849Z · EA · GW

Yeah, we could simply explain transparently that it would funge with Open Phil's longtermist budget.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-25T08:58:34.273Z · EA · GW

Here's my attempt at a characterization of the distinction (people should feel free to correct me if they think I'm wrong):

The GHDF made several grants to IDinsight and IPA for RCT research, which could help with the development of new proven interventions (for Covid prevention in particular). It also made grants to Instiglio for the technical design of results-based financing, to J-PAL's Innovation in Government Initiative to promote evidence-based policies in developing countries, and to Fortify Health, a potential new GiveWell top charity.

These grants are all related to proven, evidence-based interventions, and help scale or promote that approach. The only exception is the grant to One For The World, which is more like an EA meta intervention. EDIT: There's also the Centre for Pesticide Suicide Prevention grant, some info here.

In contrast, one could try to do things that are even more hits-based. An example of a past EA success in this area is helping reallocate £2.5 billion in DFID resources towards research funding for neglected tropical diseases (as I understand it, there were some specific reasons to believe that EAs actually had a large impact on that budget change that weren't discussed publicly). Of course, some of that research was in the form of RCTs, but I guess a lot of it was more fundamental. The key motivating factor is more like "NTD research still is a very important, neglected, and tractable area" rather than "we want to scale proven interventions". The indirectness of the policy advocacy route makes it more hits-based as well; there are no RCTs on whether that kind of policy advocacy works.

Or take the idea of developing-world public health regulation, e.g. tobacco taxation. This is not an RCT-backed intervention in a narrow sense, but nonetheless estimated to be extremely cost-effective based on some back-of-the-envelope calculations.

Another example might be ballot initiatives, though these are less scalable.

It's not clear that any of this requires an additional fund. Perhaps the GHDF can simply do more of both. (edited)

Comment by Jonas Vollmer on [Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? · 2021-03-25T08:38:21.023Z · EA · GW

Thanks, this is helpful! Feel free to PM me your payment details so I can send you the $100 reward mentioned in the post.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-25T08:31:40.996Z · EA · GW

Thanks, I personally agree with these points, and I think this is a useful input for our internal discussion.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T08:21:43.226Z · EA · GW

Right now, reaching non-EA donors is not a big priority, and the rebrand is correspondingly pretty far down on my priority list. This may change on a horizon of 1-3 years, though. (Rebranding has some benefits other than reaching non-EA donors, such as reducing reputational risk for the community from making very weird grants.) 

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T08:16:52.129Z · EA · GW

I want to push back against the idea that a name change would implicitly change the movement in a more longtermist direction (not sure you meant to suggest that, but I read that between the lines). I think a name change could quite plausibly also be very good for the global health and development and animal welfare causes. It could shift the focus from personal life choices to institutional change, which I think people aren't thinking about enough. 

The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T08:08:42.073Z · EA · GW

I really liked this comment, thanks!

The current discussion in the comments seems quite centered on "effective altruism vs. global priorities". I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than "global priorities". In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T07:59:28.938Z · EA · GW

I think it might actually be pretty good if EA groups called themselves Global Priorities groups, as this shifts the implicit focus from questions like "how do we best collect donations for charity?" to questions like "how can I contribute to [whichever cause you care about] in a systematic way over the course of a lifetime?", and I think the latter question is >10x more impactful to think about.

(I generally agree if there are different brands for different groups, and I think it's great that e.g. Giving What We Can has such an altruism-oriented name. I'm unconvinced that we should have multiple labels for the community itself.)

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-25T07:52:55.756Z · EA · GW

I'm noticing I don't fully understand the way in which you think "Global Priorities" would attract power-seekers, or what you mean by that. Like, I have a vague sense that you're probably right, but I don't see the direct connection yet. Would be very interested in more elaboration on this.

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-03-24T16:51:41.574Z · EA · GW

I still think "effective altruism" sounds a bit more like we've already found the correct answer to "what should we prioritize" rather than just being interested in the question, but I agree these are some good points.

Comment by Jonas Vollmer on [Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? · 2021-03-24T16:14:27.282Z · EA · GW

Oops, yes, edited.

Comment by Jonas Vollmer on Responses and Testimonies on EA Growth · 2021-03-24T14:53:40.766Z · EA · GW

Just wanted to say that I really liked this comment, thanks for writing it.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-24T12:50:07.674Z · EA · GW

Right now, I don't think this would be better than the alternative because:

  • To make the form useful, we'd have to ask quite a few questions and people would have to spend a correspondingly large amount of time filling it in. This is already a big issue for the permanent fund managers, but the cost-benefit ratio looks even worse for guest managers. E.g. if 200 people fill in the form and spend 1h on it per person, that's a 200-hour time cost for EA community members. Out of those people, we might only appoint ~5 as guest managers, and each guest manager would only spend ~50h on the fund. It's likely not worth spending 200 hours of EAs' time (plus time of the chairperson, EA Funds team, and CEA operations) on optimizing how we spend 250 hours of guest managers' time.
  • I expect a lot of people would fill it in, so it would be a lot of work to sort through the replies. I think it would be better to spend the additional time getting more recommendations and reaching out to them, because it will involve fewer wasted applications, for the reasons outlined in the "Our process" section above:
  • Fund managers control large and fast-growing pools of money and have access to sensitive data. We want to feel confident in their judgment and trustworthiness based on their track record. We felt that it would be considerably harder to sufficiently vet these qualities in candidates who weren’t familiar to us or informal advisors.
  • We believe that fund managers with a large network in the relevant areas do substantially better. Given that we weren’t expanding into new areas, we thought we would likely learn about the strongest and most well-networked candidates through our extended networks. This seems to have been borne out in practice: applicants who were recommended directly to us on average performed substantially better on our anonymized work tests than applicants who were recommended by other candidates or community members.
  • Public hiring processes tend to be more effortful, both for us and the applicants. By reaching out to people who we thought might be a good fit, we hopefully saved a lot of time for ourselves and others.
  • The guest manager model is generally quite effortful already, and making it even more effortful might tip the balance from "overall worth it" to "overall not worth it". However, if it turns out to work very well and we find a way to make it more efficient, I think it'll be worth reconsidering having a public application form.
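The trade-off in the first bullet above can be sketched as a quick back-of-the-envelope calculation. All figures below are the rough, illustrative estimates from the comment (200 applicants, 1h per application, ~5 appointments, ~50h of grantmaking each), not actual EA Funds data:

```python
# Back-of-the-envelope check on the open-application-form trade-off.
# All numbers are illustrative estimates from the comment above.

applicants = 200              # people who might fill in a public form
hours_per_application = 1     # time each applicant spends on the form
appointed = 5                 # guest managers selected from that pool
hours_per_guest_manager = 50  # grantmaking time each guest manager contributes

community_cost = applicants * hours_per_application       # hours spent applying
grantmaking_time = appointed * hours_per_guest_manager    # hours being optimized

# The form is only worth it if better selection improves the value of the
# ~250 grantmaking hours by more than the ~200 applicant-hours it costs
# (plus screening time for the chairperson, EA Funds team, and CEA ops).
print(f"community cost: {community_cost}h, grantmaking time optimized: {grantmaking_time}h")
```

On these assumptions, the community-wide application cost is of the same order as the grantmaking time being allocated, which is the core of the argument against a public form.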


That said, if you think there are significant benefits of having such a form that I'm overlooking, I'm keen to hear them and open to changing my mind and potentially implementing your suggestion.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-24T12:39:49.644Z · EA · GW

It's Elie + other GiveWell staff, most notably James Snowden.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-24T10:33:28.685Z · EA · GW

I don't think that's right. As you can see by comparing the payout reports of the Maximum Impact Fund with those of the Global Health and Development Fund, these two funds serve different purposes:

  • The Maximum Impact Fund grants exclusively to GiveWell top charities implementing proven interventions
  • The Global Health and Development Fund makes grants for technical assistance that helps scale evidence-backed global health and development interventions, but isn't itself RCT-backed

So the Global Health and Development Fund is more speculative and serves a different purpose than the Maximum Impact Fund. At least for now, I'm happy that both options exist.

I'm personally also interested in setting up a third option in the global health and development space: a hits-based global health and development fund. This fund could support things like developing-world public health regulation, research on neglected developing-world diseases, policy advocacy for cost-effective interventions, growth diagnostics research, etc. We could set up this option in collaboration with GiveWell (which has been doing a lot of work in the area) or independently.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-23T20:46:27.457Z · EA · GW

Current guest managers:

  • EA Infrastructure Fund: Ben Kuhn.
  • Long-Term Future Fund: Daniel Eth, Ozzie Gooen, Evan Hubinger.

I've also edited the post to include that information.

There's no secrecy here. Their names will appear in the public payout reports in May. If there's a lot of interest from the community in always learning two months earlier who the guest managers are, we could consider having a public list somewhere. I thought the two-month delay probably wouldn't bother anyone and it's a bit of a hassle to keep the list continually updated (especially if the list of guest managers changes multiple times, as was the case for this round).

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-03-23T20:38:52.865Z · EA · GW

I agree that greater representation of different viewpoints on the EA Infrastructure Fund seems useful. We aim to add more permanent neartermist fund managers (not just guest managers). Quoting from above:

  • We’ve struggled to round out the EAIF with a balance of neartermism- and longtermism-focused grantmakers because we received a much larger number of strong applications from longtermist candidates. To better reflect the distribution of views in the EA community, we would like to add more neartermists to the EAIF and have proactively approached some additional candidates. That said, we plan to appoint candidates mostly based on their performance in our hiring process rather than their philosophical views.

Does that answer your question? Please let me know if you had already seen that paragraph and thought it didn't address your concern.

EDIT: Also note that Ben Kuhn is serving as a guest manager this round. He works on neartermist issues.

(Terminological nitpick: It seems this is not an issue of "cause neutrality" but one of representation of different viewpoints. See here – the current fund managers are all cause-impartial; neartermist fund managers wouldn't be cause-agnostic either; and the fund is supporting cause-general and cause-divergent work either way.)

Comment by Jonas Vollmer on EA Funds is more flexible than you might think · 2021-03-15T10:30:04.098Z · EA · GW

Again, some fairly quick, off-the-cuff thoughts (if they sound wrong, it's possible that I communicated poorly):

  • Avoiding duplication of effort. E.g., lots of grantees apply to multiple funders simultaneously, and in Q4 2020, 3 grants were approved both by LTFF and SAF/SFF, creating substantial unnecessary overhead.
  • Syncing up on whether grants are net negative. E.g., we may think that grant A is overall worth funding, but has a risk of being net-negative, so would err on the side of not making it (to avoid acting unilaterally). If we talk to other grantmakers and they agree with our assessment after careful investigation of the risks, we can still go ahead and make the grant. Similarly, we may think grant B doesn't have such a risk, but by talking to another grantmaker, we may learn about an issue we were previously unaware of, and may decide not to make the grant.
  • Similar to the above, syncing up on grants in general (i.e. which ones are good use of resources, or what the main weaknesses of existing organizations are).
  • Joining forces on active grantmaking. E.g., another funder may have some promising ideas but not enough time to implement them all. EA Funds may have some spare resources and a comparative advantage for working on a particular one of those ideas, so it can go ahead and implement it, receiving input/mentorship from the other funder that we wouldn't otherwise have received.
  • Generally giving each other feedback on their approach and prioritization. E.g., we may decide to pursue an active grantmaking project that seems like poor use of resources, and other grantmakers may make us aware of that fact.

Comment by Jonas Vollmer on EA Funds is more flexible than you might think · 2021-03-10T10:25:46.109Z · EA · GW

What Asya said.

I'd add that fund managers seem aware of it being bad if everyone relies on the opinion of a single person/advisor, and generally seem to think carefully about it.

Comment by Jonas Vollmer on EA Funds is more flexible than you might think · 2021-03-05T23:15:53.080Z · EA · GW

Some quick thoughts:

  • EA seems constrained by specific types of talent and management capacity, and the longtermist and EA meta space has a hard time spending money usefully
  • In this environment, funders need to work proactively to create new opportunities (e.g., by getting new, high-value organizations off the ground that can absorb money and hire people)
  • Proactively creating such opportunities is typically referred to as "active grantmaking"
  • I think active grantmaking benefits a lot from resource pooling, specialization, and coordination, and less from diversification [edit: I think active grantmaking relies on diverse, creative ideas, but can be implemented within a single organization]
  • Likewise, in an ecosystem that's overall saturated with funding, it's quite likely that net-harmful projects receive funding; preventing this requires coordination, and diversification can be bad in such a situation
  • Given the above, I think funder coordination and specialization will have large benefits, and think the costs of funder diversification will often outweigh the benefits
  • However, I think the optimum for the EA ecosystem might still be to have 3-5 large donors instead of the status quo of 1-3 funders (depending on how you count them)
  • I think small and medium donors will continue to play an important role by funding projects they have local/unique information about (instead of giving to EA Funds)

(Thanks to some advisors who recently helped me think about this.)

Comment by Jonas Vollmer on EA Funds is more flexible than you might think · 2021-03-05T13:29:21.085Z · EA · GW

Some further, less important thoughts:

  • Some people who repeatedly didn’t get funded have been fairly vocal about that fact, creating the impression, at least among some people, that it’s really hard to get funded. I feel unhappy about this because it seems to discourage people from launching new things. The reason why a proposal doesn’t get funded is usually quite specific to the project and person. They may get funded with a different project, or a different person may get funded for the same kind of project.
  • The absolute number of careful long-term EA funders is still low, but this has been growing over the past years. Extrapolating from that, it seems plausible that the funding situation in EA will likely be excellent in the years to come.
  • I believe (and others at EA Funds agree) that novel projects often shouldn’t receive long-term funding because it’s still unclear whether they will have much of an impact. At the same time, I’m keen to ensure that a project’s staff can feel financially secure. Based on this, we should encourage grantseekers to pay themselves generous salaries for a short time frame, so they don’t have to worry about financial security, while also strongly considering discontinuing their project early on if it doesn’t bear fruit.
Comment by Jonas Vollmer on Apply to EA Funds now · 2021-03-05T09:42:56.969Z · EA · GW

I just published this article about some potential misconceptions that may help people decide whether to apply.

Comment by Jonas Vollmer on Missing Market: Sustainable African ETF · 2021-03-04T22:22:33.746Z · EA · GW

It seems plausibly good for the world if this existed. But for you personally, investing in AFK (or investing conventionally and donating the higher risk-adjusted returns) might be fine. See these articles:


If you wanted to make this happen, another path to success could be to find investors with sufficient interest, then approach a white-label ETF provider and get them to set up a fund; see here.

Comment by Jonas Vollmer on Apply to EA Funds now · 2021-03-04T20:17:29.180Z · EA · GW

After looking more into this, we've decided not to evaluate applications for Community Building Grants during this grant application cycle. This is because we think CEA has a comparative advantage here due to their existing know-how, and they're still taking some exceptional or easy-to-evaluate grant applications, so some of the most valuable work will still be funded. It's currently unclear when CBG applications will reopen, but CEA is thinking carefully about this question and I'll be coordinating with them.

That said, we're interested in receiving applications from EA groups that aren't typical community-building activities – e.g., new experiments, international community-building, spin-offs of local groups, etc. If you're unsure whether your project qualifies, just send me a brief email.

I'm aware this isn't the news you and others may have been hoping for, so I personally want to contribute to resolving this gap in the funding ecosystem long-term.

Edit: Huh, some people downvoted. If you have concerns about this comment or decision, please leave a comment or send me a PM.

Comment by Jonas Vollmer on Why EA groups should not use “Effective Altruism” in their name. · 2021-03-02T09:29:05.199Z · EA · GW

Some further, less important points:

  • We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term.
  • Taking action on EA issues doesn't really require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if they don’t strongly self-identify as altruists. Some may be mostly interested in contributing to the intellectual aspects without making large personal sacrifices.
  • The name of CEA was determined through a careful process. However, the adoption of the EA label for the entire community happened organically and wasn’t really a deliberate decision.
  • "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
Comment by Jonas Vollmer on Why EA groups should not use “Effective Altruism” in their name. · 2021-03-02T09:22:28.668Z · EA · GW

Great points; I had been thinking along similar lines. I want to second the points about awkward translations, and that a lot of people don't really know what "altruism" means.

Some additional thoughts:

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

  • Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me.
  • Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard professionals in other contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.

"Effective altruism" sounds like a strong identity:

  • Many people want to keep their identity small, but EA sounds like a particularly strong identity: it's usually perceived as a moral commitment, a set of ideas, and a community all at once. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
  • Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may play a role for several individuals.
  • In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas.

Some thoughts on potential implications:

  • These concerns don't just affect EA groups. The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc., and the EA brand might plausibly be unattractive for some of these people. If that's true, the EA brand might act as a cap on EA's long-term growth potential, so we should perhaps aim to de-emphasize it. Or at least do some marketing research on whether this is indeed an issue.
  • EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.
  • I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may no longer even be feasible at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.

Comment by Jonas Vollmer on Everyday Longtermism · 2021-02-22T13:14:32.466Z · EA · GW

It might be interesting to compare this to everyday environmentalism or everyday antispeciesism. EAs have already thought about these areas a fair bit and have said interesting things about them in the past.

In both of these areas, the following seems to be the case:

  1. donating to effective nonprofits is probably the best way to help at this point, 
  2. some other actions look pretty good (avoiding unnecessary intercontinental flights and fuel-inefficient cars, eating a plant-based diet), 
  3. other actions make a negligibly small difference per unit of cost (unplugging your phone charger when you're not using it, avoiding animal-based food additives),
  4. there are some harder-to-quantify aspects that could be very good or not (activism, advocacy, etc.),
  5. there are some virtues that seem helpful for longer-term, lasting change (becoming more aware of how products you consume are made and what the moral cost is, learning to see animals as individuals with lives worth protecting).

EAs are already thinking a lot about optimizing #1 by default, so perhaps the project of "everyday longtermism" could be about exploring whether actions fall within #2 or #3 or #4 (and what to do about #4), and what the virtues corresponding to #5 might look like.

Comment by Jonas Vollmer on Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging” · 2021-02-21T17:30:14.490Z · EA · GW

I think this post uses the term "Pascal's mugging" incorrectly, and I've seen this mistake frequently so I thought I'd leave a comment. 

Pascal's mugging refers to scenarios with tiny probabilities (less than 1 in a trillion or so) of vast utilities (potentially higher than the largest utopia/dystopia that could be achieved in the reachable universe), and presents a decision-theoretic problem. There is some discussion in Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? and Pascal's Muggle: Infinitesimal Priors and Strong Evidence. Quoting from the first of those pieces:

Yet it would also be naive to say things like “Long-termists are victims of Pascal’s Mugging.”

I think the correct term for the issue you're describing might be something like "cause robustness" or "conjunctive arguments" or similar.
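As a toy illustration (my own numbers, not from the comment above) of the decision-theoretic problem Pascal's mugging poses: under naive expected-value maximization, an absurdly improbable offer with astronomical payoff can dominate a near-certain, modest one.

```python
def expected_value(probability: float, utility: float) -> float:
    """Naive expected value: probability of success times utility if successful."""
    return probability * utility

# A near-certain intervention with a modest payoff (arbitrary utility units).
ordinary = expected_value(0.9, 1_000)      # EV = 900

# A "mugger's" offer: roughly a 1-in-a-trillion chance of a vast utility.
mugging = expected_value(1e-12, 1e20)      # EV = 1e8

# Naive EV maximization picks the mugging by a factor of over 100,000 –
# the verdict that the cited pieces treat as a problem for decision theory.
print(ordinary, mugging, mugging > ordinary)
```

The point of the example is only to show the scale mismatch; it takes no stance on how the problem should be resolved.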