Posts

Public reports are now optional for EA Funds grantees 2021-11-13T00:37:23.281Z
You can now apply to EA Funds anytime! (LTFF & EAIF only) 2021-06-17T20:06:11.610Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
[Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? 2021-03-24T15:45:58.043Z
Some quick notes on "effective altruism" 2021-03-24T15:30:48.240Z
EA Funds has appointed new fund managers 2021-03-23T10:55:29.535Z
EA Funds is more flexible than you might think 2021-03-05T09:35:11.737Z
Apply to EA Funds now 2021-02-13T13:36:10.840Z
Giving What We Can & EA Funds now operate independently of CEA 2020-12-22T03:47:48.140Z
How to best address Repetitive Strain Injury (RSI)? 2020-11-19T09:15:27.271Z
Why you should give to a donor lottery this Giving Season 2020-11-17T12:40:02.134Z
Apply to EA Funds now 2020-09-15T19:23:38.668Z
The EA Meta Fund is now the EA Infrastructure Fund 2020-08-20T12:46:31.556Z
EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z

Comments

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-25T09:58:03.771Z · EA · GW

> Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?

If you're a <$500k/y donor, donate to EA Funds; otherwise tell EA Funds to refer weird grant applications to you (especially if you're neartermist – I don't think we're currently constrained by longtermist/meta donors who are open to weird ideas).

Regarding charter cities, I don't think EA Funds would be worried about funding them. However, I haven't yet encountered human-centric (as opposed to animal-inclusive) neartermist (as opposed to longtermist) large private donors who are open to weird ideas, and fund managers haven't been particularly excited about charter cities.

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-25T09:50:15.337Z · EA · GW

> This doesn't seem like it is common knowledge.

To me, it feels like I (and other grantmakers) have been saying this over and over again (on the Forum, on Facebook, in Dank EA Memes, etc.), and yet people keep believing it's hard to fund weird things. I'm confused by this.

Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers. 

Sure, but that argument applies to individual donors in the same way. (You might say that having more diverse decision-makers helps, but I'm pretty skeptical and think this will instead just lower the bar for funding.)

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-25T09:44:04.316Z · EA · GW

Yeah, I agree. (Also, I think it's a lot harder / near-impossible to sustain such high returns on a $100b portfolio than on a $1b portfolio.)

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-25T09:42:18.512Z · EA · GW

Markets are made efficient by really smart people with deep expertise. Many EAs fit that description, and have historically achieved such returns on trades/investments backed by a solid argument and without taking crazy risks.

Examples include: crypto arbitrage opportunities like these (without exposure to crypto markets), the Covid short, early crypto investments (high-risk, but returns were often >100x, implying very favorable risk-adjusted returns), prediction markets, meat alternatives.

Overall, most EA funders outperformed the market over the last 10 years, and they typically had pretty good arguments for their trades.

But I get your skepticism and also find it hard to believe (and would also be skeptical of such claims without further justification).

Also note that returns will get a lot lower once more capital is allocated in this way. It's easy to make such returns on $100 million, but much harder at larger scale.

(Made some edits)

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-24T21:58:27.334Z · EA · GW

Strong upvote: I think the "GiveDirectly of longtermism" is investing* the money and deploying it to CEPI-like (but more impactful) opportunities later on.

* Donors should invest it in ways that return ≥15% annually (and plausibly 30-100% on smaller amounts, with current crypto arbitrage opportunities). If you don't know how to do this yourself, funging with a large EA donor may achieve this.

(Made a minor edit)

Comment by Jonas Vollmer on A Red-Team Against the Impact of Small Donations · 2021-11-24T21:55:18.472Z · EA · GW

I want to mildly push back on the "fund weird things" idea. I'm not aware of EA Funds grants having been rejected due to being weird. I think EA Funds is excited about funding weird things that make sense, and we find it easy to refer them to private donors. It's possible that there are good weird ideas that never cross our desk, but that's an information problem rather than a weirdness problem.

Edit: The above applies primarily to longtermism and meta. If you're a large (>$500k/y) neartermist donor who is interested in funding weird things, please reach out to us (though note that we have had few if any weird grant ideas in these areas).

Comment by Jonas Vollmer on Disentangling "Improving Institutional Decision-Making" · 2021-11-13T17:37:01.785Z · EA · GW

I've been skeptical of much of the IIDM work I've seen to date. By contrast, from a quick skim, this piece seemed pretty good to me because it has more detailed models of how IIDM may or may not be useful, and is opinionated in a few non-obvious but correct-seeming ways. I liked this a lot – thanks for publishing!

Like, if anyone feels like handing out prizes for good content, I'd recommend that this piece of work receive a $10k prize (though perhaps I'd want to read it in full before fully recommending).

Comment by Jonas Vollmer on Should EA Global London 2021 have been expanded? · 2021-11-10T14:58:07.040Z · EA · GW

With that sentence, I only meant to suggest that I wouldn't want CEA to become more risk-averse due to this post (or similar future posts). I didn't mean to implicitly discourage thoughtful critiques like this one. Sorry if my comment read that way! I also agree with you that CEA should avoid repeating any mistakes that were made.

I've edited the previous comment to clarify.

Comment by Jonas Vollmer on Should EA Global London 2021 have been expanded? · 2021-11-09T17:46:29.499Z · EA · GW

I think it's great that CEA increased the event size on short notice. It's hard to anticipate everything in advance for complex projects like this one, and I think it's very cool that when CEA realized the potential mistake, it fixed the issue and expanded capacity in time.

I'd much rather have a CEA that gets important things broadly right and acts swiftly to fix any issues in time, than a CEA that overall gets less done due to risk aversion resulting from pushback from posts like this one*, or one that stubbornly sticks to early commitments rather than flexibly adjusting its plans.

I also feel like the decision not to worry too much about Covid seems correct given the most up-to-date risk estimates, similar to how conference organizers usually don't worry too much about the risk of flu/norovirus outbreaks.

(Edit - disclosure: From a legal perspective, I am employed by CEA, but my project (EA Funds) operates independently (meaning I don't report to CEA staff), and I wasn't involved in any decisions related to EA Global.)

* Edit: I don't mean to discourage thoughtful critiques like this post. I just don't want CEA to become more risk-averse because of them.

Comment by Jonas Vollmer on RyanCarey's Shortform · 2021-10-22T15:11:14.169Z · EA · GW

PPP-adjusted GDP seems less geopolitically relevant than nominal GDP. Here's a nominal GDP table based on the same 2017 PwC report (source); the results are broadly similar:

Comment by Jonas Vollmer on Truthful AI · 2021-10-22T11:39:28.406Z · EA · GW

(Unimportant: Why is falsity raised to the fourth power?)

Comment by Jonas Vollmer on MaxDalton's Shortform · 2021-10-20T17:14:02.196Z · EA · GW

I agree with those examples! 

(Maybe I feel somewhat skeptical about 'move slowly with high quality' ever being a good choice – it seems to me that the quality/speed tradeoff is often overstated, and there's actually not that much of a tradeoff.)

Comment by Jonas Vollmer on Listen to more EA content with The Nonlinear Library · 2021-10-20T17:11:19.876Z · EA · GW

If someone deletes their original post, do you auto-remove it from the podcast as well? That would seem important to me.

Comment by Jonas Vollmer on Listen to more EA content with The Nonlinear Library · 2021-10-20T17:09:29.890Z · EA · GW
> Once something is up on the internet, it's up forever. Taking it down post-facto doesn't actually undo the damage.

I think this isn't actually correct – I think it depends a lot on the type of content, how likely it is to get mirrored, the data format, etc. E.g. the old Leverage Research website is basically unavailable now (except for the front page I think), despite being text (which gets mirrored a lot more).

> You only need one person to sue you for things to go quite badly wrong.

Whether it actually goes 'badly wrong' depends on the type of lawsuit, the severity of the violation, the PR effects, etc. It's probably good to err on the side of not violating any laws, and worth looking into the legal situation a bit before publishing.

I otherwise agree with your points!

Comment by Jonas Vollmer on MaxDalton's Shortform · 2021-10-18T18:27:30.251Z · EA · GW

Interesting. Could you say more about why you believe that there are clusters of traits that go well together?

The main example that comes to my mind is that people have different personalities and preferences, so if your team clusters around a set of certain personality traits and preferences, that implies that some specific organizational design choices work better than others.

But I'd feel more reluctant to say things like "move fast and break things works well with hiring quickly"; I find it hard to see any obvious hills based on the variables you mentioned.

I would have said something more like: Which strategy is best will depend on the specifics of what you're trying to do (market, product, goals). 

Comment by Jonas Vollmer on You can talk to EA Funds before applying · 2021-10-01T22:24:37.371Z · EA · GW

I've updated that post.

Comment by Jonas Vollmer on Apply to EA Funds now · 2021-10-01T22:24:13.028Z · EA · GW

These dates are out of date; you can now apply anytime. Please refer to our website for up-to-date information.

Comment by Jonas Vollmer on How are resources in EA allocated across issues? · 2021-09-27T07:21:33.816Z · EA · GW

Thanks, agreed!

Comment by Jonas Vollmer on How are resources in EA allocated across issues? · 2021-09-19T03:26:51.050Z · EA · GW

I'd guess that the labor should be valued at significantly more than $100k per person-year. Your calculation suggests that 64% of EA resources spent are funding and 36% are labor, but given that we're talent-constrained, I would guess that the labor should be valued at something closer to $400k/y, suggesting a split of 31%/69% between funding and talent, respectively. (Or put differently, I'd guess >20 people pursuing direct work could make >$10 million per year if they tried earning to give, and they're presumably working on things more valuable than that, so the total should be a lot higher than $200 million.)
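
To make the revaluation arithmetic explicit, here's a minimal sketch (the 64%/36% baseline and the $400k/y figure are from the paragraph above; the rest is just algebra):

```python
# Revaluing labor in the funding-vs-labor split of EA resources.
# Baseline: labor valued at $100k/person-year gives 64% funding / 36% labor.
funding_share, labor_share = 0.64, 0.36

# Suggested valuation: $400k/person-year, i.e. 4x the baseline.
revaluation = 400_000 / 100_000

# Funding dollars stay fixed; the labor side is scaled up by 4x.
new_total = funding_share + labor_share * revaluation
print(f"funding: {funding_share / new_total:.0%}")              # -> 31%
print(f"labor:   {labor_share * revaluation / new_total:.0%}")  # -> 69%
```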

Using those figures, the overallocation to global poverty looks less severe, we're over- rather than underallocating to meta, and the other areas look roughly similar (e.g., there still is a large gap in AI).

Regarding the overallocation to meta, one caveat is that the question was multi-select, and many people who picked that might only do a relatively small amount of meta work, so perhaps we're allocating the appropriate amount.

Link to spreadsheet

Comment by Jonas Vollmer on What we learned from a year incubating longtermist entrepreneurship · 2021-09-19T02:53:04.349Z · EA · GW

Regarding a YC incubator model, I think the main issue is just that people rarely generate sufficiently well-targeted and ambitious startup ideas. I really don't think we need another dozen donation apps or fundraising orgs, but that's what people often come up with. I think we'd want something that does more to help people develop better ideas. (Perhaps that's what you had in mind as well.)

Comment by Jonas Vollmer on What we learned from a year incubating longtermist entrepreneurship · 2021-09-19T02:49:49.388Z · EA · GW

FWIW, as someone who previously warned about risks of accidental harm, I personally mostly agree with this comment. I think what I care about more is "option value to shut projects down if they turn out to be harmful" rather than preventing damage in the first place (with the exception of projects that have very large negative effects from the very beginning).

Comment by Jonas Vollmer on EffectiveAltruismData.com: A Website for Aggregating and Visualising EA Data · 2021-09-19T02:43:21.030Z · EA · GW

Very exciting! In case funding would help with further developing this project, consider applying here; our process is designed to be fast and easy.

Edit: Ah, I can see that you mention this in your post - we're looking forward to receiving your application!

Comment by Jonas Vollmer on University EA Groups Should Form Regional Groups · 2021-09-13T18:01:06.555Z · EA · GW

I commented on a draft of this post. I haven't re-read it in full, so I don't know to what degree my comments were incorporated. Based on a quick glance it seems they weren't, so I thought I'd copy the main comments I left on that draft. My main point is that I think inserting regional groups into the funding landscape would likely worsen rather than improve the funding situation. I still think regional groups seem promising for other reasons.

Some of my comments (copy-paste, quickly written):

[Regarding applying for funding:] At a high level, my guess would be that this solution would increase overhead and friction in distributing money, rather than reducing it. I think setting up lots of regional grantmakers is a lot of work.

That said, I think regional groups can be very useful and valuable for other reasons. I just don't really think they should do grantmaking.

I'm worried about different regional groups providing inconsistent-quality service, and/or applying inconsistent criteria in distributing money.

I think we should think of ways to address the psychological issue of people being afraid, rather than building a lot of structure around this.

I think [the EAIF would] have a pretty easy time setting up more scalable systems [once there is a much larger number of groups].

E.g. we could set up more standardized, faster processes for grant applications that fit certain categories and can be quickly reviewed by less senior people. The bottleneck for setting up such a system is having a sufficient number of applications for it to be worth doing.

You also need to build the infrastructure for making the payments themselves efficiently, doing the financial accounting, running an entity, tax reporting, etc. – (…)

I think people routinely underestimate the time cost of running a legal entity with a lot of activity. I wish people would generally try really hard to eliminate any unnecessary operational busywork. Instead, we should focus relentlessly on the EA content and promising people, and use very pragmatic, fast solutions for handling admin things.

Comment by Jonas Vollmer on How to best address Repetitive Strain Injury (RSI)? · 2021-09-07T06:17:59.581Z · EA · GW

Some further recommendations:

  • Keep using your hands, acknowledging it may be (partly) psychosomatic, and not worrying too much about it. A friend told me they saw a surgeon for RSI, and the surgeon recommended continuing to use the hands as normal and not worrying too much, and that helped in their case.
  • Reduce phone usage; don't use the phone in bed while lying down; don't play games on the phone.

Comment by Jonas Vollmer on Get 100s of EA books for your student group · 2021-09-01T09:30:02.575Z · EA · GW

In 80K's The Precipice mailing experiment, 15% of recipients reported reading the book in full after a month, and ~7% reported reading at least half.

I'm also aware of some anecdotal cases where books seemed pretty good - e.g., I know of a very promising person who got highly involved with longtermism within a few months primarily based on reading The Precipice.

The South Korea case study is pretty damning, though. I wonder if things would look better if there had been a small number of promising people who helped onboard newly interested ones (or whether that was already the case and it didn't work despite that).

I'd be pretty interested in engagement hours based on email clicks, if you have that data. I care less about open rates and more about whether someone goes on to read through key ideas pages for several hours based on that.

All that said, the high open rates you mentioned have updated me somewhat towards mailing lists being more valuable than I previously thought.

Comment by Jonas Vollmer on Get 100s of EA books for your student group · 2021-08-25T11:17:51.280Z · EA · GW

To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.

As you said, I think books can be combined with mailing lists. (If there were a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5% probability (most people don't read newsletters) of getting someone to engage for ~40h via a mailing list. And while I'd rather have one person engage deeply than many people engage shallowly, I think the first few engagement hours tend to be more valuable (less overdetermined) than the ones that follow later.)
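
Making the expected-value comparison behind "they're similarly good" explicit, using the rough probabilities and hours above:

$$
\mathbb{E}[\text{hours}_{\text{book}}] \approx 0.20 \times 10\,\text{h} = 2\,\text{h},
\qquad
\mathbb{E}[\text{hours}_{\text{list}}] \approx 0.05 \times 40\,\text{h} = 2\,\text{h},
$$

so the two channels come out equal in expected engagement hours under these rough numbers, and the tie is broken by where in the engagement curve those hours fall.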

Comment by Jonas Vollmer on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-25T09:46:10.402Z · EA · GW

Some work that seems relevant:

Comment by Jonas Vollmer on What are the EA movement's most notable accomplishments? · 2021-08-23T06:48:36.851Z · EA · GW

Strictly speaking, a lot of the examples are outputs or outcomes, not impacts, and some readers may not like that. It could be good to make that more explicit at the top.

I also want to suggest using more imagery, graphs, etc. – more like visual storytelling and less like just a list of bullet points.

Comment by Jonas Vollmer on Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways · 2021-08-19T15:50:41.940Z · EA · GW

I think it's really cool that you're making this available publicly, thanks a lot for doing this!

Comment by Jonas Vollmer on Mission Hedgers Want to Hedge Quantity, Not Price · 2021-08-19T15:48:53.283Z · EA · GW

Great points, thanks for raising them!

One potential takeaway could be that we may want to set up the financial products we'd like to use for hedging ourselves – e.g., by setting up prediction markets for the quantity of oil consumption. (Perhaps FTX would be up for it, though it won't be easy to get liquidity.)

Comment by Jonas Vollmer on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-19T09:45:14.416Z · EA · GW

I'm surprised this comment was downvoted so much. It doesn't seem very nuanced, but there's obviously a lot going wrong with modern capitalism. While free markets have historically been a key driver of the decline of global poverty (see e.g. this and this), I don't think it's wrong to say that longtermists should be thinking about large-scale economic transitions (though these should most likely still involve free markets).

Comment by Jonas Vollmer on Some quick notes on "effective altruism" · 2021-08-16T10:52:00.777Z · EA · GW

A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:

> If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.

> In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.

Comment by Jonas Vollmer on Denise_Melchin's Shortform · 2021-08-12T22:51:25.104Z · EA · GW

I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.

I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.

I'm overall unsure how much to emphasize donations. It does seem the most robust option for the greatest number of people. But if direct work is often even more impactful, perhaps it's still worth emphasizing that more; it often seems more impactful to have 10 extra people do direct work than 100 people donate 10%. Of course, ideally we'd find a way to speak to all of them.

Comment by Jonas Vollmer on Most research/advocacy charities are not scalable · 2021-08-10T10:42:40.806Z · EA · GW

I have edited all our fund pages to include the following sentence:

> Note: We are temporarily unable to display correct fund balances. Please ignore the balance listed below while we are fixing the issue.

Comment by Jonas Vollmer on Most research/advocacy charities are not scalable · 2021-08-10T10:39:42.729Z · EA · GW

I strongly agree with the premise of this post and really like the analysis, but feel unhappy with the strong focus on physical products. I think we should instead think about a broader set of scalable ways to usefully spend money, including but not limited to physical products. E.g. scholarships aren't a physical product, but large scholarship programs could plausibly scale to >$100 million.

(Perhaps this has been said already; I haven't bothered reading all the comments.)

Comment by Jonas Vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-27T12:54:10.540Z · EA · GW

Yeah, in my model, I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.

I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)

Comment by Jonas Vollmer on What would you do if you had half a million dollars? · 2021-07-22T11:19:19.420Z · EA · GW

I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.

I'd be pretty unhappy if such a donor then felt forced to instead do their own grantmaking despite not having a comparative advantage for doing so (possibly underperforming Open Phil's last dollar), or didn't participate in the donor lottery in the first place. I think the above use case is one of the most central ones that I hope to address.

I tentatively agree that further diversification of funding sources might be good, but I don't think the donor lottery is the right tool for that.

Comment by Jonas Vollmer on What would you do if you had half a million dollars? · 2021-07-22T11:11:30.165Z · EA · GW

I really liked this comment. Three additions:

  • I would take a close look at who the grantmakers are and whether their reasoning seems good to you. Because there is significant fungibility and many of these funding pools have broad scopes, I personally expect the competence of the grantmakers to matter at least as much as the specific missions of the funds.
  • I don't think it's quite as clear that the LTFF is better than the EA Infrastructure Fund; I agree with your argument but think this could be counterbalanced by the EA Infrastructure Fund's greater focus on talent recruitment, or other factors.
  • I don't know to what degree it is hard for Longview to get fully unrestricted funding, but if it is, giving them unrestricted funding may be a great idea. They may run across promising opportunities that aren't palatable to their donors, and handing those opportunities over to EA Funds or Open Philanthropy may not be straightforwardly easy in some cases.

(Disclosure: I run EA Funds, which hosts the LTFF and EA Infrastructure Fund. Opinions my own, as always.) 

Comment by Jonas Vollmer on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T10:36:53.413Z · EA · GW

It's worth pointing out that these questions apply specifically to global health and development; the answers could be very different in other cause areas.

I don't think question 1 provides evidence that money will do more good in the future. It might even suggest the opposite: As you point out, malaria prevention and deworming might run out of room for more funding, and to me this seems more likely than the discovery of a more cost-effective option that is also highly scalable (beyond $30 million per year).

Comment by Jonas Vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-09T09:49:53.939Z · EA · GW

I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here.

The model assumes:

  • AI companies return 20% annually over the next 10 years in a short-timelines world, but less than the global market portfolio in a long-timelines world,
  • AI companies have equal or lower expected returns than the global market portfolio (otherwise we're just making a bet on AI),
  • money is 10x more useful in a short-timelines world than in a long-timelines world,
  • logarithmic utility.

In the model, the extra utility from the AI portfolio is equivalent to an extra 2% annual return. 

My guess is that this is less than the extra returns one might expect if one believes the market doesn't price in short AI timelines sufficiently, but it makes the case for investing in an AI portfolio more robust.

Caveat: I did this quickly. I haven't thought very carefully about the choice of parameters, haven't done sensitivity analyses, etc.
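
For readers who want the mechanics of this kind of estimate, here's a minimal sketch of how such a certainty-equivalent calculation can be set up. This is not the linked spreadsheet: the short-timelines probability, the market return, and the choice to model "10x more useful" as a 10x weight on log utility are illustrative assumptions of mine, so the printed number differs from the 2% above.

```python
import math

# --- Illustrative assumptions (not the spreadsheet's parameters) ---
p_short = 0.3      # assumed probability of the short-timelines world
r_market = 0.07    # assumed annual market return in both worlds
r_ai_short = 0.20  # AI portfolio return in the short-timelines world (from above)
years = 10
w_short = 10.0     # money assumed 10x more useful in the short-timelines world

# Choose the AI portfolio's long-timelines return so its expected return
# equals the market's (assumption 2 above).
r_ai_long = (r_market - p_short * r_ai_short) / (1 - p_short)

def expected_utility(r_short, r_long):
    """Probability- and usefulness-weighted log utility of final wealth (start = 1)."""
    return (p_short * w_short * years * math.log(1 + r_short)
            + (1 - p_short) * years * math.log(1 + r_long))

eu_ai = expected_utility(r_ai_short, r_ai_long)

# Constant annual return r with the same expected utility:
# solve (p*w + 1-p) * years * log(1+r) = eu_ai for r.
weight = p_short * w_short + (1 - p_short)
r_equiv = math.exp(eu_ai / (weight * years)) - 1
print(f"certainty-equivalent return: {r_equiv:.1%} "
      f"(extra vs. market: {r_equiv - r_market:.1%})")
# With these toy parameters: ~16.2% (extra ~9.2%). The result is very
# sensitive to the assumed probability and utility weighting; the
# spreadsheet's parameters yield the much smaller ~2% premium.
```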

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-06T17:03:28.724Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-07-06T17:03:12.812Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-07-06T17:02:07.016Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-07-06T16:59:44.499Z · EA · GW

I am very excited to announce that we have appointed Max Daniel as the chairperson at the EA Infrastructure Fund. We have been impressed with the high quality of his grant evaluations, public communications, and proactive thinking on the EAIF's future strategy. I look forward to having Max in this new role!

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-06T09:05:49.327Z · EA · GW

I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority).

One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-05T11:45:28.768Z · EA · GW

> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

> Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).

Just wanted to flag briefly that I personally disagree with this:

  • I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
  • I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.

* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
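
As a minimal sketch of what such an all-in calculation could look like (every number below is hypothetical, picked only to illustrate the adjustments listed above):

```python
# Toy adjusted-ROI estimate for a fundraising project.
raised = 1_000_000          # money moved per year ($), hypothetical
counterfactual_share = 0.6  # credit left after the donor etc. get theirs (Shapley-style)
financial_cost = 150_000    # annual budget ($), hypothetical
staff_years = 3
talent_cost = 400_000       # per staff-year; EA talent valued far above market salary
discount_rate = 0.12        # within the ~10-15% range mentioned above

credited = raised * counterfactual_share
# Crude stand-in for the time discount: assume the raised money is
# deployed roughly a year later than the costs are incurred.
credited_discounted = credited / (1 + discount_rate)

total_cost = financial_cost + staff_years * talent_cost
print(f"adjusted multiplier: {credited_discounted / total_cost:.2f}x")
# -> 0.40x with these toy numbers: once talent is priced in, an apparently
#    successful fundraiser ($1M/y raised on a $150k budget) can net out negative.
```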

I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I were a neartermist, e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more time on recruiting talent, improving the trajectory of the community, and solving the problems on the object level.

Overall, I want to continue funding good fundraising organizations.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-07-01T19:52:11.271Z · EA · GW

(Also agree with Max. Long lead times in academia definitely qualify as a "convincing reason" in my view.)

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-29T13:55:07.570Z · EA · GW

I wouldn't rule it out, but typically we might say something like: We are interested in principle, but would like to wait for another 6-12 months to see how your project/career/organization develops in the meantime before committing the funding (unless there's a convincing reason for why you need the funding immediately).

Comment by Jonas Vollmer on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-06-28T15:15:25.304Z · EA · GW

I'm excited that there's now more work happening on Effective Institutions / IIDM!

Some questions and constructive criticism that's hopefully useful:

> The aim was to gauge the diversity of perspectives in the EA community on what "counts" as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.

It seems that you're starting out with the assumption that IIDM is a useful category/area, and that figuring out its scope is helpful for determining what's most impactful. Was there a particular reason for taking the intermediate step via the scope/definition of IIDM? I personally would be curious to learn which kinds of activities people find most promising in this area, and why. In comparison, the scope question might just track a 'verbal dispute' rather than opinions on ground truths. (Edit: Looks like EdoArad pointed out something similar above.)

Relatedly, the survey gives a picture of what some people interested in IIDM believe about some high-level abstract categories. I wonder if the survey also gave you any insight into the types of activities that people think we should work on. E.g., what specific things do people have in mind when they talk about "Institutional design / governance", and why exactly do they think it's important? Does their reasoning hold up on closer inspection? I personally would feel very excited to see more object-level discussion of that kind. Perhaps a small number of people who have thought about IIDM carefully and systematically could share their object-level arguments on which approaches seem the most promising to them.

Comment by Jonas Vollmer on Shallow evaluations of longtermist organizations · 2021-06-26T16:17:44.807Z · EA · GW

I actually think it would be cool to have more posts that explicitly discuss which organizations people should go work at (and what might make it a good personal fit for them).