EA Infrastructure Fund Grants (November 2020) 2020-12-01T12:11:58.488Z
EA Meta Fund Grants – July 2020 2020-08-12T11:58:52.900Z
Long-Term Future Fund and EA Meta Fund applications open until June 12th 2020-05-15T12:28:10.437Z
Long-Term Future Fund and EA Meta Fund applications open until January 31st 2020-01-06T18:26:27.335Z
EA Meta Fund Grants - July 2019 2019-09-13T15:40:59.914Z
EA Meta Fund Grants - March 2019 2019-05-23T08:55:23.292Z
EA Meta Fund AMA: 20th Dec 2018 2018-12-19T18:06:02.484Z
EA Meta Fund Grants Report: Nov 18 2018-12-03T10:45:26.767Z


Comment by agdfoster on Can my self-worth compare to my instrumental value? · 2020-11-25T18:24:50.665Z · EA · GW

I echo thoughts here re helping yourself generally being the smart thing to do. I personally love that my current work relies so heavily on my mental wellbeing, it means I can't tempt myself with overly self-sacrificial narratives.

This said, I also LOVE that EA isn't about me/us. It's the tool for doing more good with our careers, and lots of the people involved in it make for great like-minded friends, but it isn't, and shouldn't be, our home, or crutch.

I don't think a community full of people who [make EA too much their everything] is as stable or as robust as one full of people with simply a shared mission.

I like that I have a strong instrumental reason, beyond just common sense, not to feel compelled to make EA/utilitarianism/impact my everything.

Comment by agdfoster on Can my self-worth compare to my instrumental value? · 2020-11-25T18:06:36.955Z · EA · GW

I found this a helpful sound bite, thank you

Comment by agdfoster on Investing to Give: FP Research Report · 2020-11-13T15:39:27.620Z · EA · GW

If someone felt we might have insights in the future so valuable that it makes sense to hold onto money at a 3-5% yearly return over inflation, why not tell them to instead fund achieving those insights, and make them happen sooner?

Comment by agdfoster on Some (Rough) Thoughts on the Value of Campaign Contributions · 2020-08-25T13:38:52.601Z · EA · GW

This was helpful research for something I needed - thank you

Comment by agdfoster on EA Meta Fund Grants – July 2020 · 2020-08-14T13:27:01.038Z · EA · GW

"High-impact", for simplicity (they have a very large total number of grants), is set as roughly the status quo: groups recommended by GiveWell, funded by Open Phil, ACE charities etc. FP manage their own list and we are >90% in agreement on what is in that list. None of the largest grants in the list are groups we feel conflicted about.

In an ideal world we would of course evaluate every group their pledgers have counterfactually funded, but that's not really tractable. And we try to use their quantitative outcomes as only one of several signals as to how well they're doing (it's very tempting to fall into a rabbit hole of data analysis for a group with such clear and measurable first-order outcomes).

Comment by agdfoster on EA Meta Fund Grants – July 2020 · 2020-08-14T13:23:25.160Z · EA · GW

FP aren't a straightforward advisory group; they have a pledge and a community, so the $19m is the total to high-impact charities within their pledger community. FP's research team have attempted to estimate which of those donations happened as a result of FP advisory/marketing work, which is hard and, as with any self-reporting, open to becoming a KPI that drifts and ends up misreported. My current view of the FP individuals who did this estimate work, though, is that they have high intellectual honesty and thoroughness, that they are aware of their own misincentives, and that when I spot-checked a number of their figures in 2018-19 they were good estimates, perhaps even on the conservative side.

Comment by agdfoster on EA Meta Fund Grants – July 2020 · 2020-08-14T13:18:44.886Z · EA · GW

I agree with Denise. Although it's worth noting that our bar for a mentorship program worth funding does have to be quite high.

Comment by agdfoster on EA Meta Fund Grants – July 2020 · 2020-08-14T13:16:54.284Z · EA · GW

+1. A major factor is also that writing tastefully and responsibly about the things we are concerned about with an organisation would probably double or triple the size of all our write-ups. I'd expect the amount of time it took us to carefully think through those write-ups would be much higher than for the main write-up, and we would be more likely to make mistakes that resulted in impact destruction.

Where a concern is necessarily part of the narrative for the decision or it feels like it's very important and can easily be shared with confidence, I think we have. But generally it's not necessary for the argument, and we stick to the default policy.

Comment by agdfoster on Growth and the case against randomista development · 2020-01-20T19:09:25.147Z · EA · GW

I wish forum authors would avoid framing their arguments as "us vs them". It makes me spend far more time engaging with the piece than I would have rationally chosen to!

Thanks again for the valuable thought provocation anyway though :)

Comment by agdfoster on Growth and the case against randomista development · 2020-01-20T19:07:28.015Z · EA · GW

Hm, just found the appendix mentioned in 5.3 - so never mind! I think I'm persuaded that it's likely very valuable looking for opportunities.

Comment by agdfoster on Growth and the case against randomista development · 2020-01-20T18:54:32.270Z · EA · GW

Good to see some of these arguments making their way into EA analysis!

Given the number of economists, the number of countries and that there does seem to be relatively wide agreement behind some important economic policies: are there lists floating around of remaining low-hanging fruit for economic policy changes in certain countries?

I would have thought that there are just so many economists, think tanks etc., and people keen to make money/prestige off of advising governments on how to run their economy, that most of those remaining low-hanging-fruit policy changes are stuck where they are for some very-hard-to-change reason.

Comment by agdfoster on Which Community Building Projects Get Funded? · 2019-11-15T22:45:14.332Z · EA · GW

Yes that's right, thanks Julia, and apologies for any confusion. I use the term differently to how the OP has used it. We are unable to offer the level of vetting and direct support that we think is necessary to fund local groups well, and so are funding that via CBG. We are still looking at special projects by local groups that fall outside of CBG's remit.

Comment by agdfoster on Which Community Building Projects Get Funded? · 2019-11-15T18:59:06.985Z · EA · GW

Thanks for sharing this! I’m on the meta fund team and open to feedback.

First I want to quickly flag that we no longer do community building grants ourselves, due to their complexity, and instead intend to fund CEA's CBG programme. Community building projects are very hard to properly evaluate, track and support, and I applaud CEA's team for working so hard at this. At each round CBG sends us a proposal on what additional funding can achieve in terms of fulfilling existing commitments and new applications, to help inform our funding decision to them (if any).

Addressing the sourcing-through-networks point, I just went through our early stage grants in the last year quickly and it looks like the majority were people we met through our various funding processes as opposed to people we knew already. Late stage grants we did of course have connections to already but I think that’s to be expected, and these organisations are quite well known in any case.

We could say that people don’t apply because they haven’t heard of us, that could be true. Thanks for the idea to share to FB groups and newsletter, we will try that and see if it affects application quality. We are very keen for more applications and any suggestions most welcome.

There’s a separate question of secondary network effects like people getting better talent, better mentoring and stronger referrals based on geographical networks. I won’t address that here now but can have a go if interesting.

I can only speak for myself but I think we already have a slightly lower bar for funding project proposals in further flung locations and I’m not convinced it would be worth it to lower it further.

I could imagine a scenario where it became clear we were missing lots of really strong opportunities further afield, but I'd be surprised if we weren't already seeing more signs of it, with many strong projects appearing just through the application forms and intros. The very, very early-stage work of getting local groups going falls outside of what the Meta Fund can realistically include within its remit. We also don't have so much funding that funding projects without clear merit, for the sake of stimulating growth in a region, would make sense any time soon.

I do think ideally we would have a Bay Area based committee member. Peter M was there until earlier this year and we are all fairly well connected there.

Thanks again

Comment by agdfoster on Why we think the Founders Pledge report overrates CfRN · 2019-11-04T19:26:12.433Z · EA · GW

FP's estimate for their $ per tonne was something like $0.1, with large error bars; it's a policy intervention after all. How big an adjustment would each of your issues raise that by? Your "it's still good" comment seems a bit throwaway.

Would you, in total, adjust it by >100x, i.e. estimating >$10 per tonne?

Comment by agdfoster on Who should I put a funder of a $100m fund in touch with? · 2019-10-16T10:33:17.485Z · EA · GW

Nice one! There are a few strong options, but I'd need more info to give a decent suggestion; it's hard to say just based on the above. Drop me an email at [myusername] and I'm probably happy to put you in touch with some people. I'd generally advise erring against discussing potential major donors on a public forum.

Comment by agdfoster on Updated Climate Change Problem Profile · 2019-10-08T10:12:45.380Z · EA · GW

Thanks! Cool perspective. I’ll just make a quick comment on the neglectedness point:

Much of the diminishing returns in an area come not from crowdedness or inefficient economies of scale but because the low-hanging fruit are gone. Most EA top-recommended cause areas are so neglected that the majority of the work is still focused on basic/applied research and basic policy advocacy, strategy etc.: work that can have huge ripples, steering global markets and governments where they otherwise would have done nothing for potentially many years.

Another way of evaluating neglectedness is to ask 'what's the chance I'll have an outsized counterfactual impact by doing X?', and the base rates are just going to be lower for that now given the sheer number of people working on it.

It’s also worth noting that the impact of direct work in climate is inherently limited due to the huge scale of emissions today, so I think it’s very fair to say it has a low neglectedness score.

I think the argument for climate does have to be focused on the scale of the bad outcomes that can be mitigated through additional work, and it still looks very good under that lens. We don't have to work on neglected problems; it's just a heuristic for effecting larger speed-ups.

Comment by agdfoster on EA Meta Fund Grants - July 2019 · 2019-09-16T19:21:04.229Z · EA · GW

Thanks for the question. All the grantees except the EA survey have funding gaps greater than our grants. Honestly it's really hard to give an estimate for this quickly as we didn't review all of the options to the same level (as we knew we only had $X to grant). Of the smaller orgs that applied I think somewhere between $200k and $1m, but if we include the orgs with run-rates over $500k pa then this number gets much bigger.

Comment by agdfoster on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-08-02T11:25:11.787Z · EA · GW

I work with donors and feel I don’t have enough to say when ageing comes up. Fancy giving me a quick primer?

A few things I’m clearly confused on:

  • why aging isn't always discussed under the constraint of global population capacity; it seems like we should have reached that capacity by then (assuming no AGI). I realise this is everyone's first intuitive response, but it seems like it should still be a key factor in any analysis.

  • seems like under a full capacity assumption the impact either comes from an improved distribution of where people are at in life (less teenage depression and end of life suffering or something like that) which seems uncertain. Or from increased career length which also seems not certainly net positive. Maybe I missed it but couldn’t see any of this in your calc discussion.

  • I know there are plenty of arguments for aging being neglected and so worth it from the medium term also but it seems like if you’re talking about escape velocity then pop cap needs to factor in.

If you'd be up for that primer I'm on my username (Agdfoster) and

Comment by agdfoster on Do we know how many big asteroids could impact Earth? · 2019-07-08T01:43:59.951Z · EA · GW

Don't know much about this, but I thought you could estimate a ballpark for the total frequency by looking at craters on the Moon and Mars.

Comment by agdfoster on Corporate Global Catastrophic Risks (C-GCRs) · 2019-07-02T12:43:47.243Z · EA · GW

Is it meaningful to say that some companies are growing exponentially when the markets they take part in are growing exponentially?

After a quick Google, Microsoft was at a $500bn market cap at its 2000 peak and is now about $1tn, but world GDP has almost doubled in that time as well ($60tn -> $110tn).
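To make the point above concrete, one can normalise the company's growth by world GDP growth; this is just a rough sketch using the approximate figures quoted in the comment, not exact market data:

```python
# Rough sanity check: Microsoft's market-cap growth vs world GDP growth.
# All figures are the approximate ones quoted above, not exact data.
msft_2000, msft_now = 500e9, 1000e9   # market cap, USD
gdp_2000, gdp_now = 60e12, 110e12     # world GDP, USD

nominal_growth = msft_now / msft_2000       # ~2.0x in nominal terms
gdp_growth = gdp_now / gdp_2000             # ~1.83x for the world economy
relative_growth = nominal_growth / gdp_growth

print(f"Nominal growth: {nominal_growth:.2f}x")
print(f"GDP-relative growth: {relative_growth:.2f}x")
```

On these rough numbers the GDP-relative growth comes out at only about 1.1x, which is the comment's point: apparent "exponential" company growth largely tracks the growth of the markets it sits in.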

Comment by agdfoster on What is the effect of relationship status on EA impact? · 2019-06-27T17:11:34.375Z · EA · GW

If you want to try a work strategy that involves long hours then a positive successful relationship may be harder to achieve.

Otherwise I’d advocate you don’t instrumentalise your non-work time for impact. I know it’s a cliche but do what you enjoy. Instrumentalising your free time seems to make people less relatable (probably an understatement), less trustworthy, more prone to depression, less robust to sudden changes etc

Having a strong base, whatever it is for you, is pretty important I think. When impact stuff is going badly you don't want to feel like everything is going badly. That kind of instability is going to have more long-term effect on your impact than a few thousand pounds a year in one direction or another.

Comment by agdfoster on How to evaluate the impact of influencing governments vs direct work in a given cause area? · 2019-06-26T12:08:43.468Z · EA · GW

As you suggest in the question I think an improvement would be: "impact of -org X- -trying- to influence governments vs direct work"

Some key considerations:

  • Gov interventions' numbers in my experience generally have much better expected values in back of the envelope models and then often look much less good when you add a bunch of additional intuitive discount factors.
  • Attribution is highly uncertain with policy interventions.
  • This said, I expect many of the highest return per dollar funding opportunities to be in policy and research.
  • What's the tractability/chance of success of the government intervention for the given team? It might actually be very very low.
  • Is this team more likely to do more harm than good? There are many ways of doing damage or damaging future efforts. I think this rules out a bunch of approaches that would otherwise justify their low tractability and
  • How time pressured is the intervention? Potentially risk is worth it. My general rule of thumb currently on this is something like "major risks are very rarely worth it, be very careful and wait for great te
  • Sometimes "direct work+ " is the best policy intervention. Do something the government could adopt really well, show that it works and has public support, then try lobbying for it. This said, it seems like quite a few charities just strap this argument on to what they were going to do anyway, which is fine so long as they've thought it through properly, just something to look out for.
  • Lobbying / technical assistance etc opportunities are generally going to be fewer and further between.

Comment by agdfoster on EA Meta Fund Grants - March 2019 · 2019-06-12T10:49:28.418Z · EA · GW

Thanks for flagging, now updated

Comment by agdfoster on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-10T18:38:41.657Z · EA · GW

I’d guess the best argument is the obvious one:

  • most of the professional world and voting populace have a very negative view on psychedelics
  • whilst the potential upsides might be sizeable, they likely don't compare to the damage to EA that EA orgs publicly supporting such work would likely do
  • if done in secret, that's a) a secret (generally bad) and b) inevitably going to get out
  • and a fair number of non-EAs are working on it anyway, as it's quite a popular idea in California. I'm guessing anyone super passionate about it could get funding and hire without having to be associated with EA at all.

Comment by agdfoster on Is EA ignoring significant possibilities for impact? · 2019-05-10T18:29:44.571Z · EA · GW

I think your reasoning here needs a lot of work. A few quick points:

  • better to critique specific points rather than something broad like 'all strategy of EA-affiliated orgs'.
  • generally, if it seems like a large number of really smart people in EA appear to be missing something, you should have a strong prior that you are the one missing something. Took me a long time to accept this. It’s not wrong to shine a light on things of course, but a touch more humility in your writing would go a long way.
  • reasoning and evidence aren’t exclusive things, evidence is part of reasoning.
  • this said, I don't think the criticism of "too evidence based" sticks anyway; have you read much academic EA research recently? Maybe in poverty, but that's a very busy area with loads of evidence where most approaches don't work, so it would be pretty crazy not to put heavy weight on evidence in that cause area.
  • Jude's spends $2.1m a day, but given that differences in impact per dollar between projects easily reach the order of 100s-1000s, this isn't very relevant.
  • Open Phil could spend that. There are complex reasons why it doesn't, but the main thing to note is that total spend is a terrible, terrible signal.
  • for profit models have been explored numerous times, while still promising, little really great stuff has been found. People are working on it but it’s not a slam dunk.
  • earning to give is a great way to build career capital and do good.
  • advocacy and philanthropic advisory is really hard. People in that area are going as fast as they sensibly can.
  • it takes a long time to become a chief of staff at a powerful org.
  • policy / lobbying approaches are really hard, and people are again working on it as fast as they can.

Comment by agdfoster on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-04-30T18:12:18.646Z · EA · GW

Really interesting read; a few thoughts below. I only skim-read the article, so I'm mostly responding to your prompts. I should also note I advise philanthropists for a living and so am inherently biased!

  • I've found Open Phil's reasoning to be rigorous and thorough, far more so than virtually all of their peers. I also have deep intellectual trust in, so far without exception, everyone I've met that works there.

  • from a skim read, the OP's arguments feel pretty zero-sum. Perhaps the argument should instead be "should Open Phil also fund non-GCBR bio work as well?".

  • It doesn't seem even-handed to both portray these researchers as easily swayed by flashy deep-pocketed philanthropists and lament the loss of highly intelligent research talent. If they're highly intelligent and updated their actions based on Open Phil's reasoning (albeit also including cash), the OP should probably themselves be humble about the likelihood of being right.

  • the OP seems to present philanthropy as a potentially negative steering force. Even if the field is zero-sum (gov funds less as a result / too little talent to use extra funds well), are we to believe that alternate funding sources apply no directional pressure?

  • whilst voting keeps governments relatively aligned with the populace’s needs, it has only a small alignment with global needs and global public goods. The short time frame (4 years) also seems to result in shorter-term thinking. Future generations can’t vote. Philanthropy seems uniquely well positioned to be reasoning and funding in areas poorly tended to by the democratic system and markets.

  • a bunch of the arguments wouldn't seem intuitive if re-applied to other more familiar causes like climate change or global poverty; reasons for the difference should be highlighted and the extent of the OP's arguments capped respectively.

Comment by agdfoster on Getting the Rich and Powerful to Give · 2019-04-29T09:34:06.150Z · EA · GW

From experience this gets more and more desirable both the wealthier a donor is and the longer the donor has been wealthy for.

I think the paper's phrase "agency" is missing something though. It's not a sensation of making decisions or being listened to that people are drawn to, generally. I'd say it was more 'proprietorship' or 'feeling ownership over the project': feeling like a causally relevant component of the whole endeavour.

Agency is also, I find, quite patronising!

Comment by agdfoster on Why does EA use QALYs instead of experience sampling? · 2019-04-26T12:46:12.770Z · EA · GW

That's fine! Thanks.

Comment by agdfoster on Why does EA use QALYs instead of experience sampling? · 2019-04-25T10:31:53.748Z · EA · GW
[Disclosure: In February 2019, I corresponded about the experience-sampling idea with Alex Foster of the EA Meta Fund. He said my points were "certainly quite compelling," but the correspondence fell off.]

Please note that the content of my correspondence with Milan was exploratory but primarily from a position of skepticism. Whilst technically accurate I find this quote to be misleading and not very good form.

Comment by agdfoster on Who is working on finding "Cause X"? · 2019-04-15T21:07:45.896Z · EA · GW

Arguably it was the philosophers that found the last few. Once the missing moral reasoning was shored up the cause area conclusion was pretty deductive.

Comment by agdfoster on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-04-15T14:59:04.164Z · EA · GW

haha - good question. And yes, from notes.

Comment by agdfoster on New Top EA Cause: Flying Cars · 2019-04-06T14:15:01.535Z · EA · GW

I hate April 1st so much.

Comment by agdfoster on New Top EA Cause: Flying Cars · 2019-04-02T09:27:06.854Z · EA · GW

A small gripe with the title - you don’t make any argument for this tech solving global poverty, just congestion in the wealthiest cities on earth. I know transportation has economic benefits elsewhere but your post makes no claims about this.

Comment by agdfoster on New Top EA Cause: Flying Cars · 2019-04-02T09:22:52.514Z · EA · GW

I want more posts about flying cars.

I'm still assuming the reliability requirement is too high. If a car stops working it rolls to a halt; a flying car crashes into a residential area. Planes don't do this, but they have costly constant checks. Maybe a fleet owner (non-personal ownership) and lots of sensors for automated checks makes the reliability feasible.

Similarly security seems like a daunting challenge.

Noise I hadn’t thought of.

Do we even need them though, if a city goes full AV you can theoretically have very high speed regular cars and no junctions / traffic. At even just 60mph, a 30 min commute encompasses an area significantly larger than Greater London. And commuting in an AV could be very comfortable with a desk and WiFi. Whilst it’s hard to work on trains I could imagine even “going for an AV pomodoro” in the middle of the day just for the concentration benefits of reclusion and a fixed travel time.
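The commute-radius claim above checks out on a quick back-of-the-envelope calculation; the only figure assumed beyond the comment's own numbers is Greater London's area of roughly 607 square miles:

```python
import math

# A 30-minute commute at a steady 60 mph gives a 30-mile radius.
speed_mph = 60
commute_hours = 0.5
radius_miles = speed_mph * commute_hours       # 30 miles

# Idealised circular catchment area around the destination.
commute_area = math.pi * radius_miles ** 2     # ~2,827 sq miles
greater_london_area = 607                      # sq miles, approximate

print(f"Reachable area: {commute_area:,.0f} sq mi")
print(f"vs Greater London: {commute_area / greater_london_area:.1f}x")
```

The idealised catchment comes out at over four times Greater London's area, so even with real road networks and variable speeds the "significantly larger" claim is plausible.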

Assuming good automation is required for good flying cars, I’m also not sold on automation being net-good for employment. Life satisfaction - sure.

Comment by agdfoster on a black swan energy prize · 2019-03-29T14:02:42.516Z · EA · GW

I drafted but didn’t publish a post yesterday titled “where are all the ideas?”. Really glad to see a contribution of this type.

Comment by agdfoster on Request for comments: EA Projects evaluation platform · 2019-03-22T16:48:20.787Z · EA · GW

I regularly simplify my evaluations into pros and cons lists and find them surprisingly good. Open Phil's format essentially boils down to description, pros, cons, risks.

Check out Kialo. It allows really nice nested pro/con lists built of claims and supporting claims, plus discussion around those claims. It's not perfect, but we use it for keeping track of logic for our early-stage evals, and for taking a step back on evals we have gotten too in the weeds with.

Comment by agdfoster on Primates vs birds: Is one brain architecture better than the other? · 2019-03-05T10:25:06.790Z · EA · GW

Really love how clearly you’ve communicated the relevance of the findings and how they fit in.

Comment by agdfoster on Latest Research and Updates for February 2019 · 2019-02-28T17:21:18.794Z · EA · GW

This is great - thanks for continuing to do these roundups, always things I’ve missed

Comment by agdfoster on Why you should NOT support Aubrey de Grey's work on ageing. (maybe) · 2019-02-28T16:51:01.405Z · EA · GW

Seconded on title, enjoyed content but title felt click-baity and misleading, especially given 90% of readers will only read the title.

Comment by agdfoster on Effective Impact Investing · 2019-02-28T12:42:20.049Z · EA · GW

I’m so glad to see a post from people working in the industry in question - thank you for taking the time, making the post and contributing to the discussion, I strong upvoted.

Impact investing comes up a lot in donor advisory, so have a few points to add:

1. I generally still advise donors not to use their philanthropic, impact-maximising allocation for impact investing. I still do not have a very thorough way of explaining how I came to this conclusion, and no existing materials I know of could be sent to a HNW.

2. Most large donors only give a small portion of their assets; >90% of their capital is generally invested, and impact investing (II) can come from that allocation. This changes the discussion considerably, e.g. from 'II vs donating' to 'where are you spending your time'. We could also discuss lost returns, but it seems like most II does not have sufficiently lower returns on average to worry about it from an impact perspective. Giving better will make significantly more of a difference than 5-20% more or less profit from their investments. Not always going to be true, but generally seems the right view.

3. Donor funds are a very different question to an individual's career; the leverage available in some impact investing careers is so high that it requires a separate investigation that I haven't seen. I would guess that an evaluator could move 10-100x more capital for the same effort in II than in donation evaluation.

4. Stepping back, effectiveness-minded II might be an important consideration in designing the models orgs within top cause areas consider - e.g. if you were comparing two models for your org and one assumed limited philanthropic capital and so maximised its impact per dollar, but the other assumed huge swathes of II funding and so took on a much lower impact per dollar, and tried to design a model that would be copied, build IP, be acquired by an industry giant etc.

5. Bio/AI safety have many opportunities for doing far more damage than good; at face value it does not seem a good fit for larger, IP-driven investment models. However, a big worry is control and trust. If top researchers and strategists in the area felt there was capacity for responsible, cautious impact investing in the area, that might speed up how quickly market-driven approaches emerged.

Comment by agdfoster on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T10:49:04.792Z · EA · GW

I ran my first hiring process to hire someone for an EA role last year and was amazed how long it took me. I’ve hired around 20 times in the past and only spent a couple weeks and 20-40h per role. Last year I spent 8 months and hundreds of hours. I reflected afterwards on why and can list a few hypotheses:

  • I normally rely heavily on gut to build my shortlist. I did not feel comfortable doing this for this role, as it felt like there were so many failure modes for a bad hire - both in the ways a hire could go badly and in the severity of the impact of a hire going badly.

  • normally relying on intuition heavily is highly reversible. Worst case scenario I have to fire the candidate after probation, I’ll never see them again and no one knows them. I’m open with candidates that this is my policy and that they should be careful accepting an offer. In EA I felt like everyone knows everyone and a fired hire could cause significant reputational damage with a one-sided narrative. I don’t endorse this view as rational but the fear was definitely a factor in why I took so long.

  • I was hiring for a role that defies regular role definition. No one applying to the role had applied to a similar position before let alone worked in one. Potentially this was the largest factor and my other points are moot.

  • I wasn’t hiring someone to have similar skills to me, instead hiring someone to have the skills I don’t have. Normally I would judge experience, passion, intelligence, lateral thinking ability, ambition and team fit then let a team lead judge specific ability.

  • many candidates treated the process like a two-way application the whole way through. This threw off my intuitions; normally I would have dropped all candidates who weren't signalling they were specifically very excited about my role (first call excluded).

  • many candidates' conversations included career advice from me. This threw off my intuitions, but I consider it time well spent in all the cases where I spent over 2h.

  • I worried a lot about how much time of others I was using. Assuming a candidate spent 4x more time than I did, I used over a thousand hours of people’s time.

  • ultimately I made offers to two candidates both of which I had had strong gut feelings about very early, which was rewarding but also highly frustrating.

The key thing I intend to change next time is being much faster. I didn’t feel like (for me) the extra process complexity and caution added that much insight and crucially, it threw off my intuition.

The main downside of reduced complexity seems to be the increased chance of a bad hire and the potential damage of firing them. I think next time I will return to my original method and be very transparent with the person I make an offer to that their 3 month probation is not just a formality, pointing them to this article as an explanation to why it’s not worth it for others for me to have a long drawn out process that may only slightly reduce the risk of a bad hire.


** I do not advocate anyone else doing this unless they are confident in their hiring intuitions. I also haven’t tried it yet and it may go terribly. **

Thank you to the OP for posting. Illuminating!

Comment by agdfoster on Impact investing is only a good idea in specific circumstances · 2019-01-09T18:06:37.087Z · EA · GW

Found Bridgespan's 2018 report useful and interesting.

Comment by agdfoster on List of possible EA meta-charities and projects · 2019-01-09T18:04:02.982Z · EA · GW

Nice list Saulius, thank you.

Comment by agdfoster on How should large donors coordinate with small donors? · 2019-01-09T17:40:13.840Z · EA · GW

[idea]: Invite-only Google Sheet listing considerations relevant to funding a group (one group per tab), with columns for each donor's weights on those considerations. I would find this really interesting.

Deal could be that you only get access if you're willing to share your weights!

Comment by agdfoster on EA orgs are trying to fundraise ~$10m - $16m · 2019-01-07T21:17:49.475Z · EA · GW
For instance, like other big non-profits, EA orgs might want to hire institutional fundraisers to tap into larger grants from big foundations other than the usual suspects

I've looked into this a few times and it does seem like it will become a promising channel. In particular from the big donors that do very large checks (>$500k). At least one org I know is experimenting with hiring a full-time grant-writer. I currently think it won't work well for most EA orgs for some time to come.

Worth noting that most big foundations come with large senior-management time overheads and often require designing bespoke projects just for that foundation. Grant-writers also generally have slow payback periods (over 1-2 years is not rare, more if the first one doesn't work out), are very tricky to evaluate during hiring, and expire once you run out of foundations / major donors to apply to (most don't do much repeat funding). Not insurmountable challenges.

An alternative is to hire one-off fundraisers who approach lesser-known major donors for you. I think that may be promising, but it requires a large time investment to train that person to talk about your charity. They also still require a large amount of senior management's time (non-foundation major donors will generally want to speak to the founders, and most of those conversations will end up being a no) and are more likely to generate one-off funding rather than repeat donations.

It may be that expanding philanthropic advisory within EA in general is more promising. Whilst not specifically focused on raising funds for EA orgs, an increased number of smart, best-arguments-aware donors in the space could well have a similar result for less senior-management time cost.

It could also be that a semi-centralised fundraising team could work really well: a team of generalist fundraisers shared between orgs, plus specialist fundraisers each working for a different large EA org. Train them all up in tandem and work out how to evaluate them; focus on >$300k checks from major donors but also share a grant-writer or two between them; hire most talent from mainstream pools; etc. We looked into something like this to function across all the GiveWell charities, but it ultimately looked like it wouldn't work.

It doesn't seem unlikely that that last option never makes sense, because by the time you have orgs large enough to justify the above ($5m-$10m pa), those orgs also organically start to hire their own internal fundraisers and grant-writers just to meet their large budgets.

Looking forward to putting more thought into this.

Comment by agdfoster on The Global Priorities of the Copenhagen Consensus · 2019-01-07T20:51:40.481Z · EA · GW
Social protection system coverage (helping more people access government benefits); CC estimates that this is less than one-fifth as valuable as cash transfers

That is surprising, they've done a lot of work in and around India where welfare budget utilisation has been infamously poor until only quite recently and where the huge rural population seems to make it particularly hard to get welfare to the poorest who need it most.

I wonder how their economists account for the counterfactual of unused government funds. I've seen quite a few calcs where unused welfare funds that go back into the central pot are only discounted by 1/4 - 1/2, which I still find very unintuitive, even granting that the average wealth of a government spending recipient is far higher than that of a welfare recipient.

I've been keeping an eye out for a charity / org in India that is particularly good at increasing access to government welfare so this is relevant for that.

Comment by agdfoster on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-07T20:36:04.629Z · EA · GW

Really impressed by both how you've executed this as well as the write-ups. 🙌

Thank you!!

Comment by agdfoster on Challenges in Scaling EA Organizations · 2018-12-21T16:04:04.345Z · EA · GW

Awesome think-piece, thank you

Comment by agdfoster on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T16:00:58.745Z · EA · GW

Note: I have a feeling that 'policing tone' is an annoying meme for a forum and something more appropriate for moderators than for readers, so I'll post this one and default to refraining from doing it again.

Quick few thoughts on the tone of this, feel free to ignore if it doesn't change your mind:

Most of these articles have been good but this one is certainly the worst out of all that I have seen (n=25 or so, from multiple writers) and I believe it has negative expected value.

This part, right at the top and at a few other points, made me a little uncomfortable. If I were the author and I read this, I think I would feel more 'attacked by my allies' than 'constructively critiqued'.

I feel like some quick changes to the tone, particularly early on (e.g., 'I found this one distasteful' rather than 'this is the worst I've seen'), would come across as less aggressive. Perhaps add an extra paragraph at the beginning saying a few positive things about their column in general (if you hold those views) and saying that you only mean your comments constructively. Personally, that would be enough for me to take the feedback well. Maybe no one on their team reads it; maybe one person reads it and forwards it to the whole team. It seems worth assuming the latter, and it's that scenario that prompted me to make this comment.

Given in particular that Future Perfect is not funded by donors who explicitly identify with EA ideas, and that it is run by Vox, my quick guess is that careful constructive criticism is far more valuable / lower risk than more assertive / slightly aggressive criticism (apologies if I'm already preaching to the choir here). I'm currently still really glad that Vox, Ezra etc. have chosen to do this column taking lots of EA ideas into account.

Funnily enough, I had a similar opinion about one of the mobile thumbnails for their anti-Mars piece. The thumbnail read "Elon wants to go to Mars, here's why that's a bad idea", which didn't seem worth it.

Comment by agdfoster on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T21:10:52.094Z · EA · GW

Some part of the large potential upside of this fund, and the reason why some of the team are excited to put so much of their time into it, is that if we do a really good job it could grow and attract significant additional capital into the meta cause areas.

Whilst meta is a relatively small cause area, in a more efficient market for non-profits I would expect the space to be funded to the brim due to the outsized returns available within it. I see moving additional capital into this space as highly valuable, and I think it's often a smaller, easier jump for many donors than some of the more exotic super-high-impact cause areas.

My intuition is that the fund would not be particularly attractive to new donors, or have much potential for growth, if we only funded one project at a time. Given that there are a number of projects available with similarly high expected value, spreading over a number of orgs (including some early-stage orgs) seems like a valuable thing to do.

I think this sentiment is shared across the team but may also include other reasons.