Issues with centralised grantmaking

post by MathiasKB (MathiasKirkBonde) · 2022-04-04T10:45:03.743Z · EA · GW · 56 comments

Contents

  Issues with centralised funding
  What we can do to improve grantmaking
56 comments

Recently someone made a post [EA · GW] expressing their unease with EA's recent wealth. I feel uncomfortable too. The primary reason I feel uncomfortable is that a dozen people are responsible for granting out hundreds of millions of dollars, and that as smart and hardworking as these people are, they will have many blindspots. I believe there are other forms of grantmaking structures that should supplement our current model of centralised grantmaking, as it would reduce the blindspots and get us closer to optimal allocation of resources.

In this post I will argue that our centralised grantmaking has correlated blind spots, and that we should supplement it with other grantmaking structures.

Issues with centralised funding

Just as the USSR's economic planners struggled to determine the correct price of every good, I believe EA grantmaking bodies will struggle for analogous reasons. Grantmakers have imperfect information! No matter how smart the grantmaker, they can't possibly know everything.

To overcome their lack of omniscience, grantmakers must rely on heuristics such as vouches from people in their network and signals like educational pedigree.

These heuristics can be perfectly valid for grantmakers to use, and can result in the best allocation achievable given their limited information. But the heuristics are biased, and they produce an allocation that is sub-optimal compared to what could theoretically be achieved with perfect information.

For example, people who have spent significant time in EA hubs are more likely to be vouched for by someone in a grantmaker's network. Having attended an Ivy League university is a strong signal that someone is talented, but plenty of talented people did not attend one.

My issue is not that grantmakers use these proxies. My issue is that if all of our grantmaking uses the same proxies, then a great many talented people with great projects that should have been funded will be overlooked. I'm not sure about this, but I imagine that some complaints about EA's perceived elitism stem from this. EA grantmakers are largely cut from the same cloth, live in the same places, and have similar networks. Two anti-virus systems that detect the same 90% of viruses are no more useful than a single system; two uncorrelated systems will instead detect 99% of all viruses. Similarly, we should strive for our grantmakers' biases to be uncorrelated if we want the best allocation of our capital.
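The anti-virus arithmetic can be checked with a quick back-of-the-envelope sketch (illustrative numbers only, not a claim about real grantmakers):

```python
# Illustrative probabilities for the anti-virus analogy.
p_detect = 0.90  # each system detects 90% of viruses

# Perfectly correlated systems: both catch exactly the same viruses,
# so the second system adds nothing.
combined_correlated = p_detect  # 0.90

# Uncorrelated (independent) systems: a virus escapes only if it
# slips past both, which happens with probability 0.1 * 0.1 = 0.01.
combined_independent = 1 - (1 - p_detect) ** 2  # 0.99

print(combined_correlated, combined_independent)
```

The same logic applies to funders: two grantmakers with identical blind spots miss the same projects, while two with uncorrelated blind spots miss far fewer between them.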

In the long run, overreliance on these proxies can also lead to bad incentives and increased participation in zero-sum games such as pursuing expensive degrees to signal talent.

We shouldn't expect our current centralised grantmaking to be optimal in theory, and I don't think it is in practice either. But fortunately I think there's plenty we can do to improve it.

What we can do to improve grantmaking

The issue with centralised grantmaking is that it operates off imperfect information. To improve grantmaking we need to take steps to introduce more information into the system. I don't want to propose anything particularly radical. The system we have in place is working well, even if it has its flaws. But I do think we should be looking into ways to supplement our current centralised funding with other forms of grantmaking that have other strengths and weaknesses.

Each new type of grantmaking and grantmaker will spot talent that other grantmaking programmes would have overlooked. Combined, they create a more accurate and robust funding ecosystem.

FTX Future Fund's regranting programme is a great example of the kind of supplementary grantmaking structure I think we should be experimenting with. I feel slightly queasy that their system for choosing new grantmakers may perpetuate the biases of the current ones. But I don't want to let the perfect be the enemy of the good, and the regranting programme is yet another reason I'm so excited about the FTX Future Fund.

Below are a few off-the-cuff ideas that could supplement our current centralised structure:

Hundreds of people spent considerable time writing applications to FTX Future Fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they think look the most promising.


Given that many applicants are currently receiving answers about their FTX grants, I think the timing of this post is unfortunate. I worry that our judgement will be clouded by emotions over whether we received a grant, and, if we didn't, whether we approved of the reasoning and so forth. My goal is not to criticise our current grantmakers. I think they are doing an excellent job considering their constraints. My goal is instead to point out that it's absurd to expect them to be superhuman and somehow correctly identify every project worth funding!


No grantmaker is superhuman, but we should strive for a grantmaking ecosystem that is.

56 comments

Comments sorted by top scores.

comment by Stefan_Schubert · 2022-04-04T11:29:54.864Z · EA(p) · GW(p)

One issue is that decentralised grant-making could increase the risk that projects that are net negative get funding, as per the logic of the unilateralist's curse [? · GW]. The risk of that probably varies with cause area and type of project.
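The unilateralist's-curse logic here can be sketched numerically (all numbers hypothetical): if a genuinely net-negative project only needs one funder to say yes, and each independent funder misjudges it as good with some small probability, the chance it gets funded grows with the number of independent funders.

```python
# Hypothetical illustration of the unilateralist's curse: a net-negative
# project is funded if *any* independent funder misjudges it as good.
p_misjudge = 0.05  # assumed chance that a single funder wrongly approves

def p_funded(n_funders: int, p: float = p_misjudge) -> float:
    """Probability that at least one of n independent funders approves."""
    return 1 - (1 - p) ** n_funders

for n in (1, 5, 20):
    print(n, round(p_funded(n), 3))
```

With these assumed numbers the risk rises steeply as funders are added, which is why the comment suggests the risk varies with cause area: the calculation only matters where a wrongly funded project is costly.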

My hunch is that many people have a bit of an intuitive bias against centralised funding; e.g. because it conjures up images of centralised bureaucracies (cf. the reference to the USSR) or appears elitist. I think that in the end it's a tricky empirical question and that the hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely.

I should also say that how centralised or coordinated grant-makers are isn't just a function of how many grant-makers there are, but also of how much they communicate with each other. There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.

Replies from: MichaelPlant, Ivy_Mazzola, MathiasKirkBonde, Linda Linsefors, Brendon_Wong, freedomandutility
comment by MichaelPlant · 2022-04-05T09:43:10.745Z · EA(p) · GW(p)

Right, but the unilateralist's curse is just a pro tanto reason not to have dispersed funding. It's something of a false positive (funding stuff that shouldn't get funded) but that needs to be considered against the false negatives of centralised funding (not funding stuff that should get funded). It's not obvious, as a matter of conjecture, which is larger.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-05T09:47:53.389Z · EA(p) · GW(p)

Yes, but it was a consideration not mentioned in the OP, so it seemed worth mentioning.

comment by Ivy_Mazzola · 2022-04-06T19:20:20.582Z · EA(p) · GW(p)

To be honest, the overall (including non-EA) grantmaking ecosystem is not so centralized that people can't get funding for possibly net-negative ideas elsewhere, especially if they have already put work in, have a handful of connections, or will be working in a "sexy" cause area like AI that even some rando UHNWI would take interest in. 

Given that, I don't think that keeping grantmaking very centralized yields enough of a reduction in risk that it is worth protecting centralized grantmaking on that metric. And frankly, sweeping such risky applications under the rug hoping they disappear because they aren't funded (by you, that one time) seems a terrible strategy. I'm not sure that is what is effectively happening, but if it is:

I propose a 2 part protocol within the grantmaking ecosystem to reduce downside risk:
1. Overt feedback from grantmakers in the case that they think a project is potentially net-negative. 
2. To take it a step further, EA could employ someone whose role it is to try to actively sway a person from an idea, or help mitigate the risks of their project if the applicants affirm they are going to keep trying. 

Imagine, as an applicant, receiving an email saying:

"Hello [Your Name],

Thank you for your grant application. We are sorry to bear the bad news that we will not be funding your project. We commend you on the effort you have already put in, but we have concerns that there may be great risks to following through and we want to strongly encourage you to consider other options.

We have CC'ed [name of unilateralist's curse expert with domain expertise], who is a specialist in cases like these who contracts with various foundations. They would be willing to have a call with you about why your idea may be too risky to move forward with. If this email has not already convinced you, we hope you consider scheduling a call on their [calendly] for more details and ideas, including potential risk mitigation.

We also recommend you apply for 80k coaching [here]. They may be able to point you toward roles that are just as good or a better fit for you, but with no big downside risk and with community support. You can list us as a recommendation on your coaching application. 

We hope that you do not take this too personally as this is not an uncommon reason to withhold funding (hopefully evidenced by the resources in place for such cases), and we hope to see you continuing to put your skills toward altruistic efforts. 

Best,
[Name of Grantmaker]"

Should I write a quick EA forum post on this 2 part idea? (Basically I'll copy-paste this comment and add a couple paragraphs). Is there a better idea?

I realize that email will look dramatic as a response to some, but it wouldn't have to be sent in every "cursed case". I'm sure many applications are rather random ideas. I imagine that a grantmaker could tell by the applicants' resumes and their social positioning how likely the founding team are to keep trying to start or perpetuate a project. 

I think giving this type of feedback when warranted also reflects well on EA. It makes EA seem less of an ivory tower/billionaire hobby and more of a conversational and collaborative movement.

*************************************

The above is a departure from the point of the post. FWIW, I do think the EA grantmaking ecosystem is so centralized that people who have potentially good ideas which stem from a bit of a different framework than those of typical EA grantmakers will struggle to get funding elsewhere. I agree decentralizing grantmaking to some extent is important and I have my reasoning here [EA(p) · GW(p)]

Replies from: konrad
comment by konrad · 2022-04-13T09:27:37.052Z · EA(p) · GW(p)

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower (arguably our most limited resource), but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

comment by MathiasKB (MathiasKirkBonde) · 2022-04-04T11:47:49.384Z · EA(p) · GW(p)

I completely agree with this actually. I think concerns over the unilateralist's curse are a great argument in favour of keeping funding central, at least for many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.

But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.

I think the unilateralist's curse can be avoided if our experiments with other types of grantmaking steer clear of hazardous funding domains.

Replies from: evelynciara
comment by BrownHairedEevee (evelynciara) · 2022-04-06T04:37:14.996Z · EA(p) · GW(p)

Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening process with somewhat rigorous vetting criteria.
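For readers unfamiliar with the mechanism mentioned above: quadratic funding allocates a matching pool in proportion to the square of the sum of the square roots of individual contributions, so broad support from many small donors attracts more matching than the same total from one large donor. A minimal sketch of the raw formula (hypothetical numbers, ignoring the normalisation real platforms apply to exhaust a fixed pool):

```python
import math

def qf_match(contributions: list[float]) -> float:
    """Raw quadratic-funding match: (sum of sqrt(c))^2 minus total raised."""
    total = sum(contributions)
    return (sum(math.sqrt(c) for c in contributions) ** 2) - total

# 100 donors giving $1 each vs. one donor giving $100:
broad = qf_match([1.0] * 100)   # (100 * 1)^2 - 100 = 9900
narrow = qf_match([100.0])      # (10)^2  - 100 = 0
print(broad, narrow)
```

This is why a centralized screening step matters in such a system: the mechanism amplifies popularity, not vetting, so obviously bad projects need to be filtered before matching funds flow.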

Replies from: hibukki, Linda Linsefors
comment by Yonatan Cale (hibukki) · 2022-04-06T16:00:43.935Z · EA(p) · GW(p)

(Just saying I did lots of the vetting for colabs and I think it would be better if our screening would be totally transparent instead of hidden, though I don't speak for the entire team)

comment by Linda Linsefors · 2022-04-10T11:47:07.084Z · EA(p) · GW(p)

Yes! Exactly!

If you want a system to counter the unilateralist's curse, then design a system with the goal of countering the unilateralist's curse. Don't rely on an unintended side effect of a coincidental system design.

comment by Linda Linsefors · 2022-04-10T11:38:09.135Z · EA(p) · GW(p)

I don't think there is a negative bias against centralised funding in the EA network.

I've discussed funding with quite a few people, and my experience is that EAs like experts and efficiency, which matches well with centralised funding, at least in theory. I've never heard anyone compare it to the USSR or similar before.

Even this post is not against centralised funding. The author is just arguing that any system has blind spots, and that we should have other systems too.

comment by Brendon_Wong · 2022-04-05T19:38:32.780Z · EA(p) · GW(p)

While it's definitely a potential issue, I don't think it's a guaranteed issue. For example, with a more distributed grantmaking system, grantmakers could agree to not fund projects that have consensus around potential harms, but fund projects that align with their specific worldviews that other funders may not be interested in funding but do not believe have significant downside risks. That structure was part of the initial design intent of the first EA Angel Group [EA · GW] (not to be confused with the EA Angel Group that is currently operating).

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-05T19:46:23.888Z · EA(p) · GW(p)

Yes, cf. my ending:

There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-05T19:57:47.098Z · EA(p) · GW(p)

I see, just pointing out a specific example for readers! You mention the "hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely." Do you think it's concerning that EA hasn't (to my understanding) tried decentralized funding at any scale?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-05T20:12:45.419Z · EA(p) · GW(p)

I haven't studied EA grant-making in detail so can't say with any confidence, but if you ask me I'd say I'm not concerned, no.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-05T20:38:05.841Z · EA(p) · GW(p)

Isn't there a very considerable potential opportunity cost to not trying out funding systems that could vastly outperform the current one?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-05T20:42:08.369Z · EA(p) · GW(p)

Obviously there is a big opportunity cost to not trying something that could vastly outperform something we currently do - that's more or less true by definition.  But the question is whether we could (or rather - whether there is a decent chance that we would) see such a vast outperformance.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-05T20:49:57.116Z · EA(p) · GW(p)

There's evidence to suggest that decentralized decision making can outperform centralized decision making; for example with prediction markets and crowdsourcing. I think it's problematic in general to assume that centralized thinking and institutions are better than decentralized thinking and institutions, especially if that reasoning is based on the status quo. I was asking this series of questions because, by describing centralized funding as a "hypothesis," I thought you would support testing other hypotheses by default.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-05T23:39:53.877Z · EA(p) · GW(p)

I don't think there's evidence that centralised or decentralised decision-making is in general better than the other. It has to be decided on a case-by-case-basis.

I think this discussion is too abstract and that to determine whether EA grant-making should be more decentralised one needs to get into way more empirical detail. I just wanted to raise a consideration the OP didn't mention in my top-level comment.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-06T02:49:49.312Z · EA(p) · GW(p)

I agree! I was trying to highlight that because we're not sure that centralized funding is better or not, it would be a high priority to test other mechanisms, especially if there's reason to believe other mechanisms could result in significantly different outcomes.

comment by freedomandutility · 2022-04-04T14:30:14.564Z · EA(p) · GW(p)

One idea I have:

Instead of increasing the number of grantmakers, which would increase the number of altruistic agents and increase the risks from the unilateralists’ curse, we could work on ways for our grantmakers to have different blind spots. The simplest approach would be to recruit grantmakers from different countries, academic backgrounds, etc.

That being said, I am still in favour of a greater number of grantmakers but in areas unrelated to AI Safety and biosecurity so that the risks from the unilateralists curse are much smaller - such as global health, development, farmed animal welfare, promoting evidence based policy, promoting liberal democracy etc.

comment by Charles He · 2022-04-04T16:31:16.554Z · EA(p) · GW(p)

I’m not sure this comment is helping, but I don’t agree with this post.
 

  1. Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
  2. Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution, well-aligned competent grantees are critical. Also, success of the project is only one consideration  (deploying money has effects on the EA space and also into the outside world, maybe with lasting effects that can be hard to see).
  3. Once you solve the above problems, which benefit from a small number of grant makers, there are classes of projects where you can deploy a lot of money into (AMF, big science grants, or CSET).
  4. The above response doesn’t cover all kinds of EA projects, like the development of people, or nascent smaller projects that are important. To address this, outreach is a focus and grant makers are often generous with small grants.
  5. Grant makers aren't just passively gatekeeping money, just saying yes or no to proposals. There’s an extremely important and demanding role that grant makers perform (that might be unique to EA) where they develop whole new fields and programmes. So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
  6. The post doesn’t mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many, times. (Of course this can be poorly executed, capture, etc. is possible, but someone I know is perceptive and hasn’t seen evidence of this.)
  7. I'm not sure I'm wording this well, but inferential distance can be vast. I find it difficult to even "see" how better people are better than me. It's hard to understand this; you sort of have to experience it. To give an analogy, an Elo 1800 chess player can beat me, and an Elo 2400 player can beat that person. In turn, an Elo 2800 player can effortlessly beat both. When being outplayed in this way, communication is effectively impossible: I wouldn't understand what was going on in a game between me and the Elo 1800 player, even if they explained everything move by move. In the same way, the very best experts in a field have understanding deep and broad enough to make large, correct inferential leaps very quickly. I think this should be appreciated. I don't think it's unreasonable that EA can get the very best experts in the world, and that they have insights like this. This puts constraints on the nature and number of grantmakers who need to communicate and coordinate with these experts, and grantmakers themselves may have these qualities.

 

I think someone might see a large amount of money and see a small amount of people deciding where it goes. They might feel that seems wrong. 

But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We've seen two instances of this now. The reason there is money at all is because these structures have been set up successfully.

 

I’m qualified and well positioned to give the perspective above. I’m someone who has benefitted and gotten direct insights from grant makers, and probably seen large funding offered. At the same time, I don’t have this money. Due to the consequences of my actions, I’ve removed myself from the EA projects gene pool. I'm sort of an EA Darwin award holder. So I have no personal financial/project motivation to defend this thing if I thought it was bad.

Replies from: Brendon_Wong, Peterslattery, Linda Linsefors, Charles He
comment by Brendon_Wong · 2022-04-05T19:50:22.429Z · EA(p) · GW(p)

Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.

There are ways to design centralized, yet decentralized grantmaking programs. For example, regranting programs that are subject to restrictions, like not funding projects that some threshold of grantmakers/other inputs consider harmful.

Can you specify what "in design of EA and meta projects" means?

Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution, well-aligned competent grantees are critical. Also, success of the project is only one consideration  (deploying money has effects on the EA space and also into the outside world, maybe with lasting effects that can be hard to see).

EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn't seem to me like the communication of private, sensitive information has been an issue. I'm sure there's a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don't  think we're close to that threshold.

I think the perception of who is a well-aligned, competent grantee can vary by person; more reason to have more decentralization in grantmaking. Also, the forecasting of effects can vary by person, and having this be centralized may lead to failures to forecast certain impacts accurately (or at all).

The post doesn’t mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many, times. (Of course this can be poorly executed, capture, etc. is possible, but someone I know is perceptive and hasn’t seen evidence of this.)

My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.

But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We’ve seen two instances of this now. The reason why there is money at all is because these structures have been setup successfully.

There have also been large amounts of funds granted with decentralized grantmaking; see Gitcoin's funding of public goods as an example.

Replies from: Charles He
comment by Charles He · 2022-04-06T02:23:30.105Z · EA(p) · GW(p)

These are good questions. 

So this is getting abstract and outside my competency, I'm basically LARPing now.

I wrote something below that seems not implausible.

 

not funding projects that some threshold of grantmakers/other inputs consider harmful.


EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn't seem to me like the communication of private, sensitive information has been an issue. I'm sure there's a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don't  think we're close to that threshold.

I didn't mean infohazards or downsides. 

This is about intangible characteristics that seem really important in a grantee. 

To give intuition, I guess one analogy is hiring. You wouldn't hire someone off a LinkedIn profile, there's just so much "latent" or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.

This is important because if you went in big for another CSET, or something that had to start in the millions, you better know the people, the space super well. 

I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people. 

My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.

I guess this is fair; my answer is sort of kicking the can down the road. More grantmakers means more advisers too.

On the other hand, I think there's two other ways to look at this: 

  • Let's say you're in AI safety or global health,
    • There may only be like say about 50 experts in malaria or agents/theorems/interpretability. So it doesn't matter how large your team is, there's no value getting 1000 grantmakers if you only need to know 200 experts in the space.
  • Another point is that decentralization might make it harder to use experts, so you may not actually get deep or close understanding to use the expert.

This answer is pretty abstract and speculative. I'm not sure I'm saying anything above noise.

Can you specify what "in design of EA and meta projects" means? 

Let's say Charles He starts some meta EA service, let's say an AI consultancy, "123 Fake AI". 

Charles's service is actually pretty bad: he obscures his methods, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.

Charles sort of entrenches, rewards his friends, etc. So any normal individual raising issues is shouted down.

Someone has to kibosh this, and a set of unified grant makers could do this.
 

Replies from: Brendon_Wong, Linda Linsefors
comment by Brendon_Wong · 2022-04-06T03:00:16.656Z · EA(p) · GW(p)

This is about intangible characteristics that seem really important in a grantee. 

To give intuition, I guess one analogy is hiring. You wouldn't hire someone off a LinkedIn profile, there's just so much "latent" or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.

This is important because if you went in big for another CSET, or something that had to start in the millions, you better know the people, the space super well. 

I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people. 

I see! Interestingly there are organizations, like DAOs, that do hiring in a decentralized manner (lots of people deciding on one candidate). There probably isn't much efficacy data on that compared to more centralized hiring, but it's something I'm interested in knowing.

I think there are ways to assess candidates that can be less centralized, like work samples, rather than reference checks. I mainly use that when hiring, given it seems some of the best correlates of future work performance are present and past work performance on related tasks.

If sensitive info matters, I can see smaller groups being more helpful, I guess I'm not sure the degree to which that's necessary. Basically I think that public info can also have pretty good signal.

So it doesn't matter how large your team is, there's no value getting 1000 grantmakers if you only need to know 200 experts in the space.

That's a good point! Hmm, I think that does go into interesting and harder to answer questions like whether experts are needed/how useful they are, whether having people ask a bunch of different subject matter experts that they are connected with (easier with a more decentralized model) is better than asking a few that a funder has vetted (common with centralized models), whether an expert interview that can be recorded and shared is as good as interviewing the expert yourself, etc., some of which may be field-by-field.

Someone has to kibosh this, and a set of unified grant makers could do this.

Is there a reason a decentralized network couldn't also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.

Replies from: Charles He
comment by Charles He · 2022-04-06T03:12:36.691Z · EA(p) · GW(p)

Is there a reason a decentralized network couldn't also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.

So this is borderline politics at this point, but I would expect that a malign agent could capture or entrench in some sort of voting/decentralized network more easily than in any high-quality implementation of an EA grantmaking system (e.g., see politicians/posturing).

 

(So this is a little spicy, and there are maybe some inferential leaps here, but) a good comment related to the need for centralization comes from what I think are very good inside views on ETH development.

In ETH development, it's clear how centralized decision-making de facto occurs for all important development and functionality. Decisions are made by a central leadership, despite there technically being voting and decentralization in a mechanical sense. 

That's pretty telling since this is like the canonical decentralized thing.

 

Your comments are really interesting and important. 

I guess that public demand for my own personal comments is low, and I'll probably no longer reply; feel free to PM!

comment by Linda Linsefors · 2022-04-11T16:43:43.256Z · EA(p) · GW(p)

Let's say Charles He starts some meta EA service, let's say an AI consultancy, "123 Fake AI". 

Charles's service is actually pretty bad: his methods are obscure, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.

Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.

Someone has to kibosh this, and a set of unified grant makers could do this.

 

I don't understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.

In a centralised system, Charles only has to convince the unified grantmakers that he is better to stay on top. In a de-centralised system, he has to convince everyone.

 

As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don't want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you'll get more similar applications.

Replies from: Charles He
comment by Charles He · 2022-04-14T12:12:31.578Z · EA(p) · GW(p)

Ok, so either you have a service funded by EA money and claims to support EAs, or it’s not funded by EA money and claims to support EAs.

(Off topic: if it's not funded by EA money, this is a yellow flag. There are many services, like coaching and mental health support targeting EAs, that are valuable. But it's good to be skeptical of a commercial service that seems to try hard to aim at an EA audience. Why isn't it successful in the real world?)

The premise of my statement is that you have an EA service funded by EA money. There’s many issues if done poorly.

Often, the customers/decision-makers (CEOs) are sitting ducks because they don't know the domain being offered (law/ML/IT/country expertise or what have you) very well. At the same time, they aren't going to pass up a free or subsidized service funded by EA money, even more so a service with the imprimatur of EA funds.

This subsidized service and money gives a toehold to bad actors. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and artfulness. (I want to show, not tell, but this is costly and I don't need to become a dark thought or something.)

I think there are subtler issues. Like, once you start off with a low-funding environment and slowly raise funding bit by bit until you get a first entrant, this is sort of perfectly searching the supply curve for adverse selection.

But really, your response/objection is about something else.

There’s a lot of stuff going on but I think it’s fair to say I was really pointing out one pathology specifically (of a rainbow of potential issues just on this one area). This wasn’t some giant statement about the color and shape of institutional space in general.

Replies from: Charles He
comment by Charles He · 2022-04-14T15:32:44.526Z · EA(p) · GW(p)

Ok, my above comment is pretty badly written; I'm not sure I'm right, and if I'm right, I don't think I'm right for the reason stated. Linda may be right, but I don't agree.

In particular, I don’t answer this:

"In a centralised system, Charles only has to convince the unified grantmakers that he is better to stay on top. In a de-centralised system, he has to convince everyone."

I’m describing a situation of bad first movers and malign incentives, because this is what should be most concerning in general to EAs.

I think an answer is that actually, to start something, you shouldn’t have to convince everyone in a decentralized system. That seems unworkable and won’t happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.

This isn't good, because you have the same adverse-selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big and (as mentioned, but not really explained) there are malign incentives, where people can entrench and principled founders aren't willing to wrestle in the mud (because their opportunity costs are higher or the adversarial skills are disjoint from good execution of the actual work).

comment by PeterSlattery (Peterslattery) · 2022-04-05T20:48:40.473Z · EA(p) · GW(p)

(on phone again - I really need to change this wakeup routine 😄!)

This was helpful. Alongside further consideration of risks, it has made me update to thinking about an intermediate approach. Will be interested to hear what people think!

This approach could be a platform like Kickstarter that is managed and moderated by EA funders. It would be a home for projects that fall in the gap between good enough to fund centrally via EA orgs and judged best never to fund.

For instance, if you submit to FTX and they think you had a good idea but they weren't quite sure you could pull it off, or that it wasn't high value relative to competitors, then you get the opportunity to rework the application into a funding request for this platform.

It then lives there so that others can see it and support it if they want. Maybe your local community members know you better or there is a single large donor who is more sympathetic to your theory of change and together these are sufficient to give you some initial funding to test the idea.

Having such a platform therefore helps aggregate interesting projects and helps individuals and organisations find and support them. It also reduces the effort involved in seeking funding, bringing it closer to submitting a single application.

It addresses several of the issues raised in the post and elsewhere without much additional risk, and it also provides a better way to run innovation competitions and to store and leverage the resulting ideas.

Replies from: Charles He
comment by Charles He · 2022-04-06T02:17:38.543Z · EA(p) · GW(p)

(I'm just writing fan fiction here; I don't know much about your project. This is like "discount Hacker News"-level advice.)

This seems great and could work!

 

I guess an obvious issue is "adverse selection". You're getting proposals that couldn't make the cut, so I would be concerned about the quality of the pool of proposals. 

At some point, average quality might be too low for viability, so the fund can't sustain itself or justify resources. Related considerations:

  • Adverse selection probably gets worse the more generous FTX or other funders get
  • Related to the above, I guess it's relatively common for funders to be generous with smaller starter grants, so this niche might be particularly crowded.
  • Note that many grantmakers ask for revise-and-resubmits; the process is relationship-focused, not grant-focused.

Note that adverse selection often happens on complex, hard-to-see characteristics. E.g., the people asking for money are hucksters, the cause area is implausible and this is camouflaged, or the founding team is bad or misguided and this isn't observable from their resumes.

Adverse selection can get to the point it might be a stigma, e.g. good projects don't even want to be part of this fund.

This might be perfectly viable and I might be wrong. Another suggestion that would help is to have a different angle or source of projects besides those "not quite over the line" at FTX/Open Phil.


 

comment by Linda Linsefors · 2022-04-11T16:26:11.370Z · EA(p) · GW(p)

The chess analogy doesn't work. We don't have grant experts in the same way we have chess experts.

Expertise is created by experience coupled with high-quality feedback. This type of expertise exists in chess, but not much in grantmaking. EA grantmaking is not old enough to have experts. This is extra true in longtermist grantmaking, where you don't get true feedback at all but have to rely on proxies.

I'm not saying that there are no differences in relevant skills. Being generally smart and having related knowledge is very useful in areas where no one is an expert. But the level of skill you seem to be claiming is not believable. And if the grantmakers convinced themselves of that level of superiority, that's evidence of groupthink.

 

Multiple grantmakers with different heuristics will help develop expertise, since this means we can compare different strategies, and sometimes a grantmaker gets to see what happens to projects they rejected that got funding somewhere else.

So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.

I agree, but this doesn't require that there are only a few funders.
 

 

Now we happen to be in a situation where almost all EA money comes from a few rich people. That's just how things are, whether I like it or not. It's their money to distribute as they want. Trying to argue that the EA billionaires should not have the right to direct their donations as they want would be pointless or counterproductive.

Also, I do think that these big donors are awesome people and that the world is better for their generosity. As far as I can see, they are spending their money on very important projects.

But they are not perfect! (This is not an attack!)

I think it would be very bad for EA to spread the idea that the large EA funders are somehow infallible and that small donors should avoid making their own grant decisions.

Replies from: Charles He
comment by Charles He · 2022-04-14T11:39:42.351Z · EA(p) · GW(p)

Hi,

So I'll start off by being a jerk and say that there seem to be a lot of spelling issues going on in your comment.

These spelling boo-boos are, like, sort of on the nose here for this particular topic, and maybe why it got only a downvote.

What gets under my skin is that I suspect I put even less effort into writing and spelling than you, and that my ability isn't higher. I'm not a better writer or speller. I have tools or something, so my half-baked ideas come out pretty smooth.

Like, I'm mansplaining, but a trick is to try writing in, or copying into, Google Docs, which fixes up a lot of grammar and writing snags. Also, people are working on more general tools to help spread and replicate principled and clear thought (FTX idea #2 or something), but that takes more time.

Replies from: technicalities, Charles He
comment by Gavin (technicalities) · 2022-04-14T22:08:07.650Z · EA(p) · GW(p)

This is good advice in the wrong place. DMs exist dude.

Replies from: Charles He
comment by Charles He · 2022-04-14T23:25:50.378Z · EA(p) · GW(p)

If someone had access to DMs, what are possible reasons they would make this message public? Would a reasonable person knowing EA forum norms expect writing this message to help them? What would the actual impact on public perception on the other person be? Why would someone do something like this rhetorically?

By the way, this person obviously speaks English as a second (or third) language. This is close to heroic. Me writing quickly in Swedish would be impossible.

comment by Charles He · 2022-04-14T11:47:38.071Z · EA(p) · GW(p)

So, if you made it this far, about grantmaking skill and concentration: I think I'm right, but I could be wrong, and it seems good to have the strongest form of this criticism.

But look, especially in this situation, it seems difficult to communicate and explain the topic of grantmaking skill. There's a lot going on and I'm also sort of dumb. If we don't agree on the premises, it's hard to make progress.

Because of this, I want to ask: is this really about grantmaking skill (which I think is extremely, comically demanding), or is it about perceived control, values, fairness, or something else?

Did you see MacKenzie Scott's "org" distributing $8.6B? She wrote a public letter on Medium explaining her views.

https://mackenzie-scott.medium.com/helping-any-of-us-can-help-us-all-f4c7487818d9

After reading this, it "feels" strange to imagine walking into Scott's office and telling her about democracy or something, even though I don't agree with all the funding choices.

But certainly this feeling isn’t the same for EA. But why is this?

For EA grantmaking, what’s the “promise”, what is owed, and to whom? I honestly want to learn from you.

Replies from: Linda Linsefors
comment by Linda Linsefors · 2022-04-14T21:36:56.244Z · EA(p) · GW(p)

I agree that grantmaking is hard! 

There are gaps in the system exactly because grantmaking is hard.

No, this is not about grantmaking skill, or at least not directly, though skill relative to the difficulty of the task is very relevant. Nor is it about fairness. Slowing down to worry about fairness within EA seems dumb.

This is about not spreading harmful, misleading information to applicants, and to other potential donors who are considering whether they want to make their own donation decisions.

I'm mostly just trying to say: can we please acknowledge that the system is not perfect? How do I say this without anyone feeling attacked?

Getting rejected hurts. If you tell everyone that EA has heaps of money and that the grantmakers are perfect, then it hurts about 100x more. This is a real cost. EA is losing members because of this, and almost no one talks about it. But it would not be so bad if we could just agree that grantmaking is hard, and that grantmakers therefore make mistakes sometimes.

https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection [EA · GW]

My current understanding is that the biggest difficulty in grantmaking is information bandwidth. The text in the application is usually not nearly enough information, which is why grantmakers rely on other channels of information. This information is necessarily biased by their network; mainly, it is much easier to get funded if you know the right people. This is all fine! I want grantmakers to use all the information they can, even if this causes unfairness. All successful networks rely heavily on personal connections, because it's just more efficient. Personal trust beats formal systems every day. I just wish we could be honest about what is going on.

I don't expect rich people to delegate their funding decisions to unknown people outside their network just for fairness. I don't think that would be a good idea.

But I do want EAs who happen to have some money to give, and happen to have significantly different networks compared to the super-donors, to be aware of this, to be aware of their comparative advantage in donating within their own networks, instead of delegating this away to EA Funds.

What is owed is honesty. That is all.

It's not even the case that the grantmakers themselves exaggerate their own infallibility, at least not explicitly. But others do, which leads to the same problems. This makes it harder to answer "who owes what". Fortunately, I don't care much about blame. I just want to spread more accurate information, because I've seen the harm the misinformation does. That's why I decided to argue against your comment. Leaving those claims unchallenged would add to the problems I tried to explain here.

_____________________

Regarding spelling: I usually try harder. But this topic makes me very angry, so I tried to minimise the time I spent writing this. Sorry about that.

Replies from: Charles He
comment by Charles He · 2022-04-14T23:12:23.426Z · EA(p) · GW(p)

What you’re saying makes sense and is important to me. In fact it’s mainly what I care about.

In the comment that appeared above your first reply, I said the experts (like, take the billions of people in the world and then take the best in each domain) might be so good that it's difficult to communicate with or understand them.

So my claim was that it is unwieldy for a large group of people to act like grantmakers because of the nature of these experts. I left the door open to grantmakers being this good (because that seems positive, and it's strong to say they can't be).

I think you believe I'm arguing that current grantmakers are unquestionable. That isn't what I wrote (you can look again at the top comment; I can't link, I'm typing on my phone and it's hard, seriously, this physically hurts my thumbs).

In the other comment chain with you, you replied objecting to the idea of malign behaviour requiring centralization. Here, sort of like above, I find it tempting to see you pushing back against a broader point than I originally made.

You did this because it was important to you.

Replies from: Charles He
comment by Charles He · 2022-04-14T23:17:01.998Z · EA(p) · GW(p)

I’m not writing this comment, the previous comment or any comment here to you because I want to argue. I didn’t write it because I want to be polite or even strictly because I had a “scout mentality”. I literally don’t have any attachments for or against what you said. I wanted to understand.

You expressed something important to you. I’m sorry you felt the need to write or defend with the effort and emotion you did.

The reason why this is valuable is that most of what I wrote and the top of what you wrote are just arguments.

We can take these arguments and knock them out of someone’s hand, or give better new ones instead. It’s just logic and reasoning.

It’s the values that I care about and wanted to understand. The reasons why you wanted to talk and how you felt. (This wasn’t supposed to be difficult or cause stress either).

comment by Charles He · 2022-04-09T20:09:04.477Z · EA(p) · GW(p)

The end of the above comment included a statement about no funding, which suggested that my comment was entirely disinterested.

I've since learned (this morning) of additional funding and/or interest in funding and this statement about no funding is no longer true. It was probably also misleading or unfair to have made it in the first place. 

comment by tobyj (tobyjolly) · 2022-04-04T11:20:27.774Z · EA(p) · GW(p)

Hundreds of people spent considerable time writing applications to FTX Future fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they think look the most promising.

This wouldn't directly address your main concern, but I'd be really interested to see more full grant applications posted publicly (both successful and non-successful). 

Replies from: Linch, Charles He
comment by Linch · 2022-04-04T16:50:46.191Z · EA(p) · GW(p)

Manifold Markets (which I have a COI with) posted their FTX FF grant application here [EA · GW]. 

comment by Charles He · 2022-04-04T16:38:21.516Z · EA(p) · GW(p)

I want you to know there isn't some secret sauce or special formula in the words of a grant proposal itself. I don't think there is really anything canonically correct.

There might be one such grant application shared publicly, if that person ever gets around to it. 

This grant is interesting because it was both successful and non-successful at the same time: it attracted interest but was rejected due to the founder, so the project might be "open".

comment by PabloAMC · 2022-04-04T16:22:40.368Z · EA(p) · GW(p)

One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only top venture capitalists.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-05T19:53:37.471Z · EA(p) · GW(p)

Do you have any evidence for this? There's definitely evidence to suggest that decentralized decision making can outperform centralized decision making; for example, prediction markets and crowdsourcing. I think it's dangerous to automatically assume that all centralized thinking and institutions are better than decentralized thinking and institutions.

Replies from: PabloAMC
comment by PabloAMC · 2022-04-05T22:03:37.276Z · EA(p) · GW(p)

I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?

On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2022-04-06T02:47:01.682Z · EA(p) · GW(p)

I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around.

Yep, there's definitely return persistence with top VCs, and the last time I checked I recall there was uncertainty around whether that was due to enhanced deal flow or actual better judgement.

That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?

I think that just taking the average is one decentralized approach, but certainly not representative of decentralized decision making systems and approaches as a whole.

Even the Good Judgement Project can be considered a decentralized system to identify good grantmakers. Identifying superforecasters requires having everyone make predictions and then finding the best forecasters among them, whereas I do not believe the route to becoming a funder/grantmaker is that democratized. For example, there's currently no way to measure what various people think of a grant proposal, fund it regardless (there can be rules about not funding downside-risk stuff, of course), and then look back and see who was actually accurate.
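To make the kind of accuracy-tracking I'm imagining concrete, here's a minimal sketch (all names, probabilities, and outcomes are hypothetical, and real scoring would need many more grants and care around downside risk): each evaluator assigns a success probability to the same set of grants, and once outcomes are known, evaluators are ranked by Brier score, where lower is better.

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted success probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Each evaluator assigns a success probability to the same five hypothetical grants.
evaluator_predictions = {
    "alice": [0.9, 0.2, 0.7, 0.4, 0.8],
    "bob":   [0.5, 0.5, 0.5, 0.5, 0.5],  # uninformative baseline
}
actual_outcomes = [1, 0, 1, 0, 1]  # 1 = project judged successful in hindsight

scores = {name: brier_score(preds, actual_outcomes)
          for name, preds in evaluator_predictions.items()}
best = min(scores, key=scores.get)  # best-calibrated evaluator so far
```

Over enough grants, a leaderboard like this would surface well-calibrated evaluators from a wide pool, which is exactly the identification step that currently has no democratized equivalent in grantmaking.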

There haven't actually been real prediction markets implemented at a large scale (Kalshi aside, which is very new), so it's not clear whether that's true. Denise quotes Tetlock mentioning that objection here [EA · GW].

I also think that determining what to fund requires certain values and preferences, not necessarily assessing what's successful. So viewpoint diversity would be valuable. For example, before longtermism became mainstream in EA, it would have been better to allocate some fraction of funding towards that viewpoint, and likewise with other viewpoints that exist today. A test of who makes grants to successful individuals doesn't protect against funding the wrong aims altogether, or certain theories of change that turn out to not be that impactful. Centralized funding isn't representative of the diversity of community views and theories of change by default (I don't see funding orgs allocating some fraction of funding towards novel theories of change as a policy).

Replies from: PabloAMC
comment by PabloAMC · 2022-04-06T10:08:24.814Z · EA(p) · GW(p)

So viewpoint diversity would be valuable.

Definitely. In particular, this is valuable when the community also pivots around cause neutrality. So I think it would be good to have people with different opinions on which cause areas are better to support.

comment by PeterSlattery (Peterslattery) · 2022-04-04T19:59:14.100Z · EA(p) · GW(p)

(On phone, early in the morning!)

Thanks for this.

I agree with nearly all of it.

I'd like us to have a community fundraising platform and a coexisting crowdfunding norm so that more good ideas get proposed and backed. Also, so that the community (including centralised funders) have a better read on what the community wants and why.

As an example, I have several desires for changes and innovations that I'd be happy to help fund. For instance, I would like to be able to read and refer to a really detailed assessment and guesstimate model for whether, when, and how best to decide on giving now vs. saving and giving later. I'd also help fund an effective bequest or volunteer pledge program. I know others who share my views. I'd like to know the collective interest in funding either of these. I'd also like centralised funders to know that information, as community willingness to fund something might make them decide to fund it in conjunction or instead. I don't currently have any easy way to do this.

I suspect there are many ideas in EA that would possibly attract crowdfunding but not centralised funding (at least initially) because many people in some part of the EA community have some individually small, but collectively important need that funders don't realise.

With regard to Stefan's point, rather than reduce risk by reducing and centralising access to funding like we do now, we could reduce it in other ways. We could have community feedback. We could also have contingencies within grants (e.g., projects only funded after a risk assessment is conducted). We could have something modelled on ethics committees to assess what project types are higher risk.

comment by Brendon_Wong · 2022-04-05T20:37:11.477Z · EA(p) · GW(p)

I agree with the issues related to centralized grantmaking flagged by this article! I wrote a bit about this [EA · GW] back in 2018. To my understanding, EA has not been trying forms of decentralized/collective thinking, including decentralized grantmaking. I think that this is definitely a very promising area of inquiry worthy of further research and experimentation.

One example of the blind spots and differences in theories of change you mention is reflected in the results [EA · GW] of the Future Fund's Project Ideas Competition [EA · GW]. Highly upvoted ideas like "Investment strategies for longtermist funders" and "Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles," which came in at #3 and #4 respectively, did not win any awards or mention. This suggests that there is decent community interest and consensus around projects and project areas that aren't being funded, or funded sufficiently, by centralized entities. For those project areas, there are a decent number of people within EA, project leads, and smaller-scale funders (BERI, EA Funds, various HNWIs) that I am aware of who either believe such efforts are valuable and underfunded or have funded projects in those areas in the past. The specific grantmaking team at the Future Fund may have interests and theories of change that aren't the same as those of other grantmaking teams and EAs. It's fine to have specialized interests and theories of change, and indeed everyone does, but the issue is that only one set of those is deciding how to allocate all of the Future Fund's funding. As you point out, that's basically guaranteed to be suboptimal.

comment by Ivy_Mazzola · 2022-04-06T19:17:30.121Z · EA(p) · GW(p)

As a community manager, I care a lot about maximizing the potential of any community member who is already deep enough on the EA engagement funnel to even be applying for a grant. In addition to the (very good) reasons in OP's post, I want to see the grantmaking ecosystem become less centralized because:

1. Founders, scalers, and new projects are a bottleneck for EA and it is surprisingly hard to prompt people to take such a route. It seems to be a personality thing, so we should look twice before dismissing people who want to try. 

2. Even if a project ends up underperforming, the opportunity to try scaling or starting up a project does give a dedicated and self-starting EA valuable experience. That innovator-EA may get more potential benefit from being funded than a lot of other ways that one might slowly gain experience. And funding the project should come with some potential positive impact, even if it isn't the most impactful and exciting project to many grantmakers. 
Similar tactics exist in the movement already: EA/80K recommends people enter the for-profit world to gain experience, which comes with near-zero positive impact potential during that time. EA also subsidizes career trainings, workshops, and even advanced degrees toward filling bottlenecks of all types. 
Therefore, I'd also advocate for being a bit more lax in funding/subsidizing relatively cheap new projects or scale-ups, which can help dedicated innovator/self-starter EAs gain career experience and yield some altruistic wins. (I admit that some funders may already be thinking this way; I don't know!)

3. It is sad to me that dedicated EAs can essentially be blackballed in what I'd still like to think of as an egalitarian movement. I don't think it is anyone's fault (mad props to grantmakers and funders), but if the funding ecosystem evolves to be a bit more diverse, I think it would be good for the movement's impact and reputation, at least via the mental health and value drift levels of EAs themselves. I'm not saying "fund everything that isn't risky", but that being gatekept/blackballed is a uniquely frustrating experience that can sour one's involvement with the movement. Despite good intentions and a mature personality, it seems natural to stick more to the sidelines after being rejected the first time you stick your neck out and not given any recommendations for where else to apply for funding. The more avenues the movement has and the more obvious these avenues are, the less a rejection will feel like a blackball and prompt people to stop trying.

FWIW I really like the vetted kickstarter idea posted by Peter Slattery below [EA(p) · GW(p)]. A bonus with an idea like that is that it will also keep E2Gers engaged. It is a lot more interesting than, say,  donating to EAIF every year, and maybe they can get their warm fuzzies there too.

comment by Chris Leong (casebash) · 2022-04-04T14:22:11.574Z · EA(p) · GW(p)

This is yet another reason why I'd love to see mini-EA hotels in major cities around the world as I described in this Twitter thread. Obviously, this wouldn't remove the bias towards people in major cities, but it would decrease geographical bias overall and the perfect shouldn't be the enemy of the good.

Replies from: ElliotJDavies
comment by ElliotJDavies · 2022-04-05T20:26:53.372Z · EA(p) · GW(p)

I would be very interested in doing this in Copenhagen. If anybody going to EA Global has strong opinions on this, I would love to set up a meeting and chat about it.

Replies from: casebash
comment by Chris Leong (casebash) · 2022-04-05T20:38:21.601Z · EA(p) · GW(p)

I'll be at EA Global. Feel free to reach out to me.

comment by Jamie_Harris · 2022-04-30T14:01:52.541Z · EA(p) · GW(p)

I agree that centralised grant-making might mean that some promising projects are missed. But we're not solely interested in this. We're overall interested in:

Average cost-effectiveness per $ granted * Number of $ we're able to grant

My intuition would be that the more decentralised the grant-making process, the more $ we're able to grant.

But this also requires us to invest more talent in grant-making, which means, in practice, fewer promising people applying for grants themselves, which might non-negligibly reduce average cost-effectiveness per $ granted.

Beyond the above consideration, it seems unclear whether decentralised grant-making would overall increase or decrease the average cost-effectiveness. Sure, fewer projects above the current average cost-effectiveness would slip through the net, but fewer projects below it would slip through too. So I'd expect these things to roughly balance each other out UNLESS we're making a separate claim that the current grantmakers are making poor/miscalibrated decisions. But at that point, this is not an argument in favour of decentralising grant-making but an argument in favour of replacing (or competing with) the current grantmakers.

So maybe overall, decentralising grant-making would trade an increase in $ we're able to grant for a small decrease in average cost-effectiveness of granted $.

(I felt pretty confused writing these comments and suspect I've missed many relevant considerations, but thought I'd flesh out and share my intuitive concerns with the central argument of this post, rather than just sit on them.)

comment by ElliotJDavies · 2022-04-05T20:09:51.153Z · EA(p) · GW(p)

[Quick thoughts whilst on mobile]

My takeaway: interested to hear what said grant makers think about this idea.

I find the arguments re: the efficient market hypothesis pretty compelling, but I also find the arguments re: "inferential distance" and the unilateralist's curse compelling.

One last point: so far, I think one of EA's biggest achievements is its truly unusually good epistemics, and I'm particularly concerned about how centralised small groups could damage that, especially since more funding could exacerbate this effect.

comment by Yitz · 2022-04-04T22:06:53.282Z · EA(p) · GW(p)

Posted on my shortform, but thought it’s worth putting here as well, given that I was inspired by this post to write it:

Thinking about what I’d do if I was a grantmaker that others wouldn’t do. One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high value, and who live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently have. I’d expect some of them to be interested (though some would decline), and they’d likely be coming from a very different angle than most people in this space. This may not be the most efficient use of money, but making use of my peculiar/unique network of friends is something only I can do, and may be of value.