Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds?

post by yhoiseth · 2019-04-01T20:25:31.176Z · score: 10 (4 votes) · EA · GW · 14 comments

This is a question post.

In Liberal Radicalism: A Flexible Design For Philanthropic Matching Funds, Vitalik Buterin, Zoë Hitzig and E. Glen Weyl propose a mechanism for (near) optimal provision of public goods.

The paper is quite recent (December 2018). Considering that the EA community may be vetting-constrained [EA · GW], has anyone implemented or considered implementing the suggested mechanism?
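
For context, the core of the proposal (as I understand it) is a matching rule under which each public good receives funding equal to the square of the sum of the square roots of its individual contributions, with a philanthropic pool covering the gap above the raw contributions. A minimal sketch in Python -- the function names are my own, the numbers are illustrative, and the paper also discusses a capital-constrained variant:

```python
from math import sqrt

def lr_funding(contributions):
    """LR / quadratic matching rule: a good receives the square of the
    sum of the square roots of its individual contributions."""
    return sum(sqrt(c) for c in contributions) ** 2

def required_subsidy(contributions):
    """Amount the matching pool must add on top of the raw contributions."""
    return lr_funding(contributions) - sum(contributions)

# Many small donors are matched far more heavily than one large donor
# giving the same total:
print(lr_funding([1] * 100))  # 100 donors of $1 each -> 10000.0
print(lr_funding([100]))      # 1 donor of $100       -> 100.0
```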

Answers

answer by aarongertler · 2019-04-01T21:04:55.792Z · score: 6 (3 votes) · EA · GW

MIRI took part in a Liberal Radicalism-based fundraiser at the end of 2018 (see "WeTrust Spring" in this post).

answer by rohinmshah · 2019-04-02T15:52:44.348Z · score: 1 (1 votes) · EA · GW

The main issue with the mechanism seems to be collusion between donors. As Aaron mentioned, MIRI took part in such a fundraiser. I claim that it was so successful for them precisely because MIRI supporters were able to coordinate well relative to the supporters of the other charities -- there were a bunch of posts about how donating through this fundraiser effectively came with a 50x multiplier, or something like that.

comment by yhoiseth · 2019-04-02T22:43:08.648Z · score: 2 (2 votes) · EA · GW

Are you saying that this was an example of collusion?

comment by rohinmshah · 2019-04-03T00:02:42.696Z · score: -4 (2 votes) · EA · GW

Yes.

comment by yhoiseth · 2019-04-03T07:58:36.244Z · score: 2 (2 votes) · EA · GW

I'm not sure if I see how this is collusion. Would you mind elaborating?

comment by Ben Pace · 2019-04-03T11:11:18.552Z · score: 3 (2 votes) · EA · GW

MIRI helped us know how much to donate and how much of a multiplier it would be, and updated this recommendation as other donors made their moves. I added something like $80 at one point because a MIRI person told me it would have a really cool multiplier, but not if I donated a lot more or a lot less.

comment by yhoiseth · 2019-04-03T11:55:50.120Z · score: 3 (2 votes) · EA · GW

This does not sound like collusion, at least according to the Merriam-Webster definition:

secret agreement or cooperation especially for an illegal or deceitful purpose

To me, it seems more like promotion.

comment by rohinmshah · 2019-04-03T15:06:56.827Z · score: 9 (4 votes) · EA · GW

Sorry, I meant "collusion" in the sense that it is used in the game theory literature, where it's basically equivalent to "coordination in a way not modeled by the game theory", and doesn't carry the illegal/deceitful connotation it does in English. See e.g. here, which is explicitly talking about this problem for Glen Weyl's proposal.

The overall point is, if donors can coordinate, as they obviously can in the real world, then the optimal provisioning of goods theorem no longer holds. The example with MIRI showcased this effect. I'm not saying that anyone did anything wrong in that example.

comment by yhoiseth · 2019-04-03T22:45:09.730Z · score: 1 (1 votes) · EA · GW

The overall point is, if donors can coordinate, as they obviously can in the real world, then the optimal provisioning of goods theorem no longer holds.

I don't find this to be obvious. In my understanding, coordination/collusion can be limited by keeping donations anonymous. (See the first two paragraphs on page 16 in the paper for an example.)

comment by rohinmshah · 2019-04-04T15:34:42.666Z · score: 1 (1 votes) · EA · GW

In my understanding, coordination/collusion can be limited by keeping donations anonymous.

It's not hard for an individual to prove that they donated by other means, e.g. screenshots and bank statements.

(See the first two paragraphs on page 16 in the paper for an example.)

Right after that, the authors say:

There is a broader point here. If perfect harmonization of interests is possible, Capitalism leads to optimal outcomes. LR is intended to overcome such lack of harmonization and falls prey to manipulation when it wrongly assumes harmonization is difficult.

With donations it is particularly easy to harmonize interests: if I'm planning to allocate 2 votes to MIRI and you're planning to allocate 2 votes to AMF, we can instead each allocate 1 vote to MIRI and 1 to AMF, and we both benefit. Yes, we have to build trust that neither of us will defect by putting both of our votes toward our own preferred charity, but this seems doable in practice: even in the hardest case of vote trading (where there are laws attempting to enforce anonymity and the inability to prove your vote), there seems to have been some success.
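
To make the arithmetic concrete, here is a rough sketch of that trade under the quadratic matching rule, treating each "vote" as a unit contribution (the numbers are illustrative):

```python
from math import sqrt

def lr_funding(contributions):
    # LR / quadratic matching: (sum of square roots of contributions)^2
    return sum(sqrt(c) for c in contributions) ** 2

# No trade: I put both my votes on MIRI, you put both of yours on AMF.
print(lr_funding([2]), lr_funding([2]))        # MIRI gets ~2, AMF gets ~2

# Trade: each of us splits one vote to the other's charity.
print(lr_funding([1, 1]), lr_funding([1, 1]))  # MIRI gets 4, AMF gets 4
```

We each pay the same total out of pocket, but both charities end up with roughly double the funding, with the extra coming from the matching pool.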

comment by Ben Pace · 2019-04-03T12:10:32.516Z · score: 8 (3 votes) · EA · GW

Ah yes, agree. I meant coordination, not collusion. Promotion also seems fine.

answer by Milan_Griffes · 2019-04-01T20:51:23.049Z · score: 1 (3 votes) · EA · GW

I haven't heard of anyone implementing something like this. I think it's a great idea – would love to see more exploration of it!

14 comments

comment by cole_haus · 2019-04-02T05:02:39.442Z · score: 1 (1 votes) · EA · GW

IIRC, the mechanism has problems with collusion/dissembling. For example, one backer with $46 and 4 backers with $1 each will get significantly better results by splitting their combined money into 5 contributions of $10 each. This seems like a problem that's actually moderately likely to arise in practice.
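
As a rough check of that arithmetic under the paper's quadratic matching rule (a sketch, not tied to any particular implementation):

```python
from math import sqrt

def lr_funding(contributions):
    # LR / quadratic matching: (sum of square roots of contributions)^2
    return sum(sqrt(c) for c in contributions) ** 2

print(lr_funding([46, 1, 1, 1, 1]))      # honest reporting: (sqrt(46) + 4)^2 ~= 116.3
print(lr_funding([10, 10, 10, 10, 10]))  # split evenly: (5 * sqrt(10))^2 ~= 250
```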

comment by yhoiseth · 2019-04-02T06:54:43.154Z · score: 1 (1 votes) · EA · GW

Yeah, this and fraud are potential problems. They're discussed in section 5.2, Collusion and deterrence (pages 15 to 19).