Experiment in Retroactive Funding: An EA Forum Prize Contest

post by DonyChristie, Dawn Drescher (Telofy), Matt Brooks · 2022-06-01T21:15:09.031Z · EA · GW · 16 comments

Contents

  Background
    TL;DR: We want to buy the impact of quality EA Forum posts.
  Instructions
  Process
  Other less important details
  What does it mean to sell your impact, exactly?
  The Story of An Impact Sale: Two Examples
16 comments

Coauthors: Denis Drescher, Matt Brooks

Background

TL;DR: We want to buy the impact of quality EA Forum posts.

We’re working on building impact markets to make public goods tradeable by retroactively funding the labor of the people who create them. 

Our overall goal: make the nonprofit space work more like for-profits. Pay for outcomes, not for promises.

We’re doing this to increase funding for public goods and make it easier for large funders to find and fund valuable projects among other benefits [EA(p) · GW(p)]. (See below for more detail on our vision for these markets.) We are receiving a grant via the Future Fund Regranting Program and are excited to start launching experiments to iterate towards a large open impact market. You can check out our new informational website and join our community Discord.

Our first “minimum viable product” is this contest. We are purchasing the impact of EA Forum posts in a centralized and moderated way to create a controlled sandbox in which we can observe the consequences and course-correct with feedback from the community. This way, we plan to mitigate potential risks and downsides [EA(p) · GW(p)] to impact markets.

If it goes well, we plan to progressively expand both the prize pool amount for each contest and the scope of the market to things beyond the EA Forum. More broadly, we envision a world where markets for public and common goods (with mechanisms such as impact certificates, impact stock, retroactive funding, quadratic funding, dominant assurance contracts, Harberger taxes, etc.) play a significant role in the creation and funding of these goods.

Below are the instructions for the contest. If you feel confused, there are examples and analogies below the instructions. [EA · GW]


Instructions

  1. Please submit your post on app.impactmarkets.io.
    1. If you want to save time, you can leave the justification (i.e. description) of your impact certificate empty and add it only if and when someone indicates interest in buying some fraction of it. Without the description it should only take a minute or two.
  2. EA Forum posts from May and June are eligible. Posts submitted after June 30th will not be considered for this particular contest but may be eligible for future contests.
  3. We value forum posts in proportion to how much we consider them morally good, positive-sum, and non-risky. (More on our criteria below.)
    1. We have, for almost a year, thought a lot about risks from contests like these and from the markets that they may turn into. You can read more about our thinking in the post Toward Impact Markets [EA · GW].
    2. We welcome any feedback you might have, positive or negative! Here is an anonymous feedback form that you can use. Comments on this post are of course welcome too.
  4. Please only submit EA Forum posts. For example, if your post discusses a project that you’ve launched, then please make it clear in your certificate text that the certificate is for the Forum post only (a similar purview to copyright) and does not extend to your project.
  5. If we buy some percentage of the impact of your post, you will be required to edit your EA Forum post and put a note at the bottom linking to your certificate on our website (for future tracking purposes).
  6. You can give some percentage of the impact (and potential sale price) to someone else you collaborated with (coauthors, editors, proofreaders, etc.). Make sure to discuss and agree on the shared percentage before submitting your forum post for sale. We will not facilitate this sharing, you will have to coordinate between yourselves.
  7. Don’t take advance monetary investments or make promises of future profits or you might get in trouble with your local securities law. Ask us if you're unclear whether this is a concern for you.

Process

  1. We and interested funders want to spend ~$10,000 in total to buy the impact of EA Forum posts.
    1. We think there is at least a 90% likelihood that we will spend the full amount; however, on the off chance we do not find enough valuable posts, we are not fully committing all of the funds. We may also spend more if we really like what’s on offer. Buying the impact does not give us the rights to any intellectual property.
  2. At our discretion, we will invite other funders to buy impact as well. Please reach out if you're a funder and you’re interested. These purchases will not count as donations for tax purposes.
  3. We will take the month of July to assess which posts we’d like to make offers on. If we’re sufficiently interested in your post, we will contact you via email or EA Forum message. You can decline the offer or make a counteroffer on the price or percentage of the impact being purchased.
  4. If the offer is accepted we will pay you via wire transfer, check, ACH, or any other payment method that works for both of us (e.g., Paypal, Venmo, Zelle, Wise).
  5. If you write a post and submit it, there is no guarantee whatsoever it will be funded.
  6. We do not plan to resell or consume/open the impact of the posts in the short term but reserve the right to do so in the future. We also reserve the right to give you your impact back for free in the case we think this prize contest was not optimal at initializing the right framing for impact markets.
    1. (Parenthetically: Consumption/Opening are tentative names for a potential mechanism whereby a piece of an impact certificate can be "consumed" by an owner and forever after they "own" that piece of impact.)

Other less important details

  1. We’re currently aiming for transactions of more than ~$250 (e.g., purchasing 50% of a post valued at $500). If you’re unsure whether your post might be worth that much, we encourage you to submit it anyway! It might be considered for future contest rounds and by different future buyers.
  2. It’s not a requirement to read all of the following, but it will become more important in future contests/rounds. The certificate description justifies the value of the impact as defined by the latest version of the Attributed Impact definition (currently 0.2) [EA(p) · GW(p)].
    1. The issuers won’t try to benefit one moral goal at the expense of another or otherwise violate the preferences of others.
    2. In particular, they won’t risk destroying our civilization or creating great suffering, regardless of century, species, or substrate.
    3. Rather they will try to achieve their goals in a morally cooperative, respectful manner.
  3. Topics we’re interested in seeing:
    1. Here’s Dawn’s longer list of inspiration for articles. As a quick-and-dirty heuristic, you can assume that if Brian Tomasik and Nick Beckstead are excited about an article, Dawn will also be excited about it. In particular:
      1. How can you allocate shares in a past project if you can’t talk to all other contributors?
      2. Can you make the case that any of the submitted impact certificates are the result of cheating of some sort? (Submissions of this sort are allowed to come in a few days after the deadline. Please give us the heads-up if you’re working on one.)
      3. But there are also many suggestions on the list that are not directly connected with impact markets.
    2. Dony’s interests are illegible and should not be Goodharted. He wants to incentivize good posts in generality, especially if he couldn’t have predicted them in advance.
    3. Other topics of interest include but are not limited to:
      1. Cause exploration [EA · GW] and new Cause X candidates
      2. All FTX Future Fund areas of interest
      3. Quality criticism [EA · GW] within EA
      4. AI safety research that does not increase AI capabilities
  4. You can enter posts you’ve submitted to other contests into our contest as well!
  5. Our provisional evaluation process (we may change this):
    1. We want to use something like the Utility Function Extractor to aggregate our (Dawn/Denis, Dony, Matt) relative preferences for the submissions,
    2. normalize the sum of the resulting weights to our budget,
    3. set some base fraction that we want to buy for that money,
    4. suggest the deal to the issuers,
    5. and then maybe haggle a bit to make sure everyone is happy with the deals.
    6. Please contact us if you're interested in joining our Evaluation Team!
  6. You can book time on our Calendly, join our Discord, submit anonymous feedback, or comment below if you have any questions.
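As a rough numerical illustration of the provisional evaluation process in item 5, here is a toy sketch in Python. The $10,000 pool and ~$250 minimum transaction come from this post; the aggregation of preferences into weights, the 50% base fraction, and all example numbers are hypothetical.

```python
# Toy sketch of the provisional evaluation process described in item 5 above.
# Assumes preference weights have already been aggregated (e.g., with a
# utility-function-extractor-style tool); all inputs here are hypothetical.

BUDGET = 10_000        # total prize pool from this post
MIN_TRANSACTION = 250  # approximate minimum deal size from this post
BASE_FRACTION = 0.5    # hypothetical base fraction of impact to buy

def draft_offers(weights: dict[str, float]) -> dict[str, tuple[float, float]]:
    """Normalize relative preference weights to the budget and pair each
    submission with a (dollar offer, fraction of impact bought) tuple."""
    total = sum(weights.values())
    offers = {}
    for post, weight in weights.items():
        amount = BUDGET * weight / total
        if amount >= MIN_TRANSACTION:  # drop deals below the minimum size
            offers[post] = (round(amount, 2), BASE_FRACTION)
    return offers

# Three hypothetical submissions with aggregated relative weights.
# post_c falls below the ~$250 minimum and gets no offer.
print(draft_offers({"post_a": 5.0, "post_b": 3.0, "post_c": 0.1}))
```

The resulting draft deals would then still be suggested to the issuers and haggled over, per the last steps of the process above.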

What does it mean to sell your impact, exactly?

Left: Early supporters fund an impact project.
Right: The retro funder funds the supporters.
Not pictured: Exchange of certificate or royalties on transactions.

An impact certificate [? · GW] describes some impactful action and represents an entitlement to a fraction of future retroactive funding for that action. 

The metaphor here is that you can put some good deed into a bottle, trade it around, and some people will be motivated by profit and some by intrinsic interest in funding good deeds.

It is a new experimental funding mechanism that, we think, can lead to more impact-focused incentive structures in philanthropy.
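To make the bookkeeping concrete, a certificate can be pictured as a ledger of fractional shares. This is only an illustrative sketch: the names, numbers, and the `sell` interface are made up and need not match how app.impactmarkets.io actually represents certificates.

```python
# Illustrative sketch of fractional ownership in an impact certificate.
# All names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ImpactCertificate:
    action: str   # description of the impactful action, e.g. a Forum post
    shares: dict = field(default_factory=dict)  # owner -> fraction, sums to 1.0

    def sell(self, seller: str, buyer: str, fraction: float, price: float) -> float:
        """Transfer `fraction` of the certificate from seller to buyer;
        the returned price is the seller's retroactive funding."""
        if self.shares.get(seller, 0.0) < fraction:
            raise ValueError("seller does not own that large a fraction")
        self.shares[seller] -= fraction
        self.shares[buyer] = self.shares.get(buyer, 0.0) + fraction
        return price

cert = ImpactCertificate("EA Forum post on topic X", {"author": 1.0})
cert.sell("author", "retro_funder", 0.5, 500.0)  # funder buys 50% for $500
print(cert.shares)  # {'author': 0.5, 'retro_funder': 0.5}
```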

The Story of An Impact Sale: Two Examples

Story 1 (by Dawn/Denis):

Alice Altruist wants to do a project, such as a scientific paper. Ivan Investor gives Alice money in return for shares in her impact certificate. (This step is not part of this contest.) Ivan also does this with many other projects and helps them succeed. Turns out, Alice did succeed! Finally, Alice and Ivan pitch the project to Reto Retrofunder. Reto loves the outcome of the project and offers to buy shares in it at a price that Alice and Ivan are happy with.

Because of this market/mechanism:

  1. Alice can make a profit from creating good impact
  2. Alice can draw on a larger hiring pool and align incentives through participation
  3. Ivan can make a profit from investing in good impact
  4. Reto can save time and invest it in better prioritization
  5. Reto can draw on market signals to find things he wants to fund


Story 2 (by Dony):

Meet Johnny. Johnny is an altruist. He wants to help reduce global warming and restore the environment. He considers planting trees, but that has costs: the financial cost of buying the seeds and the shovel, the time cost of planting, and other personal costs. 

There are many people like Johnny, but while some may be intrinsically motivated enough to go through the effort, most won’t. There are many future beings who may benefit from the presence of trees he plants in the present, but they miss out. Economically speaking, the people and animals and other entities in the future benefit the most from the fruit of the tree, from its shade and beauty and carbon sequestration, while people in the present have to bear the burden of supplying it. 

It would be nice if there were a way for future people to pay for Johnny to plant the trees now.

Impact markets are a way for money to time travel from the future to the present.

Johnny can plant a seed, and issue an impact certificate. The certificate can describe what he did, show the location of the tree, and link to permanently-stored video evidence of him planting it, perhaps with other evidence such as witness testimony.

The tree sprouts to become a seedling. Johnny can hold onto his certificate or sell fractions of it to others, such as friends, early supporters, or unknown profit-oriented people who are combing the marketplace for opportunities. Now he has some money for his altruism and can spend it on more tree-planting, on a completely different cause area, on an index fund, on food, or on whatever he feels like. It’s a reward for doing something no one told him to do, but the impact market is acting like a prediction market over what value it thinks the future will give to the act of tree-planting.

The tree becomes a sapling. The early purchasers can sell their pieces of impact certificates to others. Royalties could be set up to flow back to the issuer or early funders to reward them.

The tree becomes an adult. As it grows over time, the certificate’s value should grow, or at least change as more information comes in. The price may rise if the tree proves healthy and was undervalued by people earlier on. The price might fall if, say, the tree was planted on private land without permission or was cut down.

At some point, a retroactive funder aka Final Buyer comes in. This is the equivalent of a philanthropist, but instead of funding unproven projects, they’re purchasing certificates representing outcomes that have already happened. They don’t have time to evaluate everything that’s happened, but luckily most of their work has been done for them by smaller funders/investors. The retro funder comes in and says “I will buy certificates for having planted trees”, and either this incentivizes more tree-planting to happen, or they dig up certificates that are already on the market and buy them. Whether this retro funder is actually the Final Buyer or is an early purchaser in a hypothetically unending chain of retro funders depends on your theory of how impact markets will work.

This story, while probably wrong in some details, should give you a broad picture of how this would work. We aren’t accepting submissions for forestry in this contest.

Forum posts aren’t trees, but they are seeds of thought.


Acknowledgments for reviewing and feedback on this post: Amber Dawn, Ben Hoskin, Chris Leong, Dr. Inga Grossmann, Elika Somani, Fabian Chandler, Jeff Bergen, Keller Scholl, Robert Colvin, Sasha Cooper, Siméon Campos, plex

16 comments

Comments sorted by top scores.

comment by ofer · 2022-06-02T06:10:02.857Z · EA(p) · GW(p)

Hi Dony!

In a section titled "Other less important details", after a sentence saying "It’s not a requirement to read all of the following, […]" there is the following sentence:

The certificate description justifies the value of the impact as defined by the latest version of the Attributed Impact definition (currently 0.2) [EA(p) · GW(p)].

Other than that sentence, the OP does not convey that the retro funders will consider the ex-ante EV of a post (and won't attribute to the post a higher EV than that, even if the post ends up being extremely beneficial). Instead, the OP lets readers get the idea that retro funders make their decisions based on the ex-post EV alone:

Our overall goal: make the nonprofit space work more like for-profits. Pay for outcomes, not for promises.

Which is reinforced by the first example in the OP:

Reto loves the outcome of the project and offers to buy shares in it at a price that Alice and Ivan are happy with.

I think this post risks creating a basin of attraction around the belief that future retro funders will simply buy impact that they like without considering the ex-ante EV. This will make it more likely that impact markets will end up incentivizing net-negative projects (that have a chance of ending up being beneficial), due to the "distribution mismatch" [EA · GW] problem (which is explained in the Toward Impact Markets post by Denis, that is repeatedly linked to from the OP).

Also, if you go through with this contest, I recommend banning from it posts about bio-risk and AI (people can't perfectly predict what posts will be judged by retro funders as "non-risky").

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T08:58:05.089Z · EA(p) · GW(p)

We’ve gone through countless iterations of this announcement post, usually taking the shape of one of us drafting something, then wondering whether it’s too complicated and will cause people to tune out and ignore the contest, and then trying to greatly shorten and simplify it.

There’s a difficult trade-off between the high-fidelity communication of our long explainer posts and the concision that is necessary to get people to actually read a post when it comes to participating in a contest. Our explainer posts get very little engagement. To participate in the contest it’s not necessary to understand exactly how our mechanisms work, so we hope to reach more people by explaining things in simpler terms without words like “ex ante” and comparisons to constructed counterfactual world histories.

Like, grocery shopping would be a terrible experience if every customer had to understand all the scheduling around harvest, stocks and flows between warehouses, just-in-time delivery, pricing in of some expected number of produce that expire before they’re bought, etc. If anyone who wants to use impact markets has to spend more time up front to learn more about them than the markets are worth to them, that’d be a failure.

This is exacerbated in this case where a submitter has a < 100% chance to get a reward of a few hundred dollars. That comes down to quite little money in expectation, so we’ve been trying hard to make the experience as light on the time commitment as possible while linking our full explainer posts at every turn to make sure that people cannot miss the high-fidelity version if they’re looking for it. Once we have bigger budgets, we can also ask people to engage more upfront with our processes.

That said, we’ve thought a lot about the bolded key sentence “morally good, positive-sum, and non-risky.” We hope that everyone who submits will read it. By “non-risky” we mean “ex ante non-risky.” We hoped that the term captured that as it’s not common to talk about “risks” ex post. Even in sentences like “the Cuban Missile Crisis was risky,” the sentence doesn’t say that the event is a risk for us today after the fact but that, at the time when it was happening, it was risky.

But I’ll ask Dony to go over the post again and see if we can clarify this in a place where it doesn’t cause more confusion than it resolves. Maybe my bolded text below can be inserted below the first sentence that you cited.

For now, let me reiterate for every potential submitter reading this:

We will value impact according to Attributed Impact in its latest version at the time, so if writing your post would’ve been net negative in expectation before you wrote it (ex ante), it cannot be valued positively at any later time! The ex ante expected value is the ceiling of any potential future valuation of the impact, regardless how great it happens to turn out.
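In code terms, the rule above amounts to a cap. This is a toy reading, not the official Attributed Impact definition, and where the actual valuation lands below the cap is up to the funder:

```python
def max_valuation(ex_ante_ev: float, ex_post_value: float) -> float:
    """Toy reading of the rule above: a net-negative ex ante expected value
    forces a zero valuation, and otherwise the ex ante EV caps any later
    valuation, no matter how well the project happens to turn out."""
    if ex_ante_ev <= 0:
        return 0.0
    return min(ex_ante_ev, ex_post_value)

print(max_valuation(-10.0, 1_000_000.0))  # risky ex ante: 0.0, however great the outcome
print(max_valuation(100.0, 1_000_000.0))  # ex ante EV of 100 caps the valuation at 100.0
```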

Every submitter also has to answer questions like “What positive impact did you expect before you started the project? What were unusually good and unusually bad possible outcomes? (Please avoid hindsight bias and take the interests of all sentient beings into account.)” before we will buy any of the impact. (I should reword that a bit, maybe, “What positive impact was to be expected …,” to make it fit with Attributed Impact.)


Here is a section (verbatim) that I originally wrote for the post that we cut entirely for length:

Downsides

Most of the problems that impact markets might cause are detailed in Toward Impact Markets [EA(p) · GW(p)].

We are particularly concerned with the following:

  1. Issuers might be incentivized to:
    1. try many candidate interventions, some of which might backfire terribly, but then issue certificates only for the rare interventions that succeeded,
    2. try many candidate interventions, some of which might be terrible for some moral systems, but issue the certificates under different aliases and sell them to different retro funders,
    3. try an intervention many times, usually with disastrous results, but issue an impact certificate only for the rare iteration of the intervention that succeeded,
    4. do something good once but then reframe it slightly to sell the impact from it multiple times to different people on different marketplaces,
    5. compete with other issuers for funding by badmouthing them or withholding resources from them when otherwise they would’ve collaborated,
    6. pander to the perceived preferences of the retro funders even in cases where the issuers have a clearer picture of what is impactful,
    7. generate externalities for individuals that are not themselves represented on the market and who the retro funders are not aware of,
    8. use the markets to issue disguised threats against retro funders.
  2. Investors might be incentivized to:
    1. do little research and just invest large sums into a wide range of projects regardless of whether they’re likely to backfire on the off-chance that (1) one of them actually turns out good or (2) at some point in the future there will be a very rich retro funder that will think that a project turned out good,
    2. invest mostly in things that are highly verifiable to avoid the ambiguity about the purview of certificates that comes with lower levels of verifiability, thereby disadvantaging some interventions for reasons unrelated to their impact,
    3. actively trade certificates to the point of creating a lot of noise that distracts issuers from their object-level work,
    4. do 1.e. and 1.f. above.
  3. Retro funders might:
    1. get scammed by some of the above tricks,
    2. abuse their power by incentivizing projects that are disastrous for some moral systems.

We are optimistic that most of these are solvable in a mature impact market. We don’t have a fully general mechanism but a range of incremental ones. Most of them can be summarized as an attempt to facilitate moral trade on a financial market:

  1. Issuers:
    1. commit to and justify their actions according to an operationalization of impact called Attributed Impact [EA · GW] according to which an action that is net negative in ex ante expectation can never be positive in value even if it so happens to turn out well,
    2. can sell only impact in classes of actions that are very unlikely to be extremely harmful, namely articles on the EA Forum (and at a later stage maybe other similar artifacts),
    3. can sell only impact in classes of actions that have passed multiple rounds of vetting – for example in this case because the moderators of the EA Forum allowed the post and because we allowed its certificate to be issued on our platform,
    4. can, conversely, sell impact from exposés of other certificates where the issuers cheated in some fashion to hide negative externalities actual or probabilistic,
    5. can, conversely, sell impact from articles that change the evaluation of the impact of other certificates,
    6. can, conversely, sell impact from articles detailing new problems of or attack vectors against impact markets.
  2. Investors:
    1. are incentivized by retro funders just enough that those who add information to the market by making good predictions are profitable.
  3. Retro funders:
    1. should commit to Attributed Impact to push issuers and investors to commit to Attributed Impact too, thereby averting negative externalities and threats,
    2. have the option to delegate the decision-making or the prefiltering of funding opportunities to us,
    3. have the option to pivot entirely to retro funding, which should free up so much staff time that they can build expertise in recognizing exploits,
    4. have at some point the support of “the pot,” an investment mechanism that acts as a semi-automated retro funder and reinforces the Schelling point of Attributed Impact.

The remaining problems are mostly related to (1) imperfections in the implementation of these solutions and (2) flaws in retro funder alignment. If a really generous retro funder who is unconcerned with moral cooperation or cheating joins the market with enough capital to spend, and impact investors are ready to stay invested in countless projects for decades until that unaligned funder arrives, then that retro funder can have a bad influence on the market even before joining, merely because people expect them to join.

We don’t think that there is a mechanism that can prevent this from happening because anyone is already free to retroactively reward whoever they like. But we recognize that by writing about impact markets and by running contests like these, we’re making the option more salient.

We want to hit the right balance between minimizing the opportunity costs from delaying the implementation of impact markets and minimizing the direct costs from harm that impact markets might cause. There are those who think that we have an “extreme focus on risks” and those who think that we’re rash for wanting to realize impact markets at all. We would love to get your opinion on where we stand on this balance and how we can improve!

Replies from: ofer
comment by ofer · 2022-06-02T13:21:51.308Z · EA(p) · GW(p)

There’s a difficult trade-off between the high-fidelity communication of our long explainer posts and the concision that is necessary to get people to actually read a post when it comes to participating in a contest. Our explainer posts get very little engagement. To participate in the contest it’s not necessary to understand exactly how our mechanisms work, so we hope to reach more people by explaining things in simpler terms without words like “ex ante” and comparisons to constructed counterfactual world histories.

After this contest, it will still be the case that most people will be more likely to read and use instructions that are short and simple. It may be very hard to later "fix" the influence that posts like the OP have on potential future retro funders. Simpler instructions are more prone to become a meme. Therefore, retro funders may predict that some (most?) future retro funders will use the simple "buy likable impact" rule rather than the "adhere to the safety solutions in the Toward Impact Markets post" rule, and thus be incentivized to follow the simple rule themselves. Posts like the OP risk pushing everyone towards the Schelling point of "retro funders buy likable impact". (All this becomes more worrisome if you or someone else in EA ends up launching a decentralized impact market).

Regarding the claim that "articles on the EA Forum" are "very unlikely to be extremely harmful": EA Forum posts can disseminate info hazards that can be extremely harmful. (And this does not seem very unlikely, considering that the ideas that are discussed on the EA Forum are often related to anthropogenic x-risks.)

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T15:13:16.688Z · EA(p) · GW(p)

Hmm, I love writing high-fidelity content. Just thinking, “how can I express what I mean as clearly as I can” rather than “how can I simplify what I mean to maximize the fidelity/complexity ratio” is a lot easier for me. But a lot of smart people disagree, and point out that shallow heuristics and layered didactic approaches are essential to bridge inferential gaps under time constraints.

So I would like to pose the question to anyone else reading this: If you read “Toward Impact Markets” and you read the above post, do you think we should’ve gone for the same level of fidelity above? Or not? Or something in between?

EA Forum posts can disseminate info hazards that can be extremely harmful. (And this does not seem very unlikely, considering that the ideas that are discussed on the EA Forum are often related to anthropogenic x-risks.)

Excluding whole categories of usually valuable content from contests, though, seems like a very uncommon level of caution. I’m not saying that I *know* that it’s exaggerated caution, but there have been many prize contests for content on the EA Forum, and none of them were so concerned about info hazards. Some of them have had bigger prize pools too. And in addition the EA Forum is moderated, and the moderators probably have a protocol for how to respond to info hazards.

I’ve long pushed for something like the “EA Criticism and Red Teaming [EA · GW]” contest (though I usually had more specific spins on the idea in mind), I’m delighted it exists, and I think it’ll be good. But it is a lot more risky than ours. It has a greater prize pool, the most important red-teaming should focus on topics that are important to EA at the moment, so “longtermism” (i.e. “how do we survive the next 20 years”) topics like biosecurity and AI safety, and the whole notion of red-teaming is conceptually close to info hazards too. (E.g., some people claim that some others invoke “info hazard” as a way to silence epistemic threats to their power. I mostly disagree, but my point is about how close the concepts are to each other.)

The original EA Forum Prize referred readers to the About page at the time (note that they, too, opted to put the details on a separate linked page), which explicitly discourages info hazards, rudeness, illegal activities, etc., but spends about a dozen words on fleshing this out precisely as opposed to our 10k+ words. Of course if you can communicate the same thing in a dozen and in 10k+ words, then a dozen is better, but if you think that “non-risky” is not clear about whether it refers to actions that are risky while they’re being performed or only to actions whose results remain risky indefinitely, then “What we discourage (and may delete) … Information hazards that concern us” is also unclear like that. Maybe someone is aware of an info hazard so dangerous that the moment they post it they can see from their own state of existence or nonexistence whether they got lucky or not. I think that both framings clearly discourage such sharing, but regardless, the contests are parallel in this regard. (Or, if anything, ours is safer because we are very, very explicit about the ex ante ceiling in our detailed explainer, with definitions, examples, diagrams, etc.)

But I don’t want to just throw this out there as an argument from authority, “If the EA Forum gods do it, it got to be okay.” It’s just that there is a precedent (over the course of four years or so) for lower levels of caution than ours and nothing terrible happening. That is valuable information for us when we try to make our own trade-off between risks and opportunity costs. (But of course all the badness can be contained in one Black Swan event that is yet to come, so there’s no certainty.)

Replies from: ofer
comment by ofer · 2022-06-02T16:36:11.905Z · EA(p) · GW(p)

The original EA Forum Prize does not seem to have had the distribution mismatch problem; the posts were presumably evaluated based on their ex-ante EV (or something like that?).

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T16:53:46.014Z · EA(p) · GW(p)

I don’t know if they were, so either way it was probably also not obvious to some post authors that they’d be judged by ex ante EV, and it’s enough for one of them to only think that they’ll be judged by ex post value to run into the distribution mismatch.

At least to the same extent – whatever it may be – as our contest. Expectational consequentialism seems to me like the norm, though that may be just my bubble, so I would judge both contests to be benign and net positive because I would expect most people to not want to gamble with everyone’s lives, to not think that a contest tries to encourage them to gamble with everyone’s lives, and to not want to just disguise their gamble from the prize committee.

Replies from: ofer
comment by ofer · 2022-06-02T17:29:38.350Z · EA(p) · GW(p)

In the original EA Forum Prize, the ex-post EV at the time of evaluation is usually similar to the ex-ante EV assuming that the evaluation happens closely after the post was written. (In a naive impact market, the price of a certificate can be high due to the chance that 3 years from now its ex-post EV will be extremely high.)

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T19:14:32.641Z · EA(p) · GW(p)

So you’re saying it’s fine for them not to make the distinction because they’re so quick that it hardly matters, but that it’s important for us? That makes sense. I suppose that circles back to my earlier comment that I think that our wording is pretty clear about the ex ante nature of the riskiness, but that we can make it even more clear by inserting a few more sentences into the post that make the ex ante part very explicit. 

comment by ofer · 2022-06-02T06:56:16.770Z · EA(p) · GW(p)

We do not plan to resell or consume/open the impact of the posts in the short term but reserve the right to do so in the future.

If you end up reselling impact that you've purchased with a grant from the Future Fund Regranting Program, where does the money go?

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T15:30:16.814Z · EA(p) · GW(p)

We didn’t think about this because we’re not planning it at all. But we’re in the process of forming a public benefit corporation. Our benefit statement is “Increase contributions to public and common goods by developing and deploying innovative market mechanisms.” The PBC will be the one making the purchases, so if we ever sell the certs again, the returns will flow back to the PBC account and will be used in line with the benefit statement.

That’s sort of like when a grant recipient buys furniture for an office but then, a few years later, moves to a group office with existing furniture and sells their own (now redundant) furniture on eBay. Those funds then also flow back to the account of the grant recipient unless they have some nonstandard agreements around their furniture.

But of course we can run this by FTX if it ever becomes an action-relevant question. 

Replies from: ofer
comment by ofer · 2022-06-02T16:31:21.594Z · EA(p) · GW(p)

Thanks for the info!

If the shareholders of the public benefit corporation will be able to receive dividends, I think there's a conflict of interest problem with this setup. The Impact Markets team will probably need to make high-stakes decisions under great uncertainty. (E.g. should an impact market be launched? Should the impact market be decentralized? Should a certain person be invited to serve as a retro funder? How to navigate the tradeoff between explaining the safety rules thoroughly and writing more engaging posts that are more conducive to gaining traction?) It's a big conflict of interest problem if the decision makers can end up making a lot of money via a (future) impact market due to making certain decisions.

Therefore, I think it's better to commit to "consume/open" (i.e. never sell) the certificates that you purchase with the grant.

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2022-06-02T17:08:36.808Z · EA(p) · GW(p)

I can see the appeal in the commitment to consumption. We might just do that if it inspires trust in the market. Then again it sends a weird signal if not even we want to use our own system to sustain our operation. “Dogfooding” would also allow us to experience the system from the user side and notice problems with it even when no one reports them to us.

Also people are routinely trusted not to make callous decisions even if it’d be to their benefit. For example, charities are trusted to make themselves obsolete if at all possible. The existence of the Against Malaria Foundation hinges on there being malaria. Yet we trust them to do their best to eliminate malaria.

Charities often receive exploratory grants to allow them to run RCTs and such. They’re still trusted to conduct a high-quality RCT and not manipulate the results even though their own jobs and the ex post value of years of their work hinge on the results. 

I myself used to run a charity that was very dear to me, but when we became convinced that the program was suboptimal and found that we couldn’t change the bylaws of the association to accommodate a better program, we shut it down.

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-06-02T22:08:37.265Z · EA(p) · GW(p)

I think the signalling benefit from providing ultimate consumers is more important than failing to signal there are speculators. I think speculators are logically downstream of consumers, and impact markets are bottlenecked by lack of clarity about whether there will be consumers.

(I'm also quite unclear whether we've worked out enough of the fundamentals of how to avoid bad incentives that it's good to establish trust in impact markets ... I guess you'd want to handle this by saying that people shouldn't buy impact for any work trying to establish them at the moment, since it's ex ante risky?)

Replies from: DonyChristie, Telofy
comment by DonyChristie · 2022-06-02T23:37:28.099Z · EA(p) · GW(p)

Quick comment here - thanks for chipping in! 

I guess you'd want to handle this by saying that people shouldn't buy impact for any work trying to establish them at the moment, since it's ex ante risky?

I personally agree with the general gist of this (something like not selling the impact of work on impact markets in the short term, probably years or decades, maybe forever). I was going to state my own long-term intention along these lines at some point when I got around to responding to one of Ofer’s comments; the way you put it further solidifies my sense that this would probably be prudent. I have more to say but will bow out for now due to personal needs. I’d prefer to have these discussions in a space dedicated to curiously examining downsides, which I will make a separate post for.

comment by Dawn Drescher (Telofy) · 2022-06-02T23:16:11.800Z · EA(p) · GW(p)

I think the signalling benefit from providing ultimate consumers is more important than failing to signal there are speculators. I think speculators are logically downstream of consumers, and impact markets are bottlenecked by lack of clarity about whether there will be consumers.

That sounds sensible to me. Two considerations that push a bit against in my mind are:

  1. I would have to make a binding commitment to a particular consumption schedule, and that burns option value. So if trust in the consumption is the bottleneck and if it fluctuates, I would like to retain the option to increase the consumption rate when trust drops. It feels like it’s a bit too early to think about the mechanics here since the market is still so illiquid that we can’t easily measure such fluctuations in the first place.
  2. A source of trust in impact markets could also stem from particular long-term commitments such as windfall clauses [EA · GW]. In that case the consumption schedule would have to be tuned so that the windfall funder can still buy and consume the certificates, and it’s usually unclear when (or whether) the windfall will happen. So maybe the consumption schedule should always be something asymptotic, along the lines of consuming half the remaining certificates by some date, then half of the remainder again, and so on.
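To make the asymptotic halving schedule concrete, here is a minimal sketch. The function name and the per-period halving rate are hypothetical illustrations, not a committed policy:

```python
def remaining_fraction(periods_elapsed: int) -> float:
    """Fraction of certificates still unconsumed under a schedule that
    consumes half of whatever remains in each period.

    Because the remainder only ever halves, some certificates are always
    left for a late windfall funder to buy and consume.
    (Hypothetical illustration; the actual rate and period are open.)
    """
    return 0.5 ** periods_elapsed


# After 3 periods, 1/8 of the certificates remain unconsumed.
print(remaining_fraction(3))  # 0.125
```

Under such a schedule the consumed share approaches 100% asymptotically without ever reaching it, which is the property that keeps the windfall-clause option open.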

I guess you'd want to handle this by saying that people shouldn't buy impact for any work trying to establish them at the moment, since it's ex ante risky?

Hmm, I don’t understand this? Can you clarify what you’re referring to?

Our strategy mostly rests on Attributed Impact, an operationalization of how we value the impact of an action that someone performs. (This is a short summary.)

Its key features include that it addresses moral trade (including the distribution mismatch problem) by making sure that the impact of actions that are negative in ex ante expectation is worthless regardless of how great they turn out or how great they are for some group of moral patients. (In fact it uses the minimum of ex ante and current expected value, so it can go negative, but we don’t have a legal handle on issuers to make them pay up unless we can push for mandatory insurance or staking.)
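The valuation rule described above (taking the minimum of ex ante and current expected value) can be sketched in a few lines. This is a simplified, hypothetical rendering of Attributed Impact, not the full operationalization:

```python
def attributed_impact(ex_ante_ev: float, current_ev: float) -> float:
    """Value an action as the minimum of its ex ante expected value
    (estimated before the action was taken) and its current expected value.

    An action that was a net-negative bet ex ante is capped at that
    negative ex ante EV no matter how well it turned out, so the result
    can go below zero. (Simplified sketch of the Attributed Impact rule.)
    """
    return min(ex_ante_ev, current_ev)


# A risky gamble that happened to pay off is still valued at its ex ante EV.
print(attributed_impact(-5.0, 100.0))  # -5.0
# A cautious action that turned out well is valued at its ex ante estimate.
print(attributed_impact(10.0, 50.0))   # 10.0
```

The cap is what removes the incentive to gamble: a lucky outcome cannot raise a certificate’s value above what the action was worth in expectation when it was taken.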

Another key feature is that it requires issuers to justify that their certificate has positive Attributed Impact. It also has a feature that is meant to prevent threats against retro funders. Those, in combination with our commitment to buy according to Attributed Impact, will, or so we hope, start a feedback cycle: issuers are vocal about Attributed Impact to sell their certs to the retro funders, prospective investors scrutinize the certs to see whether we will be happy with the justification, and generally everyone uses it by default, just as people now use the keyword “longtermism” to appeal to funders. (Just kidding.) That will hopefully make Attributed Impact the de facto standard for valuing impact, so that even less aligned new retro funders will find it easier to go along with the existing norms than to try to change them, especially since they probably also appreciate the antithreat feature.

But we have a few more lines of defense against Attributed Impact drift (as it were), such as “the pot.” It’s currently too early in my view to try to implement them.

I’ve recently been wondering though: Many of these risks apply to all prize contests, not only certificate-based ones. Also anyone out there, any unaligned millionaire, is free to announce big prizes for things we would disapprove of. Our goal has so far been to build an ecosystem that is so hard to abuse that these unaligned millionaires will choose to stay away and do their prize contests elsewhere. But that only shifts around where the bad stuff happens.

Perhaps there are even mechanisms that could attract the unaligned millionaires and ever so slightly improve the outcomes of their prize contests. But I haven’t thought about how that might be achieved.

Conversely, the right to retro funding could be tied to a particular first retro funder to eliminate the risk of other retro funders joining later. But that probably also just shifts where the bad stuff happens, so I’m not convinced.

I’d be curious if you have any thoughts on this!

comment by Jordan Arel · 2022-07-11T05:25:05.375Z · EA(p) · GW(p)

Thank you Dony, Denis, and Matt! I really enjoyed reading this post, and I’m excited about the idea. Looking forward to seeing what posts are submitted!