FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good

post by Cullen_OKeefe · 2020-02-05T23:49:43.443Z · score: 51 (25 votes) · EA · GW · 19 comments

Contents

  What is the Windfall Clause?
  Motivations
    Motivations Specific to Effective Altruism
  Limitations
  Next steps
19 comments

Full Report

Summary for AIES

Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.

What is the Windfall Clause?

The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By “extremely large profits,” or “windfall,” we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By “ex ante,” we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. “Donate” means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.

Motivations

Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered faultlessly unemployed due to advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. It provides AI labs with a credible, tangible mechanism to demonstrate their commitment to pursuing advanced AI for the common global good. Finally, it provides a concrete suggestion that may stimulate other proposals and discussion about how best to mitigate AI-driven disruption.

Motivations Specific to Effective Altruism

Most EA AI resources to date have focused on extinction risks from AI. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as those.

However, a long-term future in which advanced forms of AI, such as artificial general intelligence (AGI) or transformative AI (TAI), arrive but primarily benefit a small portion of humanity is still highly suboptimal. Failure to ensure advanced AI benefits all could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit the shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.

Limitations

There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause, and the way in which it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion about the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it initially does not require governmental intervention, we acknowledge and are thoroughly supportive of public solutions.

Next steps

We hope to contribute an ambitious and novel policy proposal to an already rich discussion on this subject. More important than this policy itself, though, we look forward to continuously contributing to a broader conversation on the economic promises and challenges of AI, and how to ensure AI benefits humanity as a whole. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.

19 comments

Comments sorted by top scores.

comment by Larks · 2020-02-15T03:25:08.787Z · score: 20 (6 votes) · EA(p) · GW(p)

Thanks very much for sharing this. It is nice to see some innovative thinking around AI governance.

I have a bunch of different thoughts, so I'll break them over multiple comments. This one mainly concerns the incentive effects.

> C.2. “The Windfall Clause operates like a progressive corporate income tax, and the ideal corporate income tax rate is 0%.”

> Some commentators argue that the ideal corporate tax rate is 0%. One common argument for this is that corporate income tax is not as progressive as its proponents think because corporate income is ultimately destined for shareholders, some of whom are wealthy, but many of whom are not. Better, then, to tax those wealthy shareholders more directly and let corporate profits flow less impeded to poorer ones. Additionally, current corporate taxes appear to burden both shareholders and, to a lesser extent, workers.

I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley-Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs.)

This is sort of covered in B.1., but I do not think the responses are very persuasive. The main response is rather glib:

> Further, by capping firm obligations at 50% of marginal profits, the Clause leaves room for innovation to be invested in even at incredibly high profit levels.231

There are a lot of desirable investments which would be rendered uneconomic. The fact that some investment will continue at a reduced level does not mean that missing out on the other forgone projects is not a great cost! For example, a 20% pre-tax return on investment for a moderately risky project is highly attractive - but after ~25% corporate taxes and ~50% windfall clause, this is a mere 5% return* - almost definitely below their cost of capital, and hence society will probably miss out on the benefits. Citation 231, which seems like it should be doing most of the work here, instead references a passing comment in a pop-sci book about individual taxes:

> There's also an argument that a big part of the very high earnings of many 'superstars' are also rents. These questions turn on whether most professional athletes, CEOs, media personalities, or rock stars are genuinely motivated by the absolute level of their compensation versus the relative compensation, their fame, or their intrinsic love of their work.

But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.
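Larks's return arithmetic can be made concrete with a toy calculation (following his footnote, this assumes the Clause payment is not tax-deductible, so both levies come out of pre-tax profit):

```python
# Toy calculation: a project's net return under ~25% corporate tax plus a
# 50% Windfall Clause obligation, both applied to pre-tax profit
# (i.e., the Clause payment is assumed not to be tax-deductible).
pretax_return = 0.20                     # 20% pre-tax return on investment
corporate_tax = 0.25 * pretax_return     # ~25% of pre-tax profit
clause_payment = 0.50 * pretax_return    # 50% of pre-tax profit
net_return = pretax_return - corporate_tax - clause_payment
print(f"net return: {net_return:.1%}")   # net return: 5.0%
```

Under these assumptions, a combined ~75% effective marginal rate turns an attractive 20% project into a 5% one, which is the mechanism behind the cost-of-capital worry above.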

I also think the sub-responses are unsatisfying. You mention that the clause will be voluntary:

> Firstly, we expect firms to agree to the Clause only if it is largely in their self-interest

But this does not mean it won't reduce incentives to innovate. Firms can rationally take actions that reduce their future innovation (e.g. selling off an innovative but risky division for a good price). A firm might voluntarily sign up now, when the expected cost is low, but then see their incentives dramatically curtailed later, when the cost is large. Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

The report then goes on to discuss externalities:

> Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax. 'Being very profitable' is not a negative externality, so a tax on profits is not an effective way of minimising negative externalities. Similarly, the Malicious Use paper is mainly about specific bad use cases, rather than size qua size being undesirable. Moreover, size has little to do with Seb's argument, which is about estimating the costs of specific research proposals when applying for grants.

> Finally, one must consider that under windfall scenarios the gains from innovation are already substantial, suggesting that globally it is more important to focus on distribution of gains than incentivizing additional innovation.

I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical):

> A Windfall Clause could build goodwill among the public, dampening harmful public antagonism for a small (expected) cost. Governments may be less likely to excessively tax or expropriate firms committed to providing a public good through the Windfall Clause.

and

> However, from a public and employee relations perspective, the Clause may be more appealing than taxation because the Clause is a cooperative, proactive, and supererogatory action. So, to the extent that the Windfall Clause merely replaces taxation, the Windfall Clause confers reputational benefits onto the signatory at no additional cost

However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

> the Windfall Clause is not intended to be a substitute for taxation schemes. We also note that, as a private contract, the Windfall Clause cannot supersede taxation. Thus, if a state wants to tax the windfall, the Clause is not intended to stop it. Indeed, taxation efforts that broadly align with the goals and design principles of the Windfall Clause are highly desirable

\* for clarity of exposition I am assuming the donation is not tax deductible, but the point is not dramatically altered if it is.

comment by Cullen_OKeefe · 2020-02-24T19:51:53.833Z · score: 2 (2 votes) · EA(p) · GW(p)

As a blanket note about your next few points, I agree that the WC would disincentivize innovation to some extent. It was not my intention to claim—nor do I think I actually claimed (IIRC)—that it would have no socially undesirable incentive effects on innovation. Rather, the points I was making were more aimed at illuminating possible reasons why this might not be so bad. In general, my position is that the other upsides probably outweigh the (real!) downsides of disincentivizing innovation. Perhaps I should have been more clear about that.

> But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.

Yep, that seems right.

comment by Cullen_OKeefe · 2020-02-24T22:59:31.090Z · score: 1 (1 votes) · EA(p) · GW(p)

> I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

However, it's very hard to get individuals to sign a WC for a huge number of reasons. See:

> The pool of potentially windfall-generating firms is much smaller and more stable than the number of potential windfall-generating individuals, meaning that securing commitments from firms would probably capture more of the potential windfall than securing commitments from individuals. Thus, targeting firms as such seems reasonable.

comment by Cullen_OKeefe · 2020-02-24T22:56:50.102Z · score: 1 (1 votes) · EA(p) · GW(p)

> Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical): ... However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

I think both outcomes are possible. The second point is simply to point out that the WC does not and cannot (as a legal matter) prevent a state from levying taxes on firms. The first two points, by contrast, are a prediction that the WC will make such taxation less likely.

comment by Cullen_OKeefe · 2020-02-24T22:52:25.643Z · score: 1 (1 votes) · EA(p) · GW(p)

> The report then goes on to discuss externalities:

> > Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

> Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax.

I agree that the WC does not target the externalities of AI development maximally efficiently. However, I think that the externalities of such development are probably significantly correlated with windfall-generation. Windfall-generation seems to me to be very likely to accompany a risk of a huge number of negative externalities—such as those cited in the Malicious Use report and classic X-risks.

A good analogy might therefore be to a gas tax for funding road construction/maintenance, which imperfectly targets the thing we actually care about (wear and tear on roads), but is correlated with it so it's a decent policy.

To be clear, I agree that it's not the best way of addressing those externalities, and that the best possible option is to institute a Pigouvian tax (via insurance on them like Farquhar et al. suggest or otherwise).

> 'Being very profitable' is not a negative externality

It is if it leads to inequality, which it seems likely to. Equality is a psychological good, and so a windfall has negative psychological externalities on the "losers."

comment by Cullen_OKeefe · 2020-02-24T20:04:31.113Z · score: 1 (1 votes) · EA(p) · GW(p)

> Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

This same reasoning also shows why firms might seek positional goods. E.g., executives and AI engineers might really care about being the first to develop AGI. Thus, the positional arguments for taxing windfall come back into play to the same extent that this is true.

> Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

This is certainly true. I think we as a community should discuss (as here) what the tradeoffs are. Reduced innovation in AI is a real cost. So too are the harms identified in the WC report and more traditional X-risk harms. We should set the demands on firms such that the costs to innovation are outweighed by benefits from long-run wellbeing.

comment by Cullen_OKeefe · 2020-02-24T19:20:42.813Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks a ton for your substantial engagement with this, Larks. Like you, I might spread my replies out across a few posts to atomize them.

> I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley-Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs.)

Thanks for this. TBQH, I was primarily familiar with the cited concerns as the reasons for opposition to corporate income taxation. In retrospect, I wish I had been able to get better acquainted with the anti-corporate-tax literature you cite. Since I'm not an economist, I was not aware of, and wasn't able to find, some of the sources you cited. I agree that they make good points not adequately addressed by the Report.

For some more recent discussion in favor of capital taxation, see Korinek (2019). Admittedly, it's not clear how much this supports the WC because it does not necessarily target rents or fixed factors.

comment by Larks · 2020-02-16T23:24:46.670Z · score: 4 (3 votes) · EA(p) · GW(p)

> B.2. “The Windfall Clause will shift investment to competitive non-signatory firms.”

> The concern here is that, when multiple firms are competing for windfall profits, a firm bound by the Clause will be at a competitive disadvantage because unbound firms could offer higher returns on new capital. That is, investors would prefer firms that are not subject to a “tax” on their profits in the form of the Windfall Clause. This is especially bad because it could mean that more prosocial firms (i.e., ones that have signed the Clause) would be at a disadvantage to non-signatory firms, making a prosocial “winner” of an AI development race less likely.238

> This is a valid concern which warrants careful consideration. Our current best model for how to address this is that the Clause could commit (or at least allow for the option of) distributions of equity,* instead of cash. This could either take the form of stock options or contingent convertible bonds. This avoids the concern identified by allowing firms to, for example, issue new, preferred shares which would have superior claim to windfall profits compared to donees. This significantly diminishes the concern that the Clause would dilute the value of new shares issued in the company and allows the bound firm to raise capital unencumbered by debt owed under the Clause.† Notably, firm management would still have fiduciary duties towards stock-holding windfall donees.

I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Here are my thoughts on a couple of possible versions of what you might be getting at; apologies if you actually intended something else altogether.

1) The clause will allow the company to make the required payments in stock rather than cash.

Unfortunately this doesn't really make much difference, because it is very easy for companies to alter this balance themselves. Consider that a company which had to make a $1 billion cash payment could fund this by issuing $1 billion worth of stock; conversely a company which had to issue stock to the fund could neutralise the effect on their share count by paying cash to buy back $1 billion worth of ordinary shares. This is the same reason why dividends are essentially identical to share buybacks.
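A stylized sketch of this equivalence (all numbers hypothetical): either payment route can be converted into the other's end state, so the choice of cash versus stock is close to irrelevant.

```python
# Stylized illustration that a $1bn cash payment and a $1bn stock payment
# are interchangeable for the firm (hypothetical share price and amounts).
share_price = 100.0
payment = 1e9

# Route 1: pay the $1bn obligation in cash.
cash_delta_1, share_delta_1 = -payment, 0.0

# Route 2: pay in newly issued stock, then spend $1bn buying back
# ordinary shares to neutralise the dilution.
issued = payment / share_price
bought_back = payment / share_price
cash_delta_2 = -payment                 # buyback cash outflow
share_delta_2 = issued - bought_back    # net change in share count: zero

# Both routes leave the firm with identical cash and share-count changes.
assert (cash_delta_1, share_delta_1) == (cash_delta_2, share_delta_2)
```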

2) The clause will allow subsequent financing to be raised that is senior to the windfall clause claim, and thus still attractive to investors.

'Senior' does not mean 'better' - it simply means that you have priority in the event of bankruptcy. However, the clause is already junior to all other obligations (because a bankrupt firm will be making ~0% of GDP in profit and hence have no clause obligations), so this doesn't really seem like it makes much difference. The issue is dilution in scenarios when the company does well, which is when the most junior claims (typically common equity, but in this case actually the clause) perform best.

The fundamental reason these two approaches will not work is that the value of an investment is determined by the net present value of future cashflows (and their probability distribution). Given that the clause is intended to have a fixed impact on these flows (as laid out in II.A.2), the impact on firm valuation is also rather fixed, and there is relatively little that clever financial engineering can do about it.

3) The clause will have claim only to profits attributable to the existing shares at the time of the signing on. Any subsequent equity will have a claim on profits unencumbered by the clause. For example, if a company with 80 shares signs on to the clause, then issues 10 more shares to the market, the maximum % of profits that would be owed is 50%*80/(80+10) = 44.4%
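The cap arithmetic in this option can be sketched as a toy function (the 50% rate and share counts are taken from the example above):

```python
# Maximum share of profits owed under option 3: the 50% cap applies only to
# the fraction of the firm attributable to shares outstanding at signing.
def max_clause_share(original_shares, new_shares, clause_rate=0.50):
    return clause_rate * original_shares / (original_shares + new_shares)

# 80 shares at signing, 10 issued afterwards:
print(f"{max_clause_share(80, 10):.1%}")  # 44.4%
```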

This would indeed avoid most of the problems in attracting new capital (save only the fear that a management team willing to screw over their previous investors will do so to you in the future, which is something investors think about a lot).

However, it would also largely undermine the clause by being easy to evade due to the fungibility of capital. Consider a new startup, founded by three guys in a basement, that signs the clause. Over the next few years they will raise many rounds of VC, eventually giving up the majority of the company, all excluded from the clause. Additionally, they pay themselves and employees in stock or stock options, which are also exempt from the clause. Eventually they IPO, having successfully diluted the clause-affected shares to ~1%. In order to finish the job, they then issue some additional new equity and use the proceeds to buy back the original shares.
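The dilution path described above can be illustrated with made-up financing rounds (every number here is hypothetical; the point is only the direction of travel):

```python
# Hypothetical dilution of clause-bound founder shares through successive
# financings that, under option 3, are exempt from the Clause.
bound = 100.0   # shares outstanding at signing (clause-bound)
total = 100.0
for new_issue in [150, 300, 600, 1_200, 2_400]:  # VC rounds, SBC, IPO
    total += new_issue  # new shares carry no Clause obligation

print(f"clause-bound fraction: {bound / total:.1%}")  # 2.1%
```

A final issue-and-buy-back of the original shares, as in the scenario above, would then take the clause-bound fraction to zero.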


One interesting point on the other side, however, is the curious tendency for tech investors to ignore dilution. Many companies will exclude stock-based compensation (SBC) from their adjusted earnings, and analysts/investors are often willing to go along with this, saying "oh but it's a non-cash expense". Furthermore, SBC is excluded from Free Cash Flow, which is the preferred metric for many tech investors. So it is possible that (for a while) investors would simply ignore it.

comment by Cullen_OKeefe · 2020-02-24T23:09:32.485Z · score: 1 (1 votes) · EA(p) · GW(p)

> I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Apologies that this was confusing, and thanks for trying to deconfuse it :-)

Subsequent feedback on this (not reflected in the report) is that issuing low-value super-junior equity at the time of signing (and then holding it in trust) is probably the best option for this.

comment by Mati_Roy · 2020-03-27T17:28:50.856Z · score: 2 (2 votes) · EA(p) · GW(p)

I just want to document that this idea was mentioned in the book Superintelligence by Nick Bostrom.

> The ideal form of collaboration for the present may therefore be one that does not initially require specific formalized agreements and that does not expedite advances in machine intelligence. One proposal that fits these criteria is that we propound an appropriate moral norm, expressing our commitment to the idea that superintelligence should be for the common good. Such a norm could be formulated as follows:

> The common good principle: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

> Establishing from an early stage that the immense potential of superintelligence belongs to all of humanity will give more time for such a norm to become entrenched.

> The common good principle does not preclude commercial incentives for individuals or firms active in related areas. For example, a firm might satisfy the call for universal sharing of the benefits of superintelligence by adopting a “windfall clause” to the effect that all profits up to some very high ceiling (say, a trillion dollars annually) would be distributed in the ordinary way to the firm’s shareholders and other legal claimants, and that only profits in excess of the threshold would be distributed to all of humanity evenly (or otherwise according to universal moral criteria). Adopting such a windfall clause should be substantially costless, any given firm being extremely unlikely ever to exceed the stratospheric profit threshold (and such low-probability scenarios ordinarily playing no role in the decisions of the firm’s managers and investors). Yet its widespread adoption would give humankind a valuable guarantee (insofar as the commitments could be trusted) that if ever some private enterprise were to hit the jackpot with the intelligence explosion, everybody would share in most of the benefits. The same idea could be applied to entities other than firms. For example, states could agree that if ever any one state’s GDP exceeds some very high fraction (say, 90%) of world GDP, the overshoot should be distributed evenly to all.

> The common good principle (and particular instantiations, such as windfall clauses) could be adopted initially as a voluntary moral commitment by responsible individuals and organizations that are active in areas related to machine intelligence. Later, it could be endorsed by a wider set of entities and enacted into law and treaty. A vague formulation, such as the one given here, may serve well as a starting point; but it would ultimately need to be sharpened into a set of specific verifiable requirements.

comment by Peter_Hurford · 2020-03-16T11:59:01.999Z · score: 2 (1 votes) · EA(p) · GW(p)

Do you think this could be more effective as legislation rather than corporate policy? That is, could political advocacy be better than corporate campaigning for achieving this?

comment by Cullen_OKeefe · 2020-03-17T19:49:24.931Z · score: 1 (1 votes) · EA(p) · GW(p)

I am fairly confident that corporate policy is better. Corporate policy has a number of advantages:

  • Firms get more of a reputational boost
  • The number of actors you need to persuade is very small
  • Corporate policy is much more flexible
  • EA is probably better equipped to secure corporate policy changes than new legislation/regulation
  • It's easier to make corporate policy permanent

comment by Ramiro · 2020-02-14T05:18:01.854Z · score: 1 (1 votes) · EA(p) · GW(p)

I wonder how such a commitment would actually impact a company's balance sheet.

In the example of Windfall Clause worth $649.34 million (in 2010 dollars), I guess that, according to IAS 37, it would be considered a contingent liability of remote possibility - and so wouldn't even need to be disclosed by the company.

Moreover, due to hyperbolic discounting, it would probably be perceived as much less costly than $650m (and I thought time preferences were evil...).

comment by Cullen_OKeefe · 2020-02-14T20:12:47.705Z · score: 1 (1 votes) · EA(p) · GW(p)

Yep, thinking through the accounting of this would be very important. Unfortunately I'm not an accountant, but I would very much like to see an accountant discuss how to structure this in a way that does not prematurely burden a signatory's books.

comment by Ramiro · 2020-02-14T21:42:45.731Z · score: 1 (1 votes) · EA(p) · GW(p)

(Epistemic status: there must be some flaw, but I can't find it.)

Sure. But let me be clearer: what drew my attention is that, apparently, there seems to be no downside for a company doing this ASAP. My whole point:

First, consider the “simple” example where a signatory company promises to donate 10% of its profits from a revolutionary AI system in 2060, a situation with an estimated probability of about 1%; the present value of this obligation would currently amount to US$650 million (in 2010 dollars). This seems a lot; however, I contend that, given investors’ hyperbolic discounting, they probably wouldn’t be very concerned about it – it’s an unlikely event, to happen in 40 years; moreover, I’ve checked with some accountants, and this obligation would (today) be probably classified as a contingent liability of remote possibility (which, under IAS 37, means it wouldn’t impact the company’s balance sheet – it doesn’t even have to be disclosed in its annual report). So, I doubt such an obligation would negatively impact a company’s market value and profits (in the short-term); actually, as there’s no “bad marketing”, it could very well increase them.

Second (all this previous argument was meant to get here), would it violate some sort of fiduciary duty? Even if it doesn’t affect present investors, it could affect future ones: i.e., supposing the Clause is enforced, can these investors complain? That’s where things get messy to me. If the fiduciary duty assumes a person-affecting conception of duties (as law usually does), I believe it can’t. First, if the Clause were public, any investor that bought company shares after the promise would have done it in full knowledge – and so wouldn’t be allowed to complain; and, if it didn’t affect its market value in 2019, even older investors would have to face the objection “but you could have sold your shares without loss.” Also, given the precise event “this company made this discovery in such-and-such way”, it’s quite likely that the event of the promise figures in the causal chain that made this precise company get this result – it certainly didn’t prevent it! Thus, even future investors wouldn’t be allowed to complain.
There must be some flaw in this reasoning, but I can’t find it.

(Could we convince start-ups to sign this until it becomes trendy?)

comment by Cullen_OKeefe · 2020-02-24T19:05:03.358Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks Ramiro!

First, consider the “simple” example where a signatory company promises to donate 10% of its profits from a revolutionary AI system in 2060, a situation with an estimated probability of about 1%; the present value of this obligation would currently amount to US$650 million (in 2010 dollars). This seems like a lot; however, I contend that, given investors’ hyperbolic discounting, they probably wouldn’t be very concerned about it

Interesting. I don't think it's relevant, from a legal standpoint, that investors might discount hyperbolically rather than exponentially. I assume that a court would apply standard exponential discounting at market rates. But this is a promising psychological and pragmatic fact!
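For concreteness, here is a toy comparison of the two discounting regimes at a 40-year horizon. All numbers (a 5% annual rate, the simple one-parameter hyperbolic form) are my own illustrative assumptions, not figures from the report or this thread:

```python
# Toy sketch: how heavily a 2060 obligation is discounted today under
# standard exponential discounting (what a court would likely apply at
# market rates) versus a simple hyperbolic form from behavioral economics.
# Rates and horizon are illustrative assumptions only.

def exponential_discount(t, r=0.05):
    # Standard compound discounting at annual rate r.
    return 1.0 / (1.0 + r) ** t

def hyperbolic_discount(t, k=0.05):
    # One-parameter hyperbolic form: 1 / (1 + k*t).
    return 1.0 / (1.0 + k * t)

t = 40  # years until the hypothetical 2060 windfall

for name, f in [("exponential", exponential_discount),
                ("hyperbolic", hyperbolic_discount)]:
    print(f"{name:12s} discount factor at t={t}: {f(t):.3f}")
```

Note that the two curves diverge most at long horizons, which is exactly where the Windfall Clause obligation sits; whichever factor investors actually apply psychologically, the legal valuation would rest on the exponential one.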

I’ve checked with some accountants, and this obligation would (today) probably be classified as a contingent liability of remote possibility, which, under IAS 37, means it wouldn’t impact the company’s balance sheet – it doesn’t even have to be disclosed in the annual report. So I doubt such an obligation would negatively impact a company’s market value and profits (in the short term); actually, since there’s no such thing as “bad marketing”, it could very well increase them.

If this is right, this is very helpful indeed :-)

Second (the previous argument was all meant to get here): would it violate some sort of fiduciary duty? Even if it doesn’t affect present investors, it could affect future ones – i.e., supposing the Clause is enforced, can those investors complain? That’s where things get messy for me. If fiduciary duty assumes a person-affecting conception of duties (as law usually does), I believe they can’t. First, if the Clause were public, any investor who bought company shares after the promise would have done so with full knowledge – and so wouldn’t be entitled to complain; and if it didn’t affect the company’s market value in 2019, even older investors would have to face the objection “but you could have sold your shares without loss.” Also, given the precise event “this company made this discovery in such-and-such a way”, it’s quite likely that the promise figures in the causal chain that led this precise company to this result – it certainly didn’t prevent it! Thus, even future investors wouldn’t be entitled to complain.

See § III of the report :-)

comment by Ramiro · 2020-05-16T02:49:50.525Z · score: 1 (1 votes) · EA(p) · GW(p)

I wonder if, in addition to section B.2, the Clause could be framed as a compensation scheme in favor of a firm's shareholders - at least if it were adopted conditional on other firms adopting it (a kind of "good cartel"). Since the ex ante probability of one specific firm A obtaining future windfall profits from an AGI is lower than the probability of any of its present or future competitors doing so (and thereby driving A out of business), it might be in the interest of these firms' shareholders to hedge against each other by committing to a Windfall Clause. (Of course, the problem with this argument is that it would only justify an agreement covering the shareholders of each agreeing firm.)
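The hedging intuition can be made concrete with a toy calculation (the per-firm probability and number of rivals below are assumed purely for illustration): if each of n firms independently has a small chance p of capturing the windfall, the chance that *some* rival of firm A does so quickly dwarfs A's own chance.

```python
# Toy model of the "good cartel" hedging argument: each firm independently
# has probability p of capturing windfall profits. Numbers are illustrative.

def prob_any(n, p):
    # Probability that at least one of n firms achieves the windfall:
    # the complement of all n firms failing.
    return 1.0 - (1.0 - p) ** n

p = 0.01         # assumed per-firm chance of an AGI windfall
rivals = 9       # assumed number of firm A's competitors

print(f"Firm A itself wins:      {p:.2%}")
print(f"Some rival wins instead: {prob_any(rivals, p):.2%}")
```

Under these assumed numbers the rivals' collective chance is roughly eight to nine times firm A's own, which is why A's shareholders might rationally pay (via a Windfall Clause) for a claim on the other firms' upside.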

comment by Cullen_OKeefe · 2020-05-18T22:20:02.640Z · score: 2 (2 votes) · EA(p) · GW(p)

You are not the only person to have expressed interest in such an arrangement :-) Unfortunately I think there might be some antitrust problems with that.

comment by Ramiro · 2020-05-22T18:13:07.785Z · score: 1 (1 votes) · EA(p) · GW(p)

I imagined so; but the idea just kept coming back to me, and since I hadn't seen it explicitly stated, I thought it was worth mentioning.

I think there might be some antitrust problems with that

I agree that, with current legislation, this is likely so.

But let me share a thought: even though we don't have a hedge for when one company succeeds so well that it ends up dominating the whole market (ruining all competitors in the process), we do have compensation schemes (based on specific legislation) for when a company fails, like deposit insurance. The economic literature usually presents deposit insurance as a public good (it decreases the odds of a bank run and so increases macroeconomic stability), but it was only accepted by the industry because it solved a lemons problem. Even today, the "green swan" talk in finance (see section 2) often appeals to the risk of losses in a future global crisis (the Tragedy of the Horizon argument). My impression is that an innovation in financial regulation often starts by convincing banks and institutions that it's in their general self-interest, and only then does it become compulsory, to avoid free-riders.

(So, yeah, if tech companies get together with the excuse of protecting their investors (& everyone else in the process) in case someone dominates the market, that's collusion; if banks do so, it's CSR.)

(Epistemic status regarding the claims on deposit insurance: I should have done a better investigation in economic history, but I lack the time; the argument is consistent, and I did have first-hand experience with the creation of a depositor insurance fund for credit unions - i.e., it didn't mitigate systemic risk, it just addressed depositors' risk aversion.)