Posts

EA and the Possible Decline of the US: Very Rough Thoughts 2021-01-08T07:30:54.679Z
Cullen_OKeefe's Shortform 2020-11-22T06:37:39.812Z
FHI Report: How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents 2020-07-28T18:33:17.256Z
AI Benefits Post 5: Outstanding Questions on Governing Benefits 2020-07-21T16:45:27.763Z
Parallels Between AI Safety by Debate and Evidence Law 2020-07-20T22:52:42.496Z
AI Benefits Post 4: Outstanding Questions on Selecting Benefits 2020-07-14T17:24:50.683Z
Antitrust-Compliant AI Industry Self-Regulation 2020-07-07T20:52:21.472Z
AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits 2020-07-06T18:46:03.433Z
AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good 2020-06-29T16:59:29.859Z
CARES Act Allows Charitable Deduction of 100% of Gross Income in 2020 2020-06-23T23:48:31.231Z
AI Benefits Post 1: Introducing “AI Benefits” 2020-06-22T16:58:20.103Z
Should EA Buy Distribution Rights for Foundational Books? 2020-06-17T05:38:32.723Z
FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good 2020-02-05T23:49:43.443Z
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA 2020-01-11T04:13:33.250Z
Defending Philanthropy Against Democracy 2019-10-06T07:20:45.888Z
Should I give to Our World In Data? 2019-09-10T04:56:41.437Z
Should EA Groups Run Organ Donor Registration Drives? 2019-03-27T16:29:40.261Z
On the (In)Applicability of Corporate Rights Cases to Digital Minds 2019-02-28T06:14:22.176Z
FHI Report: Stable Agreements in Turbulent Times 2019-02-21T17:12:51.085Z
EAs Should Invest All Year, then Give only on Giving Tuesday 2019-01-10T21:17:26.812Z
Which Image Do You Prefer?: a study of visual communication in six African countries 2018-12-03T06:38:40.758Z
Fisher & Syed on Tradable Obligations to Enhance Health 2018-08-12T22:17:20.304Z
Harvard EA's 2018–19 Vision 2018-08-04T22:47:29.289Z
Governmental CBA as an EA Career Step: A Shallow Investigation 2018-07-07T13:31:13.728Z

Comments

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-14T04:00:45.758Z · EA · GW

Thanks David! I guess I was implicitly thinking of scenarios where the decline of the US was not caused by a GCR, since such cases would already qualify for EA prioritization. But I agree that a decline of the US due to a GCR would meet my stated definition of Collapse.

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T07:23:28.869Z · EA · GW

I am pretty confident that's wrong. The disanalogy is that with financial markets, you can presently withdraw money and move it to safer assets or spend it on present consumption.

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T07:22:41.569Z · EA · GW

Yeah, they are definitely quite different, and probably less important from an EA perspective. I just included them for completeness because of the definition of "Collapse" I gave.

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T07:21:56.985Z · EA · GW

Very helpful. Thanks!

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T07:21:39.982Z · EA · GW

Figuring out how to move politics towards the exhausted majority seems interesting. They probably care about stability a lot more than hyper-partisans do.

Comment by cullen_okeefe on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T07:20:23.379Z · EA · GW

Thanks David. Great analysis as usual :-)

I'm not actually sure we disagree on anything. I agree that

if the worst case happens, we're still likely looking at a decades-long process, during which most of the worst effects are mitigated by other countries taking up the slack, and pushing for the US's decline to be minimally disruptive to the world.

I definitely also agree that it behooves EAs to try to avoid myopia. I have tried to do so here but may very well have failed!

In terms of expected disvalue, I would guess that severe and rapid collapses (more like the USSR than France or Spain) are the most important, due to the nuclear insecurity and possible triggering of great-power conflict.

As for cost-competitiveness with other longtermist interventions, securing nuclear weapons against domestic instability actually seems pretty tractable and may be neglected. If so, that suggests to me that it may be approximately as cost-effective as most marginal nuclear security work generally. The only other thing that seems plausibly cost-effective to me now is contingency planning for key longtermist institutions so that their operations are minimally disrupted by a turbulent decline.

Comment by cullen_okeefe on Cullen_OKeefe's Shortform · 2020-12-09T19:39:26.172Z · EA · GW

Although I've seen some people say they feel like EA is in a bit of an intellectual slump right now, I think the number of new, promising EA startups may be higher than ever. I'm thinking of some of the recent Charity Entrepreneurship-incubated programs and some new animal welfare orgs like the Fish Welfare Initiative.

Comment by cullen_okeefe on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T22:44:20.747Z · EA · GW

Thank you!

Comment by cullen_okeefe on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T00:52:58.248Z · EA · GW

What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?

Comment by cullen_okeefe on Cullen_OKeefe's Shortform · 2020-11-22T06:37:40.156Z · EA · GW

The venerable Judge Easterbrook appears to understand harms from unaligned AI. In this excerpt, he invokes a favorite fictional example among the AI risk community:

The situation is this: Customer incurs a debt and does not pay. Creditor hires Bill Collector to dun Customer for the money. Bill Collector puts a machine on the job and repeatedly calls Cell Number, at which Customer had agreed to receive phone calls by giving his number to Creditor. The machine, called a predictive dialer, works autonomously until a human voice comes on the line. If that happens, an employee in Bill Collector's call center will join the call. But Customer no longer subscribes to Cell Number, which has been reassigned to Bystander. A human being who called Cell Number would realize that Customer was no longer the subscriber. But predictive dialers lack human intelligence and, like the buckets enchanted by the Sorcerer's Apprentice, continue until stopped by their true master.

Soppet v. Enhanced Recovery Co., LLC, 679 F.3d 637, 638–39 (7th Cir. 2012).

Comment by cullen_okeefe on Progress Open Thread: October // Student Summit 2020 · 2020-10-21T22:17:49.305Z · EA · GW

My parents recently sold an environmental services company and used a significant chunk of the windfall to endow a family foundation on whose board I sit. I recommended, and the rest of the Board agreed, to include GiveDirectly and AMF in our 2020 grants.

Comment by cullen_okeefe on might targeting malnutrition (not undernourishment!) be an important cause area? · 2020-09-20T18:09:36.633Z · EA · GW

Could you clarify the difference between malnutrition and undernourishment?

Comment by cullen_okeefe on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T05:09:19.772Z · EA · GW

[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)

Comment by cullen_okeefe on Introducing the Legal Priorities Project · 2020-08-30T19:30:15.569Z · EA · GW

Hi! LPP is actually an outgrowth of the Effective Altruism & Law Facebook group. Please join!

Comment by cullen_okeefe on Putting People First in a Culture of Dehumanization · 2020-07-22T21:58:10.460Z · EA · GW

You might like this post: https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html?m=1

Comment by cullen_okeefe on Parallels Between AI Safety by Debate and Evidence Law · 2020-07-22T17:14:54.103Z · EA · GW

Thanks for this very thoughtful comment!

I think it is accurate to say that the rules of evidence have generally aimed for truth-seeking per se. That is their stated goal, and it generally explains the liberal standard for admission (relevance, which is a very low bar and tracks Bayesian epistemology well), the even more liberal standards for discovery, and most of the admissibility exceptions (which are generally explainable by humans' imperfect Bayesianism).
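
To give a rough formalization of why the relevance standard tracks Bayesian epistemology (this framing is my own gloss, not doctrine): under FRE 401, evidence E is relevant to a fact F whenever

P(F | E) ≠ P(F)

that is, whenever the evidence has any tendency to make the fact more or less probable than it would be without the evidence.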

You're definitely right that the legal system as a whole has many goals other than truth-seeking. However, those other goals are generally advanced through other aspects of the justice system. As an example, finality is a goal of the legal system, and is advanced through, among other things, statutes of limitations and repose. Similarly, the "beyond reasonable doubt" standard for criminal conviction is in some sense contrary to truth-seeking but advances the policy preference for underpunishment over overpunishment.

You're also right that there are some exceptions to this within evidence law itself, but not many. For example, the attorney–client privilege exists not to facilitate truth-seeking, but to protect the attorney–client relationship. Similarly, the spousal privileges exist to protect the marital relationship. (Precisely because such privileges are contrary to truth-seeking, they are interpreted narrowly. See, e.g., United States v. Aramony, 88 F.3d 1369, 1389 (4th Cir. 1996); United States v. Suarez, 820 F.2d 1158, 1160 (11th Cir. 1987)). And of course, some rules of evidence have both truth-seeking and other policy rationales. Still, on the whole and in general, the rules of evidence are aimed towards truth.

Comment by cullen_okeefe on What skill-building activities have helped your personal and professional development? · 2020-07-21T16:43:10.226Z · EA · GW

Learning the Getting Things Done productivity method improved my personal project management skills.

Comment by cullen_okeefe on High stakes instrumentalism and billionaire philanthropy · 2020-07-20T23:07:43.922Z · EA · GW

Good post! Thanks for linking to my work. I also agree with Larks that it's nice to have academic political theory brought in here.

Comment by cullen_okeefe on How to massively increase your donations, for free · 2020-07-10T01:54:55.524Z · EA · GW

Relevant forum post: https://forum.effectivealtruism.org/posts/MAod5gvcQgdxaXdWA/long-term-donation-bunching

Comment by cullen_okeefe on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:40:58.145Z · EA · GW

India v. Pakistan seems very important as well.

Comment by cullen_okeefe on Long-term Donation Bunching? · 2020-06-23T23:50:42.101Z · EA · GW

Relevant for 2020: Due to the CARES Act, individuals can deduct cash charitable contributions of up to 100% of their AGI this year (up from the usual 60% limit).

Comment by cullen_okeefe on Investing to Give Beginner Advice? · 2020-06-23T23:50:01.836Z · EA · GW

It's actually 100% for 2020 due to the CARES Act!

Comment by cullen_okeefe on AI Benefits Post 1: Introducing “AI Benefits” · 2020-06-22T21:26:16.296Z · EA · GW

Thanks! You are correct. Updated to clarify that this is meant to be "the subset of AI Benefits on which I am focusing"—i.e., nonmarket benefits.

Comment by cullen_okeefe on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T20:09:37.011Z · EA · GW
  1. suffering that is "meaningful" (such as mourning)

This might be a specific instance of

3*) Suffering that is a natural result of healthy/normal/inevitable/desirable emotional reactions

Comment by cullen_okeefe on How to Fix Private Prisons and Immigration · 2020-06-19T18:47:51.075Z · EA · GW

One such goal might be to maximize the total societal contribution of any given set of inmates within the limits of the law (limits such as “Don’t restrict the freedom of inmates after their release”).

Of course, prisons also serve an important deterrent function, which is not well-addressed by this model.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T17:47:31.106Z · EA · GW

Thank you for this datapoint!

It’s important to note, however, that there would likely be a ton of variation for different books. This would likely depend on what the publisher paid the author in advance and how many books they've sold / how much money they've made back.

Presumably most of that is sunk cost, and what the publisher ought to care about is discounted expected cash flows from the book.
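
In discounted-cash-flow terms (a standard valuation sketch, not anything specific to this thread or to publishing), the value of the rights today is roughly

V = Σ_t E[CF_t] / (1 + r)^t

where CF_t is the cash flow from the book in year t and r is the publisher's discount rate. The advance already paid appears nowhere in this formula, which is the sense in which it is sunk.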

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T19:02:16.894Z · EA · GW

Also, are you able to disclose the cost of buying those rights?

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T19:01:36.115Z · EA · GW

This is very helpful data; thank you!

To your knowledge, has Singer ever considered doing the same for any of his other books?

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:56:28.963Z · EA · GW

I wonder if there have been any "cost per conversion" estimates for the Gideons.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:10:23.862Z · EA · GW

The 80,000 Hours career guide is available for free with an email signup. They might be in a good position to know, though that does not address the counterfactual. I agree TLYCS is better for that :-)

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:04:44.701Z · EA · GW

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple minutes to send someone a book via Amazon — and seems scalable. This should be easy enough for a donor or EA org to try.

This is a good idea as well, though it could have the downside of preventing some of the more creative uses of community-owned digital distribution such as aiding translation and making excerpting easier. I think something closer to a Creative Commons license for digital versions would be best (though the publisher might not agree to that).

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T12:54:34.956Z · EA · GW

Ah yes, I forgot that we already did this for TLYCS. Would be good to see a retrospective on this :-)

The EA Meta Fund gave $10,000 for this, which seems very worthwhile. Of course, this may not be the full cost, and this also covered some other things. I like that they included free audiobooks; we should probably do that too if we pursue this.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T05:58:23.703Z · EA · GW

It also occurs to me that doing so would aid translation and therefore entry into new markets.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-05-18T22:20:02.640Z · EA · GW

You are not the only person to have expressed interest in such an arrangement :-) Unfortunately I think there might be some antitrust problems with that.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-03-17T19:49:24.931Z · EA · GW

I am fairly confident that corporate policy is better. Corporate policy has a number of advantages:

  • Firms get more of a reputational boost
  • The number of actors you need to persuade is very small
  • Corporate policy is much more flexible
  • EA is probably better equipped to secure corporate policy changes than new legislation/regulation
  • It's easier to make corporate policy permanent

Comment by cullen_okeefe on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-15T04:29:22.615Z · EA · GW

To my understanding, China produce these masks so massively that they can afford selling them to whole population. But, let's say, in US, we have the opposite situation.

Then shouldn't we just buy them from China?

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-03T19:03:00.221Z · EA · GW

Very helpful; thanks!

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T20:34:41.847Z · EA · GW

Thanks! Here's the quote:

Harvard epidemiologist Marc Lipsitch estimates that 40 to 70 percent of the human population could potentially be infected by the virus if it becomes pandemic. Not all of those people would get sick, he noted.

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T18:49:17.289Z · EA · GW

COVID-19 may infect 40-70 percent of the world's population.

What is your source for this? This seems way too high given that even in Hubei (population: 58.5 million), only about 1.1 in 1,000 people (total: 67,103) had confirmed cases.
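
A quick check of the arithmetic behind that figure:

67,103 / 58,500,000 ≈ 0.00115

i.e., about 1.1 confirmed cases per 1,000 residents.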

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:30:10.403Z · EA · GW

The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:28:07.617Z · EA · GW

Certainly you still need legal accountability -- why wouldn't we have that? If we solve alignment, then we can just have the AI's owner be accountable for any law-breaking actions the AI takes.

I agree that that is a very good and desirable step to take. However, as I said, it also incentivizes the AI agent to obfuscate its actions and intentions to protect its principal. In the human context, human agents do this too, but they are independently disincentivized from breaking the law because they face legal liability for their own actions. I want (and I suspect you also want) AI systems to face a similar disincentive.

If I understand correctly, you identify two ways to do this in the teenager analogy:

  1. Rewiring
  2. Explaining laws and their consequences and letting the agent's existing incentives do the rest.

I could be wrong about this, but ultimately, for AI systems, it seems like both are actually similarly difficult. As you've said, for option 2 to be most effective, you probably need "AI police." Those police will need a way of interpreting the legality of an AI agent's {"mental" state; actions} and mapping them onto existing laws.

But if you need to do that for effective enforcement, I don't see why (from a societal perspective) we shouldn't just do that on the actor's side and not the "police's" side. Baking the enforcement into the agents has the benefits of:

  1. Not incentivizing an arms race
  2. Giving enforcers a clearer picture of the AI's "mental state"

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:34:16.779Z · EA · GW

But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).

My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.

I'm not sure what you're trying to imply with this -- does this make the AIs task easier? Harder? The generality somehow implies that the AI is safer?

Sorry. I was responding to the "all laws" point. My point was that making a law-following AI that can follow (A) all enumerated laws is not much harder than making one that can follow (B) any given law. That is, the difficulty of construction scales sub-linearly with the number of laws the AI needs to follow. The interpretative tools needed to get to (B) should be pretty generalizable to (A).

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:13:21.893Z · EA · GW

First, it would be hard to do. I am a programmer / ML researcher and I have no idea how to program an AI to follow the law in some guaranteed way. I also have an intuitive sense that it would be very difficult. I think the vast majority of programmers / ML researchers would agree with me on this.

This is valuable information. However, some ML people I have talked about this with have given positive feedback, so I think you might be overestimating the difficulty.

Second, it doesn't provide much value, because you can get most of the benefits via enforcement, which has the virtue of being the solution we currently use.

Part of the reason that enforcement works, though, is that human agents have an independent incentive not to break the law (or, e.g., to report legal violations), since they are legally accountable for their actions.

But AI-enabled police would be able to probe actions, infer motives, and detect bad behavior better than humans could. In addition, AI systems could have fewer rights than humans, and could be designed to be more transparent than humans, making the police's job easier.

This seems to require the same type of fundamental ML research that I am proposing: mapping AI actions onto laws.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T23:09:32.485Z · EA · GW

I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Apologies that this was confusing, and thanks for trying to deconfuse it :-)

Subsequent feedback on this (not reflected in the report) is that issuing low-value super-junior equity at the time of signing (and then holding it in trust) is probably the best option for this.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:59:31.090Z · EA · GW

I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

However, it's very hard to get individuals to sign a WC, for a huge number of reasons. See:

The pool of potentially windfall-generating firms is much smaller and more stable than the number of potential windfall-generating individuals, meaning that securing commitments from firms would probably capture more of the potential windfall than securing commitments from individuals. Thus, targeting firms as such seems reasonable.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:56:50.102Z · EA · GW

Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical): ... However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

I think both outcomes are possible. The second point is simply to point out that the WC does not and cannot (as a legal matter) prevent a state from levying taxes on firms. The first two points, by contrast, are a prediction that the WC will make such taxation less likely.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:52:25.643Z · EA · GW

The report then goes on to discuss externalities:

Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax.

I agree that the WC does not target the externalities of AI development maximally efficiently. However, I think that the externalities of such development are probably significantly correlated with windfall-generation. Windfall-generation seems very likely to be accompanied by a risk of a huge number of negative externalities, such as those cited in the Malicious Use report and classic X-risks.

A good analogy might therefore be to a gas tax for funding road construction/maintenance, which imperfectly targets the thing we actually care about (wear and tear on roads) but is correlated with it, so it's a decent policy.

To be clear, I agree that it's not the best way of addressing those externalities, and that the best possible option is to institute a Pigouvian tax (via insurance against them, as Farquhar et al. suggest, or otherwise).

'Being very profitable' is not a negative externality

It is if it leads to inequality, which it seems likely to. Equality is a psychological good, and so a windfall has negative psychological externalities on the "losers."

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T20:04:31.113Z · EA · GW

Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

This same reasoning also shows why firms might seek positional goods. E.g., executives and AI engineers might really care about being the first to develop AGI. Thus, the positional arguments for taxing windfall come back into play to the same extent that this is true.

Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

This is certainly true. I think we as a community should discuss (as here) what the tradeoffs are. Reduced innovation in AI is a real cost. So too are the harms identified in the WC report and more traditional X-risk harms. We should set the demands of firms such that the costs to innovation are outweighed by benefits from long-run wellbeing.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:51:53.833Z · EA · GW

As a blanket note about your next few points, I agree that the WC would disincentivize innovation to some extent. It was not my intention to claim—nor do I think I actually claimed (IIRC)—that it would have no socially undesirable incentive effects on innovation. Rather, the points I was making were more aimed at illuminating possible reasons why this might not be so bad. In general, my position is that the other upsides probably outweigh the (real!) downsides of disincentivizing innovation. Perhaps I should have been more clear about that.

But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.

Yep, that seems right.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:20:42.813Z · EA · GW

Thanks a ton for your substantial engagement with this, Larks. Like you, I might spread my responses out across a few posts to atomize the discussion.

I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs).

Thanks for this. TBQH, I was primarily familiar with the concerns cited in the Report as the reasons for opposition to corporate income taxation. In retrospect, I wish I had been able to get more acquainted with the anti-corporate-tax literature you cited. Since I'm not an economist, I was not aware of some of those sources and wasn't able to find them on my own. I agree that they make good points not adequately addressed by the Report.

For some more recent discussion in favor of capital taxation, see Korinek (2019). Admittedly, it's not clear how much this supports the WC because it does not necessarily target rents or fixed factors.