Posts

Antitrust-Compliant AI Industry Self-Regulation 2020-07-07T20:52:21.472Z · score: 25 (9 votes)
AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits 2020-07-06T18:46:03.433Z · score: 5 (4 votes)
AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good 2020-06-29T16:59:29.859Z · score: 9 (4 votes)
CARES Act Allows Charitable Deduction of 100% of Gross Income in 2020 2020-06-23T23:48:31.231Z · score: 46 (16 votes)
AI Benefits Post 1: Introducing “AI Benefits” 2020-06-22T16:58:20.103Z · score: 10 (6 votes)
Should EA Buy Distribution Rights for Foundational Books? 2020-06-17T05:38:32.723Z · score: 81 (37 votes)
FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good 2020-02-05T23:49:43.443Z · score: 51 (25 votes)
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA 2020-01-11T04:13:33.250Z · score: 39 (20 votes)
Defending Philanthropy Against Democracy 2019-10-06T07:20:45.888Z · score: 41 (23 votes)
Should I give to Our World In Data? 2019-09-10T04:56:41.437Z · score: 21 (14 votes)
Should EA Groups Run Organ Donor Registration Drives? 2019-03-27T16:29:40.261Z · score: 9 (8 votes)
On the (In)Applicability of Corporate Rights Cases to Digital Minds 2019-02-28T06:14:22.176Z · score: 13 (5 votes)
FHI Report: Stable Agreements in Turbulent Times 2019-02-21T17:12:51.085Z · score: 25 (12 votes)
EAs Should Invest All Year, then Give only on Giving Tuesday 2019-01-10T21:17:26.812Z · score: 49 (30 votes)
Which Image Do You Prefer?: a study of visual communication in six African countries 2018-12-03T06:38:40.758Z · score: 11 (14 votes)
Fisher & Syed on Tradable Obligations to Enhance Health 2018-08-12T22:17:20.304Z · score: 6 (6 votes)
Harvard EA's 2018–19 Vision 2018-08-04T22:47:29.289Z · score: 12 (14 votes)
Governmental CBA as an EA Career Step: A Shallow Investigation 2018-07-07T13:31:13.728Z · score: 6 (6 votes)

Comments

Comment by cullen_okeefe on How to massively increase your donations, for free · 2020-07-10T01:54:55.524Z · score: 2 (2 votes) · EA · GW

Relevant forum post: https://forum.effectivealtruism.org/posts/MAod5gvcQgdxaXdWA/long-term-donation-bunching

Comment by cullen_okeefe on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:40:58.145Z · score: 1 (1 votes) · EA · GW

India v. Pakistan seems very important as well.

Comment by cullen_okeefe on Long-term Donation Bunching? · 2020-06-23T23:50:42.101Z · score: 3 (2 votes) · EA · GW

Relevant for 2020: Due to the CARES Act, individuals can deduct charitable contributions of up to 100% of their adjusted gross income (AGI) this year.

Comment by cullen_okeefe on Investing to Give Beginner Advice? · 2020-06-23T23:50:01.836Z · score: 7 (4 votes) · EA · GW

It's actually 100% for 2020 due to the CARES Act!

Comment by cullen_okeefe on AI Benefits Post 1: Introducing “AI Benefits” · 2020-06-22T21:26:16.296Z · score: 1 (1 votes) · EA · GW

Thanks! You are correct. Updated to clarify that this is meant to be "the subset of AI Benefits on which I am focusing"—i.e., nonmarket benefits.

Comment by cullen_okeefe on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T20:09:37.011Z · score: 1 (1 votes) · EA · GW
  1. suffering that is "meaningful" (such as mourning)

This might be a specific instance of

3*) Suffering that is a natural result of healthy/normal/inevitable/desirable emotional reactions

Comment by cullen_okeefe on How to Fix Private Prisons and Immigration · 2020-06-19T18:47:51.075Z · score: 5 (4 votes) · EA · GW

One such goal might be to maximize the total societal contribution of any given set of inmates within the limits of the law (limits such as “Don’t restrict the freedom of inmates after their release”).

Of course, prisons also serve an important deterrent function, which is not well-addressed by this model.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T17:47:31.106Z · score: 4 (3 votes) · EA · GW

Thank you for this datapoint!

It’s important to note, however, that there would likely be a ton of variation across different books, depending on what the publisher paid the author in advance and how many books they've sold / how much money they've made back.

Presumably most of that is sunk cost, and what the publisher ought to care about is the discounted expected cash flows from the book.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T19:02:16.894Z · score: 12 (6 votes) · EA · GW

Also, are you able to disclose the cost of buying those rights?

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T19:01:36.115Z · score: 2 (2 votes) · EA · GW

This is very helpful data; thank you!

To your knowledge, has Singer ever considered doing the same for any of his other books?

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:56:28.963Z · score: 2 (2 votes) · EA · GW

I wonder if there have been any "cost per conversion" estimates for the Gideons.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:10:23.862Z · score: 2 (2 votes) · EA · GW

The 80,000 Hours career guide is available for free with an email signup. They might be in a good position to know, though that does not address the counterfactual. I agree TLYCS is better for that :-)

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T13:04:44.701Z · score: 2 (2 votes) · EA · GW

I wonder whether most of the value of buying back rights could be captured by just buying books for people on request. A streamlined process for doing this could have pretty low overheads — it only takes a couple minutes to send someone a book via Amazon — and seems scalable. This should be easy enough for a donor or EA org to try.

This is a good idea as well, though it could have the downside of preventing some of the more creative uses of community-owned digital distribution, such as aiding translation and making excerpting easier. I think something closer to a Creative Commons license for digital versions would be best (though the publisher might not agree to that).

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T12:54:34.956Z · score: 4 (3 votes) · EA · GW

Ah yes, I forgot that we already did this for TLYCS. Would be good to see a retrospective on this :-)

The EA Meta Fund gave $10,000 for this, which seems very worthwhile. Of course, this may not be the full cost, and this also covered some other things. I like that they included free audiobooks; we should probably do that too if we pursue this.

Comment by cullen_okeefe on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T05:58:23.703Z · score: 16 (8 votes) · EA · GW

It also occurs to me that doing so would aid translation and therefore entry into new markets.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-05-18T22:20:02.640Z · score: 2 (2 votes) · EA · GW

You are not the only person to have expressed interest in such an arrangement :-) Unfortunately I think there might be some antitrust problems with that.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-03-17T19:49:24.931Z · score: 1 (1 votes) · EA · GW

I am fairly confident that corporate policy is better. Corporate policy has a number of advantages:

  • Firms get more of a reputational boost
  • The number of actors you need to persuade is very small
  • Corporate policy is much more flexible
  • EA is probably better equipped to secure corporate policy changes than to secure new legislation/regulation
  • It's easier to make corporate policy permanent

Comment by cullen_okeefe on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-15T04:29:22.615Z · score: 3 (2 votes) · EA · GW

To my understanding, China produces these masks so massively that they can afford selling them to the whole population. But, let's say, in the US, we have the opposite situation.

Then shouldn't we just buy them from China?

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-03T19:03:00.221Z · score: 1 (1 votes) · EA · GW

Very helpful; thanks!

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T20:34:41.847Z · score: 4 (3 votes) · EA · GW

Thanks! Here's the quote:

Harvard epidemiologist Marc Lipsitch estimates that 40 to 70 percent of the human population could potentially be infected by the virus if it becomes pandemic. Not all of those people would get sick, he noted.

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T18:49:17.289Z · score: 4 (3 votes) · EA · GW

COVID-19 may infect 40-70 percent of the world's population.

What is your source for this? This seems way too high given that even in Hubei (population: 58.5 million), only about 1.1 in 1,000 people (total: 67,103) had confirmed cases.
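For reference, the arithmetic behind that figure, using only the numbers above:

```latex
\frac{67{,}103}{58{,}500{,}000} \approx 0.00115 \approx 1.1 \text{ per } 1{,}000
```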

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:30:10.403Z · score: 2 (2 votes) · EA · GW

The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:28:07.617Z · score: 2 (2 votes) · EA · GW

Certainly you still need legal accountability -- why wouldn't we have that? If we solve alignment, then we can just have the AI's owner be accountable for any law-breaking actions the AI takes.

I agree that that is a very good and desirable step to take. However, as I said, it also incentivizes the AI agent to obfuscate its actions and intentions to save its principal. In the human context, human agents do this too, but they are independently disincentivized from breaking the law because they face legal liability for their actions. I want (and I suspect you also want) AI systems to face a similar disincentive.

If I understand correctly, you identify two ways to do this in the teenager analogy:

  1. Rewiring
  2. Explaining laws and their consequences and letting the agent's existing incentives do the rest.

I could be wrong about this, but ultimately, for AI systems, it seems like both are actually similarly difficult. As you've said, for (2) to be most effective, you probably need "AI police." Those police will need a way of interpreting the legality of an AI agent's {"mental" state; actions} and mapping them onto existing laws.

But if you need to do that for effective enforcement, I don't see why (from a societal perspective) we shouldn't just do that on the actor's side and not the "police's" side. Baking the enforcement into the agents has the benefits of:

  1. Not incentivizing an arms race
  2. Giving the enforcers a clearer picture of the AI's "mental state"

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:34:16.779Z · score: 3 (3 votes) · EA · GW

But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).

My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.

I'm not sure what you're trying to imply with this -- does this make the AI's task easier? Harder? The generality somehow implies that the AI is safer?

Sorry. I was responding to the "all laws" point. My point was that making a law-following AI that can follow (A) all enumerated laws is not much harder than making one that can follow (B) any given law. That is, difficulty of construction scales sub-linearly with the number of laws the AI needs to follow. The interpretative tools that get you to (B) should be pretty generalizable to (A).

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:13:21.893Z · score: 2 (2 votes) · EA · GW

First, it would be hard to do. I am a programmer / ML researcher and I have no idea how to program an AI to follow the law in some guaranteed way. I also have an intuitive sense that it would be very difficult. I think the vast majority of programmers / ML researchers would agree with me on this.

This is valuable information. However, some ML people I have talked about this with have given positive feedback, so I think you might be overestimating the difficulty.

Second, it doesn't provide much value, because you can get most of the benefits via enforcement, which has the virtue of being the solution we currently use.

Part of the reason that enforcement works, though, is that human agents have an independent incentive not to break the law (and, e.g., to report legal violations), since they are legally accountable for their actions.

But AI-enabled police would be able to probe actions, infer motives, and detect bad behavior better than humans could. In addition, AI systems could have fewer rights than humans, and could be designed to be more transparent than humans, making the police's job easier.

This seems to require the same type of fundamental ML research that I am proposing: mapping AI actions onto laws.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T23:09:32.485Z · score: 1 (1 votes) · EA · GW

I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Apologies that this was confusing, and thanks for trying to deconfuse it :-)

Subsequent feedback on this (not reflected in the report) is that issuing low-value super-junior equity at the time of signing (and then holding it in trust) is probably the best option for this.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:59:31.090Z · score: 1 (1 votes) · EA · GW

I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

However, it's very hard to get individuals to sign a WC for a huge number of reasons. See:

The pool of potentially windfall-generating firms is much smaller and more stable than the pool of potentially windfall-generating individuals, meaning that securing commitments from firms would probably capture more of the potential windfall than securing commitments from individuals. Thus, targeting firms as such seems reasonable.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:56:50.102Z · score: 1 (1 votes) · EA · GW

Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical): ... However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

I think both outcomes are possible. The second point is simply to point out that the WC does not and cannot (as a legal matter) prevent a state from levying taxes on firms. The first two points, by contrast, are a prediction that the WC will make such taxation less likely.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:52:25.643Z · score: 1 (1 votes) · EA · GW

The report then goes on to discuss externalities:

Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax.

I agree that the WC does not target the externalities of AI development maximally efficiently. However, I think those externalities are probably significantly correlated with windfall generation, which seems very likely to carry a risk of many large negative externalities, such as those cited in the Malicious Use report and classic X-risks.

A good analogy might therefore be to a gas tax for funding road construction/maintenance, which imperfectly targets the thing we actually care about (wear and tear on roads), but is correlated with it so it's a decent policy.

To be clear, I agree that it's not the best way of addressing those externalities, and that the best possible option is to institute a Pigouvian tax (via insurance on them like Farquhar et al. suggest or otherwise).

'Being very profitable' is not a negative externality

It is if it leads to inequality, which it seems likely to. Equality is a psychological good, and so windfall has negative psychological externalities on the "losers."

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T20:04:31.113Z · score: 1 (1 votes) · EA · GW

Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

This same reasoning also shows why firms might seek positional goods. For example, executives and AI engineers might really care about being the first to develop AGI. Thus, the positional arguments for taxing windfall come back into play to the same extent that this is true.

Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

This is certainly true. I think we as a community should discuss (as here) what the tradeoffs are. Reduced innovation in AI is a real cost. So too are the harms identified in the WC report and more traditional X-risk harms. We should calibrate our demands on firms so that the costs to innovation are outweighed by the benefits to long-run wellbeing.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:51:53.833Z · score: 1 (1 votes) · EA · GW

As a blanket note about your next few points, I agree that the WC would disincentivize innovation to some extent. It was not my intention to claim—nor do I think I actually claimed (IIRC)—that it would have no socially undesirable incentive effects on innovation. Rather, the points I was making were more aimed at illuminating possible reasons why this might not be so bad. In general, my position is that the other upsides probably outweigh the (real!) downsides of disincentivizing innovation. Perhaps I should have been more clear about that.

But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.

Yep, that seems right.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:20:42.813Z · score: 1 (1 votes) · EA · GW

Thanks a ton for your substantial engagement with this, Larks. Like you, I might spread my replies out across a few posts to keep them atomized.

I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs).

Thanks for this. TBQH, I was primarily familiar with the concerns cited in the report as reasons for opposing corporate income taxation. In retrospect, I wish I had gotten better acquainted with the anti-corporate-tax literature you cite; since I'm not an economist, I was not aware of (and wasn't able to find) some of those sources. I agree that they make good points not adequately addressed by the Report.

For some more recent discussion in favor of capital taxation, see Korinek (2019). Admittedly, it's not clear how much this supports the WC because it does not necessarily target rents or fixed factors.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:05:03.358Z · score: 2 (2 votes) · EA · GW

Thanks Ramiro!

First, consider the “simple” example where a signatory company promises to donate 10% of its profits from a revolutionary AI system in 2060, a situation with an estimated probability of about 1%; the present value of this obligation would currently amount to U$650 million (in 2010 dollars). This seems a lot; however, I contend that, given investors’ hyperbolic discount, they probably wouldn’t be very concerned about it

Interesting. I don't think it's relevant, from a legal standpoint, that investors might discount hyperbolically rather than exponentially. I assume that a court would apply standard exponential discounting at market rates. But this is a promising psychological and pragmatic fact!
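For concreteness, here is a minimal sketch of the standard exponential-discounting calculation a court might apply; the symbols are illustrative, not taken from the report or the comment above. With estimated probability $p$ that the windfall obligation is triggered, payout $V$, annual market discount rate $r$, and $t$ years until payment:

```latex
PV = \frac{p \cdot V}{(1 + r)^{t}}
```

A hyperbolic discounter instead applies a factor more like $1/(1 + kt)$, which is the distinction the quoted comment is drawing.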

I’ve checked with some accountants, and this obligation would (today) be probably classified as a contingent liability of remote possibility (which, under IAS 37, means it wouldn’t impact the company’s balance sheet – it doesn’t even have to be disclosed in its annual report). So, I doubt such an obligation would negatively impact a company’s market value and profits (in the short-term); actually, as there’s no “bad marketing”, it could very well increase them.

If this is right, this is very helpful indeed :-)

Second (all this previous argument was meant to get here), would it violate some sort of fiduciary duty? Even if it doesn’t affect present investors, it could affect future ones: i.e., supposing the Clause is enforced, can these investors complain? That’s where things get messy to me. If the fiduciary duty assumes a person-affecting conception of duties (as law usually does), I believe it can’t. First, if the Clause were public, any investor that bought company shares after the promise would have done it in full knowledge – and so wouldn’t be allowed to complain; and, if it didn’t affect its market value in 2019, even older investors would have to face the objection “but you could have sold your shares without loss.” Also, given the precise event “this company made this discovery in such-and-such way”, it’s quite likely that the event of the promise figures in the causal chain that made this precise company get this result – it certainly didn’t prevent it! Thus, even future investors wouldn’t be allowed to complain.

See § III of the report :-)

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-14T20:12:47.705Z · score: 1 (1 votes) · EA · GW

Yep, thinking through the accounting of this would be very important. Unfortunately I'm not an accountant but I would very much like to see an accountant discuss how to structure this in a way that does not prematurely burden a signatory's books.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:46:25.759Z · score: 2 (2 votes) · EA · GW

Thanks Rohin!

I don't think "alignment" is harder or more indeterminate, where "alignment" means something like "I have in mind something I want the AI system to do, it does that thing, without trying to manipulate me / deceive me etc."

Yeah, I agree with this.

imagine there was a law that said "All AI systems must not deceive their users, and must do what they believe their users want". A real law would probably only be slightly more explicit than that?

I'm not sure that's true. (Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.

Creating an AI system that follows all laws seems a lot harder.

Lawyers approximate generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills and be able to simulate legal adjudications of proposed actions.

I think this would probably have been true of expert systems but not so true of deep learning-based systems.

Interesting; I don't have good intuitions on this!

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:37:20.928Z · score: 1 (1 votes) · EA · GW

My guess is that programming AI to follow law might be easier than, or preferable to, enforcing the law against human principals. A weakly aligned AI (not an X-risk or a risk to its principals, but not bound by law or general human morality) deployed by a human principal will probably come across illegal ways to advance its principal's goals. It will also probably be able to hide its actions, obscure its motives, and/or evade detection better than humans could. If so, the equilibrium strategy is to give the AI agent minimal oversight and tacitly allow it to break the law while advancing the principal's goals, since enforcement against the principal is unlikely. This seems bad!

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:38:02.805Z · score: 5 (4 votes) · EA · GW

You might say that we could train an AI system to learn what is and isn't breaking the law; but then you might as well train an AI system to learn what is and isn't the thing you want it to do. It's not clear why training to follow laws would be easier than training it to do what you want; the latter would be a much more useful AI system.

Some reasons why this might be true:

  • Law is less indeterminate than you might think, and probably more definite than human values
  • Law has authoritative corpora readily available
  • Law has built-in, authoritative adjudication/dispute resolution mechanisms. Cf. AI Safety by Debate.

In general, my guess is that there is a large space of actions that:

  1. Are unaligned, and
  2. Are illegal, and
  3. Are such that, due to the formality of parts of law and the legal process, an AI can be made to have higher confidence that a given action is (2) than (1).

However, it's very possible that, as you suggest, solving AI legal compliance requires solving AI Safety generally. This seems somewhat unlikely to me but I have low confidence in this since I'm not an expert. :-)

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:26:36.687Z · score: 6 (5 votes) · EA · GW

Reasons other than directly getting value alignment from law that you might want to program AI to follow the law:

  • We will presumably want organizations with AI to be bound by law. Making their AI agents bound by law seems very important to that.
  • Relatedly, we probably want to be able to make ex ante deals that obligate AI/AI-owners to do stuff post-AGI, which seems much harder if AGI can evade enforcement.
  • We don't want to rely on the incentives of human principals to ensure their agents advance their goals in purely legal ways, especially given AGI's ability to e.g. hide its actions or motives.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:20:11.249Z · score: 2 (2 votes) · EA · GW

I am not a law expert, but my impression is that there is a lot of common sense + human judgment in the application of laws, just as there is a lot of common sense + human judgment in interpreting requests.

(I am a lawyer by training.)

Yes, this is certainly true. Many laws explicitly or implicitly rely on standards (i.e., less-definite adjudicatory formulas) rather than hard-and-fast rules. "Reasonableness," for example, is often a key term in a legal claim or defense. Juries often make such determinations, which also means that the actual legality of an action is resolved only upon adjudication, not ex ante (although an aligned, capable AI could in principle estimate the probability that a jury would find its actions reasonable; that's what lawyers do).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:26:04.041Z · score: 1 (1 votes) · EA · GW

I'd imagine there's an audience for it!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:25:29.788Z · score: 6 (8 votes) · EA · GW

Thanks Wei! This is a very thoughtful comment.

I completely agree that we should be wary of those aspects of SJ as well. I'm not sure that I'm "less" worried about it than you; I do worry about it. However, I have not seen much of this behavior in the EA community so I am not immediately worried and have some reasons to be fairly optimistic in the long run:

  1. Founder effects and strong communal norms of open discussion in the EA community, which I think most newcomers internalize pretty heavily.
  2. Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people unfairly "canceled" are quite small from an EA perspective.
  3. Heavy influence of and connection to philosophy selects for openness norms as well.
  4. Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

To restate, I would definitely be pretty wary of any attempt to reform EA in a way that seriously endangered norms of civility, open debate, intellectual inquiry, etc. as they are currently practiced. I actually think we do a very good job as a movement of balancing these goals. This is part of why I currently spend more time in EA than SJ.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:49:29.503Z · score: 13 (9 votes) · EA · GW

Beyond movement building & inclusivity, I think it makes sense for EA as a movement to keep its current approach, because it's been working pretty well IMO.

I think the thing EAs as people (with a worldview that includes things beyond EA) might want to consider—and which SJ could inform—is the demands that historical injustices of, e.g., colonialism, racism, etc. make on us. I think those demands are plausibly quite large, and failure to satisfy them could constitute an ongoing moral catastrophe. Since they're not welfarist, they're outside the scope of EA as it currently exists. But for moral uncertainty reasons I think many people should think about them.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:52.535Z · score: 2 (2 votes) · EA · GW

I don't! It would be interesting to see! From an EA perspective, though, flow-through effects on long-term stuff might dominate the considerations.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:15.935Z · score: 2 (2 votes) · EA · GW

Hard to imagine it ever being too much TBH. I and most of my colleagues continue to invest in AI upskilling. However, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have "enough" ML skill because the field moves faster than I can keep up with it, and there are approximately linear returns on it (and a bunch of other skills that I've mentioned in these comments).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:45:02.593Z · score: 2 (2 votes) · EA · GW

I would lean pretty heavily towards ML. Taking an intro to CS class is good background, but specialize other than that. Some adjacent areas, like cybersecurity, are good too.

(You could help AI development without specializing in AI, but this is specifically for AI Policy careers.)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:41:39.357Z · score: 11 (5 votes) · EA · GW

Yeah, I think I'm more bullish on JDs than the average EA is, because they're useful for a ton of careers. Like, a JD is an asset for pretty much any career in government, where you can work on a lot of EA problems.

(Of course, lawyers can usefully work on these outside of government as well.)

I think EA-relevant skills in economics might be particularly valuable in some fields, like governmental cost-benefit analysis.

Of course, people in government can have a lot of impact on problems that most EAs don't work on due to the amount of influence they have.

I also think that there might be opportunities for lawyers to help grow/structure/improve the EA movement, like:

  • Estate planning (e.g., help every EA who wants one get a will or trust that will give a lot of their estate to EA charities)
  • Setting up nonprofits and other organizations
  • Tax help
  • Immigration help for EA employers
  • Creating weird entities or financial instruments that help EAs achieve their goals (e.g., 1, 2, 3)

If I were not doing AI policy, I might write up a grant proposal to spend my time doing these things in CA.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:25:09.623Z · score: 1 (1 votes) · EA · GW

Limiting the discussion to the most impactful jobs from an EA perspective, I think it can be pretty hard, for reasons I lay out here. I got lucky in many, many ways, including that I was accepted to 80K coaching, turned out to be good at this line of work (which I easily could not have been), and was in law school during the time when FHI was just spinning up its GovAI internship program.

My guess is that general credentials are probably insufficient without accompanying work that shows your ability to address the unique issues of AGI policy well. So opportunities to try your hand at that are pretty valuable if you can find them.

That said, opportunities to show general AI policy capabilities—even on "short-term" issues—are good signals and can lead to a good career in this area!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:18:49.506Z · score: 13 (6 votes) · EA · GW

I think there are a lot of issues that apply to both short- and long-term concerns.

Relatedly, this is why people who can't immediately work for an EA-aligned org on "long-term" AI issues can build both useful career capital and do useful work by working in more general AI policy.

For the second question, a pretty boring EA answer: I would like to see more people in near-term AI policy engage in explicit and quantifiable cause prioritization for their work. I think that, as EAs generally recognize, the impact of these things probably varies quite a lot. That should guide which questions people work on.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:09:11.318Z · score: 2 (2 votes) · EA · GW

The boring answer is that there's a variety of relationships that need to be managed well in order for AGI deployment to go optimally. Comparative advantage and opportunity are probably good indicators of where the most fruitful work for any given individual is. That said, I think working with industry can be pretty highly leveraged, since it's more nimble and easier to persuade than government.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:05:47.588Z · score: 5 (4 votes) · EA · GW

ML knowledge is good and important; I generally wish I had more of it and use many of my Learning Days to improve it. That link also shows some of the other, non-law subjects I've been studying.

In law school, I studied a lot of different subjects that have been useful, like:

  • Administrative law
  • National security law
  • Constitutional law
  • Corporate law
  • Compliance
  • Contract law
  • Property law
  • Patent law
  • International law
  • Negotiations
  • Antitrust law

I am pretty bullish on most of the specific stuff you mentioned. I think macrohistory, history of technology, general public policy, forecasting, and economics are pretty useful. Unfortunately, it's such a weird and idiosyncratic field that there's not really a one-size-fits-all curriculum for getting into it, though this also means there are many productive ways to spend one's time preparing for it.