Posts

FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good 2020-02-05T23:49:43.443Z · score: 51 (25 votes)
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA 2020-01-11T04:13:33.250Z · score: 39 (20 votes)
Defending Philanthropy Against Democracy 2019-10-06T07:20:45.888Z · score: 41 (23 votes)
Should I give to Our World In Data? 2019-09-10T04:56:41.437Z · score: 21 (14 votes)
Should EA Groups Run Organ Donor Registration Drives? 2019-03-27T16:29:40.261Z · score: 9 (8 votes)
On the (In)Applicability of Corporate Rights Cases to Digital Minds 2019-02-28T06:14:22.176Z · score: 12 (4 votes)
FHI Report: Stable Agreements in Turbulent Times 2019-02-21T17:12:51.085Z · score: 25 (12 votes)
EAs Should Invest All Year, then Give only on Giving Tuesday 2019-01-10T21:17:26.812Z · score: 49 (30 votes)
Which Image Do You Prefer?: a study of visual communication in six African countries 2018-12-03T06:38:40.758Z · score: 11 (14 votes)
Fisher & Syed on Tradable Obligations to Enhance Health 2018-08-12T22:17:20.304Z · score: 6 (6 votes)
Harvard EA's 2018–19 Vision 2018-08-04T22:47:29.289Z · score: 11 (13 votes)
Governmental CBA as an EA Career Step: A Shallow Investigation 2018-07-07T13:31:13.728Z · score: 6 (6 votes)

Comments

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-03-17T19:49:24.931Z · score: 1 (1 votes) · EA · GW

I am fairly confident that corporate policy is better. Corporate policy has a number of advantages:

  • Firms get more of a reputational boost
  • The number of actors you need to persuade is very small
  • Corporate policy is much more flexible
  • EA is probably better-equipped to secure corporate policy changes than to obtain new legislation/regulation
  • It's easier to make corporate policy permanent
Comment by cullen_okeefe on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-15T04:29:22.615Z · score: 3 (2 votes) · EA · GW

To my understanding, China produce these masks so massively that they can afford selling them to whole population. But, let's say, in US, we have the opposite situation.

Then shouldn't we just buy them from China?

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-03T19:03:00.221Z · score: 1 (1 votes) · EA · GW

Very helpful; thanks!

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T20:34:41.847Z · score: 4 (3 votes) · EA · GW

Thanks! Here's the quote:

Harvard epidemiologist Marc Lipsitch estimates that 40 to 70 percent of the human population could potentially be infected by the virus if it becomes pandemic. Not all of those people would get sick, he noted.

Comment by cullen_okeefe on Activism for COVID-19 Local Preparedness · 2020-03-02T18:49:17.289Z · score: 4 (3 votes) · EA · GW

COVID-19 may infect 40-70 percent of the world's population.

What is your source for this? This seems way too high given that even in Hubei (population: 58.5 million), only about 1.1 in 1,000 people (total: 67,103) had confirmed cases.
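The back-of-the-envelope rate quoted above can be checked directly. This is just a sanity check on the figures as given in the comment:

```python
# Quick check of the confirmed-case rate for Hubei, using the figures above.
hubei_population = 58_500_000  # ~58.5 million
confirmed_cases = 67_103

rate_per_thousand = confirmed_cases / hubei_population * 1_000
print(f"{rate_per_thousand:.2f} confirmed cases per 1,000 people")  # ~1.15
```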

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:30:10.403Z · score: 2 (2 votes) · EA · GW

The key difference in my mind is that the AI system does not need to determine the relative authoritativeness of different pronouncements of human value, since the legal authoritativeness of e.g. caselaw is pretty formalized. But I agree that this is less of an issue if the primary route to alignment is just getting an AI to follow the instructions of its principal.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-28T01:28:07.617Z · score: 2 (2 votes) · EA · GW

Certainly you still need legal accountability -- why wouldn't we have that? If we solve alignment, then we can just have the AI's owner be accountable for any law-breaking actions the AI takes.

I agree that that is a very good and desirable step to take. However, as I said, it also incentivizes the AI agent to obfuscate its actions and intentions to protect its principal. In the human context, human agents do this too, but they are independently disincentivized from breaking the law because they face legal liability for their actions. I want (and I suspect you also want) AI systems to face a similar disincentive.

If I understand correctly, you identify two ways to do this in the teenager analogy:

  1. Rewiring
  2. Explaining laws and their consequences and letting the agent's existing incentives do the rest.

I could be wrong about this, but ultimately, for AI systems, it seems like both are actually similarly difficult. As you've said, for option 2 to be most effective, you probably need "AI police." Those police will need a way of interpreting the legality of an AI agent's {"mental" state; actions} and mapping them onto existing laws.

But if you need to do that for effective enforcement, I don't see why (from a societal perspective) we shouldn't just do that on the actor's side and not the "police's" side. Baking the enforcement into the agents has the benefits of:

  1. Not incentivizing an arms race
  2. Giving the enforcers a clearer picture of the AI's "mental state"
Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:34:16.779Z · score: 3 (3 votes) · EA · GW

But my real reason for not caring too much about this is that in this story we rely on the AI's "intelligence" to "understand" laws, as opposed to "programming it in"; given that we're worried about superintelligent AI it should be "intelligent" enough to "understand" what humans want as well (given that humans seem to be able to do that).

My intuition is that more formal systems will be easier for AI to understand earlier in the "evolution" of SOTA AI intelligence than less-formal systems. Since law is more formal than human values (including both the way it's written and the formal significance of interpretative texts), we might get good law-following before good value alignment.

I'm not sure what you're trying to imply with this -- does this make the AIs task easier? Harder? The generality somehow implies that the AI is safer?

Sorry. I was responding to the "all laws" point. My point was that making a law-following AI that can follow (A) all enumerated laws is not much harder than making one that can follow (B) any given law. That is, difficulty of construction scales sub-linearly with the number of laws the AI needs to follow. The interpretative tools that get us to (B) should be pretty generalizable to (A).

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-24T23:13:21.893Z · score: 2 (2 votes) · EA · GW

First, it would be hard to do. I am a programmer / ML researcher and I have no idea how to program an AI to follow the law in some guaranteed way. I also have an intuitive sense that it would be very difficult. I think the vast majority of programmers / ML researchers would agree with me on this.

This is valuable information. However, some ML people I have talked about this with have given positive feedback, so I think you might be overestimating the difficulty.

Second, it doesn't provide much value, because you can get most of the benefits via enforcement, which has the virtue of being the solution we currently use.

Part of the reason that enforcement works, though, is that human agents have an independent incentive not to break the law (or, e.g., report legal violations) since they are legally accountable for their actions.

But AI-enabled police would be able to probe actions, infer motives, and detect bad behavior better than humans could. In addition, AI systems could have fewer rights than humans, and could be designed to be more transparent than humans, making the police's job easier.

This seems to require the same type of fundamental ML research that I am proposing: mapping AI actions onto laws.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T23:09:32.485Z · score: 1 (1 votes) · EA · GW

I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Apologies that this was confusing, and thanks for trying to deconfuse it :-)

Subsequent feedback on this (not reflected in the report) is that issuing low-value super-junior equity at the time of signing (and then holding it in trust) is probably the best option for this.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:59:31.090Z · score: 1 (1 votes) · EA · GW

I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

However, it's very hard to get individuals to sign a WC for a huge number of reasons. See

The pool of potentially windfall-generating firms is much smaller and more stable than the pool of potentially windfall-generating individuals, meaning that securing commitments from firms would probably capture more of the potential windfall than securing commitments from individuals. Thus, targeting firms as such seems reasonable.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:56:50.102Z · score: 1 (1 votes) · EA · GW

Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical): ... However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

I think both outcomes are possible. The second point is simply to point out that the WC does not and cannot (as a legal matter) prevent a state from levying taxes on firms. The first two points, by contrast, are a prediction that the WC will make such taxation less likely.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T22:52:25.643Z · score: 1 (1 votes) · EA · GW

The report then goes on to discuss externalities:

Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax.

I agree that the WC does not target the externalities of AI development maximally efficiently. However, I think that the externalities of such development are probably significantly correlated with windfall-generation. Windfall-generation seems to me very likely to be accompanied by a risk of a large number of negative externalities—such as those cited in the Malicious Use report and classic X-risks.

A good analogy might therefore be to a gas tax for funding road construction/maintenance, which imperfectly targets the thing we actually care about (wear and tear on roads), but is correlated with it so it's a decent policy.

To be clear, I agree that it's not the best way of addressing those externalities, and that the best possible option is to institute a Pigouvian tax (via insurance on them like Farquhar et al. suggest or otherwise).

'Being very profitable' is not a negative externality

It is if it leads to inequality, which it seems likely to. Equality is a psychological good, and so windfall has negative psychological externalities on the "losers."

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T20:04:31.113Z · score: 1 (1 votes) · EA · GW

Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

This same reasoning also shows why firms might seek positional goods. E.g., executives and AI engineers might really care about being the first to develop AGI. Thus, the positional arguments for taxing windfall come back into play to the same extent that this is true.

Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

This is certainly true. I think we as a community should discuss (as here) what the tradeoffs are. Reduced innovation in AI is a real cost. So too are the harms identified in the WC report and more traditional X-risk harms. We should set the demands on firms such that the costs to innovation are outweighed by the benefits to long-run wellbeing.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:51:53.833Z · score: 1 (1 votes) · EA · GW

As a blanket note about your next few points, I agree that the WC would disincentivize innovation to some extent. It was not my intention to claim—nor do I think I actually claimed (IIRC)—that it would have no socially undesirable incentive effects on innovation. Rather, the points I was making were more aimed at illuminating possible reasons why this might not be so bad. In general, my position is that the other upsides probably outweigh the (real!) downsides of disincentivizing innovation. Perhaps I should have been more clear about that.

But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.

Yep, that seems right.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:20:42.813Z · score: 1 (1 votes) · EA · GW

Thanks a ton for your substantial engagement with this, Larks. Like you, I might spread my replies out across a few comments to atomize the discussion.

I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs).

Thanks for this. TBQH, I was primarily familiar with the concerns cited in the Report as the reasons for opposition to corporate income taxation. In retrospect, I wish I had been able to get more acquainted with the anti-corporate-tax literature you cited. Since I'm not an economist, I was not aware of, and wasn't able to find, some of those sources. I agree that they make good points not adequately addressed by the Report.

For some more recent discussion in favor of capital taxation, see Korinek (2019). Admittedly, it's not clear how much this supports the WC because it does not necessarily target rents or fixed factors.

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-24T19:05:03.358Z · score: 2 (2 votes) · EA · GW

Thanks Ramiro!

First, consider the “simple” example where a signatory company promises to donate 10% of its profits from a revolutionary AI system in 2060, a situation with an estimated probability of about 1%; the present value of this obligation would currently amount to U$650 million (in 2010 dollars). This seems a lot; however, I contend that, given investors’ hyperbolic discount, they probably wouldn’t be very concerned about it

Interesting. I don't think it's relevant, from a legal standpoint, that investors might discount hyperbolically rather than exponentially. I assume that a court would apply standard exponential discounting at market rates. But this is a promising psychological and pragmatic fact!
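The point about a court applying standard exponential discounting at market rates can be made concrete with a toy calculation. The rate, horizon, and payout below are illustrative assumptions, not figures from the report or the comment:

```python
# Toy present-value calculation for a contingent future obligation,
# using standard exponential discounting. All numbers are hypothetical.

def present_value(expected_payout: float, years: float, rate: float) -> float:
    """PV under exponential discounting at a market rate: v / (1 + r)^t."""
    return expected_payout / (1.0 + rate) ** years

# Hypothetical: a 1% chance of owing $65B forty years from now.
expected_payout = 0.01 * 65e9  # expected value = $650M

pv = present_value(expected_payout, years=40, rate=0.05)
print(f"PV at a 5% market rate: ${pv:,.0f}")  # roughly $92M
```

Even a nominally enormous obligation shrinks substantially at market rates over long horizons, which is consistent with the intuition that investors may not weight it heavily today.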

I’ve checked with some accountants, and this obligation would (today) be probably classified as a contingent liability of remote possibility (which, under IAS 37, means it wouldn’t impact the company’s balance sheet – it doesn’t even have to be disclosed in its annual report). So, I doubt such an obligation would negatively impact a company’s market value and profits (in the short-term); actually, as there’s no “bad marketing”, it could very well increase them.

If this is right, this is very helpful indeed :-)

Second (all this previous argument was meant to get here), would it violate some sort of fiduciary duty? Even if it doesn’t affect present investors, it could affect future ones: i.e., supposing the Clause is enforced, can these investors complain? That’s where things get messy to me. If the fiduciary duty assumes a person-affecting conception of duties (as law usually does), I believe it can’t. First, if the Clause were public, any investor that bought company shares after the promise would have done it in full knowledge – and so wouldn’t be allowed to complain; and, if it didn’t affect its market value in 2019, even older investors would have to face the objection “but you could have sold your shares without loss.” Also, given the precise event “this company made this discovery in such-and-such way”, it’s quite likely that the event of the promise figures in the causal chain that made this precise company get this result – it certainly didn’t prevent it! Thus, even future investors wouldn’t be allowed to complain.

See § III of the report :-)

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-14T20:12:47.705Z · score: 1 (1 votes) · EA · GW

Yep, thinking through the accounting of this would be very important. Unfortunately I'm not an accountant but I would very much like to see an accountant discuss how to structure this in a way that does not prematurely burden a signatory's books.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:46:25.759Z · score: 2 (2 votes) · EA · GW

Thanks Rohin!

I don't think "alignment" is harder or more indeterminate, where "alignment" means something like "I have in mind something I want the AI system to do, it does that thing, without trying to manipulate me / deceive me etc."

Yeah, I agree with this.

imagine there was a law that said "All AI systems must not deceive their users, and must do what they believe their users want". A real law would probably only be slightly more explicit than that?

I'm not sure that's true. (Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.

Creating an AI system that follows all laws seems a lot harder.

Lawyers approximate generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills and be able to simulate legal adjudications of proposed actions.

I think this would probably have been true of expert systems but not so true of deep learning-based systems.

Interesting; I don't have good intuitions on this!

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:37:20.928Z · score: 1 (1 votes) · EA · GW

My guess is that programming AI to follow the law might be easier than, or preferable to, enforcing the law against human principals. A weakly aligned AI (not an X-risk or a risk to its principals, but not bound by law or general human morality) deployed by a human principal will probably come across illegal ways to advance its principal's goals. It will also probably be able to hide its actions, obscure its motives, and/or evade detection better than humans could. If so, the equilibrium strategy is to give minimal oversight to the AI agent and tacitly allow it to break the law while advancing the principal's goals, since enforcement against the principal is unlikely. This seems bad!

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:38:02.805Z · score: 5 (4 votes) · EA · GW

You might say that we could train an AI system to learn what is and isn't breaking the law; but then you might as well train an AI system to learn what is and isn't the thing you want it to do. It's not clear why training to follow laws would be easier than training it to do what you want; the latter would be a much more useful AI system.

Some reasons why this might be true:

  • Law is less indeterminate than you might think, and probably more definite than human values
  • Law has authoritative corpora readily available
  • Law has built-in, authoritative adjudication/dispute resolution mechanisms. Cf. AI Safety by Debate.

In general, my guess is that there is a large space of actions that:

  1. Are unaligned, and
  2. Are illegal, and
  3. Due to the formality of parts of law and the legal process, an AI can be made to have higher confidence that an action is (2) than (1).

However, it's very possible that, as you suggest, solving AI legal compliance requires solving AI Safety generally. This seems somewhat unlikely to me but I have low confidence in this since I'm not an expert. :-)

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:26:36.687Z · score: 6 (5 votes) · EA · GW

Reasons other than directly getting value alignment from law that you might want to program AI to follow the law:

  • We will presumably want organizations with AI to be bound by law. Making their AI agents bound by law seems very important to that.
  • Relatedly, we probably want to be able to make ex ante deals that obligate AI/AI-owners to do stuff post-AGI, which seems much harder if AGI can evade enforcement.
  • We don't want to rely on the incentives of human principals to ensure their agents advance their goals in purely legal ways, especially given AGI's ability to e.g. hide its actions or motives.
Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:20:11.249Z · score: 2 (2 votes) · EA · GW

I am not a law expert, but my impression is that there is a lot of common sense + human judgment in the application of laws, just as there is a lot of common sense + human judgment in interpreting requests.

(I am a lawyer by training.)

Yes, this is certainly true. Many laws explicitly or implicitly rely on standards (i.e., less-definite adjudicatory formulas) rather than hard-and-fast rules. "Reasonableness," for example, is often a key term in a legal claim or defense. Juries often make such determinations, which also means that the actual legality of an action is resolved only upon adjudication, not ex ante (although an aligned, capable AI could in principle estimate the probability that a jury would find its actions reasonable—that's what lawyers do).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:26:04.041Z · score: 1 (1 votes) · EA · GW

I'd imagine there's an audience for it!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:25:29.788Z · score: 4 (6 votes) · EA · GW

Thanks Wei! This is a very thoughtful comment.

I completely agree that we should be wary of those aspects of SJ as well. I'm not sure that I'm "less" worried about it than you; I do worry about it. However, I have not seen much of this behavior in the EA community so I am not immediately worried and have some reasons to be fairly optimistic in the long run:

  1. Founder effects and strong communal norms toward open discussion in the EA community, into which I think most newcomers get pretty heavily inculcated.
  2. Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly "canceled" are quite small from an EA perspective.
  3. Heavy influence of and connection to philosophy selects for openness norms as well.
  4. Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

To restate, I would definitely be pretty wary of any attempt to reform EA in a way that seriously endangered norms of civility, open debate, intellectual inquiry, etc. as they currently are practiced. I actually think we do a very good job as a movement of balancing these goals. This is part of why I currently spend more time in EA than SJ.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:49:29.503Z · score: 13 (9 votes) · EA · GW

Beyond movement building & inclusivity, I think it makes sense for EA as a movement to keep their current approach because it's been working pretty well IMO.

I think the thing EAs as people (with a worldview that includes things beyond EA) might want to consider—and which SJ could inform—is the demands that historical injustices of, e.g., colonialism, racism, etc. make on us. I think those demands are plausibly quite large, and failure to satisfy them could constitute an ongoing moral catastrophe. Since they're not welfarist, they're outside the scope of EA as it currently exists. But for moral uncertainty reasons I think many people should think about them.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:52.535Z · score: 2 (2 votes) · EA · GW

I don't! Would be interesting to see! From an EA perspective, though, flowthrough effects on long-term stuff might dominate the considerations.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:15.935Z · score: 2 (2 votes) · EA · GW

Hard to imagine it ever being too much TBH. I and most of my colleagues continue to invest in AI upskilling. However, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have "enough" ML skill because the field moves faster than I can keep up with it, and there are approximately linear returns on it (and a bunch of other skills that I've mentioned in these comments).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:45:02.593Z · score: 2 (2 votes) · EA · GW

I would lean pretty heavily towards ML. Taking an intro to CS class is good background, but specialize other than that. Some adjacent areas, like cybersecurity, are good too.

(You could help AI development without specializing in AI, but this is specifically for AI Policy careers.)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:41:39.357Z · score: 11 (5 votes) · EA · GW

Yeah, I think I'm more bullish on JDs than the average EA because they're very useful for a ton of careers. A JD is an asset for pretty much any career in government, where you can work on a lot of EA problems, like:

(Of course, lawyers can usefully work on these outside of government as well.)

I think EA-relevant skills in economics might be particularly valuable in some fields, like governmental cost-benefit analysis.

Of course, people in government can have a lot of impact on problems that most EAs don't work on due to the amount of influence they have.

I also think that there might be opportunities for lawyers to help grow/structure/improve the EA movement, like:

  • Estate planning (e.g., help every EA who wants one get a will or trust that will give a lot of their estate to EA charities)
  • Setting up nonprofits and other organizations
  • Tax help
  • Immigration help for EA employers
  • Creating weird entities or financial instruments that help EAs achieve their goals (e.g., 1, 2, 3)

If I was not doing AI policy, I might write up a grant proposal to spend my time doing these things in CA.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:25:09.623Z · score: 1 (1 votes) · EA · GW

Limiting the discussion to the most impactful jobs from an EA perspective, I think it can be pretty hard for reasons I lay out here. I got lucky in many, many ways, including that I was accepted to 80K coaching, turned out to be good at this line of work (which I easily might not have been), and was in law school during the time when FHI was just spinning up its GovAI internship program.

My guess is that general credentials are probably insufficient without accompanying work that shows your ability to address the unique issues of AGI policy well. So opportunities to try your hand at that are pretty valuable if you can find them.

That said, opportunities to show general AI policy capabilities—even on "short-term" issues—are good signals and can lead to a good career in this area!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:18:49.506Z · score: 13 (6 votes) · EA · GW

I think there are a lot of issues that have applicability to both short- and long-term concerns, like:

Relatedly, this is why people who can't immediately work for an EA-aligned org on "long-term" AI issues can build both useful career capital and do useful work by working in more general AI policy.

For the second question, a pretty boring EA answer: I would like to see more people in near-term AI policy engage in explicit and quantifiable cause prioritization for their work. I think that, as EAs generally recognize, the impact between these things probably varies quite a lot. That should guide which questions people work on.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:09:11.318Z · score: 2 (2 votes) · EA · GW

The boring answer is that there's a variety of relationships that need to be managed well in order for AGI deployment to go optimally. Comparative advantage and opportunity are probably good indicators of where the most fruitful work for any given individual is. That said, I think working with industry can be pretty highly leveraged since it's more nimble and easier to persuade than government IMO.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:05:47.588Z · score: 5 (4 votes) · EA · GW

ML knowledge is good and important; I generally wish I had more of it and use many of my Learning Days to improve it. That link also shows some of the other, non-law subjects I've been studying.

In law school, I studied a lot of different subjects that have been useful, like:

  • Administrative law
  • National Security law
  • Constitutional law
  • Corporate law
  • Compliance
  • Contract law
  • Property law
  • Patent law
  • International law
  • Negotiations
  • Antitrust law

I am pretty bullish on most of the specific stuff you mentioned. I think macrohistory, history of technology, general public policy, forecasting, and economics are pretty useful. Unfortunately, it's such a weird and idiosyncratic field that there's not really a one-size-fits-all curriculum for getting into it, though this also means there's many productive ways to spend one's time preparing for it.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T20:09:22.407Z · score: 14 (7 votes) · EA · GW

Lots of good stuff is in SF (OpenAI, PAI, Open Phil). However, there are also very good options for EAs in DC (CSET, US government stuff) and UK (FHI, DeepMind, CSER, CFI).

You can also build good career experience in general AI Policy work (i.e., not AGI- or LTF-focused) in a pretty big number of areas, like Boston (Berkman-Klein, MIT, FLI) or NYC (AI Now). I don't know of AI-specific stuff in Chicago or LA, but of course they both have good universities where you could probably do AI policy research.

See also my replies to this comment. :-)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:39:15.823Z · score: 2 (2 votes) · EA · GW

Thanks! See if this post helps answer that! If not, feel free to follow up!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:12:58.512Z · score: 3 (3 votes) · EA · GW

I also wanted to pass along this explanation from our team manager, Jack Clark:

OpenAI’s policy team looks for candidates that display an ‘idiosyncratic specialism’ along with verifiable interest and intuitions regarding technical aspects of AI technology; members of OpenAI’s team currently have specialisms ranging from long-term TAI-oriented ethics, to geopolitics of compute, to issues of representation in generative models, to ‘red teaming’ technical systems from a security perspective, and so on. OpenAI hires people with a mixture of qualifications, and is equally happy hiring someone with no degrees and verifiable industry experience, as well as someone with a PhD. At OpenAI, technical familiarity is a prerequisite for successful policy work, as our policy team does a lot of work that involves embedding alongside technical teams on projects (see: our work throughout 2019 on GPT2).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:11:58.182Z · score: 3 (3 votes) · EA · GW

I’m not involved in hiring at OpenAI, so I’m going to answer more in the spirit of “advice I would give for people interested in pursuing a career in EA AI policy generally.”

In short, I think actually trying your hand at the research is probably more valuable on the margin, especially if it yields high-quality research. (And if you discover it’s not a good fit, that’s valuable information as well.) This is basically what happened to me during my FHI internship: I found out that I was a good fit for this work, so I continued on in this path. There are a lot of very credentialed EAs, but (for better or worse), many EA AI policy careers take a combination of hard-to-describe and hard-to-measure skills that are best measured by actually trying to do it. Furthermore, there is unfortunately a managerial bottleneck in this space: there are far more people interested in entering it than people that can supervise potential entrants. I think it can be a frustrating space to enter; I got very lucky in many ways during my path here.

So, if you can’t actually try the research in a supervised setting, cultivating general skills or doing adjacent research (e.g., general AI policy) is a good step too. There are always skills I wish I had (and which I am fortunate to get to cultivate at OpenAI during Learning Day). Some of the stuff I studied during Learning Day that might guide your own skill cultivation includes:

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:05:58.451Z · score: 5 (4 votes) · EA · GW

I think it might be somewhat more complicated. As far as I know, the LSAT+GPA measure is actually a pretty strong predictor of law school performance as far as standardized tests go. But there's some controversy in the literature about how much law school grades matter for success. Good law schools also have much higher bar passage rates, though there could be confounding factors there too.

In general, the legal market seems somewhat weird to me. E.g., it's pretty easy for T3 students to get a BigLaw job, but often hard for students near the bottom of the T14. I do not understand why firms don't just hire such students and thereby lower wages, which are very high. My best guess is, Hansonianly, that there's a lot of signaling going on: firms try to signal their quality by hiring only T6 law students. Also, I imagine the T6 credential is important for recruiting clients, which is very important to BigLaw success.

But query how much of this matters if you want to do a non-ETG law path.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:38:08.469Z · score: 10 (5 votes) · EA · GW

It’s useful to think of the OpenAI Policy Team’s work as falling into a few buckets:

  1. Internal advocacy to help OpenAI meet its policy-relevant goals.
  2. External-facing research on issues in AI policy (e.g., 1 2 3 4)
  3. Public and private advocacy on issues in AI policy with a variety of public and industry actors

Most of my work so far has been focused on 1 and 2, and less on 3. That work is largely what you would expect it to look like: a lot of time spent reviewing academic literature or primary sources; drafting papers; soliciting comments and feedback; and discussing with colleagues. These projects also involve meeting with other internal stakeholders, which is an especially exciting part of my job because the resulting outputs will be very technically informed.

I plan to do some more of 3 this year, which will generally be helping coordinate discussion on some issues in AI economics. Stay tuned for more info on that!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:35:00.646Z · score: 9 (4 votes) · EA · GW

I should also mention that I am generally excited to chat with EAs considering law school. PM me if interested, or join the Effective Altruism & Law Facebook Group.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:20:48.178Z · score: 1 (1 votes) · EA · GW

I'm not actually sure what difference you're referring to. Could you please elaborate? :-)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:15:11.102Z · score: 7 (4 votes) · EA · GW

Hm, I haven't thought about this particular issue a lot. I am more focused on research and industry advocacy right now than government work.

I suppose one nice thing would be to have an explicit area of antitrust leniency carved out for cooperation on AI safety.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:12:32.326Z · score: 10 (4 votes) · EA · GW

Yeah, I think it's definitely true that some lawyers feel trapped in their current career sometimes. Law is a pretty conservative profession and it's pretty hard to find advice for non-traditional legal jobs. I myself felt this: it was a pretty big career risk to do an internship at FHI the summer after 2L.

(For context, summer after 2L is when most people work at the firm that eventually hires them right after law school. So, I would have had a much harder time finding a BigLaw job if the whole AGI policy thing didn't work out. The fact that I worked public interest both summers would have been a serious signal to firms that I was more likely than average to leave BigLaw ASAP.)

I think EAs can hedge against this if they invest in maintaining ties to the EA community, avoiding sunk-cost and status quo biases, and careful career planning.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T23:27:42.947Z · score: 4 (4 votes) · EA · GW

My approach is generally to identify relevant bodies of law that will affect the relationships between AI developers and other relevant entities/actors, like:

  1. other AI developers
  2. governments
  3. AI itself
  4. Consumers

Much of this is governed by well-developed areas of law, but AGI raises very unusual (and often still hypothetical) cases within those areas. At OpenAI, I look for these edge cases. Specifically, I collaborate with technical experts working on the cutting edge of AI R&D to identify such issues more clearly. OpenAI empowers me and the Policy team to guide the org in proactively addressing these issues.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:21:54.739Z · score: 6 (6 votes) · EA · GW

I do basically think that EA could learn a lot of things from SJ in terms of being an inclusive movement. I think it's possible that there's a lot of value to be had (in EA terms) in continuing to increase the inclusivity of EA.

I agree that part of the issue is who feels empowered to make a difference. Part of this is because SJ, in my view, often focuses on things that are not very marginally impactful, but to which many people can contribute. However, I am very excited about recent efforts within the EA community to support a variety of career paths and routes to impact beyond the main ones identified by main EA orgs.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:18:57.454Z · score: 4 (4 votes) · EA · GW

Thanks all! This is a good, useful discussion. I wanted to clarify slightly what I mean when I say EA is the "better" ideology. Mainly, I mean that EA is better at guiding my actions in a way that augments my ethical impact much more than SJ does. They're primarily rivalrous only insofar as I can only make a limited number of ethical deliberations per day, and EA considerations more strongly optimize for impact than SJ considerations.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:14:56.507Z · score: 21 (9 votes) · EA · GW

TL;DR, I think EAs should probably use the following heuristics if they are interested in some career for which law school is a plausible path:

  1. If you can get into a T3 law school (Harvard, Yale, Stanford), have a fairly strong prior that it's worth going.
  2. If you can get into a T6 law school (Columbia, Chicago, NYU), probably take it.
  3. If you can get into a T14 law school, seriously consider it. But employment statistics at the bottom of the T14 are very different from those at the top.
  4. Be wary of things outside the T14.

In general, definitely carefully research employment prospects for the school you're considering.

Other notes:

  1. The 80K UK commercial law ETG article significantly underestimates how much US-trained lawyers can make. Starting salaries at commercial law firms in the US are $190K. It is pretty easy to get these jobs from T6 schools. Of course, American students will probably have higher debt burdens.
  2. Career dissatisfaction in BigLaw firms is high. The hours can be very brutal. Attrition is high. Nevertheless, a solid majority of HLS lawyers (including BigLaw lawyers) are satisfied with their careers and would recommend the same career to newer lawyers. Of course, HLS lawyers are not representative of the profession as a whole.
  3. ROI at the mean law school is actually quite good, though it should be adjusted for risk, since the downside (huge debt + underemployment) is huge.
  4. If you're going to ETG, try to get a bunch of admissions offers and heavily negotiate downwards to get a good scholarship offer within the T6 or ~T10. Chicago, NYU, and Columbia all offer merit scholarships; if you want to ETG, these seem like good bets.
  5. If you're going to ETG, probably work in Texas, where you get NY salaries at a much lower cost of living and tax burden.
  6. If you're going into law for policy or other non-ETG reasons, go to a law school with a really good debt forgiveness program (unless you get a good scholarship elsewhere). HLS's LIPP is quite good; Yale has an even better similar program.
  7. You should also account for the possibility of economic downturn; many law firms stopped hiring during the '08 crash.
  8. If you're an undergrad, you have high leverage over your potential law school choices. 80% of your law school admissions decision will be based on your GPA and LSAT score. Carefully research the quartiles for your target schools and aim for at least the median, ideally >75th percentile. The LSAT is very learnable with focused study. Take a formal logic course and LSAT courses if you can afford them. This will help tremendously in law school scholarship negotiations.
Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T21:29:52.582Z · score: 2 (2 votes) · EA · GW

I actually don't think I have very good insights on this topic, despite spending a lot of my time on politics Twitter (despite my best judgment). I didn't have any particular experience in electoral politics and never really considered it as a career myself.

I guess one "take" would be that there's a lot of ways to improve the world via government that don't involve seeking elected office or getting heavily involved in politics, and so people should have a clear idea of why elected office is better than that.

All that said, my position is largely aligned with 80,000 Hours': from an expected value perspective it looks promising, but is obviously a low-probability route to impact.

I'd be interested to see more research into how constrained altruistic decision-makers actually are. There are some theoretical reasons to suspect that decision-makers are actually quite constrained, which, if true, would maybe suggest we're over-estimating how important it is to get altruistic decision-makers (or change our identification of which offices are most worth seeking).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T21:19:06.379Z · score: 9 (7 votes) · EA · GW

My EA origin story is pretty boring! I was a research assistant for a Philosophy professor who included a unit on EA in her Environmental Ethics course. That was my first exposure to the ideas of EA (although obviously I had encountered Peter Singer previously). As a result, I added Doing Good Better to my reading list, and I read it in December 2016 (halfway through my first year of law school). I was pretty immediately convinced of its core ideas.

I then joined the Harvard Law School EA group, which was a really cool group at the time. In fact, it's somewhat weird that a school of HLS's size (ca. 1600 students) was able to sustain such a group, so I was very fortunate in that way.