Posts

FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good 2020-02-05T23:49:43.443Z · score: 45 (22 votes)
I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA 2020-01-11T04:13:33.250Z · score: 39 (20 votes)
Defending Philanthropy Against Democracy 2019-10-06T07:20:45.888Z · score: 41 (23 votes)
Should I give to Our World In Data? 2019-09-10T04:56:41.437Z · score: 20 (13 votes)
Should EA Groups Run Organ Donor Registration Drives? 2019-03-27T16:29:40.261Z · score: 9 (8 votes)
On the (In)Applicability of Corporate Rights Cases to Digital Minds 2019-02-28T06:14:22.176Z · score: 12 (4 votes)
FHI Report: Stable Agreements in Turbulent Times 2019-02-21T17:12:51.085Z · score: 25 (12 votes)
EAs Should Invest All Year, then Give only on Giving Tuesday 2019-01-10T21:17:26.812Z · score: 49 (30 votes)
Which Image Do You Prefer?: a study of visual communication in six African countries 2018-12-03T06:38:40.758Z · score: 11 (14 votes)
Fisher & Syed on Tradable Obligations to Enhance Health 2018-08-12T22:17:20.304Z · score: 6 (6 votes)
Harvard EA's 2018–19 Vision 2018-08-04T22:47:29.289Z · score: 11 (13 votes)
Governmental CBA as an EA Career Step: A Shallow Investigation 2018-07-07T13:31:13.728Z · score: 6 (6 votes)

Comments

Comment by cullen_okeefe on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-14T20:12:47.705Z · score: 1 (1 votes) · EA · GW

Yep, thinking through the accounting of this would be very important. Unfortunately I'm not an accountant but I would very much like to see an accountant discuss how to structure this in a way that does not prematurely burden a signatory's books.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:46:25.759Z · score: 2 (2 votes) · EA · GW

Thanks Rohin!

I don't think "alignment" is harder or more indeterminate, where "alignment" means something like "I have in mind something I want the AI system to do, it does that thing, without trying to manipulate me / deceive me etc."

Yeah, I agree with this.

imagine there was a law that said "All AI systems must not deceive their users, and must do what they believe their users want". A real law would probably only be slightly more explicit than that?

I'm not sure that's true. (Most) real laws have huge bodies of interpretative text surrounding them and examples of real-world applications of them to real-world facts.

Creating an AI system that follows all laws seems a lot harder.

Lawyers are approximately generalists: they can take arbitrary written laws and give advice on how to conform behavior to those laws. So a lawyerlike AI might be able to learn general interpretative principles and research skills, and thereby simulate legal adjudications of proposed actions.

I think this would probably have been true of expert systems but not so true of deep learning-based systems.

Interesting; I don't have good intuitions on this!

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-10T20:37:20.928Z · score: 1 (1 votes) · EA · GW

My guess is that programming AI to follow law might be easier than, or preferable to, enforcing the law against human principals. A weakly aligned AI (one posing no X-risk or risk to its principals, but not bound by law or general human morality) deployed by a human principal will probably come across illegal ways to advance its principal's goals. It will also probably be able to hide its actions, obscure its motives, and/or evade detection better than humans could. If so, the equilibrium strategy is to give minimal oversight to the AI agent and tacitly allow it to break the law while advancing the principal's goals, since enforcement against the principal is unlikely. This seems bad!
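
A toy expected-value sketch of that equilibrium (every number and name below is invented purely for illustration, not an estimate of anything real):

```python
# Toy model of a principal deciding whether to oversee a weakly aligned AI agent.
# All values are invented for illustration; nothing here is empirical.

LEGAL_GAIN = 5.0      # principal's payoff if the agent is constrained to legal means
ILLEGAL_GAIN = 10.0   # payoff if the agent is free to use illegal means as well
PENALTY = 50.0        # sanction on the principal if a violation is detected
OVERSIGHT_COST = 1.0  # cost of actually monitoring the agent

def expected_payoff(oversee: bool, detection_prob: float) -> float:
    """Principal's expected payoff under a given oversight choice."""
    if oversee:
        return LEGAL_GAIN - OVERSIGHT_COST
    return ILLEGAL_GAIN - detection_prob * PENALTY

# If the agent hides its actions well, detection is rare and minimal
# oversight dominates; only at high detection rates does oversight win.
for q in (0.01, 0.05, 0.20):
    print(f"q={q:.2f}  oversee={expected_payoff(True, q):.2f}  "
          f"ignore={expected_payoff(False, q):.2f}")
```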

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:38:02.805Z · score: 5 (4 votes) · EA · GW

You might say that we could train an AI system to learn what is and isn't breaking the law; but then you might as well train an AI system to learn what is and isn't the thing you want it to do. It's not clear why training to follow laws would be easier than training it to do what you want; the latter would be a much more useful AI system.

Some reasons why this might be true:

  • Law is less indeterminate than you might think, and probably more definite than human values
  • Law has authoritative corpora readily available
  • Law has built-in, authoritative adjudication/dispute resolution mechanisms. Cf. AI Safety by Debate.

In general, my guess is that there is a large space of actions that:

  1. Are unaligned, and
  2. Are illegal, and
  3. Are such that, due to the formality of parts of law and the legal process, an AI can be made to have higher confidence that an action is (2) than that it is (1).

However, it's very possible that, as you suggest, solving AI legal compliance requires solving AI Safety generally. This seems somewhat unlikely to me but I have low confidence in this since I'm not an expert. :-)

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:26:36.687Z · score: 6 (5 votes) · EA · GW

Reasons other than directly getting value alignment from law that you might want to program AI to follow the law:

  • We will presumably want organizations with AI to be bound by law. Making their AI agents bound by law seems very important to that.
  • Relatedly, we probably want to be able to make ex ante deals that obligate AI/AI-owners to do stuff post-AGI, which seems much harder if AGI can evade enforcement.
  • We don't want to rely on the incentives of human principals to ensure their agents advance their goals in purely legal ways, especially given AGI's ability to e.g. hide its actions or motives.

Comment by cullen_okeefe on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-06T00:20:11.249Z · score: 2 (2 votes) · EA · GW

I am not a law expert, but my impression is that there is a lot of common sense + human judgment in the application of laws, just as there is a lot of common sense + human judgment in interpreting requests.

(I am a lawyer by training.)

Yes, this is certainly true. Many laws explicitly or implicitly rely on standards (i.e., less-definite adjudicatory formulas) rather than hard-and-fast rules. "Reasonableness," for example, is often a key term in a legal claim or defense. Juries often make such determinations, which also means that the actual legality of an action is resolved upon adjudication and not ex ante (although an aligned, capable AI could in principle estimate the probability that a jury would find its actions reasonable; that's what lawyers do).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:26:04.041Z · score: 1 (1 votes) · EA · GW

I'd imagine there's an audience for it!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-15T05:25:29.788Z · score: 4 (6 votes) · EA · GW

Thanks Wei! This is a very thoughtful comment.

I completely agree that we should be wary of those aspects of SJ as well. I'm not sure that I'm "less" worried about it than you; I do worry about it. However, I have not seen much of this behavior in the EA community so I am not immediately worried and have some reasons to be fairly optimistic in the long run:

  1. Founder effects and strong communal norms of open discussion in the EA community, into which I think most newcomers are pretty heavily inculcated.
  2. Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly "canceled" are quite small from an EA perspective.
  3. Heavy influence of and connection to philosophy selects for openness norms as well.
  4. Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

To restate, I would definitely be pretty wary of any attempt to reform EA in a way that seriously endangered norms of civility, open debate, intellectual inquiry, etc. as they currently are practiced. I actually think we do a very good job as a movement of balancing these goals. This is part of why I currently spend more time in EA than SJ.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:49:29.503Z · score: 13 (9 votes) · EA · GW

Beyond movement building & inclusivity, I think it makes sense for EA as a movement to keep its current approach, because it's been working pretty well IMO.

I think the thing EAs as people (with a worldview that includes things beyond EA) might want to consider—and which SJ could inform—is the demands that historical injustices of, e.g., colonialism, racism, etc. make on us. I think those demands are plausibly quite large, and failure to satisfy them could constitute an ongoing moral catastrophe. Since they're not welfarist, they're outside the scope of EA as it currently exists. But for moral uncertainty reasons I think many people should think about them.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:52.535Z · score: 2 (2 votes) · EA · GW

I don't! Would be interesting to see! From an EA perspective, though, flow-through effects on long-term stuff might dominate the considerations.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:47:15.935Z · score: 2 (2 votes) · EA · GW

Hard to imagine it ever being too much TBH. I and most of my colleagues continue to invest in AI upskilling. However, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have "enough" ML skill because the field moves faster than I can keep up with it, and there are approximately linear returns on it (and a bunch of other skills that I've mentioned in these comments).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:45:02.593Z · score: 2 (2 votes) · EA · GW

I would lean pretty heavily towards ML. Taking an intro to CS class is good background, but beyond that, specialize. Some adjacent areas, like cybersecurity, are good too.

(You could help AI development without specializing in AI, but this is specifically for AI Policy careers.)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T04:41:39.357Z · score: 11 (5 votes) · EA · GW

Yeah, I think I'm more bullish on JDs than the average EA because a JD is very useful for a ton of careers. It's an asset for pretty much any career in government, where you can work on a lot of EA problems.

(Of course, lawyers can usefully work on these outside of government as well.)

I think EA-relevant skills in economics might be particularly valuable in some fields, like governmental cost-benefit analysis.

Of course, given the amount of influence they have, people in government can also have a lot of impact on problems that most EAs don't work on.

I also think that there might be opportunities for lawyers to help grow/structure/improve the EA movement, like:

  • Estate planning (e.g., help every EA who wants one get a will or trust that will give a lot of their estate to EA charities)
  • Setting up nonprofits and other organizations
  • Tax help
  • Immigration help for EA employers
  • Creating weird entities or financial instruments that help EAs achieve their goals (e.g., 1, 2, 3)

If I were not doing AI policy, I might write up a grant proposal to spend my time doing these things in CA.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:25:09.623Z · score: 1 (1 votes) · EA · GW

Limiting the discussion to the most impactful jobs from an EA perspective, I think it can be pretty hard for reasons I lay out here. I got lucky in many, many ways, including that I was accepted to 80K coaching, turned out to be good at this line of work (which I easily could not have been), and was in law school during the time when FHI was just spinning up its GovAI internship program.

My guess is that general credentials are probably insufficient without accompanying work that shows your ability to address the unique issues of AGI policy well. So opportunities to try your hand at that are pretty valuable if you can find them.

That said, opportunities to show general AI policy capabilities—even on "short-term" issues—are good signals and can lead to a good career in this area!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:18:49.506Z · score: 13 (6 votes) · EA · GW

I think there are a lot of issues that have applicability to both short- and long-term concerns.

Relatedly, this is why people who can't immediately work for an EA-aligned org on "long-term" AI issues can build both useful career capital and do useful work by working in more general AI policy.

For the second question, a pretty boring EA answer: I would like to see more people in near-term AI policy engage in explicit and quantifiable cause prioritization for their work. I think that, as EAs generally recognize, the impact of these things probably varies quite a lot. That should guide which questions people work on.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:09:11.318Z · score: 2 (2 votes) · EA · GW

The boring answer is that there's a variety of relationships that need to be managed well in order for AGI deployment to go optimally. Comparative advantage and opportunity are probably good indicators of where the most fruitful work for any given individual is. That said, I think working with industry can be pretty highly leveraged since it's more nimble and easier to persuade than government IMO.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T22:05:47.588Z · score: 5 (4 votes) · EA · GW

ML knowledge is good and important; I generally wish I had more of it and use many of my Learning Days to improve it. That link also shows some of the other, non-law subjects I've been studying.

In law school, I studied a lot of different subjects that have been useful, like:

  • Administrative law
  • National security law
  • Constitutional law
  • Corporate law
  • Compliance
  • Contract law
  • Property law
  • Patent law
  • International law
  • Negotiations
  • Antitrust law

I am pretty bullish on most of the specific stuff you mentioned. I think macrohistory, history of technology, general public policy, forecasting, and economics are pretty useful. Unfortunately, it's such a weird and idiosyncratic field that there's not really a one-size-fits-all curriculum for getting into it, though this also means there are many productive ways to spend one's time preparing for it.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T20:09:22.407Z · score: 14 (7 votes) · EA · GW

Lots of good stuff is in SF (OpenAI, PAI, Open Phil). However, there are also very good options for EAs in DC (CSET, US government stuff) and UK (FHI, DeepMind, CSER, CFI).

You can also build good career experience in general AI Policy work (i.e., not AGI- or LTF-focused) in a pretty large number of places, like Boston (Berkman-Klein, MIT, FLI) or NYC (AI Now). I don't know of AI-specific stuff in Chicago or LA, but of course they both have good universities where you could probably do AI policy research.

See also my replies to this comment. :-)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:39:15.823Z · score: 2 (2 votes) · EA · GW

Thanks! See if this post helps answer that! If not, feel free to follow up!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:12:58.512Z · score: 3 (3 votes) · EA · GW

I also wanted to pass along this explanation from our team manager, Jack Clark:

OpenAI’s policy team looks for candidates that display an ‘idiosyncratic specialism’ along with verifiable interest and intuitions regarding technical aspects of AI technology; members of OpenAI’s team currently have specialisms ranging from long-term TAI-oriented ethics, to geopolitics of compute, to issues of representation in generative models, to ‘red teaming’ technical systems from a security perspective, and so on. OpenAI hires people with a mixture of qualifications, and is equally happy hiring someone with no degrees and verifiable industry experience, as well as someone with a PHD. At OpenAI, technical familiarity is a prerequisite for successful policy work, as our policy team does a lot of work that involves embedding alongside technical teams on projects (see: our work throughout 2019 on GPT2).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:11:58.182Z · score: 3 (3 votes) · EA · GW

I’m not involved in hiring at OpenAI, so I’m going to answer more in the spirit of “advice I would give for people interested in pursuing a career in EA AI policy generally.”

In short, I think actually trying your hand at the research is probably more valuable on the margin, especially if it yields high-quality research. (And if you discover it’s not a good fit, that’s valuable information as well.) This is basically what happened to me during my FHI internship: I found out that I was a good fit for this work, so I continued down this path. There are a lot of very credentialed EAs, but (for better or worse) many EA AI policy careers take a combination of hard-to-describe and hard-to-measure skills that are best demonstrated by actually trying the work. Furthermore, there is unfortunately a managerial bottleneck in this space: there are far more people interested in entering it than people who can supervise potential entrants. I think it can be a frustrating space to enter; I got very lucky in many ways during my path here.

So, if you can’t actually try the research in a supervised setting, cultivating general skills or doing adjacent research (e.g., general AI policy) is a good step too. There are always skills I wish I had (and which I am fortunate to get to cultivate at OpenAI during Learning Day); the stuff I studied during Learning Day might guide your own skill cultivation.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T04:05:58.451Z · score: 5 (4 votes) · EA · GW

I think it might be somewhat more complicated. As far as I know, the LSAT+GPA measure is actually a pretty strong predictor of law school performance as far as standardized tests go. But there's some controversy in the literature about how much law school grades matter for success. Good law schools also have much higher bar passage rates, though there could be confounding factors there too.

In general, the legal market seems somewhat weird to me. E.g., it's pretty easy for T3 students to get a BigLaw job, but often hard even for students near the bottom of the T14. I do not understand why firms don't just hire such students and thereby lower wages, which are very high. My best guess, Hansonianly, is that there's a lot of signaling going on, where firms try to signal their quality by hiring only T6 law students. Also, I imagine the T6 credential is important for recruiting clients, which is very important to BigLaw success.

But query how much of this matters if you want to do a non-ETG law path.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:38:08.469Z · score: 10 (5 votes) · EA · GW

It’s useful to think of the OpenAI Policy Team’s work as falling into a few buckets:

  1. Internal advocacy to help OpenAI meet its policy-relevant goals.
  2. External-facing research on issues in AI policy (e.g., 1 2 3 4)
  3. Public and private advocacy on issues in AI policy with a variety of public and industry actors

Most of my work so far has been focused on 1 and 2, and less on 3. That work is largely what you would expect it to look like: a lot of time spent reviewing academic literature or primary sources; drafting papers; soliciting comments and feedback; and discussing with colleagues. It also involves meeting with other internal stakeholders, which is an especially exciting part of my job because the resulting outputs will be very technically informed.

I plan to do some more of 3 this year, which will generally be helping coordinate discussion on some issues in AI economics. Stay tuned for more info on that!

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:35:00.646Z · score: 9 (4 votes) · EA · GW

I should also mention that I am generally excited to chat with EAs considering law school. PM me if interested, or join the Effective Altruism & Law Facebook Group.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:20:48.178Z · score: 1 (1 votes) · EA · GW

I'm not actually sure what difference you're referring to. Could you please elaborate? :-)

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:15:11.102Z · score: 7 (4 votes) · EA · GW

Hm, I haven't thought about this particular issue a lot. I am more focused on research and industry advocacy right now than government work.

I suppose one nice thing would be to have an explicit area of antitrust leniency carved out for cooperation on AI safety.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-12T03:12:32.326Z · score: 10 (4 votes) · EA · GW

Yeah, I think it's definitely true that some lawyers feel trapped in their current career sometimes. Law is a pretty conservative profession and it's pretty hard to find advice for non-traditional legal jobs. I myself felt this: it was a pretty big career risk to do an internship at FHI the summer after 2L.

(For context, summer after 2L is when most people work at the firm that eventually hires them right after law school. So, I would have had a much harder time finding a BigLaw job if the whole AGI policy thing didn't work out. The fact that I worked public interest both summers would have been a serious signal to firms that I was more likely than average to leave BigLaw ASAP.)

I think EAs can hedge against this by maintaining ties to the EA community, avoiding sunk-cost and status quo biases, and planning their careers carefully.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T23:27:42.947Z · score: 4 (4 votes) · EA · GW

My approach is generally to identify relevant bodies of law that will affect the relationships between AI developers and other relevant entities/actors, like:

  1. other AI developers
  2. governments
  3. AI itself
  4. consumers

Much of this is governed by well-developed areas of law, but AI raises very unusual (and often still hypothetical) cases within those areas. At OpenAI I look for edge cases in these areas; specifically, I collaborate with technical experts who are working on the cutting edge of AI R&D to identify these issues more clearly. OpenAI empowers me and the Policy team to guide the org to proactively address these issues.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:21:54.739Z · score: 6 (6 votes) · EA · GW

I do basically think that EA could learn a lot of things from SJ in terms of being an inclusive movement. I think it's possible that there's a lot of value to be had (in EA terms) in continuing to increase the inclusivity of EA.

I agree that part of the issue is who feels empowered to make a difference. Part of this is because SJ, in my view, often focuses on things that are not very marginally impactful, but to which many people can contribute. However, I am very excited about recent efforts within the EA community to support a variety of career paths and routes to impact beyond the main ones identified by main EA orgs.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:18:57.454Z · score: 4 (4 votes) · EA · GW

Thanks all! This is a good, useful discussion. I wanted to clarify slightly what I mean when I say EA is the "better" ideology. Mainly, I mean that EA is better at guiding my actions in a way that augments my ethical impact much more than SJ does. They're primarily rivalrous only insofar as I can only make a limited number of ethical deliberations per day, and EA considerations more strongly optimize for impact than SJ considerations.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T22:14:56.507Z · score: 21 (9 votes) · EA · GW

TL;DR, I think EAs should probably use the following heuristics if they are interested in some career for which law school is a plausible path:

  1. If you can get into a T3 law school (Harvard, Yale, Stanford), have a fairly strong prior that it's worth going.
  2. If you can get into a T6 law school (Columbia, Chicago, NYU), probably take it.
  3. If you can get into a T14 law school, seriously consider it. But employment statistics at the bottom of the T14 are very different from those at the top.
  4. Be wary of things outside the T14.

In general, definitely carefully research employment prospects for the school you're considering.

Other notes:

  1. The 80K UK commercial law ETG article significantly underestimates how much US-trained lawyers can make. Starting salaries at commercial law firms in the US are $190K. It is pretty easy to get these jobs from T6 schools. Of course, American students will probably have higher debt burdens.
  2. Career dissatisfaction in biglaw firms is high. The hours can be very brutal. Attrition is high. Nevertheless, a solid majority of HLS lawyers (including BigLaw lawyers) are satisfied with their career and would recommend the same career to newer people. Of course, HLS lawyers are not representative of the profession as a whole.
  3. ROI at the mean law school is actually quite good, though it should be adjusted for risk, since the downside (huge debt + underemployment) is huge (see the rough sketch after this list).
  4. If you're going to ETG, try to get a bunch of admissions offers and heavily negotiate downwards to get a good scholarship offer within the T6 or ~T10. Chicago, NYU, and Columbia all offer merit scholarships; if you want to ETG, these seem like good bets.
  5. If you're going to ETG, probably work in Texas, where you get NY salaries at a much lower cost of living and tax burden.
  6. If you're going into law for policy or other non-ETG reasons, go to a law school with a really good debt forgiveness program (unless you get a good scholarship elsewhere). HLS's LIPP is quite good; Yale has an even better similar program.
  7. You should also account for the possibility of economic downturn; many law firms stopped hiring during the '08 crash.
  8. If you're an undergrad, you have high leverage over your potential law school choices. 80% of your law school admissions decision will be based on your GPA and LSAT score. Carefully research the quartiles for your target schools and aim for at least the median, ideally >75th percentile. The LSAT is very learnable with focused study. Take a formal logic course and LSAT courses if you can afford them. This will help tremendously in law school scholarship negotiations.
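
To make the risk adjustment in point 3 concrete, here is a very rough sketch; every number below is hypothetical and not drawn from the studies linked above:

```python
# Rough, hypothetical sketch of risk-adjusting law school ROI.
# None of these numbers come from real data; they only show the shape of the calculation.

DEBT = 200_000              # hypothetical total cost of attendance
BASELINE_CAREER = 1_500_000 # hypothetical lifetime earnings without the JD (NPV)
GOOD_OUTCOME = 2_500_000    # hypothetical lifetime earnings with a good legal job (NPV)
BAD_OUTCOME = 1_400_000     # hypothetical earnings if underemployed after law school (NPV)

def expected_net_value(p_good: float) -> float:
    """Expected net value of attending, given the probability of a good outcome."""
    expected_earnings = p_good * GOOD_OUTCOME + (1 - p_good) * BAD_OUTCOME
    return expected_earnings - DEBT - BASELINE_CAREER

# The mean can look fine even though the downside is severe:
for p in (0.9, 0.7, 0.5):
    print(f"P(good)={p:.1f}  expected net value={expected_net_value(p):,.0f}")
print(f"worst case: {BAD_OUTCOME - DEBT - BASELINE_CAREER:,}")
```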

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T21:29:52.582Z · score: 2 (2 votes) · EA · GW

I actually don't think I have very good insights on this topic, despite spending a lot of my time on politics Twitter (despite my best judgment). I didn't have any particular experience in electoral politics and never really considered it as a career myself.

I guess one "take" would be that there's a lot of ways to improve the world via government that don't involve seeking elected office or getting heavily involved in politics, and so people should have a clear idea of why elected office is better than that.

All that said, my position is largely aligned with 80,000 Hours': from an expected value perspective it looks promising, but is obviously a low-probability route to impact.

I'd be interested to see more research into how constrained altruistic decision-makers actually are. There are some theoretical reasons to suspect that decision-makers are actually quite constrained, which, if true, would maybe suggest we're over-estimating how important it is to get altruistic decision-makers (or change our identification of which offices are most worth seeking).

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T21:19:06.379Z · score: 9 (7 votes) · EA · GW

My EA origins story is pretty boring! I was a research assistant for a Philosophy professor who included a unit on EA in her Environmental Ethics course. That was my first exposure to the ideas of EA (although obviously I had exposure to Peter Singer previously). As a result, I added Doing Good Better to my reading list, and I read it in December 2016 (halfway through my first year of law school). I was pretty immediately convinced of its core ideas.

I then joined the Harvard Law School EA group, which was a really cool group at the time. In fact, it's somewhat weird that a school of HLS's size (ca. 1600 students) was able to sustain such a group, so I was very fortunate in that way.

Comment by cullen_okeefe on Space governance is important, tractable and neglected · 2020-01-11T06:55:58.443Z · score: 10 (5 votes) · EA · GW

All else equal, would you prefer to see marginal dollars invested in fundamental research in this area (e.g., legal scholarship on space property law from an EA perspective) or advocacy (building better institutions or more political support for improved space governance)? I kinda suspect we're more limited by the latter currently.

Comment by cullen_okeefe on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T06:27:38.211Z · score: 28 (15 votes) · EA · GW

(For relevant background: I spent ~all of my undergraduate career heavily involved in social justice before discovering EA in law school and then switching primarily to EA.)

A bunch of high-level thoughts:

  1. EA is overall the better ideology/movement due to higher-quality reasoning, prioritization, embrace of economics, and explicit welfarism.
  2. There is probably lots of potential for useful alliances between EAs and SJ people on poverty and animal welfare issues, but I think certain SJ beliefs and practices unfortunately frustrate these. Having EAs who can communicate EA ideas to SJ audiences to form these alliances is both valuable and, in my experience, possible.
  3. SJ has captured a huge number of well-educated people who want to do good in the world. From a strategic perspective, this is both a problem and an opportunity. It is a problem because, in my on-campus experience, there is somewhat strong lock-in to an ideology after undergrad, after which point it is hard to "convert" people or persuade them to act outside their ideology. Thus, the prominence of SJ on college campuses frustrates the work of post-college EA movement-building/-growth. However, I think we have a much more compelling message, ideology, and community, and with sustained movement growth at colleges could offer a plausible, attractive alternative worldview to undergrads who are interested in improving the world but also identify the same weaknesses in SJ that I did.
  4. SJ has a lot of approximately true insights into ways that social dynamics can cause harm, but many of them are not compelling EA causes.
  5. EA should probably do a better job of taking seriously leftist critiques of Western philanthropy in the Global South, and have better responses to them than citing GiveWell cost-effectiveness analyses. (To be clear, I think many people do this; it should just be a salient talking point because it's the most common objection I heard.)
  6. Overall, I would recommend a soft embrace of SJ, which is nothing more than accepting the valid parts of the ideology while also retaining firm cause prioritization. We should also use SJ insights to build a larger, more inclusive movement. We should do all of this while also being careful not to alienate moderates and conservatives who are sympathetic to EA. Again, in my experience at Harvard, I had success communicating to both groups—I sold some very progressive friends on EA while also recruiting some very conservative donors. Cause prioritization is our strength in this sense, since the issues most likely to cause ideological conflicts are also probably not major causes by mainstream EA analysis.

Comment by cullen_okeefe on Long-term investment fund at Founders Pledge · 2020-01-11T04:31:58.700Z · score: 11 (5 votes) · EA · GW

Getting the legal structure of this right will be as important as the financial structure. Getting a good trust lawyer to set it up is crucial.

Comment by cullen_okeefe on Long-term Donation Bunching? · 2019-10-08T18:08:15.633Z · score: 1 (1 votes) · EA · GW

Yes, it might be. Feel free to sync offline if you want to investigate this.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T23:55:07.833Z · score: 1 (1 votes) · EA · GW

Thanks for your reply!

you're assuming an abstract notion of 'democraticness' that infuses everything the government does

Isn't this what commitment to democracy entails if you think that democratic governance is procedurally valuable? If a decision derives from a democratic body, then that decision at least prima facie deserves respect as a democratic decision.

whereas the critics don't care whether it's a democratic government that's making a bad decision―it's still a bad decision that leaves individuals with outsized power.

If this were their criticism, they wouldn't bring up democracy, since it's irrelevant. This is a substantive criticism: our democracy has done the wrong thing here. This is not the same thing as being anti-democratic, which is what they seem to be arguing.

I think there is a steelman of this argument which is something like:

A decision made by a democratic body is prima facie democratic, but can be undemocratic if it has certain characteristics like undermining democracy in the long-run or abusing “market failures” in the democratic system itself.

But the problem is I don’t think “making someone more powerful” is necessarily a procedurally objectionable outcome—I don’t think it necessarily undermines democracy. It seems perfectly reasonable to me for a democracy to decide that it will allow billionaires to make a lot of money if they give it away. What the critics have failed to do, in my estimation, is argue that this is not the type of decision that democracies can ratify. In the absence of such a showing, it seems reasonable to me to conclude that a well-known and easily stoppable pattern of mega-philanthropy has been democratically acquiesced to.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T19:43:39.615Z · score: 1 (1 votes) · EA · GW

I think I address that here:

The critic could also argue that the problem is the “whitewashing” effect of philanthropy. Like Alexander, I am not convinced that this is a real phenomenon, but even if it were, I don’t think the criticism holds. A democracy should be able to weigh the pros of philanthropy (solving market and policy failures) against the cons it might have (whitewashing a bad or unequal economic system). If the democracy decides that the pros outweigh the cons, that calculus deserves respect. Through the various policy subsidies of philanthropy, our democracy appears to have arrived at such a decision. Again, that might be a substantively bad decision, but it is not an anti-democratic one. And if the decision to subsidize philanthropy was substantively flawed, one wonders why we should expect better disposition of money that would have otherwise gone to philanthropy.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T18:27:51.545Z · score: 1 (1 votes) · EA · GW

Thanks, this is helpful. Not sure I agree that (1) has an overall anti-democratic effect procedurally, but I see the worry.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T16:51:46.911Z · score: 1 (1 votes) · EA · GW

Do you have a hypothetical example? It’s hard for me to imagine such a case. Seems like most things that are cost-effective would have a strong positive procedural effect (gross, if not net) by making a healthier, more active, smarter demos.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T16:38:28.150Z · score: 1 (1 votes) · EA · GW

Thanks Peter!

Selfish spending may be a bad use of resources, but it doesn't affect other people's lives (at least not on conventional morality where "failing to act" is not considered a relevant issue) the same way philanthropy does. Philanthropy ends up uniquely suspect because of it replacing what is viewed as the role of government and affecting people from a policy perspective without giving them (democratic) representation.

Yes, my point is derived from the fact that I don’t think there is a meaningful act-omission difference. So any omission of philanthropy has the same effect size as philanthropy, just in the opposite direction.

they would allow for the idea that maybe things have gotten out of hand more recently as the rich got richer and the philanthropy got more blatant (note: I'm not sure if this is an actual trend but it certainly is a perceived trend) and so we need to intervene now. This view seems perfectly consistent to me.

I agree that that’s a coherent viewpoint, but it’s also not procedurally cognizable. It’s a substantive claim about the goodness of the current balance of private and public power—a balance which has been democratically set. I agree with the critics that the balance is bad, but I don’t think they do a good job showing how it’s either procedural or substantively bad in the case of charity (see SSC).

Winners Take All tries to do this the most, since a key part of its thesis is that philanthropy masks the harms of the current system. But I don’t think the author actually shows this to be the case. And even if he did, it’s still a substantive point.

I don’t think my argument depends on asserting that the critics think democracy is perfect. It does depend on them thinking that democratic control is a virtue, but insofar as we have made a democratic decision not to intervene, the argument that philanthropy isn’t democratic seems misguided to me.

On your last two points together:

I think the find-the-biggest-demos argument is probably the strongest argument for government spending instead of philanthropy. I really disagree with the nationalism inherent in the premise of the last two defenses for reasons of equity. I also don’t think that the nation is an obvious level to spend philanthropy at when most very rich people made their money through a globalized market.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T16:04:40.789Z · score: 1 (1 votes) · EA · GW

Yes, you are right.

I think it’s less interesting to EAs because we already buy the view that we should try to do cost-effectiveness comparisons of these things.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T16:03:25.480Z · score: 1 (1 votes) · EA · GW

Good point!

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T16:03:08.989Z · score: 1 (1 votes) · EA · GW

Perhaps, but that’s just an argument for higher taxes (which I support), not an argument against philanthropy.

Comment by cullen_okeefe on Defending Philanthropy Against Democracy · 2019-10-07T03:50:49.074Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by cullen_okeefe on Why did MyGiving need to be replaced? And why is the EffectiveAltruism.org replacement so bad? · 2019-10-05T23:07:53.219Z · score: 5 (4 votes) · EA · GW

Agreed! CSV exportability would also be good. So would receipt storage/linking for tax reasons.

Comment by cullen_okeefe on FHI Report: Stable Agreements in Turbulent Times · 2019-10-05T23:06:18.706Z · score: 1 (1 votes) · EA · GW

Thanks for your thoughts!

I think it's not quite right to say that anyone is "changing" the contracts. The more accurate framing, in my mind, is that part of the most concrete content of performance obligations ("what do I have to do to fulfill my obligations?") is determined ex post via flexible decision procedures that can account for changed circumstances. Thus I think "settling" is more accurate than "changing," since the latter implies that the actual performance failed to satisfy the original contract, which is not true.

You're right that there are interesting parallels to the AI alignment problem. See here.

There are two considerations that need to be balanced in any case of flexibility: the expected (dis)value of inflexible obligations and the expected (dis)value of flexible obligations. A key input to this is the failure mode of flexible obligations, which would include something like the ability of a powerful obligor to take advantage of that flexibility. In some cases that risk will be so large that ex post flexibility is not worth it! But in other cases, where inflexibility seems highly risky (e.g., because we can tell it depends on a particularly contingent assumption about the state of the world that is unlikely to hold post-AGI) and sufficiently strong ex post term-settling procedures are available, it seems possibly worthwhile.
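
As a minimal sketch of that balancing (every probability and value below is made up purely for illustration):

```python
# Minimal sketch of balancing inflexible vs. flexible performance obligations.
# Every probability and value here is made up purely for illustration.

# Inflexible terms: great if the drafting assumptions hold, bad if the world changes.
P_ASSUMPTIONS_HOLD = 0.4
VALUE_IF_HOLD = 10.0
VALUE_IF_BROKEN = -20.0   # rigid terms misfire badly under changed circumstances

# Flexible terms: usually fine, but a powerful obligor may exploit the flexibility.
P_EXPLOITED = 0.2
VALUE_IF_FAIR = 6.0       # ex post term-settling works as intended
VALUE_IF_EXPLOITED = -10.0

ev_inflexible = P_ASSUMPTIONS_HOLD * VALUE_IF_HOLD + (1 - P_ASSUMPTIONS_HOLD) * VALUE_IF_BROKEN
ev_flexible = (1 - P_EXPLOITED) * VALUE_IF_FAIR + P_EXPLOITED * VALUE_IF_EXPLOITED

print(f"EV(inflexible) = {ev_inflexible:+.1f}")  # 0.4*10 + 0.6*(-20) = -8.0
print(f"EV(flexible)   = {ev_flexible:+.1f}")    # 0.8*6  + 0.2*(-10) = +2.8
```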

Comment by cullen_okeefe on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-05T03:23:00.279Z · score: 18 (9 votes) · EA · GW

Thank you for a very thorough and transparent reply!

Comment by cullen_okeefe on Why did MyGiving need to be replaced? And why is the EffectiveAltruism.org replacement so bad? · 2019-10-05T03:18:22.347Z · score: 3 (3 votes) · EA · GW

Agreed that recurring donation support would be good. But I also like the current interface better aesthetically, and find it functionally equivalent.