High-priority policy: towards a co-ordinated platform?

post by MichaelPlant · 2019-01-14
[Epistemic status: I have some familiarity with policy, but not deep expertise (I spent one - almost entirely unproductive - year working as a researcher for a British MP). I wrote this up because I wasn't aware these points had been made before and thought someone should make them. I first circulated this as a Google Doc back in June 2018 and am posting it now because I can't see myself having time to develop these ideas further.]
Lots of EAs are interested in policy. This is sensible, given the resources governments control. However, while there are increasing efforts to do policy work in specific areas (namely, AI and factory farming), it seems fair to say this interest has not translated into widespread action. The stumbling blocks seem to be doubts about (1) what the policies would be, (2) whether this is worth doing, and (3) what we would actually do ("we" refers to the EA community in general). In this post I aim to make progress on all three concerns. I claim (1) shouldn't be a barrier: there is at least a skeletal policy platform we could create by drawing on the major EA cause areas. Regarding (2), I argue that there are reasons of moral trade for EAs to co-ordinate to bring about policy change that are absent in, say, effective philanthropy. This gives us more reason to work on policy issues than we might have thought, but isn't, by itself, sufficient to show we should be prioritising policy change. I make some suggestions about (3), but I don't have a full answer to offer.
What would an EA policy platform look like?
If you divide the current EA world up by cause area, people tend to focus on X-risk, global poverty, animal welfare or 'meta' (i.e. research/movement building). From this, we can see planks of an EA policy platform. I won't go into the details, as my aim is just to give the broad outline, but that broad outline is:
1. Greater attention to, co-ordination on and funding for work on X-risks and global catastrophic risks.
2. Improvements in the welfare of farmed animals, possibly through taxation and/or regulation.
3. Defending and increasing foreign aid budgets, and moving those aid budgets towards more cost-effective and rigorously tested interventions.
4. Improving the policy process by e.g. proliferating and championing evidence-based approaches to policy (foreign and domestic) and training policy-makers about cognitive biases.
These 4 still leave a bit of a gap, because they don't say anything about what sort of concrete policies developed countries should be pursuing domestically. My suggestion here would be:
5. Using evidence from self-reported subjective well-being (i.e. life satisfaction) scores to drive policy.
The headline implication of 5 is a much greater priority for mental health, but this touches many policy areas, as the Global Happiness Policy Report outlines - work, schools, cities, etc.
If we pull those together, we seem to have a wide policy platform: there is an 'EA' policy for a range of areas, not just one or two.
A moral trade argument for policy co-ordination
A problem with the EA movement is that much of our time is spent trying to convince one another (fighting? squabbling? disagreeing?) about what the most effective thing to do is. While this can be acrimonious, it often makes sense. If resources, i.e. money and time, are limited, and some things are magnificently more valuable than others, it's sensible to fight about how the 'pie' gets divided and win people over to your cause.
By contrast, when we think of an EA policy agenda, we find an opportunity for moral trade. Suppose A thinks X-risk is the most important cause, but B thinks global poverty is. A and B would ordinarily disagree about where C, a philanthropist, should donate. However, both A and B could agree to champion both their own, and the other's, causes to their shared government. It's reasonably unlikely that, if a government spent more on foreign aid, it would take all of that money from another EA cause area, say X-risk. Indeed, for some policies there would be no conflict at all: A should have no objection to the government spending its aid budget, whatever it is, on more effective interventions.
As far as I can tell, none of the 5 planks of EA policy mentioned above conflict with each other. It's not as if, say, a relative prioritisation of mental health compared to physical health will increase the consumption of factory farmed meat. This is convenient because it means that, while there will be conflicts between EAs about where the marginal pound should go, there don't seem to be as many (any?) about what governments should do. A and B can help each other without undermining the projects that they each think are the highest priority. Hence this gives individuals a reason to cooperate and try to advance each other's preferred policies at the governmental level. I note this reason is not overwhelming, as such actions may come at a cost - time spent campaigning is time not spent doing something else, and so on.
If we could achieve such co-ordination, it strikes me this could potentially be powerful. One reason is that, if advocates of each cause area are prepared to lobby for others' cause areas, this would instantaneously magnify the voice of every group: rather than just X-riskers saying governments should do more about X-risk, advocates of animal welfare, global poverty etc. would do so, and so on. Another potential benefit is this would make the EA world much less combative. This might sound trivial, but it strikes me as important for harmony, good social norms and high-quality movement growth that people see they have reasons to co-operate, rather than just disagree.
What happens next?
In reading the last section, you might be thinking 'okay, this sounds cute, but it also sounds irrelevantly hypothetical: the EA movement is not well placed to do anything about policy'. Let's think for a moment about what the EA movement being well-placed for policy might look like and how important it might be to get there.
I can think of two scenarios where an EA policy platform would make sense. First, if there were just lots of individual voters prepared to advocate for and lobby on these issues. If there were an 'EA voting bloc' just like there are conservative, liberal, social democrat, etc. voting blocs, then this could work. Intuitively, it's hard to believe EA will ever be a large enough popular movement for this to work, but I'd like someone to convince me otherwise.
The other scenario would be if there were lots of EA-sympathetic policy makers - politicians, civil servants, think tank researchers, academics, staff in NGOs (have I left a group out here?) - who co-ordinated and advocated for each other's issues where they found the opportunity. This strikes me as a slightly more plausible option, because EA as it stands tilts towards a small, networked group of high-achieving graduates. I can well imagine that in years to come many EAs will end up as policymakers; I know some that already are.
I accept the EA world isn't very well developed on this front at the moment. However, it seems obvious that this is something the EA movement will need to think about at some point, simply because there are so many opportunities to do good that arise through policy change. Admittedly, when it was just Peter Singer worrying about pond safety, it didn't make sense to think much grander than individual action. At some stage, political co-ordination on EA issues will stop seeming so far-fetched. It doesn't seem far-fetched to our critics currently: a common complaint about EA is that it routinely ignores politics and systemic change.
My guess is the next concrete step would be for an EA organisation to write a report setting out the policy agenda and try to work with an existing sympathetic think-tank to launch it. I think it would make sense to, at some point, either create an EA think-tank or perhaps for one of (a) the Global Priorities Institute or (b) the Open Philanthropy Project to expand their remit into think-tank activities. I'm not claiming this should happen now - I'm also not claiming it shouldn't happen now - but I do think the EA world should start examining the possibility.
Conclusion
I think there is an EA policy platform available, should we wish to do anything about it, and we have more reason to co-ordinate than we might have supposed. It's not totally clear to me what we do next, and I welcome further suggestions.
Thanks to Sam Hilton, Haydn Belfield, Jonathan Courtney, Scott Weathers, Codie Marie Wild, Michael Sadowsky, Bryan Pick, Tom Stocker and Hayden Wilkinson for comments on an earlier Google Doc version of this post.