The Legal Priorities Project is an independent, global research project founded by researchers from Harvard University. Our mission is to conduct legal research that tackles the world’s most pressing problems. This currently leads us to focus on the protection of future generations.
The idea was born at the EA group at Harvard Law School in Fall 2018. Since then, we have raised two rounds of funding from Ben Delo on the advice of Effective Giving, built a highly motivated and mission-aligned core team, registered as a 501(c)(3) nonprofit, hosted a seminar at Harvard Law School, and organized our first summer research fellowship. Beyond that, we have worked on our research agenda and a number of other research projects.
We’re currently assessing the desirability and feasibility of a formal affiliation with a university. We are considering founding a center or institute at a leading law school in the US or UK within the next two years.
We aim to establish “legal priorities research” as a new research field. At the meta-level, we determine which problems legal scholars should work on in order to tackle the world’s most pressing problems. At the object-level, we conduct legal research on the identified problems.
Our approach to legal priorities research is influenced by the longtermism paradigm. Consequently, we are currently focusing on the following cause areas: (1) improving the governance of advanced artificial intelligence, (2) mitigating risks from synthetic biology, (3) mitigating extreme risks from climate change, and (4) improving institutional design and decision-making.
Legal priorities research can be viewed as a subset of global priorities research. While global priorities research is located at the intersection of philosophy and economics, legal priorities research focuses primarily on legal studies, although it is still highly interdisciplinary.
We are currently working on a research agenda for legal priorities research. The agenda will be divided by cause area and will contain a list of promising research projects for legal scholars. We hope to publish the agenda in December 2020. Sign up for our newsletter if you want to receive an email when it is published.
Sounds exciting! It's nice to see specialised, professionalised efforts to prioritise, coordinate, and do EA-aligned work in fields/disciplines where that hadn't been done yet. (I'd be keen to see something similar for history, for example.)
Inspired by this post, I've just made a tag for forum posts relevant to law. More specifically, borrowing from the Effective Altruism & Law FB group's description, I said it's for posts "discussing EA-related legal research questions, when pursuing a law degree makes sense for EAs, how EAs should use a law degree, and EA-related legal jobs".
If people reading this comment know of other posts that fit that description, please tag them :)
This project seems exciting. Also, the website design and branding of this project are great! Kudos to whoever designed it. The website and branding seem similar to OpenAI's, but I don't think that's an issue - both look great and are distinct enough.
Thanks, I'm glad you like the design and branding! The website was designed and developed by Hendrik vor dem Berge, a German user interface designer (http://www.vor-dem-berge.com/). We're also super happy with the result! OpenAI indeed served as an inspiration. We even checked with them whether our design was too close, but they didn't see a problem. As you said: it's distinct enough.
Yes, we are considering including nuclear security in our agenda! However, prioritisation is inherently selective, and we are focusing on areas that are immensely relevant, neglected (which nuclear security might not be, relative to others), and tractable. We do plan to include a list of cause areas for further exploration, though. Nuclear security is definitely a promising candidate.
Perhaps by reaching out to the career offices at T14 U.S. law schools and other top schools to offer to be in touch with intellectually ambitious recent law grads (including those finishing up clerkships) who are now "on ice" until January? Other AI-oriented faculty - like Jonathan Masur at UChicago - could be good talent spotters, as well.
Thank you for the great suggestions! It is definitely important to keep such opportunities in mind. As we are at an early stage, we are currently focusing on developing a strong base of quality work that will help solidify the reputation of the field. For this reason, we are being careful not to grow too much at the beginning and are limiting our active search for new collaborators.