A central directory for open research questions
post by MichaelA · 2020-04-19T23:47:12.003Z · EA · GW · 12 comments
Quite wonderfully, there has been a proliferation of research questions EAs have identified as potentially worth pursuing, and now even a proliferation of collections of such questions. So like a good little EA, I’ve gone meta: this post is a collection [EA · GW] of all such collections I’m aware of. I hope this can serve as a central directory to all of those other useful resources, and thereby help interested EAs find questions they can investigate to help inform our whole community’s efforts to do good better.
Some things to note:
- It may be best to engage only with the sets of questions that are relevant to your skills, interests, or plans, and to ignore the rest of this post.
- It’s possible that some of these questions are no longer “open”.
- I’ve included some things that aren’t explicitly written as collections of research questions, as long as research questions could very easily be inferred from them (e.g., from the problems people identify, or the posts people want written).
- You can also find a Google Doc version of this post here; as explained at the bottom of this post, I hope that that can grow into something much better than this.
Various EA-related topics
- List of EA-related thesis topics - Effective Thesis, no date
- You can also contact them for discussion, help, or coaching.
- A collection of researchy projects for Aspiring EAs [EA · GW] - EdoArad, 2019
- What questions could COVID-19 provide evidence on that would help guide future EA decisions? [EA · GW] - Michael Aird (i.e., me) and others, 2020
- Technical and Philosophical Questions That Might Affect Our Grantmaking - Open Philanthropy Project, 2017
- What are the key ongoing debates in EA? [EA · GW] - various, 2020
- What posts do you want someone to write? [EA · GW] - various, 2020
- 2018 list of half-baked volunteer research ideas [EA · GW] and its comments - Jacy Reese and others, 2018
- EA Summit Project Ideas (specifically the “Research Projects”) - various, no date
- What are some lists of open questions in effective altruism? [EA · GW] - Aaron Gertler and others, 2019
- This and the following list are roughly the same sort of “meta collection” as this post, and I think I took everything relevant from them already.
- The most important questions and problems - Pablo Stafforini
- Some history topics it might be very valuable to investigate [EA · GW] - Michael Aird, 2020
Mostly focused on longtermism, existential risks, or GCRs [EA(p) · GW(p)]
- The Precipice, Appendix F: Policy and research recommendations - Toby Ord, 2020
- Research questions that could have a big social impact, organised by discipline - Arden Koehler & Howie Lempel (80,000 Hours), 2020
- Crucial questions for longtermists [EA · GW] - Michael Aird for Convergence Analysis, 2020
- Legal Priorities Research: A Research Agenda - Legal Priorities Project, 2021
- Open Research Questions - Center on Long-Term Risk, no date
- ALLFED’s research priorities and Effective Theses topic ideas - 2019
- Open Research Questions - Center for Reducing Suffering, no date
- Some history topics it might be very valuable to investigate [EA · GW] - Michael Aird, 2020
- Questions related to moral circles that are listed at the end of this post [EA · GW] and in this comment [EA(p) · GW(p)] - Michael Aird, 2020
- Cause prioritisation / macrostrategy topics Denis Drescher collected and may investigate - 2020
Mostly focused on AI
Fairly technical (I think)
- “clusters of ideas that we believe warrant further attention and research” - Center for Human-Compatible AI (CHAI), no date
- Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda - Nate Soares and Benya Fallenstein (MIRI), published originally in 2014, and in 2017 in The Technological Singularity
- Alignment for Advanced Machine Learning Systems - Jessica Taylor et al. (MIRI), 2016
- Research Agenda v0.9: Synthesising a human's preferences into a utility function [LW · GW] - Stuart Armstrong, 2019
- Related talk here
- The Learning-Theoretic AI Alignment Research Agenda [AF · GW] - Vanessa Kosoy, 2019
- FLI AI Safety Research Landscape - Future of Life Institute, 2018
- Associated paper here
- Concrete Problems in AI Safety - Amodei et al., 2016
- Some things from or related to Paul Christiano that some people have indicated serve as research agendas, collections of questions, or supporting materials:
- Iterated Distillation and Amplification
- Paul's research agenda FAQ [LW · GW]
- AI alignment landscape
- Directions and desiderata for AI alignment [LW · GW]
- Note: I haven’t checked most of these out myself
- There may be other research agendas listed here [? · GW]
Less technical / AI strategy / AI governance
- Promising research projects - AI Impacts, 2018
- They also made a list in 2015; I haven’t checked how much they overlap
- The Centre for the Governance of AI’s research agenda - 2018
- Cooperation, Conflict, and Transformative Artificial Intelligence [? · GW] (the Center on Long-Term Risk’s research agenda) - Jesse Clifton, 2019
- Problems in AI Alignment that philosophers could potentially contribute to [LW · GW] - Wei Dai, 2019
- Problems in AI risk that economists could potentially contribute to [LW(p) · GW(p)] - Michael Aird, 2021
- Technical AGI safety research outside AI [EA · GW] - Richard Ngo, 2019
- Artificial Intelligence and Global Security Initiative Research Agenda - Centre for a New American Security, no date
- A survey of research questions for robust and beneficial AI - Future of Life Institute, no date
- “studies which could illuminate our strategic situation with regard to superintelligence” - Luke Muehlhauser, 2014 (he also made a list in 2012 [LW · GW])
- A shift in arguments for AI risk - Tom Sittler, 2019
Mostly focused on biorisk or coronavirus
- Coronavirus Research Ideas for EAs [EA · GW] - Peter Hurford, 2020
- LessWrong Coronavirus Agenda [LW · GW] - Elizabeth, 2020
Cause prioritisation/global priorities
- The Global Priorities Institute’s research agenda - 2019
- The most important unsolved problems in ethics - Will MacAskill, 2012
- I’m guessing this is mostly superseded by GPI’s agenda, but I haven’t checked.
Animal welfare
- Animal Advocacy Research Fund’s Focus Areas - Animal Charity Evaluators, no date
- Wild Animal Initiative’s research agenda - 2019
- Sentience Institute’s research agenda - 2019
- Sentience Institute's summary of "Foundational Questions for Effective Animal Advocacy" - 2019
- “Less explored” foundational questions in effective animal advocacy - Sentience Institute, 2019
- Faunalytics’s Research Priorities - 2019
- Alternative Proteins: 2020 Consumer Research Priorities - Good Food Institute, 2019
(Perhaps Charity Entrepreneurship and Rethink Priorities have relevant collections or research agendas?)
Global health and development
- Important unresolved research questions relevant to macroeconomic policy - Open Philanthropy Project, 2014
(Perhaps Charity Entrepreneurship and GiveWell have relevant collections or research agendas?)
Other areas many EAs are interested in
Improving institutional decision-making & forecasting
- Forecasting AI Progress: A Research Agenda - Gruetzemacher, Dorner, Bernaola-Alvarez, Giattino, & Manheim, 2020 (comments here [AF · GW])
- How valuable would more academic research on forecasting be? What questions should be researched? [EA · GW] - Michael Aird, 2020
- Research Directions on Improving Policymaking - EA Geneva, probably 2020
Rationality
- What are the open problems in Human Rationality? [LW · GW] - Raemon, 2019
- Note: I haven’t read this and don’t know how well it fits here.
Mental health, happiness, etc.
- Happier Lives Institute’s research agenda - Michael Plant, 2019
- Qualia Research Institute’s research agenda, no date
- Health and happiness: some open research topics [EA(p) · GW(p)] - Derek Foster, 2019
Other?
I’d guess there are other relevant areas for which research questions have been collected somewhere.
Potential lists of lists which I haven’t properly taken the lists from yet
- What are EA project ideas you have? [EA · GW] - Mati_Roy and others, 2020
- What new EA project or org would you like to see created in the next 3 years? [EA · GW] - Ozzie Gooen and others, 2019
- Concrete project lists [EA · GW] - Richard Batty, 2017
What this could become (with your help!)
As noted earlier, I hope this can help some of the many wonderfully curious EAs out there to find important questions they can start plugging away at, to help guide us all in our various efforts to improve the world.
But I’m sure that:
- I’ve missed various collections of questions, especially for cause areas other than longtermism (my personal focus)
- New collections will be made in future
- There are many individual questions that haven’t yet been collected anywhere, or new individual questions that could be suggested (I’ve added some as “Comments” in the Google Doc already)
- Some people would find this more useful if someone actually pulled out all of the questions from those collections and organised them, by topic and subtopic and so on (with the original source of each question referenced).
- This could be in one central document, in a “family” of interlinked documents (e.g., one for each broad cause area), in a spreadsheet, or in a wiki-style page.
And I think we could do more to inspire and support people to actually investigate these questions than just assemble a big list. For example, we could somehow “attach” to each question, perhaps as comments or indented bullet points, things like the following (a rough sketch of what one such entry might look like is given after this list):
- thoughts on how to approach the question
- potential breakdowns into subquestions
- links to relevant resources
- links to draft documents where someone has begun answering certain questions
- “tags” indicating what sort of skills or backgrounds are required for answering each question or set of questions
- offers of “prizes” (payment) for sufficiently high quality explorations of the questions
- Ideally, it’d be easy to offer the prizes, stipulate the terms, and see the total amount offered by everyone for a particular question
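To make that concrete, here is a minimal, purely hypothetical sketch of what one structured entry could look like if the directory were kept as data (say, underlying a spreadsheet or wiki page). Every field name and value below is invented for illustration; it isn’t an existing schema or tool, just one possible shape for the idea above.

```python
# Illustrative only: the question, field names, and values are all hypothetical.
example_entry = {
    "question": "An example question, copied from one of the collections above",
    "source": "Link to the collection the question came from",
    "approach_notes": "Thoughts on how to approach the question",
    "subquestions": [
        "A potential breakdown into narrower subquestions",
    ],
    "resources": ["Links to relevant resources"],
    "drafts": ["Links to draft documents where someone has begun an answer"],
    "skill_tags": ["history", "economics"],  # skills/backgrounds needed to answer it
    "prizes": [
        # Offers of payment for sufficiently high-quality explorations, with terms stated
        {"offered_by": "a hypothetical donor", "amount_usd": 500, "terms": "stated by the offerer"},
    ],
}

# With prizes stored per question, the total amount offered is easy to see:
total_offered_usd = sum(p["amount_usd"] for p in example_entry["prizes"])
```

The same fields could just as easily be columns in a spreadsheet or sub-bullets in a wiki page; the point is only that each question carries its supporting material and any prize offers along with it.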
And this could all be done collaboratively. (Plus, I don’t expect to have time to do it myself.)
So here’s a Google Doc version of this post. Anyone can comment and make suggestions. Please do so, to make this as useful as it can be! (You can either say “someone should probably do X”, or just go do X yourself.)
I’ll monitor and accept changes regularly, and occasionally update the post version as well. I’ll thank contributors at the bottom.
Also feel free to:
- Duplicate the doc
- Create other docs and suggest links to them from this central directory
- Let me know if you want to get full editing permissions and be the person “in charge” of this doc
I’d be really excited to see this develop into something that can really help people advance our movement’s collective knowledge, and to see people actually executing on that - actually making those advancements.
Thoughts from Aaron Gertler
I emailed Aaron Gertler [EA · GW] of the Centre for Effective Altruism to ask his thoughts on how valuable something like this would be, and what its ideal eventual form might be. His reply, which he confirmed was ok for me to quote here, included the following:
I'm not sure how often people actually look at these "open question" lists to decide on research priorities, so I don't know what kind of return you'd get on your time. However, some kind of Google Doc for this should exist, and if your post is what causes that to happen, I think it will be valuable (over time, some number of people will eventually go looking for this sort of thing -- I've been asked for it before, and it will be nice to have a good place to send people).
A really comprehensive list of open questions (which is regularly updated both with new questions and with new resources relevant to old questions) would be an interesting resource, and is the kind of thing one could apply for an EA Funds grant to support; however, I think you'd first have to make a case that such a thing would be used by at least a few people who otherwise wouldn't have picked very good research topics (the Effective Thesis use case is a classic example of this). It seems to me like any such list should be research-oriented (pointing out where work can be done to resolve confusion) more than debate-oriented (pointing out what different people believe), though of course your ability to emphasize that will vary from question to question.
Hopefully that can provide food for thought for people who might want to develop this idea further.
Thanks to all the people who created the lists I’ve linked to and/or drawn from here. And thanks to Aaron Gertler for his above-quoted thoughts, to David Kristoffersson [EA · GW] for helpful feedback and additions, and to Remmelt Ellen for helpful comments.
This post is related to my work with Convergence Analysis.
12 comments
Comments sorted by top scores.
comment by Peterslattery · 2020-04-24T23:08:23.416Z · EA(p) · GW(p)
Thanks so much for this! I am keen to discuss this when Covid-19 has passed. I have some ideas and see opportunities for collaboration. EdoArad - I would love to talk with you too at that time. For context, I am one of the people involved in READI, which is led by EA volunteers and seeks to tackle high-impact/EA-aligned research questions. This is our current project; you can see our other work here.
comment by EdoArad (edoarad) · 2020-04-25T06:04:26.689Z · EA(p) · GW(p)
I'm very interested in the work you are doing at READI, and it would be great to discuss ideas and collaborate.
(by the way, what does READI stand for?)
comment by EdoArad (edoarad) · 2020-04-20T07:28:48.068Z · EA(p) · GW(p)
For calibration: so far, no one has contacted me to take on one of the research projects in the list of concrete researchy projects [EA · GW]. And even in 1-1s with people who are interested in joining EA Israel and in taking on a research project, going over this list and thinking together about possible research questions has had very limited success.
comment by MichaelA · 2020-04-20T07:59:23.534Z · EA(p) · GW(p)
(Upvoted)
Yeah, I've seen that sort of thing mentioned a few times, such that I no longer find it surprising, though I initially did, and I still don't fully understand why it's the case.*
That's why I included "I think we could do more to inspire and support people to actually investigate these questions than just assemble a big list", and the points after that. But I'd definitely be keen to hear more thoughts on how to provide effective inspiration and support for that. (Indeed, it seems that could be a research question in itself. Now, if only we could inspire and support people to investigate it...)
*It does seem there are a lot of interesting and important questions to be explored, many of which may not require extremely specialised skills. As well as a lot of intellectually curious, research-minded EAs interested in having more EA-y things to do. So my guess before hearing that sort of thing mentioned a few times probably would've been that there'd be more uptake of these sorts of lists, and I'm not entirely sure what ingredients are missing.
Obviously payment and organisational infrastructures would be very helpful for most people, and necessary for many. But I wouldn't guess they'd be necessary for all curious EAs with some slices of free time? I wonder if there are other levers that could be pulled to unlock some of this extra talent that seems to be floating around?
comment by EdoArad (edoarad) · 2020-04-20T16:57:57.760Z · EA(p) · GW(p)
My current model is something like this. #BetterWrongThanVague
- It is difficult to make a noticeable research contribution. Even small incremental steps can be intimidating and time-consuming.
- It is hard to motivate oneself to work alone on someone else's problems. I think that most people probably have their own passions and their own model of what's important, and it's unclear why subquestion 3.5.1 should be the single thing that they focus on.
- Three of the main motivators that might mitigate that here are recognition for completing the work well and presenting something interesting, better career capital (learning something new or displaying skills), and socializing/partnering.
comment by EdoArad (edoarad) · 2020-04-20T17:06:49.514Z · EA(p) · GW(p)
One related thing I've thought about trying is to take on a small-scale research problem and set up an open call to collaborate on it globally. To make it successful, we could set up something formal showing that some organisation is interested in the result (and, better yet, possibly supply a prize - it doesn't have to be monetary) and coordinate with local groups to collect an initial team.
That could be fun and engaging, but I'm not sure how scalable it is or how much impact we can expect from it (uncertainty that's probably worth testing out). I've tried to start a small ALLFED-directed research group locally, as part of our research team [EA · GW], but that also didn't work out. I think that going global might possibly work, though.
comment by Prabhat Soni · 2020-09-07T11:59:51.219Z · EA(p) · GW(p)
Hey, thanks for putting this together. I think it would be quite valuable to have these lists put up on Effective Thesis's research agenda page. My reasoning is that Effective Thesis's research agenda page probably has more viewers than this EA Forum post or the Google Doc version of this post.
Additionally, if you agree with the above, I'd be curious to hear your thoughts on how we could make Effective Thesis's research agenda page open source.
comment by MichaelA · 2020-09-07T14:49:05.157Z · EA(p) · GW(p)
I think those are both good ideas! (This is assuming that by "open source" you mean something like "easy for anyone to make suggestions to, in a way that lets the page be efficiently expanded and updated". Did you have something else in mind?)
I don't know the Effective Thesis people personally (though what they're doing seems really valuable to me). But I've now contacted them via their website, with a message quoting your comment and asking for their thoughts.
comment by Prabhat Soni · 2020-09-08T03:23:30.146Z · EA(p) · GW(p)
Yep, that's what I meant by "open source"! Awesome to hear you're taking this forward!
comment by MichaelA · 2020-09-10T18:20:04.403Z · EA(p) · GW(p)
Update: Effective Thesis have now basically done both of the things you suggested (you can see the changes here). So thanks for the suggestions!
comment by Prabhat Soni · 2020-09-11T06:39:47.798Z · EA(p) · GW(p)
Glad to hear this!
comment by MichaelA · 2020-09-06T12:28:33.854Z · EA(p) · GW(p)
Update: 80,000 Hours have released an article entitled Research questions that could have a big social impact, organised by discipline, which draws on the lists of questions listed by this post, but also includes some new questions (sometimes from personal correspondences with the authors). Readers may want to check that article out too. (I've now added a link from this post.)