Why I've come to think global priorities research is even more important than I thought

post by Benjamin_Todd · 2020-08-15
We’ve rated global priorities research (GPR) as one of our top priority areas for some time, but over the last couple of years I’ve come to see it as even more promising.
The field of GPR is about rigorously investigating what the most important global problems are, how we should compare them, and what kinds of interventions best address them. For example, how to compare the relative importance of tackling global health vs. existential risks.
It also considers questions such as how much weight to put on longtermism, or whether we should give now or later.
I’d be keen to see more investment in the field, both in absolute terms and relative to the portfolio of effort within the effective altruism community.
Here are some reasons why. Each reason is weak by itself, but taken together they’ve caused me to shift my views.
Positive recent progress
I think the Global Priorities Institute has made good progress, which makes me optimistic about further work.
One form of work is putting existing ideas about global priorities on a firmer intellectual footing, of which I think Hilary Greaves and Will MacAskill’s strong longtermism paper is a great example. This kind of work is useful because it encourages the ideas to be taken seriously within academia, and also helps to uncover new flaws in them.
Another form of work aims to directly change the priorities of the effective altruism community or other altruists. I think Philip Trammell’s work on optimal timing is a success of this kind, and might significantly change how we want to allocate resources. Another example is Will MacAskill’s work on whether we’re at the most influential time in history.
Implications of longtermism
I’ve come to better appreciate how little we know about what longtermism implies. Several years ago, it seemed clearer that focusing on reducing existential risks over the coming decades—especially risks posed by AI—was the key priority.
Now, though, we realise there could be a much wider range of potential longtermist priorities, such as patient longtermism, work focused on reducing risk factors rather than risks, or other trajectory changes. There has been almost no research on the pros and cons of each.
Patient longtermism

I now place greater credence in patient longtermism than I did in the past (due to arguments by Will MacAskill and Phil Trammell, and to having less credence in very short AI timelines), which makes GPR look more attractive. (And in general, GPR seems more robustly good across a variety of forms of longtermism, except for the most urgency-focused forms.)
Relative neglectedness

AI safety has caught on more broadly, while GPR hasn’t. Because of that, the resources invested in AI safety seem to have substantially increased over the last few years, decreasing its neglectedness, while GPR seems to have seen a smaller increase.
Scale of the community
At a lower bound, we can think of GPR as a multiplier on the effectiveness of the rest of the effective altruism community, and so the larger the community, the more valuable the research.
The community now spends hundreds of millions of dollars each year, and thousands of community members are doing direct work (probably severalfold more than five years ago), and this research can have real effects on what they do. The research can also be applied beyond the community, so it hopefully has even more potential than this increase would imply.
Importance of ideas
If anything, I’m even more convinced that the ideas are what matter most about EA, and that there should at least be a branch of EA that’s focused on being an intellectual project. The field of GPR is perhaps our best chance of being this project, and either way, it helps to put EA on a firmer intellectual footing.
Is there anything that has made me less keen on GPR in the last few years? There are a couple of factors, but I think they’re small.
One issue is that it’s still proving hard to attract academic economists into the field, though there has been some progress.
Some have pointed out that there haven’t been paradigm-shifting new arguments in the last couple of years, perhaps suggesting progress is harder than expected, though I think progress has been reasonable or good compared to my expectations.
Another issue is that it’s difficult for most people to contribute to the field directly, and this bottleneck limits how many more people can work on it than do today. Still, there are ways for more people to get involved, and anyone can contribute through donating.
Here’s some more detail on how people can contribute:
How might you contribute?
If you’re on track to be an academic researcher, you can seek out relevant topics. Economics and philosophy are the most obviously relevant fields, but there should be useful work in a much wider range of fields. The Global Priorities Institute focuses on the most fundamental questions (see their research agenda), but there is also plenty of applied research to be done in comparing different issues, such as those we list here. There is some more detail on how to pursue the academic path in our priority path write-up and profile on academic research.
If you know anyone who might be able to do this kind of research (or you have a public platform with this kind of audience), you could consider telling them about the field and why it matters. Many researchers dream of doing work that’s both intellectually fascinating and has major real-world applications.
You can take a supporting job at one of the organisations that does this kind of research, such as the Global Priorities Institute or Open Philanthropy. There are not many of these organisations currently, but I expect that the field will grow over the next 10 years, and more centres will be established in a number of universities around the world—so it could be worth bearing in mind that career capital in research management may be relevant down the line.
You can see a list of jobs in research and supporting roles here.
One way to help with global priorities outside of a formal research setting is to test out a project within an unexplored problem area, to help work out if more people should enter that area in the future.
You can donate to the Global Priorities Institute at Oxford. Besides carrying out research, they also have scholarships to support the careers of young researchers at other institutions.
GPI recently received a grant from Open Philanthropy, but Open Philanthropy is usually not willing to cover 100% of an organisation’s budget (and that would be a precarious position for GPI to be in anyway), so GPI is planning to raise roughly another £1m over the coming months. I expect they could fund more scholarships and positions beyond this, and further diversify their funding base, though with diminishing returns. Note that two of our trustees work at the Global Priorities Institute, so we may be biased.
In the future, I hope there will be opportunities to fund new academic centres. This is another potential use of the Global Priorities Institute’s funding, though you could also tell them you’d be interested in doing this when the time comes, and invest your money in a DAF until then.
Crossposted from the 80,000 Hours blog.
Footnotes

Strictly speaking, even if we just focus on the community, the immediate scale of the community is not what’s most relevant. We care more about something like the integral of the scale of the community over its entire future, and research discoveries made today only speed up future discoveries. It’s less obvious that GPR is higher impact based on this analysis, though the current scale of the community is a relevant factor that’s easier to measure.