I am volunteering at Rethink Priorities doing forecasting research, and I'm looking to see whether there are EA-related questions with long time horizons (>5 years) that people are interested in seeing predictions on. If there are, I am willing to put some time into operationalising them and submitting them to Metaculus.
I think this would be directly useful both for those who have these questions and for others who find them interesting, and it would also expand the database of such questions available for improving long-term forecasting.
I have a doc on my computer with some notes on Metaculus questions that I want to see, but either haven't gotten around to writing up yet, or am not sure how to operationalize. Feel free to take any of them.
Giving now vs. later parameter values
"In 2030, I personally will either donate at least 10% of my income to an EA cause or will work directly on an EA cause full time"
attempting to measure value drift
or maybe ask about Jeff Kaufman or somebody like that because he's public about his donations
or make a list of people, and ask how many of them will fulfill the above criteria
"According to the EA Survey, what percent of people who donated at least 10% in 2018 will donate at least 10% in 2023?"
Not sure if it's possible to derive this info
According to David Moss in the Rethink Priorities Slack, it's probably not feasible to get data on this
"When will the Founders Pledge's long-term investment fund make its last grant?"
"When will the longest-lived foundation or DAF owned by an EA make its last grant?"
EA defined as someone who identifies as an EA at the time this prediction is made
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?
"What annual real return will be realized by the Good Ventures investment portfolio 2022-2031?"
Can be calculated from Form 990-PF, Schedule B, Part II, which gives the gains on any assets held
Might make more sense to look at Dustin Moskovitz's net worth
But that doesn't account for spending
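As a minimal sketch (not from the original thread) of how the annualised real return asked about above might be computed: the snippet below uses hypothetical year-end portfolio values and a hypothetical cumulative inflation figure; all numbers are placeholders, not actual Good Ventures data.

```python
# Hypothetical sketch: annualised real return over 2022-2031 from year-end
# portfolio values, deflated by inflation. All figures are made-up placeholders.

start_value = 11.0e9   # hypothetical portfolio value at the start of 2022, in USD
end_value = 16.5e9     # hypothetical portfolio value at the end of 2031, in USD
years = 10

# Cumulative inflation over the same period (e.g. from CPI), also hypothetical.
cumulative_inflation = 1.30  # prices assumed to be 30% higher by the end of 2031

# Nominal growth factor, converted to real terms.
real_growth = (end_value / start_value) / cumulative_inflation

# Annualised real return (geometric mean over the period).
annual_real_return = real_growth ** (1 / years) - 1
print(f"Annualised real return: {annual_real_return:.2%}")

# Note: as pointed out above, tracking net worth alone ignores spending;
# a more careful version would add grants/spending back in each year,
# e.g. by computing time-weighted returns from annual cash flows.
```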
It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future, e.g. through more billionaires getting involved. This could inform questions like "How fast should Good Ventures be spending their money?": if we expect five more equally big donors by 2030, that might suggest they should be spending down faster than if they are still expected to be the biggest donor by a wide margin.
For this, would you prefer to condition on something like there being no transformative AI, or not? I feel like these questions sometimes end up dominated by considerations like that, and it's plausible you only care about the answer conditional on transformative AI not arriving.
The question is intended to look at tail risk associated with stock markets shutting down. Transformative AI may or may not constitute such a risk; for example, the AI might shut down the stock market because it's going to do something far better with people's money, or it might shut down the market because everyone is turned into paperclips. So I think it should be unconditional.
This sounds like a cool idea, thanks for doing it!
One place where you could find a bunch of ideas is my Database of existential risk estimates (context here). It could be interesting to put very similar questions/statements on Metaculus and see how their forecasts differ from the estimates given by these individuals/papers (most of whom don't have any known forecasting track record). It could also be interesting to put on Metaculus:
questions inspired by (but different from) the statements in that database
questions inspired by topics you notice there aren't any statements on
e.g., neglected categories of risks, or risks where there are very long time-scale estimates but nothing for the next few decades
I think authoritarianism and dystopias are examples of that
questions that could serve as somewhat nearer-term, less extreme proxies for later catastrophes
On the other hand, forecasting existential risks (or similar things) introduces other challenges beyond just the (usually) long time horizons. So this might not be the ideal approach for your specific goals - not sure.
(This is a somewhat lazy response, since I'm just pointing in a direction rather than giving specifics, but maybe it could still be helpful.)
This comment is again asking you to do most of the work, in the form of picking out which questions in that agenda are about the future and then operationalising them into crisp forecasting questions. But I'll add as replies a sample of questions from the agenda that I think it'd be cool to operationalise and put on Metaculus.
How likely are international tensions, armed conflicts of various levels/types, and great power war specifically at various future times? What are the causes of these things?
How might shifts in technology, climate, power, resource scarcity, migration, and economic growth affect the likelihood of war?
Are Pinker’s claims in The Better Angels of Our Nature essentially correct?
Are the current trends likely to hold in future? What might affect them?
How do international tensions, strategic competition, and risks of armed conflict affect the expected value of the long-term future? By what pathways?
What are the plausible ways a great power war could play out?
E.g., what countries would become involved? How much would it escalate? How long would it last? What types of technologies might be developed and/or used during it?
What are the main pathways by which international tensions, armed conflicts of various levels/types, or great power war specifically could increase (or decrease) existential risks? Possible examples include:
Spurring dangerous development and/or deployment of new technologies
Spurring dangerous deployment of existing technologies
Impeding existential risk reduction efforts (since those often require coordination and are global public goods)
What are the main pathways by which each type of authoritarian political system could reduce (or increase) the expected value of the long-term future?
E.g., increasing the rate or severity of armed conflict; reducing the chance that humanity has (something approximating) a successful long reflection; increasing the chances of an unrecoverable dystopia.
Risk and security factors for (global, stable) authoritarianism
How much would each of the “risk factors for stable totalitarianism” reviewed by Caplan (2008) increase the risk of (global, stable) authoritarianism (if at all)?
How likely is the occurrence of each factor?
What other risk or security factors should we focus on?
What effects would those factors have on important outcomes other than authoritarianism? All things considered, is each factor good or bad for the long-term future?
E.g., mass surveillance, preventive policing, enhanced global governance, and/or world government might be risk factors from the perspective of authoritarianism but security factors from the perspective of extinction or collapse risks (see also Bostrom, 2019).
What are the best actions for influencing these factors?
How likely is it that relevant kinds of authoritarian regimes will emerge, spread (especially to become global), and/or persist (especially indefinitely)?
How politically and technologically feasible would this be?
Under what conditions would societies trend towards and/or maintain authoritarianism or a lack thereof?
What strategic, military, economic, and political advantages and disadvantages do more authoritarian regimes tend to have? How does this differ based on factors like the nature of the authoritarian regime, the size of the state/polity it governs, and the nature and size of its adversaries?
How likely is it that relevant actors will have the right motivations to bring this about?
How many current political systems seem to be trending towards authoritarianism?
How much (if at all) are existing authoritarian regimes likely to spread? How long are they likely to persist? Why?
How likely is it that any existing authoritarian regimes would spread globally and/or persist indefinitely? Why?
Typology of, likelihoods of, and interventions for dystopias
How likely is each type of dystopia to arise initially and then to persist indefinitely?
How bad would each type of unrecoverable dystopia be, relative to each other, to other existential catastrophes, and to other possible futures?
How much should we worry about recoverable or temporary equivalents of each type of unrecoverable dystopia?
E.g., how much would each increase (or decrease) the risk of later extinction, unrecoverable collapse, or unrecoverable dystopia?
What are the main factors affecting the likelihood, severity, and persistence of each type of dystopia?
What would be the best actions for reducing the likelihood, severity, or persistence of each type of dystopia?