I'm also a hobbyist forecaster: I am LokiOdinevich on GoodJudgementOpen, and Loki on CSET-Foretell. I have been running a Forecasting Newsletter since April 2020, and have written Metaforecast.org, a search tool which aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.
I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or on nunosempere.github.io.
Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.
Indeed, in some sense, Solomonoff Inductors are in a boat similar to the one that less computer-science-y Bayesians were in all along: you’ll plausibly converge on the truth, and resolve disagreements, eventually; but a priori, for arbitrary agents in arbitrary situations, it’s hard to say when. My main point here is that the Solomonoff Induction boat doesn’t seem obviously better.
Not necessarily true! See Scott Aaronson on this (but iirc, he makes some assumptions I disagreed with)
Thinking more about this, these are more of an upper bound, which doesn't bind (because you can probably buy a 0.01% risk reduction per year much more cheaply). So the parameter to estimate would be more like "what are the other, cheaper interventions".
Just to complement Khorton's answer: With a discount rate of d, a steady-state population of N, and a willingness to pay of $X per person, the total value of the future is N · $X / d, so the willingness to pay for 0.01% of it would be 0.0001 · N · $X / d
This discount rate might be because you care about future people less, or because you expect a d% chance per year of pretty much unavoidable existential risk going forward.
Some reference values
N = 10^10 (10 billion), X = $10^4, d = 0.03 means that willingness to pay for a 0.01% risk reduction should be 0.0001 · 10^10 · $10^4 / 0.03 ≈ 333 · 10^9, i.e., $333 billion
N = 7 · 10^9 (7 billion), X = $5 · 10^3, d = 0.05 means that willingness to pay for a 0.01% risk reduction should be 0.0001 · 7 · 10^9 · 5 · 10^3 / 0.05 = 70 · 10^9, i.e., $70 billion.
I notice that from the perspective of a central world planner, my willingness to pay would be much higher (because my intrinsic discount rate is closer to ~0%). Taking d=0.0001
N = 10^10 (10 billion), X = $10^4, d = 0.0001 means that willingness to pay for a 0.01% risk reduction should be 0.0001 · 10^10 · $10^4 / 0.0001 = 100 · 10^12, i.e., $100 trillion
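A quick sketch of the arithmetic above, plugging the reference values into the formula 0.0001 · N · $X / d:

```python
# Willingness to pay for a 0.01% (= 0.0001) existential risk reduction,
# using WTP = 0.0001 * N * X / d from the formula above.
def wtp(N, X, d):
    return 0.0001 * N * X / d

print(f"${wtp(N=1e10, X=1e4, d=0.03):,.0f}")    # ~$333 billion
print(f"${wtp(N=7e9,  X=5e3, d=0.05):,.0f}")    # $70 billion
print(f"${wtp(N=1e10, X=1e4, d=0.0001):,.0f}")  # $100 trillion
```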
The above might be the right way to model willingness to pay from 0.02% risk per year to 0.01% risk per year. But with, e.g., 3% remaining risk per year, willingness to pay is lower, because over the long run we all die sooner.
E.g., reducing risk from 0.02% per year to 0.01% per year is much more valuable than reducing risk from 50.1% to 50%.
Where you value the i-th year in the steady state at (1−d)^i of the value of the first year. If you don't value future people, the discount rate d might be close to 1; if you do value them, it might be close to 0.
Nitpick: Assuming that for every positive state there is an equally negative state is not enough to conclude that the maximally bad state is only -100% of the expected value of the future; it could be much worse than that.
I'm curious about potential methodological approaches to answering this question:
Arrive at a possible lower bound for the value of averting x-risk by thinking about how much one is willing to pay to save present people, like in Khorton's answer.
Arrive at a possible lower bound by thinking about how much one is willing to pay for current and discounted future people
Thinking about what EA is currently paying for similar risk reductions, and arguing that one should be willing to pay at least as much for future risk-reduction opportunities
I'm unsure about this, but I think this is most of what's going on with Linch's intuitions.
Overall, I agree that this question is important, but current approaches don't really convince me.
My intuition about what would convince me would be some really hardcore and robust modeling coming out of e.g., GPI taking into account both increased resources over time and increased risk. Right now the closest published thing that exists might be Existential risk and growth and Existential Risk and Exogenous Growth—but this is inadequate for our purposes because it considers stuff at the global rather than at the movement level—and the closest unpublished thing that exists are some models I've heard about that I hope will get published soon.
How much do you think forecasting well on given questions differs from the skill of creating new questions? I notice that I'm increasingly impressed by people who are able to ask questions that seem important but that I wouldn't even have thought of.
They seem similar because being able to orient oneself in a new domain would feed into both things. One can probably use (potentially uncalibrated) domain experts to ask questions which forecasters then solve. Overall I have not thought all that much about this.
I wonder how much you'd consider "changing governance culture" as part of the potential impact, e.g., I hope that Metaculus and co. will become clear success stories and motivate government institutions to adopt and make probabilistic and evaluable predictions for important projects.
I'm fairly skeptical about this for, e.g., national governments. For the US government in particular, the base rate seems low; people have been trying to do things like this since at least 1964 and mostly failing.
It's hard to give a nuanced answer, but I'd mostly say that your update is not directionally correct. In particular, I'd expect the number of "EA jobs" to be in the hundreds to low thousands, but the number of EAs to be in the mid to high thousands.
Around 135 people out of 1,679 non-students and 2,166 responses mentioned that they were employed at EA organizations. So this is 8.7% of non-students and 6.2% of total EA respondents.
Not that many people respond to surveys, so the total EA population is probably higher than 2k, but it's difficult to say how much higher.
Because I don't get the impression that the number of "EA jobs" has literally doubled in the past year, I think that the chances of getting accepted into any EA org seem at most something like 10%, but more like 2 to 5%. So I'd say that the mood of your update doesn't seem to be directionally correct.
In particular, just in the case of uni EA groups, I imagine that there might be one organizer for every, say, 20 to 50 people (?? I really have no idea about this), which is also a ratio of 2 to 5%.
One major way in which I could imagine being wrong is if you're at a very prestigious uni, or if your definition of "hard work and dedicated" does convey 2 to 10% to your audience.
I also drew some pathways to impact for QURI itself and for software, but I’m significantly less satisfied with them.
I thought that the software pathway was fairly abstract, so here would be something like my approximation of why Metaforecast is or could be valuable.
Note that QURI's pathway would just be the pathway of the individual actions we take around forecasting, evaluations, research and software, plus maybe some adjustment for e.g., mentorship, coordination power, helping funding, etc.
This doesn't seem like it is common knowledge. Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers.
It's possible that there are good weird ideas that never cross our desk, but that's again an informational reason rather than weirdness.
This is not the state of the world I would expect to observe if the LTF was getting a lot of weird ideas. In that case, I'd expect some weird ideas to be funded, and some really weird ideas to not get funded.
I'd be very curious about you feeding your intuitions into this utility function extractor (and then dividing your estimates of their relative value by their yearly budgets). I'm curious enough to put a small bounty on this, i.e., a $50 donation to a charity of your choice.
The way you would do this would be to go to Advanced options > Use your own data > Paste the below with the names of the orgs in the technology alternative space changed > Click on "change dataset"
This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.
Maybe, given that there are billions of dollars floating around, the kind of thing to do would be to try to influence them
But OpenPhil doesn't seem that approachable, and it's not like they can be influenced all that much by that many people
Maybe there is some cause X that we're missing that would make the broad EA community great again
More generally, maybe the patterns in the early EA community were more suitable to a social movement without billionaires, and there are better patterns that we could be executing now. For instance, maybe trying to get prestige outside of EA dominates earning to give now that EA is better funded. Or maybe EA is better funded but you'd still expect most people to have idiosyncratic preferences not shared by central funders.
Rethink Priorities has been trusted by EA Funds and Open Philanthropy to start new projects (e.g., on capacity for welfare of different animal species) and open entire new departments (such as AI governance).
These and other large organizations often only fund 25–50% of our needs in any particular area because they trust our ability to find other sources of funding. Therefore we rely on a broad range of individual donors to continue our work.
This surprised me, because I fairly often hear the advice of "donate to EA Funds" as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?
Hey, thanks for the comments. Your point about a bull market is welcome, and I think similar to the point that Phil made in the 80kh podcast. Some nitpicks:
Nino -> Nuño
When people say that "capital depreciates", they generally mean "capital investments", i.e., machinery, computers, etc.
Note that labor depreciates at a rate d, in the sense that people move out of the movement because of value drift, but it also increases in value because of productivity improvements (see the exponentials in the model)
But in models in which labor replicated itself (i.e., there was some "naturally arising movement-building"), we still didn't see that earning to give (in the sense of earning a salary) was favored in the limit either.
Hey, good questions, thanks for cross-posting this from the EA Discord :)
OpenPhil is included in the model because the EA movement starts out with some capital. But convincing additional billionaires (or "earning to give" in the sense of "trying to become a billionaire to donate the billions to charity") is not modelled.
Also, the model does not (yet) include research, which is also part of what OpenPhil does.
One-time big donors could be modelled by increasing the initial capital, but this is kind of a kludge.
Also, once that small model exists, we can reason in ways like: The small model recommends doing direct work or movement building over earning to give, in the limit. Adding billionaires to the mix doesn't seem like it would change that property (unless "earning to give" includes "taking a shot at becoming a billionaire".)
Hey, in hindsight I realize that the paper + summarization don't make clear that this does depend on model assumptions/empirical points, sorry. I've edited the post to make this clearer (here is the previous version without the edits, in case it's of interest.)
tl;dr: This comes from model assumptions which seem reasonable, but empirical investigations + historical case studies, or alternatively sci-fi scenarios could flip the conclusion.
In particular, let L′ = −r·L + f(a·L, b·K), i.e., roughly L(t) = L(t−1)·(1−r) + f(a·L, b·K), so each year you lose a fraction r of people, but you also do some movement building, for which you spend a·L labor and b·K capital.
Then for some functions f which determine movement building, this already implies that the movement has a maximum size. So for instance, if you have f(a·L, b·K) = log(1 / (1/(a·L) + 1/(b·K))), then with infinite capital this reduces to f(a·L, b·K) = log(1 / (1/(a·L) + 1/∞)) = log(1 / (1/(a·L))) = log(a·L)
But then even if you allocate all labor to movement building (so that a = 1, or something), you'd have something like L′ = −r·L + log(L), and this eventually converges to the point where log(L) = r·L no matter where you start.
Now, above I've omitted some constants, and our function isn't quite the same, but that's essentially what's going on (see ρ_R < 0, λ < 1 in equation 6 on page 4). I.e., if you lose movement participants as a percentage but have a recruitment function that eventually has "brutal" diminishing returns (sub-linear diminishing returns to labor + throwing money at movement building doesn't solve it), you get a similar result (the movement converges to a constant size.)
But you could also imagine a scenario where the returns are less brutal—e.g., you're always able to recruit an additional participant by throwing money at the problem, or every movement builder can sort of eternally always recruit a person every year, etc. You could also imagine a more sci-fi-like scenario, where humanity is expanding exponentially (cubically) in space, and a social movement is a constant fraction of humanity.
More realistically, if f instead looks like √((a·L)·(b·K)), which has diminishing returns but not brutally so, movement size can increase forever because you can always throw more money at the problem until √((a·L)·(b·K)) > r·L
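To make the two regimes concrete, here is a toy simulation. The constants (r = 0.05, the recruitment rate of 0.1) are illustrative assumptions, not the paper's calibration, and the second recruitment function stands in for √((a·L)·(b·K)) under the added assumption that capital scales with movement size:

```python
import math

def simulate(f, L0, r=0.05, years=500):
    """Iterate L <- (1 - r) * L + f(L): lose a fraction r of
    participants each year, and recruit f(L) new ones."""
    L = L0
    for _ in range(years):
        L = (1 - r) * L + f(L)
    return L

# Brutal returns: with infinite capital, recruitment reduces to log(L).
# Whatever the starting size, L converges to where log(L) = r*L (~90 here).
small_start = simulate(lambda L: math.log(L), L0=10)
large_start = simulate(lambda L: math.log(L), L0=1000)

# Less brutal returns: if capital scales with L, sqrt((a*L)*(b*K)) is
# proportional to L, and the movement grows without bound once the
# proportionality constant (0.1 here) exceeds r.
growing = simulate(lambda L: 0.1 * L, L0=100, years=50)
```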
Note that if you have a less brutal recruitment function, this increases the appeal of movement building, not of earning to give.
Also, I'm not sure whether "brutal" is the right way to be talking about this. "Brutal" is the term I use when I think about this, but if I recall correctly the function we use is standard in the literature, and it seems plausible when you start to think about groups which reach a large size. But there is definitely an empirical question here about what movement recruitment actually looks like.
I can imagine that feedback loop (good in the world -> movement building) being important at the beginning. Arguably one of the reasons why the global health & development -> longtermism change of minds is so common is because longtermism has good arguments in principle but no big tangible wins to its name, so it's better able to convince those who pay attention to it because they're drawn to EA because of global health & development's big wins, rather than convince people directly.
But even in that case, if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism's big pot of money and using some of its labor for direct work.
If the arrow is from good in the world, this could increase the value of direct work and direct spending (and thus earning to give) relative to movement building. I can imagine setups where this might flip the conclusion, but I think that this would be fairly unlikely.
E.g., because of scope insensitivity, I don't think potential movement participants would be substantially more impressed by $2N billion of GiveDirectly-equivalents of good per year vs. just $N billion.
If the arrow is from direct work, this increases the value of direct work relative to everything else, and our conclusions almost certainly still hold.
I imagine that Phil might have some other thoughts to share.
Bounty suggestion: Reach out to people who have had their grants accepted (or even not accepted) by the LTFF, and ask them to publish them in exchange for $100-$500.
Why is this good: This might make it easier for prospective candidates to write their applications
Why do this as a bounty + assurance contract:
Why assurance contract: I might find it kind of scary to publish my own application alone, but easier if others do as well.
Why bounty: It feels like there is a cost to publishing an application because they were written by one's younger self, and they are slightly personal, and people have limited capacity to internalize externalities before they burn out.
This would require taking on some coordination costs
E.g., talking to the LTFF about whether the risks of people "hacking" the application process is worth the increase in the ease of applying.
E.g., actually enforcing strict comment guidelines about not posting comments which would make it more costly to publish applications.
Thinking about things which could go wrong.
Comment by NunoSempere on [deleted post]
Seems kind of similar to https://forum.effectivealtruism.org/tag/charity-evaluation
Collaborate with Jaime Sevilla on datasets for various values related to size, performance, training expense, etc. of large machine learning models.
Having high quality data on this which one knows is going to be maintained makes it much easier to elicit forecasts about these topics, and eventually resolve those forecasts and keep track of track-records, and I know that Jaime has been working on this.
Coming back to this post, I'm thinking about what it means in terms of collaboration. Tetlock found that teams of superforecasters did better than people going at it alone. One process that could produce this kind of data is Metaculus being able to meaningfully coordinate 10 forecasters on one question (but not beyond that), whereas prediction markets right now kind of have people going at it alone.
My thoughts are that this problem is, well, not exactly solved, but perhaps solved in practice if you have competent and aligned forecasters, because then you can ask conditional questions which don't resolve.
Given such and such measures, what will the spread of covid be?
Given the lack of such and such measures, what will the spread of covid be?
Then you can still get forecasts for both, even if you only expect the first to go through.
This does require forecasters to give probabilities even when the question they are going to forecast on doesn't resolve.
This is easier to do with EAs, because then you can just disambiguate the training and the deployment step for forecasters. That is, once you have an EA that is a trustworthy forecaster, you could in principle query them without paying that much attention to scoring rules.
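As a sketch of how the scoring could work, assuming (hypothetically, this is not how any particular platform does it) that questions whose condition fails resolve as void and are simply excluded from the score:

```python
def brier(p, outcome):
    """Squared error of probability p against a 0/1 outcome."""
    return (p - outcome) ** 2

def score_conditionals(forecasts):
    """forecasts: list of (probability, outcome) pairs, where outcome is
    None when the question's condition didn't occur (resolves void)."""
    resolved = [(p, o) for p, o in forecasts if o is not None]
    if not resolved:
        return None  # nothing resolved, no score
    return sum(brier(p, o) for p, o in resolved) / len(resolved)

# "Given such and such measures" happened (outcome = 1); the no-measures
# branch never occurred, so that forecast is excluded from the score.
print(score_conditionals([(0.8, 1), (0.3, None)]))
```

The remaining problem, as noted above, is incentivizing honest probabilities on the branch that ends up void, which is where trustworthy forecasters come in.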