comment by Linch · 2021-10-07T09:05:29.928Z
(All views are my own. I'm not entirely sure how Marcus, Peter, and other people at RP think about RP's impact; I know I've had at least one major disagreement before.)
Oooh asking the tough questions here! The short answer is that you should probably just apply and learn about your own fit, RP as an institution, and what you think about stuff like our theory of change through the application process!
The longer answer is that I don't have a good sense of how good your counterfactual is. My understanding is that QURI's work is trying to do revolutionary change in epistemics, while RP's work* is more tightly scoped.
In addition, my best guess is that at this stage in your career, direct impact is likely less important than other concerns like career capital and personal motivation.
Still, for the sake of having some numbers to work with (even if the numbers are very loose and subjective, etc), here is a very preliminary attempt to estimate impact at RP, in case it's helpful for you or other applicants:
The easiest way to analyze RP's impact is to look at our projects that aim to improve funder priorities, and to guesstimate how much they improve decision quality.
When I do some back-of-the-envelope calculations on the direct impact of an RP researcher, I get something like mid-6 figures to high-7 figures** in terms of improving decision quality for funders, with a (very not robust) median estimate in the 1-2M range.
I think this approach survives reasonable external validity checks. Some of those checks point to the higher end: Michael Aird has been working part-time as an EAIF guest manager, he's approximately indifferent between marginal time on RP work vs marginal EAIF work, and amortizing his time there would get you to the upper end of that range. Some point to the lower end: RP had ~12 FTE researchers 6 months ago, and EA overall is deploying ~$400M per year, so saying RP is responsible for ~1.5% of the decision quality of deployed capital feels much more intuitively defensible than saying we're responsible for ~$8M/researcher-year x 12 ~= 25% of deployed capital***.
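The lower-end sanity check above can be sketched as rough arithmetic. This is just a restatement of the loose figures in the comment (not real data), under the assumption that "share of decision quality" scales linearly with per-researcher impact:

```python
# Rough sanity check on the per-researcher impact estimate,
# using the loose figures from the comment above.

ea_deployed_per_year = 400e6   # ~$400M of EA funding deployed per year
rp_researchers = 12            # ~12 FTE researchers at RP (~6 months prior)

# Taking the high-end estimate of ~$8M of decision-quality
# improvement per researcher-year at face value:
high_end_per_researcher = 8e6
implied_share = high_end_per_researcher * rp_researchers / ea_deployed_per_year
print(f"Implied share of deployed capital: {implied_share:.0%}")  # -> 24%

# Conversely, a more defensible ~1.5% share implies a much
# lower per-researcher figure:
modest_share = 0.015
per_researcher = modest_share * ea_deployed_per_year / rp_researchers
print(f"Per researcher-year at a 1.5% share: ${per_researcher / 1e6:.1f}M")  # -> $0.5M
```

So the two framings differ by roughly an order of magnitude, which is the tension the check is pointing at.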
There are out-of-model reasons to go lower than, say, 1-2M, like replaceability arguments, and reasons to go higher, like the fact that RP is trying to expand, so joining earlier is more helpful than joining later: you can help us scale well in the early stages while maintaining/improving culture and research quality. My current guess is that the reasons to go higher are slightly stronger.
So my very very loose and subjective guess is that you should maybe be indifferent between working at RP now as a junior longtermism researcher for a year vs say ~$2.2M**** of increase in EA capital on impact grounds. So one approach is considering whether ~$2.2M growth in EA capital is better or worse than your counterfactual, though again it's reasonably likely that career capital and motivation concerns might dominate.
*or at least the direct impact of our specific projects. I think one plausible good future/goal for "RP" as an institution is roughly trying to be a remote-first megaproject in research, which doesn't have close analogues (though "thinktank during covid" is maybe closest).
** Note that this is a range of different possible averages using different assumptions rather than a real 90% credible interval. I do think it's >5% likely that our work is net negative.
*** This is pretty loose; not all RP projects are meant to advise funders, and some of what's going on is that I'm forecasting that RP's research will impact 2-5 years of funding rather than 1 year of funding in some cases, like Neil's EU farmed animal legislation work.
**** Precision of estimates does not imply greater confidence.
comment by MichaelA · 2021-10-07T10:00:41.114Z
Some thoughts from my own perspective (again, not necessarily RP-wide views):
I agree with Linch that this is a tricky question. I also agree with a lot of the specific things he says (though not all). My own brief reply would be:
- It seems unclear to me whether you'd have more impact in the short-term and in the long-term if you work at QURI or Rethink next year.
- Therefore, applying, seeing what happens, and thinking harder about an offer only if and when you get it seems probably worthwhile (assuming you're somewhat interested, of course).
- I think if you got an offer, I'd see you either accepting or declining it as quite reasonable, and I'd want you to just get the best info you can and then make your own informed decision.
- I'm pretty excited about QURI's work, so this is a matter of me thinking both options seem pretty great.
(I think this is basically what I'd say regarding someone about whom I know nothing except that they work at QURI in a substantial capacity - not just e.g. intern. I.e., I don't think my knowledge of you specifically alters my response here.)
comment by MichaelA · 2021-10-07T10:02:22.679Z
[This comment is less important, and you may want to skip it]
Some places where my views differ from Linch's comment:
- In my role with EAIF, ~$750k of grants that I was primary investigator for have been approved. This was from ~10 hrs/week of work over ~4 months, so it's equivalent to (hopefully!) improving the allocation of ~$9M over a year of full-time work. Yet I think I'd go full-time at Rethink rather than continue part-time at EAIF, from next year onwards, if given the choice. This gives some indication of how valuable I think me working at Rethink full-time next year is, which then more weakly indicates (a) what's true and (b) how valuable other people working at Rethink is. And it seems to suggest something notably higher than the 1-2M range.
- There are complexities to all of this, which I can get into if people want, but I think that picture ends up roughly correct.
- But note that this has a lot to do with my long-term career plans (with research management as plan A), rather than just comparing direct impact during 2022 alone.
- Also note that my dollars moved at EAIF and my hours worked for EAIF have both been above average, I think, partly because I've been keen to take on extra things so I can learn more and because it's fun.
- Also note that I'm very glad I did this term at EAIF, have strongly recommended other people apply to do a stint at EA Funds, and would definitely consider an extended term at EAIF if offered it (though I think I'd ultimately lean against).
- Linch's comment could be (mis?)read as equating the value of adding 1 dollar to the pool of EA resources to the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while). But I think he doesn't actually mean that (given that he uses 1-2M in one case and 2.2M in the other). And I think I'd (likewise?) say that the latter is probably better than the former, given that I think EA funders would be keen to spend much faster than they currently are if they had more vetting, ideas, strategic clarity, etc.
- But I haven't thought much about this, and it probably depends on lots of specifics (e.g. are you "just" guiding a dollar that would be moved anyway to instead be moved to a 10% better opportunity, or are you suggesting a totally new intervention idea that someone can then active grantmake an org into existence to execute, or are you improving strategic clarity sufficiently to unlock money that would otherwise sit around due to fear of downside risks?). Not sure though.
- I also haven't tried to make any of the estimates Linch tried to make above. But I appreciate him having done so, and I acknowledge that it makes it easier to productively disagree with him than with my more vague statements!
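The EAIF extrapolation in the first bullet above can be sketched as rough arithmetic. The 40-hour full-time baseline is my assumption; the other figures are Michael's loose numbers:

```python
# Sketch of the EAIF extrapolation: ~$750k moved at ~10 hrs/week
# over ~4 months, scaled up to a full-time year.

grants_moved = 750e3     # ~$750k of approved grants as primary investigator
hours_per_week = 10      # ~10 hrs/week of EAIF work
full_time_hours = 40     # assumed full-time baseline (my assumption)
months_worked = 4        # ~4 months of part-time work

# Fraction of a full-time year actually worked:
fte_years = (hours_per_week / full_time_hours) * (months_worked / 12)
per_full_time_year = grants_moved / fte_years
print(f"Implied grants moved per full-time year: ${per_full_time_year / 1e6:.0f}M")  # -> $9M
```

That is, 10 hrs/week for 4 months is about 1/12 of a full-time year, so the $750k scales to roughly $9M/year, matching the figure in the bullet.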
comment by Linch · 2021-10-07T10:45:47.702Z
Linch's comment could be (mis?)read as equating the value of adding 1 dollar to the pool of EA resources to the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)
For what it's worth, I don't think those two are the same thing. I usually think of "improving decision quality" in terms of situations that roughly look like: a funder wants to invest $X in Y, we look at the evidence, and we suggest something like
a) best bets are Z charities/interventions in Y
b) this isn't worth investing for ABC reasons, or
c) ambiguous, more research is needed
and I count that as some percentage improvement on $X in the first two cases, where I usually think of that percentage as less than 100%. Maybe 20-50%*? It depends on funder quality. So I don't think "adding 1 dollar to the pool of EA resources" and "guiding 1 dollar towards a good donation target" are the same thing, but you're implying a >100% improvement, and I usually think improvements are lower, especially in expectation. Keep in mind that there's usually at least one additional grantmaker layer between donors and the people doing direct work, and we have to be careful to avoid double-counting (which I was maybe a bit sloppy about too, but it's worth noting).
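The framing above can be sketched as a tiny model. The helper name and the even credit split across grantmaker layers are my own illustrative conventions, not something Linch specifies; the 20-50% improvement range is his:

```python
def decision_quality_credit(amount, improvement=0.3, layers=1):
    """Illustrative credit for research that improves a funding decision.

    amount:      dollars whose allocation the research informs ($X)
    improvement: fractional improvement in decision quality
                 (Linch suggests ~20-50%, i.e. well under 100%)
    layers:      additional grantmaker layers between the donor and
                 the direct work; splitting credit evenly across them
                 (an arbitrary convention here) avoids double-counting
    """
    return amount * improvement / (layers + 1)

# E.g. informing $1M of grants at a 30% improvement, sharing credit
# with one grantmaker layer:
print(f"${decision_quality_credit(1e6):,.0f}")  # -> $150,000
```

The point of the sketch is just that credited impact is a fraction of $X (not a multiple of it), and that the fraction shrinks further once other actors in the chain get their share.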
The other thing to note is that this "decision quality" approach might already inflate our importance at least a little (compared to a more natural question candidates might ask, like at what $X they should be indifferent between working for us and earning to give for $X), because it implies that EA's cause prioritization is already basically reasonable, and I don't actually believe this, in either my research or my other career/life decisions.
A different tack here is a quick sanity check: maybe it has happened a few times before, but I'm not aware of any point that an RP employee was so confident about an intervention/donation opportunity that they've researched that they decided that the donation opportunity is a clearly better bet than RP. Obviously there are self-serving reasons/biases for this, but I basically think this is a directionally correct move from the POV of the universe.
* I need to check how much I can share, but 20% is not the lowest number I've seen from other people at RP, at least when I talk about specific intervention reports.
comment by MichaelA · 2021-10-07T12:52:28.414Z
Keep in mind that there's usually at least an additional grantmaker layer between donors and the people doing direct work, and we have to be careful to avoid double-counting (which I was maybe a bit sloppy at too but worth noting).
Yeah, this is a good point that I think I hadn't had saliently in mind, which feels a bit embarrassing.
I think the important, correct core of what I was saying there is just that "the value of adding 1 dollar to the pool of EA resources" and "the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)" are not necessarily the same, and plausibly actually differ a lot, and also the value of the latter thing will itself differ a lot depending on various specifics. I think it's way less clear which of the two things is bigger and by how much, and I guess I'd now back down from even my tentative claims above and instead mostly shrug.
(EDIT: I realised I should note that I have more reasons behind my originally stated views than I gave, basically related to my EAIF work. But I haven't given those reasons here, and overall my views on this are pretty unstable and not super informed.)
comment by Linch · 2021-10-07T21:06:47.036Z
I think the important, correct core of what I was saying there is just that "the value of adding 1 dollar to the pool of EA resources" and "the value of guiding 1 dollar towards a good donation target (rather than it going towards a net negative target, a less good target, or going nowhere for a while)" are not necessarily the same
I agree with this.
comment by Linch · 2021-10-07T10:26:26.667Z
I think this is basically what I'd say regarding someone about whom I know nothing except that they work at QURI in a substantial capacity - not just e.g. intern.
For onlookers, note that QURI has ~2 FTEs or so, so Michael isn't exactly anonymizing a lot.
comment by MichaelA · 2021-10-07T12:47:04.770Z
(I didn't mean just existing QURI staff - I meant something like imagining that I'd stopped paying attention to QURI's staff for a year but still knew their work in some sense and knew they had 1-4 people other than Ozzie. I guess you'd have to imagine I knew the output scaled up to match the number of people, and that it seemed to me each non-Ozzie employee was contributing ~equally to the best of my knowledge, and there are probably tricky things around management or seniority levels, but hopefully people get what I'm gesturing at.)