Intervention options for improving the EA-aligned research pipeline

post by MichaelA · 2021-05-28T14:26:50.602Z · EA · GW · 25 comments

Contents

  Summary
  Target audience
  Caveats and clarifications
  The intervention options
    Creating, scaling, and/or improving EA-aligned research orgs
    Creating, scaling, and/or improving EA-aligned research training programs
    Increasing grantmaking capacity and/or improving grantmaking processes
    Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
    Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
    Increasing and/or improving research by non-EAs on high-priority topics
    Creating a central, editable database to help people choose and do research projects
    Using Elicit (an automated research assistant tool) or a similar tool
    Forecasting the impact projects will have
    Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
    Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
    Increasing and/or improving career advice and/or support with network-building
    Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
    Creating and/or improving relevant educational materials
    Creating, improving, and/or scaling market-like mechanisms for altruism
    Increasing and/or improving the use of relevant online forums
    Increasing the number of EA-aligned aspiring/junior researchers
    Increasing the amount of funding available for EA-aligned research(ers)
    Discovering, writing, and/or promoting positive case studies

See the post introducing this sequence [EA · GW] for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.

Summary

In a previous post [EA · GW], I highlighted some observations that I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. In this post, I’ll briefly discuss 19 interventions that might improve that situation. I discuss them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline.[1] The interventions are:

  1. Creating, scaling, and/or improving[2] EA-aligned research orgs
  2. Creating, scaling, and/or improving EA-aligned research training programs [? · GW] (e.g. certain types of internships or summer research fellowships)
  3. Increasing grantmaking capacity and/or improving grantmaking processes
  4. Scaling Effective Thesis [? · GW], improving it, and/or creating new things sort-of like it
  5. Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.[3]
  6. Increasing and/or improving research by non-EAs on high-priority topics
  7. Creating a central, editable database to help people choose and do research projects
  8. Using Elicit (an automated research assistant tool) or a similar tool
  9. Forecasting the impact projects will have
  10. Adding to and/or improving options for mentorship, feedback sources, etc. (including from peers)
  11. Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
  12. Increasing and/or improving career advice and/or support with networking
  13. Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
  14. Creating and/or improving relevant educational materials
  15. Creating, improving, and/or scaling market-like mechanisms for altruism [? · GW] (e.g., impact certificates [? · GW])
  16. Increasing and/or improving the use of relevant online forums
  17. Increasing the number of EA-aligned aspiring/junior researchers
  18. Increasing the amount of funding available for EA-aligned research(ers)
  19. Discovering, writing, and/or promoting positive case studies

Feel free to skip to sections that interest you; each section should make sense by itself.

Target audience

As with the rest of this sequence:

(For illustration, I’ve added a comment below this post [EA(p) · GW(p)] regarding how my own career, project, and donation decisions have been influenced by thinking about why and how the EA-aligned research pipeline should be improved.)

Caveats and clarifications

The intervention options

Creating, scaling, and/or improving EA-aligned research orgs

Creating, scaling, and/or improving EA-aligned research training programs

Increasing grantmaking capacity and/or improving grantmaking processes

Scaling Effective Thesis, improving it, and/or creating new things sort-of like it

Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.

Increasing and/or improving research by non-EAs on high-priority topics

Creating a central, editable database to help people choose and do research projects

Using Elicit (an automated research assistant tool) or a similar tool

Forecasting the impact projects will have

Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)

This could include things like:

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting

For example:

Increasing and/or improving career advice and/or support with network-building

Examples of existing efforts along these lines include:

Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers

Creating and/or improving relevant educational materials[13]

Creating, improving, and/or scaling market-like mechanisms for altruism

Increasing and/or improving the use of relevant online forums

Increasing the number of EA-aligned aspiring/junior researchers

Increasing the amount of funding available for EA-aligned research(ers)

Discovering, writing, and/or promoting positive case studies


If you have thoughts on these interventions or other interventions to achieve a similar goal, or would be interested in supporting such interventions with your time or money, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts, allow me to connect you with other people you could collaborate with or fund, etc.


  1. Though it’s hard to even say what that means, let alone how much anyone should trust my quick rankings; see also the “Caveats and clarifications” section. ↩︎

  2. Note that even good things can be made better! ↩︎

  3. I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎

  4. For example, one could view each of these intervention options through the lens of creating and/or improving “hierarchical network structures” (see What to do with people? [EA · GW]). ↩︎

  5. But I think it would be possible and valuable to do so. E.g., one could find many examples of people who were hired as a researcher at an EA-aligned org, went through an EA-aligned research training program, or did a PhD under a non-EA supervisor; look at what they’ve done since then; and try to compare that to some reasonable guesses about the counterfactual and/or people who seemed similar but didn’t have those experiences. (I know of at least one attempt [EA · GW] to do roughly this.) It would of course be hard to be confident about causation and generalisability, but I think we’d still learn more than we know now. ↩︎

  6. For example, creating, scaling, and/or improving EA-aligned research organisations and doing the same for EA-aligned research training programs might be complementary goods; more of the former means more permanent openings for the “graduates” of those programs, and more of the latter means more skilled, motivated, and vetted candidates for those orgs. ↩︎

  7. For convenience, I’ll sometimes lump various different types of people together under the label “aspiring/junior researchers”. I say more about this group of people in a previous post of this sequence. ↩︎

  8. See “active funding”. See also field building [? · GW]. ↩︎

  9. This is based on reading some of what they’ve written about their activities, strategy, and impact assessment; talking to people involved in the project; and my more general thinking about what the EA-aligned research pipeline needs. But I haven’t been an Effective Thesis coach or mentee myself, nor have I tried to carefully evaluate their impact. ↩︎

  10. The original Director of CSET and several of its staff have been involved in the EA community, but many other members of staff are not involved in EA. ↩︎

  11. See, for example, Learnings about literature review strategy from research practice sessions [EA · GW]. ↩︎

  12. This idea was suggested as a possibility by Peter Hurford. See some thoughts on the idea here [EA(p) · GW(p)]. ↩︎

  13. I’m grateful to Edo Arad for suggesting I include roughly this intervention idea. ↩︎

25 comments

Comments sorted by top scores.

comment by Linch · 2021-06-11T04:40:02.300Z · EA(p) · GW(p)

Notably missing from this list, but related to 5, 11, and 17 (and arguably 1 and 18), is increasing the number and EA alignment of currently non-EA or weakly EA-aligned senior researchers.

That is, increasing the number of senior EA aligned researchers not via the pipeline of 

get interested in EA -> be a junior EA researcher -> be an intermediate EA researcher -> be a senior EA researcher, 

but via

be a senior researcher -> get interested in EA -> be a senior EA researcher. 

I don't have very obvious examples in mind, but potential case studies so far include Philip Tetlock, David Roodman, Rachel Glennerster, Michael Kremer, Kevin Esvelt, and Stuart Russell. 

Replies from: MichaelA
comment by MichaelA · 2021-06-11T06:04:45.880Z · EA(p) · GW(p)

Yeah, I think this is a quite important point that's sort-of captured by the other paths you mention, but (in hindsight) not sufficiently highlighted/emphasised.

I think another possible example is Allan Dafoe - I don't know his full "origin story", and it's possible he was already very EA-aligned as a junior researcher, but I think his actual topic selection and who he worked with switched quite a lot (and in an EA-aligned direction) after he was already fairly senior. And that seniority allowed him to play a key role in GovAI, which was (in my view) extremely valuable.

One place where I kind-of nod to the path you mention is:

Increasing and/or improving research by non-EAs on high-priority topics [...]

In addition to improving the pipeline for EA-aligned research produced by non-EAs, this might also improve the pipeline for EA-aligned researchers, such as by:

  • Causing longer-term shifts in the views of some of the non-EAs reached
  • Making it easier for EAs to use non-EA options for research training, credentials, etc. (see my next post)
Replies from: HowieL
comment by HowieL · 2021-06-11T17:46:42.965Z · EA(p) · GW(p)

I don't think Allan's really an example of this.

 

I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It shows a dragonfly and then, I don’t know, a primate, and then a human, and then all humans.

Now, that correspondence is hugely problematic. There’s lots we could say about why that’s not a sensible thing to do, but what I think it did communicate was that the likely extrapolation of trends are such that you are going to have very powerful computers within a hundred years. Who knows exactly what that means and whether, in what sense, it’s human level or whatnot, but the fact that this trend is coming on the timescale it was was very compelling to me. But at the time, I thought Kurzweil’s projection of the social dynamics of how extremely advanced AI would play out unlikely. It’s very optimistic and utopian. I actually looked for a way to study this all through my undergrad. I took courses. I taught courses on technology and society, and I thought about going into science writing.

And I started a PhD program in science and technology studies at Cornell University, which sounded vague and general enough that I could study AI and humanity, but it turns out science and technology studies, especially at Cornell, means more a social constructivist approach to science and technology.

. . . 

Okay. Anyhow, I went into political science because … Actually, I initially wanted to study AI in something, and I was going to look at labor implications of AI. Then, I became distracted as it were by a great power politics and great power peace and war. It touched on the existential risk dimensions that I didn’t have the word for it, but was sort of a driving interest of mine. It’s strategic, which is interesting. Anyhow, that’s what I did my PhD on, and topics related to that, and then my early career at Yale.

I should say during all this time, I was still fascinated by AI. At social events or having a chat with a friend, I would often turn to AI and the future of humanity and often conclude a conversation by saying, “But don’t worry, we still have time because machines are still worse than humans at Go.” Right? Here is a game that’s well defined. It’s perfect information, two players, zero-sum. The fact that a machine can’t beat us at Go means we have some time before they’re writing better poems than us, before they’re making better investments than us, before they’re leading countries.

Well, in 2016, DeepMind revealed AlphaGo, and it was almost this canary in the coal mine, that Go was to me, that was sort of deep in my subconscious keeled over and died. That sort of activated me. I realized that for a long time, I’d said post tenure I would start working on AI. Then, with that, I realized that we couldn’t wait. I actually reached out to Nick Bostrom at the Future of Humanity Institute and began conversations and collaboration with them. It’s been exciting and lots of work to do that we’ve been busy with ever since.

https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/

Replies from: MichaelA
comment by MichaelA · 2021-06-11T17:58:45.950Z · EA(p) · GW(p)

I think that quote makes it sound like Allan already had a similar worldview and cause prioritisation to EA, but wasn't aware of or engaged with the EA community (though he doesn't explicitly say that), and so he still seems like sort-of an example. 

It also sounds like he wasn't actively and individually reached out to by a person from the EA community, but rather just found relevant resources himself and then reached out (to Bostrom). But that still seems like it fits the sort of thing Linch is talking about - in this case, maybe the "intervention (for improving the EA-aligned research pipeline)" was something like Bostrom's public writing and talks, which gave Allan a window into this community, which he then joined. And that seems like a good example of a field building intervention?

(But that's just going from that quote and my vague knowledge of Allan.)

Replies from: HowieL
comment by HowieL · 2021-06-11T21:18:05.303Z · EA(p) · GW(p)

Fair enough. I guess just depends on exactly how broad/narrow of a category Linch was gesturing at.

Replies from: Linch
comment by Linch · 2021-06-11T21:23:52.176Z · EA(p) · GW(p)

I think the crux to me is to what extent Allan's involvement in EAish AI governance is overdetermined. If, in a world with 75% less public writings on transformative AI of Bostrom's calibre, Allan would still be involved in EAish AI governance, then this would point against the usefulness of this step in the pipeline (at least with the Allan anecdote).

Replies from: MichaelA
comment by MichaelA · 2021-06-12T05:57:15.056Z · EA(p) · GW(p)

I roughly agree, though would also note that the step could be useful by merely speeding up an overdetermined career move, e.g. if Allan would've ended up doing similar stuff anyway but only 5 years later.

Replies from: Linch
comment by Linch · 2021-06-12T11:06:07.914Z · EA(p) · GW(p)

Yes, I agree that speeding up career moves is useful.

comment by MichaelA · 2021-05-28T14:34:34.304Z · EA(p) · GW(p)

Some quick notes on how my own career, project, and donation decisions have been influenced by thinking about the value of and methods for improving the EA-aligned research pipeline

(Note that most of these decisions were made before I drafted this sequence of posts, and thus weren’t based on my latest thinking. Also, I am likely missing some relevant things and will fail to explain some things well. Finally, as usual, this comment expresses my personal views only.)

Career decisions:

  • Thinking about the EA-aligned research pipeline was a key factor in me choosing to work for Rethink Priorities
    • I got other appealing job offers at the same time as the RP offer
    • A key selling point for RP for me was that, as far as I could tell before joining RP, RP had done well at scaling, being strategic, and assessing its impact, and seemed set to continue to do so
      • And it seemed like I could be a good fit for helping scale the longtermism team, e.g. through later taking on management responsibilities and helping develop RP's longtermist research agendas/priorities
      • I am now more confident that those guesses were correct, and that it made sense to accept the RP offer partly for these reasons
  • I’m currently focusing mostly on testing and improving my fit for research management roles/activities
  • I’ve also taken some steps to test my fit for grantmaking, and am likely to take more such steps soon

Project decisions:

Donation decisions:

  • A desire to improve the EA-aligned research pipeline was a notable factor in me donating to ALLFED and GCRI in 2020
    • Though not the single largest factor
    • I explained those donation decisions here [EA(p) · GW(p)]
  • I’m considering donating this year to Effective Thesis and/or to someone who’s excited about working on the database idea I’ll describe in a later post
comment by Jamie_Harris · 2021-07-16T07:43:34.422Z · EA(p) · GW(p)

Just came here to comment something that's been on my mind that I didn't recall being suggested in the post, though it partly overlaps with your suggestions 1, 2, 4, 11, and 19.

Suggestion: Paid literature reviews with some (relatively low level) supervision.

Context: Since working at Sentience Institute, I've done quite a few literature reviews. (I've also done some more "rough and ready" ones at Animal Advocacy Careers.) I think that these have given me a much better understanding of how social sciences academia works, what sort of information is most helpful etc. A lot of the knowledge comes in handy in places that I wouldn't necessarily have predicted, too. This makes me feel like the benefits might be comparable to the sorts of benefits that I expect lots of people get from PhDs -- some methodological training / familiarity, and some useful knowledge. It wouldn't give you  some benefits of PhDs like signalling value, familiarity with the peer review process, or close mentorship relationships, but if you tried to get the literature reviews published in peer-reviewed journals, then that would add some of those benefits back in (and maybe help to improve the end product too).

Lit reviews can be quite time-consuming, but don't necessarily require any very special skills -- just willingness to spend time on it and look things up (e.g. methodological aspects) when you don't know or understand them, rather than plowing on regardless. Obviously some methodological background in the topic would be helpful, but doesn't always seem necessary; I'm a history grad and have done literature reviews on subjects from psychology to ethics to management.

It might be quite easy to explicitly offer (1) funding and (2) facilitation for independent researchers to be connected to potential reviewers of the end product. It could be up to the individual to suggest topics, or to some centralised body (as in your suggestion 7).

 

I'm not sure whose responsibility this should be. It could be EA Funds, Effective Thesis, or individual research orgs.

 

Caveats

  • I have found review + comments from colleagues helpful, so some supervision may be necessary, but these have tended to cluster at the start and end of projects with the vast majority of the work being independent.
  • To do rigorous systematic reviews, you generally want more than one person actually checking through the data, coding decisions etc, which would require more coordination. But this is not always necessary. Indeed, one of my lit reviews is currently going through the peer review process (and looks likely to be accepted) and didn't use multiple author checks on these decisions. And less formal/systematic literature reviews can still be valuable, I think, both for the researcher and the readers.
Replies from: MichaelA
comment by MichaelA · 2021-07-16T10:23:51.020Z · EA(p) · GW(p)

Thanks! Yeah, this seems like a handy idea. 

I was recently reminded of the "Take action" / "Get involved" page [? · GW] on effectivealtruism.org, and I now see that that actually includes a page on Write a literature review or meta-analysis [? · GW]. That Take action page seems useful, and should maybe be highlighted more often. In retrospect, I probably should've linked to various bits of it from this post.

Replies from: Jamie_Harris
comment by Jamie_Harris · 2021-07-17T07:13:22.973Z · EA(p) · GW(p)

True! I'd forgotten about that page. I think some sort of fairly minimal infrastructure might notably increase the number of people actually doing it though.

Replies from: MichaelA
comment by MichaelA · 2021-07-17T08:05:38.215Z · EA(p) · GW(p)

(Yeah, I didn't mean that this meant your comment wasn't useful or that it wouldn't be a good idea to set up some sort of intervention to support this idea. I do hope someone sets up such an intervention, and I may try to help that happen sometime in future if I get more time or think of a particularly easy and high-leverage way to do so.)

comment by Ben_Snodin · 2021-06-01T09:58:10.322Z · EA(p) · GW(p)

Thanks, I think this is a great topic and this seems like a useful list (although I do find reading through 19 different types of options without much structure a bit overwhelming!).

I'll just ~repost a private comment I made before.

Encouraging and facilitating aspiring/junior researchers and more experienced researchers to connect in similar ways

This feels like an especially promising area to me. I'd guess there are lots of cases where this would be very beneficial for the junior researcher and at least a bit beneficial for the experienced researcher. It just needs facilitation (or something else, e.g. a culture change where people try harder to make this happen themselves, some strong public encouragement to juniors to make this happen, ...).

This isn't based on really strong evidence, maybe mostly my own (limited) experience + assuming at least some experienced researchers are similar to me. And that there are lots of excellent junior researcher candidates out there (again from first hand impressions).

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting

This also seems like a big deal and an area where maybe you could improve things significantly with a relatively small amount of effort. I don't have great context here though.

Replies from: MichaelA
comment by MichaelA · 2021-06-01T19:00:06.457Z · EA(p) · GW(p)

Thanks for these thoughts!

although I do find reading through 19 different types of options without much structure a bit overwhelming!

Interesting. I received similar feedback on the previous post in the sequence, and re-organised it into "clusters" in response to that. And I've received similar feedback on a separate, upcoming draft of mine that also has a big list of things, and due to that feedback I plan to organise that list into clusters before publishing the post. Maybe this is a recurring issue with my writing that I should be on the lookout for. So thanks for that feedback :) 

I guess this also relates to my caveat that "There are various other ways to carve up the space of options, various complementary framings that can be useful, etc.", and to me trying to produce these posts relatively quickly and to be relatively thorough. I expect with more time, I could come up with better ways to organise the space of options - e.g. via creating diagrams representing various different pathways to getting more EA-aligned research or researchers, showing how each intervention could connect to one or more steps on those pathways, and then somehow using that to organise the interventions into broad types and then subtypes. (And if someone else did that, I'd be interested to read what they come up with!)

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-06-02T09:44:08.469Z · EA(p) · GW(p)

One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).

Replies from: MichaelA
comment by MichaelA · 2021-06-02T13:01:54.519Z · EA(p) · GW(p)

Ah, yes, this is probably useful and definitely low-effort (I've now done it in 1 minute, due to your comment). 

The list was actually already in order of how promising I think they are, and I mentioned that in footnote 1. But I shouldn't expect people to read footnotes, and your feedback plus that other feedback I got on other posts suggests that readers want that sort of thing enough / find it useful enough that that should be said in the main text. So I've now moved that info to the main text (in the summary, before I list the 19 interventions).

I think the main reason I originally put it in a footnote is that it's hard to know what my ranking really means (since each intervention could be done in many different ways, which would vary in their value) or how much to trust it. But my ranking is still probably better than the ranking a reader would form, or than an absence of ranking, given that I've spent more time thinking about this. Going forward, I'll be more inclined to just clearly tell readers things like my ranking, and less focused on avoiding "anchoring" them or things like that.

(So thanks again for the feedback!)

comment by MichaelA · 2021-06-06T15:02:25.205Z · EA(p) · GW(p)

In the EA Infrastructure Fund's Ask Us Anything, I asked for their thoughts on the sorts of topics covered in this sequence, e.g. their thoughts on the intervention options mentioned in this post. I'll quote Buck's interesting reply in full. See here [EA · GW] for precisely what I asked and for replies to Buck's reply (including me agreeing or pushing back on some things). 

---

"Re your 19 interventions, here are my quick takes on all of them

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests, by the output of their organization + the output of neighboring organizations under their influence. That is, they should think of one of their key goals with their research interns as having the interns do things that they (the mentors) actually think are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.

Increasing grantmaking capacity and/or improving grantmaking processes [? · GW]

Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.

My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
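As a quick sanity check, the arithmetic in the quoted comment roughly works out. Here's a short back-of-the-envelope sketch; all the specific figures (2 hours per grant, 40 grantees, 4-month grant periods) are taken from the comment above, and the week conversion is my own assumption:

```python
# Back-of-the-envelope check of the grantmaking-time estimate quoted above.
# Assumed figures from the comment: ~2 hours per grant decision
# (30 min with the grantee, 30 min with the mentor, 60 min overhead),
# 40 grantees, 4-month grant periods.
hours_per_grant = 2
num_grantees = 40
grant_months = 4

weeks = grant_months * 52 / 12                 # ~17.3 weeks in 4 months
total_hours = hours_per_grant * num_grantees   # 80 hours of evaluation work
hours_per_week = total_hours / weeks

print(round(hours_per_week, 1))                # ~4.6 hours/week
```

This lands at roughly 4.6 hours per week, consistent with the "five hours a week ... for 40 people" figure in the comment.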

I think that grantmaking capacity is more of a bottleneck for things other than research output.

Scaling Effective Thesis, improving it, and/or creating new things sort-of like it

I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.

I'm not confident.

Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc. [? · GW]

The post doesn't seem to exist yet so idk

Increasing and/or improving research by non-EAs on high-priority topics

I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models [LW · GW], because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.

Creating a central, editable database to help people choose and do research projects [? · GW]

I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.

Using Elicit (an automated research assistant tool) or a similar tool [? · GW]

I feel pessimistic, but idk maybe elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.

Forecasting the impact projects will have [? · GW]

I think this is worth doing to some extent, obviously; I think that my guess is that EAs aren't as into forecasting as they should be (including me unfortunately.) I'd need to know your specific proposal in order to have more specific thoughts.

Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers) [? · GW]

I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting [? · GW]

I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.

Increasing and/or improving career advice and/or support with network-building [? · GW]

Seems cool. I think a major bottleneck here is people who are extremely extroverted and have lots of background and are willing to spend a huge amount of time talking to a huge amount of people. I think that the job "spend many hours a day talking to EAs who aren't as well connected as would be ideal for 30 minutes each, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.

I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.

Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers [? · GW]

I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.

Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.

Creating and/or improving relevant educational materials [? · GW]

I'm not sure; it seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs.

Creating, improving, and/or scaling market-like mechanisms for altruism [? · GW]

I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.

Increasing and/or improving the use of relevant online forums [? · GW]

Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".

Increasing the number of EA-aligned aspiring/junior researchers [? · GW]

I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).

I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.

Increasing the amount of funding available for EA-aligned research(ers) [? · GW]

This seems almost entirely useless; I don't think this would help at all.

Discovering, writing, and/or promoting positive case studies

Seems like a good use of someone's time.

---------------

This was a pretty good list of suggestions. I guess my takeaways from this are:

  • I care a lot about access to mentorship
  • I think that people who are willing to talk to lots of new people are a scarce and valuable resource
  • I think that most of the good that can be done in this space looks a lot more like "do a long schlep" than "implement this one relatively cheap thing, like making a website for a database of projects".
comment by MichaelA · 2021-05-28T14:29:48.727Z · EA(p) · GW(p)

Additional intervention ideas

Here I’ll keep track of additional intervention ideas that have occurred to me since I finished drafting this post. Perhaps in future I’ll integrate some into the post itself.

  • Creating and/or improving EA-relevant journals
    • Could draw more people towards paying attention to important topics
    • Could make it easier for EAs doing graduate programs (especially PhDs) or pursuing academic careers to focus on high-priority topics and pursue them in the most impactful ways
      • That could in turn help with “Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.”
  • Making high-quality data that’s relevant to high-priority topics more easily available
    • The idea here is that “a lot of researchers will follow good data wherever it comes from”
    • (This was suggested by a commenter on a draft of this post)
Replies from: MichaelA, MichaelA, MichaelA
comment by MichaelA · 2021-06-23T06:47:45.383Z · EA(p) · GW(p)

An idea from Linch: [EA(p) · GW(p)]

Red teaming papers as an EA training exercise?

I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important. 

I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who have finished high school, and certainly by the time they finish undergrad with a decent science or social science degree.

I think this is good career building for various reasons:

  • you can develop a healthy skepticism of the existing EA orthodoxy
    • I mean skepticism that's grounded in specific beliefs about why things ought to be different, rather than just vague "weirdness heuristics" or feeling like the goals of EA conflict with other tribal goals.
  • you actually deeply understand at least one topic well enough to point out errors
  • creates legible career capital (at least within EA)
  • requires relatively little training/guidance from external mentors, meaning
    • our movement devotes fewer scarce mentorship resources to this
    • people with worse social skills/network/geographical situation don't feel (as much) at a disadvantage for getting the relevant training
  • you can start forming your own opinions/intuitions of both object-level and meta-level heuristics for what things are likely to be correct vs wrong.
  • In some cases, the errors are actually quite big, and worth correcting (relevant parts of) the EA movement on.

Main "cons" I can think of:

  • I'm not aware of anybody successfully doing a really good critique for the sake of doing a really good critique. The most exciting things I'm aware of (publicly: zdgroff's critique of Ng's original paper on wild animal suffering and alexrjl's critique of Giving Green; I also have private examples) mostly come from people trying to deeply understand a thing for themselves, and then along the way spotting errors with existing work.
  • It's possible that doing deliberate "red-teaming" would make one predisposed to spot trivial issues rather than serious ones, or falsely identify issues where there aren't any.
  • Maybe critiques are a less important skill to develop than forming your own vision/research direction and executing on it, and telling people to train for this skill might actively hinder their ability to be bold & imaginative?

(See also the comments on the shortform [EA(p) · GW(p)].)

comment by MichaelA · 2021-06-23T06:48:28.633Z · EA(p) · GW(p)

An idea from Buck [EA(p) · GW(p)] (see also the comments on the linked shortform itself):

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing [? · GW] but that’s probably too complicated).
  • If I don’t want to give them the money, they can do whatever with the review.

What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:

  • Things directly related to traditional EA topics
  • Things about the world more generally. Eg macrohistory, how do governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
  • I think that books about self-help, productivity, or skill-building (eg management) are dubiously on topic.

Goals:

  • I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
  • It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
  • I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
    • Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
  • It might surface some talented writers and thinkers who weren’t otherwise known to EA.
  • It might produce good content on the EA Forum and LW that engages intellectually curious people.

Suggested elements of a book review:

  • One paragraph summary of the book
  • How compelling you found the book’s thesis, and why
  • The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
  • Optionally, epistemic spot checks
  • Optionally, “book adversarial collaborations”, where you actually review two different books on the same topic.
comment by MichaelA · 2021-06-02T19:48:14.216Z · EA(p) · GW(p)

Rough notes on another idea, following a call I just had:

  • Setting up something in between a research training program and a system for collaborations in high schools, universities, or local EA groups
    • Less vetting and probably lower average current knowledge, aptitude, etc. than research training program participants undergo/have
    • But this reduces the costs for vetting
    • And this opens this up to an additional pool of people (who may not yet be able to pass that vetting)
    • Plus, this could allow more people to test their fit for and get better at mentorship, by mentoring people in these "programs" or simply by collaborating with peers in these programs (since collaboration still has some mentorship-like elements)
      • E.g., in some cases, someone who's just started a PhD or only recently learned about the cause area they're now focused on may not be able to usefully serve as a mentor for a participant in a research training program like SERI, but they may be able to usefully serve as a mentor for a high school student or some undergrads
        • (I'm just saying there'd be some cases in that space in between - there'd also be some e.g. PhD students who can usefully serve as mentors for SERI fellows, and some who can't usefully serve as mentors for high school students)
comment by MichaelA · 2021-05-28T14:29:23.098Z · EA(p) · GW(p)

Complementary perspectives/framings that didn’t quite fit into this post

David Janku of Effective Thesis has written about [EA · GW] interventions - other than Effective Thesis - which also aim to influence which research is generated. I recommend reading that section, but here’s the list of interventions with the explanations and commentary removed:

  1. influencing individuals by giving them information on what the potentially most impactful directions are and motivating them to pursue these directions
  2. providing funding for research directions that seem promising
  3. setting up research organisations producing research in a specific direction
  4. organising research workshops
  5. setting up prestigious prizes/awards
  6. providing mentorship and space for exploration

David adds that an additional approach which doesn't aim to influence which research is generated is “coordination - e.g. connecting students/researchers interested in the same topics”.

---

Meanwhile, Jonas Vollmer of EA Funds has written that [EA(p) · GW(p)], to achieve one possible vision for the EA Long-Term Future Fund:

we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months).

I think that similar points could also be made for longtermist grantmaking by other actors (e.g., Open Philanthropy) and for grantmaking in some other areas (e.g., I’m guessing, wild animal welfare). And I think many of the interventions mentioned in this post might help address those needs.

comment by MichaelA · 2021-05-28T14:28:56.357Z · EA(p) · GW(p)

Here are my thoughts on discovering, writing, and/or promoting positive case studies (moved to a comment since I tentatively think this intervention would be less valuable than the others):

  • I know of some cases (in addition to my own) of people who are now doing impactful EA-aligned research and got to that point partly via something related to one of the interventions discussed elsewhere in this post or sequence
    • E.g., via doing independent research/writing published on the EA Forum, choosing a thesis and getting mentored via Effective Thesis, or doing a research training program
  • But I mostly know these cases because I’m now well-networked in EA, rather than because of easily findable public writeups. And I’d also guess that there are many more cases that I’m not aware of.
  • This could cause people to underestimate how achievable this is, underestimate the value of these “interventions” (e.g., writing on the Forum), or simply have a harder time motivating themselves to try (since success doesn’t feel like a real possibility) 
  • So maybe it’d be valuable to simply: 
    1. Find and collect a larger set of positive case studies
    2. Write many of them up (or record podcasts or videos or whatever)
    3. Promote those writeups (or whatever) in such a way that they’ll be found by the people who’d benefit from them
      • E.g., so that the relevant people would stumble upon these case studies, or so that the people they’d reach out to (e.g., community-builders offering careers advice) would know to mention these case studies
  • This process could also provide useful data on which methods of entering and progressing through the EA-aligned research pipeline have been used, how successful the methods have been, how they could be supported, etc. (Though I think the data collection that would be best for directly encouraging and guiding aspiring/junior researchers would differ from that which is best for guiding efforts to improve the pipeline.)
  • I haven’t thought much about how best to do this, who would be best placed to do it, how valuable it’d be, or what the most similar existing things are
    • Obviously there are already some things like case studies of successful-seeming EA-aligned careers, including research ones. 
    • Maybe WANBAM have done something similar specifically for women, trans people of any gender, and non-binary people?
  • Obvious downside risk: Focusing solely on positive case studies could mislead people about how easy these pathways are and cause them to overly focus on pursuing research roles or roles at explicitly EA orgs
comment by MichaelA · 2021-05-28T14:27:53.607Z · EA(p) · GW(p)

Readers of this post may also be interested in my rough collection of Readings and notes on how to do high-impact research [EA(p) · GW(p)].