Improving EAs’ use of non-EA options for research training, credentials, testing fit, etc.

post by MichaelA · 2021-09-11T13:52:15.738Z · EA · GW · 2 comments

Contents

  Summary
  Caveats and clarifications
  What are some non-EA options for research training, credentials, etc.?
  Pros of EAs using these non-EA options
    Training
    Testing fit
    Credentials
    Use up fewer EA resources
  Cons of EAs using these non-EA options
    Training
    Testing fit
    Credentials
    Value drift
  What are some ways EAs’ use of these options could be increased or improved?
    Raising awareness and providing encouragement
    Guiding EAs towards the most suitable options
      Example intervention: List of useful PhD supervisors
    Financially supporting the use of these options
      Example intervention: Funding EAs to work at think tanks[7]
    Creating and/or improving these non-EA options

See the post introducing this sequence [EA · GW] for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole.[1] This post doesn’t necessarily represent the views of my employers.

Summary

Caveats and clarifications

What are some non-EA options for research training, credentials, etc.?

I mean this to contrast with “EA options” such as:

Pros of EAs using these non-EA options

Training

These non-EA options might provide better training (for the relevant EA’s needs) than EA options would, because:

I think it’s at least clear that some of these non-EA options provide better training for some purposes than some EA options do. But it seems less clear to me whether non-EA or EA options are “better on average”. And it seems more productive to think about whether non-EA or EA options are better on average for specific types of people, career plans, etc., and ideally to break “non-EA options” and “EA options” down into more fine-grained categories when thinking about this.[5] Similar caveats apply to the following points as well.

(See also this comment thread [EA(p) · GW(p)].)

Testing fit

These non-EA options will serve better for testing fit for some later roles/projects than EA options would.

Credentials

These non-EA options might tend to provide credentials that are more prestigious, widely recognised, credible, etc. This is most relevant when later seeking jobs, funding, etc. from non-EA sources.

Use up fewer EA resources

Using the non-EA options uses up fewer “EA resources”, especially the scarce time of (relatively) senior EA researchers. Other relevant resources include time spent on vetting by EA hirers or grantmakers, and time or money spent producing (or running) EA research training programs, educational materials, etc.

Cons of EAs using these non-EA options

Training

These non-EA options might provide training that's less good for the relevant EA’s needs than EA options would, because:

(But this will of course differ depending on what specific options are being compared and what the specific EA’s future plans are.)

Testing fit

These options will serve less well for testing fit for some later roles/projects.

This is partly for the reasons noted above. It's also partly because some non-EA options require strong commitments and leave little room for exploration. In particular, for a PhD, one often has to choose a relatively narrow focus in advance and stick to it for several years. And having completed most of a PhD program seems to be a much weaker credential for many purposes than having completed a PhD program, which reduces the value of trying a PhD for a year or two.

Credentials

These non-EA options will sometimes provide credentials that are less relevant, credible, etc., for the relevant EA’s needs than the credentials an EA option would provide. For example, I believe at least some people involved in hiring for EA research roles would see high-quality blog-post-style explicitly EA research as a better proxy for an applicant’s fit for their roles than the completion of a PhD program (except where the PhD is especially relevant). Additionally, in some cases, the credentials from non-EA options would be less prestigious and widely recognised - for example, in the case of an obscure online course vs a DPhil done through the Future of Humanity Institute at Oxford.

Value drift

Using non-EA options may create a higher chance of value drift [? · GW].

What are some ways EAs’ use of these options could be increased or improved?

Essentially, I see four main types of interventions for achieving this goal.

Raising awareness and providing encouragement

Meaning: Simply raise awareness of these options and the benefits of using them, and/or encourage their use.

Guiding EAs towards the most suitable options

Meaning: Help guide people to either non-EA or EA options (depending on what’s appropriate in their individual situation or type of situation), or help guide them towards the non-EA options that are particularly high-quality and suited to their needs.

Example intervention: List of useful PhD supervisors

I think someone should create a list of potential PhD supervisors who are either focused on high-priority topics or flexible enough that they’re happy to supervise work on such topics.

If you're interested in helping make that happen, please let me know, and I could put you in touch with another person who independently had a similar idea and might implement it at some point.

It could also be good to ask people what processes or proxies they used to find the relevant kind of PhD supervisor, and to write this guidance up somewhere or use it to expand the list.

Financially supporting the use of these options

Obviously this could include providing scholarships, grants, etc. to people doing graduate degrees (as is already often done by, for example, Open Philanthropy or the EA Long-Term Future Fund). Another approach is discussed below.

Example intervention: Funding EAs to work at think tanks[7]

One could fund EAs to work at prestigious think tanks alongside or under excellent researchers, perhaps on topics that the EA and/or the funder are especially keen for the EA to work on.

Advantages of this approach:

(Of course, people can seek jobs there without bringing their own funding!)

(I think it would also be useful to work out which think tanks and collaborating/supervising researchers would be best for this, which would be an example of "Guiding EAs towards the most suitable options", similar to creating a list of useful PhD supervisors, as discussed above.)

Creating and/or improving these non-EA options

Meaning: Work to build fields [? · GW], shift incentives, shift norms, etc. such that more relevant non-EA options come into existence and/or become more useful for EAs seeking research-relevant training, credentials, testing fit, etc.

See my previous post’s section on “Increasing and/or improving research by non-EAs on high-priority topics” [EA · GW] for further thoughts relevant to this.

---

If you have thoughts on these ideas or would be interested in implementing (with funding) projects to help with this sort of thing, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts; allow me to provide advice or connections; etc.


  1. For this post in particular, I should especially thank Nora Ammann, Edo Arad, Alexis Carlier, Peter Hurford, and an Anonymous Intellectual Benefactor. ↩︎

  2. I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎

  3. Here I use "credentials" as shorthand for something like "credible signals of fit", which can include not just completed degrees and work experience but also published outputs, strong letters of recommendation, etc. ↩︎

  4. Perhaps also “bootcamps” that are analogous to coding bootcamps but that are more relevant to research. But I don’t know if such things exist. ↩︎

  5. I think some of this thinking has been done and written up, for example in some 80,000 Hours career reviews, but I expect there’s room for more valuable work here. ↩︎

  6. Or just have conversations with them, but that seems less good. ↩︎

  7. This idea, and several of the specific points I make, are based on a conversation with someone who’s been thinking about this as an intervention for improving the EA-aligned research pipeline. ↩︎

2 comments


comment by Locke_USA · 2021-09-12T14:51:22.887Z · EA(p) · GW(p)

For more on "Example intervention: Funding EAs to work at think tanks", see here [EA · GW]. That post and those notes are specific to the US system; I'm not sure it would work (or at least work the same way) in other systems. Think tanks are also much bigger parts of the policy research ecosystem in the US than in other countries. I'm a big fan of this model, but I'm not sure anyone has checked whether it could work outside of the US context.

A couple of other caveats:

Think tanks tend to have more flexibility than academia in what they write about, as their reports don’t have to pass peer review, fit into established journals, etc.

I don't think this is true. Think tank researchers indeed face fewer journal/peer review constraints, but they have some additional ones, especially perceptions of policy relevance. There are academic journals/conferences for most topics, but you're going to have a hard time finding a think tank interested in speculative longtermist research. My guess is a large majority (probably >75%) of EA researchers (even those who would self-identify as being interested in "policy") would have a rather hard time with think tank constraints.

apparently some (or many?) think tanks are able and willing to essentially just accept funding for a specific person to work on a specific topic (with the funder deciding on the person and the topic).

From a think tank perspective, there is a big difference between flexible individual-level funding and individual-level funding to work on a specific topic from a specific perspective. Most think tanks are very sensitive about the optics of being "bought" by outside interests. They're fine with outside funding and eager for free labor, but I think many (especially reputable/high-quality) think tanks would not want to accept someone who comes in saying "I come to you from X funder and they want me to write Y and Z." The easiest way to get around this issue is joining a think tank that has overlapping interests (e.g. if you want to work on nuclear nonproliferation, you can join the Nuclear Threat Initiative or the Arms Control Association teams already working on that issue).

Replies from: MichaelA
comment by MichaelA · 2021-09-12T17:27:12.974Z · EA(p) · GW(p)

Nice, thanks for that info! I'll check out that post soon, and might reach out to you with questions at some point.