Notes on EA-related research, writing, testing fit, learning, and the Forum

post by MichaelA · 2021-03-27T09:52:24.521Z · EA · GW · 8 comments

Contents

  Disclaimers
  Regarding writing, the Forum, etc.
  Research ideas
  Programs, approaches, or tips for testing fit for (longtermism-related) research
  Getting “up to speed” on EA, longtermism, x-risks, etc.
  Other

Cross-posted to LessWrong [LW · GW].

I've had calls with >30 people who are interested in things like testing their fit for EA-aligned research careers, writing on the Forum, "getting up to speed" on areas of EA, etc. (This is usually during EA conferences.) 

I gradually collected a set of links and notes that I felt that many such people would benefit from seeing, then turned that into a Google Doc. Many people told me they found that doc useful, so I'm now (a) sharing it as a public post, and (b) still entertaining the hypothesis that those people were all just vicious liars and sycophants, of course. 

Disclaimers

Regarding writing, the Forum, etc.

Research ideas

Programs, approaches, or tips for testing fit for (longtermism-related) research

Not all of these things are necessarily "open" right now. 

Here are things I would describe as Research Training Programs [? · GW] (in alphabetical order to avoid picking favourites):

Note: I know less about the opportunities at the Center for Reducing Suffering and the Nonlinear Fund than about the others, so I'm not necessarily able to personally endorse those two opportunities. 

Here are some other things:

Getting “up to speed” on EA, longtermism, x-risks, etc.

Other

I'd welcome comments suggesting other relevant links, or just sharing people's own thoughts on any of the topics addressed above!

8 comments


comment by nil (eFish) · 2021-04-04T00:23:27.423Z · EA(p) · GW(p)

Thanks for sharing, Michael!

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

Relatedly, CRS has an internship opportunity.

Also, perhaps this is intentional, but "Readings and notes on how to do high-impact research" appears twice in the list.

Replies from: MichaelA, MichaelA, MichaelA
comment by MichaelA · 2021-04-04T01:59:10.684Z · EA(p) · GW(p)

Relatedly, CRS has an internship opportunity.

Thanks for mentioning this - I've now added it to the "Programs [...]" section :)

comment by MichaelA · 2021-04-04T01:59:35.452Z · EA(p) · GW(p)

Also, perhaps this is intentional, but "Readings and notes on how to do high-impact research" appears twice in the list.

This was intentional, but I think I no longer endorse that decision, so I've now removed the second mention.

comment by MichaelA · 2021-04-04T01:56:18.722Z · EA(p) · GW(p)

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

I definitely think that list is within scope for this document, but (or "and relatedly") I've already got it in the Central directory for open research questions [EA · GW] that's linked to from here.

There are many relevant collections of research questions, and I've already included all the ones I'm aware of in that other post. So I think it doesn't make sense to add any here unless I think the collection is especially worth highlighting to people interested in testing their fit for (longtermism-related) research. 

I think the 80k collection fits that bill due to being curated, organised by discipline, and aimed at giving a representative sense of many different areas. I think my "Crucial questions" post fits that bill due to being aimed at overviewing the whole landscape of longtermism in a fairly comprehensive and structured way (though of course, there's probably some bias in my assessment here!). 

I think my history topics collection fits that bill, but I'm less sure. So I've now added below it the disclaimer "This is somewhat less noteworthy than the other links".

I think my RSP doc doesn't fit that bill, really, so in the process of writing this comment I've decided to move that out of this post and into my Central directory post. 

(The fact that this post evolved out of notes I shared with people also helps explain why stuff I wrote has perhaps undue prominence here.)

comment by MichaelA · 2021-03-27T09:54:37.990Z · EA(p) · GW(p)

Here's one other section that was in the doc. I'm guessing this section will be less useful to the average person than the other sections, so I've "demoted" it to a comment.

Some quick thoughts regarding the value of posting on the Forum and/or conducting independent research, in my experience

  • Note that:
    • This section is lightly edited from what I wrote ~August 2020; I didn't bother fully updating it with newer evidence and thoughts
    • This may of course not generalise to other people.
    • Some of this work was independent, some was associated with Convergence Analysis (which I worked for), and some was in between
  • Doing this definitely improved my thinking, my network, and how well-known I am among EAs
    • Not sure how much the third thing actually matters
  • Doing this seems to have accelerated my career trajectory via the above and via providing evidence of my abilities
  • I have some evidence of impact from my work
  • The network-building/signalling from this may have also helped me have impact in other ways
comment by Mauricio · 2021-03-28T00:41:42.502Z · EA(p) · GW(p)

Thanks, Michael!

Another opportunity that just came out is the Stanford Existential Risks Initiative's summer research program - people can see info and apply here. This summer, we're collaborating with researchers at FHI, and all are welcome to apply.

Replies from: MichaelA
comment by MichaelA · 2021-03-28T03:59:53.140Z · EA(p) · GW(p)

Yeah, thanks for pointing this out! SERI seems cool to me, and I've now added a link to that form :)

(I actually added the link right before you made your comment, I think, due to someone else highlighting it to me in a different context. But it was indeed absent from the initial version of the post.)

comment by MichaelA · 2021-04-01T01:31:11.892Z · EA(p) · GW(p)

Some people might also find it useful to check out EA-related facebook groups, which there's a directory for here: https://www.facebook.com/EffectiveGroups/