Learnings about literature review strategy from research practice sessions

post by alexlintz · 2020-11-20T08:35:44.698Z · EA · GW · 6 comments

Earlier this year, a few other early-career researchers and I decided to try creating sessions where we could deliberately practice certain research skills in a high-feedback environment, building intuitions and refining techniques. This grew out of frustration with the slow feedback loops typical of research, the exciting prospect of improving skills that should pay off consistently over an entire career, and a lack of good resources on how to actually do research.

This document collects some of the lessons learned from our first research practice experiment, a series of five literature review practice sessions with 3-6 longtermist researchers each. Note, though, that many of the benefits came from improved intuitions or small details about how to do literature reviews, which were hard to boil down into concrete advice. For example, we found this to be a useful environment to explore the finer details of how to do research that do not otherwise come up easily in conversation (e.g. how many searches on Google Scholar do you do before you start reading papers?), to build intuitions on research taste (i.e. judging the trustworthiness of a paper quickly), and to better understand what we actually do when we do literature reviews (think for a moment: do you actually know what steps you take when starting a review?).

As such, I think it is likely worthwhile to try the exercise for yourself with a group of peers. The structure we used was as follows:

The following are techniques we identified as useful. We believe these should be valid for literature reviews on almost any topic. Note, though, that we focused primarily on questions within the social sciences, so take this advice with a grain of salt if you are doing technical work.

Thanks to Nora Amman, Jan Brauner, and Ben Snodin for feedback. Thanks to Nora Amman, Ondrej Bajgar, Jan Brauner, Lukas Finnveden, Chris van Merwijk, and others for attending the sessions, coming up with some of the above ideas, and helping improve the process!


Comments sorted by top scores.

comment by gavintaylor · 2020-11-20T22:06:53.360Z · EA(p) · GW(p)

This article on doing systematic reviews well might also be of interest if you want to refine your process to make a publishable review. It's written by environmental researchers, but I think the ideas should be fairly general (e.g. they mention Cochrane for medical reviews).

I'd also recommend having a look at Iris.ai. It is a bit similar to ConnectedPapers but works off a concept map (I think) rather than a citation map, so it can discover semantic linkages between your paper of interest and others that aren't directly connected through reference links. I've just started looking at it this week and have been quite impressed with the papers it suggested.

The idea of doing deliberate practice on research skills is great. I agree that learning to do good research is difficult and poor feedback mechanisms certainly don't help. Which other skills are you aiming to practice?

Replies from: alexlintz
comment by alexlintz · 2020-11-23T16:14:55.937Z · EA(p) · GW(p)

Iris.ai sounds potentially useful, I'll definitely check it out!

So far we've done some things on inspectional note-taking, finding the logical argument structure of articles, and breaking down questions into subquestions. I'm not too sure what the next big thing will be though. Some other ideas have been to practice finding flaws in articles (but it takes a bit too long for a 2hr session and is too field specific), abstract writing, making figures, and picking the right research question. 

I haven't been spending too much time on this recently though, so the ideas for actually implementing these aren't top of mind.

comment by JoelMcGuire · 2020-11-20T19:41:27.364Z · EA(p) · GW(p)

...a lack of good resources on how to actually do research.

Yes! It's hard to convey that you need to have already done a literature search to know what you need to search in the first place.

I second "Focus on breadth first!". Googling is cheap. Search longer than you think you need to. An additional good paper can be decisive in forming a view on a new topic. 

Searching smartly is often more effective than going down the citation trail.

I think "going down the citation trail" can often be very fruitful, especially if you search citations within a foundational article. E.g., 

Also: a good template can help you organize and focus your search. I only sorted the studies I found by their most salient features (the 4 colored 0/1 columns) after I'd gathered quite a few.
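The sorting workflow described above can be sketched in code. This is a minimal, hypothetical sketch: the original comment doesn't name the template's columns beyond "4 colored 0/1 columns", so the paper titles and feature names below are illustrative assumptions.

```python
# Hypothetical literature-tracking table: each entry is a paper plus
# binary (0/1) flags for its most salient features, mirroring the
# "gather first, then sort by salient features" workflow.
papers = [
    {"title": "Paper A", "year": 2018, "rct": 1, "longitudinal": 0, "open_data": 0, "peer_reviewed": 1},
    {"title": "Paper B", "year": 2020, "rct": 0, "longitudinal": 1, "open_data": 0, "peer_reviewed": 1},
    {"title": "Paper C", "year": 2015, "rct": 1, "longitudinal": 1, "open_data": 1, "peer_reviewed": 0},
]

# The four salient-feature columns (assumed names, not from the post).
feature_cols = ["rct", "longitudinal", "open_data", "peer_reviewed"]

# Rank papers by how many salient features they have, most first.
ranked = sorted(papers, key=lambda p: sum(p[c] for c in feature_cols), reverse=True)

for p in ranked:
    print(p["title"], sum(p[c] for c in feature_cols))
```

The point of sorting only after gathering quite a few papers is that the salient features themselves often only become clear once you've seen the range of what's out there.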

 I did not know about http://connectedpapers.com/. Seems useful! 

Replies from: alexlintz
comment by alexlintz · 2020-11-23T16:19:46.485Z · EA(p) · GW(p)

Yes! You're totally right that going down the citation trail with the right paper can be better than search, I just edited to reflect that.

This spreadsheet seems great. So far we've only found ways to practice the early parts of literature review, so we never created anything so sophisticated, but that seems like a good method.

comment by Jamie_Harris · 2020-11-23T20:45:13.687Z · EA(p) · GW(p)

"Searching smartly is often more effective than going down the citation trail" I'd love more detail / clarification on this if you're happy to share? I think I pretty much exclusively go down the citation trail.

Relatedly, what's the benefit of having "a pile of papers ready to look at" before you start reading them? Unless you're trying to be systematic and comprehensive (in which case you might as well gather them all first), it seems to me that reading through papers as you go helps you realise if you need to adjust your search terms or add new ones, or if you're just hitting diminishing returns on the review generally. I pretty much just Google Scholar search and start reading the first item that comes up.

Replies from: alexlintz
comment by alexlintz · 2020-12-24T12:09:26.557Z · EA(p) · GW(p)

Yeah, maybe I should change some text... but I guess I have the assumption built in that when finding papers which seem relevant you'd be reading the abstract, getting a basic idea of what they're about, and then adjusting search terms.

The reason having a pile of papers is useful is that the value of papers is extremely uneven for any given question, and by having a pile you get a better feel for the range of what people say about a topic before diving into one perspective. Wrt the first point, I'd argue that in most cases there are one or two papers which would be perfect for getting an overview. Reading those might be 100x more valuable than reading something which is just kind of related (what you are likely to find on the first search). If that's true, it's clearly worth spending a lot of time looking around for the perfect paper rather than jumping into the first one you find. Obviously this can be overdone, but I expect most people err toward too little search. Note that you might also find the perfect paper by skimming through an imperfect one. I tend to see this as another way of searching, since you can look for it without actually 'reading' the paper, just by skimming through its lit review or intro.