Non-pharmaceutical interventions in pandemic preparedness and response 2021-04-08T14:13:04.872Z
How to give as you earn in the UK? GAYE vs. GiftAid 2020-07-14T18:18:15.075Z


Comment by James Smith on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-15T07:49:05.990Z · EA · GW

A system somewhat similar to what you are talking about exists. PubPeer, for example, is a place where post-publication peer reviews of papers are posted publicly. I'm not sure at this stage how much it is used, but in principle it allows you to see criticism of any article. Another relevant tool uses AI to try to say whether citations of an article are positive or negative; I don't know about its accuracy. 

Neither of these addresses the problem of what happens if a study fails to replicate - often the original study continues to be cited more than the replication effort.

Comment by James Smith on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-14T15:05:44.212Z · EA · GW

That view seems reasonable to me and I agree that a clearer analysis would be useful. 

An additional, very minor point that I left out of my comment: I'm sceptical that the relationship between impact factor and retraction (original paper here) is causal. It seems very likely to me that something like the number of views an article gets would be a confounder, and it is not adjusted for as far as I can tell. I'm not totally sure that is the part of the article you were referring to when citing this, so apologies if not!

Comment by James Smith on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-14T14:00:38.848Z · EA · GW

Thanks a lot for writing this post. I'm interested in these topics and was just thinking the other day that a write up of this sort would be valuable. 

A relevant and fairly detailed write-up (not mine) of this problem area and how meta-research might help is available here (I didn't see it cited, but may have missed it).

In terms of the content of the post, a couple of things that I might push back on a little: 

  1. Peer review: I’m not sure that poor peer review (of papers) is a major cause of ineffective value production, though I agree that it is a minor contributor. By the time a project is written up as a paper, it will invariably be published somewhere in the literature in roughly the format in which it was first submitted. If top journals had better peer review (but other journals did not), the research would likely be published elsewhere anyway. Basically, it strikes me as too late in the process to be that important. Poor methodology (which I would attribute largely to lack of training and the incentives to rush research) seems more important. Lack of peer review at an appropriate time in the research process (i.e. before the research is done, to get feedback on methods) also seems more important than the quality of peer review of the final paper (which is what I understood the section on peer review to be describing).
  2. Intellectual property: this seems mostly relevant to a smallish subset of research that is directly involved in making products. Even in those cases, it isn’t clear that IP is a big barrier. In fact, it can be argued that not patenting is better for the development of products in some cases, because it allows multiple commercialisation attempts in parallel with slightly different aims. For an example of this in the context of drug development, see here. The basic idea is that if e.g. a molecule is not patented when it is initially described, you can still patent the use of that molecule for a particular indication, so the molecule can still be commercialised for that indication, while another organisation may pursue the same molecule for another indication. This potentially increases rather than decreases the potential for commercialisation of the molecule. 

I'd be interested in learning what projects you have planned and discussing some solutions to the problems that you have mapped. I'm quite involved in the reproducible research community in the UK (particularly in Oxford), so I could perhaps be helpful. 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-05-25T08:52:02.548Z · EA · GW

Thanks a lot for sharing this. I need to update the post to add this and other research that has been pointed out to me. 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-16T15:30:41.570Z · EA · GW

For future searching, where/how did you come across that paper? 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-15T08:55:17.066Z · EA · GW

Good find - thanks for sharing that paper which I hadn't included.  If I update the post I'll add that. 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-12T10:03:34.031Z · EA · GW

I haven't thought much about this so can't add anything useful at the moment. If I think of / come across anything I'll reply again. 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-09T12:38:40.784Z · EA · GW

Good point. This is similar to what I was trying to get at when talking about lack of willingness to engage in probabilistic reasoning. 

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-09T09:48:28.620Z · EA · GW

Thanks a lot for the comment. I was a bit nervous to put my first post up, so some positive feedback is very much appreciated.

Comment by James Smith on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-09T09:45:51.932Z · EA · GW

Thanks a lot for the comment. I do think that what you’re gesturing at makes sense: if I understand correctly, you are saying that certain physical interventions can have more predictable effects than ‘biological’ ones because we have a decent idea of exactly how they work. In some cases this is definitely true: as an extreme example, we don’t need RCTs of aeroplane safety, as we have a very good understanding of the physical processes and are able to model them well. If we have an airborne pathogen, it’s hardly necessary to run an RCT to see whether or not there is an effect of a stay-at-home order: there will be one. 

In many of the example questions I gave though, I think the fact that there is a large behavioural component pushes us closer to the situation we have with drugs than to the aeroplane. For example, although it could be demonstrated in a laboratory which of mask or shield is actually more effective at blocking exhaled particles, it would be harder to capture the different effects that each has on how often you touch your face, how often it is removed, or other aspects of compliance. These will differ a lot between people, so you’d need to test it on a large group, and the social setting might influence behaviour. I don’t think that we can decompose the often important behavioural component of these interventions in the same way that we can the physical components. 

That said, the air filtration question I posed might not have been well chosen. As you point out, it seems reasonable that we can get a good understanding of whether that is likely to be helpful by applying what we know about the filters and viral transmission. Of the questions I posed, RCTs are likely to be the least useful there and may not be useful at all.

However, I do have some thoughts on why an RCT could still be worthwhile. I’m not saying these because I disagree with your points; I’m just providing some possible counterarguments.  

  1. Learning: by introducing the filters outside an RCT, you are essentially running an experiment but losing the opportunity to learn from it. Even if it has been decided that filters should be introduced in all schools/offices (or whatever unit), it won’t normally be possible to install all of them in parallel. So there is a period when some offices have the filter and some don’t. As long as you can randomise this, you can take advantage of the differences in implementation time in something like a stepped wedge cluster randomised trial. The effect could be analysed on an ongoing basis in a Bayesian analysis, such that large effects would be detected early in the experiment and installation of the remaining filters could be accelerated. If you are doing something like this across several interventions, it would help with deciding which to prioritise. 
  2. Cost-benefit: there are ~137,000 schools in the US, and many more globally. I don’t know how much it costs to install and maintain filtration systems, but I imagine it is not negligible. Doing an RCT comparing e.g. air filtration to opening the windows could save quite a bit of money if it turns out that filtration systems don’t provide additional benefit. 
  3. Implementation and interaction with behaviour: even assuming that they do work, do people use them? Maybe the filtration is noisy, so teachers turn it off; maybe they simply forget to turn it on. In medicine, even with drugs that demonstrably improve the patient’s condition, adherence is (to me) surprisingly low. Perhaps the large rooms where people tend to congregate most cannot be adequately filtered; maybe the filtration system gives people a sense of security, so they congregate more. 
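A toy numerical version of the rolling Bayesian analysis in point 1. All numbers (infection risks, cluster sizes, the rollout schedule) are invented for illustration, and this is simplified relative to a real stepped-wedge analysis, which would also model time effects:

```python
import random

random.seed(0)

# Hypothetical numbers, purely for illustration.
P_CONTROL, P_TREATED = 0.10, 0.06  # per-person infection risk over the period
CLUSTER_SIZE = 200                 # people per school/office
N_CLUSTERS = 10

def infections(p):
    """Simulate the infection count in one cluster of CLUSTER_SIZE people."""
    return sum(random.random() < p for _ in range(CLUSTER_SIZE))

control = {"cases": 0, "n": 0}
treated = {"cases": 0, "n": 0}

prob_helps = 0.0
for step in range(1, N_CLUSTERS + 1):
    # At each step one more cluster has had its filter installed.
    for _ in range(step):
        treated["cases"] += infections(P_TREATED)
        treated["n"] += CLUSTER_SIZE
    for _ in range(N_CLUSTERS - step):
        control["cases"] += infections(P_CONTROL)
        control["n"] += CLUSTER_SIZE

    # Beta(1, 1) prior on each arm's infection rate; Monte Carlo estimate
    # of P(treated rate < control rate) from the two posteriors.
    draws = 2000
    wins = sum(
        random.betavariate(1 + treated["cases"], 1 + treated["n"] - treated["cases"])
        < random.betavariate(1 + control["cases"], 1 + control["n"] - control["cases"])
        for _ in range(draws)
    )
    prob_helps = wins / draws
    print(f"step {step}: P(filters reduce infections) ~ {prob_helps:.2f}")
```

With a genuine difference between arms, the posterior probability climbs towards 1 well before the rollout finishes, which is the point: the decision to accelerate installation could be made partway through.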

Overall, I think the areas where trials would be most useful are those where we can expect relatively modest effects and where there is a larger behavioural component. The combination of several modest effects, if better understood, might be quite important. 

Comment by James Smith on What are the highest impact questions in the behavioral sciences? · 2021-04-07T20:55:05.637Z · EA · GW

Some quick thoughts (there is certainly already research on these, but they seem important, and I don't know about the reliability of existing research): 

  • Scope insensitivity: e.g. why do people find it hard to care proportionally more about proportionally bigger things?
  • Probabilistic reasoning: e.g. how can decision makers be ‘taught’ to take seriously low probability, high impact events?
  • Decision-making under uncertainty: e.g. how can this be improved? Can people be efficiently taught to become more Bayesian?
  • Group decision making: e.g. do more diverse groups really make better decisions?
  • Meta: e.g. how can social science become more reliable?

Comment by James Smith on peterbarnett's Shortform · 2021-03-11T14:10:01.415Z · EA · GW

I like this perspective. I've never really understood why people find the repugnant conclusion repugnant! 

Comment by James Smith on How valuable would more academic research on forecasting be? What questions should be researched? · 2021-02-25T11:07:14.157Z · EA · GW

Not really answering your question, but there is some recent work attempting to forecast clinical trial results that may be relevant: Can Oncologists Predict the Efficacy of Treatments in Randomized Trials? Kimmelman (the senior author) is doing other work on the topic too (e.g. here). I'm not aware of much published work in this space in a biomedical context. 

My guess is that key decision makers in medicine (e.g. funders of trials) would not be very open to paying attention to forecasts (even if shown to be accurate to some degree), as there is a very strong culture of relying on data, and in particular on RCTs. 
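To make "accurate to some degree" concrete: probabilistic forecasts like these are commonly scored with a proper scoring rule such as the Brier score. A minimal sketch, with all forecasts and outcomes invented:

```python
def brier(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    0 is perfect; always guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities that each trial meets its primary endpoint,
# and what actually happened (1 = met, 0 = not met). Made-up numbers.
forecasts = [0.8, 0.3, 0.6, 0.9, 0.2]
outcomes = [1, 0, 0, 1, 0]

print(brier(forecasts, outcomes))              # lower is better
print(brier([0.5] * len(outcomes), outcomes))  # uninformed baseline: 0.25
```

A forecaster only "adds value" here if they reliably beat the uninformed baseline, which is the kind of evidence funders would presumably want before acting on forecasts.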

Comment by James Smith on How do ideas travel from academia to the world: any advice on what to read? · 2021-02-25T10:40:48.210Z · EA · GW

This may be less meta than you are hoping for, but may contain some useful advice/references: The dos and don’ts of influencing policy: a systematic review of advice to academics. Influencing policy is at least one way that academic ideas can travel to the wider world. 

I expect another is producing accessible content on the topic in question (e.g. writing popular blog posts, books, or documentaries). These can sometimes be a catalyst for ideas becoming more widely known among the public. Examples of books that might have had, or could have, a broad impact are Animal Liberation (Peter Singer), Silent Spring (Rachel Carson), Doing Good Better (Will MacAskill) and Human Compatible (Stuart Russell). 

Comment by James Smith on How can non-biologists contribute to wild animal welfare? · 2021-02-21T16:25:20.633Z · EA · GW

As someone who did an undergraduate degree in biology, I think that as a computer scientist you probably already have many of the skills you'd need to contribute to biology research directly. Welfare biology is a very new field, so getting on top of the literature would likely not be too tricky, and most biologists would not have an in-depth understanding of that particular sub-field anyway. There may be systematic reviews or modelling studies that you could contribute to, or you could look for existing datasets that could be reanalysed through a welfare biology 'lens'. 

In general, I think it's much easier to go from comp-sci/maths/something quantitative to bio than the other way around, as bio is not particularly 'linear' (i.e. there isn't necessarily a base of knowledge that everyone has and builds on over time). 

Comment by James Smith on How to give as you earn in the UK? GAYE vs. GiftAid · 2020-07-28T01:36:32.386Z · EA · GW

Thanks a lot for this response. I would probably donate through EA Funds, so yes, that should work. It seems like doing that with GiftAid will be a better bet than GAYE in my case, then. The tip about HMRC is really useful to know - I have a friend in a higher tax bracket who is giving regularly through GAYE and paying the 4% fee, so I've recommended that he try this instead.

Comment by James Smith on What book(s) would you want a gifted teenager to come across? · 2020-07-14T19:32:39.038Z · EA · GW

The Precipice by Toby Ord would be high up on my list. It is accessible and covers a lot of ground, illustrating a diversity of possible career paths and study areas that are relevant to existential risk.