The Intellectual and Moral Decline in Academic Research

post by gavintaylor · 2020-02-07T16:47:32.079Z · score: 20 (10 votes) · EA · GW · 17 comments

This is a link post for https://www.jamesgmartin.center/2020/01/the-intellectual-and-moral-decline-in-academic-research/

A very pessimistic view on the state of research quality in the US, particularly in public health research. Some choice quotes:


My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields.

Yet, more importantly, training in “science” is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts.


From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.


The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor.


academic research is often “conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.” In other words, taxpayers fund studies that are conducted for non-scientific reasons such as career advancement


Incompetence in concert with a lack of accountability and political or personal agendas has grave consequences: *The Economist* stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.


Still, the author says there is hope for reform. The last three paragraphs suggest abolishing overheads, limiting the number of grants received by PIs and their maximum age, and preventing the use of public funding for publicity.

17 comments

Comments sorted by top scores.

comment by Tom_Beggs · 2020-02-07T19:39:06.189Z · score: 12 (12 votes) · EA(p) · GW(p)

I would argue the article is extremely pessimistic.

Yes, funds sometimes get misallocated or are given to people who have committed fraud.

More often, they go to hard-working researchers who really don't make that much at all...people who hate fake or misleading scientific claims more than the average taxpayer.

And yes, there's a replication crisis...that people are aware of and are working to address.

In short, I think the author uses an extremely broad brush: "The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor."

And yet, scientific breakthroughs happen all the time and the world is better for it.

In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

Most of the researchers I've met are honest and hard-working and doing their best to get it right, even in the face of challenging questions and strained resources.

comment by gavintaylor · 2020-02-09T17:12:24.071Z · score: 10 (7 votes) · EA(p) · GW(p)

I agree that it's an extreme stance and probably overly-general (although the specificity to public health and biomedical research is noted in the article).

Still, my feeling is that this is closer to the truth than we'd want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I've seen that the topic (e.g. to round out somebody's profile) and participants (e.g. re-doing experiments somebody else did so they don't have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefits rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, as the value of papers to the former is much more about whether they will help get grants and a faculty position than about their scientific merit. For example, it's very common that a group of post-docs and PhD students will collaborate to produce a paper without a professor to 'demonstrate' their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers' time could arguably have produced more scientifically meaningful results had they focused on their main projects. Of course, it's hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from the perverse incentive of careerism.

My impression is that questionable research practices probably vary a lot by research field, and the fields most susceptible to poor practices are probably the ones where the value of the findings won't really be known for a long time, like basic biology. My experience in neuroscience and biology is that much more 'spin', speculation, and storytelling goes into presenting biological findings than went into robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-up game in biology, where your work really has to be presented as a critical step towards an applied outcome (like curing a disease or inspiring a new type of material) for anybody to take it seriously, even when it's clearly blue-sky research that hasn't yet found an application.

As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn't get any, a common story), being outside the Ivory Tower can provide a lot more perspective on its failings than you get from inside it.

comment by Tom_Beggs · 2020-02-10T18:56:53.956Z · score: 3 (3 votes) · EA(p) · GW(p)

I wouldn't say that there are no inefficiencies in academia. There are inefficiencies in every line of work.

I would say that, on the whole, a lot of great work still gets done.

I definitely wouldn't say that academia is rife with "incompetence in concert with a lack of accountability."

Sure, there are people with PhDs who are not strong researchers. There are a lot of them who are, though.

We may just disagree on the ratio of the two groups based on our own experiences.

comment by willbradshaw · 2020-02-11T22:32:03.333Z · score: 4 (5 votes) · EA(p) · GW(p)

In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

I downvoted this comment based on this paragraph. Arch speculations that a position taken is probably due to inadequacies and personal frustrations of the author are nearly always uncharitable, unwarranted and, in my experience, well-correlated with sloppy and defensive thinking.

No, the guy probably isn't just mad because he couldn't cut it in academia.

comment by Tom_Beggs · 2020-02-12T23:05:43.938Z · score: 3 (3 votes) · EA(p) · GW(p)

Thanks for your feedback.

I was trying to figure out why the author would be so, so critical of scientific research.

I would say he was downright uncharitable, in fact.

It turns out that he's also argued quite strongly that high levels of refined sugar in people's diets are no problem: e.g., https://www.sciencedaily.com/releases/2018/08/180827110730.htm

To do so, he has to throw aside mountains of scientific research. I would say his attack above is a necessary part of that effort.

So while I am concerned about inefficiencies in academic work and the waste of taxpayer dollars, I'm much more worried about the effects of corporate money on research.

comment by gavintaylor · 2020-02-13T14:34:46.242Z · score: 3 (3 votes) · EA(p) · GW(p)

Thanks for the discussion on this Tom and Will.

I originally posted this article because, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many values out of context (as pointed out by Romeo and Will), I thought that the sentiment aligned both with the direction I personally felt science was moving and with several other sources I'd read. I hadn't looked into any of the author's other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it, Tom).

For a more balanced critique of recent scientific practice I'd recommend the book Real Science by John Ziman (I have a pdf, PM if you'd like a copy). It's a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an 'academic' to a 'post-academic' phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, which represents a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short term, the 'post-academic' system is claimed to lose some important features of traditional research, like disinterestedness, organised skepticism, and universality, and tends to trade quality for quantity. The influence of societal interests (including corporate goals) would be expected to have much influence on the work done by 'post-academic' researchers.

Agreed with both Will and Tom that there certainly are still a lot of people doing good academic research, and how you weight the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience), and I'm glad to hear that you both have more positive outlooks.

comment by willbradshaw · 2020-02-12T23:31:36.520Z · score: 3 (3 votes) · EA(p) · GW(p)

Yeah, I don't want to imply that I strongly support the original claims. I think there are lots of very serious problems with incentives and epistemics in science, but nevertheless that both the incentives and the epistemics of scientists are unusually good in important ways.

(As an anecdote that probably shouldn't be taken as strong evidence, but that I found striking, I once tried out the 2-4-6 test [LW · GW] on my lab, and IIRC something like two-thirds of members got the right answer first-time, and both group leaders present did so fairly quickly.)

I'm also very worried about the effects of corporate funding on research, at least in some domains.

comment by Tom_Beggs · 2020-02-12T23:40:09.962Z · score: 2 (2 votes) · EA(p) · GW(p)

Yeah, the more I looked into the guy, the more his critique fit into context. His work finds a home on some websites of questionable repute. haha

And as you point out, the people you meet in academia generally don't tend to be as he's characterized them.

I would be willing to bet that he has a financial motive to argue against the prevailing scientific consensus, just as we see in other instances where facts turn out to be inconvenient for corporate interests.

comment by Aaron Gertler (aarongertler) · 2020-02-12T20:42:15.059Z · score: 3 (2 votes) · EA(p) · GW(p)

I agree with this comment and retracted my upvote for the same reason, though I thought the rest of Tom's comment was quite reasonable (see Alexey Guzey for some examples of quiet scientific progress).

comment by RomeoStevens · 2020-02-07T18:54:07.711Z · score: 10 (7 votes) · EA(p) · GW(p)

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

We can't know if this is a good or bad number without context.

comment by gavintaylor · 2020-02-07T20:51:41.697Z · score: 14 (7 votes) · EA(p) · GW(p)

Good point. Unfortunately the Economist article referenced for this number is paywalled for me, and I'm not sure if it indicates the total number of clinical trial participants during that time.

Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.

I expect 80,000 patients would be at most 1% of the total clinical trial participants during that 10-year window, so this claim might be a bit over-emphasised (although it does seem striking at first read).
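The back-of-envelope estimate above can be sketched out explicitly. This assumes, as a rough approximation, that the 2009 US figure of 2.8 million participants is representative of each year in the 2000-2010 window; the real total could differ considerably.

```python
# Rough check of how the 80,000 retraction-affected patients compare to
# the total clinical-trial population, assuming the 2009 US participation
# rate (2.8 million/year) held over the whole 2000-2010 window.
participants_per_year = 2.8e6   # US registered-trial participants, 2009
years = 10                      # the 2000-2010 window
affected = 80_000               # patients in trials based on later-retracted research

total_participants = participants_per_year * years
fraction = affected / total_participants
print(f"{fraction:.2%}")  # roughly 0.29%, i.e. well under 1%
```

Even if the yearly participation estimate is off by a factor of two in either direction, the affected patients remain a small fraction of the total.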

comment by willbradshaw · 2020-02-08T17:33:29.486Z · score: -3 (6 votes) · EA(p) · GW(p)

From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.

https://www.tylervigen.com/spurious-correlations

comment by mike_mclaren · 2020-02-11T01:51:59.717Z · score: 2 (2 votes) · EA(p) · GW(p)

Can you clarify the point you're trying to make with the reference to spurious correlations, Will? I don't think the author is trying to make any deep claim about causation here, but just pointing out that a growing amount of taxpayer money is wasted due to retractions. (I appreciate the point from other commenters that this is still presumably a small fraction of the total funding though and so might not be as big a concern as the author suggests.)

comment by willbradshaw · 2020-02-11T22:43:16.892Z · score: 8 (6 votes) · EA(p) · GW(p)

Sure.

Taken at face value, the claim is that taxpayer funding and number of retractions have increased over time, at rates not hugely different from one another. I think both can almost entirely be accounted for by an increase in the total number of researchers. If you have more researchers producing papers, this will result in both a big increase in funding required and in number of papers retracted without any change in the quality distribution.

I would want to see evidence for a big increase in retractions per number of researchers, researcher hours or some other aggregative measure before taking this seriously as a claim that science has got worse over time. It's well-known that if you don't control for the total number of people in a place or doing a thing, all sorts of things will correlate (homicides and priests, ice-cream sales and suicides, etc.).

More substantively, I also disagree with the claim that a big increase in retractions is evidence of scientific decline. Insofar as there has been any increase in the per-capita rate of retractions, I regard this as a sign of increasing epistemic standards, and think both editors and scientists are still way too reluctant to retract papers. It's like the replication crisis: the problems have always been there, but we only started paying attention to them recently. That's a good sign, not a bad one.

comment by gavintaylor · 2020-02-12T12:29:02.225Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks for elaborating Will.

Agreed that the increase in funding for science will generally just increase the size of science, and the base assumption should be that the retraction rate will stay the same, which would lead to a roughly proportionate increase in the number of retractions with science funding. The 700% vs. 900% roughly agrees with that assumption (although it could still be that the reasons for retraction change over time).
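The proportionality argument above can be made concrete, taking the article's percentages at face value:

```python
# Converting the article's percentage increases into multipliers shows how
# close the funding and retraction growth rates actually are.
funding_increase = 7.0      # "increased 700 percent" => funding x8 overall
retraction_increase = 9.0   # "more than 900 percent" => retractions x10 overall

funding_multiplier = 1 + funding_increase        # 8.0
retraction_multiplier = 1 + retraction_increase  # 10.0

# Retractions per unit of funding rose by only ~25% over 40 years.
relative_change = retraction_multiplier / funding_multiplier - 1
print(f"{relative_change:.0%}")  # 25%
```

So if funding growth is taken as a proxy for the size of science, retractions per unit of science rose by about a quarter over four decades, far less dramatic than the headline 900% suggests.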

The idea of increasing retractions being a beneficial sign of better epistemic standards is interesting. My observation is that papers are usually only retracted if scientific fraud or misconduct was committed (e.g. falsifying or manipulating research data) - questionable research practices (e.g. p-hacking, optional stopping, or HARKing), failures to replicate, or even technical errors don't usually lead to retraction (Wikipedia also notes that plagiarism is a common cause of retractions). It's a pity there is no ground truth for scientific misconduct to reference the retraction rate against.

As an aside, this summary of the influence of retractions and failures to replicate on later citations may be of interest. Thankfully, retraction usually strongly reduces the number of citations the retracted paper receives.

comment by willbradshaw · 2020-02-12T16:00:54.068Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks Gavin.

I'd be interested in seeing data on the distribution of causes of retraction and how it's changed over time. I know RetractionWatch likes to say that scientists tend to underestimate the proportion of retractions that are down to fraud. I do think some (many?) retractions are due to serious technical errors with no implication of deliberate fraud or misconduct. I suspect RetractionWatch has data on this.

I'm not claiming that it's inevitably true that more retractions indicates better community epistemics, but I do think it's a big part of the story in this case. A paper retraction requires someone to notice that the paper is worthy of retraction, bring that to the editors and, very often, put a lot of pressure on the editors to retract the paper (who are usually extremely reluctant to do so). That requires people to be on the lookout for things that might need to be retracted and willing to put in the time and effort to get it retracted.

In the past this was very rare, and only extremely flagrant fraud or misconduct (or unusually honest scientists retracting their own work) led to retractions. Now, partly as a side consequence of the replication crisis but also more general (and incomplete) changes in norms, we have a lot more people who spend a lot of time actively searching for data manipulation and other retraction-worthy things in papers.

This is just the science version of the common claim that a recorded increase (or decrease) in the rate of a particular crime, or a particular mental disorder, or some such, is mainly due to changes in how closely we're looking for it.

comment by willbradshaw · 2020-02-12T23:34:34.562Z · score: 1 (1 votes) · EA(p) · GW(p)

Unrelatedly, I'm quite enjoying watching the karma on this comment go up and down. Currently at -1 karma after 7 votes. Interesting data on differing preferences over commenting norms.