A study from 2016 has hit a nerve among many Danish researchers. Highly cited articles are not necessarily of high quality, they say. (Photo: Shutterstock).

Placing citations above quality in research is a “slippery slope”

The more the better when it comes to the number of scientific publications and citations, say some scientists. But not everyone agrees.


When researchers publish a scientific article, they have first familiarised themselves with the existing research and assessed which parts of the literature are relevant to their own work. They then cite those works in their own article.

In this way, the best articles on a specific subject end up cited many times over. The number of citations should therefore be a good measure of an article's quality. That is the idea, at least.

According to new research from a Swedish-Dutch research duo, the system works as intended. But Associate Professor David Budtz Pedersen from the Department of Communication at Aalborg University, Denmark, disagrees.

“You’re on a very slippery slope if you blindly accept that the number of citations is equivalent to higher quality. Citations can be gamed and traded and primed by journals. In some cases, they might be a good indicator but they cannot be a goal in itself,” says Pedersen.


The new research was conducted by Ulf Sandström, a visiting professor of science and technology studies at the KTH Royal Institute of Technology in Sweden, and Peter van den Besselaar, a professor in the Faculty of Social Sciences at VU University Amsterdam, the Netherlands. It was published in PLOS ONE in late 2016.

“Citations are not a neutral measure”

Sandström and van den Besselaar argue that researchers who publish many scientific articles also publish articles of higher quality.

In the article, the research duo use the number of citations as an indicator of new scientific breakthroughs.

Pedersen calls this assumption that citations equal significant science “provocative.”

“Citations are not an independent indicator of scientific quality,” says Pedersen.

“There’s now a lot of studies that show that citations are traded, and that lots of citations are self-citations,” says Pedersen, who refers to a study from 2016 in which the authors concluded that there is an imminent danger of so-called “citation cartels”, although they are difficult to detect.

Articles cited for many reasons

According to Pedersen, there are many other factors that influence how often an article is cited. For example, whether the journal chooses to place the article on the front cover, how long the title is, and how easily digestible the main message is.

“We know how citations can be leveraged, and in the [new study] the researchers assume that the pattern of citations is unregulated. It’s like the authors believe science is a perfect market, where researchers cite the articles that they think are most interesting. But in reality, articles are cited for many other reasons,” says Pedersen.

Dangerous to reward researchers based on citations

Sandström and van den Besselaar believe their findings can and should have political significance, in the form of greater funding for scientists who are cited more often. But this is a dangerous idea, says Pedersen.

“If one only evaluates scientific quality based on citations, then you ignore other indicators that are important for research. In the end, this way of managing science will lead researchers to speculate about how they can maximise citations, for example, by working on easily recognisable research problems that are more citable,” says Pedersen.

“If we only reward citations then scientists may chase scientific recognition instead of solving important social problems such as climate and health,” he says.

When scientists’ careers depend on citations, their other tasks are likely to become less important, says philosopher Claus Strue Frederiksen.

“You cannot write in a grant application that you spend a lot of time teaching. It almost sounds silly to say, but it’s deeply troubling,” says Frederiksen, a lecturer at the Department of Media, Cognition and Communication at the University of Copenhagen, Denmark.

Analysts: The methods are flawed

Senior Scientist Kaare Aagaard and Professor Jesper Schneider from the Danish Center for Research Analysis at Aarhus University, Denmark, also argue that it is problematic to equate citations with quality.

Aagaard and Schneider also point to some methodological problems in the new study, which is based on 50,000 scientific articles by Swedish researchers.

“The fundamental problem with the analysis is that it lacks a real causal analysis that clarifies the alleged relationship between the independent variable (productivity) and the dependent variable (the number of highly cited articles),” write the two scientists in an article on Videnskab.dk (in Danish only).

“They must have read another article”

The two Swedish researchers flatly reject the critique from the Danish scientists.

In an email, they write that the Danish scientists must have “read another article.”

Sandström and van den Besselaar also write that they based their analyses on a psychological theory of scientific creativity, which they summarise as follows:

“Progress of science depends on highly creative researchers. The more creative a researcher is, the more new ideas he/she generates, and as a consequence the more publications to present those ideas. Furthermore, the probability of having a good idea is low, and therefore we expect that the more creative someone is, the more someone publishes, and, consequently, the more high-quality papers are produced.”

“We indeed find this in our study, and this is an important finding for research management and policy, as it shows that there is not a trade-off between quality and quantity, as has become common knowledge among those who study the dynamics of the science system,” they write in an email to ScienceNordic.

Says nothing about sustainability

Pedersen remains unconvinced.

Sandström and van den Besselaar’s method of addressing the problem is still skewed, he says.

“When they choose from the top shelf and look at the top performers, the researchers who are cited the most, then it says something about scientific excellence, but it doesn’t say anything about the sustainability of the [broader] scientific system: for example, knowledge transfer, reproducibility, communication, and not least, access to data. It leads to a simplistic view of research,” says Pedersen, adding that highly cited research can turn out to be completely wrong.

“The risk of having a very one-dimensional use of citations is that it creates counterproductive behaviour where you think you can work out who should receive funding or positions just by looking at the indices and lists of publications. It reflects a mind-set that publishing as much as possible equals high-quality research, and that’s deeply problematic,” he says.


------------------
Read the full story in Danish on Videnskab.dk

Translated by: Catherine Jex
