If you want to form a conclusion based on an experiment, it is a basic rule in science that you include all data—both positive and negative.
The same applies to the global body of scientific knowledge, which should include all scientific results, even when those results are unexpected.
But this does not always happen. A number of concerned scientists say that a large portion of research today is incorrect, which is particularly problematic, and even dangerous, within the health sciences.
This problem is arguably part of a crisis within basic research, which has been described in a series of articles here on ScienceNordic.
“It’s a big problem and we need to publish more negative results,” says Silas Boye Nissen, a PhD student in biophysics at the Niels Bohr Institute, Copenhagen, Denmark, who has just published an article that highlights the consequences of not reporting negative results.
Nissen developed a mathematical computer simulation to calculate how a false scientific theory becomes accepted as a scientific truth.
The probability of this happening increases significantly when negative results are not published.
Scientists design experiments around something called a null hypothesis: a statement of "no effect" that the experiment sets out to test.
For example, the null hypothesis may be that a new medical treatment has no effect on depression.
At the end of the experiment, the scientists either reject the null hypothesis if the treatment worked (a positive result), or fail to reject it if the treatment did not work (a negative result).
If scientists only report the positive results, this can lead to so-called publication bias.
This may slow down the progress of a particular scientific field.
In health sciences it could even cost lives.
Source: Silas Boye Nissen (Niels Bohr Institute, Denmark)
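In standard usage, the null hypothesis states that there is no effect, and rejecting it counts as a positive result. A minimal sketch of such a test, using a simple permutation test and made-up mood-score data (the numbers and the 0.05 threshold are illustrative assumptions, not data from any trial discussed here):

```python
import random

def permutation_p_value(treated, control, n_perm=10000, seed=0):
    """Two-sample permutation test.

    Null hypothesis: the treatment has no effect, i.e. both groups
    come from the same distribution. The p-value is the fraction of
    random relabelings whose difference in group means is at least
    as large as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    n = len(treated)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Example: mood scores after treatment vs. placebo (made-up numbers)
treated = [7, 8, 6, 9, 7, 8, 7, 9]
control = [5, 6, 5, 7, 6, 5, 6, 6]
p = permutation_p_value(treated, control)
# Reject the null hypothesis (a "positive" result) if p < 0.05;
# otherwise the trial counts as a "negative" result.
```

Publication bias enters afterwards: both outcomes of this procedure are informative, but only one of them reliably gets published.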
“Our model shows that if we want to avoid making false assumptions about scientific facts, then we need to publish at least 20 per cent of all negative results that are produced in every field of scientific research,” says Nissen.
This number could be even higher in some cases, he says. “The number will vary depending on parameters such as field of research and the number of the positive results that are in fact false. So in some fields, we may need to set the minimum publication level for negative results at 40 per cent.”
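Nissen's actual model is more detailed, but the mechanism he describes can be sketched in a few lines of Python. In this toy model (all parameter values are illustrative assumptions, not figures from his paper), labs repeatedly test a claim that is in fact false, positive results are always published, negative results are published only with some probability, and the community updates its belief naively from the published record:

```python
import random

def canonization_prob(p_pub_neg, alpha=0.05, beta=0.2,
                      prior=0.5, upper=0.99, lower=0.01,
                      trials=2000, max_steps=500, seed=1):
    """Estimate how often a FALSE claim ends up 'canonized' as fact.

    Toy model: because the claim is false, each experiment comes out
    positive only with probability alpha (a false positive). Positive
    results are always published; negative results are published with
    probability p_pub_neg. The community applies Bayes' rule to every
    published result, ignoring the publication filter, and canonizes
    the claim as fact once its belief exceeds `upper` (or discards
    the claim below `lower`).
    """
    rng = random.Random(seed)
    canonized = 0
    for _ in range(trials):
        belief = prior
        for _ in range(max_steps):
            if not (lower < belief < upper):
                break
            positive = rng.random() < alpha          # the claim is false
            if not positive and rng.random() >= p_pub_neg:
                continue                             # negative, unpublished
            # Naive Bayesian update on the published result:
            # likelihoods under "claim true" vs. "claim false"
            if positive:
                l_true, l_false = 1 - beta, alpha
            else:
                l_true, l_false = beta, 1 - alpha
            belief = belief * l_true / (belief * l_true + (1 - belief) * l_false)
        if belief >= upper:
            canonized += 1
    return canonized / trials
```

With almost no negative results published, nearly every run canonizes the false claim; raising the publication rate for negative results sharply lowers that risk, which is the qualitative effect behind the minimum-publication thresholds Nissen describes.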
Anything less than this minimum level of negative-result publications could mean that both scientists and informed citizens form a false impression of what is true, even when a number of studies show the opposite, says Nissen.
And this limit is not being met today.
“An American study of new medicines showed that there is only a ten per cent chance of getting a negative result published, while nearly all positive results come out,” says Nissen.
He also points to another study that showed the number of positive results published in the journal Nature rose by 22 per cent between 1990 and 2007.
Preferential treatment of positive results can, according to Nissen, create a skewed picture of the truth, where researchers, industry, and society are tempted to think, for example, that a particular medicine works, even if it does not.
“A good example is a new antidepressant treatment being tested in the USA in 14 clinical trials. Seven of them gave positive results and all of them were published. The rest of the studies gave negative results, with no effect, but only two of these were published,” he says, pointing to a 2008 study in The New England Journal of Medicine.
Has basic research fallen ill?
Researchers all over the world are busy creating robust and ground-breaking research.
But many scientists believe that basic research is in a crisis.
The “publish or perish” mantra means that the quality of research is declining, say some scientists.
ScienceNordic presents a series of articles focusing on the symptoms, consequences, and solutions to this crisis in basic research.
In this article we take a closer look at the lack of publication of negative results.
This skewed picture of scientific results could even cost lives, according to research librarian Thea Marie Drachen and library director Bertil Dorch, from the University of Southern Denmark, writing in a commentary on Videnskab.dk.
“We waste time and money when scientists repeat trials, because there is no information on whether other researchers have, for example, tested a potential drug and found that it had no effect, or worse, that it had an adverse effect. This kind of unnecessary duplication of tests costs time, money, effort, and animal lives, and has also proven to cost human lives,” they write.
The problem crops up frequently in the scientific community, and the discussion has picked up in recent decades, especially following the publication of an article in PLOS Medicine in 2005: “Why Most Published Research Findings Are False.”
A panel debate among scientists at an international conference on cancer research (ESMO) in Copenhagen in 2016 discussed the importance of publishing negative results.
“In cancer research and health research in general we should publish and present all negative data, particularly if we can avoid using time and money on treatments that have already been rejected,” says Belgian oncologist Evandro de Azambuja, a member of the ESMO press and media affairs committee.
But even though scientists agree that they should publish negative results, these data still all too often remain languishing in the bottom drawer.
There is a good reason for this, says Torben Lund from the Department of Clinical Medicine, Aarhus University, Denmark.
“The problem is that negative results don’t pay off. The structure of the scientific community today, where citations of your articles are the currency, makes it really difficult to create a career on negative results, and therefore they are neglected,” he says.
Professor Kaj Sand-Jensen from the University of Copenhagen, Denmark, does not agree that publishing few negative results is such a big problem.
“There may be good reasons why many negative results are rejected. It could be because the methods or the results are weak. We shouldn’t forget about that in this discussion,” says Sand-Jensen.
There have been attempts to go against the tide. In 2005, one publisher in the Elsevier group announced that it was launching a journal only for negative results: New Negatives in Plant Science.
But the journal was closed after a year, which does not surprise Nissen.
“Negative articles don’t generate enough citations for a journal to survive,” he says.