Researchers' Zone:

With the new university legislation of 2003, new public management entered Danish universities.

The H-index turns 15: but is it a good idea to put a number on researchers' performance?

The H-index is an attempt to measure the productivity and impact of researchers. It illustrates the universities' shift away from trusting researchers and towards micromanagement, efficiency and competition.

In the old days – that is, before the mid-00s – it was, in many ways, easier to be a researcher.

Back then, not everything you did was measured and weighed. Back then, it was not about rankings, productivity and impact.

Back then, there was trust that researchers – like the highly educated and professional people they are – could judge for themselves what best served their disciplines and research.

The most important thing in the vast majority of disciplines was that, as a researcher, you helped create and spread knowledge in the most appropriate way within your field of expertise.

This could be done in many different ways – for example, through teaching, news media and academic books and articles – all of which were largely recognised and understood as equally good: the end justified the means to a much greater extent than it does today.

Back then, the so-called bibliometric research indicators – which basically measure researchers' output by the number of research publications and by how willing other researchers are to cite them – were quite new and something few people paid attention to.

The H-index, which has since triumphed, had not yet been invented.

It didn't stay that way. This change – and what it has meant for the way researchers work – is the focal point of this article.

The inventor of the H-index had perfect timing

In Denmark, new university legislation was passed in 2003. It marked a shift in the way universities were managed and measured. Whereas the paradigm before was professionalism, it now became new public management.

Professionalism as a governing paradigm means that it is left to the disciplines and their most highly qualified practitioners to strengthen and develop the field. The many expert panels that researchers encounter throughout their careers are a result of this thinking.

The new public management paradigm rests on the basic assumption that the public sector is inefficient, but can be made more efficient through management tools such as competition, performance targets and the channelling of resources towards the highest-performing units.

The new paradigm requires the ability to summarise the productivity and efficiency of researchers in simple numbers that can be reported to university management.

Those figures are then to be passed on to the Ministry of Higher Education and Science, which uses them to determine how much of the universities’ basic funding has been earned by each university.

The higher the productivity, the more money – thus a new paradigm was born.

The fact that the physics professor Jorge E. Hirsch gained so much traction with his Hirsch index, or H-index, which he presented exactly 15 years ago, is therefore hardly a coincidence: the timing was simply perfect.

How the H-index works

(Figure: the H-index is found at the point where a publication's rank in the citation-ordered list matches its number of citations.)

The H-index summarises the researcher's production (number of publications) and impact (number of citations) in a single number, and it is also very easy to calculate:

You list the researcher's publications in descending order by the number of citations each has received. The H-index is then the largest rank at which a publication still has at least as many citations as its rank.

Most people assume that a high H-index testifies to a researcher being very productive and having a large impact, while a low score shows the contrary. But it is not that simple.

Firstly, publishing traditions differ between disciplines. Secondly, like most other indicators, the H-index can be manipulated. And thirdly, there are different ways of achieving a high H-index.

For example, if researcher A has 100 publications and 100 citations, but all 100 citations belong to the same publication, then their H-index is 1. If researcher B has the same 100 publications and 100 citations, but the citations are spread more evenly, say ten citations on each of ten publications, then researcher B's H-index is 10.
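To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above; the function name h_index and the two example citation lists are our own illustrations of researchers A and B, not part of the study:

```python
def h_index(citations):
    """Return the largest h such that h publications each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited publication first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this publication still has at least as many citations as its rank
        else:
            break
    return h

# Researcher A: 100 publications, all 100 citations on a single one
researcher_a = [100] + [0] * 99
# Researcher B: the same 100 citations as ten citations on each of ten publications
researcher_b = [10] * 10 + [0] * 90

print(h_index(researcher_a))  # prints 1
print(h_index(researcher_b))  # prints 10
```

As the two examples show, identical totals can give very different H-indexes depending on how the citations are distributed.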

Therefore, it can make good sense for researchers to keep an eye on how their citations are distributed amongst their publications and to work towards obtaining citations for the publications that 'lag behind'.

Two very different types of researchers

The question is whether it has come to the point where researchers no longer aim to create and spread knowledge in the most appropriate way but rather look after and polish their H-indexes instead.

This is the question we set out to investigate. First, we identified a number of medical researchers with a high H-index. Next, we examined how the citations were distributed amongst their publications.

Then we selected some researchers where the distribution of citations looked very different: they were either very evenly or very unevenly distributed across the publications. After this, we contacted the researchers for an interview.

The method itself is described in more detail in our academic article in PLoS ONE (see the reference list below).

Through the interviews, it transpired that there were two relatively clear – but also quite different – research profiles. For fun, we called them 'prioritisers' and 'promoters'. Let us now look at what characterises each of these two types.

Research must be understood and used

A promoter disregards a journal's position in the formal ranking hierarchy and is utterly indifferent to Journal Impact Factors and SCImago Journal Ranks.

As a consequence, the promoter publishes frequently, and preferably in lower-ranking journals. For the promoter, it is much more important that the knowledge produced reaches the people for whom it is relevant, and that it does so as soon as possible.

This is why these researchers are also often visible on dissemination platforms other than just the academic ones: they are frequent guests in traditional mass media such as newspapers, radio and television, and they are happy to give popular-science presentations to non-experts.

As one of them said: 'the most important thing is that our knowledge is spread and gets to where it can be understood and used'.

Marketing of oneself and one’s research

A prioritiser looks at the matter rather differently. They publish slightly fewer articles, but in higher-ranking journals. For the prioritiser, citations are important, and highly ranked journals are typically the way to get them, because articles in the top journals tend to attract more citations.

The prioritisers were quite engaged in international collaboration and had international co-authors on 55 percent of their articles, compared with 24 percent amongst the promoters. As one of them put it: 'it is by working together internationally that we increase our professional competencies'.

New public management of universities requires a specific type of researcher who focuses mostly on promoting research within the academic world.

Prioritisers are also far more ‘online’ on various social media platforms, where they market their research. They also talked a lot about how they consider the marketing value of possible co-authors for their publications.

Some mentioned how they chose collaborators who were known for having a large professional network. This is because large networks enable more efficient marketing of one's research.

There is no single way of being a successful researcher

If these results are discussed in relation to the governance paradigms that exist in academia today, one might argue, somewhat bluntly, that the promoters are children of professionalism, while the prioritisers are children of new public management.

In this context, one might be tempted to think that, in the current climate, the prioritisers have made the wisest choice – that adapting to the factors on which you are measured simply makes for a more successful career.

But that is not necessarily the case. Both extrapolations of our results and the literature on governance paradigms in the public sector suggest that several paradigms can exist at the same time, and that both promoters and prioritisers will be able to remain successful researchers in their own ways and on different trajectories, so to speak.

What type of researcher does society want more of?

In many ways, the question of what the two different types achieve career-wise is not interesting in and of itself. What is much more interesting is the question of what politicians, citizens and university leaders want researchers to do:

Should they optimise towards half-empty and partially misleading performance indicators, such as the H-index, which, for want of something better, are used as international standards and can therefore help to signal 'high professional standards' on the international stage?

Or should they optimise towards spreading their knowledge as quickly as possible for the benefit and contentment of the population, perhaps at the expense of a few points in the international rankings?

We know what we think is the right thing to do, but we may be in disagreement with some researchers, university leaders and politicians.

Translated by Stuart Pethick, e-sp.dk translation services. Read the Danish version at Videnskab.dk’s Forskerzonen.

References

· Charlotte Wien's profile (SDU)

· Bertil F. Dorch's profile (SDU)

· Evgenios Vlachos' profile (SDU)

· Dorte Drongstrup's profile (SDU)

· Daniella Bayle Deutz's profile (SDU)

· 'Effective publication strategies in clinical research', PLoS ONE (2020), DOI: 10.1371/journal.pone.0228438

· 'An index to quantify an individual's scientific research output', PNAS (2005), DOI: 10.1073/pnas.0507655102

· 'Offentlige styringsparadigmer – konkurrence og sameksistens' ('Public governance paradigms – competition and coexistence'), Djøf Forlag (2017)
