Bibliometrics


In the latest issue of Nature, a group of researchers published what they call “The Leiden Manifesto” for how to evaluate research using a mixture of bibliometric and qualitative data. They do not say that bibliometric evaluation should be abolished completely; when used properly, such data can reveal very interesting information on the general impact of the research at a university or in a country. What they don’t like is when bibliometric measures are used stupidly, as the only evaluation tool, for purposes for which they were not designed.

As I have mentioned before, Poland, and our institute in particular, has an obsession with bibliometric data such as impact factors, citation numbers, and so on. The authors of the Leiden Manifesto write that

Some recruiters request h-index values for candidates. Several universities base promotion decisions on threshold h-index values and on the number of articles in ‘high-impact’ journals. Researchers’ CVs have become opportunities to boast about these scores, notably in biomedicine. Everywhere, supervisors ask PhD students to publish in high-impact journals and acquire external funding before they are ready.
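As an aside, the h-index referred to in the quote is simple to state: it is the largest number h such that the author has at least h papers cited at least h times each. Here is a minimal sketch of how one might compute it (my own illustration, not any tool used by the recruiters the manifesto mentions):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still satisfies the condition
        else:
            break      # every later paper is cited even less
    return h

# Five papers with these citation counts give an h-index of 3:
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```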

What the manifesto describes is very much true. At the meeting where I and two other colleagues were evaluated for our habilitations, a lot of the focus was on the number of citations, h-index, total impact factor, and so on. It is very common for PhD students to apply for research grants during their last year, and there is at least one program aimed specifically at those beginning their PhD studies. The Leiden group also writes that:

In Scandinavia and China, some universities allocate research funding or bonuses on the basis of a number: for example, by calculating individual impact scores to allocate ‘performance resources’ […]

This is also true of our institute. Each research group gets a number based on the impact factors of its publications, its grants, patents and invited talks, and on the basis of that number next year’s funding is allocated.

So, there is an inordinate focus on bibliometric parameters in the evaluation of both individual researchers and whole groups. But of course some sort of evaluation is needed, and bibliometrics has an allure of objectivity. However, with the huge differences between fields, and even subfields, one must be very careful when making such comparisons. This is something the people at the Centre for Science and Technology Studies (CWTS) in Leiden have been working on.

I was recently at a seminar where the Foundation for Polish Science had asked researchers from CWTS to evaluate the recipients of two of their prestigious grants: WELCOME, which aims to bring top researchers to Poland, and TEAM, which funds the start-up of a research team, either as a new group or within an existing one. The CWTS had calculated the Mean Normalised Citation Score (and some other indicators) for the grant recipients as groups and compared their scores to the averages in the EU, Poland and some other countries. The EU average is slightly above 1 (1.07), as expected, since 1 is the world average. Poland is found at the bottom of the heap of the tested countries with a dismal 0.64. (You get what you pay for.) The WELCOME grantees did indeed score very high, above 2, and higher than the average of any individual country. This is all well and good; they are, after all, selected for being top scientists. More worrying was that the TEAM winners only ranked at the EU average. That means that even highly ranked Polish researchers are only average at a European level, and far behind top countries like the UK and the US.
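The idea behind the Mean Normalised Citation Score mentioned above is straightforward: each paper’s citation count is divided by the world-average citation count for papers of the same field, publication year and document type, and those ratios are averaged, so that 1.0 means “cited exactly at the world average”. A rough sketch with made-up numbers (the expected citation counts here are invented for illustration; CWTS derives them from its own field classification):

```python
def mncs(papers):
    """Mean Normalised Citation Score: the average of citations / expected citations,
    where 'expected' is the world average for the same field, year and document type."""
    return sum(p["citations"] / p["expected_citations"] for p in papers) / len(papers)

# Illustrative numbers only, not the CWTS data discussed above:
papers = [
    {"citations": 12, "expected_citations": 6.0},  # cited at twice the field average
    {"citations": 3,  "expected_citations": 6.0},  # cited at half the field average
]
print(mncs(papers))  # -> 1.25
```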
