Altmetrics: How Researchers Assess Their Significance for Scholarly Impact

by Steffen Lemke

When you want to quantitatively ascertain the impact of a scientific publication, the usual procedure is to count the number of times the article is cited in other publications. The idea for this procedure originated as early as the 1920s; it was significantly influenced by the compilation and publication of the first citation indexes in the 1960s. Yet describing academic impact by counting citations has diverse limitations: many years often pass between the publication of an article and the majority of its citations; citation indexes are mostly limited to a few types of publication; furthermore, citation rates merely reflect influence within the scientific community and give no information on the reasons for or context of individual mentions.

New forms of impact measurement

Spurred on by the digitalisation of scholarly work processes as well as the ambition to establish measures of the impact of scientific publications that circumvent the known limitations of citations, ideas for various alternative metrics have developed over time. These so-called “altmetrics” describe a heterogeneous group of indicators which normally have in common that they are intended to make the resonance of scientific publications on diverse web-based media measurable. For example, the altmetrics of a publication describe how often it is shared on social media such as Twitter or Facebook, cited by online mass media outlets or referenced in policy papers.

The Altmetrics Manifesto, which significantly contributed to encouraging specialist discussion about alternative impact metrics and pooled a variety of related concepts under one common collective term, is celebrating its tenth birthday this year. Despite this, little is known about whether and to what extent altmetrics have found their way into the everyday working lives of German researchers. Do German scientists and academics use altmetrics to identify influential publications during their literature research, for example? And what worries and concerns might they have regarding the use of altmetrics to measure relevance?

Have altmetrics arrived in everyday research life?

In order to get to the bottom of questions such as these, we carried out a series of group interviews with researchers followed by an online survey.

In the interviews with a total of nine researchers from the fields of economics, biology and computer science, a consistent picture emerged regarding our participants’ prior experience with impact measurement: citation-based indicators, particularly the journal impact factor, are already known to researchers on the earliest rungs of the career ladder and are actively borne in mind – for example, when considering to which journal one’s own work should be submitted or which sources to cite in preference. By contrast, the concept of “altmetrics” seemed to be new to almost all participants.

Since we could thus assume only a minimal level of experience with altmetrics on the part of many German researchers, we wanted to know what thoughts and possible misgivings scientists and academics would have regarding the use of alternative metrics to measure scientific impact. In addition to the group interviews, we therefore carried out a broader online survey among researchers, with a focus on a target group of economic and social scientists who were already known to us from other research projects.

What misgivings do German researchers have regarding the use of altmetrics?

A total of 320 survey participants who selected Germany as their country of affiliation expressed diverse misgivings regarding the use of altmetrics in their free-text answers. Certain hypotheses kept recurring:

  • Altmetrics measure popularity, efforts at dissemination or the extent to which the authors are networked, but not the relevance or even quality of individual articles.
  • Altmetrics are susceptible to targeted manipulation (a procedure also known as “gaming”) – for example by repeated sharing of one’s own article.
  • Altmetrics can lead to decision-makers neglecting to examine the content of the publications to be assessed.
  • Altmetrics exhibit inherent biases – for example, to the advantage of Western, English-language publications.

Some participants described the negative effects that the widespread use of altmetrics could have on science in general: the metrics hype would reinforce the “publish or perish” principle, which in turn would encourage poor scientific practices. Participants also expressed the concern that focussing attention on metrics could lead scientific journals to adopt ever more generic designs in an attempt to optimise their own metrics.

Implications for the work with and communication about (alt)metrics

Looking at the misgivings expressed by German researchers regarding altmetrics, the positive finding is an evident awareness of diverse aspects whose consideration is always of great importance in the quantitative assessment of scientific performance. However, we can also establish that nearly all the feared weaknesses of altmetrics equally apply to citation-based indicators – a field which has been researched much more thoroughly and for which more substantial best practices have already been worked out.

Many of the misgivings expressed by the German researchers could therefore already be addressed by communicating findings that are well established in metrics research (also known as scientometrics) more clearly to scientists and academics outside this specialist field, so that they can apply them in assessment situations. For example, the Leiden Manifesto for research metrics, which received considerable attention within the scientometrics community, already specifies that quantitative assessments should only be used to complement qualitative expert evaluations, and that indicators should be selected with field-specific characteristics in mind and evaluated in normalised form.

Regarding the hypothesis that altmetrics are vulnerable to targeted manipulation, it should be noted that the Altmetrics Manifesto itself cites the manipulability of the journal impact factor as one justification for the need for new kinds of metrics. The large number and diversity of platforms from which altmetrics draw their signals should, if anything, make the deliberate manipulation of indicators more difficult. Nevertheless, dealing adequately with gaming in relation to altmetrics – as well as citation-based metrics – and the search for suitable conceptual and technical solutions remain important tasks for scientometrics.

Skills in dealing with metrics

Most of the misgivings that currently limit the value of altmetrics for researchers are justified pointers to existing limitations of the metrics, which users must confront by taking established scientometric findings into consideration. A significant proportion of the answers from the interviews and the survey, however, pointed to a further obstacle that affects researchers themselves as users of the metrics: a lack of knowledge about the background and limitations of the indicators often significantly hinders their valid interpretation. In a scientific reality in which researchers are often taught at the beginning of their careers to select their references and publication venues based on impact factors, this is a serious drawback.

A rejection of quantitative performance measurement in the sciences is not to be expected in the short term; the scientific community should therefore at least aim to create fair conditions for researchers through the interdisciplinary teaching of skills for the valid use of metrics. Currently, only very few future scientists and academics receive any formal training in “metrics literacy” at all, even though in their later careers they will almost certainly be assessed – and perhaps carry out assessments themselves – using metrics.

Finally, the researchers’ misgivings also indicate that much research remains to be done in the field of altmetrics. Although representations of altmetric indicators can now be found on various publication sites, our knowledge of what altmetrics actually reveal and how the various measured signals differ from one another still has many gaps.

This article is based on a presentation for the webinar series of Altmetric.com. You can download the slides of the original presentation here and there is also a recording you can listen to.

The results described in this article are based on a subsample of the study “When You Use Social Media You Are Not Working”: Barriers for the Use of Metrics in Social Sciences, published in the journal Frontiers in Research Metrics and Analytics.

About the Author:

Steffen Lemke is a member of the Web Science research group at the ZBW – Leibniz Information Centre for Economics and a doctoral candidate in computer science at the Christian-Albrechts-Universität (Kiel University). Steffen researches topics concerning scientometrics and scientific communication. His particular field of research is assessment systems of scientific performance and their effects on the working methods of scientists.
Portrait: © Steffen Lemke




