Discussion paper: New indicators for Open Science and Open Innovation

by Birgit Fingerle

In order to better identify the opportunities and risks of open science and open innovation, including those relating to research and technology policy, a fundamental reconsideration of how the practices and structures of open science and open innovation are recorded, analysed and evaluated is needed. The discussion paper “Open Science and Open Innovation – New Indicators for the Analysis of the Science and Innovation System in the Digital Age” (“Open Science und Open Innovation – Neue Indikatoren für die Analyse des Wissenschafts- und Innovationssystems im digitalen Zeitalter”, link in German language) by Clemens Blümel, published by the Stifterverband in September 2019, explores how this could be done.

The paper first outlines the status quo of existing practices and standards in research and innovation indicators in Germany and analyses existing undesirable developments and areas in which further development is needed. Subsequently, the author derives problems, objectives and construction principles for a new indicator system and presents an exemplary selection of possible new indicators based on these. In addition, the paper discusses the need for further research into basic questions regarding indicators for open research and innovation.

New indicators needed for open science

The collection of indicators on research and innovation is complex and draws on very different data sources. For example, while information on innovation activity within companies often comes from self-reporting in recurring surveys, data on publications and citations is often based on bibliometric databases.

The extent to which existing performance indicators (for instance publications, patents) can capture novel innovation practices is controversial. For example, the propensity to acquire intellectual property rights in sectors with fast-moving innovation and technology dynamics already varies widely and continues to decline, so that patent counts alone permit few meaningful conclusions. In addition, the established indicators focus on the results and products of scientific and technological activity. However, specific resources required for scientific and technical activities (for example datasets, software or code) are hardly considered, nor are new practices such as cooperation with third parties and unusual knowledge providers.

At the EU level, there have already been initial attempts to develop new indicators for research and innovation. These include, among others, the Open Science Monitor, Monitoring of Responsible Research and Innovation (MoRRI) and proposals for measuring new digital research and innovation practices under the Open Science Policy Platform.

In order to map open innovation and open science processes, the activities and results of the research process itself should be reflected more strongly in research and innovation indicators in the future. This includes the sharing of research data or program code. Existing indicators should therefore, on the one hand, be supplemented by the aspect of openness. On the other hand, important new indicators should be proposed and developed to complement this process. Some of the examples given in the discussion paper for the expansion of open science indicators are listed below.

Capturing the accessibility of the scientific system and new forms of dissemination

To provide indicators that measure the degree to which access to scientific results has been opened up, the most pressing need is better coverage of open access literature.

Measuring the accessibility of scientific literature is generally difficult, since

  • published literature can be made accessible subsequently via open access,
  • reference values for open access literature are difficult to find, and
  • the measurement methods for the various open access types differ considerably.

In order to better capture open forms of scientific output, it has therefore been proposed (inter alia) that, in addition to the quality-assured journals recorded in the established scientific databases Scopus or Web of Science, new databases such as the Directory of Open Access Journals (DOAJ) should be used for measuring open access journals. Preprints, which in some subjects contribute greatly to the opening up and diffusion of knowledge, could also be included in counts of publication volume. To record the number of preprints by German authors, however, problems such as determining author nationality would first have to be solved. Internationally comparable data would be needed not only to measure the extent of open access literature but also to benchmark countries against one another. This too would require considerable effort, since only a portion of open access contributions is recorded in the established databases and coverage varies from country to country. In addition, the quality of metadata in traditional databases is not yet satisfactory for international comparisons. Since the proportion of open access material varies greatly between disciplines, these indicators should be reported in a field- or discipline-specific manner.
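
To make this concrete, here is a minimal Python sketch of how the number of DOAJ-listed journals on a subject could be queried and reported per field. It assumes the DOAJ public search API and a response object with a “total” field; the actual endpoint and fields may differ from what is shown.

```python
import requests

def count_doaj_journals(subject: str) -> int:
    """Count journals listed in the DOAJ that match a subject keyword."""
    # Assumption: the public search endpoint returns JSON with a "total" field.
    response = requests.get(f"https://doaj.org/api/search/journals/{subject}", timeout=30)
    response.raise_for_status()
    return response.json().get("total", 0)

# Field-specific reporting, as the paper suggests for open access indicators:
for field in ("economics", "physics", "biology"):
    print(field, count_doaj_journals(field))
```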

The indicators described would improve only the measurement of the proportion of openly accessible literature. In order to gain insight into the visibility and relevance of this literature, a detailed analysis of the reception of these publications would also be important. For example, the citations of open access articles and preprints could be collected for this purpose and the frequently invoked “citation advantage” of open access literature could be reviewed. So far, however, it has been difficult to adequately record citations outside conventional databases. In other words, more work would be needed here.
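
As a sketch of how such a review might begin, the following Python snippet combines two public services: Unpaywall (to classify a DOI as open access or not) and the Crossref REST API (for citation counts). The email address is a placeholder, and the comparison is deliberately naive; a serious test of the citation advantage would need field and age normalisation.

```python
import requests

UNPAYWALL = "https://api.unpaywall.org/v2/"
CROSSREF = "https://api.crossref.org/works/"
EMAIL = "you@example.org"  # placeholder; Unpaywall asks for a contact email

def oa_status_and_citations(doi: str):
    """Return (is_open_access, citation_count) for a single DOI."""
    oa = requests.get(f"{UNPAYWALL}{doi}", params={"email": EMAIL}, timeout=30).json()
    cr = requests.get(f"{CROSSREF}{doi}", timeout=30).json()
    return oa.get("is_oa", False), cr.get("message", {}).get("is-referenced-by-count", 0)

def mean_citations_by_oa_status(dois):
    """Naive comparison of mean citation counts for OA vs. non-OA articles."""
    buckets = {"oa": [], "closed": []}
    for doi in dois:
        is_oa, citations = oa_status_and_citations(doi)
        buckets["oa" if is_oa else "closed"].append(citations)
    return {k: (sum(v) / len(v) if v else 0.0) for k, v in buckets.items()}
```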

Another issue that should play a role in the development of new indicators is capturing new formats for the dissemination of scientific work (such as video logs, TED talks, science slams, blogs or wikis). To this end, the paper discusses the extent to which open or alternative metrics can be used to measure and record them.
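
One conceivable source for such alternative metrics is Crossref Event Data, which records mentions of DOIs in blogs, wikis and social media. A minimal sketch, assuming its documented /v1/events endpoint; the service and its response layout may change:

```python
import requests

def count_events_for_doi(doi: str) -> int:
    """Count Event Data events (blog posts, wiki edits, ...) citing a DOI."""
    response = requests.get(
        "https://api.eventdata.crossref.org/v1/events",
        params={"obj-id": doi, "mailto": "you@example.org"},  # placeholder address
        timeout=30,
    )
    response.raise_for_status()
    # Assumption: results are wrapped in "message" with a "total-results" count.
    return response.json()["message"]["total-results"]
```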

Measuring the reuse of research data and program code

According to the paper, capturing the reuse of research data is both important and complex. The assignment of persistent identifiers is seen as an essential prerequisite for this. Quantitatively, the provision of open data could initially be recorded in terms of the number of open data platforms and repositories and their use by German researchers. The establishment of research data centres could serve as an indication of increased reuse activity in the research and innovation system. New services such as the DataCite platform could be used to record these repositories. One difficulty so far, however, is that data is often stored in institutional or disciplinary repositories and is therefore hard to find. In order not to reflect merely the existence of resources, further indicators should be developed which also cover researchers’ use of and interaction with datasets. It should be noted, however, that any analysis of the use of research data is difficult at this point in time, as data referencing is heterogeneous and no standards for dataset metadata have been established.
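
As an illustration, a DataCite query of this kind could look roughly as follows in Python; the resource-type-id filter and the meta.total field follow the conventions of DataCite’s REST API, which may change.

```python
import requests

def count_datacite_datasets(query: str) -> int:
    """Count DOIs of type "dataset" registered with DataCite for a query."""
    response = requests.get(
        "https://api.datacite.org/dois",
        params={"query": query, "resource-type-id": "dataset", "page[size]": 1},
        timeout=30,
    )
    response.raise_for_status()
    # Assumption: totals are reported under "meta" in the JSON:API response.
    return response.json().get("meta", {}).get("total", 0)

print(count_datacite_datasets("climate"))
```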

Given the difficulty of capturing references to datasets in texts, analysing the usage statistics of data repositories (such as DataCite or Dryad) would be a further way of recording open data effects. However, this would rely on downloads, a very unspecific and thus problematic indicator. The sharing of code, the writing of scripts and the like could also be recorded in this way; as an example, the paper discusses the platform GitHub.
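
For code sharing, a rough proxy could be drawn from the GitHub search API, as in the sketch below. Note that raw repository counts are just as unspecific as download figures, and unauthenticated requests are heavily rate-limited.

```python
import requests

def count_github_repositories(query: str) -> int:
    """Count public GitHub repositories matching a search query."""
    response = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["total_count"]

print(count_github_repositories("research software replication"))
```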

The reproducibility of scientific results is an essential quality criterion that an indicator system should document. The number of replications of studies could be a useful indicator of the verifiability of open scientific practices. Some platforms, such as the Open Science Framework, already provide information on the feasibility and reproducibility of studies.
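
A replication indicator could be as simple as the share of studies with at least one registered replication attempt. The sketch below uses invented field names purely for illustration; a platform such as the Open Science Framework would be the natural data source.

```python
# Hypothetical records of studies and registered replication attempts.
studies = {
    "study-a": {"replication_attempts": 3, "successful_replications": 2},
    "study-b": {"replication_attempts": 0, "successful_replications": 0},
    "study-c": {"replication_attempts": 1, "successful_replications": 1},
}

attempted = sum(1 for s in studies.values() if s["replication_attempts"] > 0)
print(f"Studies with at least one replication attempt: {attempted / len(studies):.0%}")
```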

Mapping citizen science and crowdfunding

Since the contributions of laypersons to citizen science projects are generally collected and processed via digital platforms, the number of projects with citizen science components and the number of participants can be readily determined at the national level. For this purpose, data from the “GEWISS” project funded by the BMBF could be used. Another indicator of the inclusiveness of research and technology could be the number of supporters of, and the amount of funding for, crowdfunding projects. Crowdfunding platforms such as Startnext or betterplace, as well as science-specific platforms such as the Science Starter platform (link in German language) operated by Wissenschaft im Dialog (WiD), are now quite established. As a result, their data is well suited for monitoring the inclusiveness of research and innovation.
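
Since platforms such as Startnext or Science Starter do not necessarily expose public APIs, such an indicator would likely be computed from exported project records. A minimal sketch with hypothetical fields:

```python
from dataclasses import dataclass

@dataclass
class CrowdfundingProject:
    title: str
    supporters: int
    funded_eur: float

def inclusiveness_indicators(projects):
    """Aggregate supporter counts and funding volume across projects."""
    return {
        "projects": len(projects),
        "supporters": sum(p.supporters for p in projects),
        "funding_eur": sum(p.funded_eur for p in projects),
    }

sample = [
    CrowdfundingProject("Citizen bird survey", 240, 8500.0),
    CrowdfundingProject("Open hardware microscope", 910, 42300.0),
]
print(inclusiveness_indicators(sample))
```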

Opportunities and risks of new indicators

The development of new metrics and indicators for open science and open innovation is seen by many open science representatives as an opportunity to create new incentives in the scientific system. Moreover, with the introduction of new metrics, other important forms of scientific work – such as the provision of research infrastructures, software development or science communication – could also gain more recognition. In addition, it is hoped that the introduction of new or alternative indicators will correct the misplaced incentives created by the feedback effects of existing indicators. These include, for example, the focus on high-impact journals, attention solely to easily countable outputs, or unintended effects of output indicators on the behaviour of individuals and research organisations.

On the other hand, new risks and dangers are possible: metrics collected via platforms, for example, carry a high risk of manipulation; the nature of the interaction channels means that communication can be largely empty of content; and false incentives can take hold in the long term. An impact assessment of new indicators is therefore necessary during their development in order to counteract unintended consequences.

Birgit Fingerle holds a diploma in economics and business administration and works at the ZBW, among other things, in the fields of innovation management, open innovation and open science, currently in particular on the “Open Economics Guide”. Portrait: Copyright Northerncards ©
