Sharing Insights on Interoperability and Linked Data in Heritage and Research Contexts: Building Bridges Across Disciplines

by Maryam Mazaheri (Maastricht University Library)

In the world of Open Science, interoperability is the invisible glue that makes collaboration possible. Maryam Mazaheri, Product Owner of Linked Data at Maastricht University Library, explains what it takes to make data genuinely interoperable.

Why Interoperability Matters

Interoperability ensures that data from different systems can be exchanged, understood, and reused without friction. The FAIR principles describe interoperability as one of the four foundations of good data practice, alongside findability, accessibility, and reusability. Yet the “I” in FAIR is often misunderstood: it is not just about using a common format, but about shared meaning, structure, and governance.

The four levels of interoperability — technical, syntactic, semantic, and organisational — illustrate this progression. Technical and syntactic levels ensure that systems can connect and that data follows consistent structures. Semantic interoperability adds meaning through vocabularies and ontologies, while organisational interoperability aligns policies and workflows. When all four come together, data can move smoothly across projects, institutions, and even disciplines.

Linked Data as the Common Language

This is where Linked Data comes in. By connecting datasets through shared identifiers and structured metadata, Linked Data allows machines (and humans) to understand relationships between entities — a person, a place, or a publication — across domains. In heritage, this has transformed how cultural institutions make their collections visible online (for instance, Maastricht University Library Digital Collections).
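As a minimal sketch of this idea, Linked Data statements can be modelled as subject–predicate–object triples, where shared identifiers let two otherwise unrelated datasets point at the same entity. All URIs and property names below are invented placeholders, not real identifiers from any collection:

```python
# Minimal sketch of Linked Data as subject-predicate-object triples.
# Every URI below is an illustrative placeholder.

triples = [
    # A heritage collection record describing a manuscript...
    ("http://example.org/object/ms-001", "dct:creator", "http://example.org/person/42"),
    ("http://example.org/object/ms-001", "dct:spatial", "http://example.org/place/maastricht"),
    # ...and a research repository record crediting the same person.
    ("http://example.org/paper/2021-07", "dct:creator", "http://example.org/person/42"),
]

def linked_to(entity, triples):
    """Return every subject that points at the given entity."""
    return sorted({s for s, _, o in triples if o == entity})

# Because both datasets reuse one identifier for the person, a machine can
# traverse the link without any bespoke record-matching logic.
print(linked_to("http://example.org/person/42", triples))
# → ['http://example.org/object/ms-001', 'http://example.org/paper/2021-07']
```

Real systems express this with RDF and persistent identifiers rather than Python tuples, but the principle is the same: the shared identifier, not the file format, is what carries the connection.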

Initiatives from organisations such as Europeana, the Dutch Digital Heritage Network and university libraries including Maastricht University Library illustrate how Linked Data helps cultural institutions interlink digitised objects, metadata, and authority files to enhance discoverability. This not only improves access but also situates objects within a broader web of cultural and historical context. In the research domain, Linked Data supports interoperability between repositories, data management systems, and scholarly knowledge graphs, allowing researchers to trace meaningful connections between datasets, publications, and people.

Shared Challenges and Practical Lessons

Despite progress, interoperability is rarely straightforward. A few misconceptions persist:

  • Using the same file format (like CSV or JSON) does not guarantee interoperability if metadata and semantics differ.
  • It’s nearly impossible to “add” interoperability at the end of a project — it needs to be designed in from the start.
  • Standards alone don’t solve the problem; collaboration and alignment do.
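The first point can be made concrete with a toy example (the column names and vocabulary terms below are invented for illustration): two CSV files can share an identical layout yet mean different things by the same column, and only an explicit mapping onto a shared vocabulary makes them comparable:

```python
import csv
import io

# Two CSV exports with an identical format but different column semantics:
# "date" means year of creation in one system, year of digitisation in the other.
archive_a = "id,date\nms-001,1602\n"
archive_b = "id,date\nsc-17,2019\n"

# A per-source mapping onto shared vocabulary terms (terms are illustrative).
mappings = {
    "archive_a": {"date": "dct:created"},
    "archive_b": {"date": "dct:dateDigitized"},
}

def harmonise(source, raw_csv):
    """Rename columns to shared terms so downstream tools agree on meaning."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    return [{mappings[source].get(k, k): v for k, v in row.items()} for row in rows]

print(harmonise("archive_a", archive_a))
# → [{'id': 'ms-001', 'dct:created': '1602'}]
```

The two files are syntactically interoperable from the start; only the mapping step, agreed on between the data providers, makes them semantically interoperable.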

These lessons are echoed across initiatives such as the Dutch Interoperability Network, which brings together experts from universities, organisations, and heritage institutions across the Netherlands to share practices and build cross-domain understanding. Their discussions highlight how interoperability is contextual — it depends on who is using the data, for what purpose, and in which environment. What works for a biomedical dataset may not fit a historical archive, yet both can benefit from shared frameworks and mutual learning.

Addressing these challenges means shifting focus from isolated projects to joint, community-driven development. Interoperability advances when libraries, researchers, and data stewards work together to align their standards, tools, and governance. Strengthening these connections lays the groundwork for broader European and international coordination.

Looking Ahead

Across Europe, initiatives such as the EOSC Technical and Semantic Interoperability Task Force, GO FAIR, and the Cross-Domain Interoperability Framework (CDIF) drive efforts to align the technical, semantic, and legal aspects of data exchange. Rather than fixed frameworks, these initiatives function as collaborative networks and sets of best practices that promote shared standards and policies. Their shared objective is to make data integration possible not only within disciplines, but across them.

Cultural heritage and the life sciences are often cited as leading examples — the former for its rich, multilingual metadata and open sharing culture, the latter for its early adoption of ontologies and machine-actionable data. Both demonstrate that interoperability thrives where communities collaborate across institutional boundaries.

Ultimately, interoperability is less about technology than about shared understanding. It’s a long-term commitment to designing systems — and relationships — that enable data to move freely, remain meaningful, and grow in value with each connection.


About the Author:
Maryam Mazaheri is Product Owner of Linked Data at Maastricht University Library. She coordinates projects on Linked Open Data and interoperability in cultural heritage and research contexts. Maryam can be found on LinkedIn and ORCID.

Portrait photo credit: © Maastricht University Library

The ZBW – Leibniz Information Centre for Economics is the world’s largest research infrastructure for economic literature, online as well as offline.


