How do we translate open science monitoring policies into real, sustainable institutional practices?

Organised by UNESCO, OPUS, OpenAIRE, and OSMI on 8 July 2025, as part of the Open Science: Monitoring Progress, Assessing Impact conference at UNESCO Headquarters in Paris, this joint session brought together leading international initiatives to address one of the most pressing questions in research policy: how do we translate open science monitoring policies into real, sustainable institutional practices?

About the Session: The Role of RPOs and RFOs in Open Science Monitoring: Connecting Institutional Practices to Policy Aggregation

Date: Tuesday, 8 July 2025
Organised by: UNESCO, OPUS, OpenAIRE, and OSMI

This roundtable session explored the practical realities behind policy implementation, addressing the mounting pressures facing research-performing organisations (RPOs) and research-funding organisations (RFOs) as they navigate rising demands for accountability, transparency, and alignment with evolving open science frameworks.

Experiences and Barriers Institutions Face When Implementing Monitoring Systems

As open science frameworks proliferate across Europe and beyond, a critical gap has emerged between policy vision and operational capacity. The session moved beyond theoretical discussions to surface lived experiences, assumptions, and barriers that institutions face when implementing monitoring systems. Key topics included institutional-level engagement with EU monitoring recommendations, feasibility and cost concerns for implementation, researcher engagement and resistance, and gaps between policy vision and operational capacity.

Session Participants

Moderator:

  • Gareth O’Neill, Principal Consultant on Open Science, Technopolis Group Belgium (OPUS consortium partner)

Roundtable Contributors:

  • Natalia Manola, CEO, OpenAIRE AMKE
  • Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine
  • Alina Irimia, Open Science Projects Coordinator, UEFISCDI-Romania (OPUS pilot RFO organisation)
  • Nataša Jakominić Marot, Head of Research and Innovation, University of Rijeka, Croatia (OPUS pilot RPO organisation)
  • Manuel Ricardo Galindo Moreno, Science, Technology, and Innovation Policy, UNESCO

Natalia Manola, CEO, OpenAIRE AMKE 

OpenAIRE is a European infrastructure for open science and scholarly communication. Naturally, one of our major interests is how to monitor open science.

Over the years, we have realised that, as an infrastructure builder and provider, it is essential for us, when creating infrastructure, to ensure that from day one it is capable of gathering the evidence needed to measure policies on open science. We understand that monitoring can happen at various levels — for example, at the institutional level, where decisions need to be taken quickly in ways that influence organisational policies and researchers.

This means a strong focus on quantitative indicators. As we move up to higher levels, however, it also becomes a question of strategic alignment. Within OpenAIRE, we have a working group on monitoring open science policies, led by Eloy Rodrigues from the University of Minho. Given that we are a member-based organisation, this group is extremely valuable: members share the types of monitoring they carry out, and we can see that there is a great deal of overlap between indicators.

One of the major challenges we see, from the perspective of infrastructures, is that overlapping indicators make evidence collection resource-intensive and highly demanding for both organisations and funders. Multiple monitoring initiatives often request similar or identical data, which forces organisations to populate various systems, duplicating effort.

This is why infrastructures must be ready to support evidence collection and help organisations, funders, and institutions extract this evidence from their own systems, so it can feed into the different monitoring frameworks. The first way to reduce overlap is better coordination between monitoring initiatives, something we can already see happening, for instance, through OSMI, which is very encouraging and important.

Secondly, infrastructures should provide the evidence required by the various monitoring initiatives — and this needs to be well-coordinated and improved. Thirdly, institutions themselves need to better understand where their evidence can be most effectively used for monitoring purposes, and to take a more strategic approach in deciding which indicators they track internally versus those they report externally.

I don’t think this is a matter of “research washing” or “greenwashing”: monitoring open science is critically important. It can not only help us measure progress, but also identify gaps in implementation and guide the development of targeted support interventions. In other words, it’s not just about measurement; it’s about informed action and improvement.

Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine 

OSMI stands for the Open Science Monitoring Initiative. So, what is it about? As you may have guessed, it’s about open science monitoring.

At present, OSMI brings together nearly 200 people and more than 100 research institutions. It was established to promote transparency, inclusiveness, and adaptability in tracking open science progress globally. OSMI is organised through several working groups that support and advance its activities.

It operates as a grassroots initiative, engaging in consultation cycles with the community. Through this approach, it develops principles and guidelines for open science monitoring, ensuring that monitoring efforts are coordinated, comprehensive, and aligned with the diverse needs of stakeholders. In essence, the goal is to strengthen coordination and alignment across the many different initiatives already under way.

Let me also briefly outline OSMI’s core goals. These are:

  • To support the development and implementation of transparent, accountable, and sustainable monitoring systems that advance open science at local, national, and international levels.
  • To foster dialogue and knowledge sharing among researchers, policymakers, funders, and other stakeholders involved in open science monitoring — building a strong and connected community around these efforts.
  • And finally, to provide resources and practical guidance to help organisations implement effective monitoring practices — ensuring that we share the same understanding of what monitoring means and apply similar, well-informed practices across contexts.

So, in short, OSMI is about building principles, providing resources, and helping the community move forward together, speaking the same language when it comes to open science monitoring.

Alina Irimia, UEFISCDI-Romania, OPUS pilot RFO organisation

I speak here from the perspective of a research funding organisation piloting the OPUS project. In Romania, I also wear the hat of national reporter for UNESCO, responsible for reporting on the implementation of the Recommendation on Open Science. This means I have visibility over monitoring both at the funding-organisation level and at the national level.

From the perspective of the funding organisation, we have several types of monitoring activities. One is compliance monitoring, which is about ensuring that funded projects meet open science requirements, for example, making sure that publications resulting from funded research are openly accessible, or that research data is shared according to the FAIR principles, where applicable.

We also carry out performance monitoring, which assesses how effectively open science practices are being adopted and the extent of their impact on research outcomes. This could involve examining citation rates of open access publications, download statistics for shared datasets, or mapping the collaboration networks that emerge from openly conducted projects.
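
To make this concrete, here is a minimal sketch of how such performance indicators might be computed, assuming a hypothetical CSV export of funded outputs; the file name and column layout below are illustrative assumptions, not an actual UEFISCDI format.

```python
# Minimal sketch: compute simple performance indicators from a hypothetical
# CSV export "funded_outputs.csv" with columns doi, access_status,
# citations, downloads. File name and columns are illustrative assumptions.
import csv
from statistics import median

def performance_summary(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    open_rows = [r for r in rows if r["access_status"].strip().lower() == "open"]
    return {
        "outputs_total": len(rows),
        "open_access_share": round(len(open_rows) / len(rows), 3) if rows else 0.0,
        "median_citations_oa": median(int(r["citations"]) for r in open_rows) if open_rows else 0,
        "median_downloads_oa": median(int(r["downloads"]) for r in open_rows) if open_rows else 0,
    }

if __name__ == "__main__":
    print(performance_summary("funded_outputs.csv"))
```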

The third type is strategic monitoring, which evaluates the broader impact of our open science policies and how well they align with national and international goals. This helps us determine whether our funding programmes are effectively promoting open science and identify where adjustments may be needed.

From a national perspective, the monitoring process becomes even more complex. We need to aggregate data from multiple sources, such as funding organisations like ours, research institutions, libraries, and other stakeholders. We look at indicators that measure progress towards the country’s open science goals, as well as Romania’s performance in comparison to other nations.

However, there are notable challenges. One is data collection and harmonisation: different organisations may work with different metrics or definitions, making it difficult to develop a coherent national picture. We also face resource constraints: effective monitoring requires dedicated personnel and robust technical infrastructure, which can be especially challenging for smaller organisations.

Another challenge is engaging researchers and institutions. Many perceive monitoring as an additional administrative burden rather than a tool for improvement. We therefore need to communicate its value more clearly and show how it can help institutions and researchers enhance their practices.

Looking ahead, I see opportunities for better coordination between monitoring initiatives, as Natalia mentioned, and for leveraging technology to automate data collection, reducing the effort required from researchers and institutions. There is also considerable potential to move beyond simple compliance metrics towards more meaningful indicators of impact.

In my view, we need to work collaboratively to develop monitoring systems that are genuinely useful, efficient, and effective: systems that actively help us to advance open science, rather than merely measure it.

Nataša Jakominić Marot, Head of Research and Innovation, University of Rijeka, Croatia

At the University of Rijeka, what we did was select from the assessment framework in OPUS, which, I have to say, was like a menu, and a very comprehensive one at that. It contained eight dimensions, each with several practices and indicators, so, really, quite a big menu to choose from.

From this, we selected around 15 practices that we felt best suited our current situation and strategic priorities. We didn’t try to implement everything at once; instead, we chose what made the most sense for us at this stage. Some of the practices we selected included:

  • open access publishing policies,
  • research data management,
  • open educational resources, and
  • transparent evaluation criteria for hiring and promotion.

What was interesting was that, when we began implementing these, we found that some things we had assumed would be easy turned out to be quite complex, and, conversely, some things we thought would be difficult were actually relatively straightforward. For example, we expected implementing open access policies to be simple, but we encountered resistance from some researchers who were concerned about publication costs and journal quality. On the other hand, introducing research data management practices was easier than expected, because many researchers were already keen to better organise their data.

One of the clearest lessons we learned was the importance of communication and training. We couldn’t simply announce new policies and expect them to be adopted. We had to explain why these practices matter, provide training on how to apply them, and offer ongoing support. We organised workshops, developed guidance documents, and set up help desks where researchers could bring their questions.

Another lesson was about timing and sequencing. We found it far better to implement practices gradually, making sure each one was embedded and working effectively before moving to the next. We also realised that some practices are interconnected: for example, open access publishing and research data management tend to go hand in hand, so it can be sensible to address them together.

From a monitoring perspective, we discovered that it is vital to have both quantitative and qualitative indicators. Numbers can tell you what is happening, but they don’t always explain why. We therefore combined metrics, such as the percentage of open access publications, with qualitative feedback from researchers about their experiences and challenges.

We also concluded that monitoring should be used not just for checking compliance, but as a tool to identify where further support is needed. If we saw researchers struggling with a particular practice, that was a signal for us to improve our training or support services.

One challenge we faced was balancing institutional priorities with the needs of individual researchers. Some were very enthusiastic about open science practices, while others were sceptical or worried about the extra workload. We addressed this by focusing on the benefits of these practices, rather than simply presenting them as requirements.

Another challenge was aligning our institutional practices with national and international monitoring requirements. At times, the indicators that made perfect sense for us internally did not align fully with what national or EU-level systems were asking for. This created an additional administrative burden in terms of data collection and reporting.

Looking at the bigger picture, I believe there is a clear need for better coordination between different levels of monitoring. If institutional, national, and international systems were better aligned, the administrative burden would be reduced, and the data collected would be more meaningful.

I also think there is great value in sharing experiences between institutions. What works well at one university may not work at another, but there are always lessons to be learned. Having platforms where institutions can share best practices, and also challenges, would be extremely useful.

Finally, I want to emphasise that monitoring should ultimately serve the goal of improving open science practices. It is not just about collecting data or meeting reporting requirements, it is about creating a culture in which open science practices are genuinely valued and supported.

Manuel Ricardo Galindo Moreno, UNESCO

From UNESCO’s perspective, we are well aware of the principles proposed for monitoring by OSMI. We also have the UNESCO Recommendation on Open Science, which we have been implementing and tracking through a global survey.

The first survey, conducted in 2023 with our Member States, received 118 responses, each providing insights into the level of implementation of the Recommendation.

What we found is that there are significant variations in how different countries approach open science monitoring. Some have well-developed national strategies and sophisticated monitoring systems in place. Others are only at the very beginning of developing their approaches. This reflects, of course, differing levels of resources, infrastructure, and political commitment to advancing open science.

One of the key challenges we identified is the urgent need for capacity building, particularly in developing countries. Many nations aspire to make progress in open science but lack either the technical expertise or the institutional infrastructure to design and implement effective monitoring systems. This is precisely where international cooperation and knowledge sharing become so critical.

We also observed that there is often a disconnect between national policy commitments and what actually happens at the institutional level. A country may have strong, well-written policies on paper, but translating these into practice within universities, research institutes, and other organisations requires significant effort, resources, and coordination. In this context, initiatives like OPUS play an essential role, helping to bridge that gap between policy and practice.

Another important finding was the role of stakeholder engagement. Countries that achieved more success in implementing open science were typically those that actively engaged a wide range of stakeholders (researchers, institutions, funders, publishers, and civil society) in shaping their national approaches. This highlights the need for monitoring systems to be developed through genuinely inclusive processes.

From a global perspective, we believe monitoring systems must be able to capture both progress and the challenges along the way. It is not enough simply to measure what is working; there is equal value in identifying the barriers, understanding why they exist, and finding ways to address them. This requires going beyond simple quantitative metrics to indicators that can reflect the complexity and nuance of the open science transition.

We are also very interested in how monitoring can directly support evidence-based policymaking. The data collected should not sit idle. It should inform decisions about where to invest resources, what kinds of support to provide, and how best to adapt policies based on clear evidence of what is effective and what is not.

Looking ahead, we see real opportunities for greater coordination between the many different monitoring initiatives currently operating. Instead of having parallel systems that operate in isolation, we could be working towards more integrated approaches, systems that serve multiple purposes, are interoperable, and reduce the reporting burden on both institutions and researchers.

Gareth O’Neill (Moderator)

I’m wondering, at which level should alignment happen? Is it at the national level, considering there are different monitoring systems even within Europe? For instance, we have the European Research Area monitoring, the EOSC monitoring, and various other systems. So, the question is: at what level should this alignment take place? And how can we ensure that institutions do not have to report to multiple systems with differing requirements? 

Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine

I think that OSMI is aligning in three different ways.

Firstly, through the principles. As I’ve said, these are the result of extensive consultation with the community, including a range of different monitoring initiatives. These principles reflect a shared consensus on what good monitoring should look like, regardless of the specific context or level. They emphasise values such as transparency, inclusiveness, adaptability, and proportionality. These are relevant whether you’re monitoring at the institutional, national, or international level, and they provide a common framework that different initiatives can use to align their approaches.

Secondly, there is the community aspect. OSMI brings together people from different monitoring initiatives, different institutions, and different countries. This creates real opportunities for dialogue and mutual learning. When people are talking to one another, they are far more likely to spot opportunities to align their work and to avoid unnecessary duplication.

And thirdly, we are working on practical resources and guidance: things like indicator frameworks, data collection templates, and best practice guides. The idea is to provide ready-to-use tools which different initiatives can adapt for their needs, rather than having everyone reinvent the wheel.

But you raise an important point about the multiplicity of monitoring systems. This is indeed a challenge. From an institutional perspective, having to report to multiple systems, each with different requirements, can be both burdensome and inefficient. From a policy perspective, fragmented monitoring makes it much harder to build a coherent picture of progress.

I think there are several steps that could help here.

One is better coordination at the planning stage. If monitoring initiatives talked to one another while designing their systems, they could identify ways to align and find opportunities for shared data collection.

Another is the development of common data standards and formats. If different systems use the same definitions and formats, it becomes much easier to share data and reduce duplication. This doesn’t mean all systems have to be identical — but they should be interoperable.
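
As a rough illustration of what a shared format could look like, the sketch below defines a single indicator record that one extraction could feed into several systems. The field names and controlled values are assumptions made for illustration, not an agreed OSMI or EU standard.

```python
# Illustrative sketch of an interoperable indicator record; field names and
# controlled values are assumptions, not an agreed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class IndicatorRecord:
    organisation_ror: str   # ROR identifier of the reporting organisation
    indicator: str          # e.g. "open_access_share"
    value: float
    unit: str               # e.g. "proportion"
    reference_year: int
    method_url: str         # link to the documented calculation method

record = IndicatorRecord(
    organisation_ror="https://ror.org/00example00",  # placeholder identifier
    indicator="open_access_share",
    value=0.72,
    unit="proportion",
    reference_year=2024,
    method_url="https://example.org/methods/open_access_share",
)

# The same JSON payload could then be reused for institutional, national,
# or EU-level reporting instead of re-entering the data in each system.
print(json.dumps(asdict(record), indent=2))
```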

We also need to consider the user experience: what it is like for institutions and researchers who are providing the data. If we can design systems that minimise the reporting burden while still meeting the needs of different users, everyone benefits.

I think technology also has a role to play here. We need systems that can automatically extract and format data from institutional sources, rather than requiring manual entry for every initiative. We also need dashboards that can present the same underlying data in different ways, depending on the purpose.
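
As one hedged example of what such automation might look like, the sketch below harvests publication records from an institutional repository over OAI-PMH (a widely used repository protocol) and derives an open access share. The endpoint URL is a placeholder, the rights-based rule is deliberately simplistic, and a production harvester would also follow resumption tokens and handle errors.

```python
# Sketch: harvest one page of records from a repository's OAI-PMH endpoint
# and estimate the open access share from Dublin Core rights statements.
# The endpoint is a placeholder; the "openAccess" string test assumes the
# info:eu-repo access-rights convention and is intentionally simplistic.
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"
ENDPOINT = "https://repository.example.edu/oai"  # placeholder

def open_access_share(endpoint: str = ENDPOINT) -> float:
    url = f"{endpoint}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url, timeout=30) as resp:
        root = ET.parse(resp).getroot()
    total = open_count = 0
    for record in root.iter(f"{OAI}record"):
        total += 1
        rights = [(e.text or "") for e in record.iter(f"{DC}rights")]
        if any("openAccess" in r for r in rights):
            open_count += 1
    return open_count / total if total else 0.0

if __name__ == "__main__":
    print(f"Open access share (first page only): {open_access_share():.1%}")
```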

But ultimately, alignment is not just a technical challenge; it’s also a political and organisational one. Different monitoring initiatives exist because they serve different stakeholder needs. Funders require different information than policymakers, and policymakers often need different information than research institutions. The challenge is to meet these diverse needs while avoiding duplication and reducing burden.

This is where I believe initiatives like OSMI can add real value, not by trying to replace existing monitoring systems, but by helping them work together more effectively. We can provide a forum for dialogue, develop shared principles and standards, and create resources that all can use.

Of course, this requires buy-in from every stakeholder. Monitoring initiatives need to recognise the value of coordination and be willing to adapt their approaches. Institutions need to feel confident that coordination will actually reduce their workload, rather than adding another layer of complexity.

I think we are making progress, but there is still much work to do. The fact that we’re having this conversation, and that projects like OPUS are actively testing these approaches on the ground, is very encouraging. But we need to keep pushing for stronger coordination and alignment.

One point I’d like to highlight is the importance of learning from what doesn’t work. We should be honest about where our monitoring systems are falling short, and be willing to adapt. That means creating feedback loops where those providing data can share their experiences and suggest improvements.

It also means being willing to stop doing things that add no value. Sometimes, the most effective way to reduce burden is to eliminate redundant or low-priority monitoring activities, and that requires honest conversations about priorities and trade-offs.

We can also learn from other sectors that have faced similar coordination challenges, for example, healthcare, environmental monitoring, or quality assurance in manufacturing. These fields have developed methods for aligning multiple monitoring systems while still preserving their specific purposes.

Finally, I want to emphasise that alignment should not mean full standardisation. Different contexts need different approaches. The objective is to reduce unnecessary burden while still allowing for diversity and innovation in how monitoring is done.

So, in summary: alignment can take place at multiple levels, but it requires deliberate effort, coordination between initiatives, the development of shared standards and tools, and keeping the user experience front and centre. OSMI is working to contribute to this, but ultimately, it has to be a collective effort involving all stakeholders.

Pil Maria Saugmann, European Council of Doctoral Candidates and Junior Researchers

I am the current President of Eurodoc, the European Council of Doctoral Candidates and Junior Researchers, and we represent around 35 member organisations across Europe.

From the perspective of early career researchers, we often see a disconnect between the high-level policy discussions on open science monitoring and the reality of what researchers are experiencing on the ground. Many of our members tell us that they fully support open science in principle, but they face very real, practical barriers that make it difficult to put into practice.

For example, many early career researchers worry about the career implications of engaging in open science practices. If their institution tells them that it values open science, but promotion and evaluation criteria still prioritise traditional metrics such as the journal impact factor, they receive mixed messages. This creates a tension between doing what they genuinely believe is the right thing for science and doing what they believe will advance their careers.

There is also the matter of resources and support. Many early career researchers want to practise open science, but they simply do not know how to start. They need training in areas such as research data management, open access publishing, and collaborative and transparent research practices. However, this training is not always available, or it is poorly integrated into doctoral and early career development programmes.

Another challenge is the additional workload that open science practices can create, especially at the beginning. Learning new tools, adopting proper data management protocols, and navigating open access requirements all take time. For researchers already stretched thin by research, teaching, and administrative responsibilities, this can feel overwhelming.

From a monitoring perspective, we believe it is important to capture not just what researchers are doing, but also the support they are, or are not, receiving. Are institutions providing adequate training? Are career advancement criteria genuinely aligned with open science values? Is there sufficient infrastructure and resourcing to enable the wider adoption of open practices?

It is also worth noting that early career researchers are often the most enthusiastic advocates of open science, but we are also the most vulnerable to the potential risks. We may be more willing to share our data or publish in open access journals, but we are also more acutely aware of the possible negative implications for our career progression.

This suggests that monitoring systems need to take into account both generational differences and career-stage effects. What might appear to be slow institutional progress could in fact reflect the understandable caution of more established researchers when it comes to changing long-held practices.

We would also like to see more participatory approaches to monitoring. Instead of just measuring researcher behaviour from the outside, why not actively involve researchers, especially early career ones, in designing these systems? We often have valuable insights into which indicators are truly meaningful and which data collection methods are practical and realistic.

Finally, I want to stress that monitoring should exist to serve researchers, not the other way around. The ultimate goal should be to create systems that help us do better science and have greater impact, rather than simply ticking administrative boxes.

There are, fortunately, some positive developments. Increasingly, more funders require open science practices, which is creating incentives for institutions to strengthen their support. More institutions are also revising their evaluation criteria to include open science indicators. Still, there remains a great deal of work to align the different levels of the system so that policies, funding, and institutional practice work in harmony.

I believe initiatives like OPUS and OSMI are particularly valuable because they bring together diverse stakeholders to address these alignment challenges. But it is essential that early career researchers have a place at the table in these conversations, because we are the ones who will ultimately be implementing these practices throughout our careers.

So, I would encourage everyone involved in designing monitoring systems to think from the perspective of the researcher, and to actively involve us in the process. After all, if monitoring systems do not work for researchers, they are not going to work at all.
