Open Science News

How do we translate open science monitoring policies into real, sustainable institutional practices?

Organised by UNESCO, OPUS, OpenAIRE, and OSMI on 8 July 2025, as part of the Open Science: Monitoring Progress, Assessing Impact conference at UNESCO Headquarters in Paris, a crucial joint session brought together leading international initiatives to address one of the most pressing questions in research policy: how do we translate open science monitoring policies into real, sustainable institutional practices?

About the Session: The Role of RPOs and RFOs in Open Science Monitoring: Connecting Institutional Practices to Policy Aggregation

Date: Tuesday, 8 July 2025
Organised by: UNESCO, OPUS, OpenAIRE, and OSMI

This roundtable session explored the practical realities behind policy implementation, addressing the mounting pressures facing research-performing organisations (RPOs) and research-funding organisations (RFOs) as they navigate rising demands for accountability, transparency, and alignment with evolving open science frameworks.

Experiences and Barriers that Institutions Face When Implementing Monitoring Systems

As open science frameworks proliferate across Europe and beyond, a critical gap has emerged between policy vision and operational capacity. The session moved beyond theoretical discussions to surface lived experiences, assumptions, and barriers that institutions face when implementing monitoring systems. Key topics included institutional-level engagement with EU monitoring recommendations, feasibility and cost concerns for implementation, researcher engagement and resistance, and gaps between policy vision and operational capacity.

Session Participants

Moderator:

  • Gareth O’Neill, Principal Consultant on Open Science, Technopolis Group Belgium (OPUS consortium partner)

Roundtable Contributors:

  • Natalia Manola, CEO, OpenAIRE AMKE
  • Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine
  • Alina Irimia, Open Science Projects Coordinator, UEFISCDI-Romania (OPUS pilot RFO organisation)
  • Nataša Jakominić Marot, Head of Research and Innovation, University of Rijeka, Croatia (OPUS pilot RPO organisation)
  • Manuel Ricardo Galindo Moreno, Science, Technology, and Innovation Policy, UNESCO

Natalia Manola, CEO, OpenAIRE AMKE 

OpenAIRE is a European infrastructure for open science and scholarly communication. Naturally, one of our major interests is how to monitor open science.

Over the years, we have realised that, as an infrastructure builder and provider, it is essential for us, when creating infrastructure, to ensure that from day one it is capable of gathering the evidence needed to measure policies on open science. We understand that monitoring can happen at various levels — for example, at the institutional level, where decisions need to be taken quickly in ways that influence organisational policies and researchers.

This means a strong focus on quantitative indicators. As we move up to higher levels, however, it also becomes a question of strategic alignment. Within OpenAIRE, we have a working group on monitoring open science policies, led by Eloy Rodrigues from the University of Minho. Given that we are a member-based organisation, this group is extremely valuable: members share the types of monitoring they carry out, and we can see that there is a great deal of overlap between indicators.

One of the major challenges we see, from the perspective of infrastructures, is that overlapping indicators make evidence collection resource-intensive and highly demanding for both organisations and funders. Multiple monitoring initiatives often request similar or identical data, which forces organisations to populate various systems, duplicating effort.

This is why infrastructures must be ready to support evidence collection and help organisations, funders, and institutions to extract this evidence from their own systems, so it can feed into the different monitoring frameworks. One way to reduce overlap is through better coordination between monitoring initiatives, something we can see happening, for instance, through OSMI, which is very encouraging and important.

Secondly, infrastructures should provide the evidence required by the various monitoring initiatives — and this needs to be well-coordinated and improved. Thirdly, institutions themselves need to better understand where their evidence can be most effectively used for monitoring purposes, and to take a more strategic approach in deciding which indicators they track internally versus those they report externally.

I don’t think this is a matter of “research washing” or “greenwashing”; monitoring open science is critically important. It can not only help us measure progress, but also identify gaps in implementation and guide the development of targeted support interventions. In other words, it’s not just about measurement; it’s about informed action and improvement.


Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine 

OSMI stands for the Open Science Monitoring Initiative. So, what is it about? As you may have guessed, it’s about open science monitoring.

At present, OSMI brings together nearly 200 people and more than 100 research institutions. It was established to promote transparency, inclusiveness, and adaptability in tracking open science progress globally. OSMI is organised through several working groups that support and advance its activities.

It operates as a grassroots initiative, engaging in consultation cycles with the community. Through this approach, it develops principles and guidelines for open science monitoring, ensuring that monitoring efforts are coordinated, comprehensive, and aligned with the diverse needs of stakeholders. In essence, the goal is to strengthen coordination and alignment across the many different initiatives already under way.

Let me also briefly outline OSMI’s core goals. These are:

  • To support the development and implementation of transparent, accountable, and sustainable monitoring systems that advance open science at local, national, and international levels.
  • To foster dialogue and knowledge sharing among researchers, policymakers, funders, and other stakeholders involved in open science monitoring — building a strong and connected community around these efforts.
  • And finally, to provide resources and practical guidance to help organisations implement effective monitoring practices — ensuring that we share the same understanding of what monitoring means and apply similar, well-informed practices across contexts.

So, in short, OSMI is about building principles, providing resources, and helping the community move forward together, speaking the same language when it comes to open science monitoring.

Alina Irimia, UEFISCDI-Romania, OPUS pilot RFO organisation

Beyond the funding-side groundwork within the OPUS project, in Romania I also wear the hat of national reporter for UNESCO, responsible for reporting on the implementation of the Recommendation on Open Science. This means I have visibility over monitoring both at the funding organisation level and at the national level.

From the perspective of the funding organisation, we have several types of monitoring activities. One is compliance monitoring, which is about ensuring that funded projects meet open science requirements, for example, making sure that publications resulting from funded research are openly accessible, or that research data is shared according to the FAIR principles, where applicable.

We also carry out performance monitoring, which assesses how effectively open science practices are being adopted and the extent of their impact on research outcomes. This could involve examining citation rates of open access publications, download statistics for shared datasets, or mapping the collaboration networks that emerge from openly conducted projects.

The third type is strategic monitoring, which evaluates the broader impact of our open science policies and how well they align with national and international goals. This helps us determine whether our funding programmes are effectively promoting open science and identify where adjustments may be needed.

From a national perspective, the monitoring process becomes even more complex. We need to aggregate data from multiple sources, such as funding organisations like ours, research institutions, libraries, and other stakeholders. We look at indicators that measure progress towards the country’s open science goals, as well as Romania’s performance in comparison to other nations.

However, there are notable challenges. One is data collection and harmonisation: different organisations may work with different metrics or definitions, making it difficult to develop a coherent national picture. We also face resource constraints: effective monitoring requires dedicated personnel and robust technical infrastructure, which can be especially challenging for smaller organisations.

Another challenge is engaging researchers and institutions. Many perceive monitoring as an additional administrative burden rather than a tool for improvement. We therefore need to communicate its value more clearly and show how it can help institutions and researchers enhance their practices.

Looking ahead, I see opportunities for better coordination between monitoring initiatives, as Natalia mentioned, and for leveraging technology to automate data collection, reducing the effort required from researchers and institutions. There is also considerable potential to move beyond simple compliance metrics towards more meaningful indicators of impact.

In my view, we need to work collaboratively to develop monitoring systems that are genuinely useful, efficient, and effective: systems that actively help us to advance open science, rather than merely measure it.

Nataša Jakominić Marot, Head of Research and Innovation, University of Rijeka, Croatia

At the University of Rijeka, what we did was select from the assessment framework in OPUS, which, I have to say, was like a menu, and a very comprehensive one at that. It contained eight dimensions, each with several practices and indicators, so, really, quite a big menu to choose from.

From this, we selected around 15 practices that we felt best suited our current situation and strategic priorities. We didn’t try to implement everything at once; instead, we chose what made the most sense for us at this stage. Some of the practices we selected included:

  • open access publishing policies,
  • research data management,
  • open educational resources, and
  • transparent evaluation criteria for hiring and promotion.

What was interesting was that, when we began implementing these, we found that some things we had assumed would be easy turned out to be quite complex, and, conversely, some things we thought would be difficult were actually relatively straightforward. For example, we expected implementing open access policies to be simple, but we encountered resistance from some researchers who were concerned about publication costs and journal quality. On the other hand, introducing research data management practices was easier than expected, because many researchers were already keen to better organise their data.

One of the clearest lessons we learned was the importance of communication and training. We couldn’t simply announce new policies and expect them to be adopted. We had to explain why these practices matter, provide training on how to apply them, and offer ongoing support. We organised workshops, developed guidance documents, and set up help desks where researchers could bring their questions.

Another lesson was about timing and sequencing. We found it far better to implement practices gradually, making sure each one was embedded and working effectively before moving to the next. We also realised that some practices are interconnected; for example, open access publishing and research data management tend to go hand in hand, so it can be sensible to address them together.

From a monitoring perspective, we discovered that it is vital to have both quantitative and qualitative indicators. Numbers can tell you what is happening, but they don’t always explain why. We therefore combined metrics, such as the percentage of open access publications, with qualitative feedback from researchers about their experiences and challenges.

We also concluded that monitoring should be used not just for checking compliance, but as a tool to identify where further support is needed. If we saw researchers struggling with a particular practice, that was a signal for us to improve our training or support services.

One challenge we faced was balancing institutional priorities with the needs of individual researchers. Some were very enthusiastic about open science practices, while others were sceptical or worried about the extra workload. We addressed this by focusing on the benefits of these practices, rather than simply presenting them as requirements.

Another challenge was aligning our institutional practices with national and international monitoring requirements. At times, the indicators that made perfect sense for us internally did not align fully with what national or EU-level systems were asking for. This created an additional administrative burden in terms of data collection and reporting.

Looking at the bigger picture, I believe there is a clear need for better coordination between different levels of monitoring. If institutional, national, and international systems were better aligned, the administrative burden would be reduced, and the data collected would be more meaningful.

I also think there is great value in sharing experiences between institutions. What works well at one university may not work at another, but there are always lessons to be learned. Having platforms where institutions can share best practices, and also challenges, would be extremely useful.

Finally, I want to emphasise that monitoring should ultimately serve the goal of improving open science practices. It is not just about collecting data or meeting reporting requirements; it is about creating a culture in which open science practices are genuinely valued and supported.

Manuel Ricardo Galindo Moreno, UNESCO

From UNESCO’s perspective, we are well aware of the principles proposed for monitoring by OSMI. We also have the UNESCO Recommendation on Open Science, which we have been implementing and tracking through a global survey.

The first survey, conducted in 2023 with our Member States, received 118 responses, each providing insights into the level of implementation of the Recommendation.

What we found is that there are significant variations in how different countries approach open science monitoring. Some have well-developed national strategies and sophisticated monitoring systems in place. Others are only at the very beginning of developing their approaches. This reflects, of course, differing levels of resources, infrastructure, and political commitment to advancing open science.

One of the key challenges we identified is the urgent need for capacity building, particularly in developing countries. Many nations aspire to make progress in open science but lack either the technical expertise or the institutional infrastructure to design and implement effective monitoring systems. This is precisely where international cooperation and knowledge sharing become so critical.

We also observed that there is often a disconnect between national policy commitments and what actually happens at the institutional level. A country may have strong, well-written policies on paper, but translating these into practice within universities, research institutes, and other organisations requires significant effort, resources, and coordination. In this context, initiatives like OPUS play an essential role, helping to bridge that gap between policy and practice.

Another important finding was the role of stakeholder engagement. Countries that achieved more success in implementing open science were typically those that actively engaged a wide range of stakeholders, researchers, institutions, funders, publishers, and civil society, in shaping their national approaches. This highlights the need for monitoring systems to be developed through genuinely inclusive processes.

From a global perspective, we believe monitoring systems must be able to capture both progress and the challenges along the way. It is not enough simply to measure what is working; there is equal value in identifying the barriers, understanding why they exist, and finding ways to address them. This requires going beyond simple quantitative metrics to indicators that can reflect the complexity and nuance of the open science transition.

We are also very interested in how monitoring can directly support evidence-based policymaking. The data collected should not sit idle. It should inform decisions about where to invest resources, what kinds of support to provide, and how best to adapt policies based on clear evidence of what is effective and what is not.

Looking ahead, we see real opportunities for greater coordination between the many different monitoring initiatives currently operating. Instead of having parallel systems that operate in isolation, we could be working towards more integrated approaches, systems that serve multiple purposes, are interoperable, and reduce the reporting burden on both institutions and researchers.

Gareth O’Neill (Moderator)

I’m wondering, at which level should alignment happen? Is it at the national level, considering there are different monitoring systems even within Europe? For instance, we have the European Research Area monitoring, the EOSC monitoring, and various other systems. So, the question is: at what level should this alignment take place? And how can we ensure that institutions do not have to report to multiple systems with differing requirements? 

Laetitia Bracco, OSMI Co-coordinator, Université de Lorraine

I think that OSMI is aligning in three different ways.

Firstly, through the principles. As I’ve said, these are the result of extensive consultation with the community, including a range of different monitoring initiatives. These principles reflect a shared consensus on what good monitoring should look like, regardless of the specific context or level. They emphasise values such as transparency, inclusiveness, adaptability, and proportionality. These are relevant whether you’re monitoring at the institutional, national, or international level, and they provide a common framework that different initiatives can use to align their approaches.

Secondly, there is the community aspect. OSMI brings together people from different monitoring initiatives, different institutions, and different countries. This creates real opportunities for dialogue and mutual learning. When people are talking to one another, they are far more likely to spot opportunities to align their work and to avoid unnecessary duplication.

And thirdly, we are working on practical resources and guidance, things like indicator frameworks, data collection templates, and best practice guides. The idea is to provide ready-to-use tools which different initiatives can adapt for their needs, rather than having everyone reinvent the wheel.

But you raise an important point about the multiplicity of monitoring systems. This is indeed a challenge. From an institutional perspective, having to report to multiple systems, each with different requirements, can be both burdensome and inefficient. From a policy perspective, fragmented monitoring makes it much harder to build a coherent picture of progress.

I think there are several steps that could help here.

One is better coordination at the planning stage. If monitoring initiatives talked to one another while designing their systems, they could identify ways to align and find opportunities for shared data collection.

Another is the development of common data standards and formats. If different systems use the same definitions and formats, it becomes much easier to share data and reduce duplication. This doesn’t mean all systems have to be identical — but they should be interoperable.

We also need to consider the user experience: what it is like for institutions and researchers who are providing the data. If we can design systems that minimise the reporting burden while still meeting the needs of different users, everyone benefits.

I think technology also has a role to play here. We need systems that can automatically extract and format data from institutional sources, rather than requiring manual entry for every initiative. We also need dashboards that can present the same underlying data in different ways, depending on the purpose.

But ultimately, alignment is not just a technical challenge; it’s also a political and organisational one. Different monitoring initiatives exist because they serve different stakeholder needs. Funders require different information than policymakers, and policymakers often need different information than research institutions. The challenge is to meet these diverse needs while avoiding duplication and reducing burden.

This is where I believe initiatives like OSMI can add real value, not by trying to replace existing monitoring systems, but by helping them work together more effectively. We can provide a forum for dialogue, develop shared principles and standards, and create resources that all can use.

Of course, this requires buy-in from every stakeholder. Monitoring initiatives need to recognise the value of coordination and be willing to adapt their approaches. Institutions need to feel confident that coordination will actually reduce their workload, rather than adding another layer of complexity.

I think we are making progress, but there is still much work to do. The fact that we’re having this conversation, and that projects like OPUS are actively testing these approaches on the ground, is very encouraging. But we need to keep pushing for stronger coordination and alignment.

One point I’d like to highlight is the importance of learning from what doesn’t work. We should be honest about where our monitoring systems are falling short, and be willing to adapt. That means creating feedback loops where those providing data can share their experiences and suggest improvements.

It also means being willing to stop doing things that add no value. Sometimes, the most effective way to reduce burden is to eliminate redundant or low-priority monitoring activities, and that requires honest conversations about priorities and trade-offs.

We can also learn from other sectors that have faced similar coordination challenges, for example, healthcare, environmental monitoring, or quality assurance in manufacturing. These fields have developed methods for aligning multiple monitoring systems while still preserving their specific purposes.

Finally, I want to emphasise that alignment should not mean full standardisation. Different contexts need different approaches. The objective is to reduce unnecessary burden while still allowing for diversity and innovation in how monitoring is done.

So, in summary: alignment can take place at multiple levels, but it requires deliberate effort, coordination between initiatives, the development of shared standards and tools, and keeping the user experience front and centre. OSMI is working to contribute to this, but ultimately, it has to be a collective effort involving all stakeholders.

Pil Maria Saugmann, European Council of Doctoral Candidates and Junior Researchers

I am the current President of Eurodoc, the European Council of Doctoral Candidates and Junior Researchers, and we represent around 35 member organisations across Europe.

From the perspective of early career researchers, we often see a disconnect between the high-level policy discussions on open science monitoring and the reality of what researchers are experiencing on the ground. Many of our members tell us that they fully support open science in principle, but they face very real, practical barriers that make it difficult to put into practice.

For example, many early career researchers worry about the career implications of engaging in open science practices. If their institution tells them that it values open science, but promotion and evaluation criteria still prioritise traditional metrics such as the journal impact factor, they receive mixed messages. This creates a tension between doing what they genuinely believe is the right thing for science and doing what they believe will advance their careers.

There is also the matter of resources and support. Many early career researchers want to practise open science, but they simply do not know how to start. They need training in areas such as research data management, open access publishing, and collaborative and transparent research practices. However, this training is not always available, or it is poorly integrated into doctoral and early career development programmes.

Another challenge is the additional workload that open science practices can create, especially at the beginning. Learning new tools, adopting proper data management protocols, and navigating open access requirements all take time. For researchers already stretched thin by research, teaching, and administrative responsibilities, this can feel overwhelming.

From a monitoring perspective, we believe it is important to capture not just what researchers are doing, but also the support they are, or are not, receiving. Are institutions providing adequate training? Are career advancement criteria genuinely aligned with open science values? Is there sufficient infrastructure and resourcing to enable the wider adoption of open practices?

It is also worth noting that early career researchers are often the most enthusiastic advocates of open science, but we are also the most vulnerable to the potential risks. We may be more willing to share our data or publish in open access journals, but we are also more acutely aware of the possible negative implications for our career progression.

This suggests that monitoring systems need to take into account both generational differences and career-stage effects. What might appear to be slow institutional progress could in fact reflect the understandable caution of more established researchers when it comes to changing long-held practices.

We would also like to see more participatory approaches to monitoring. Instead of just measuring researcher behaviour from the outside, why not actively involve researchers, especially early career ones, in designing these systems? We often have valuable insights into which indicators are truly meaningful and which data collection methods are practical and realistic.

Finally, I want to stress that monitoring should exist to serve researchers, not the other way around. The ultimate goal should be to create systems that help us do better science and have greater impact, rather than simply ticking administrative boxes.

There are, fortunately, some positive developments. Increasingly, more funders require open science practices, which is creating incentives for institutions to strengthen their support. More institutions are also revising their evaluation criteria to include open science indicators. Still, there remains a great deal of work to align the different levels of the system so that policies, funding, and institutional practice work in harmony.

I believe initiatives like OPUS and OSMI are particularly valuable because they bring together diverse stakeholders to address these alignment challenges. But it is essential that early career researchers have a place at the table in these conversations, because we are the ones who will ultimately be implementing these practices throughout our careers.

So, I would encourage everyone involved in designing monitoring systems to think from the perspective of the researcher, and to actively involve us in the process. After all, if monitoring systems do not work for researchers, they are not going to work at all.

Global Leaders Converge in Paris to Chart the Future of Open Science Monitoring

The OPUS team participated at the Open Science: Monitoring Progress, Assessing Impact conference, which took place on 7 & 8 July 2025, at UNESCO Headquarters in Paris. 

Over two days at UNESCO Headquarters and online, more than 500 participants from across the globe gathered for the Open Science: Monitoring Progress, Assessing Impact conference, an event uniting policymakers, researchers, and open science practitioners in defining how to track, evaluate, and advance the global shift towards more transparent and equitable research.

Organised by UNESCO, OpenAIRE, PathOS, EOSC Track, the French Ministry of Higher Education and Research, Inria, and OSMI, the conference marked the culmination of the EU-funded PathOS project and showcased the launch of new tools and frameworks set to shape open science policy worldwide.

A Global Call to Action from UNESCO

Opening the event, Lidia Brito, UNESCO’s Assistant Director-General for Natural Sciences, underscored the urgency of building robust, inclusive monitoring systems:

“Open Science is not a choice, it is a necessity for the equitable and sustainable future we seek. Measuring impact is the key to ensuring our efforts truly benefit science, society, and the planet.”

Lidia Brito, UNESCO’s Assistant Director-General for Natural Sciences

Her address set the tone for two intense days blending visionary discussions with concrete tools for policymakers, research institutions, and funders.

PathOS: Evidence, Methods, and Impact Pathways

PathOS, a three-year EU-funded project, took centre stage in multiple sessions. Ioanna Grypari (ATHENA RC & OpenAIRE) introduced the project’s innovative evidence base, methodological approaches, and hands-on tools, including real-world case studies revealing both the tangible benefits and persistent blind spots in open science adoption.

Tony Ross-Hellauer (Graz University of Technology) expanded on the project’s strategic lessons, inviting open dialogue on how PathOS findings can translate into actionable policies and clearer pathways for impact.

Turning Indicators into Insights

One of the conference’s key contributions came from Vincent Traag, Jessica Catalano, and Louis Colnot, who presented the Open Science Impact Indicator Handbook alongside a Cost-Benefit Analysis Framework. These resources aim to help institutions and governments measure not only the academic but also the societal and economic value of open science, moving debates beyond citation counts to real-world impact.

Setting the Standards: OSMI Principles

In a milestone for global policy coherence, Nicolas Fressengeas and Laetitia Bracco formally launched the final version of the OSMI Principles of Open Science Monitoring, a comprehensive, internationally consulted framework promoting transparency, inclusiveness, and adaptability. The principles are designed to help policymakers, funders, and infrastructures align monitoring systems with shared values while remaining adaptable to local contexts.

Scaling Up: EOSC Observatory 2.0

A standout moment came with the launch of the second phase of the EOSC Open Science Observatory by Stefan Liebler and Tereza Szybisty. This next-generation monitoring platform offers accessible, harmonised data to track open science uptake across Europe, supporting evidence-based policy at both national and continental levels.

UNESCO’s Global Vision and Regional Realities

Rania Sabo presented UNESCO’s first comprehensive global assessment under the Recommendation on Open Science, offering a panoramic view of international progress. The discussion revealed striking variations in national readiness, as well as shared barriers such as resourcing, policy alignment, and researcher incentives.

Connecting Policy to Practice: OPUS Collaboration

A joint session with the Open and Universal Science (OPUS) project brought the conversation to the institutional level. Representatives of research-performing organisations (RPOs) and research-funding organisations (RFOs) examined the tangible challenges of translating policy frameworks into everyday research practice, grappling with data gaps, compliance burdens, and cultural resistance.

See the full report here.

Closing with a Vision for the Future

In closing remarks, Ioanna Grypari, Nicolas Fressengeas, and Shaofeng Hu (UNESCO) reiterated that the real challenge ahead lies not only in measuring open science, but in ensuring that these measurements guide meaningful action.

“Monitoring must be a tool for transformation,” Hu emphasised, “not a mere accounting exercise.”

Key Takeaways from Paris

  • Evidence matters — Data-driven insights are essential to justify and guide open science policies.
  • Principles and flexibility — Monitoring frameworks must balance common standards with local adaptability.
  • Beyond academia — Measuring societal and economic benefit is as crucial as scholarly impact.
  • Integration across scales — National, institutional, and global efforts must be aligned to avoid duplication and maximise relevance.

With the PathOS legacy cemented, the EOSC Observatory upgraded, and UNESCO’s global vision charted, the Paris conference left participants with both the tools and the mandate to make open science impact assessment a standard — and transformative — part of the research system.

See more:

OPUS Final Conference: First Day Resume

On July 9, 2025, the OPUS (Open and Universal Science) project consortium held its Final Conference at the UNESCO International Institute for Educational Planning (IIEP) in Paris. The event brought together around 100 attendees, in person and online, including leading stakeholders from across Europe, to reflect on OPUS’s ambitious journey to transform research assessment and incentivize Open Science. Throughout the day, participants explored key themes such as new indicators for assessing researchers, the Researcher Assessment Framework (RAF), pilot actions, targeted interventions, and the future trajectory of Open Science in Europe.

Speaker Highlights

Corina Moya-Falcón, OPUS Coordinator, PLOCAN

Corina Moya-Falcón opened the conference by reflecting on OPUS’s vision to revolutionize research assessment in Europe. She emphasized the consortium’s commitment to fostering a culture of openness, transparency, and collaboration. Corina detailed how OPUS had worked to bridge gaps between institutions, funders, and researchers, highlighting the value of shared learning and collective action.

In her remarks, Corina discussed the strategic approach taken by OPUS to incentivize Open Science. She described the development of new indicators and assessment tools that reward researchers for open practices, such as sharing data and engaging with non-academic stakeholders. Corina also addressed the challenges of changing institutional mindsets and the importance of sustained advocacy.

Concluding her contributions, Corina outlined the next steps for OPUS beyond the project’s formal end. She spoke about PLOCAN’s ongoing initiatives to embed Open Science principles in research management and the broader goal of building a pan-European community dedicated to responsible research assessment.

Gustavo Merino, Director, UNESCO Social Policies Division

Gustavo Merino welcomed participants on behalf of UNESCO, situating OPUS’s work within the global movement for equitable knowledge sharing. He recounted UNESCO’s longstanding commitment to supporting Open Science as a driver of sustainable development and social progress. Gustavo praised OPUS for aligning its objectives with UNESCO’s vision of science as a public good.

He elaborated on the importance of international cooperation in advancing Open Science. Gustavo highlighted the need for harmonized policies, cross-border collaboration, and inclusive frameworks that ensure no researcher or institution is left behind. He cited examples of successful partnerships and underscored the role of UNESCO in facilitating dialogue and knowledge exchange.

In his closing remarks, Gustavo encouraged participants to continue advocating for Open Science at both national and international levels. He stressed that the lessons learned from OPUS could serve as a model for other regions, reinforcing the idea that open and responsible research benefits society as a whole.

Rinske Van Den Berg, REA Project Officer

Representing the European Commission, Rinske Van Den Berg provided a policy perspective on the importance of research assessment reform. She acknowledged the significant progress made by OPUS in developing practical tools and frameworks that align with European priorities for Open Science. Rinske highlighted the critical role of funding agencies in driving systemic change.

Rinske discussed the challenges faced by Research Funding Organisations (RFOs) and Research Performing Organisations (RPOs) in adopting new assessment criteria. She emphasized the need for clear guidance, capacity building, and ongoing support to ensure successful implementation. Rinske also noted the value of pilot programs in testing and refining new approaches.

She concluded by expressing optimism about the future of Open Science in Europe. Rinske called for continued collaboration among stakeholders and reiterated the European Commission’s commitment to supporting initiatives that promote openness, transparency, and societal impact in research.

Gareth O’Neill (TGB) & Emma Day (Vitae): OPUS Researcher Assessment Framework (RAF)

Gareth O’Neill and Emma Day delivered an in-depth presentation on the OPUS Researcher Assessment Framework (RAF). They began by outlining the limitations of traditional assessment metrics and the need for a more holistic approach. Gareth and Emma explained how the RAF was designed to evaluate researchers based on their contributions to Open Science, including open access publishing, data sharing, and public engagement.

They described the four key categories of the RAF (Proposal Development, Methods Development, Research Outputs, and Research Impact) and provided examples of how each category could be assessed using qualitative and quantitative indicators. Gareth and Emma emphasized the flexibility of the framework, noting its adaptability to different disciplinary and institutional contexts.

In the final part of their session, they shared early results from pilot implementations of the RAF. They highlighted positive feedback from researchers and institutions, as well as areas for further refinement. Gareth and Emma concluded by inviting participants to contribute to the ongoing development and scaling of the RAF across Europe.

Raquel Vega (YERUN): Introduction to Pilot Action and Mutual Learning

Raquel Vega introduced the pilot action segment by framing it as an opportunity for mutual learning and practical experimentation. She explained the rationale behind piloting research assessment reforms in diverse settings, emphasizing the importance of context-specific solutions. Raquel highlighted the collaborative spirit that characterized the OPUS pilots.

She provided an overview of the pilot sites and the methodologies used to assess the effectiveness of new indicators and interventions. Raquel stressed the value of open communication and knowledge sharing among participating organisations, which helped to identify best practices and address common challenges.

Raquel concluded by encouraging participants to view the pilot actions as a foundation for broader reform. She called for sustained engagement and the scaling of successful initiatives to ensure lasting impact on research assessment practices.

Alina Irimia, UEFISCDI (Romania) – Pilot Action

Alina Irimia shared Romania’s experience with piloting OPUS’s recommendations at the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI). She described the process of integrating Open Science principles into national funding schemes, including the development of tailored indicators and reporting mechanisms.

Alina discussed the challenges encountered, such as resistance to change and the need for extensive stakeholder engagement. She highlighted the strategies employed to overcome these obstacles, including targeted training sessions and awareness campaigns. Alina also noted the importance of aligning national policies with European standards.

In her closing remarks, Alina reflected on the early impacts of the pilot, including increased researcher participation in Open Science activities and improved transparency in funding decisions. She expressed optimism about the long-term benefits of embedding Open Science in research assessment.

Julija Baniukevic, Research Council of Lithuania (RCL) – Pilot Action

Julija Baniukevic presented the Lithuanian pilot action, focusing on the transformation of assessment criteria within the Research Council of Lithuania (RCL). She outlined the steps taken to incentivize open practices among researchers, such as revising grant evaluation guidelines and introducing new performance metrics.

Julija described the collaborative process of engaging with researchers, evaluators, and institutional leaders to ensure buy-in and successful implementation. She emphasized the importance of transparency and consistency in applying the new criteria. Julija also shared examples of positive changes observed in funding decisions and institutional policies.

In her final remarks, Julija highlighted the lessons learned from the pilot, including the need for ongoing dialogue and flexibility. She encouraged other funding agencies to consider similar reforms and to share their experiences for collective advancement.

Isabel L. Nunes, Vice-Rector, NOVA University Lisbon (UNL) – Pilot Action

Isabel L. Nunes detailed NOVA University Lisbon’s journey towards embedding Open Science in academic evaluation. She described the collaborative process of co-designing new assessment tools with faculty and staff, ensuring that the unique needs of different disciplines were addressed.

Isabel discussed the cultural shifts observed within the university, including increased recognition of open practices and greater collaboration among researchers. She highlighted the role of leadership in championing change and the importance of providing resources and support for implementation.

In her concluding remarks, Isabel reflected on the broader implications of the pilot for Portuguese higher education. She expressed hope that the lessons learned would inspire other institutions to embark on similar journeys towards responsible research assessment.

Nataša Jakominić Marot, Head, University of Rijeka Centre for Research and Innovation (UNIRI) – Pilot Action

Nataša Jakominić Marot recounted the University of Rijeka’s pilot, which focused on fostering innovation and open collaboration within the university’s research ecosystem. She described the steps taken to build support structures for Open Science, such as developing institutional repositories and launching training programs.

Nataša highlighted the challenges of overcoming resistance to change, particularly among established researchers. She shared strategies for engaging stakeholders, including regular workshops and transparent communication about the benefits of Open Science. Nataša also discussed the importance of aligning institutional incentives with desired behaviors.

In her closing remarks, Nataša reflected on the early successes of the pilot, such as increased participation in open initiatives and the establishment of new partnerships. She emphasized the need for sustained investment and leadership to ensure long-term impact.

Panagiotis Moiras, University of Cyprus (UCY) – Pilot Action

Panagiotis Moiras discussed the University of Cyprus’s approach to implementing OPUS-inspired reforms. He described the process of reviewing and updating institutional policies to better support Open Science, including the introduction of new assessment criteria and recognition schemes.

Panagiotis highlighted the importance of aligning incentives with open research practices, such as rewarding data sharing and collaborative projects. He shared examples of increased researcher engagement and the formation of new partnerships as a result of the reforms.

In his concluding remarks, Panagiotis emphasized the value of continuous evaluation and adaptation. He encouraged other institutions to remain flexible and responsive to feedback as they pursue Open Science reforms.

Corina Moya-Falcón, Beyond the Project Pilots: PLOCAN Action

Returning to the stage, Corina Moya-Falcón discussed PLOCAN’s ongoing commitment to Open Science beyond the OPUS project. She outlined future plans for scaling up successful interventions, such as expanding training programs and developing new policy frameworks.

Corina emphasized the importance of building a pan-European community dedicated to responsible research assessment. She called for continued collaboration among institutions, funders, and policymakers to sustain momentum and drive further innovation.

In her closing remarks, Corina reaffirmed PLOCAN’s dedication to fostering a culture of openness and transparency in research.

Join the Global Dialogue on Research Integrity at the 9th WCRI in Vancouver, May 2026

The 9th World Conference on Research Integrity (WCRI) will take place from May 3–6, 2026, in the vibrant city of Vancouver, Canada, bringing together researchers, institutions, funders, publishers, and policymakers from all disciplines and regions to advance responsible research practices.

Open to professionals at all career stages, the WCRI fosters a dynamic and inclusive environment through presentations, policy discussions, and networking sessions. With a strong focus on diversity, early-career support, and global collaboration, the conference will feature workshops, doctoral forums, and meet-the-expert sessions designed to equip the next generation of integrity leaders.

This edition will spotlight three critical themes:

  • Artificial Intelligence: Exploring AI’s dual role in enhancing research quality while also posing new risks for misconduct and bias.
  • Research Security: Addressing the balance between protecting sensitive research and maintaining openness and transparency in science.
  • Indigenous Knowledge Systems: Deepening the dialogue between Indigenous ways of knowing and traditional research integrity frameworks.

In line with past WCRI events that produced influential documents such as the Singapore Statement, the Hong Kong Principles, and the Cape Town Statement, WCRI 2026 also aims to deliver a major contribution to global research policy.

Participants are welcome to join on-site or virtually. To stay informed, register for the newsletter at www.wcri2026.org and be part of shaping the future of ethical and trustworthy research.

Transforming Research Assessment: GraspOS Conference to Champion Open Science Principles

In the pursuit of a more equitable and transparent research ecosystem, the upcoming Conference on Opening Research Assessment will take place in Pisa, Italy, spotlighting the outcomes of the GraspOS project and offering a platform for transformative dialogue around responsible research assessment aligned with Open Science values.

Set to be hosted at the CNR Area della Ricerca di Pisa, the conference aims to reshape how research activities and contributions are evaluated—emphasizing transparency, inclusivity, and fair recognition of all contributions, particularly those supporting Open Science. This shift requires not only valuing diverse research outputs but also adopting open data, tools, and infrastructures throughout the assessment process.

At the heart of the event is the GraspOS project, a forward-thinking EU initiative dedicated to developing a federated infrastructure that aggregates open resources to support emerging policy reforms. The project’s work is designed to help institutions transition towards assessment systems that are not only robust and transparent but also actively foster Open Science practices.

The conference programme will feature:

  • Sessions on open infrastructures for responsible assessment,
  • Discussions on transparency and inclusivity in evaluation processes,
  • Panels on recognising Open Science contributions, and
  • A poster session, with a call for contributions opening soon.

Important logistical details:

  • Venue: CNR Area della Ricerca, Via Giuseppe Moruzzi, Pisa
  • Access: Easily reachable via bus (LAM Verde), taxi, or a 30-minute walk from the city center
  • Closest airport: Pisa International Airport (Galileo Galilei)
  • Train station: Pisa Centrale

The event is supported by a diverse Scientific Committee from leading European research organizations, ensuring a wide-ranging and inclusive perspective.

For more details and updates on registration and poster submissions, visit: www.graspos.eu/graspos-conference-2025 or contact the organizing team at graspos-pm@athenarc.gr.

This conference represents a significant step toward embedding fairness, openness, and responsibility into how we assess and reward scientific work.

Fifth National Open Science Festival Set for Groningen in October 2025

The fifth edition of the National Open Science Festival (#OSF2025NL) will take place on 24 October 2025 at the University Medical Center Groningen (UMCG), marking a significant milestone for the Dutch open science community.

A Hub for Open Science Enthusiasts

This annual event, organized by Open Science NL in collaboration with the University of Groningen and UMCG, is designed for researchers, research support professionals, and anyone interested in open science practices. The festival aims to foster collaboration, transparency, and innovation in research by promoting the sharing of data, publications, software, and results as openly and early as possible.

Interactive Program and Community Engagement

Attendees can expect a full day of interactive sessions, an information market, and numerous networking opportunities. The program is curated by an independent committee to ensure broad representation from the open science community. This year, the committee is chaired by Casper Albers, dean of the Faculty of Behavioural and Social Sciences at the University of Groningen, and Ana Ranitović, chief of Open Science at the same university.

Continuing a Tradition of Inclusivity

The National Open Science Festival is known for its community-driven approach. Previous editions have been hosted by Wageningen University (2021, online), Vrije Universiteit Amsterdam (2022), Erasmus University Rotterdam (2023), and Maastricht University (2024). The selection of Groningen as the 2025 host follows a transparent process that prioritizes institutions new to hosting and encourages collaboration between universities and research institutes.

Barcamp Returns for a Second Year

On the day before the festival, the OSC-NL Barcamp will return to Groningen. This informal, participant-driven event offers a platform for open science enthusiasts, from newcomers to experienced practitioners, to exchange ideas and collaborate in a flexible, agenda-free format. The Barcamp is organized by OSC-NL and the National Coordination Point for Research Data Management (LCRDM).

Open Science: A Growing Movement

Open Science, as defined by UNESCO and embraced by the festival, seeks to make scientific knowledge openly available, accessible, and reusable for all. The movement leverages digital technologies to enable unprecedented sharing and analysis of research data, making global collaboration the new norm. By adopting open methods, the scientific community aims to achieve more robust results, greater efficiency, and faster dissemination of knowledge for the benefit of society.

Looking Ahead

Registration details and the full program for #OSF2025NL will be announced through official festival channels and partner organizations. With its focus on inclusivity, innovation, and community, the 2025 National Open Science Festival in Groningen promises to be a landmark event for anyone passionate about the future of open research.

Key Details at a Glance:

  • Event: 5th National Open Science Festival (#OSF2025NL)
  • Date: 24 October 2025
  • Location: University Medical Center Groningen (UMCG)
  • Audience: Researchers, research support staff, open science advocates, and the broader community
  • Highlights: Interactive sessions, information market, networking, OSC-NL Barcamp

Final Conference to Shape Future Policy Frameworks

On May 13–14, 2025, Brussels will host the final joint conference of three groundbreaking EU-funded projects — IANUS, POIESIS, and VERITY — all united under the theme #TogetherforSocietalTrustinScience, Research and Innovation. Taking place at Talk C.E.C. in Ixelles, Belgium, this high-level workshop and discussion event aims to advance policy frameworks that strengthen public trust in science through transparency, inclusivity, and shared responsibility.

Understanding and Strengthening Trust in Science

Public trust in science is under pressure from misinformation, polarisation, and rapid technological change. The IANUS, POIESIS, and VERITY projects have each tackled these challenges from complementary perspectives:

  • IANUS examined how misinformation and emerging technologies affect scientific credibility.
  • POIESIS explored the interplay between research integrity, public engagement, and institutional trust.
  • VERITY analysed the trust ecosystem, identifying key societal actors as “Stewards of Trust.”

Together, these initiatives have developed actionable insights and policy recommendations to bridge the gap between scientific institutions and society.

A Co-Creative Policy Dialogue

The two-day event invites policymakers, researchers, institutional leaders, and artists into a Co-Creative World Café setting. Through interactive sessions and open dialogue, attendees will:

  • Explore policy strategies that enhance trust in areas like climate change, health, and emerging technologies.
  • Discuss mechanisms for ensuring public confidence in science-based innovation.
  • Exchange best practices for ethical, transparent, and inclusive research communication.

Highlights from the Programme

  • Opening Remarks and Keynotes by project leads and EU representatives, including Michael Arentoft (DG RTD) and Georgios Papanagnou (EC).
  • Panel on Policy Implementation featuring UNESCO representatives and European Commission officials.
  • World Café Discussions to co-develop trust-enhancing policies in real-time.
  • Rapporteur Feedback Session to transform table discussions into actionable takeaways.

The second day will focus on policy synthesis and sustainability, followed by internal sessions among the project partners.

Event Details

  • Dates: May 13 (10:00–18:00 CEST) & May 14 (09:00–11:00 CEST)
  • Location: Talk C.E.C., Chaussee de Wavre 143, Ixelles, Brussels
  • Registration Deadline: April 29, 2025
    Register here

This final event represents not just the conclusion of three influential projects, but a pivotal opportunity to build lasting frameworks for societal trust in science. For more information, contact: Steffi@AcumenIST.com.

Webinar Explores Slow Science as a Path to Better Research Careers and Policies

On May 9, 2025, researchers and policymakers will come together for a thought-provoking webinar titled Slow Science to Move Forward: Rethinking Science Policy & Research Careers. The event will begin at 4:00 PM and aims to challenge the current culture of accelerated scientific output by spotlighting the principles of Slow Science – an approach that prioritizes quality over quantity, ethical reflection, and long-term societal impact.

In a research environment increasingly driven by speed and short-term results, Slow Science offers a compelling vision for a more sustainable and thoughtful academic culture. The webinar will explore how slower, more deliberate research methods can reshape science policy, improve research assessment practices, and support more meaningful academic careers, particularly for early- and mid-career researchers.

The session features three expert speakers:

  • Sergio Crespo-García, Assistant Professor at Université de Montréal, a biomedical researcher focused on retinal diseases who advocates for quality-driven research in the life sciences.
  • Sandra Montón Subías, ICREA Research Professor at Universitat Pompeu Fabra (UPF), a prominent archaeological theorist whose work addresses colonialism, gender, and reflective academic practices.
  • Annelies Van de Ven, Policy Officer at the European Commission, who brings a unique perspective from the intersection of academic research and EU science policy, with a focus on research evaluation and sustainability.

Together, the speakers will discuss how Slow Science can inspire structural changes in research funding, evaluation, and institutional norms. Participants can expect to gain practical tools for fostering resilient careers and contributing to a more inclusive, ethical, and impactful research ecosystem.

The webinar is free to attend and especially relevant for researchers seeking to balance professional success with integrity and meaningful scientific contribution.

OSTrails Launches Mentorship Programme to Boost Skills in Data Management, Knowledge Graphs, and FAIR Assessment

The OSTrails project (https://ostrails.eu/), committed to improving FAIRness, interconnectivity, and machine actionability across the Planning, Tracking, and Assessing phases of research, has announced the launch of its new Mentorship Programme. Bringing together senior experts in Data Management Plans (DMPs), Scientific Knowledge Graphs (SKGs), and FAIR Assessment, the programme offers a unique opportunity for participants to gain personalised guidance, enhance technical and soft skills, and grow their professional networks.

Why Join?

Participants will benefit from:

  • Expert Guidance: Direct learning from leading experts in the field;
  • Skill Development: Strengthening competencies in DMPs, SKGs, and FAIR Assessment;
  • Networking: Building connections with peers and professionals;
  • Personalised Support: Tailored mentorship to meet individual or group needs.

The programme welcomes both individuals and teams from the same institution, making it flexible for a variety of participants.

Learning Objectives

Participants will:

  • Improve their technical and soft skills in targeted topics;
  • Learn best practices and insights from experienced mentors;
  • Develop the ability to implement OSTrails services and specifications;
  • Expand their professional and research networks.

Programme Details

  • Duration: June 2025 – May 2026
  • Format: One-to-one and group mentoring
  • Activities:
    • Kick-off meeting (1 hour)
    • Monthly mentor sessions (1–1.5 hours)

Key Dates

  • Applications: March 31st – April 25th (extended to May 9th)
  • Selection and Notification: May 5th
  • Kick-off Meeting: June 2025
  • Programme Duration: June 2025 – May 2026

Who Can Apply

The programme is open to those eager to deepen their knowledge in Data Management, Scientific Knowledge Graphs, and FAIR Assessment, particularly:

  • IT infrastructure managers
  • Research support staff
  • Funders’ support officers

Priority will be given to participants involved in OSTrails pilot projects and those from European Research Area competence centres and ESFRI Research Infrastructures.

Thinking With Machines: How Academics Can Use Generative AI Thoughtfully and Ethically

As generative AI tools like ChatGPT revolutionize the landscape of academic research and scholarship, the need to approach these technologies with thoughtfulness and ethics has become increasingly crucial. On May 2, 2025, DARIAH will host a webinar titled “Thinking With Machines: How Academics Can Use Generative AI Thoughtfully and Ethically”, where Dr. Mark Carrigan, Senior Lecturer in Education at the University of Manchester, will guide attendees on integrating generative AI into academic practice in ways that enhance, rather than replace, human intellectual engagement.

Event Details

  • Date: Friday, May 2, 2025
  • Time: 4:00 PM IST / 5:00 PM CEST / 6:00 PM EEST
  • Speaker: Dr. Mark Carrigan, University of Manchester
  • Registration: Free but required

This webinar is part of the DARIAH Spring/Summer 2025 series, which aims to foster learning, share best practices, and explore social impacts through the lens of digital humanities. The Friday Frontiers webinars provide a platform for researchers, practitioners, and stakeholders to engage in current academic discussions, exchange knowledge, and learn about innovative tools and methods in digital humanities scholarly practices.

The sessions are recorded and available later on DARIAH-Campus for those unable to attend live.

Webinar Abstract

Generative AI tools, like ChatGPT, offer immense potential to support academic scholarship, but their indiscriminate use may pose risks to academic integrity. In this talk, Dr. Carrigan will discuss a framework that encourages the thoughtful adoption of generative AI in academic settings. Rather than treating these tools as mere productivity boosters, the presentation will explore how AI-driven conversational agents can serve as intellectual partners that complement human thought. Dr. Carrigan will also address the broader implications these advancements may have for universities and the future of academia.

About the Speaker

Dr. Mark Carrigan is a Senior Lecturer in Education at the University of Manchester, where he leads the MA Digital Technologies, Communication and Education (DTCE) program. His expertise spans philosophy, sociology, and education, with a focus on integrating emerging technologies in academic practice. Dr. Carrigan is the author of several works, including Social Media for Academics (2nd edition, Sage) and Generative AI for Academics (Sage, December 2024). He is an active contributor to academic communities, including the Critical Realism Network and the Centre for Social Ontology.

This session is especially timely, as generative AI tools continue to evolve, raising questions about their impact on the traditional norms of academia. Dr. Carrigan’s vast experience will provide attendees with essential insights into how these technologies can be harnessed ethically and thoughtfully to enhance academic research and practice.

Registration Information

The webinar is free to attend, but prior registration is required. Visit DARIAH’s registration page to secure your spot. Don’t miss out on this opportunity to engage with Dr. Carrigan and other scholars in the field of digital humanities.

Join us for this insightful session on May 2 and explore how academics can harness the power of generative AI while maintaining ethical integrity in scholarly work.
