Researcher Assessment Framework (RAF) presented at the OPUS Final Conference


OPUS Researcher Assessment Framework (RAF)

New Approach to Research Evaluation

The centrepiece of the OPUS Final Conference was the presentation of the OPUS Researcher Assessment Framework (RAF) by Gareth O’Neill from Technopolis Group Belgium (TGB) and Emma Day from Vitae. O’Neill, Principal Consultant on Open Science at Technopolis Group, specialises in the implementation of Open Science across Europe, particularly focusing on sustainable careers for researchers and research assessment reform.

Gareth O’Neill (TGB) on the OPUS Researcher Assessment Framework (RAF)

One of the main focuses was developing a framework of indicators to incentivise and reward open science. However, as we examined these indicators, it became clear that they are only one piece of the puzzle. Open science activities are not separate from other research activities; researchers are never solely engaged in open science, nor are they rewarded only for that. Therefore, we had to take a step back and consider the full range of activities researchers undertake, what they are currently assessed for, and what we would like them to be assessed for in the future—whether in project applications, job applications, or during ongoing employment. The open science component should play a major role, but it is not the only aspect.

We have not worked in isolation; there have been many steps to reach this point, and there are ongoing efforts. Comments on assessment and open science were present in the original Charter and Code for Researchers, but the revised version is more explicit. We have considered the DORA principles, the Leiden Manifesto for research metrics, and the work of the Open Science Policy Platform. Many colleagues, including Rinska and Rene, contributed to these efforts, particularly on indicators for research assessment.

There are numerous sets of principles, such as the Hong Kong Principles and the CoARA principles, so the question is which ones to use. The Coalition for Advancing Research Assessment (CoARA) is a major initiative aiming to reform research assessment and align organisations in a common direction. With over 750 members, it is challenging to reach agreement, but shared goals exist. We have monitored these principles and the many available indicators as we developed our framework.

We established our own set of principles for the framework. We did not delve deeply into organisational values and principles, as these are usually already defined. Our focus was to provide indicators to support assessment and open science, essentially creating a toolbox. Our aim was a comprehensive yet manageable set of indicators: enough to capture all researcher activities without becoming an overwhelming list.

The framework is designed to be applicable across countries, disciplines, and types of organisations, whether research-performing or research-funding. It is agnostic to context and can be applied at various levels. Importantly, we are not dealing with metrics per se, but with indicators that can be used qualitatively or quantitatively. Organisations are free to determine how to deploy them; we believe assessment should combine responsible quantitative metrics with clear qualitative guidance.

Our focus is predominantly on the assessment of individual researchers, not research quality or peer review. However, as research funding organisations often work on a project or principal investigator basis, the framework is also scalable to projects. It is not intended for monitoring large groups, faculties, or entire universities, though it could be adapted for such purposes if desired.

We sought to capture all main activities undertaken by researchers, not just publications or publication metrics such as journal impact factors, citations, or the h-index. While these remain important, we want to provide organisations with more options.

The research assessment framework considers all researcher activities and applies a specific lens for open science: two sides of the same coin. Indicators are separated into processes, outputs, and outcomes. A researcher may be at different stages: undertaking an activity (process), producing a tangible result (output), or achieving short- to medium-term outcomes. We do not focus on long-term impact.

Indicators are phrased to be clear yet flexible, allowing organisations to interpret and adapt them. Organisations can select, refine, merge, or subdivide indicators as needed. We do not prioritise indicators; any ranking or weighting is left to the organisation. The result is a flexible framework.

Our final version identifies four key domains of researcher activities, tested with researchers:

  1. Research: Developing proposals, methods, methodologies, protocols, data (planning, collection, management, curation, archiving), software, publications, materials (laboratory specimens, artistic objects), and reviews (publications, data, proposals).
  2. Education: Developing course programmes, resources, teaching, supervision (bachelor’s, master’s, PhD students), and skills development.
  3. Leadership: Demonstrating leadership or management (staff, projects), holding organisational roles (team leader, institute director), participating in committees, advisory boards, panels, and receiving awards.
  4. Valorisation: Science communication and popularisation (written, physical, online, recorded), engagement (industry, public sector), exploitation of research results (intellectual property), and entrepreneurship.

The framework includes two cross-cutting dimensions:

  • Openness: The Open Science Career Assessment Matrix (OSCAM 2) provides an open science lens on all activities.
  • Teamwork/Collaboration: Recognising that most research is collaborative, though we have not created a separate framework for teamwork.

Equality, diversity, and inclusion are overarching considerations, though not directly embedded in the indicators.

For example, in the domain of data, the process may involve developing a data management plan. Outputs are the completed plan or dataset. Outcomes include implementation, publication in repositories or journals, and indicators such as access, citation, or reuse. These can be measured quantitatively or qualitatively. Whether these steps are open depends on institutional policy.
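
As a purely illustrative sketch (not part of the OPUS deliverables), an organisation piloting the framework could encode such indicators as structured data for its own assessment tooling. The Python class and field names below are assumptions chosen for illustration, mirroring the data-domain example above.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    domain: str        # e.g. "Research", "Education", "Leadership", "Valorisation"
    activity: str      # e.g. "Data"
    stage: str         # "process", "output", or "outcome"
    description: str
    open_science: bool = False  # whether the openness lens applies
    quantitative: bool = False  # may also be assessed qualitatively

# Entries mirroring the data-domain example above.
data_indicators = [
    Indicator("Research", "Data", "process",
              "Develops a data management plan", open_science=True),
    Indicator("Research", "Data", "output",
              "Has produced a data management plan and/or a dataset",
              open_science=True),
    Indicator("Research", "Data", "outcome",
              "Dataset published in a repository and accessed, cited, or reused",
              open_science=True, quantitative=True),
]

# Organisations are free to select, merge, or refine indicators,
# e.g. keeping only those with the openness lens:
open_subset = [i for i in data_indicators if i.open_science]
```

Such a representation is only one possible way of deploying the indicators; the framework itself leaves selection, weighting, and tooling entirely to the organisation.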

These indicators are intended as building blocks; organisations already have frameworks and systems in place. Our indicators and ideas can help them revise and reform their systems to be more comprehensive and inclusive of open science. Deployment is left to each organisation.

Emma Day (Vitae) on the interventions

Our task was to consider interventions for implementing the framework. There are many principles and frameworks in this area, but they mean little without action.

We debated what interventions were needed and, in the end, developed ten general interventions for any organisation looking to implement the frameworks. These are starting points to consider before implementing either OSCAM 2 or the RAF:

  1. Collect existing best practices in researcher assessment reform, highlighting existing initiatives and commitments.
  2. Ensure researchers understand the reasons for reforming researcher assessment.
  3. Ensure researchers, support staff, and policymakers are aware of the RAF and its implications, without overburdening them.
  4. Approve piloting or adoption of the RAF at the highest organisational level.
  5. Develop policies and procedures to implement, monitor, and evaluate the RAF, recognising that initial indicators may need adjustment.
  6. Develop or utilise tools and platforms to support RAF implementation, assessing whether existing infrastructure suffices.
  7. Ensure the RAF does not overburden researchers with additional tasks or reporting requirements.
  8. Revise organisational policies and procedures based on lessons learned, embedding the framework into broader policy.
  9. Engage policymakers to define and revise national policies on research assessment.

We identified four categories of specific interventions: policy, awareness, training, and resources.

Initially, we created a lengthy document linking interventions to every indicator, but pilot feedback indicated this was too burdensome. Instead, we now recommend policies and training that encompass all indicators at a higher level.

At the policy level, the process is to approve piloting or adoption of indicators and interventions at the highest level, define what will be piloted or adopted (as definitions vary between institutions), develop policies and procedures, and then monitor and evaluate.

Resource interventions concern the people and infrastructure required, financial input, platforms, repositories, and staff. Without adequate resources, implementation will not succeed.

Awareness interventions address whether people understand why these changes are happening, how to make them happen, and whom to approach for assistance. It is important that all relevant staff, not just researchers, are aware.

Training interventions ensure people know what to do, how to do it, and why. Training may be linked to frameworks such as ResearchComp or the Researcher Development Framework, and should focus on developing the necessary skills and competencies.

Gareth O’Neill (TGB) on work with pilot organisations

Our framework was tested by pilot organisations, which did significant work. We had three research-performing organisations (Nova University of Lisbon, University of Cyprus, and University of Rijeka) and two national funding agencies: UEFISCDI in Romania and the Research Council of Lithuania. Each created cohorts of researchers, selected indicators from the framework, and deployed them with their own interventions.

Within universities, implementation varied by institute. For example, at Nova, the framework was used in the Global Health and Tropical Medicine Research Centre and the Information Management Research Centre, which have different evaluation criteria and focuses. This demonstrates the need for flexibility. In Cyprus, it was piloted at the KIOS Research and Innovation Centre of Excellence; at Rijeka, at the Faculty of Law. The funding agencies applied it to specific programmes, which are complex and evaluated over many years. For example, one agency mandated data management plans and assessed how to implement these fairly and openly.
