OPUS Consultation on Researcher Assessment

On 10 June 2025, the OPUS project brought together researchers, research managers, funders, and policy makers to collaboratively shape the revision of the OPUS Researcher Assessment Framework (RAF). The workshop aimed to discuss and collect feedback from participants on the first version of the RAF. It was hosted by the Marie Curie Alumni Association and led by Gareth O’Neill from Technopolis Group. The workshop resulted in a lively and engaging exchange in which participants acknowledged the value of the RAF and offered critical suggestions for its revision.

OPUS Researcher Assessment Framework (RAF)

The RAF is a comprehensive framework of indicators designed to give research-performing and research-funding organisations options to expand their researcher assessment systems beyond the traditional focus on publications and citations, so as to include the much wider diversity of activities conducted by researchers. The RAF is further divided into a generic set of indicators covering all possible researcher activities and a more specific set of indicators to incentivise and reward Open Science.

The RAF is structured around four key domains of activities relevant for researchers:

  • The research domain covers writing proposals, developing methods, managing data, developing software, writing publications, creating research materials, and peer reviewing research outputs.
  • The education domain includes developing educational courses, creating educational resources, teaching students, supervising students, and developing personal and professional skills.
  • The leadership domain focuses on leading personnel and projects, taking management roles within the organisation, and being recognised as an expert and for significant contributions.
  • The valorisation domain, lastly, includes science communication to peer and general audiences, academic and intersectoral collaboration and engagement, and exploitation and entrepreneurship.

The RAF offers organisations an array of indicators from which to select for their researcher assessment systems. The indicators can be deployed qualitatively or quantitatively, and the OPUS project encourages a combination of qualitative and responsible quantitative approaches to assessment. The RAF divides the indicators into processes, outputs, and outcomes, since at the moment of assessment a researcher may find themselves in the middle of a research activity, have realised an output, or have realised an outcome. The selection, refinement, and prioritisation of indicators is left to the organisations, which will tailor and implement them according to their own strategic interests and needs and their existing assessment processes.

Highlights from the OPUS Consultation Meeting

In the consultation meeting, participants raised a number of important points regarding researcher assessment in general and the RAF in particular:

  • Participants stressed the importance of qualitative assessment, citing the lack of homogeneity across countries and disciplines as well as a failure to consider the global picture of skills development.
  • There was a call for clearer definitions of assessment criteria, especially in academic hiring, where unwritten rules and lack of transparency around the assessment procedure continue to create inequity.
  • Different types of personalities and contributions should be acknowledged, and these reforms should be embedded into structural processes, such as habilitations, thesis evaluations, and postdoc funding decisions.
  • The framework should be applicable to all researcher levels, from early-career researchers (R1-R2) to senior researchers (R3-R4), and be applied flexibly depending on the researcher's career stage and context.
  • The term ‘valorisation’ is not well known to researchers and seems to include a mix of science communication, stakeholder engagement, and societal impact activities, although no better alternatives for the category were proposed.
  • The activities referred to as ‘community service’ (researcher contributions such as peer review and mentorship) are not grouped together but are spread across the four main categories of activities in the framework.
  • Participants noted the distinction between supervising students (such as bachelor, master, and PhD students), which falls under the category of education, and supervising staff (such as PhD and other staff), which falls under the category of leadership in the framework.
  • The term ‘intersectoral engagement’ is not entirely clear and seems to cover different types of engagement activities (such as interactions, mobility, and collaboration), yet it only applies across sectors, whereas intrasectoral engagement within the academic sector is also important.
  • The framework recognises actual effort by researchers and allows for the recognition of contributions by early-career researchers (such as proposal writing, project management, and peer reviews) which are often hidden.
  • The implementation of the framework and researcher assessment procedure should not be a burden on researchers and should be kept simple and automated as much as possible to support researchers in their assessment.
  • The framework is not directly linked to ResearchComp but could, in its implementation, be linked to the skills/competences recognised in ResearchComp, for example through the skills development activities included in the framework.
  • The framework should connect to the values and evolving priorities of the relevant organisations whereby the selection and refinement of indicators from the framework may reflect these values and priorities.
  • Participants noted that the framework should directly link to the values and principles of the Coalition for Advancing Research Assessment (CoARA) as well as the evolving strategic interests and needs of the organisations.
  • The reform of research assessment may be impeded by national regulations which set predefined criteria for assessment and which must be followed and cannot be reformed by research-performing and research-funding organisations.
  • The framework could be supported by a set of guidelines to explain how to implement the framework as well as examples of possible implementation for research-performing and research-funding organisations.
  • The framework recognises a cross-cutting dimension for Open Science and could include a cross-cutting dimension to recognise teamwork and collaboration as well as equality, diversity, and inclusion.
  • The framework is not focused on industry but the inclusion of indicators which are relevant for industry could help researchers in developing themselves professionally and collaborating with or even transitioning to industry.
  • Candidates under assessment should be considered not only for the diversity of their research, education, leadership, and valorisation activities but also for their own specific skills/competences as well as their specific personalities.

Next Steps for OPUS

The first version of the RAF will be revised and updated based on the lessons learned by pilot organisations testing the RAF in the project, as well as the feedback gathered during the consultation, and will be published in the OPUS Project Zenodo Community.
