Research Evaluation Should Be Pragmatic, Not a Choice Between Peer Review and Metrics

Open and Universal Science (OPUS) Project

Giovanni Abramo argues for a nuanced approach to research assessment amid rising movements such as DORA and CoARA, which advocate qualitative judgment over quantitative metrics. CoARA, supported by the European Research Area (ERA), holds that research evaluation should rely primarily on qualitative peer assessment, with quantitative indicators serving only as supporting tools.

However, Abramo questions whether this shift is wise, suggesting that discarding metrics could be detrimental. Metrics, when applied responsibly, provide objectivity, consistency, and scalability—qualities crucial for large-scale assessments. Abramo argues that the real issue is not with metrics themselves but with their misuse. The key is to use them wisely rather than eliminate them entirely.

CoARA’s emphasis on qualitative evaluation overlooks the expertise of scientometricians, who specialize in applying metrics effectively. Just as surgeons know when to rely on robotic assistance and when to operate by hand, scientometricians know when to employ quantitative measures and when to defer to peer review.

The choice between peer review and scientometric methods should be based on specific goals and contexts. For instance, boosting interdisciplinary research or international collaboration may benefit more from scientometric analysis, while peer review remains vital in fields with limited bibliographic coverage.

Peer review, while central to CoARA’s vision, presents challenges of its own, notably high costs and labor intensity. The U.K.’s Research Excellence Framework (REF) 2021, for instance, involved substantial expenses and heavy reviewer workloads. National peer-review exercises also cap the number of outputs each institution may submit, which drives up costs and can distort results.

For all the criticism directed at metrics, peer review thus suffers from selective coverage and high costs. Scientometric assessments, conducted by small expert teams, offer a more cost-effective alternative for large-scale evaluations, providing the continuous, fine-grained data that peer-review exercises often lack.

Abramo calls for a balanced approach, combining the strengths of both metrics and peer review. In fields with extensive bibliographic coverage, scientometrics can complement peer review, while in others, peer review remains crucial. The goal should be to blend these methods to create a comprehensive and nuanced assessment of research impact.

The debate over research assessment is not about choosing sides but finding harmony. Combining the rigor of scientometrics with the depth of peer review will better capture and understand the impact of scholarly work. As the field evolves, maintaining an open mind and diverse methods will ensure a more accurate and fair assessment of research.

This post reflects ideas from Giovanni Abramo’s article, “The forced battle between peer-review and scientometric research assessment: Why the CoARA initiative is unsound,” published in Research Evaluation. The views expressed are those of the author.

Original article at LSE Blog
