Guidance on the Responsible Use of Quantitative Indicators in Research Assessment

Produced by the DORA Research Assessment Metrics Task Force:

  • Ginny Barbour
  • Rachel Bruce
  • Stephen Curry
  • Bodo Stern
  • Stuart King
  • Rebecca Lawrence

This content is available under a Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0). Please cite this document as: DORA. 2024. Guidance on the responsible use of quantitative indicators in research assessment. http://doi.org/10.5281/zenodo.10979644. For more information, contact Zen Faulkes, DORA Program Director.

DORA Metrics Guidance 3

The term “metric” often implies a direct measurement, whereas “indicator” better reflects the indirect nature of quantities used in research assessment. Throughout this document, we use “indicator” to emphasize this distinction.

Research assessment is vital, but reliance on quantitative indicators, often assumed to offer objectivity, remains common. While such indicators are valuable in bibliometrics and scientometrics, their reductive nature necessitates careful contextualization when assessing individual researchers or projects.

DORA, known for its critique of the Journal Impact Factor (JIF), addresses other indicators here. This briefing aims to extend DORA’s principles to various quantitative indicators used in research evaluation.

When selecting quantitative indicators for research or researcher assessments, consider:

  • Grounding in evidence.
  • Relevance to the qualities being assessed, avoiding reliance on aggregate or composite indicators.
  • Acknowledgment of an indicator’s proxy and reductive nature.
  • Mitigation of biases inherent in quantitative indicators, ensuring transparency in assessment processes.

The Declaration on Research Assessment advocates five principles for the use of quantitative information in research assessment, including:

  • Engage research communities in rule development.
  • Publish assessment criteria for transparency.
  • Ensure reviewers understand quantitative information usage.

Additionally, indicators should ideally rely on open data and algorithms for transparency, though many common indicators still use closed data.

Applying these principles to indicators requires co-creation of assessment processes with the research community, setting benchmarks based on agreed values, outcomes, and behaviors. Tools like the INORMS SCOPE framework or DORA’s SPACE rubric aid this process.

The SPACE rubric is available in the DORA Resource Library; the SCOPE framework was developed by INORMS.

The Journal Impact Factor (JIF), often treated as a signifier of quality, lacks evidence supporting its efficacy for evaluating individual papers. Other journal-based indicators, including CiteScore, the Eigenfactor Score, and SNIP, share this limitation.
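To see why a journal-level average says little about any individual paper, the standard two-year JIF calculation can be sketched in a few lines. The function name and all citation counts below are invented for illustration:

```python
# Illustrative sketch of the standard two-year Journal Impact Factor:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.

def journal_impact_factor(citations_to_year, citable_items, year):
    """Two-year JIF for `year`.

    citations_to_year: citations received in `year`, keyed by the
        publication year of the cited items.
    citable_items: number of citable items, keyed by publication year.
    """
    cites = citations_to_year[year - 1] + citations_to_year[year - 2]
    items = citable_items[year - 1] + citable_items[year - 2]
    return cites / items

# Invented numbers: a journal whose 2022-2023 output drew 750 citations
# in 2024 across 250 citable items has a 2024 JIF of 3.0, regardless of
# whether those citations cluster on a handful of papers.
print(journal_impact_factor({2022: 300, 2023: 450},
                            {2022: 120, 2023: 130}, 2024))  # → 3.0
```

Because the JIF is an average over a highly skewed citation distribution, it cannot be read as the expected citation count, still less the quality, of any single article in the journal.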

While citations offer granular insight into individual research articles, they are of limited use for assessing recent scholarship, or for comparing researchers at different career stages or in different disciplines. Citation patterns can be skewed by author and journal reputations, leading to bias.

Citation data cannot alone determine research quality; additional context is necessary.

The h-index, commonly used for comparing researchers, is hard to interpret and varies with the citation database used to calculate it. It also lacks contextual information, such as career stage or contribution type.

Organizations using the h-index should justify its relevance and consider individual circumstances.
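The h-index, by definition the largest h such that a researcher has h papers each cited at least h times, can be sketched in a few lines. The publication records below are invented to show how very different citation profiles collapse to the same value:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented records: two very different citation profiles, same h-index.
print(h_index([10, 8, 5, 4, 3]))     # → 4
print(h_index([100, 90, 80, 4, 4]))  # → 4
```

The second researcher's three highly cited papers are invisible in the single number, which is one concrete way the h-index discards the context the text above calls for.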

Field-normalized citation indicators, such as the Field-Weighted Citation Impact (FWCI) or the Relative Citation Ratio (RCR), attempt to correct for citation variability between fields. Caution is necessary because fields are difficult to define and small sample sizes can distort results.

These indicators are unreliable for evaluating individual researchers.
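A minimal sketch of the field-normalization idea, assuming the simple ratio form (actual citations divided by the expected average for comparable papers); the baselines below are invented to show how sensitive the score is to how the field is delimited:

```python
def field_normalized_score(actual_citations, expected_citations):
    """Ratio of a paper's citations to the average for papers of the
    same field, document type, and publication year (FWCI-style)."""
    return actual_citations / expected_citations

# The same invented paper (12 citations), scored against two plausible
# field baselines, gets very different normalized values:
print(field_normalized_score(12, 4.0))   # narrow field baseline → 3.0
print(field_normalized_score(12, 10.0))  # broad field baseline  → 1.2
```

The entire calculation hinges on the denominator, so any ambiguity in the field definition propagates directly into the score.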

Altmetrics attempt to measure non-academic attention to research outputs but lack context and transparency in calculation. They do not indicate research quality but can provide insights into specific engagements with research outputs when detailed information is available.

This briefing note illustrates how to apply DORA’s principles to various indicators used in research assessment, emphasizing contextualization and transparency. Other indicators, such as grant funding income, should also be contextualized because of their inherent biases and uncertainties.

For more information, contact DORA at 6120 Executive Blvd., Rockville, MD, USA or visit sfdora.org.
