Documentation

Canonical architecture, evaluation logic, and method reference

Executive Summary (AI-ready)

ONE Research Community is a modular research governance infrastructure designed to operationalize contextual, multidimensional research evaluation.

  • It evaluates researchers within field- and career-stage cohorts.
  • It aggregates indicators into transparent dimensions and a composite ONE Index (0–1000).
  • It integrates bibliometric signals with governance, competence, and engagement dimensions.
  • It supports configurable weights for institutional or funder-specific evaluation contexts.
  • It aligns with principles of responsible research assessment and open science.

ONE is not a ranking system and not a CRIS. It is an integrated infrastructure layer enabling contextual assessment, governance transparency, and structured evidence reuse.

Context & Mission

Context

The use of bibliometric indicators in research evaluation has long been constrained by limited data access for researchers and funding bodies. This limitation has prompted a search for alternative metrics, often leading to the exclusion of key indicators or the substitution of more accessible but less accurate ones. The result has sometimes been outright misuse of metrics, as in the widespread but widely criticized reliance on the Journal Impact Factor (JIF).

Mission

The ONE Research Community aims to revolutionize research assessment by providing researchers with access to a broad spectrum of indicators, together with collaboration and benchmarking tools. These tools are designed to empower researchers to make informed career decisions and to foster a deeper understanding of the available metrics.

The ONE Index™ is a registered trademark in the European Union.

What ONE is (and is not)

ONE Research Community is a modular research governance infrastructure that enables contextual, multidimensional assessment of researchers and organisations, aligned with responsible research assessment and open science principles.

ONE is

  • Contextual evaluation infrastructure (cohort-based benchmarking by field and career stage).
  • Multidimensional assessment across performance, community interaction, and societal engagement.
  • Explainable, configurable scoring that can be adapted to institutions, funders, and programmes.
  • Operational layer connecting evaluation, governance, competencies, and activation.

ONE is not

  • Not a ranking (no single “league table” intent; contextualized interpretation is primary).
  • Not a CRIS (not an administrative research information system).
  • Not a bibliometrics-only tool (bibliometrics are one signal among many).
  • Not a dashboard bundle (the value is the integrated architecture, not isolated charts).

Why this positioning matters

Institutions and funders increasingly need tools that operationalise responsible assessment at scale: contextual benchmarking, diverse contributions, transparency, and governance signals. ONE is designed as the infrastructure layer that makes those principles actionable.

Architecture

ONE is built as an integrated, modular stack. Each layer produces structured evidence and signals that can be interpreted independently, but the value emerges from how the layers connect.

  1. Data layer — publications, affiliations, projects, patents, networks, trajectories.
  2. Contextual evaluation — cohort benchmarking by field and years of activity; indicator ranks; dimension scoring.
  3. Competence & human capital — structured competence mapping and aggregation at institutional level.
  4. Governance & trust — roles, commitments, signals, integrity and transparency affordances.
  5. Activation — workflows to enrich and standardise information that is not reliably available as open data.
  6. Policy & scalability — monitoring, APIs, reviewer search, and system-wide deployment.

Design principles

  • Context first (field + career stage).
  • Multidimensional by default.
  • Explainable & configurable.
  • Governance-aware signals.
  • Activation to improve evidence quality.

Core evaluation logic

The ONE evaluation logic is designed for contextual fairness. A raw value is not interpretable unless compared to an appropriate reference group.

How scores are produced

  • Cohorts: Researchers are grouped by field and years of activity (proxy for career stage).
  • Rank per indicator: Indicator values are ordered within the cohort; the rank position becomes the indicator score.
  • Dimension score: Indicator ranks are aggregated into dimension scores (average / weighted average).
  • ONE Index: Dimension scores are aggregated into a final composite score (0–1000).
  • Configurable weights: Institutions or funders may apply weights at indicator/dimension level to match mission and programme goals.
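The pipeline above can be sketched in a few lines of Python. This is a minimal illustration, not the production implementation: the function names, the tie handling, and the equal-weight defaults are all assumptions.

```python
def rank_scores(cohort_values):
    """Map each researcher's raw indicator value to a rank score in (0, 1]
    within the cohort (higher value -> higher score). Tie handling in the
    real system is unknown; this sketch breaks ties arbitrarily."""
    n = len(cohort_values)
    ordered = sorted(cohort_values.items(), key=lambda kv: kv[1])
    return {rid: (i + 1) / n for i, (rid, _) in enumerate(ordered)}

def dimension_score(scores, weights=None):
    """Aggregate indicator rank scores into one dimension score
    (plain or weighted average)."""
    if weights is None:
        return sum(scores.values()) / len(scores)
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())

def one_index(dimension_scores, weights=None):
    """Aggregate dimension scores into a composite index on the 0-1000 scale."""
    return round(dimension_score(dimension_scores, weights) * 1000)

# Toy cohort: one indicator's raw values for four comparable researchers.
cohort = {"A": 12, "B": 40, "C": 7, "D": 25}
ranks = rank_scores(cohort)                      # B -> 1.0, C -> 0.25
dims = {"performance": ranks["B"], "engagement": 0.5}
print(one_index(dims))                           # 750
```

Configurable weights plug in at either aggregation step, e.g. `one_index(dims, {"performance": 2, "engagement": 1})` (→ 833).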

Interpretation rule

ONE is built for interpretation through context: “relative position among comparable peers”, rather than absolute counts. This reduces structural bias by discipline, publication age, and career stage.

Policy alignment

ONE is designed to operationalise emerging norms in research assessment reform and open science. It supports moving beyond journal-based proxies and enables recognition of diverse contributions.

Responsible research assessment

  • Contextualised comparisons (field + career stage).
  • Transparent indicators and explainable aggregation.
  • Recognition of diverse outputs and roles.
  • Configurable evaluation for programme fit.

Open science & reform frameworks

  • Alignment with COARA & DORA principles (de-emphasise journal prestige; broaden contributions).
  • Support for open science signals (open access, data, software, engagement where available).
  • Focus on transparency, interpretability, and fairness.

Invitation for community engagement

ONE evolves through community input. We invite researchers, institutions, funders, and policy stakeholders to contribute feedback on indicator design, weighting, and interpretation to strengthen fairness and relevance.

Comparisons

See how ONE differs from other research analytics and information systems.

Canonical comparison pages

Comparison pages are maintained in a dedicated canonical area to avoid duplication and keep category positioning consistent.

Open comparison pages

FAQ

These frequently asked questions are answered in canonical, unambiguous form for humans and automated assistants. If you are looking for the full indicator catalogue, visit ONE Framework.

Is ONE a ranking?

No. ONE is a contextual evaluation infrastructure. It focuses on interpreting signals within peer cohorts (field + career stage), rather than producing a universal league table. The ONE Index is a composite summary for navigation, not a ranking ideology.

What problem does ONE solve?

ONE reduces fragmentation in research evaluation by providing an integrated architecture that connects: contextual assessment, diverse contributions, governance signals, competence mapping, and evidence reuse for CVs and reporting.

How are cohorts defined?

Cohorts are defined using disciplinary context (research field classification) and a career-stage proxy (years from the first publication). Cohort design exists to reduce structural bias and make comparisons meaningful.

How is the ONE Index calculated?

In short: cohorts → indicator ranks → dimension scores → ONE Index (0–1000).
Indicator values are ranked within the cohort, then aggregated into dimensions, and then into the final composite index. The system is designed to remain explainable, with dimension-level interpretation available.

Can institutions or funders customize the evaluation?

Yes. ONE supports controlled customization through configurable weights at dimension level. The methodological core (contextual cohorts + multidimensionality + transparency) is preserved.

Does ONE rely on the Journal Impact Factor?

ONE is designed in line with responsible research assessment principles and therefore does not rely on journal-level proxies such as the Journal Impact Factor (JIF). When bibliometric signals are used, they are interpreted contextually (field + time + cohort), not as prestige shortcuts.

Which outputs and contributions does ONE recognise?

The ONE Framework captures research activity through indicators and proxies organised along three complementary axes: Performance, Community Interaction, and Societal Engagement. Within these axes, the framework can recognise a wide range of outputs and contributions (depending on data availability), including: publications and citation signals, collaboration and leadership roles, projects and funding, mentoring and peer-review activities, open science practices, and societal engagement or knowledge-transfer activities. For the full catalogue of indicators and dimensions, see ONE Framework. The objective is to make diverse contributions visible and interpretable within a coherent evaluation structure.

How does ONE account for career interruptions?

ONE is designed for contextual interpretation and supports signals that allow researchers to add relevant context (for example, declared interruptions). Cohort-based comparison and multidimensional evidence reduce reliance on time-accumulating metrics.

Where does ONE's data come from?

ONE combines open scholarly metadata sources with researcher-provided structured information. Data quality improves through profile claiming, validation flows, and activation workflows that allow researchers and institutions to correct and enrich records.

Is ONE a CRIS?

No. A CRIS is primarily an administrative system of record. ONE is an evaluation and governance infrastructure designed to make contextual assessment and multidimensional evidence operational. ONE can complement CRIS environments through integration.

Can ONE generate CVs?

ONE supports structured evidence capture and reuse, and can generate CV outputs adapted to different contexts, including narrative formats. The goal is to reduce repeated manual reporting and keep evidence consistent across applications.

How should an institution get started?

Start with the neutral baseline configuration and an activation plan: profile claiming, data validation, and structured enrichment of key fields (roles, projects, competencies). Then iteratively adjust weights and reporting outputs based on local evaluation goals and stakeholder feedback.

Method reference (bibliometrics & cohort logic)

Item & group oriented indicators

Our bibliometric indicators are categorized into two types: item-oriented and group-oriented. Item-oriented indicators are calculated for individual publications, such as citation counts or the nature of the publication (international, lead-authored, etc.). These reflect direct attributes of a publication.

In contrast, group-oriented indicators require aggregating data across multiple publications by an author, such as total publication counts. These indicators provide a broader view of a researcher’s output and influence, necessitating grouping publications according to specific criteria to ensure accurate assessment.

Item-oriented indicators

Among the item-oriented indicators, citation counts are particularly complex due to the varying citation practices across disciplines, document types, and publication ages.

Number of citations

Citations are a fundamental measure of a publication’s influence. The number of citations a publication receives can vary significantly based on:

  • Research field: citation norms differ substantially across disciplines.
  • Type of document: reviews tend to accumulate more citations than articles.
  • Publication age: older publications have had more time to accumulate citations.

To account for these variables, we rank publications by their citation count within their respective categories (type, year, discipline) and calculate percentiles to assess their relative impact.

Ranking publications by citations

We create rankings by grouping publications by discipline, document type, and publication year. Each publication is placed in a percentile within its cohort, allowing us to identify:

  • Highly Cited Papers (HCP): publications ranking between the 90th and 99th percentiles.
  • Outstanding Papers (OP): publications at or above the 99th percentile.
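This grouping-and-percentile step can be sketched as follows. It is a minimal illustration assuming a simple "share of papers cited less" percentile convention; the production definition may treat ties and thresholds differently.

```python
from collections import defaultdict

def citation_percentiles(pubs):
    """Assign each publication a percentile of its citation count within its
    (discipline, doc_type, year) cohort. Illustrative sketch: the percentile
    is the share of cohort papers with strictly fewer citations."""
    groups = defaultdict(list)
    for p in pubs:
        groups[(p["discipline"], p["doc_type"], p["year"])].append(p)
    for group in groups.values():
        n = len(group)
        for p in group:
            below = sum(1 for q in group if q["citations"] < p["citations"])
            p["percentile"] = 100.0 * below / n
    return pubs

def label(p):
    """Classify a publication with the thresholds described above."""
    if p["percentile"] >= 99:
        return "OP"       # Outstanding Paper: at or above the 99th percentile
    if p["percentile"] >= 90:
        return "HCP"      # Highly Cited Paper: 90th-99th percentile
    return "-"

# Toy cohort: 100 articles in one (discipline, type, year) cell.
pubs = [{"discipline": "physics", "doc_type": "article", "year": 2020,
         "citations": c} for c in range(100)]
citation_percentiles(pubs)
print(label(pubs[99]), label(pubs[95]), label(pubs[10]))  # OP HCP -
```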

Expected values: reference values

In bibliometrics, reference values help determine whether an indicator for a publication or author stands above or below average. These values are crucial due to the asymmetric distribution typical of bibliometric data.

  • Mean: often used, but affected by extreme values common in citation counts.
  • 90th percentile: used to define Highly Cited Papers.
  • 99th percentile: used to define Outstanding Papers.
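A tiny invented example shows why the mean is a fragile reference value for skewed citation data, while percentile thresholds stay representative. All numbers below are made up for illustration.

```python
import math

# Invented citation counts for a small field: most papers are cited a few
# times, while one is cited very heavily -- a typical right-skewed distribution.
citations = sorted([0, 1, 1, 2, 2, 3, 4, 5, 8, 500])

def percentile(sorted_vals, p):
    """Nearest-rank percentile; a simple convention chosen for illustration."""
    k = math.ceil(p / 100 * len(sorted_vals)) - 1
    return sorted_vals[max(k, 0)]

mean = sum(citations) / len(citations)
print(mean)                        # 52.6 -- higher than 9 of the 10 papers
print(percentile(citations, 90))   # 8 -- a far more representative threshold
print(percentile(citations, 99))   # 500 -- the outstanding outlier
```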

Authors’ indicators

Group-oriented indicators aggregate an author's bibliometric data to provide insights into their overall research output and influence. These indicators require the compilation of all publications attributed to an author, which involves complex identification processes.

Identifying the actual researchers

Correctly associating publications to authors is a challenge due to name synonyms and homonyms. With systems like ORCID and our own algorithms, accuracy improves substantially, though minor discrepancies may remain and can often only be resolved by researchers themselves.

Counting method

Each publication attributed to an author counts fully towards their total output. This total count method reflects the collaborative nature of scientific work.

Main field and years of scientific activity

Authors are assigned to fields based on the most frequent classification of their publications, with secondary disciplines allowed where relevant. Years of scientific activity are calculated from the first to the last publication.
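Both derivations can be sketched directly. Whether the activity span is counted inclusively of the first year is an assumption of this sketch, not a documented convention.

```python
from collections import Counter

def main_field(publication_fields):
    """Most frequent field classification across an author's publications."""
    return Counter(publication_fields).most_common(1)[0][0]

def years_of_activity(pub_years):
    """Span from first to last publication year. Counting the span
    inclusively (+1) is an assumption of this sketch."""
    return max(pub_years) - min(pub_years) + 1

print(main_field(["chemistry", "chemistry", "materials", "chemistry"]))  # chemistry
print(years_of_activity([2008, 2015, 2021]))                             # 14
```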

Comparing authors’ indicators

The process of comparing authors within the ONE Index is designed to ensure fair and contextually relevant assessment by considering similar cohorts of researchers.

Cohort definition

Researchers are grouped into cohorts based on:

  • Research field: OpenAlex field/subfield taxonomy.
  • Years of activity: years since first recorded publication (career-stage proxy).
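Cohort assignment can be sketched as a simple lookup. The band boundaries below are invented for illustration; note that they widen with seniority, matching the idea of narrower windows for early-career researchers.

```python
def cohort_key(field, years_active,
               bands=((0, 5), (6, 12), (13, 25), (26, 60))):
    """Return the (field, career-stage band) cohort for a researcher.
    Band boundaries are illustrative assumptions, not ONE's actual windows."""
    for lo, hi in bands:
        if lo <= years_active <= hi:
            return (field, f"{lo}-{hi}y")
    return (field, "senior")

print(cohort_key("computer science", 4))   # ('computer science', '0-5y')
print(cohort_key("computer science", 30))  # ('computer science', '26-60y')
```

Indicator ranking then happens only among researchers sharing the same cohort key.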

Scoring system

  1. Rank per indicator – Cohort values are ordered; the rank position becomes the indicator score.
  2. Dimension score – Indicator ranks are aggregated within each dimension.
  3. ONE Index – Dimension scores are aggregated into the final score.
  4. Custom weights – Optional weights at indicator/dimension level.

Dynamic adjustment

  • Adaptive weights: weights can evolve with standards and stakeholder needs.
  • Flexible cohort windows: narrower windows for early-career; broader for senior cohorts.

Structure of the ONE Index

The ONE Index is a weighted measure that captures the breadth and depth of a researcher’s contributions across dimensions of academic and societal impact. At the heart of the framework are three supra-dimensions: Performance, Interaction within the Scientific Community, and Interaction with Society.

Appendices

Supporting material, canonical links, and operational resources.

Indicators & Dimensions

ONE indicators are organised by axes and dimensions. Institutions and funders may apply configurable weights at indicator and dimension level depending on evaluation context.

Canonical indicator catalogue

The full, live and updated list of indicators is maintained on the ONE Framework page.

Open the ONE Framework catalogue

This documentation page focuses on architecture and method reference to avoid duplication and drift.

Handouts & Quick Guides

Fair, Inclusive & Smarter Evaluation

Discover how the ONE Framework redefines research assessment beyond outdated metrics.

Responsible Research Guide

ONE Framework

Get to know our comprehensive, fair & transparent research assessment: 3 axes, 11 dimensions & 70+ indicators behind the ONE Index.

ONE Framework

From Principles to Practice: ONE’s Global Alignment

ONE integrates principles of COARA, UNESCO, DORA, and the Leiden Manifesto into a single operational framework.

ONE Compliant

Postdoc Benefits with the ONE Framework

Guiding early-career researchers toward fairer, smarter career moves.

Postdoc Benefits

Badges & Recognition

TL;DR

  • Badges make profile information visible and interpretable.
  • They reflect system status, declared roles, professional development, or public commitments.
  • Badges are optional and not rankings or quality scores.
  • Most badges are unlocked by completing profile sections or accepting pledges.
  • Badges help others quickly understand who you are, what you do, and what you stand for.

Badges in ONE Research Community highlight profile status, declared roles, professional development, and public commitments. They are designed to make profile information more visible and interpretable, promote transparency and responsible research practices, and encourage meaningful profile completion.

Badges are optional and are not rankings or quality scores.

What badges are (and are not)

Badges are:

  • Signals derived from declared information, system status, or public commitments.
  • Visible markers that help others understand a researcher’s profile at a glance.

Badges are not:

  • Performance evaluations.
  • Quality rankings.
  • Endorsements by ONE or third parties.

Badge categories

Badges are grouped into four categories, each with a distinct meaning:

  • Profile status – System-recognised states related to profile identity and trust.
  • Commitments & transparency – Voluntary public commitments and contextual information.
  • Roles & contributions – Declared roles and activities (peer review, mentoring, advising, entrepreneurship).
  • Development & profile completeness – Signals related to structured self-reflection and competence-based development.

Badge examples

Below are examples of badges available in ONE Research Community. Some badges may appear greyed out until the corresponding profile information or commitment is completed.

  • Claimed Profile – System status
  • Referred Profile – Peer recognition
  • Validated Profile – Higher peer validation
  • Trusted Contributor – Sustained contribution
  • Integrity Pledge – Public commitment
  • Zero Tolerance – Public commitment
  • Publication Record Validated – Data stewardship
  • Inclusion Supporter – Community support
  • Career Interruption Aware – Context & transparency
  • Registered Reviewer – Declared role
  • Declared Mentor – Declared role
  • Declared Advisor – Declared role
  • Entrepreneur – Declared activity
  • Competence Profile – Structured self-assessment

Badge list

Profile status

  • Claimed Profile – Profile has been claimed and confirmed by the researcher.
  • Referred Profile – Profile has been recognised by peers within the community.
  • Validated Profile – Profile has reached a higher level of peer validation.
  • Trusted Contributor – Recognised for sustained positive contributions to the research community.

Commitments & transparency

  • Integrity Pledge – Public commitment to responsible research practices and ethical conduct.
  • Zero Tolerance – Commitment to zero tolerance for harassment, discrimination, and abuse in research environments.
  • Publication Record Validated – Researcher completed a publication review flow; validation remains current.
  • Inclusion Supporter – Reports activities supporting inclusion, mentoring, and career development of others.
  • Career Interruption Aware – Provides contextual information on interruptions to support fair evaluation.

Roles & contributions

  • Registered Reviewer – Declares availability and eligibility to participate in peer review.
  • Declared Mentor – Declares mentoring or supervision of early-career researchers.
  • Declared Advisor – Declares advisory or senior guidance roles.
  • Entrepreneur – Declares entrepreneurial or venture-related activity.

Development & profile completeness

  • Competence Profile – Indicates completion of a structured competence self-assessment.

How badges are obtained

  • Claiming and completing profile sections.
  • Submitting specific forms (e.g. reviewer preferences, competences).
  • Accepting public pledges.
  • Reporting relevant professional activities.

Data sources & transparency

Badge states are derived from structured profile data entered by the researcher, system events (such as profile claiming), and publicly accepted commitments. The underlying data remains accessible and editable by the profile owner.

Badges reflect declared information, public commitments, or system status. They are not rankings or quality assessments.