A unified and international effort to boost research integrity was first crystallized in the Singapore Statement on Research Integrity in 2010. Today, integrity in research and scientific practices is center stage in policies set forth by governments, policy-makers, and funders. The recently released UK House of Commons Science, Innovation and Technology Committee report and the US’s Framework for Strengthening Federal Scientific Integrity Policies and Practices, for example, highlight that research integrity is being recognized as a fundamental practice not just among scientists but within the larger research ecosystem that includes academic institutions, publishers, and funding agencies.

The importance of trust in science

At the heart of research integrity lies trust, which allows researchers to build science on science. Isaac Newton famously emphasized this principle of building on previous discoveries, writing, “If I have seen further [than others], it is by standing on the shoulders of giants.” It follows that if no new science can stand alone, scientists must be able to trust the work of other scientists. And, in a larger societal context, society can benefit from science only if the public trusts science. However, ensuring trust in science has become more challenging than ever: the research ecosystem is not immune to the effects of rapidly evolving means of sharing and disseminating information, and the advent of generative AI tools such as ChatGPT and Bard makes it increasingly easy to generate plausible but false scientific content.

Enter Trust Markers

That is why it has become crucial to identify explicit and implicit elements of trust within published scientific works: elements indicating that the research honors the norms of the scientific method and encompasses the hallmarks of responsible science, such as a clearly stated study objective, a statement of how the research is funded, and guidance on obtaining a copy of the study data. Dimensions Research Integrity, formerly known as Ripeta, has adopted a term for these hallmarks of trust: Trust Markers. These, along with traditional checks and balances such as the peer-review process, can give a stamp of confidence to published research.

The presence of Trust Markers in a scientific publication is a handshake, an understanding between authors and readers that proper research practices have been observed and the norms of the scientific method have been honored. Trust Markers within Dimensions are a new type of article metadata representing the transparency and reproducibility of scientific research. Trust in reproducibility highlights elements of a paper that facilitate replicating or reproducing the original study, for example, a data availability statement indicating where other researchers can access the study data for future work. Trust through transparency spotlights markers that communicate research practices, such as ethical approval and conflict of interest statements. Together, these Trust Markers allow improved research scrutiny and governance.

Developing Trust Markers within Dimensions 

Dimensions Research Integrity applies the AI models developed by Ripeta to Dimensions, the world’s largest linked research database, to generate a data set about reproducibility and transparency that is the first of its kind in both size and scope. The Dimensions Research Integrity Dataset contains data for over 35 million research articles and conference papers from 2010 onward, providing a rich data source on research integrity and quality. The training, evaluation, and validation datasets used to create the AI models that recognize Trust Markers span content from many fields; on average, the model for each Trust Marker incorporated in Dimensions Research Integrity was trained, evaluated, and validated across ten fields of study. Additional Trust Markers are being developed to align with study reporting guidelines such as MDAR and ARRIVE 2.0, showcased with the EQUATOR Network. The following Trust Markers are available as of May 2023:

  • Funding statement – does the publication state whether the author(s) were granted funding to conduct their research and where that funding came from?
  • Ethical approval statement – does the publication have a statement affirming that ethical approval from a human subject or animal welfare committee was either obtained or was not required for the study?
  • Conflict of interest statement – does the publication declare possible sources of bias based on the personal interests of the author(s) in the research findings? For example, funding sources, past or present employers, or the author(s)’ financial interests.
  • Author contribution statement – does the publication provide details of each author’s role in the development and publication of the manuscript?
  • Data availability statement – does the publication have a dedicated section indicating whether data from the research is available and where it can be found?
  • Data location – does the publication provide the locations where research data (raw or processed) can be accessed?
  • Repositories – does the publication provide the names of any research data repositories used by the author(s) to preserve, organize, and facilitate access to study data?
  • Code availability statement – does the publication state how to access the code used for the study/research?
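To make the idea concrete, the list above can be pictured as structured article metadata. The sketch below is purely illustrative: the field names, the record layout, and the helper function are assumptions for this example, not the actual Dimensions Research Integrity schema or API.

```python
# Hypothetical sketch of Trust Markers as article metadata.
# Field names and values are illustrative, not the real Dimensions schema.

article = {
    "title": "An example study",
    "trust_markers": {
        "funding_statement": True,
        "ethical_approval_statement": True,
        "conflict_of_interest_statement": True,
        "author_contribution_statement": False,
        "data_availability_statement": True,
        "data_location": True,
        "repositories": True,
        "code_availability_statement": False,
    },
}

def present_markers(record):
    """Return the names of Trust Markers detected in an article record."""
    return sorted(
        name for name, detected in record["trust_markers"].items() if detected
    )

print(present_markers(article))
```

In a scheme like this, a reader or a governance tool could quickly see which hallmarks of responsible science a paper carries and which are missing, which is the kind of scrutiny the dataset is designed to support.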

In the coming months, we will take an in-depth look at each of these Trust Markers. Stay tuned for more in the series!