Why connected evidence matters
AI is moving fast across the research landscape, but for many academic institutions the real challenge is not adopting new tools; it is ensuring that research intelligence remains trustworthy, auditable, and useful in an environment shaped by increasing compliance demands, shifting evaluation frameworks, and growing expectations around integrity and transparency.
That was the central theme of an EARMA–Digital Science webinar, Future-proofing Research Management in the Age of AI, featuring Digital Science thought leaders Ann Campbell and Jürgen Wastl. Their message was pragmatic: AI can reduce burden and unlock insight, but only when it is built on connected, trusted research data. Integrity isn’t a principle that is audited at the end. It is an architecture that we need to build from the start.
AI is not new, but its visibility is
AI has been part of research analytics for decades, from early pattern recognition to machine learning–based classification and, more recently, large language models (LLMs). What has changed is not just capability, but accessibility. “AI has been around a long time,” Campbell noted, “… generative AI has taken it one step further by making these capabilities available to everybody.”
Campbell’s point is worth underlining for research managers: GenAI does not just analyze content at scale; it can also generate new content, for example drafting impact narratives, opportunity briefs, and plain-language summaries. And because it is far more accessible than earlier forms of AI, these capabilities are now within reach of everyday research-office workflows.
That accessibility raises both opportunity and concern. GenAI can synthesize evidence, draft narratives, and surface patterns at scale, but it also amplifies long-standing issues around fragmented data, inconsistent records, and unclear provenance. Wastl noted that this makes identifiers such as ORCID critical in establishing human accountability and provenance, vital for trust and reproducibility when AI tools are part of the workflow.
The real bottleneck: fragmented research information
Across institutions, research information still lives in multiple systems: CRIS platforms, HR databases, repositories, funder portals, bibliometric tools. The result is duplication, missing links, and a heavy administrative burden, especially during reporting cycles. “There’s so much data there,” Campbell said, “but it’s of little use because it just sits in different places.”
This is where persistent identifiers (PIDs) become foundational rather than optional. ORCID iDs for researchers, DOIs for outputs, organizational identifiers, and grant IDs allow institutions to link people, projects, funding, and outcomes in a way machines – and humans – can trust. Campbell explained that PIDs remove the ambiguity that AI cannot reason with.
Connected research intelligence platforms like Dimensions, which link grants, publications, policy documents, datasets, and organizations, are built on this principle: clean connections first, advanced analytics second. For Wastl, the point is practical as well as principled: connected identifiers make reporting more efficient, and they make governance and integrity easier to embed in day-to-day processes.
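To make the principle concrete, here is a minimal sketch of why PIDs remove ambiguity: records are joined on identifiers rather than error-prone name matching. All records here are hypothetical (the ORCID iD is the public example identifier from ORCID’s own documentation), and the field names are illustrative, not any platform’s actual schema.

```python
# Hypothetical records keyed by persistent identifiers:
# ORCID iDs for people, grant IDs for projects, DOIs for outputs.

researchers = {
    "0000-0002-1825-0097": {"name": "J. Carberry"},  # ORCID's example iD
}

grants = {
    "grant.1234567": {"title": "Example project", "orcid": "0000-0002-1825-0097"},
}

publications = [
    {"doi": "10.1000/xyz123", "orcid": "0000-0002-1825-0097", "grant": "grant.1234567"},
]

def outputs_for_grant(grant_id):
    """Return the DOIs unambiguously linked to a grant via its identifier."""
    return [p["doi"] for p in publications if p["grant"] == grant_id]

print(outputs_for_grant("grant.1234567"))  # ['10.1000/xyz123']
```

The join condition is an exact identifier match, which is precisely the kind of unambiguous relation machines can reason over; name-based matching would need fuzzy logic and manual disambiguation.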
From reactive reporting to proactive intelligence
One of the strongest messages from the webinar was a shift in mindset. AI’s real value for research management is not automation for its own sake, but enabling institutions to move from describing what happened to exploring what could happen next.
An informed funding workflow, for example, could:
- Analyze awarded grant data to detect emerging funder priorities
- Identify collaboration patterns and thematic shifts
- Generate opportunity briefs aligned to institutional strengths and strategic priorities
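The first step above can be sketched in a few lines: count awarded grants per funder per year, then flag funders whose recent award volume grew. The records, funder names, and threshold logic below are all hypothetical, a toy illustration of trend detection rather than any product’s actual method.

```python
from collections import defaultdict

# Hypothetical awarded-grant records.
grants = [
    {"funder": "Funder A", "year": 2022, "topic": "AI ethics"},
    {"funder": "Funder A", "year": 2023, "topic": "AI ethics"},
    {"funder": "Funder A", "year": 2023, "topic": "AI governance"},
    {"funder": "Funder B", "year": 2022, "topic": "genomics"},
    {"funder": "Funder B", "year": 2023, "topic": "genomics"},
]

# Count awards per funder and year.
counts = defaultdict(lambda: defaultdict(int))
for g in grants:
    counts[g["funder"]][g["year"]] += 1

def emerging(counts, prev_year, curr_year):
    """Funders whose award count increased between two years."""
    return [f for f, by_year in counts.items()
            if by_year[curr_year] > by_year[prev_year]]

print(emerging(counts, 2022, 2023))  # ['Funder A']
```

A real analysis would of course draw on linked grant data at scale and richer signals (topics, collaborations, amounts), but the shape is the same: aggregate connected records, then look for shifts over time.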
“We’re moving from chasing calls to anticipating them,” Campbell said. For research offices under pressure to “do more with less”, this represents a shift from reactive support to strategic partnership, using connected evidence to inform decisions earlier.
Automation and judgment: drawing the line clearly
A recurring concern in research evaluation is whether AI will replace expert judgment. Both speakers were unequivocal: it should not. Instead, AI should handle the heavy lifting: synthesizing large volumes of evidence, summarizing outputs, and drafting narratives. Human expertise remains essential for interpreting quality, societal value, and disciplinary nuance. The idea is to let AI remove the administrative burden and let humans focus on strategy.
Integrity by design: efficiency, not just ethics
AI also introduces new integrity questions, but it can strengthen safeguards when used transparently. According to the speakers, AI-based integrity checks should be treated as advisory input, not as an automated verdict.
Human review, disclosure of AI use, and clear governance remain essential. Wastl added a leadership perspective: integrity is not only about ethics; it is also about measurable data and efficiency.
For senior leadership, investing in connected identifiers and trusted data infrastructure reduces reporting friction, improves auditability, and lowers institutional risk.
Governance must be agile, not static
One of the clearest practical lessons from the webinar was that AI governance cannot follow traditional policy cycles. “The policy you build is not meant to be static,” Wastl said. Instead, governance should function as a feedback loop, regularly reviewed, updated, and informed by real-world use. “AI readiness isn’t buying the next fancy tool,” Campbell concluded. “It’s really just about building that foundation around the data quality, the literacy, and just the governance around that, that can let us use these tools responsibly.”
Why connected evidence matters
Academic institutions are under pressure to deliver faster insight, stronger assurance, and clearer narratives, all with limited resources.
Connected research intelligence, linking funding, outputs, impact, and attention, creates the conditions where AI can be applied responsibly and effectively. When the foundations are in place, AI becomes not a risk to manage, but a capability to deploy with confidence.
Learn more: Dimensions helps academic institutions turn connected research data into insight. By linking grants, publications, policy documents, datasets, and attention signals in one integrated platform, Dimensions supports funding intelligence, portfolio analysis, impact reporting, and responsible use of AI-driven analytics.
Explore how Dimensions supports academic institutions
https://www.dimensions.ai/sector/academic-institutions/
