TLDR: A new collaborative council, the ‘Council for the Responsible Use of AI in Clinical Trials,’ has been formed by leading organizations including Advarra, Sanofi, Recursion, and Velocity Clinical Research. Its primary objective is to develop ethical standards, transparency, and oversight frameworks for the rapidly expanding use of artificial intelligence in clinical research, addressing the current lack of clear governance in the field.
The rapid integration of artificial intelligence (AI) into clinical trials, spanning areas like trial design, feasibility, and patient enrollment, has prompted a critical need for robust ethical and transparent oversight. Despite the widespread adoption and perceived benefits of AI, a significant gap exists in establishing clear guidelines and accountability, as highlighted by industry leaders.
To address this, the ‘Council for the Responsible Use of AI in Clinical Trials’ has been established as a non-commercial initiative. Its founding members include Advarra, a prominent provider of IRB, technology, and consulting services, alongside pharmaceutical giant Sanofi, techbio company Recursion, and Velocity Clinical Research.
Gadi Saarony, CEO of Advarra, emphasized the current landscape, stating, ‘There’s this panacea around AI. Everybody talks about it, everybody claims they’re using it. But in clinical trials, there’s very little focus on the ethics of it, or the transparency or oversight. How do we know these tools make sense not just for sponsors, but for participants and sites?’ This new council aims to fill that void by putting essential ‘guardrails’ around a technology that is largely ungoverned despite its deep integration into trial processes.
While the FDA’s recent draft guidance on AI was a welcome initial step, Saarony noted that it left considerable areas of implementation and accountability in a ‘gray zone,’ underscoring that guidance is not equivalent to regulation.
The council plans to meet quarterly, with two in-person sessions, and will establish working groups focused on specific areas such as AI use cases, ethical considerations, regulatory questions, and real-world pilot programs. Early deliverables are expected to include a shared AI glossary, a typology of use cases, and reference models for validating AI tools, mirroring the validation processes for traditional IT systems.
Saarony stressed the importance of measurable outcomes over philosophical manifestos, aiming for ‘benchmark KPIs embedded in tools and workflows.’ Key performance indicators could include improvements in time-to-site activation, reduction in protocol amendments, optimized enrollment timelines, and enhanced data quality.
Initial outputs from the council are targeted for late 2025, with a broader framework anticipated in early 2026. This will be followed by real-world pilot data to ensure the guidance is practical and effective in actual studies. The council also intends to expand its membership to include Contract Research Organizations (CROs), regulators, and ethicists, fostering collaboration with existing organizations such as TransCelerate, CTTI, ACRO, and the FDA, none of which focuses exclusively on AI ethics in trials.
The council’s findings and recommendations will be publicly shared through white papers, peer-reviewed publications, conference presentations, public webinars, and cross-industry roundtables, promoting transparency and knowledge-sharing across the clinical research ecosystem.