The Association of Clinical Research Professionals (ACRP) recently released a white paper, Responsible Oversight of Artificial Intelligence for Clinical Research Professionals, written to assist clinical research professionals in understanding the application and implications of artificial intelligence (AI) in an evolving landscape.
The white paper acknowledges trends in regulating AI, including the European Union AI Act and individual organizations developing their own internal oversight programs “that explore AI’s benefits while mitigating risks.” The white paper emphasizes that “clinical research professionals must be aware of existing and emerging regulatory and ethical frameworks when researching AI as a medical device, be it Software-in-a-Medical-Device or Software-as-a-Medical-Device.” Additionally, even research on AI that is not classified as a medical device may face regulatory and ethical considerations.
Companies are creating internal programs such as “Trustworthy AI” or “Responsible AI” to mitigate risks such as AI “hallucinations” (false outputs) and cybersecurity threats. Human oversight remains a critical component, particularly in the early stages of AI adoption, to catch inaccuracies and resolve ethical dilemmas. AI’s value as a tool for innovation, combined with its potential for misuse, underscores the importance of a vigilant oversight infrastructure.
The white paper notes that clinical researchers will need to navigate the legal frameworks surrounding AI, ensuring compliance with AI-specific regulations on top of existing industry regulations. It also discusses Good Machine Learning Practices (GMLPs), an internationally developed set of guiding principles, with an emphasis on transparency and performance monitoring.
AI is revolutionizing the clinical research lifecycle, enhancing efficiency and quality across stages such as recruitment, protocol design, and data analysis. Additionally, the white paper frames sharing lessons learned from AI implementation as something of an ethical responsibility, since doing so allows the clinical research community to refine practices and mitigate risks over time.
Further, the integration of AI into the industry will reshape the workforce. While AI can alleviate repetitive tasks, it also raises concerns about job security and skills. The white paper notes that, despite these concerns, professionals who adapt to AI’s capabilities will likely have a competitive advantage over colleagues who ignore it. Continuing education and the acquisition of new skills are important components of using AI responsibly and maximizing its benefits.
AI is not without problems, however, and balancing innovation with ethical principles is essential. De-identifying personal health information is critical, and following established definitions and standards, such as those under HIPAA and GDPR, may minimize the risks of using AI.
The white paper concludes by advocating for responsible AI adoption in clinical research. By embracing ethical frameworks, regulatory compliance, and continuous learning, clinical research professionals can harness AI’s transformative potential while mitigating its risks. This approach helps to ensure that the contributions of AI align with the core mission of responsibly and equitably advancing health outcomes.
“The clinical research community is poised for successful and responsible use of AI. Our unique blend of ethics, resiliency, flexibility, curiosity, and vision for a better future has made us successful not only in advancing clinical care, but also in advancing how we advance clinical care,” says David Vulcano, LCSW, MBA, CIP, RAC, FACRP, Vice President, Research Compliance & Integrity, HCA Healthcare, and author of this white paper.