The U.S. Food and Drug Administration (FDA) is preparing to release new guidance on the use of artificial intelligence (AI) in clinical trials and drug development by the end of the year. This move comes as the rapid advancement of AI technology presents both opportunities and challenges for the pharmaceutical industry.
Addressing AI's Transformative Potential
AI and machine learning have demonstrated the potential to extract data from electronic health records and other sources, making inferences that can optimize drug dosing and predict adverse effects in specific patient populations. Khair ElZarrad, director of the Office of Medical Policy at the FDA's Center for Drug Evaluation and Research, noted that approximately 300 drug submissions to the FDA since 2016 have referenced AI use in some form.
AI can also improve clinical trial recruitment, screen compounds, and enhance post-market safety surveillance. However, these advancements also raise concerns about patient safety, data quality, and the reliability of AI algorithms.
Key Considerations for AI Implementation
Sarah Thompson Schick, counsel at Reed Smith, highlighted the importance of ensuring AI is "fit for the purposes of what you're doing." The anticipated guidance is likely to address how that fitness for purpose can be maintained as AI models used in essential research and development activities are continuously retrained and improved, and how to mitigate the risks that arise along the way.
The FDA also published a special communication in the Journal of the American Medical Association (JAMA) outlining concerns about AI use in clinical research, medical product development, and clinical care. The agency emphasized the need for specialized tools to thoroughly assess large language models in their specific contexts and settings, as well as the importance of ongoing AI performance monitoring.
Ensuring Data Quality and Transparency
ElZarrad pointed to the wide variability in the quality, size, and representativeness of the data sets used to train AI models. He stressed that responsible use of AI demands that the data behind these models be fit for purpose and fit for use. He also noted how difficult it can be to understand how AI models are developed and how they arrive at their conclusions, suggesting the need for new approaches to transparency.
Data privacy issues, particularly those involving patient data and compliance with HIPAA and other federal and state laws, are also a significant concern. Schick noted that patient data used in AI development is generally aggregated and de-identified.
Industry's Proactive Approach
Even as the industry awaits FDA guidance, life sciences leaders are not standing still. "I don't think companies are waiting on the FDA, necessarily," Schick said, pointing to a proactive effort within the industry to address the challenges and opportunities AI presents.