Somnath Sarkar, PhD: The focus should be on the end-to-end process of evidence generation, not just the tools used. This aligns with the idea of creating fit-for-purpose evidence, a concept that has been evolving over the past four or five years, influenced by foundational papers and work by the FDA and others.
One emerging area of interest is causal inference, which builds upon predictive modeling. It addresses the ‘What if?’ questions physicians often consider, such as the potential outcomes of treating a patient one way versus another. Causal inference aims to establish causality rather than merely make predictions, whereas prediction has traditionally been the focus in real-world settings. Even traditional techniques, such as inverse probability weighting (IPW), can be integrated into this framework, which offers much to look forward to in the context of machine learning (ML) and deep learning.
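To make the IPW idea mentioned above concrete, here is a minimal sketch on simulated data. The data-generating process, variable names, and effect sizes are all illustrative assumptions, not anything from the discussion: a binary confounder drives both treatment choice and outcome, so a naive comparison of treated versus untreated patients is biased, while weighting each patient by the inverse of their estimated propensity score recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Binary confounder (e.g., a prognostic factor) affecting both
# treatment assignment and outcome. Purely illustrative.
x = rng.binomial(1, 0.5, n)

# Treatment is more likely when x = 1 (p = 0.8) than when x = 0 (p = 0.2).
t = rng.binomial(1, np.where(x == 1, 0.8, 0.2))

# Outcome: true treatment effect of 2.0, plus confounding through x.
y = 2.0 * t + 3.0 * x + rng.normal(0.0, 1.0, n)

# Naive comparison is biased: treated patients differ at baseline.
naive = y[t == 1].mean() - y[t == 0].mean()

# Estimate propensity scores e(x) = P(T = 1 | X = x) from the data
# (with a binary confounder, stratum-level frequencies suffice).
e = np.where(x == 1, t[x == 1].mean(), t[x == 0].mean())

# IPW (Horvitz-Thompson) estimate of the average treatment effect.
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

print(f"naive: {naive:.2f}, IPW: {ate:.2f}")
```

On this simulated cohort the naive difference lands near 3.8 while the IPW estimate recovers roughly 2.0, the true effect. In practice the propensity model would be fitted from covariates (e.g., by logistic regression or an ML model), which is where the integration with ML that Dr. Sarkar describes comes in.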
Vadim Koshkin, MD: It’s essential to remember that all these technologies are tools, and the quality of data is paramount. Currently, many retrospective real-world studies, particularly those using claims databases, may suffer from lower data quality due to a lack of granularity and detail. Much of the work involves traditional chart reviews, where relevant data are manually extracted, which is incredibly labor-intensive. A new tool capable of performing this task more efficiently, such as a large language model (LLM), could significantly reduce the workload here. However, the success of these projects still depends on the information recorded in patients’ medical records and what is accessible.
Mark Stewart, PhD: At a higher level, there is widespread recognition that integrating these tools into drug development can lead to significant gains in efficiency and provide new insights into data. When shifting towards using this data as evidence for regulatory bodies like the FDA, a different level of validation is required, which is currently a sticking point. The FDA’s recent meeting with its new Digital Health Advisory Committee touched on whether current regulatory frameworks are optimal for the latest generation of AI tools. The responses vary, but a common takeaway is that adaptations could improve the integration of these tools.