Trust in Medical AI: Challenging the "Black Box" Problem
Thursday, September 19, 2024
10:45 AM – 11:45 AM CT
Location: Regency Ballroom C (First Floor)
Abstract: Artificial intelligence (AI) has the potential to enhance medical diagnosis, reduce medical errors and provider burden, and even automate surgical procedures. Nevertheless, research demonstrates that patients and providers remain significantly wary of trusting medical AI. But given its potential power to revolutionize medicine, is our mistrust of medical AI rationally justifiable, or ought we challenge this mistrust?
While bias, privacy, legal uncertainty, and dehumanization all constitute wholly justifiable concerns that must be addressed, our mistrust stemming from the issue of low interpretability – the so-called “black box” problem of AI – ought to be critically challenged.
The “black box” problem refers to the inability of medical AI tools to explain the correlations they find and on which they base their medical recommendations. It induces understandable pause among patients and clinicians, for the “ideal” medical therapy is often founded not on mere correlation, but on clearly identifiable causation between findings.
Yet, despite this ideal, we still routinely use, and have therefore already successfully placed our trust in, purely correlational medical therapies, from SSRIs, to statins for reducing stroke risk, to many other clinical research-driven protocols of care. These examples show that it is possible for both patients and providers to trust recommendations based on correlation alone. Is trusting medical AI truly any different from trusting correlational evidence from clinical research? In this talk, I hope to present the argument that as AI is here to stay, we ought to be transparent with ourselves regarding how much we already trust correlation alone in medicine.
Learning Objectives:
After participating in this conference, attendees should be able to:
Outline how the “black box” problem of medical AI creates understandable concern in patients and providers.
Analyze parallels between trusting medical AI and established clinical research-driven protocols.