The Role of AI in Shared Decision-Making: A Patient, Physician, and a Chatbot Walk into a Consultation Room
Thursday, September 19, 2024
9:15 AM – 10:15 AM CT
Location: Grand Ballroom B (First Floor)
Abstract: As artificial intelligence (AI) technologies advance, bioethicists have raised questions about how these technologies will be integrated into the provision of healthcare. Debate has focused on clinicians’ use of AI to supplement their skills in diagnosis and data gathering. Less attention has been paid to whether and how patients might use AI independently for guidance in medical decision-making.
Serious goals-of-care decisions are stressful for patients and families, who may seek advice from many sources, including asking members of their care team, “What would you do?” It is foreseeable that patients and surrogates facing a decision with high levels of uncertainty may turn to AI sources for guidance, advice, or recommendations. While AI may offer opportunities to provide information, clarify values, and guide decisions that align with them, there is also a risk that these programs will return inaccurate or biased information and steer a decision away from patient/surrogate values.
Using the scenario of a family facing a serious goals-of-care decision around tracheostomy/mechanical ventilation for a medically complex pediatric patient and turning to ChatGPT for advice, we will provide example conversations to examine how variations in question input influence the responses. We propose that AI does not and cannot consider the emotional toll (guilt, suffering, regret, grief) of making these decisions, and unlike a human doctor, AI cannot account for ethical principles when making a recommendation. AI support will likely enhance but not replace human expertise in these decisions, and limits should be defined to guide technology in this context.
Learning Objectives:
After participating in this conference, attendees should be able to:
Consider the potential uses and strengths of artificial intelligence technologies currently employed in clinical care, as well as those anticipated to be integrated into clinicians’ work in the future.
Discuss challenges posed by AI technologies offering advice on decision-making while lacking the “humanness” to appreciate patients’/families’ values and the implications of these decisions on length or quality of life.
Rachel Clarke, MD – Assistant Professor, Pediatrics, SUNY Upstate Golisano Children's Hospital; Katie Baughman, MD – Assistant Professor, Pediatrics, University of Michigan Medical School; Haoyang Yan, PhD – Senior Research Associate, Medical Social Sciences, Northwestern University; Cynthia Arslanian-Engoren, PhD, RN – Professor, School of Nursing, University of Michigan; Ken Pituch, MD – University of Michigan Medical School