Arguing with a Ghost in the Machine - Bioethics of Health Decision Autonomy and Artificial Intelligence
Thursday, September 19, 2024
9:15 AM – 10:15 AM CT
Location: Grand Ballroom B (First Floor)
Abstract: Healthcare artificial intelligence (AI) is changing shared decision-making in healthcare from a dyad involving patients and clinicians to a triad involving patients, clinicians, and AI systems. Although AI systems are generally not recognized as having agency, this shift still redistributes autonomy over health decisions, with both conscious and subconscious aspects. The conscious aspect involves the intentional recognition of AI output; the subconscious aspect reflects how AI may produce automation biases that push decisions toward alignment with, or against, the AI. In the current healthcare context, clinicians remain responsible for clinical decisions, which are shared with patients who have the opportunity to engage in their care. However, given automation biases and the potential for shifts in standards of care, clinicians may encounter situations in which they believe the AI is incorrect and must argue against its output. This can be addressed in part through AI explainability; however, one must then be able to recognize whether the provided explanation is accurate and appropriate to the situation. Therefore, other strategies must be developed to contest AI outputs without relying exclusively on explanations, which may or may not be provided. This presentation proposes key considerations for such strategies and acknowledges several implications, including how these strategies may reduce the tendency toward automation bias, provide a foundation for skillfully contesting AI outputs, and maintain essential human elements of health autonomy and shared decision-making.
Learning Objectives:
After participating in this session, attendees should be able to:
Illustrate how AI systems affect the shared decision-making process.
Summarize key human elements in shared decision-making to support the appropriate use of AI outputs in healthcare.
Devise strategies to contest AI outputs when the outputs are believed to be incorrect.