The risks of using ChatGPT as a moral vending machine in bioethics education: a case series
Thursday, September 19, 2024
4:30 PM – 5:30 PM CT
Location: New York/Illinois Central (Second Floor)
Abstract: Some medical school educators have suggested that using large language models (LLMs) like ChatGPT to generate ethical workups of bioethics cases can enhance students’ ethics education. However, incorporating ChatGPT into bioethics curricula raises concerns about enabling students to use ChatGPT as a “moral vending machine” that produces prompt solutions to hard ethical problems while circumventing the moral deliberative process. Using a real-life ethics case, we compare the quality of ChatGPT-3.5’s response to a hospital ethics committee’s deliberation over the same case and identify a pattern of flaws in ChatGPT’s approach, which is further confirmed by an analysis of ChatGPT’s responses to a series of bioethics cases. The analysis initially reveals several appealing aspects of ChatGPT’s rapid and eloquent answers that conform well to medical education culture’s emphasis on didactic efficiency in using third-party resources. These appeals, however, may incentivize time- and resource-constrained students and school administrators to implement ChatGPT as a replacement for, rather than a supplement to, bioethics teaching and faculty. Furthermore, a close examination of ChatGPT-3.5’s answers reveals bias, inconsistency, and a lack of moral curiosity when it is confronted with challenging questions and counterarguments. We conclude that ChatGPT’s polished presentation of flawed responses can lead students to support ethical conclusions that lack robust moral justification and, as future clinicians, to develop an unreliable dependence on LLMs to resolve real-life ethical dilemmas. We propose that students instead be exposed to ethics committee deliberations to develop complex skills in clarifying values and incorporating diverse perspectives when approaching ethical issues.
Learning Objectives:
After participating in this session, attendees should be able to:
Understand the appealing aspects of ChatGPT-3.5’s responses and language that disguise underlying flaws in its moral reasoning and approach to ethical decision-making.
Compare the appeals and flaws of ChatGPT-3.5’s responses with an ethics committee’s deliberation about an ethics case.
Evaluate the risks that incorporating ChatGPT-3.5 into bioethics curricula poses to students’ and future clinicians’ capacity to clarify values and deliberate over ethical dilemmas.
Thomas Bledsoe, MD, MACP – Warren Alpert Medical School of Brown University