Human biases in artificial intelligence: A call for higher ethical standards in the approval process of AI/ML-enabled medical devices for use in diagnostic radiology
Thursday, September 19, 2024
10:45 AM – 11:45 AM CT
Location: Regency Ballroom C (First Floor)
Abstract: Artificial intelligence (AI) and machine learning (ML) have advanced rapidly over the past decade, along with the use of AI/ML-enabled medical devices. As of 2023, the FDA has approved 533 AI/ML-enabled devices for use in radiology. While these technologies have the potential to improve health outcomes and mitigate health inequities, they also have the potential to exacerbate disparities, depending on how bias is introduced and addressed. This raises concern in the medical field, as certain patient populations already bear a higher burden of disease and poorer health outcomes. Regulatory bodies such as the FDA play a critical role in protecting the dignity, privacy, and safety of all patients. Yet the FDA does not currently consider bias or health equity explicitly in its review and approval of AI/ML-enabled medical devices. This presentation will outline ways in which bias can be introduced into AI/ML-enabled medical devices and conclude with ways in which the approval process can be made more ethical through more rigorous regulatory review.
Learning Objectives:
After participating in this session, attendees should be able to:
1. Consider the ethical implications of the use of AI in diagnostic radiology.
2. Appreciate the importance of regulatory bodies in protecting the safety and privacy of patients.
Nora Jones, PhD – Center for Urban Bioethics – Lewis Katz School of Medicine