Responsibility for the Use of Opaque Machine Learning Algorithms in Clinical Diagnosis
Machine Learning (ML) algorithms are an application of artificial intelligence through which patterns in data can be discerned without those patterns being explicitly programmed. In clinical medicine, the application of ML is in its infancy. Yet results across disparate clinical specialisations show the potential for ML to outperform clinicians in both the accuracy and the efficiency of diagnosis based on the presentation of clinical data (Jie, Zhiying & Li, 2021; Codella et al., 2015; Gulshan et al., 2016). However, it has been argued that the opacity of the ML algorithms used for diagnostic purposes makes the attribution of responsibility for negative outcomes difficult or even impossible (Floridi, 2016; Morley et al., 2020). Accordingly, their use is often thought impermissible for the reasons given by Sparrow (2007), albeit in a different context. As Koskinen (Forthcoming) argues, there is no one who can take full, informed responsibility for the use of opaque ML algorithms. Dissenting from these views, I argue that the epistemic challenges posed by diagnostic ML algorithms are analogous to existing epistemic challenges in functioning healthcare systems operating under best practices, such that the standard of full, informed responsibility can be seen to be impracticably and unjustifiably high: clinicians already operate under conditions of epistemic opacity in ways which do not create responsibility gaps, and which would not meet this standard. These conditions include the prescription of common drugs of known efficacy which nevertheless have opaque mechanisms of action (most notably paracetamol). They also include circumstances in which the cognitive limitations of physicians constrain knowledge about mechanisms of action.
Indeed, even where mechanisms of action are in-principle knowable (most notably in general medicine settings), physicians often justifiably depend on knowing the efficacy of various drugs rather than their mechanisms of action. Despite the relevant forms of opacity, clinicians act permissibly – and remain responsible – in both sorts of case. Turning to the clinical use of algorithms, I note that the use of decision tree algorithms does not lead to responsibility gaps. Physicians have long used decision trees in general medicine (Greep & Siezenis, 1989; Podgorelec et al., 2002) to aid in determining treatment options based on probabilistic efficacy following the input of relevant clinical information. Decision trees have also been used for diagnostic purposes (Podgorelec et al., 2002), with much the same justification. Yet just as physicians in general medicine settings may permissibly be unaware of relevant mechanisms of action, they may permissibly be unaware of the details of how relevant complex decision tree algorithms arrive at recommendations. Importantly, the use of decision tree algorithms neither supplants physician decision-making nor undermines the appropriate ultimate attribution of responsibility. I will argue that this model of physician use of algorithms remains appropriate with respect to opaque ML algorithms. Moreover, provided that the efficacy of an opaque ML algorithm is known, the analogy with paracetamol and other effective but opaque drugs holds. Accordingly, responsibility can be attributed in familiar ways.
Learning Objectives:
After participating in this conference, attendees should be able to:
Describe the current literature on the use of epistemically opaque Machine Learning (ML) tools in clinical diagnosis.
Discuss recent arguments that the use of these tools necessarily gives rise to responsibility gaps.
Explain why the sort of epistemic opacity these tools involve is already common in clinical practice, such that responsibility gaps need not arise given appropriate institutional structures.