Frank Ursin et al. investigate the ethical considerations associated with medical artificial intelligence (AI), particularly in the context of radiology. They emphasize the importance of implementing explainable AI (XAI) techniques to address epistemic and explanatory concerns that arise when AI is employed in medical decision-making. The authors outline a four-level approach to explicability, comprising disclosure, intelligibility, interpretability, and explainability, with each successive level providing greater detail and clarity to the patient or physician.
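To make the ordering concrete, here is a minimal Python sketch that encodes the four levels as an ordinal scale. The one-line glosses paraphrase our reading of each level rather than quoting the paper, and the meets_requirement helper is purely illustrative, not something the authors propose.

```python
from enum import IntEnum

class Explicability(IntEnum):
    """Ordinal encoding of the four levels discussed by Ursin et al.
    Higher values demand more from the information process.
    The glosses below are our paraphrase, not the paper's wording."""
    DISCLOSURE = 1        # state that an AI system is involved and what it outputs
    INTELLIGIBILITY = 2   # convey, in lay terms, how the system works in general
    INTERPRETABILITY = 3  # show which inputs or features drove a particular result
    EXPLAINABILITY = 4    # give reasons for the result that can withstand scrutiny

def meets_requirement(provided: Explicability, required: Explicability) -> bool:
    """Illustrative check: does the level a system provides satisfy
    the level a given consent situation requires?"""
    return provided >= required

# Example: a saliency-map tool reaching interpretability would satisfy
# a consent process that only requires intelligibility.
assert meets_requirement(Explicability.INTERPRETABILITY,
                         Explicability.INTELLIGIBILITY)
```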
The authors argue that XAI has great potential in the medical field, and they present two examples from radiology to illustrate its practical applications. The first involves image inpainting techniques that yield sharper, more detailed saliency maps, helping to localize diagnostically relevant regions within radiological images. The second highlights the role of natural language communication in XAI: an image-to-text model generates medical reports from radiological images. Together, the examples show that XAI techniques in radiology can both surface the evidence behind a model's output and communicate it in terms practitioners and patients understand.
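The first example can be approximated with a perturbation-style saliency map: mask one region at a time and record how much the model's confidence drops. The sketch below uses a constant fill as a crude stand-in for the learned inpainting model the paper's example relies on; model_predict is an assumed callable that maps a 2D image array to a probability for the finding of interest.

```python
import numpy as np

def occlusion_saliency(model_predict, image, patch=16, fill=0.0):
    """Perturbation-based saliency: occlude each patch and record how much
    the model's predicted probability drops. Larger drops mark regions the
    model relies on. A real inpainting model would synthesize plausible
    tissue instead of the constant fill used here."""
    h, w = image.shape
    baseline = model_predict(image)
    saliency = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = fill  # crude "inpainting"
            saliency[y:y + patch, x:x + patch] = baseline - model_predict(perturbed)
    return saliency
```

The second example, report generation, follows a standard image-to-text pipeline. A hedged sketch using Hugging Face's VisionEncoderDecoderModel follows; the checkpoint name is a placeholder for a model actually trained on radiology reports, not a real published model.

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer, AutoImageProcessor
from PIL import Image

# Placeholder checkpoint; substitute one trained for radiology report generation.
CHECKPOINT = "example-org/radiology-report-generator"

model = VisionEncoderDecoderModel.from_pretrained(CHECKPOINT)
processor = AutoImageProcessor.from_pretrained(CHECKPOINT)
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

image = Image.open("chest_xray.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whether such generated text amounts to explainability or merely intelligibility in the authors' scheme depends on how faithfully the report reflects the evidence the model actually used.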
In the paper’s conclusion, the authors emphasize the need for a tailored approach to explicability that considers the needs of patients and the scope of medical decisions. They also advocate using insights gained from medical AI ethics to re-evaluate established medical practices and confront biases in medical classification systems. Applying the four levels of explicability thoughtfully, they posit, makes it possible to establish ethically defensible information processes when utilizing medical AI.
This paper touches on broader philosophical issues related to the ethics of technology, medical autonomy, and the nature of trust in AI-driven decision-making. As AI becomes increasingly integrated into various domains of human activity, questions about transparency, fairness, and the moral implications of AI systems become paramount. This paper demonstrates the necessity of establishing an ethical framework for AI applications in healthcare, providing valuable insights that can be extended to other disciplines as well. By considering the complex interplay between AI-driven systems and human agents, the authors also underscore the importance of understanding how technological advancements impact the broader social fabric and the values we uphold as a society.
Future research in this area could explore the generalizability of the four-level approach to explicability in other medical domains or even non-medical contexts. Additionally, researchers may investigate how the incorporation of diverse perspectives in the development of AI systems and explainability techniques can mitigate the potential for biases and discriminatory outcomes. It would also be valuable to study how XAI can be adapted to the specific needs and preferences of individual patients or physicians, creating personalized approaches to explicability. Lastly, researchers may wish to assess the long-term impact of integrating XAI in medical practice, particularly in terms of patient satisfaction, physician trust, and overall quality of care.
Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?

Abstract
Definition of the problem
The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?
Arguments
We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.
Conclusion
We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.