(Featured) Ethics of using artificial intelligence (AI) in veterinary medicine

Simon Coghlan and Thomas Quinn present an examination of the current landscape and potential impacts of artificial intelligence (AI) within the field of veterinary medicine. The article opens by exploring the broad applications and implications of AI within human and veterinary medicine, highlighting the distinction between machine learning (ML), a subset of AI, and clinical prediction rules (CPRs). The authors emphasise that while CPRs can be interpreted by clinicians because they are explicit, rule-based algorithms, ML often operates as a ‘black box’, which may limit its understandability and thus its trustworthiness.

The research further scrutinises potential benefits and risks of AI in veterinary practice. Acknowledged benefits include an enhanced ability to diagnose diseases, provide prognostic estimations, and possibly aid in treatment decision-making. At the same time, the authors articulate risks, such as a lack of rigorous scientific validation, the possibility of AI overdiagnosis leading to unnecessary treatment, and the risk of harm due to algorithmic bias. Notably, the authors put forth a compelling argument that a veterinarian’s role and responsibilities, largely determined by their ethical standpoint, can significantly influence their approach to AI in practice.

The third key element addressed in the research pertains to the distinctive risks associated with veterinary AI and ethical guidance for its appropriate use. The authors articulate unique risk factors, such as the legal status of companion animals as property, the relatively unregulated nature of veterinary medicine, and the lack of sufficient data for training ML models. Accordingly, the authors propose ethical principles and goals for guiding AI use in veterinary medicine, emphasising the need for nonmaleficence, beneficence, transparency, respect for client autonomy, data privacy, feasibility, accountability, and environmental sustainability.

The philosophical undertones of this article resonate with broader discourse on ethics, anthropocentrism, and the societal role of technology. The authors’ exploration of the veterinarian’s ethical responsibilities in an increasingly AI-dependent world mirrors the wider philosophical question of how society negotiates human responsibility in the age of AI. Additionally, their criticism of anthropocentrism foregrounds debates about the moral consideration afforded to non-human animals, a significant theme within animal ethics. It illustrates the intersection of technology, ethics, and our societal structures, underscoring the need for an ongoing dialogue about our ethical obligations within an increasingly digitised world.

Future research may wish to delve deeper into the normative implications of AI in veterinary medicine. The authors’ ethical guidance principles could provide a basis for developing a more nuanced ethical framework that vets, AI developers, and regulators might follow. More empirical studies are also needed to gauge the practical impact of AI on animal healthcare outcomes and how AI is being perceived and utilized by different stakeholders within the field. Additionally, considering the significant role of data in training ML models, the ethical implications of data collection, privacy, and use in veterinary contexts warrant further exploration. Ultimately, as the authors suggest, the successful integration of AI in veterinary medicine hinges on an informed and ethically conscious approach that prioritizes the welfare of both animals and their human caretakers.

Abstract

This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive features influence the ethics of AI systems that might benefit clients, veterinarians and animal patients—but also harm them. It offers practical ethical guidance that should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.

(Featured) Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?

Frank Ursin et al. investigate the ethical considerations associated with medical artificial intelligence (AI), particularly in the context of radiology. They emphasize the importance of implementing explainable AI (XAI) techniques to address epistemic and explanatory concerns that arise when AI is employed in medical decision-making. The authors outline a four-level approach to explicability, comprising disclosure, intelligibility, interpretability, and explainability, with each successive level providing greater detail and clarity to the patient or physician.

The authors argue that XAI has great potential in the medical field, and they present two examples from radiology to illustrate its practical applications. The first example involves the use of image inpainting techniques to generate sharper and more detailed saliency maps, which can help localize relevant regions within radiological images. The second example highlights the importance of natural language communication in XAI, where an image-to-text model is used to generate medical reports based on radiological images. These two examples demonstrate that incorporating XAI techniques in radiology can provide valuable insights and improved communication for medical practitioners and patients.

In the paper’s conclusion, the authors emphasize the need for a tailored approach to explicability that considers the needs of patients and the scope of medical decisions. They also advocate for the use of insights gained from medical AI ethics to re-evaluate established medical practices and confront biases in medical classification systems. By applying the four levels of explicability in a thoughtful manner, the authors posit that ethically defensible information processes can be established when utilizing medical AI.

This paper touches on broader philosophical issues related to the ethics of technology, medical autonomy, and the nature of trust in AI-driven decision-making. As AI becomes increasingly integrated into various domains of human activity, questions about transparency, fairness, and the moral implications of AI systems become paramount. This paper demonstrates the necessity of establishing an ethical framework for AI applications in healthcare, providing valuable insights that can be extended to other disciplines as well. By considering the complex interplay between AI-driven systems and human agents, the authors also underscore the importance of understanding how technological advancements impact the broader social fabric and the values we uphold as a society.

Future research in this area could explore the generalizability of the four-level approach to explicability in other medical domains or even non-medical contexts. Additionally, researchers may investigate how the incorporation of diverse perspectives in the development of AI systems and explainability techniques can mitigate the potential for biases and discriminatory outcomes. It would also be valuable to study how XAI can be adapted to the specific needs and preferences of individual patients or physicians, creating personalized approaches to explicability. Lastly, researchers may wish to assess the long-term impact of integrating XAI in medical practice, particularly in terms of patient satisfaction, physician trust, and overall quality of care.

Abstract

Definition of the problem

The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?

Arguments

We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this then allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.

Conclusion

We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.

(Featured) Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making

Giorgia Lorenzini et al. examine the evolving nature of the doctor–patient relationship in the context of integrating artificial intelligence (AI) into healthcare. They focus on the shared decision-making (SDM) process between doctors and patients, a consensual partnership founded on communication and respect for voluntary choices. The authors argue that the introduction of AI can potentially enhance SDM, provided it is implemented with care and consideration. The paper addresses the communication between doctors and AI and the communication of this interaction to patients, evaluating its potential impact on SDM and proposing strategies to preserve both doctors’ and patients’ autonomy.

The authors explore the communication and autonomy challenges arising from AI integration into clinical practice. They posit that AI’s influence could unintentionally limit doctors’ autonomy by heavily guiding their decisions, which in turn raises questions about the balance of power in the decision-making process. The paper emphasizes the importance of doctors understanding AI’s recommendations and checking for errors while also being competent in working with AI systems. By examining the “black box problem” of AI’s opaqueness, the authors argue that explainability is crucial for fostering the AI-doctor relationship and preserving doctors’ autonomy.

The paper then investigates doctor–patient communication and autonomy within the context of AI integration. The authors argue that in order to promote patients’ autonomy and encourage their participation in SDM, doctors must disclose and discuss AI’s involvement in the clinical evaluation process. They also contend that AI should consider patients’ preferences and unique situations, thus ensuring that their values are respected and that they are able to participate actively in the SDM process.

In relating the research to broader philosophical issues, the authors’ examination of the AI–doctor–patient relationship aligns with questions surrounding the ethical and moral implications of AI in society. As AI increasingly permeates various aspects of our lives, its impact on human autonomy, agency, and moral responsibility becomes a focal point for philosophical inquiry. The paper contributes to this discourse by delving into the specific context of healthcare and the evolving dynamics of the doctor–patient relationship, providing a microcosm for understanding the broader implications of AI integration in human decision-making processes.

As the authors outline the potential benefits and challenges of incorporating AI into the SDM process, future research could investigate the practical implementation of AI in various clinical settings, evaluating the effectiveness of AI-doctor collaboration in promoting SDM. Further research might also address the training and education necessary for medical professionals to adapt to AI integration, ensuring a seamless transition that optimizes patient care. Additionally, exploring methods for incorporating patients’ values into AI algorithms could provide a path to more personalized and autonomy-respecting AI-assisted healthcare.

Abstract

Artificial intelligence (AI) based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. For this reason, AI-based CDSS has an impact on the doctor–patient relationship, shaping their decisions with its suggestions. We may be on the verge of a paradigm shift, where the doctor–patient relationship is no longer a dual relationship, but a triad. This paper analyses the role of AI-based CDSS for shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor–patient communication is essential to promote more ethical medical practice. Both doctors’ and patients’ autonomy need to be considered in the light of AI.

(Featured) Mobile health technology and empowerment

Karola V. Kreitmair critically evaluates the notion of empowerment that has become pervasive in the discourse surrounding direct-to-consumer (DTC) mobile health technologies. The author argues that while these technologies claim to empower users by providing knowledge, enabling control, and fostering responsibility, the actual outcome is often not genuine empowerment but merely the perception of empowerment. This distinction has significant implications for individuals who might be seeking to effect behavior change and improve their health and well-being.

The paper meticulously breaks down the concept of empowerment into five key features: knowledgeability, control, responsibility, availability of good choices, and healthy desires. The author presents a thorough review of the evidence on the efficacy of m-health technologies and the privacy and security concerns surrounding their use, demonstrating that these technologies, while marketed as empowering tools, often fail to live up to their promises and, in some cases, even contribute to negative health outcomes or exacerbate existing issues such as disordered eating.

The core of the argument lies in the distinction between genuine empowerment and the mere perception of empowerment. The author posits that, rather than fostering true empowerment, DTC m-health technologies often create a psychological illusion of control and knowledgeability. This illusion can lead users to form unrealistic expectations and place undue burden on themselves to effect change when the necessary conditions for change are not met. This “empowerment paradox” ultimately calls into question the purported benefits of DTC m-health technologies and the societal narrative around personal responsibility and control over one’s health.

This paper’s findings resonate with broader philosophical discussions around individual autonomy, agency, and the role of technology in shaping our lives. The empowerment paradox highlights the complex interplay between the individual and the structural factors that shape health outcomes. It raises crucial questions about the ethical implications of profit-driven technologies and the responsibilities of technology developers, marketers, and users in navigating an increasingly technology-driven healthcare landscape. The insights from this paper contribute to ongoing debates about the nature of empowerment and the limits of individual autonomy in an age where our lives are increasingly mediated by technology.

Future research should focus on the prevalence and consequences of the empowerment paradox in the context of DTC m-health technologies. A deeper understanding of how individuals make decisions around their health in the presence of perceived empowerment could inform the development of more effective and ethically responsible technologies. Additionally, examining the social and cultural factors that influence the marketing and adoption of these technologies may provide insight into how the industry can foster genuine empowerment, rather than perpetuating an illusion of control. Ultimately, a more nuanced understanding of the relationship between DTC m-health technologies and empowerment will pave the way for a more responsible and equitable approach to healthcare in the digital age.

Abstract

Mobile Health (m-health) technologies, such as wearables, apps, and smartwatches, are increasingly viewed as tools for improving health and well-being. In particular, such technologies are conceptualized as means for laypersons to master their own health, by becoming “engaged” and “empowered” “managers” of their bodies and minds. One notion that is especially prevalent in the discussions around m-health technology is that of empowerment. In this paper, I analyze the notion of empowerment at play in the m-health arena, identifying five elements that are required for empowerment. These are (1) knowledge, (2) control, (3) responsibility, (4) the availability of good choices, and (5) healthy desires. I argue that at least sometimes, these features are not present in the use of these technologies. I then argue that instead of empowerment, it is plausible that m-health technology merely facilitates a feeling of empowerment. I suggest this may be problematic, as it risks placing the burden of health and behavior change solely on the shoulders of individuals who may not be in a position to effect such change.
