Carolina Villegas-Galaviz and Kirsten Martin analyze the ethical implications of AI decision-making and propose the ethics of care as a framework for mitigating its negative impacts. They argue that AI exacerbates moral distance by creating both proximity distance and bureaucratic distance, which lead to a lack of consideration for the needs of all stakeholders. The ethics of care, which emphasizes interdependent relationships, context and circumstances, vulnerability, and voice, can help contextualize the issue and bring us closer to those at a distance. The authors note that this framework can aid in the development of algorithmic decision-making tools that take the ethics of care into account.
The authors argue that moral distance arises from two sources. Proximity distance refers to the physical, cultural, and temporal separation between people, while bureaucratic distance derives from hierarchy, organizational complexity, and principle-based decision-making. Both types of moral distance are inherent in how AI works, and the authors contend that AI exacerbates them. By emphasizing interdependent relationships, contextual understanding, vulnerability, and voice, the ethics of care can help mitigate these effects.
The authors argue that the ethics of care is well suited to analyzing algorithmic decision-making in AI, offering a mechanism for designing and developing decision-making tools that consider the needs of all stakeholders. However, they acknowledge that the ethics of care is not a comprehensive solution to every moral problem or harm.
The paper raises broader philosophical issues about the role of ethics in technology. It highlights the need to consider the ethical implications of technology and the importance of developing ethical frameworks for AI decision-making. The authors suggest that the ethics of care offers a new conversation for the critical examination of AI and underscores the importance of hearing diverse voices and considering the needs of all stakeholders in technology development.
Future research should explore the legal, moral, epistemic, and practical aspects of moral distance and their specific implications. It should also examine the full range of feminist theory and its potential to mitigate the problem of representativeness in the technology workforce. The authors note that interdisciplinary and intercultural teams are essential to developing and deploying AI ethically. Finally, they suggest that a deeper understanding of the ethics of care could have implications for other areas of philosophical inquiry, such as environmental ethics and bioethics.
Abstract
This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision making.
Moral distance, AI, and the ethics of care

