(Featured) Moral disagreement and artificial intelligence

Moral disagreement and artificial intelligence

Pamela Robinson offers a careful examination of the methodological problems that moral disagreement poses for the development and decision-making of artificial intelligence (AI). The central case is an ethical AI system, the "AI Decider," that must make decisions even when its decision subjects morally disagree. The author argues that this problem can be approached through moral, compromise, or epistemic solutions.

The author systematically lays out the three categories of solution. Moral solutions have the AI apply a chosen moral theory, such as preference utilitarianism, largely setting the details of the disagreement aside. Compromise solutions instead handle disagreement by aggregating the parties' moral views into a collective decision; here the author introduces social choice theory, and Arrow's impossibility theorem, as tools (and constraints) for AI decision-making. Epistemic solutions, arguably the most demanding of the three, require the AI Decider to treat the disagreement itself as evidence of moral truth and adjust its decisions accordingly; the author discusses several approaches in this category, including reflective equilibrium, moral uncertainty, and moral hedging.
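The aggregation step in a compromise solution can be made concrete with a toy sketch. The following example is not from the paper: it uses the Borda count, a classic social-choice rule of the kind the article's discussion points to, applied to invented moral rankings from three hypothetical decision subjects. Arrow's impossibility theorem shows that no such rule over three or more options can satisfy all of a small set of fairness conditions at once, which is part of why compromise solutions carry their own risks.

```python
# Hedged sketch: aggregating conflicting moral rankings with the Borda count.
# All names and rankings are hypothetical, purely for illustration.

def borda_count(rankings):
    """Each ranking lists options from most to least preferred.
    The option ranked i-th among n options scores n - 1 - i points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - i)
    return scores

# Three decision subjects disagree about what the AI Decider should do.
rankings = [
    ["divert", "warn", "proceed"],   # subject 1's moral ranking
    ["warn", "divert", "proceed"],   # subject 2's moral ranking
    ["proceed", "warn", "divert"],   # subject 3's moral ranking
]

scores = borda_count(rankings)            # {"divert": 3, "warn": 4, "proceed": 2}
decision = max(scores, key=scores.get)    # "warn" wins the aggregation
```

Note that the rule takes the disagreement as *input* without treating it as evidence about moral truth, which is exactly what distinguishes compromise solutions from epistemic ones.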

However, the author asserts, none of these solutions provides a perfect answer; each is fraught with its own complexities and risks. Here the concept of "moral risk," the chance of getting things morally wrong, is introduced. The author argues that the choice between a moral, compromise, or epistemic solution should be framed in terms of the moral risk each involves, and that the methodological problem is best addressed by minimizing that risk, whichever kind of solution is adopted.
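One way to see how moral risk might be weighed is a toy expected-value calculation in the style of moral hedging, one of the epistemic approaches the summary mentions. The credences, theory names, and value scores below are invented for illustration, not drawn from the paper: each rival moral theory gets a credence, each action gets a moral score under each theory, and the decider picks the action with the highest credence-weighted score, limiting its exposure to any one theory being wrong.

```python
# Hedged sketch of moral hedging under moral uncertainty.
# Credences and value scores are hypothetical, purely for illustration.

credences = {"utilitarian": 0.5, "deontological": 0.3, "contractualist": 0.2}

# Moral value each theory assigns to each candidate action (made-up numbers).
values = {
    "divert":  {"utilitarian": 10, "deontological": -5, "contractualist": 4},
    "proceed": {"utilitarian": -2, "deontological": 6,  "contractualist": 3},
}

def expected_moral_value(action):
    """Credence-weighted moral value of an action across rival theories."""
    return sum(credences[t] * values[action][t] for t in credences)

best = max(values, key=expected_moral_value)   # here: "divert" (4.3 vs. 1.4)
```

The design choice being illustrated is the epistemic stance itself: the spread of views enters the calculation as evidence (via the credences), rather than being aggregated as bare preferences.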

Delving into the broader philosophical themes, this paper reignites the enduring debate on the role and impact of moral relativism and objectivism within the sphere of artificial intelligence. The issues presented tie into the grand narrative of moral philosophy, particularly the discourse around meta-ethics and normative ethics, where differing moral perspectives invariably lead to dilemmas. The AI Decider, in this sense, mirrors the human condition where decision-making often requires navigating the labyrinth of moral disagreement. The author’s emphasis on moral risk provides a novel framework, bridging the gap between theoretical moral philosophy and the practical demands of AI ethics.

This article suggests several intriguing pathways for future research. First, an in-depth exploration of the concept of 'moral risk' could illuminate new strategies for handling moral disagreement in AI decision-making. Comparative studies, analyzing the outcomes and repercussions of decisions made by an AI system utilizing moral, compromise, or epistemic solutions, could provide empirical evidence for the efficacy of these approaches. Lastly, given that moral views evolve, the impact of changes in societal moral views over time on an AI Decider's decision-making warrants investigation, including how the system could adapt to shifting consensus or disagreement among its decision subjects. Such research could significantly enhance our understanding of ethical decision-making in AI systems, bringing us closer to more ethically aligned, responsive, and responsible artificial intelligence.

Abstract

Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. Compromise solutions apply a method of finding a compromise and taking information about the disagreement as input. Epistemic solutions apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the other. I argue that the choice is best framed in terms of moral risk.


(Featured) Moral distance, AI, and the ethics of care

Moral distance, AI, and the ethics of care

Carolina Villegas-Galaviz and Kirsten Martin analyze the ethical implications of AI decision-making and propose the ethics of care as a framework for mitigating its negative impacts. They argue that AI exacerbates moral distance, in the form of both proximity distance and bureaucratic distance, leading decision-makers to overlook the needs of those affected. The ethics of care, which emphasizes interdependent relationships, context and circumstances, vulnerability, and voice, can help re-contextualize decisions and bring us closer to those at a distance. The authors argue that this framework can guide the development of algorithmic decision-making tools.

The authors argue that moral distance arises from proximity distance and bureaucratic distance. Proximity distance refers to physical, cultural, and temporal separation between people, while bureaucratic distance derives from hierarchy, complex processes, and principle-based decision-making. Both types of moral distance, they contend, are inherent in how AI works and are exacerbated by it. The ethics of care, on their account, can mitigate these effects by emphasizing interdependent relationships, contextual understanding, vulnerability, and voice.

The authors argue that the ethics of care is useful in analyzing algorithmic decision-making in AI. They suggest that the ethics of care offers a mechanism for designing and developing algorithmic decision-making tools that consider the needs of all stakeholders. However, they acknowledge that the ethics of care may not be a comprehensive solution to all moral problems or harms.

The paper raises broader philosophical issues about the role of ethics in technology. It highlights the need to consider the ethical implications of technology and the importance of developing ethical frameworks for AI decision-making. The authors suggest that the ethics of care offers a new conversation for the critical examination of AI and underscores the importance of hearing diverse voices and considering the needs of all stakeholders in technology development.

Future research should explore the legal, moral, epistemic, and practical aspects of moral distance and their specific implications. It should also examine the full range of feminist theory and its potential to mitigate the problem of representativeness in the technology workforce. The authors note that interdisciplinary and intercultural teams are essential in developing and deploying AI ethically. Finally, they suggest that a deeper understanding of the ethics of care could have implications for other areas of philosophical inquiry, such as environmental ethics and bioethics.

Abstract

This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision making.
