Pamela Robinson offers a thorough examination of the methodological problem that moral disagreement poses for the development of artificial intelligence (AI) and for AI decision-making. The central focus is the design of ethical AI systems, in particular an "AI Decider" that must make decisions in cases where the people those decisions affect (its decision subjects) disagree morally. The author argues that the problem could be handled with moral, compromise, or epistemic solutions.
The author systematically lays out the possible solutions in three categories. Moral solutions involve choosing a moral theory, such as preference utilitarianism, and aligning the AI with it, thereby largely setting aside the details of the disagreement. Compromise solutions instead handle disagreement by aggregating the subjects' moral views into a collective decision; here the discussion draws on social choice theory, with Arrow's impossibility theorem marking the limits of consistent aggregation. Lastly, epistemic solutions, arguably the most demanding of the three, require the AI Decider to treat the moral disagreement itself as evidence and adjust its decision accordingly; approaches in this category include reflective equilibrium, reasoning under moral uncertainty, and moral hedging.
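To make the compromise route more concrete, here is a minimal sketch, not drawn from Robinson's paper, of aggregating decision subjects' moral rankings; the options, rankings, and Borda scoring rule are all illustrative assumptions. With this particular profile of views the Borda count ties and pairwise majority voting cycles, the kind of aggregation trouble that Arrow's impossibility theorem generalizes.

```python
from collections import defaultdict
from itertools import combinations

# Each decision subject ranks the options from most to least morally preferred.
# Subjects, options, and rankings are invented for illustration.
rankings = [
    ["A", "B", "C"],   # subject 1
    ["B", "C", "A"],   # subject 2
    ["C", "A", "B"],   # subject 3
]

def borda_winner(rankings):
    """Aggregate the rankings with a Borda count (n-1 points for first place, and so on)."""
    scores = defaultdict(int)
    n = len(rankings[0])
    for ranking in rankings:
        for place, option in enumerate(ranking):
            scores[option] += (n - 1) - place
    return max(scores, key=scores.get), dict(scores)

def pairwise_majorities(rankings):
    """Head-to-head majority preferences; with this profile they form a cycle."""
    options = rankings[0]
    verdicts = {}
    for x, y in combinations(options, 2):
        x_wins = sum(r.index(x) < r.index(y) for r in rankings)
        verdicts[f"{x} vs {y}"] = x if x_wins > len(rankings) / 2 else y
    return verdicts

print(borda_winner(rankings))         # every option scores 3: the Borda count ties
print(pairwise_majorities(rankings))  # A beats B, B beats C, C beats A: a Condorcet cycle
```

The point is not that aggregation is hopeless, but that any compromise solution must commit to some aggregation rule, and each rule has known failure modes.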
However, the author argues, none of these solutions provides a perfect answer; each carries its own complexities and risks. Here the concept of "moral risk," the chance of getting things morally wrong, is introduced. The author holds that the choice between an epistemic and a compromise solution should depend on the moral risk involved, and that the methodological problem is best addressed by minimizing this risk, whichever kind of solution, moral, compromise, or epistemic, is ultimately employed.
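To illustrate what minimizing moral risk could look like in practice, the sketch below, which is purely hypothetical and not taken from the paper, scores each option's expected wrongness under a set of credences over rival moral theories and selects the least risky option. This is one way of cashing out "moral hedging"; the theory names, credences, and wrongness values are invented, and the calculation presupposes that wrongness is comparable across theories on a common scale, itself a contested assumption in the moral uncertainty literature.

```python
# Hypothetical credences an AI Decider might assign to rival moral theories.
credences = {"utilitarian": 0.5, "deontological": 0.3, "contractualist": 0.2}

# How wrong each option would be if each theory were correct
# (0 = permissible, 1 = maximally wrong), assuming a common scale.
wrongness = {
    "option_x": {"utilitarian": 0.1, "deontological": 0.9, "contractualist": 0.2},
    "option_y": {"utilitarian": 0.4, "deontological": 0.1, "contractualist": 0.3},
}

def expected_moral_risk(option):
    """Credence-weighted degree of acting wrongly if this option is chosen."""
    return sum(credences[t] * wrongness[option][t] for t in credences)

for option in wrongness:
    print(option, round(expected_moral_risk(option), 3))   # option_x: 0.36, option_y: 0.29

print("least morally risky:", min(wrongness, key=expected_moral_risk))  # option_y
```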
Turning to broader philosophical themes, the paper reignites the enduring debate over moral relativism and objectivism within the sphere of artificial intelligence. The issues raised connect to long-standing discussions in meta-ethics and normative ethics, where differing moral perspectives routinely generate dilemmas. The AI Decider, in this sense, mirrors the human condition: decision-making often requires navigating deep moral disagreement. The author's emphasis on moral risk provides a novel framework that bridges theoretical moral philosophy and the practical demands of AI ethics.
The article suggests several intriguing pathways for future research. First, a deeper exploration of the concept of "moral risk" could illuminate new strategies for handling moral disagreement in AI decision-making. Second, comparative studies analyzing the outcomes of decisions made by AI systems using moral, compromise, or epistemic solutions could provide empirical evidence about the efficacy of each approach. Lastly, given that societal moral views change over time, their impact on an AI Decider's decision-making warrants investigation, including how the system could adapt as moral consensus or disagreement among its decision subjects evolves. Such research would deepen our understanding of ethical decision-making in AI systems and bring us closer to artificial intelligence that is ethically aligned, responsive, and responsible.
Abstract
Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. Compromise solutions apply a method of finding a compromise and take information about the disagreement as input. Epistemic solutions apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the others. I argue that the choice is best framed in terms of moral risk.
Moral disagreement and artificial intelligence

