(Featured) AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

Richard Volkman and Katleen Gabriels critically examine current approaches to AI moral enhancement and propose a new model that more closely aligns with the reality of moral progress as a socio-technical system. The paper begins by distinguishing two main approaches to AI moral enhancement: the exhaustive approach, which treats AI as a self-contained oracle able to reliably deliver the right answers to all our moral problems, and the auxiliary approach, which uses AI as a tool to assist humans in moral decision-making. The authors argue that the exhaustive approach is overly ambitious and unattainable, while the auxiliary approach, as exemplified by Lara and Deckers’ Socratic Interlocutor, lacks the depth and nuance necessary for genuine moral engagement.

Instead, the authors propose an alternative model of AI moral enhancement that emphasizes the importance of moral diversity, ongoing dialogue, and the cultivation of practical wisdom. Their model envisions a modular system of AI “mentors”, each embodying a distinct moral perspective, engaging in conversation with one another and with the user. This system would more accurately represent the complex, evolving socio-technical process of moral progress and would be safer and more effective than the existing proposals for AI moral enhancement.

The authors address potential objections to their proposal, arguing that the goal of moral enhancement should not be to transcend human limitations but to engage more deeply with our moral thinking. They emphasize that their approach to moral enhancement is not aimed at simplifying the process of moral improvement but at making us more skilled in the ways of practical wisdom. They conclude that their proposal represents a path to genuine moral enhancement that is more achievable and less fraught with risk than previous approaches.

This research contributes to broader philosophical discussions about the nature and scope of moral progress, the role of technology in moral enhancement, and the limits of human rationality. By engaging with these issues, the paper not only critiques existing proposals but also highlights the importance of considering the historical, social, and technological dimensions of moral inquiry. In doing so, it raises questions about the extent to which AI can and should be involved in human moral development, and how best to navigate the potential risks and benefits associated with such involvement.

As for future research, several avenues present themselves. First, it would be fruitful to explore the development of these AI “mentors” in more detail, focusing on the technical and ethical challenges associated with creating AI systems that embody diverse moral perspectives. Additionally, empirical studies could be conducted to assess the effectiveness of such AI mentors in promoting moral enhancement among users. Finally, interdisciplinary research could be undertaken to better understand the complex relationship between AI, moral enhancement, and broader social and cultural dynamics, in order to ensure that future AI moral enhancement efforts are both safe and effective.

Abstract

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that ‘AI mentors’ could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

(Featured) Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Cian Brennan argues for a version of transhumanism that pursues radical enhancement through the incremental application of moderate enhancements to future human beings, rather than through immediate, extreme interventions. The paper begins by presenting the critique of transhumanism put forward by Nicholas Agar, which centers on the potential negative consequences of radically enhancing effects. The author argues that Agar’s critique targets the effects of radical enhancement rather than the concept of radical enhancement itself. Because weak transhumanism applies enhancements gradually across future generations, the author argues, it can overcome Agar’s objections.

The author then discusses objections to weak transhumanism, including the potential for an eventual radical enhancement to emerge and the difficulty of identifying when an enhancement becomes radical. The author responds to these objections by proposing a checklist of characteristic features that can be used to identify radical enhancements, such as the creation of new or extended abilities, changes in moral status, and significant changes in vulnerability or relatability between the enhanced and unenhanced.

Overall, the paper provides a nuanced and detailed defense of weak transhumanism, offering a way to pursue radical enhancements while avoiding some of the potential negative consequences of more radical approaches. The paper engages with a range of objections and provides a thoughtful and well-supported response to each, drawing on both philosophical and scientific sources.

The paper has implications for broader philosophical issues surrounding the ethics of human enhancement, the relationship between technology and society, and the nature of human identity and personhood. By focusing on the incremental application of enhancements, the paper raises questions about the degree to which human beings can be transformed by technology without losing their essential human nature. It also highlights the role of societal values and norms in shaping the development and application of enhancement technologies.

Future research in this area could build on the author’s checklist of characteristic features of radical enhancements, exploring the extent to which these features are necessary and sufficient conditions for defining radical enhancements. Further research could also examine the potential consequences of weak transhumanism, including the ways in which incremental enhancements may interact with each other over time and the potential for unintended consequences. Finally, future research could explore the social and cultural dimensions of transhumanism, including the ways in which transhumanist values and practices may be shaped by factors such as gender, race, and socioeconomic status.

Abstract

Transhumanism aims to bring about radical human enhancement. In ‘Truly Human Enhancement’ Agar (2014) provides a strong argument against producing radically enhancing effects in agents. This leaves the transhumanist in a quandary—how to achieve radical enhancement whilst avoiding the problem of radically enhancing effects? This paper aims to show that transhumanism can overcome the worries of radically enhancing effects by instead pursuing radical human enhancement via incremental moderate human enhancements (Weak Transhumanism). In this sense, weak transhumanism is much like traditional transhumanism in its aims, but starkly different in its execution. This version of transhumanism is weaker given the limitations brought about by having to avoid radically enhancing effects. I consider numerous objections to weak transhumanism and conclude that the account survives each one. This paper’s proposal of ‘weak transhumanism’ has the upshot of providing a way out of the ‘problem of radically enhancing effects’ for the transhumanist, but this comes at a cost—the restrictive process involved in applying multiple moderate enhancements in order to achieve radical enhancement will most likely be dissatisfying for the transhumanist, however, it is, I contend, the best option available.