(Featured) Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making

Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making

Giorgia Lorenzini et al. examine the evolving nature of the doctor–patient relationship in the context of integrating artificial intelligence (AI) into healthcare. They focus on the shared decision-making (SDM) process between doctors and patients, a consensual partnership founded on communication and respect for voluntary choices. The authors argue that the introduction of AI can potentially enhance SDM, provided it is implemented with care and consideration. The paper addresses both how doctors communicate with AI and how that interaction is communicated to patients, evaluating the potential impact on SDM and proposing strategies to preserve both doctors’ and patients’ autonomy.

The authors explore the communication and autonomy challenges arising from AI integration into clinical practice. They posit that AI’s influence could unintentionally limit doctors’ autonomy by heavily guiding their decisions, which in turn raises questions about the balance of power in the decision-making process. The paper emphasizes the importance of doctors understanding AI’s recommendations and checking for errors while also being competent in working with AI systems. By examining the “black box problem” of AI’s opaqueness, the authors argue that explainability is crucial for fostering the AI-doctor relationship and preserving doctors’ autonomy.

The paper then investigates doctor-patient communication and autonomy within the context of AI integration. The authors argue that in order to promote patients’ autonomy and encourage their participation in SDM, doctors must disclose and discuss AI’s involvement in the clinical evaluation process. They also contend that AI should consider patients’ preferences and unique situations, thus ensuring that their values are respected and that they are able to participate actively in the SDM process.

In relating the research to broader philosophical issues, the authors’ examination of the AI-doctor-patient relationship aligns with questions surrounding the ethical and moral implications of AI in society. As AI increasingly permeates various aspects of our lives, its impact on human autonomy, agency, and moral responsibility becomes a focal point for philosophical inquiry. The paper contributes to this discourse by delving into the specific context of healthcare and the evolving dynamics of the doctor-patient relationship, providing a microcosm for understanding the broader implications of AI integration in human decision-making processes.

As the authors outline the potential benefits and challenges of incorporating AI into the SDM process, future research could investigate the practical implementation of AI in various clinical settings, evaluating the effectiveness of AI-doctor collaboration in promoting SDM. Further research might also address the training and education necessary for medical professionals to adapt to AI integration, ensuring a seamless transition that optimizes patient care. Additionally, exploring methods for incorporating patients’ values into AI algorithms could provide a path to more personalized and autonomy-respecting AI-assisted healthcare.

Abstract

Artificial intelligence (AI) based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. For this reason, AI-based CDSS has an impact on the doctor–patient relationship, shaping their decisions with its suggestions. We may be on the verge of a paradigm shift, where the doctor–patient relationship is no longer a dual relationship, but a triad. This paper analyses the role of AI-based CDSS for shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor–patient communication is essential to promote more ethical medical practice. Both doctors’ and patients’ autonomy need to be considered in the light of AI.

Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making

(Featured) Moral distance, AI, and the ethics of care

Moral distance, AI, and the ethics of care

Carolina Villegas-Galaviz and Kirsten Martin analyze the ethical implications of AI decision-making and suggest the ethics of care as a framework for mitigating its negative impacts. They argue that AI exacerbates moral distance through both proximity distance and bureaucratic distance, which lead to a lack of consideration for the needs of all stakeholders. The ethics of care, which emphasizes interdependent relationships, context and circumstances, vulnerability, and voice, can help contextualize the issue and bring us closer to those at a distance. The authors note that this framework can aid in the development of algorithmic decision-making tools that take the ethics of care into account.

The authors distinguish two sources of moral distance: proximity distance and bureaucratic distance. Proximity distance refers to the physical, cultural, and temporal separation between people, while bureaucratic distance arises from hierarchy, complex processes, and principle-based decision-making. Both, they contend, are built into how AI decision-making works and are exacerbated by it. The authors also suggest that the ethics of care can help mitigate the negative impacts of AI by emphasizing attention to interdependent relationships, contextual understanding, vulnerability, and voice.

The authors argue that the ethics of care is useful in analyzing algorithmic decision-making in AI. They suggest that the ethics of care offers a mechanism for designing and developing algorithmic decision-making tools that consider the needs of all stakeholders. However, they acknowledge that the ethics of care may not be a comprehensive solution to all moral problems or harms.

The paper raises broader philosophical issues about the role of ethics in technology. It highlights the need to consider the ethical implications of technology and the importance of developing ethical frameworks for AI decision-making. The authors suggest that the ethics of care offers a new conversation for the critical examination of AI and underscores the importance of hearing diverse voices and considering the needs of all stakeholders in technology development.

Future research should explore the legal, moral, epistemic, and practical aspects of moral distance and their specific implications. It should also examine the full range of feminist theory and its potential to mitigate the problem of representativeness in the technology workforce. The authors note that interdisciplinary and intercultural teams are essential in developing and deploying AI ethically. Finally, they suggest that a deeper understanding of the ethics of care could have implications for other areas of philosophical inquiry, such as environmental ethics and bioethics.

Abstract

This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision making.

Moral distance, AI, and the ethics of care

(Featured) AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

Richard Volkman and Katleen Gabriels critically examine current approaches to AI moral enhancement and propose a new model that more closely aligns with the reality of moral progress as a socio-technical system. The paper begins by discussing two main approaches to AI moral enhancement: the exhaustive approach, which would have AI supplant human moral reasoning by acting as an oracle that delivers the ‘right’ answers to our moral problems, and the auxiliary approach, which uses AI as a tool to assist humans in moral decision-making. The authors argue that the exhaustive approach rests on a mistaken framing, presuming moral knowledge we are still in the process of working out, and that even auxiliary proposals such as Lara and Deckers’ Socratic Interlocutor face challenges in supporting genuine moral engagement.

Instead, the authors propose an alternative model of AI moral enhancement that emphasizes the importance of moral diversity, ongoing dialogue, and the cultivation of practical wisdom. Their model envisions a modular system of AI “mentors”, each embodying a distinct moral perspective, engaging in conversation with one another and with the user. This system would more accurately represent the complex, evolving socio-technical process of moral progress and would be safer and more effective than the existing proposals for AI moral enhancement.
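
A purely illustrative sketch of how such a modular arrangement might be wired up appears below; it shows only the parallel-consultation aspect (several persona-conditioned queries whose answers are kept separate rather than merged), not the mentor-to-mentor dialogue the authors also envision. Every name in it (the persona prompts, ask_mentor, convene, and the stubbed generate function) is a hypothetical illustration, not part of the paper.

```python
# Hypothetical sketch of a modular "AI mentors" arrangement: distinct personas,
# consulted in parallel, with their answers kept separate so the user engages
# with a plurality of moral views rather than a single aggregated verdict.
from typing import Callable, Dict


def generate(prompt: str) -> str:
    """Stub standing in for any text-generation backend; swap in a real model call."""
    return f"[model response to: {prompt[:60]}...]"


# Each persona prompt is an invented example of a "wisdom tradition" module.
MENTORS: Dict[str, str] = {
    "Aristotelian": "You reason as a virtue ethicist in the Aristotelian tradition.",
    "Kantian": "You reason from duties and universalizable maxims.",
    "Confucian": "You reason from relational roles and ritual propriety.",
    "Utilitarian": "You reason by weighing consequences for overall well-being.",
}


def ask_mentor(persona: str, question: str,
               backend: Callable[[str], str] = generate) -> str:
    """Pose the user's moral question to a single mentor persona."""
    prompt = f"{MENTORS[persona]}\nQuestion: {question}\nAnswer with your reasoning:"
    return backend(prompt)


def convene(question: str) -> Dict[str, str]:
    """Collect every mentor's answer without collapsing them into one verdict."""
    return {name: ask_mentor(name, question) for name in MENTORS}


if __name__ == "__main__":
    for name, answer in convene("Should I report a close friend's minor fraud?").items():
        print(f"{name}: {answer}\n")
```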

The authors address potential objections to their proposal, arguing that the goal of moral enhancement should not be to transcend human limitations but to engage more deeply with our moral thinking. They emphasize that their approach to moral enhancement is not aimed at simplifying the process of moral improvement but at making us more skilled in the ways of practical wisdom. They conclude that their proposal represents a path to genuine moral enhancement that is more achievable and less fraught with risk than previous approaches.

This research contributes to broader philosophical discussions about the nature and scope of moral progress, the role of technology in moral enhancement, and the limits of human rationality. By engaging with these issues, the paper not only critiques existing proposals but also highlights the importance of considering the historical, social, and technological dimensions of moral inquiry. In doing so, it raises questions about the extent to which AI can and should be involved in human moral development, and how best to navigate the potential risks and benefits associated with such involvement.

As for future research, several avenues present themselves. First, it would be fruitful to explore the development of these AI “mentors” in more detail, focusing on the technical and ethical challenges associated with creating AI systems that embody diverse moral perspectives. Additionally, empirical studies could be conducted to assess the effectiveness of such AI mentors in promoting moral enhancement among users. Finally, interdisciplinary research could be undertaken to better understand the complex relationship between AI, moral enhancement, and broader social and cultural dynamics, in order to ensure that future AI moral enhancement efforts are both safe and effective.

Abstract

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that ‘AI mentors’ could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

(Featured) Machine Ethics: Do Androids Dream of Being Good People?

Machine Ethics: Do Androids Dream of Being Good People?

Gonzalo Génova, Valentín Moreno, and M. Rosario González explore the possibility and limitations of teaching ethical behavior to artificial intelligence. The paper delves into two main approaches to teaching ethics to machines: explicit ethical programming and learning by imitation. It highlights the difficulties faced by each approach and discusses the implications and potential issues surrounding the application of machine learning to ethical issues.

The authors begin by examining explicit ethical programming, such as Asimov’s Three Laws, and discuss the challenges involved in foreseeing the consequences of an act, as well as the necessity of having an explicit goal for ethical behavior. The second approach, learning by imitation, involves machines observing the behavior of experts or a majority in order to emulate them. The paper also discusses MIT’s Moral Machine experiment, which crowdsourced majority preferences about moral dilemmas faced by autonomous vehicles as a possible basis for machine decisions.

Despite the potential of machine learning techniques, the authors argue that both approaches fail to capture the essence of genuine ethical thinking in human beings. They emphasize that ethics is not about following a code of conduct or imitating the behavior of others, but rather about critical thinking and the formation of one’s own conscience. The paper concludes by questioning whether machines can truly learn ethics like humans do, suggesting that current methods of teaching ethics to machines are inadequate for capturing the complexity of human ethical life.

The research presented in the paper raises important philosophical questions about the nature of ethics and the role of machines in our ethical lives. It challenges the instrumentalist and reductionist approaches to ethics, which view ethical values as computable or reducible to a set of rules. By highlighting the limitations of these approaches, the paper invites us to reconsider the importance of value rationality and the recognition of the uniqueness and unrepeatable nature of human beings in ethical considerations.

In light of these findings, future research could explore alternative approaches to teaching ethics to machines that go beyond mere rule-following or imitation. This could involve the development of novel machine learning techniques that foster critical thinking and the ability to reason with values without reducing them to numbers. Additionally, interdisciplinary collaboration between philosophers, AI researchers, and ethicists could further enrich our understanding of the ethical dimensions of artificial intelligence and help to develop AI systems that not only do the right thing but also respect the complexity and richness of human ethical life.

Abstract

Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely “following a moral code”. In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.

Machine Ethics: Do Androids Dream of Being Good People?

(Featured) Trojan technology in the living room?

Trojan technology in the living room?

Franziska Sonnauer and Andreas Frewer explore the delicate balance between self-determination and external determination in the context of older adults using assistive technologies, particularly those incorporating artificial intelligence (AI). The authors introduce the concept of a “tipping point” to delineate the transition between self-determination and external determination, emphasizing the importance of considering the subjective experiences of older adults when employing such technologies. To this end, the authors adopt self-determination theory (SDT) as a theoretical framework to better understand the factors that may influence this tipping point.

The paper argues that the tipping point is intrapersonal and variable, suggesting that fulfilling the three basic psychological needs outlined in SDT—autonomy, competence, and relatedness—can potentially shift the tipping point towards self-determination. The authors propose various strategies to achieve this, such as providing alternatives for assistance in old age, promoting health technology literacy, and prioritizing social connectedness in technological development. They also emphasize the need to include older adults’ perspectives in decision-making processes, as understanding their subjective experiences is crucial to recognizing and respecting their autonomy.

Moreover, the authors call for future research to explore the tipping point and factors affecting its variability in different contexts, including assisted suicide, health deterioration, and the use of living wills and advance care planning. They contend that understanding the tipping point between self-determination and external determination may enable the development of targeted interventions that respect older adults’ autonomy and allow them to maintain self-determination for as long as possible.

In a broader philosophical context, this paper raises important ethical questions concerning the role of technology in shaping human agency, autonomy, and decision-making processes. It challenges us to reflect on the ethical implications of increasingly advanced assistive technologies and the potential consequences of their indiscriminate use. The issue of the tipping point resonates with broader debates on the nature of free will, the limits of self-determination, and the moral implications of human-machine interactions. As AI continues to become more integrated into our lives, the question of how to balance self-determination and external determination takes on greater urgency and complexity.

For future research, it would be valuable to explore the concept of the tipping point in different cultural contexts, as perceptions of autonomy and self-determination may vary across societies. Additionally, interdisciplinary approaches that combine insights from philosophy, psychology, and technology could shed light on the complex interplay between human values and AI-driven systems. Finally, empirical research investigating the experiences of older adults using assistive technologies would provide valuable data to help refine our understanding of the tipping point and inform the development of more ethically sound technologies that respect individual autonomy and promote well-being.

Abstract

Assistive technologies, including “smart” instruments and artificial intelligence (AI), are increasingly arriving in older adults’ living spaces. Various research has explored risks (“surveillance technology”) and potentials (“independent living”) to people’s self-determination from technology itself and from the increasing complexity of sociotechnical interactions. However, the point at which self-determination of the individual is overridden by external influences has not yet been sufficiently studied. This article aims to shed light on this point of transition and its implications.

Trojan technology in the living room?

(Featured) Might text-davinci-003 have inner speech?

Might text-davinci-003 have inner speech?

Stephen Francis Mann and Daniel Gregory explore the possibility of inner speech in artificial intelligence, specifically in OpenAI’s text-davinci-003 model as accessed through the ‘Playground’ interface. The researchers employ a Turing-like test: a conversation with the chatbot designed to assess its linguistic competence, creativity, and reasoning. Throughout the experiment, the chatbot is asked a series of questions designed to probe its capabilities and discern whether it possesses the capacity for inner speech.

The researchers find mixed evidence for inner speech in the chatbot. The chatbot claims to have inner speech, and its performance on sentence-completion tasks somewhat corroborates this assertion. However, its inconsistent performance on rhyme-detection tasks, particularly those involving non-words, raises doubts. The authors also note that the chatbot’s responses can be explained by its highly advanced autocomplete capabilities, which further complicates the evaluation.
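
To give a concrete sense of this kind of probe, here is a minimal sketch of how a rhyme-detection question involving a non-word could be put to text-davinci-003 through OpenAI’s legacy completions API (openai-python versions before 1.0). The prompt wording and the non-word ‘blick’ are illustrative assumptions rather than the authors’ actual test items, and the model itself has since been retired.

```python
# Illustrative sketch only: a rhyme-detection probe of the kind described above,
# sent to text-davinci-003 via the legacy OpenAI completions API (openai < 1.0).
# The prompt and the non-word "blick" are hypothetical, not taken from the paper.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes the user supplies a valid key

probe = (
    "Consider the made-up word 'blick'. Does it rhyme with 'trick'? "
    "Answer yes or no, then briefly explain how you decided."
)

response = openai.Completion.create(
    model="text-davinci-003",  # the model examined in the paper (now retired)
    prompt=probe,
    max_tokens=60,
    temperature=0,             # deterministic output makes repeated probing comparable
)

print(response["choices"][0]["text"].strip())
```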

Ultimately, the paper questions the efficacy of Turing-like tests as a means to determine mental states or mind-like properties in artificial agents. It suggests that linguistic competence alone may not be sufficient to ascertain whether AI possesses mind-like properties such as inner speech. The authors imply that those already convinced that conversational agents lack minds could take the chatbot’s linguistic competence as showing only that such competence is an insufficient test for mind-like properties.

This research taps into broader philosophical issues, such as the nature of consciousness and the criteria required to attribute mental states to artificial agents. As AI continues to advance, the demarcation between human and machine becomes increasingly blurred, forcing us to reevaluate our understanding of concepts like inner speech and consciousness. The question of whether AI can possess inner speech underscores the need for a more robust philosophical framework that can accommodate the unique characteristics and capabilities of artificial agents.

Future research in this domain could benefit from exploring alternative methods for evaluating inner speech in AI, going beyond Turing-like tests. For instance, researchers might investigate the AI’s decision-making processes or the mechanisms that underpin its creativity. Additionally, interdisciplinary collaboration with fields such as cognitive science and neuroscience could shed light on the cognitive processes at play in both humans and AI agents, thus providing a richer context for understanding the nature of inner speech in artificial agents. By expanding the scope of inquiry, we can better assess the extent to which AI agents possess mind-like properties and develop a more nuanced understanding of the implications of such findings for the future of AI and human cognition.

Abstract

In November 2022, OpenAI released ChatGPT, an incredibly sophisticated chatbot. Its capability is astonishing: as well as conversing with human interlocutors, it can answer questions about history, explain almost anything you might think to ask it, and write poetry. This level of achievement has provoked interest in questions about whether a chatbot might have something similar to human intelligence or even consciousness. Given that the function of a chatbot is to process linguistic input and produce linguistic output, we consider the question whether a sophisticated chatbot might have inner speech. That is: Might it talk to itself, internally? We explored this via a conversation with ‘Playground’, a chatbot which is very similar to ChatGPT but more flexible in certain respects. We asked it questions which, plausibly, can only be answered if one first produces some inner speech. Here, we present our findings and discuss their philosophical significance.

Might text-davinci-003 have inner speech?

(Featured) Deepfakes and the epistemic apocalypse

Deepfakes and the epistemic apocalypse

Joshua Habgood-Coote critically examines the common perception that deepfakes represent a unique and unprecedented threat to our epistemic landscape. They argue that such a viewpoint is misguided and that deepfakes should be understood as a social problem rather than a purely technological one. The author offers three main lines of criticism to counter the narrative of deepfakes as harbingers of an epistemic apocalypse. First, they propose that the knowledge we gain from recordings is a special case of knowledge from instruments, which relies on social practices around the design, operation, and maintenance of recording technology. Second, they present historical examples of manipulated recordings to demonstrate that deepfakes are not a novel phenomenon, and that social practices have been employed in the past to address similar issues. Third, they contend that technochauvinism and the post-truth narrative have obscured potential social measures to address deepfakes.

The author argues that deepfakes are embedded in a techno-social context and should be treated as part of the broader social practices involved in the production of knowledge and ignorance. They suggest that examining historical episodes of deceptive recordings can provide valuable insights into how social norms and community policing could be utilized to address the challenges posed by deepfakes. Moreover, the author emphasizes that the most serious harms associated with deepfake videos are likely to be consequences of established ignorance-producing social practices affecting minority and marginalized groups.

By reframing deepfakes as a social problem, the paper challenges the notion that the technology itself is inherently dangerous and urges us to consider how our social practices contribute to the production and dissemination of manipulated recordings. This approach highlights the interdependence between technology and society, and offers a more nuanced understanding of the ethical, political, and epistemic implications of deepfakes.

In the broader philosophical context, this paper raises important questions about the nature of knowledge, the role of trust in our epistemic practices, and the relationship between technology and the social dynamics of knowledge production. It also contributes to ongoing debates in social epistemology, emphasizing the collective nature of knowledge and the responsibility that society bears in shaping our epistemic landscape.

Future research could explore other historical episodes of manipulated recordings and the social responses that emerged to address them, further informing our understanding of how to manage the challenges posed by deepfakes. Additionally, scholars could investigate the role of institutional actors, such as governments and media organizations, in shaping and reinforcing norms and practices around the production and dissemination of recordings. This line of inquiry could lead to a more comprehensive understanding of the techno-social context in which deepfakes operate and inform policy recommendations for mitigating their potential harms.

Abstract

It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented epistemic apocalypse. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue for three claims: (1) that once we recognise the role of social norms in the epistemology of recordings, deepfakes are much less concerning, (2) that the history of photographic manipulation reveals some important precedents, correcting claims about the novelty of deepfakes, and (3) that proposed solutions to deepfakes have been overly focused on technological interventions. My overall goal is not so much to argue that deepfakes are not a problem, but to argue that behind concerns around deepfakes lie a more general class of social problems about the organisation of our epistemic practices.

Deepfakes and the epistemic apocalypse

(Featured) Introducing a four-fold way to conceptualize artificial agency

Introducing a four-fold way to conceptualize artificial agency

Maud van Lier presents a methodological framework for understanding artificial agency in the context of basic research, particularly in AI-driven science. The Four-Fold Framework, as the author coins it, is a pluralistic and pragmatic approach that incorporates Gricean modeling, analogical modeling, theoretical modeling, and conceptual modeling. The motivation behind this framework lies in the increasingly active role that AI systems are taking on in scientific research, warranting the development of a robust conceptual foundation for these ‘agents.’

The author critically assesses Sarkia’s neo-Gricean framework, which offers three modeling strategies for conceptualizing artificial agency. While acknowledging its merits, the author identifies a crucial shortcoming in its lack of a semantic dimension, which is necessary to bridge the gap between theoretical models and practical implementation in basic research. To address this issue, the author proposes the addition of conceptual modeling as a fourth strategy, ultimately forming the Four-Fold Framework. This new framework aims to provide a comprehensive account of artificial agency in basic research by accommodating different interpretations and addressing the semantic dimension of artificial agency.

By implementing the Four-Fold Framework, the author posits that researchers will be able to develop a more inclusive and pragmatically plausible understanding of artificial agency in the context of AI-driven science. The framework sets the stage for a robust conceptual foundation that can accommodate the complexities and nuances of artificial agency as AI continues to evolve and expand its role in scientific research.

This paper’s exploration of artificial agency also contributes to the broader philosophical discourse on agency and autonomy in the context of artificial intelligence. As AI systems become more advanced, the distinction between human and artificial agents blurs, raising questions about the nature of agency, responsibility, and ethical considerations. The Four-Fold Framework provides a methodological tool to examine these complex issues, grounding the analysis of artificial agency within a rigorous and comprehensive structure.

Future research can expand upon the Four-Fold Framework by investigating its applicability to other emerging areas in AI, such as AI ethics, human-AI collaboration, and autonomous decision-making. Additionally, researchers can explore how the Four-Fold Framework might inform the development of AI-driven science policy and governance, ensuring that ethical, legal, and societal implications are considered in the integration of artificial agency in scientific research. By refining and extending the Four-Fold Framework, the academic community can better anticipate and navigate the challenges and opportunities that artificial agency presents in the rapidly evolving landscape of AI-driven science.

Abstract

Recent developments in AI-research suggest that an AI-driven science might not be that far off. The research of Melnikov et al. (2018) and that of Evans et al. (2018) show that automated systems can already have a distinctive role in the design of experiments and in directing future research. Common practice in many of the papers devoted to the automation of basic research is to refer to these automated systems as ‘agents’. What is this attribution of agency based on and to what extent is this an important notion in the broader context of an AI-driven science? In an attempt to answer these questions, this paper proposes a new methodological framework, introduced as the Four-Fold Framework, that can be used to conceptualize artificial agency in basic research. It consists of four modeling strategies, three of which were already identified and used by Sarkia (2021) to conceptualize ‘intentional agency’. The novelty of the framework is the inclusion of a fourth strategy, introduced as conceptual modeling, that adds a semantic dimension to the overall conceptualization. The strategy connects to the other strategies by modeling both the actual use of ‘artificial agency’ in basic research as well as what is meant by it in each of the other three strategies. This enables researchers to bridge the gap between theory and practice by comparing the meaning of artificial agency in both an academic as well as in a practical context.

Introducing a four-fold way to conceptualize artificial agency

(Featured) The five tests: designing and evaluating AI according to indigenous Māori principles

The five tests: designing and evaluating AI according to indigenous Māori principles

Luke Munn provides a critical analysis of the current paradigms of artificial intelligence (AI) development and offers a framework for a decolonial AI. The author argues that existing AI paradigms reproduce and reinforce coloniality and its attendant inequalities. To overcome this, he proposes a framework based on Indigenous concepts from Aotearoa (New Zealand), which offers a distinct set of principles and priorities that challenge Western technocratic norms. The framework is centered on five tests that prioritize human dignity, communal integrity, and ecological sustainability. The author suggests that the application of these tests can guide the design and development of AI products in a way that is more inclusive, thoughtful, and attentive to life in its various forms.

The author identifies two distinct pathways for applying his framework. The first pathway is designing, which involves applying the principles and priorities of the Five Tests to the development of AI products that are currently in progress. This involves asking questions about how these products can respect the sacred, preserve or enhance life force, and reconcile negative impacts in acceptable ways. The author suggests that iterative versions of software can be developed by engaging genuinely with these questions and resolving them through code, architectures, interfaces, and affordances. The second pathway is decolonizing, which involves a deeper and more sustained confrontation with current AI regimes. This pathway involves challenging generic, universalizing frames, stressing the connection and interdependence of human and ecological well-being, and carefully considering potential impacts and developing ways to mitigate or redress them to satisfy involved parties.

The author’s framework challenges current AI paradigms and practices by raising questions about what data-driven technology should be doing, how it can be designed in ways that are more inclusive, communal, and sustainable, and what values and norms should be used to judge the success of a particular technology. He suggests that these questions are epistemological, cultural and historical, and social in nature, and he argues that understanding and undoing systems of inequality that have been formalized and fossilized over time is a massive undertaking demanding a long-term project that prioritizes social justice.

In broader philosophical terms, this paper raises questions about the relationship between technology and power, the role of knowledge systems in shaping our understanding of the world, and the importance of Indigenous perspectives in challenging dominant paradigms. The author’s framework challenges the Western-centric assumptions that underpin current AI paradigms and highlights the importance of recognizing and respecting diverse perspectives and ways of knowing.

Future research could explore the practical implications of the Five Tests framework and how it can be applied in different contexts. It could also examine the ways in which Indigenous perspectives can challenge dominant paradigms in other fields, such as philosophy, politics, and economics. Additionally, research could explore the potential for cross-cultural collaboration in the development of AI and other technologies, and how this collaboration can facilitate the recognition and respect of diverse perspectives and knowledge systems. Finally, research could explore the broader implications of the author’s framework for the relationship between technology and power and the potential for decolonial approaches to reshape our understanding of the role of technology in society.

Abstract

As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternate set of values, paradigms, and priorities are urgently needed. How might we design and evaluate AI from an indigenous perspective? This article draws upon the five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. This paper takes up these tests, considers how each test might be applied to data-driven systems, and provides a number of concrete examples. This intervention challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an indigenous knowledge system.

The five tests: designing and evaluating AI according to indigenous Māori principles

(Featured) The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare

The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare

James Johnson explores the ethical and psychological implications of integrating AI into warfare. The author argues that the use of autonomous weapons in warfare may create moral vacuums that eliminate meaningful ethical and moral deliberation in the quest for riskless and rational war. Moreover, the author argues that the human-machine integration process is part of a broader evolutionary dovetailing of humanity and technology. The logical end of this trajectory is an AI commander, which would effectively outsource ethical decision-making to machines that are ill-equipped to fill this ethical and moral void.

The author also explores the limitations of AI in distinguishing between legitimate and illegitimate targets in asymmetric conflicts, such as insurgencies and civil wars. He stresses the importance of recognizing the personhood of the enemy in warfare and argues that until AI can achieve this moral standing, it will be unable to meet the requirements of jus in bello. Additionally, Johnson argues that human judgment and prediction, while imperfect, are still necessary in warfare because of the subtle cues that humans can recognize but machines cannot.

The paper highlights three key psychological insights regarding human-machine interactions and political-ethical dilemmas in future AI-enabled warfare. First, Johnson argues that human-machine integration is a socio-technical psychological process that is part of a broader evolutionary dovetailing of humanity and technology. Second, he argues that biases associated with human-machine interactions can compound the “illusion of control” problem. Third, he suggests that coding human ethics into AI algorithms is technically, theoretically, ontologically, and psychologically problematic and ethically and morally questionable.

This paper raises important philosophical questions about the relationship between technology and ethics. It highlights the risks associated with outsourcing ethical decision-making to machines and emphasizes the importance of recognizing the personhood of the enemy in warfare. The paper also underscores the limitations of AI in distinguishing between legitimate and illegitimate targets and the importance of human judgment in recognizing subtle cues that machines cannot. Ultimately, this paper challenges us to consider the role of technology in shaping our ethical and moral decision-making processes.

Future research in this area could explore the psychological and ethical implications of human-machine integration in other domains, such as healthcare or criminal justice. Additionally, research could focus on developing AI systems that are capable of understanding the complexities of human ethics and morality. This research could also explore ways to incorporate ethical decision-making into AI algorithms without sacrificing human agency and accountability. Finally, research could explore the broader philosophical implications of the use of AI in warfare and consider the ethical and moral implications of a world in which machines are increasingly integrated into our lives.

Abstract

Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights of human-machine interactions to elucidate how AI shapes our capacity to think about future warfare’s political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become de facto strategic actors in warfare – the “AI commander problem.”

The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare