(Featured) Beyond the hype: ‘acceptable futures’ for AI and robotic technologies in healthcare

Giulia De Togni et al. delve into the complex dynamics of technoscientific expectations surrounding the future of artificial intelligence (AI) and robotic technologies in healthcare. By focusing on surgery, pathology, and social care, they examine the strategies employed by scientists, clinicians, and other stakeholders to navigate and construct visions of an AI-driven future in healthcare. The authors illustrate the challenges faced by these stakeholders, who must balance promissory visions with more realistic expectations, while acknowledging the performative power of high expectations in attracting investment and resources.

The participants in the study engage in a balancing act between high and low expectations, drawing boundaries to maintain credibility for their research and practice while distancing themselves from the hype. They recognize that over-optimistic visions may create false hope and unrealistic expectations of performance, potentially harming AI and robotics research through deflated investment if the outcomes fail to match expectations. The authors demonstrate how the stakeholders negotiate the tension between sustaining and nurturing the hype while calling for the recalibration of expectations within an ethically and socially responsible framework.

Central to the participants’ visions of acceptable futures is the changing nature of human-machine relationships. Through balancing different social, ethical, and technoscientific demands, the participants articulate futures that are perceived as ethically and socially acceptable, as well as realistically achievable. They frame their articulations of the present and future potential and limitations of AI and robotics technologies within an ethics of expectations that positions normative considerations as central to how these expectations are expressed.

This research article contributes to broader philosophical debates concerning the role of expectations and imaginaries in shaping our understanding of technoscientific innovation, human-machine relationships, and the ethics of care. By exploring the dynamic interplay between these factors, the authors shed light on how the future of AI and robotics in healthcare is being constructed and negotiated. This study resonates with key themes in the philosophy of futures studies, including the co-constitution of technological visions and sociotechnical imaginaries, the performativity of expectations, and the ethical dimensions of forecasting and envisioning the future.

To further enrich our understanding of these complex dynamics, future research could explore the perspectives of additional stakeholders, such as patients and policymakers, to gain a more comprehensive picture of the expectations surrounding AI and robotics in healthcare. Additionally, cross-cultural and comparative studies could reveal how different cultural contexts and healthcare systems influence expectations and acceptance of these technologies. Ultimately, by continuing to examine the societal implications of AI and robotic technologies, including their impact on patient autonomy, privacy, and the human aspects of care, scholars can contribute to a more nuanced and ethically responsible vision of the future of healthcare.

Abstract

AI and robotic technologies attract much hype, including utopian and dystopian future visions of technologically driven provision in the health and care sectors. Based on 30 interviews with scientists, clinicians and other stakeholders in the UK, Europe, USA, Australia, and New Zealand, this paper interrogates how those engaged in developing and using AI and robotic applications in health and care characterize their future promise, potential and challenges. We explore the ways in which these professionals articulate and navigate a range of high and low expectations, and promissory and cautionary future visions, around AI and robotic technologies. We argue that, through these articulations and navigations, they construct their own perceptions of socially and ethically ‘acceptable futures’ framed by an ‘ethics of expectations.’ This imbues the envisioned futures with a normative character, articulated in relation to the present context. We build on existing work in the sociology of expectations, aiming to contribute towards better understanding of how technoscientific expectations are navigated and managed by professionals. This is particularly timely since the COVID-19 pandemic gave further momentum to these technologies.

(Featured) Modifying the Environment or Human Nature? What is the Right Choice for Space Travel and Mars Colonisation?

Maurizio Balistreri and Steven Umbrello engage in a critical exploration of the philosophical, ethical, and practical implications of human space travel and extraterrestrial colonization. The authors offer an in-depth analysis of two main strategies proposed in the literature: terraforming (geoengineering) and human bioenhancement. The first approach involves transforming extraterrestrial environments, such as Mars, to make them habitable for human life. The second approach involves modifying the human genetic heritage to make us more resilient and adaptable to non-terrestrial environments. The authors meticulously scrutinize these alternatives, considering not only feasibility and cost but also the ethical and philosophical implications.

The authors underscore the potential of terraforming as a method to establish human settlements on Mars. However, this possibility raises several ethical concerns, including the potential destruction of extraterrestrial life forms, the alteration of untouched landscapes, and the potential overstepping of human dominion. On the other hand, human bioenhancement, though a promising path, engenders its own set of ethical dilemmas. The authors caution against reckless enthusiasm for genetic modification, drawing attention to the potential creation of a new ‘human species’ and the consequent risk of divisions and misunderstandings.

A central theme in the article is the comparison of natural and artificial constructs. The authors challenge the assumption that the natural is always superior to the artificial. Drawing on posthumanist perspectives, they suggest that, given our influence on Earth’s environment, nature is already an artificial product. The argument is extended to other planets, indicating that the traditional dichotomy between the natural and the artificial may not hold in the context of extraterrestrial colonization.

The article contributes to broader philosophical discourses about the human relationship with nature and our place in the universe. It resonates with themes of transhumanism and posthumanism, contemplating the potential of technology to overcome human vulnerabilities and achieve a new evolutionary stage. The authors invite us to question and possibly redefine our notions of ‘natural’ and ‘artificial.’ This study, therefore, serves as a significant touchstone for futures studies, linking the practical considerations of space travel with philosophical reflections on human nature and our interaction with the environment.

For future research, the authors’ comparative analysis of terraforming and human bioenhancement opens several avenues. While the ethical implications of both strategies have been discussed, a more comprehensive ethical framework could be developed, perhaps drawing on principles of bioethics, environmental ethics, and space ethics. Additionally, the potential of hybrid approaches combining elements of both strategies could be explored. Lastly, given the increasing likelihood of extraterrestrial colonization, a more detailed analysis of the potential social, cultural, and psychological impacts on human populations in these new environments would be a valuable contribution.

Abstract

As space travel and intentions to colonise other planets are becoming the norm in public debate and scholarship, we must also confront the technical and survival challenges that emerge from these hostile environments. This paper aims to evaluate the various arguments proposed to meet the challenges of human space travel and extraterrestrial planetary colonisation. In particular, two primary solutions have been presented in the literature as the most straightforward solutions to the rigours of extraterrestrial survival and flourishing: (1) geoengineering, where the environment is modified to become hospitable to its inhabitants, and (2) human (bio)enhancement, where the genetic heritage of humans is modified to make them more resilient to the difficulties they may encounter as well as to permit them to thrive in non-terrestrial environments. Both positions have strong arguments supporting them but also severe philosophical and practical drawbacks when exposed to different circumstances. This paper aims to show that a principled stance where one position is accepted wholesale necessarily comes at the opportunity cost of the other where the other might be better suited, practically and morally. This paper concludes that case-by-case evaluations of the solutions to space travel and extraterrestrial colonisation are necessary to ensure moral congruency and the survival and flourishing of astronauts now and into the future.

(Featured) Machine learning in bail decisions and judges’ trustworthiness

Alexis Morin-Martel navigates the intricate landscape of judicial decision-making and advances the concept of Judge Assistance Systems (JAS), proposing it as a tool for enhancing the trustworthiness of judges in bail decisions. The argument is grounded in the relational theory of procedural justice, which emphasizes the role of trust, voice, neutrality, and respect in the administration of justice. The research grounds its analysis in an exploration of the nuanced terrain of trustworthiness, distinguishing between actual and rich trustworthiness, and articulating the potential role of JAS in amplifying both.

The author leverages the empirical study by Kleinberg et al. (2017a) to illustrate how JAS, equipped with complex algorithms, can assist judges in making more precise bail decisions, thereby enhancing their actual trustworthiness. A key idea espoused is the potential for JAS to act as a check on judicial decision-making, allowing judges to reconsider decisions that deviate significantly from statistical norms. However, the author acknowledges that the implementation of JAS should not undermine the principle of voice, one of the pillars of relational justice, ensuring that defendants have the opportunity to influence the decision-making process.

Further, the study takes into account the perceived trustworthiness of judges when using a JAS. It acknowledges the inherent public skepticism towards algorithmic decisions, often due to their perceived opacity. The argument is made that focusing on accuracy, rather than transparency, of these algorithms is more likely to enhance perceived trustworthiness. Importantly, the author suggests that regular audits within legal institutions could effectively monitor the accuracy of JAS, thus reinforcing public trust over time. However, the author admits that while the ‘voice’ and ‘neutrality’ criteria could likely be met by JAS, its ability to meet the ‘respect’ requirement remains uncertain and needs further examination.

The research article finds a nexus with broader philosophical themes, particularly those concerning human-machine interaction and the ethical implications of algorithmic decision-making. The proposal of JAS as a tool to enhance judicial trustworthiness is reflective of the broader trend towards technocratic governance. This trend raises critical questions about the balance between human judgment and algorithmic precision, and the philosophical implications of delegating traditionally human tasks to artificial intelligence. Moreover, the emphasis on accuracy over transparency in JAS echoes the larger debate on the ethical trade-offs in AI applications, especially in high-stakes public decisions.

Future research could explore several intriguing avenues. The extension of JAS to other areas of judicial decision-making, beyond bail decisions, could be considered. Studies could also focus on the development of more transparent and interpretable models without compromising accuracy, addressing public distrust of ‘black box’ algorithms. Furthermore, future research might investigate the potential impact of JAS on other aspects of the relational theory of procedural justice, particularly the ‘respect’ requirement. Lastly, empirical studies evaluating the effectiveness and reliability of JAS in real-world court settings could provide valuable insights into the practicality of implementing such systems.

Abstract

The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong desideratum of criminal trials, advocates of the relational theory of procedural justice give us good reason to think that fairness and perceived fairness of legal procedures have a value that is independent from the outcome. According to this literature, one key aspect of fairness is trustworthiness. In this paper, I argue that using certain algorithms to assist bail decisions could increase three different aspects of judges’ trustworthiness: (1) actual trustworthiness, (2) rich trustworthiness, and (3) perceived trustworthiness.

(Featured) In Conversation with Artificial Intelligence: Aligning Language Models with Human Values

Atoosa Kasirzadeh and Iason Gabriel embark on an ambitious analysis of how large-scale conversational agents, such as AI language models, can be better designed to align with human values. The premise of the article is grounded in the philosophy of language and pragmatics, employing Gricean maxims and Speech Act Theory to establish the importance of context and cooperation in achieving effective and ethical linguistic communication. The authors underscore the necessity of considering pragmatic norms and concerns in the design of conversational agents and illustrate their proposition through three discursive domains: science, civic life, and creative exchange.

The authors present a novel approach, suggesting the operationalization of Gricean maxims of quantity, quality, relation, and manner, to aid in cooperative communication between humans and AI. They also emphasize the diversity of utterances, asserting that there is no single universal condition of validity that applies to all. Instead, the validity of utterances often depends on different sorts of truth conditions which require different methodologies for substantiation, based on context-specific criteria of validity. They further stress the centrality of contextual information in the design of ideal conversational agents and highlight the need for research to theorise and measure the difference between the literal and contextual meaning of utterances.

The authors also delve into the implications of their analysis for future research into the design of conversational agents. They discuss the potential for anthropomorphisation of conversational agents and the constraints that might be imposed on them. They note that while anthropomorphism can sometimes be consistent with the creation of value-aligned agents, there are situations where it might be undesirable or inappropriate. They also advocate for the exploration of the potential for conversational agents to facilitate more robust and respectful conversations through context construction and elucidation. Lastly, they suggest that their analysis could be used to evaluate the quality of interactions between conversational agents and users, providing a framework for refining both human and automatic evaluation of conversational agent performance.

The research article resonates with broader philosophical themes, particularly those concerning the interplay between technology and society. It touches upon the ethical dimensions of AI, hinting at the moral responsibility of designing AI systems that align with human values and norms. The exploration of Gricean maxims and Speech Act Theory in the context of AI conversational agents provides a unique blend of AI ethics, philosophy of language, and pragmatics, reflecting the interdisciplinary nature of contemporary AI research. In doing so, the article stimulates dialogue about the role of AI in shaping our social and communicative practices, challenging conventional boundaries between humans and machines, and highlighting the potential of AI as a tool for fostering effective and ethically sound communication.

In terms of future avenues of research, the authors’ analysis opens up a myriad of possibilities. First, while the paper focuses primarily on the English language, a fruitful direction of research could involve the exploration of norms and pragmatics in other languages, thereby ensuring the cultural inclusivity and sensitivity of AI systems. Second, the proposed alignment of AI conversational agents with Gricean maxims and discursive ideals could be further operationalized and tested empirically to assess its effectiveness in real-world scenarios. Third, the article alludes to the potential of AI in fostering more robust and respectful conversations, which suggests an opportunity to investigate how AI can play an active role in shaping discourse norms and facilitating constructive dialogues. Lastly, the authors’ work can be further enriched by drawing from other sociological and philosophical traditions, such as Luhmann’s system theory or Latour’s actor-network theory, to offer a more comprehensive and nuanced understanding of the complex interplay between AI, language, and society.

Abstract

Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.

(Featured) Ethics of AI and Health Care: Towards a Substantive Human Rights Framework

S. Matthew Liao provides an incisive exploration into the ethical considerations intrinsic to the application of artificial intelligence (AI) in healthcare contexts. The paper underscores the burgeoning interest in employing AI for health-related purposes, with AI applications demonstrating competencies in diagnosing certain types of cancer, identifying heart rhythm abnormalities, diagnosing various eye diseases, and even identifying viable embryos. However, the author cautions that the deployment of AI in healthcare settings necessitates adherence to robust ethical frameworks and guidelines.

The author identifies a rapidly growing number of ethical frameworks for AI proposed over recent years. More than 80 such frameworks now exist, stemming from a diverse array of sources including private corporations, governmental agencies, academic institutions, and intergovernmental bodies. These frameworks commonly reference the four principles of biomedical ethics: autonomy, beneficence, non-maleficence, and justice, and often include recommendations for transparency, explainability, and trust. However, the author warns that the proliferation of these frameworks has led to confusion, thereby raising pressing questions about the basis, justification, and practical implementation of these recommendations.

In response to this conundrum, the author proposes an AI ethics framework rooted in substantive human rights theory. This proposed framework seeks to address the questions raised by the proliferation of ethical guidelines and to provide clear and practical guidance for the use of AI in healthcare. The author argues for an ethical framework that is not merely abstract but one that expounds the grounds and justifications of the recommendations it puts forward, as well as how these recommendations should be applied in practice.

The broader philosophical discourse that this research engages with is the ethics of technology and, more specifically, the ethical and moral implications of AI use in healthcare. The central philosophical question the author grapples with is the tension between the rapid development and application of AI in healthcare and the need for substantive ethical guidelines to govern its use. This brings into sharp focus the perennial philosophical tension between progress and ethical constraint, raising the specter of issues such as the nature of autonomy, the definition of harm, and the equitable distribution of benefits and burdens.

For future research, the author’s proposition of a human rights-based ethical framework opens up multiple avenues. First, the application of this framework could be examined in real-world healthcare scenarios to assess its efficacy in guiding ethical AI use. Second, the interplay between this framework and existing legal systems could be studied to ascertain any gaps or overlaps. Lastly, a comparative analysis could be conducted of how this proposed framework fares against other ethical frameworks in use, and how it might be refined or integrated with other approaches for a more robust ethical guidance in healthcare AI applications.

Abstract

There is enormous interest in using artificial intelligence (AI) in health care contexts. But before AI can be used in such settings, we need to make sure that AI researchers and organizations follow appropriate ethical frameworks and guidelines when developing these technologies. In recent years, a great number of ethical frameworks for AI have been proposed. However, these frameworks have tended to be abstract and not explain what grounds and justifies their recommendations and how one should use these recommendations in practice. In this paper, I propose an AI ethics framework that is grounded in substantive human rights theory and one that can help us address these questions.

(Featured) Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?

Frank Ursin et al. investigate the ethical considerations associated with medical artificial intelligence (AI), particularly in the context of radiology. They emphasize the importance of implementing explainable AI (XAI) techniques to address epistemic and explanatory concerns that arise when AI is employed in medical decision-making. The authors outline a four-level approach to explicability, comprising disclosure, intelligibility, interpretability, and explainability, with each successive level providing greater detail and clarity to the patient or physician.

The authors argue that XAI has great potential in the medical field, and they present two examples from radiology to illustrate its practical applications. The first example involves the use of image inpainting techniques to generate sharper and more detailed saliency maps, which can help localize relevant regions within radiological images. The second example highlights the importance of natural language communication in XAI, where an image-to-text model is used to generate medical reports based on radiological images. These two examples demonstrate that incorporating XAI techniques in radiology can provide valuable insights and improved communication for medical practitioners and patients.

In the paper’s conclusion, the authors emphasize the need for a tailored approach to explicability that considers the needs of patients and the scope of medical decisions. They also advocate for the use of insights gained from medical AI ethics to re-evaluate established medical practices and confront biases in medical classification systems. By applying the four levels of explicability in a thoughtful manner, the authors posit that ethically defensible information processes can be established when utilizing medical AI.

This paper touches on broader philosophical issues related to the ethics of technology, medical autonomy, and the nature of trust in AI-driven decision-making. As AI becomes increasingly integrated into various domains of human activity, questions about transparency, fairness, and the moral implications of AI systems become paramount. This paper demonstrates the necessity of establishing an ethical framework for AI applications in healthcare, providing valuable insights that can be extended to other disciplines as well. By considering the complex interplay between AI-driven systems and human agents, the authors also underscore the importance of understanding how technological advancements impact the broader social fabric and the values we uphold as a society.

Future research in this area could explore the generalizability of the four-level approach to explicability in other medical domains or even non-medical contexts. Additionally, researchers may investigate how the incorporation of diverse perspectives in the development of AI systems and explainability techniques can mitigate the potential for biases and discriminatory outcomes. It would also be valuable to study how XAI can be adapted to the specific needs and preferences of individual patients or physicians, creating personalized approaches to explicability. Lastly, researchers may wish to assess the long-term impact of integrating XAI in medical practice, particularly in terms of patient satisfaction, physician trust, and overall quality of care.

Abstract

Definition of the problem

The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?

Arguments

We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this then allows us to conclude the level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.

Conclusion

We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.

(Featured) A phenomenological perspective on AI ethical failures: The case of facial recognition technology

Yuni Wen and Matthias Holweg conduct a philosophical analysis of the responses of four prominent technology firms to the ethical concerns surrounding the use and development of facial recognition technology. The article meticulously delves into the controversies surrounding Amazon, IBM, Microsoft, and Google, as they grapple with public backlash and stakeholder disapproval. By analyzing these cases, the authors elucidate four distinct strategies that these organizations employ to mitigate potential reputation loss: deflection, improvement, validation, and pre-emption. They astutely highlight the spectrum of these responses, ranging from the most accommodative to the most defensive approach.

The authors propose three possible antecedents that may determine an organization’s response strategy to controversial AI technology: the financial importance of the technology to the company, the strategic importance of the technology to the company’s product and service offerings, and the degree to which the controversial technology violates the company’s stated public values. Through their examination of the facial recognition controversies and the strategies employed by the tech giants, they provide invaluable insights into how these factors contribute to shaping the responses of companies facing ethical dilemmas in AI technology.

Although the article’s primary focus is on large technology firms, it acknowledges the limitations of its analysis and encourages further research on small and medium-sized firms, non-profit organizations, public sector organizations, and other entities that may intentionally misuse AI for nefarious purposes. It also highlights the need for future research to consider the interplay between organizational strategies and the varying global regulatory landscape concerning AI technology, given the diverse policy initiatives and regional differences.

The article not only contributes to the ongoing discourse about AI ethics but also resonates with broader philosophical debates on corporate social responsibility and the role of organizations in shaping a just and equitable society. In an era of unprecedented technological advances and heightened awareness of ethical concerns, this research raises pertinent questions about the duties and responsibilities that companies bear in addressing the potential social and moral implications of their products and services. It underscores the challenge that organizations face in balancing financial interests and strategic goals with ethical imperatives and societal expectations.

To enrich our understanding of the complex interplay between organizations and AI ethics, future research could explore the processes through which companies develop and implement their response strategies, with an emphasis on the role of leadership, organizational culture, and internal and external stakeholder dynamics. Moreover, investigating how these strategies evolve over time and assessing their effectiveness in addressing public concerns could provide valuable insights into best practices for organizations navigating the ethical minefield of AI technology. Ultimately, this line of inquiry would contribute significantly to our understanding of how corporations can foster the responsible development and use of AI, ensuring that its potential benefits are realized while mitigating its ethical risks.

Abstract

As more and more companies adopt artificial intelligence to increase the efficiency and effectiveness of their products and services, they expose themselves to ethical crises and potentially damaging public controversy associated with its use. Despite the prevalence of AI ethical problems, most companies are strategically unprepared to respond effectively to the public. This paper aims to advance our empirical understanding of company responses to AI ethical crises by focusing on the rise and fall of facial recognition technology. Specifically, through a comparative case study of how four big technology companies responded to public outcry over their facial recognition programs, we not only demonstrated the unfolding and consequences of public controversies over this new technology, but also identified and described four major types of company responses—Deflection, Improvement, Validation, and Pre-emption. These findings pave the way for future research on the management of controversial technology and the ethics of AI.

A phenomenological perspective on AI ethical failures: The case of facial recognition technology

(Featured) Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

David M. Lyreskog et al. outline and analyze the ethical implications and conceptual challenges surrounding technologically enabled collective minds (TCMs). The paper proposes four main categories to help understand the varying levels of unity and directionality in TCMs: DigiMinds, UniMinds, NetMinds, and MacroMinds. Each category has its own set of unique ethical challenges, which the authors argue should be considered in a multidimensional manner to effectively address the complexities of agency and responsibility in TCMs.

DigiMinds are minimally direct, minimally directional interfaces, such as virtual avatars in digital spaces, where individuals remain separate but can communicate through digital means. UniMinds are highly direct but low-directional interfaces, in which senders can communicate with and manipulate neuronal behavior in receivers. This category is further divided into Weak UniMinds, which are collaborative interfaces, and Strong UniMinds, which create an entirely new joint entity. NetMinds, by contrast, are minimally direct, highly directional tools that facilitate vast networks of collective thinking, such as swarm intelligence applications. Lastly, MacroMinds are maximally direct and maximally directional tools, with multiple participants connected through interfaces that allow direct neuronal transmission in all directions. This category is likewise subdivided into Weak MacroMinds, which are collaborative interfaces, and Strong MacroMinds, which create new joint entities.

The authors argue that each of these four categories challenges our current understanding of collective and joint actions, urging a reevaluation of the conceptual and ethical frameworks that guide our thinking. For instance, UniMinds and MacroMinds raise questions about identity, agency, and responsibility when a new entity emerges from the connected individuals. In NetMinds, the role of the computer as an organizer poses challenges concerning responsibility and transparency. The paper suggests that instead of a binary approach, future ethical analyses should consider the technological specifications, the domain in which the TCM is deployed, and the reversibility of joining a Collective Mind.

This research taps into broader philosophical issues surrounding the nature of identity, consciousness, and agency in an increasingly interconnected world. As we move towards a future where technology not only extends our cognitive capabilities but also has the potential to fundamentally reshape our understanding of what it means to be an individual, we are forced to reevaluate our traditional conceptions of personhood, ethics, and responsibility. TCMs challenge the philosophical foundations of agency and responsibility, as well as the ways in which we understand and define collective versus individual actions and decisions.

To further explore the ethical and conceptual challenges of TCMs, future research could delve deeper into the practical implications of integrating these technologies into various aspects of our society, such as healthcare, education, governance, and commerce. Avenues for research might include examining the legal and policy ramifications of TCMs, the potential for power imbalances in such systems, and the implications for privacy and autonomy. Additionally, scholars could investigate how the experience of participating in a TCM might impact our sense of self and our relationships with others. By addressing these areas, we can move towards a more comprehensive understanding of the complex ethical landscape of technologically enabled collective minds and prepare ourselves for the challenges that lie ahead.

Abstract

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape.

In this paper we take a closer look at this class of technologies – Technologies for Collective Minds – to see not only how their implementation may react with commonly held moral values, but also how they challenge our underlying concepts of what constitutes collective or individual agency. We argue that prominent contemporary frameworks for understanding collective agency and responsibility are insufficient in terms of accurately describing the relationships enabled by Technologies for Collective Minds, and that they therefore risk obstructing ethical analysis of the implementation of these technologies in society. We propose a more multidimensional approach to better understand this set of technologies, and to facilitate future research on the ethics of Technologies for Collective Minds.

Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

(Featured) Moral distance, AI, and the ethics of care

Moral distance, AI, and the ethics of care

Carolina Villegas-Galaviz and Kirsten Martin analyze the ethical implications of AI decision-making and propose the ethics of care as a framework for mitigating its negative impacts. They argue that AI exacerbates moral distance in both its forms, proximity distance and bureaucratic distance, leading decision-makers to overlook the needs of those affected. The ethics of care, which emphasizes interdependent relationships, context and circumstances, vulnerability, and voice, can help contextualize the issue and bring us closer to those at a distance.

Moral distance, the authors explain, arises along two dimensions. Proximity distance refers to the physical, cultural, and temporal separation between people, while bureaucratic distance stems from hierarchy, organizational complexity, and principle-based decision-making. Because both forms of distance are inherent in how AI operates, the authors contend that AI amplifies them.

Applying this lens, the authors show how the ethics of care can inform both the analysis and the design of algorithmic decision-making tools, so that the needs of all stakeholders are taken into account. They acknowledge, however, that the ethics of care is not a comprehensive solution to every moral problem or harm.

The paper raises broader philosophical issues about the role of ethics in technology. It highlights the need to consider the ethical implications of technology and the importance of developing ethical frameworks for AI decision-making. The authors suggest that the ethics of care offers a new conversation for the critical examination of AI and underscores the importance of hearing diverse voices and considering the needs of all stakeholders in technology development.

Future research should explore the legal, moral, epistemic, and practical aspects of moral distance and their specific implications. It should also examine the full range of feminist theory and its potential to mitigate the problem of representativeness in the technology workforce. The authors note that interdisciplinary and intercultural teams are essential in developing and deploying AI ethically. Finally, they suggest that a deeper understanding of the ethics of care could have implications for other areas of philosophical inquiry, such as environmental ethics and bioethics.

Abstract

This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision making.

Moral distance, AI, and the ethics of care

(Featured) AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

Richard Volkman and Katleen Gabriels critically examine current approaches to AI moral enhancement and propose a new model that more closely aligns with the reality of moral progress as a socio-technical system. The paper begins by discussing two main approaches to AI moral enhancement: the exhaustive approach, which aims to program AI systems with complete moral knowledge, and the auxiliary approach, which seeks to use AI as a tool to assist humans in moral decision-making. The authors argue that the exhaustive approach is overly ambitious and unattainable, while the auxiliary approach, as exemplified by Lara and Deckers’ Socratic Interlocutor, lacks the depth and nuance necessary for genuine moral engagement.

Instead, the authors propose an alternative model of AI moral enhancement that emphasizes the importance of moral diversity, ongoing dialogue, and the cultivation of practical wisdom. Their model envisions a modular system of AI “mentors”, each embodying a distinct moral perspective, engaging in conversation with one another and with the user. This system would more accurately represent the complex, evolving socio-technical process of moral progress and would be safer and more effective than the existing proposals for AI moral enhancement.

The authors address potential objections to their proposal, arguing that the goal of moral enhancement should not be to transcend human limitations but to engage more deeply with our moral thinking. They emphasize that their approach to moral enhancement is not aimed at simplifying the process of moral improvement but at making us more skilled in the ways of practical wisdom. They conclude that their proposal represents a path to genuine moral enhancement that is more achievable and less fraught with risk than previous approaches.

This research contributes to broader philosophical discussions about the nature and scope of moral progress, the role of technology in moral enhancement, and the limits of human rationality. By engaging with these issues, the paper not only critiques existing proposals but also highlights the importance of considering the historical, social, and technological dimensions of moral inquiry. In doing so, it raises questions about the extent to which AI can and should be involved in human moral development, and how best to navigate the potential risks and benefits associated with such involvement.

As for future research, several avenues present themselves. First, it would be fruitful to explore the development of these AI “mentors” in more detail, focusing on the technical and ethical challenges associated with creating AI systems that embody diverse moral perspectives. Additionally, empirical studies could be conducted to assess the effectiveness of such AI mentors in promoting moral enhancement among users. Finally, interdisciplinary research could be undertaken to better understand the complex relationship between AI, moral enhancement, and broader social and cultural dynamics, in order to ensure that future AI moral enhancement efforts are both safe and effective.

Abstract

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that ‘AI mentors’ could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement