(Featured) Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

David M. Lyreskog et al. outline and analyze the ethical implications and conceptual challenges surrounding technologically enabled collective minds (TCMs). The paper proposes four main categories to help understand the varying levels of unity and directionality in TCMs: DigiMinds, UniMinds, NetMinds, and MacroMinds. Each category has its own set of unique ethical challenges, which the authors argue should be considered in a multidimensional manner to effectively address the complexities of agency and responsibility in TCMs.

DigiMinds are minimally direct, minimally directional interfaces, such as virtual avatars in digital spaces, where individuals are separate but can communicate through digital means. UniMinds are highly direct, low-directional interfaces, in which senders can communicate with and manipulate neuronal behavior in receivers. This category is further divided into Weak UniMinds, which are collaborative interfaces, and Strong UniMinds, which create an entirely new joint entity. NetMinds, on the other hand, are minimally direct, highly directional tools that facilitate vast networks of collective thinking, such as swarm intelligence applications. Lastly, MacroMinds are maximally direct, maximally directional tools, with multiple participants connected through interfaces that allow direct neuronal transmissions in all directions. This category is also subdivided into Weak MacroMinds, which are collaborative interfaces, and Strong MacroMinds, which create new joint entities.

The authors argue that each of these four categories challenges our current understanding of collective and joint actions, urging a reevaluation of the conceptual and ethical frameworks that guide our thinking. For instance, UniMinds and MacroMinds raise questions about identity, agency, and responsibility when a new entity emerges from the connected individuals. In NetMinds, the role of the computer as an organizer poses challenges concerning responsibility and transparency. The paper suggests that instead of a binary approach, future ethical analyses should consider the technological specifications, the domain in which the TCM is deployed, and the reversibility of joining a Collective Mind.

This research taps into broader philosophical issues surrounding the nature of identity, consciousness, and agency in an increasingly interconnected world. As we move towards a future where technology not only extends our cognitive capabilities but also has the potential to fundamentally reshape our understanding of what it means to be an individual, we are forced to reevaluate our traditional conceptions of personhood, ethics, and responsibility. TCMs challenge the philosophical foundations of agency and responsibility, as well as the ways in which we understand and define collective versus individual actions and decisions.

To further explore the ethical and conceptual challenges of TCMs, future research could delve deeper into the practical implications of integrating these technologies into various aspects of our society, such as healthcare, education, governance, and commerce. Avenues for research might include examining the legal and policy ramifications of TCMs, the potential for power imbalances in such systems, and the implications for privacy and autonomy. Additionally, scholars could investigate how the experience of participating in a TCM might impact our sense of self and our relationships with others. By addressing these areas, we can move towards a more comprehensive understanding of the complex ethical landscape of technologically enabled collective minds and prepare ourselves for the challenges that lie ahead.

Abstract

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape.

In this paper we take a closer look at this class of technologies – Technologies for Collective Minds – to see not only how their implementation may react with commonly held moral values, but also how they challenge our underlying concepts of what constitutes collective or individual agency. We argue that prominent contemporary frameworks for understanding collective agency and responsibility are insufficient in terms of accurately describing the relationships enabled by Technologies for Collective Minds, and that they therefore risk obstructing ethical analysis of the implementation of these technologies in society. We propose a more multidimensional approach to better understand this set of technologies, and to facilitate future research on the ethics of Technologies for Collective Minds.

(Featured) Mobile health technology and empowerment

Karola V. Kreitmair critically evaluates the notion of empowerment that has become pervasive in the discourse surrounding direct-to-consumer (DTC) mobile health technologies. The author argues that while these technologies claim to empower users by providing knowledge, enabling control, and fostering responsibility, the actual outcome is often not genuine empowerment but merely the perception of empowerment. This distinction has significant implications for individuals seeking to effect behavior change and improve their health and well-being.

The paper meticulously breaks down the concept of empowerment into five key features: knowledgeability, control, responsibility, availability of good choices, and healthy desires. The author presents a thorough review of the evidence related to the efficacy, privacy, and security concerns surrounding the use of m-health technologies. They demonstrate that these technologies, while marketed as empowering tools, often fail to live up to their promises and, in some cases, even contribute to negative health outcomes or exacerbate existing issues such as disordered eating.

The core of the argument lies in the distinction between genuine empowerment and the mere perception of empowerment. The author posits that, rather than fostering true empowerment, DTC m-health technologies often create a psychological illusion of control and knowledgeability. This illusion can lead users to form unrealistic expectations and place undue burden on themselves to effect change when the necessary conditions for change are not met. This “empowerment paradox” ultimately calls into question the purported benefits of DTC m-health technologies and the societal narrative around personal responsibility and control over one’s health.

This paper’s findings resonate with broader philosophical discussions around individual autonomy, agency, and the role of technology in shaping our lives. The empowerment paradox highlights the complex interplay between the individual and the structural factors that shape health outcomes. It raises crucial questions about the ethical implications of profit-driven technologies and the responsibilities of technology developers, marketers, and users in navigating an increasingly technologically driven healthcare landscape. The insights from this paper contribute to ongoing debates about the nature of empowerment and the limits of individual autonomy in an age where our lives are increasingly mediated by technology.

Future research should focus on the prevalence and consequences of the empowerment paradox in the context of DTC m-health technologies. A deeper understanding of how individuals make decisions around their health in the presence of perceived empowerment could inform the development of more effective and ethically responsible technologies. Additionally, examining the social and cultural factors that influence the marketing and adoption of these technologies may provide insight into how the industry can foster genuine empowerment, rather than perpetuating an illusion of control. Ultimately, a more nuanced understanding of the relationship between DTC m-health technologies and empowerment will pave the way for a more responsible and equitable approach to healthcare in the digital age.

Abstract

Mobile Health (m-health) technologies, such as wearables, apps, and smartwatches, are increasingly viewed as tools for improving health and well-being. In particular, such technologies are conceptualized as means for laypersons to master their own health, by becoming “engaged” and “empowered” “managers” of their bodies and minds. One notion that is especially prevalent in the discussions around m-health technology is that of empowerment. In this paper, I analyze the notion of empowerment at play in the m-health arena, identifying five elements that are required for empowerment. These are (1) knowledge, (2) control, (3) responsibility, (4) the availability of good choices, and (5) healthy desires. I argue that at least sometimes, these features are not present in the use of these technologies. I then argue that instead of empowerment, it is plausible that m-health technology merely facilitates a feeling of empowerment. I suggest this may be problematic, as it risks placing the burden of health and behavior change solely on the shoulders of individuals who may not be in a position to affect such change.

(Featured) AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement

Richard Volkman and Katleen Gabriels critically examine current approaches to AI moral enhancement and propose a new model that more closely aligns with the reality of moral progress as a socio-technical system. The paper begins by discussing two main approaches to AI moral enhancement: the exhaustive approach, which aims to program AI systems with complete moral knowledge, and the auxiliary approach, which seeks to use AI as a tool to assist humans in moral decision-making. The authors argue that the exhaustive approach is overly ambitious and unattainable, while the auxiliary approach, as exemplified by Lara and Deckers’ Socratic Interlocutor, lacks the depth and nuance necessary for genuine moral engagement.

Instead, the authors propose an alternative model of AI moral enhancement that emphasizes the importance of moral diversity, ongoing dialogue, and the cultivation of practical wisdom. Their model envisions a modular system of AI “mentors”, each embodying a distinct moral perspective, engaging in conversation with one another and with the user. This system would more accurately represent the complex, evolving socio-technical process of moral progress and would be safer and more effective than the existing proposals for AI moral enhancement.

The authors address potential objections to their proposal, arguing that the goal of moral enhancement should not be to transcend human limitations but to engage more deeply with our moral thinking. They emphasize that their approach to moral enhancement is not aimed at simplifying the process of moral improvement but at making us more skilled in the ways of practical wisdom. They conclude that their proposal represents a path to genuine moral enhancement that is more achievable and less fraught with risk than previous approaches.

This research contributes to broader philosophical discussions about the nature and scope of moral progress, the role of technology in moral enhancement, and the limits of human rationality. By engaging with these issues, the paper not only critiques existing proposals but also highlights the importance of considering the historical, social, and technological dimensions of moral inquiry. In doing so, it raises questions about the extent to which AI can and should be involved in human moral development, and how best to navigate the potential risks and benefits associated with such involvement.

As for future research, several avenues present themselves. First, it would be fruitful to explore the development of these AI “mentors” in more detail, focusing on the technical and ethical challenges associated with creating AI systems that embody diverse moral perspectives. Additionally, empirical studies could be conducted to assess the effectiveness of such AI mentors in promoting moral enhancement among users. Finally, interdisciplinary research could be undertaken to better understand the complex relationship between AI, moral enhancement, and broader social and cultural dynamics, in order to ensure that future AI moral enhancement efforts are both safe and effective.

Abstract

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that ‘AI mentors’ could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

(Featured) Technology ethics assessment: Politicising the ‘Socratic approach’

Robert Sparrow proposes a Socratic approach to uncovering the ethical and political dimensions of technology. This method involves asking a series of questions that highlight the ethical concerns and implications of a given technology. The author organizes the questions into five categories: (1) technology and power, (2) technology and social justice, (3) technology, values, and the environment, (4) technology and the human experience, and (5) process, consultation, and iteration.

The author argues that the Socratic approach can help identify ethical challenges in technology and facilitate discussions on the implications of technology in various aspects of society. The questions raised cover a wide range of issues, from power imbalances and social inequalities resulting from the adoption of technology, to the potential impact on the environment and human experiences. Furthermore, the author highlights the importance of considering the processes and procedures involved in developing and adopting a technology, as well as the need for user involvement in the design process, consultation with affected parties, and mechanisms for identifying and addressing ethical issues.

Through this Socratic approach, the paper emphasizes the need to critically evaluate technologies and their potential consequences rather than passively accepting them. The author contends that the ethical implications of technologies cannot be fully understood or addressed without considering the broader political context in which they are developed and deployed. As a result, the paper argues that empowering citizens and fostering open dialogue on the ethical implications of technology is vital to creating a more just, equitable, and hospitable world.

The paper’s insights into the politics of technology resonate with broader philosophical debates on the nature of power, justice, and responsibility in the context of technological advancements. By focusing on the Socratic method, the author also contributes to ongoing discussions on the epistemology of ethics in relation to technology. This approach highlights the importance of critical thinking and dialectical engagement in uncovering the ethical complexities of technology and its impact on society.

For future research, it would be valuable to explore the application of the Socratic approach to specific case studies, examining how the questions posed in this paper can help uncover the ethical dimensions of various technologies in practice. Additionally, it would be beneficial to investigate the potential of interdisciplinary collaboration between philosophy, social sciences, and technology development in order to better address the ethical and political concerns raised by emerging technologies. This would further enrich the discourse on the politics of technology and contribute to the development of more ethical and socially responsible technological innovations.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.
