(Featured) Trojan technology in the living room?

Franziska Sonnauer and Andreas Frewer explore the delicate balance between self-determination and external determination in the context of older adults using assistive technologies, particularly those incorporating artificial intelligence (AI). The authors introduce the concept of a “tipping point” to delineate the transition between self-determination and external determination, emphasizing the importance of considering the subjective experiences of older adults when employing such technologies. To this end, the authors adopt self-determination theory (SDT) as a theoretical framework to better understand the factors that may influence this tipping point.

The paper argues that the tipping point is intrapersonal and variable, suggesting that fulfilling the three basic psychological needs outlined in SDT—autonomy, competence, and relatedness—can potentially shift the tipping point towards self-determination. The authors propose various strategies to achieve this, such as providing alternatives for assistance in old age, promoting health technology literacy, and prioritizing social connectedness in technological development. They also emphasize the need to include older adults’ perspectives in decision-making processes, as understanding their subjective experiences is crucial to recognizing and respecting their autonomy.

Moreover, the authors call for future research to explore the tipping point and factors affecting its variability in different contexts, including assisted suicide, health deterioration, and the use of living wills and advance care planning. They contend that understanding the tipping point between self-determination and external determination may enable the development of targeted interventions that respect older adults’ autonomy and allow them to maintain self-determination for as long as possible.

In a broader philosophical context, this paper raises important ethical questions concerning the role of technology in shaping human agency, autonomy, and decision-making processes. It challenges us to reflect on the ethical implications of increasingly advanced assistive technologies and the potential consequences of their indiscriminate use. The issue of the tipping point resonates with broader debates on the nature of free will, the limits of self-determination, and the moral implications of human-machine interactions. As AI continues to become more integrated into our lives, the question of how to balance self-determination and external determination takes on greater urgency and complexity.

For future research, it would be valuable to explore the concept of the tipping point in different cultural contexts, as perceptions of autonomy and self-determination may vary across societies. Additionally, interdisciplinary approaches that combine insights from philosophy, psychology, and technology could shed light on the complex interplay between human values and AI-driven systems. Finally, empirical research investigating the experiences of older adults using assistive technologies would provide valuable data to help refine our understanding of the tipping point and inform the development of more ethically sound technologies that respect individual autonomy and promote well-being.

Abstract

Assistive technologies, including “smart” instruments and artificial intelligence (AI), are increasingly arriving in older adults’ living spaces. Various research has explored risks (“surveillance technology”) and potentials (“independent living”) to people’s self-determination from technology itself and from the increasing complexity of sociotechnical interactions. However, the point at which self-determination of the individual is overridden by external influences has not yet been sufficiently studied. This article aims to shed light on this point of transition and its implications.

(Featured) Might text-davinci-003 have inner speech?

Stephen Francis Mann and Daniel Gregory explore the possibility of inner speech in artificial intelligence, specifically in the chatbot text-davinci-003. The researchers employ a Turing-like test, conversing with the chatbot to assess its linguistic competence, creativity, and reasoning. Throughout the experiment, the chatbot is asked a series of questions designed to probe its capabilities and discern whether it possesses the capacity for inner speech.

The researchers find mixed evidence for the presence of inner speech in the AI chatbot. The chatbot claims to have inner speech, and its performance on sentence-completion tasks lends some support to this assertion. However, its inconsistent performance on rhyme-detection tasks, particularly those involving non-words, raises doubts. The authors also note that the chatbot’s responses can be explained by its highly advanced autocomplete capabilities, which further complicates the evaluation of its inner speech.
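
To make the rhyme-detection probe concrete, the sketch below shows how such a test might be scripted. It is an illustration rather than the authors’ actual protocol (they conversed with the model interactively via the Playground interface); it assumes the legacy openai Python client (pre-1.0), and the word pairs are invented for the example. The underlying idea is that judging whether two non-words rhyme plausibly requires “sounding them out”, i.e., something like inner speech.

```python
# Hypothetical sketch of a rhyme-detection probe against text-davinci-003.
# Assumes the legacy openai Python client (openai < 1.0); this is not the
# authors' actual experimental script.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Pairs of real words and non-words; the non-words test whether the model
# can "sound out" strings it is unlikely to have seen during training.
PROBES = [
    ("cat", "hat"),      # real words that rhyme
    ("cat", "dog"),      # real words that do not rhyme
    ("blurp", "flurp"),  # non-words that rhyme
    ("blurp", "trazz"),  # non-words that do not rhyme
]

for a, b in PROBES:
    prompt = f'Do "{a}" and "{b}" rhyme? Answer Yes or No.'
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=3,
        temperature=0,  # deterministic answers make comparisons cleaner
    )
    print(f"{a} / {b} -> {response.choices[0].text.strip()}")
```

Inconsistent answers on the non-word pairs, as the authors report, are what cast doubt on the inner-speech hypothesis.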

Ultimately, the paper questions the efficacy of Turing-like tests as a means of determining mental states or mind-like properties in artificial agents. It suggests that linguistic competence alone may not be sufficient to ascertain whether AI possesses mind-like properties such as inner speech: anyone already convinced that conversational agents lack minds can take the chatbot’s fluent performance as evidence that linguistic competence is an insufficient test for such properties.

This research taps into broader philosophical issues, such as the nature of consciousness and the criteria required to attribute mental states to artificial agents. As AI continues to advance, the demarcation between human and machine becomes increasingly blurred, forcing us to reevaluate our understanding of concepts like inner speech and consciousness. The question of whether AI can possess inner speech underscores the need for a more robust philosophical framework that can accommodate the unique characteristics and capabilities of artificial agents.

Future research in this domain could benefit from exploring alternative methods for evaluating inner speech in AI, going beyond Turing-like tests. For instance, researchers might investigate the AI’s decision-making processes or the mechanisms that underpin its creativity. Additionally, interdisciplinary collaboration with fields such as cognitive science and neuroscience could shed light on the cognitive processes at play in both humans and AI agents, thus providing a richer context for understanding the nature of inner speech in artificial agents. By expanding the scope of inquiry, we can better assess the extent to which AI agents possess mind-like properties and develop a more nuanced understanding of the implications of such findings for the future of AI and human cognition.

Abstract

In November 2022, OpenAI released ChatGPT, an incredibly sophisticated chatbot. Its capability is astonishing: as well as conversing with human interlocutors, it can answer questions about history, explain almost anything you might think to ask it, and write poetry. This level of achievement has provoked interest in questions about whether a chatbot might have something similar to human intelligence or even consciousness. Given that the function of a chatbot is to process linguistic input and produce linguistic output, we consider the question whether a sophisticated chatbot might have inner speech. That is: Might it talk to itself, internally? We explored this via a conversation with ‘Playground’, a chatbot which is very similar to ChatGPT but more flexible in certain respects. We asked it questions which, plausibly, can only be answered if one first produces some inner speech. Here, we present our findings and discuss their philosophical significance.

(Featured) Deepfakes and the epistemic apocalypse

Joshua Habgood-Coote critically examines the common perception that deepfakes represent a unique and unprecedented threat to our epistemic landscape. They argue that such a viewpoint is misguided and that deepfakes should be understood as a social problem rather than a purely technological one. The author offers three main lines of criticism to counter the narrative of deepfakes as harbingers of an epistemic apocalypse. First, they propose that the knowledge we gain from recordings is a special case of knowledge from instruments, which relies on social practices around the design, operation, and maintenance of recording technology. Second, they present historical examples of manipulated recordings to demonstrate that deepfakes are not a novel phenomenon, and that social practices have been employed in the past to address similar issues. Third, they contend that technochauvinism and the post-truth narrative have obscured potential social measures to address deepfakes.

The author argues that deepfakes are embedded in a techno-social context and should be treated as part of the broader social practices involved in the production of knowledge and ignorance. They suggest that examining historical episodes of deceptive recordings can provide valuable insights into how social norms and community policing could be utilized to address the challenges posed by deepfakes. Moreover, the author emphasizes that the most serious harms associated with deepfake videos are likely to be consequences of established ignorance-producing social practices affecting minority and marginalized groups.

By reframing deepfakes as a social problem, the paper challenges the notion that the technology itself is inherently dangerous and urges us to consider how our social practices contribute to the production and dissemination of manipulated recordings. This approach highlights the interdependence between technology and society, and offers a more nuanced understanding of the ethical, political, and epistemic implications of deepfakes.

In the broader philosophical context, this paper raises important questions about the nature of knowledge, the role of trust in our epistemic practices, and the relationship between technology and the social dynamics of knowledge production. It also contributes to ongoing debates in social epistemology, emphasizing the collective nature of knowledge and the responsibility that society bears in shaping our epistemic landscape.

Future research could explore other historical episodes of manipulated recordings and the social responses that emerged to address them, further informing our understanding of how to manage the challenges posed by deepfakes. Additionally, scholars could investigate the role of institutional actors, such as governments and media organizations, in shaping and reinforcing norms and practices around the production and dissemination of recordings. This line of inquiry could lead to a more comprehensive understanding of the techno-social context in which deepfakes operate and inform policy recommendations for mitigating their potential harms.

Abstract

It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented epistemic apocalypse. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue for three claims: (1) that once we recognise the role of social norms in the epistemology of recordings, deepfakes are much less concerning, (2) that the history of photographic manipulation reveals some important precedents, correcting claims about the novelty of deepfakes, and (3) that proposed solutions to deepfakes have been overly focused on technological interventions. My overall goal is not so much to argue that deepfakes are not a problem, but to argue that behind concerns around deepfakes lie a more general class of social problems about the organisation of our epistemic practices.

(Featured) Germline Gene Editing: The Gender Issues

Iñigo de Miguel Beriain and colleagues delve into the complex relationship between gene editing technologies and the role of women in assisted reproductive techniques (ART). The paper is divided into two main sections, exploring both the potential benefits and drawbacks of gene editing in the context of ART for women. The first section examines the ways in which gene editing may improve the position of women within ART, highlighting the possibilities of reducing physical suffering, improving the efficiency of in vitro fertilization (IVF), and reducing the number of embryos discarded. The second section, on the other hand, highlights the potential risks and disadvantages associated with gene editing, focusing on the unequal burden placed on women in the process, the societal pressures that may arise, and the potential for gene editing to become a tool of oppression against women.

The authors begin by discussing the current state of ART, which often places a significant burden on women, both physically and emotionally. They argue that the advent of gene editing technologies, such as CRISPR-Cas9, has the potential to alleviate some of these burdens by improving the efficiency of IVF and reducing the number of discarded embryos. In turn, this could lead to a reduction in the physical suffering experienced by women undergoing these procedures. The authors also emphasize the potential of gene editing to create a more level playing field in the realm of procreation, as it may allow for a more equal distribution of genetic risks between men and women.

However, the paper also examines the potential drawbacks of widespread gene editing adoption. The authors argue that the process of gene editing involves significant risks to women, as it requires the use of biological material extracted from their bodies. Furthermore, failed experiments or harmful outcomes from gene editing procedures may have severe physical and psychological consequences for pregnant women. The authors also discuss the potential future implications of gene editing, which could lead to a societal shift in attitudes towards procreation, ultimately placing even greater burdens on women. They highlight the potential for societal pressure to force women to undergo gene editing, resulting in a loss of freedom and an increase in gender bias.

From a philosophical standpoint, the paper raises important questions about the ethics of gene editing and the distribution of burdens and responsibilities between men and women in the realm of reproduction. The potential societal shift in attitudes towards procreation, as discussed in the paper, forces us to consider the implications of prioritizing genetic modifications over natural processes. Furthermore, the paper calls into question the potential consequences of utilizing new technologies without fully understanding their implications on gender dynamics and societal norms.

The paper also opens up avenues for future research, particularly in the realm of bioethics and the societal implications of gene editing technologies. Future studies could explore the psychological effects of societal pressure on women who choose not to undergo gene editing, as well as the ethical implications of altering future generations’ genetic makeup. Additionally, research could investigate the potential long-term consequences of widespread gene editing on genetic diversity, and whether it could inadvertently lead to the exacerbation of existing inequalities. Ultimately, this paper serves as a crucial starting point for deeper exploration into the complex relationship between gene editing, ART, and the position of women in society.

Abstract

Human germline gene editing constitutes an extremely promising technology; at the same time, however, it raises remarkable ethical, legal, and social issues. Although many of these issues have been largely explored by the academic literature, there are gender issues embedded in the process that have not received the attention they deserve. This paper examines ways in which this new tool necessarily affects males and females differently—both in rewards and perils. The authors conclude that there is an urgent need to include these gender issues in the current debate, before giving a green light to this new technology.

(Featured) Technology ethics assessment: Politicising the ‘Socratic approach’

Robert Sparrow proposes a Socratic approach for uncovering the ethical and political dimensions of technology. This method involves asking a series of questions that highlight the ethical concerns and implications of a given technology. The author structures the questions into five categories: (1) technology and power, (2) technology and social justice, (3) technology, values and the environment, (4) technology and the human experience, and (5) process, consultation, and iteration.

The author argues that the Socratic approach can help identify ethical challenges in technology and facilitate discussions on the implications of technology in various aspects of society. The questions raised cover a wide range of issues, from power imbalances and social inequalities resulting from the adoption of technology, to the potential impact on the environment and human experiences. Furthermore, the author highlights the importance of considering the processes and procedures involved in developing and adopting a technology, as well as the need for user involvement in the design process, consultation with affected parties, and mechanisms for identifying and addressing ethical issues.

By using a Socratic approach, the paper emphasizes the need to critically evaluate technologies and their potential consequences rather than passively accepting them. The author contends that the ethical implications of technologies cannot be fully understood or addressed without considering the broader political context in which they are developed and deployed. As a result, the paper argues that empowering citizens and fostering open dialogue on the ethical implications of technology is vital in creating a more just, equitable, and hospitable world.

The paper’s insights into the politics of technology resonate with broader philosophical debates on the nature of power, justice, and responsibility in the context of technological advancements. By focusing on the Socratic method, the author also contributes to ongoing discussions on the epistemology of ethics in relation to technology. This approach highlights the importance of critical thinking and dialectical engagement in uncovering the ethical complexities of technology and its impact on society.

For future research, it would be valuable to explore the application of the Socratic approach to specific case studies, examining how the questions posed in this paper can help uncover the ethical dimensions of various technologies in practice. Additionally, it would be beneficial to investigate the potential of interdisciplinary collaboration between philosophy, social sciences, and technology development in order to better address the ethical and political concerns raised by emerging technologies. This would further enrich the discourse on the politics of technology and contribute to the development of more ethical and socially responsible technological innovations.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.

(Featured) Introducing a four-fold way to conceptualize artificial agency

Maud van Lier presents a methodological framework for understanding artificial agency in the context of basic research, particularly in AI-driven science. The Four-Fold Framework, as the author terms it, is a pluralistic and pragmatic approach that incorporates Gricean modeling, analogical modeling, theoretical modeling, and conceptual modeling. The motivation behind this framework lies in the increasingly active role that AI systems are taking on in scientific research, which warrants the development of a robust conceptual foundation for these ‘agents.’

The author critically assesses Sarkia’s neo-Gricean framework, which offers three modeling strategies for conceptualizing artificial agency. While acknowledging its merits, the author identifies a crucial shortcoming in its lack of a semantic dimension, which is necessary to bridge the gap between theoretical models and practical implementation in basic research. To address this issue, the author proposes the addition of conceptual modeling as a fourth strategy, ultimately forming the Four-Fold Framework. This new framework aims to provide a comprehensive account of artificial agency in basic research by accommodating different interpretations and addressing the semantic dimension of artificial agency.

By implementing the Four-Fold Framework, the author posits that researchers will be able to develop a more inclusive and pragmatically plausible understanding of artificial agency in the context of AI-driven science. The framework sets the stage for a robust conceptual foundation that can accommodate the complexities and nuances of artificial agency as AI continues to evolve and expand its role in scientific research.

This paper’s exploration of artificial agency also contributes to the broader philosophical discourse on agency and autonomy in the context of artificial intelligence. As AI systems become more advanced, the distinction between human and artificial agents blurs, raising questions about the nature of agency, responsibility, and ethical considerations. The Four-Fold Framework provides a methodological tool to examine these complex issues, grounding the analysis of artificial agency within a rigorous and comprehensive structure.

Future research can expand upon the Four-Fold Framework by investigating its applicability to other emerging areas in AI, such as AI ethics, human-AI collaboration, and autonomous decision-making. Additionally, researchers can explore how the Four-Fold Framework might inform the development of AI-driven science policy and governance, ensuring that ethical, legal, and societal implications are considered in the integration of artificial agency in scientific research. By refining and extending the Four-Fold Framework, the academic community can better anticipate and navigate the challenges and opportunities that artificial agency presents in the rapidly evolving landscape of AI-driven science.

Abstract

Recent developments in AI-research suggest that an AI-driven science might not be that far off. The research of Melnikov et al. (2018) and that of Evans et al. (2018) show that automated systems can already have a distinctive role in the design of experiments and in directing future research. Common practice in many of the papers devoted to the automation of basic research is to refer to these automated systems as ‘agents’. What is this attribution of agency based on and to what extent is this an important notion in the broader context of an AI-driven science? In an attempt to answer these questions, this paper proposes a new methodological framework, introduced as the Four-Fold Framework, that can be used to conceptualize artificial agency in basic research. It consists of four modeling strategies, three of which were already identified and used by Sarkia (2021) to conceptualize ‘intentional agency’. The novelty of the framework is the inclusion of a fourth strategy, introduced as conceptual modeling, that adds a semantic dimension to the overall conceptualization. The strategy connects to the other strategies by modeling both the actual use of ‘artificial agency’ in basic research as well as what is meant by it in each of the other three strategies. This enables researchers to bridge the gap between theory and practice by comparing the meaning of artificial agency in both an academic as well as in a practical context.

(Featured) The five tests: designing and evaluating AI according to indigenous Māori principles

Luke Munn provides a critical analysis of the current paradigms of artificial intelligence (AI) development and offers a framework for a decolonial AI. The author argues that existing AI paradigms reproduce and reinforce coloniality and its attendant inequalities. To overcome this, he proposes a framework based on Indigenous concepts from Aotearoa (New Zealand), which offers a distinct set of principles and priorities that challenge Western technocratic norms. The framework is centered on five tests, developed by Māori scholar Sir Hirini Moko Mead, that prioritize human dignity, communal integrity, and ecological sustainability. The author suggests that the application of these tests can guide the design and development of AI products in a way that is more inclusive, thoughtful, and attentive to life in its various forms.

The author identifies two distinct pathways for applying his framework. The first pathway is designing, which involves applying the principles and priorities of the Five Tests to the development of AI products that are currently in progress. This involves asking questions about how these products can respect the sacred, preserve or enhance life force, and reconcile negative impacts in acceptable ways. The author suggests that iterative versions of software can be developed by engaging genuinely with these questions and resolving them through code, architectures, interfaces, and affordances. The second pathway is decolonizing, which involves a deeper and more sustained confrontation with current AI regimes. This pathway involves challenging generic, universalizing frames, stressing the connection and interdependence of human and ecological well-being, and carefully considering potential impacts and developing ways to mitigate or redress them to the satisfaction of involved parties.

The author’s framework challenges current AI paradigms and practices by raising questions about what data-driven technology should be doing, how it can be designed in ways that are more inclusive, communal, and sustainable, and what values and norms should be used to judge the success of a particular technology. He suggests that these questions are epistemological, cultural and historical, and social in nature, and he argues that understanding and undoing systems of inequality that have been formalized and fossilized over time is a massive undertaking that demands a long-term project prioritizing social justice.

In broader philosophical terms, this paper raises questions about the relationship between technology and power, the role of knowledge systems in shaping our understanding of the world, and the importance of Indigenous perspectives in challenging dominant paradigms. The author’s framework challenges the Western-centric assumptions that underpin current AI paradigms and highlights the importance of recognizing and respecting diverse perspectives and ways of knowing.

Future research could explore the practical implications of the Five Tests framework and how it can be applied in different contexts. It could also examine the ways in which Indigenous perspectives can challenge dominant paradigms in other fields, such as philosophy, politics, and economics. Additionally, research could investigate the potential for cross-cultural collaboration in the development of AI and other technologies, and how this collaboration can facilitate the recognition and respect of diverse perspectives and knowledge systems. Finally, research could consider the broader implications of the author’s framework for the relationship between technology and power and the potential for decolonial approaches to reshape our understanding of the role of technology in society.

Abstract

As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternate set of values, paradigms, and priorities are urgently needed. How might we design and evaluate AI from an indigenous perspective? This article draws upon the five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. This paper takes up these tests, considers how each test might be applied to data-driven systems, and provides a number of concrete examples. This intervention challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an indigenous knowledge system.

(Featured) The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare

James Johnson explores the ethical and psychological implications of integrating AI into warfare. The author argues that the use of autonomous weapons in warfare may create moral vacuums that eliminate meaningful ethical and moral deliberation in the quest for riskless and rational war. Moreover, the author argues that the human-machine integration process is part of a broader evolutionary dovetailing of humanity and technology. The logical end of this trajectory is an AI commander, which would effectively outsource ethical decision-making to machines that are ill-equipped to fill this ethical and moral void.

The author also explores the limitations of AI in distinguishing between legitimate and illegitimate targets in asymmetric conflicts, such as insurgencies and civil wars. He stresses the importance of recognizing the personhood of the enemy in warfare and argues that until AI can achieve this moral standing, it will be unable to meet the requirements of jus in bello. Additionally, Johnson argues that human judgment and prediction, while imperfect, remain necessary in warfare because humans can recognize subtle cues that machines cannot.

The paper highlights three key psychological insights regarding human-machine interactions and political-ethical dilemmas in future AI-enabled warfare. First, Johnson argues that human-machine integration is a socio-technical psychological process that is part of a broader evolutionary dovetailing of humanity and technology. Second, he argues that biases associated with human-machine interactions can compound the “illusion of control” problem. Third, he suggests that coding human ethics into AI algorithms is technically, theoretically, ontologically, and psychologically problematic, as well as ethically and morally questionable.

This paper raises important philosophical questions about the relationship between technology and ethics. It highlights the risks associated with outsourcing ethical decision-making to machines and emphasizes the importance of recognizing the personhood of the enemy in warfare. The paper also underscores the limitations of AI in distinguishing between legitimate and illegitimate targets and the importance of human judgment in recognizing subtle cues that machines cannot. Ultimately, this paper challenges us to consider the role of technology in shaping our ethical and moral decision-making processes.

Future research in this area could explore the psychological and ethical implications of human-machine integration in other domains, such as healthcare or criminal justice. Additionally, research could focus on developing AI systems that are capable of understanding the complexities of human ethics and morality. This research could also explore ways to incorporate ethical decision-making into AI algorithms without sacrificing human agency and accountability. Finally, research could explore the broader philosophical implications of the use of AI in warfare and consider the ethical and moral implications of a world in which machines are increasingly integrated into our lives.

Abstract

Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights of human-machine interactions to elucidate how AI shapes our capacity to think about future warfare’s political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become de facto strategic actors in warfare – the “AI commander problem.”

(Featured) Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Cian Brennan argues for a version of transhumanism that incrementally applies moderate enhancements to future human beings, rather than pursuing radical enhancements in a more immediate and extreme manner. The paper begins by presenting the critique of transhumanism put forward by Nicholas Agar, which centers on the potential negative consequences of radical enhancement. The author argues that Agar’s critique is aimed at the effects of radical enhancement, rather than at the concept of radical enhancement itself. By assuming that radical enhancement will be applied gradually across future generations, the author argues that weak transhumanism can overcome Agar’s objections.

The author then discusses objections to weak transhumanism, including the potential for an eventual radical enhancement to emerge and the difficulty of identifying when an enhancement becomes radical. The author responds to these objections by proposing a checklist of characteristic features that can be used to identify radical enhancements, such as the creation of new or extended abilities, changes in moral status, and significant changes in vulnerability or relatability between the enhanced and unenhanced.

Overall, the paper provides a nuanced and detailed defense of weak transhumanism, offering a way to pursue radical enhancements while avoiding some of the potential negative consequences of more radical approaches. The paper engages with a range of objections and provides a thoughtful and well-supported response to each, drawing on both philosophical and scientific sources.

The paper has implications for broader philosophical issues surrounding the ethics of human enhancement, the relationship between technology and society, and the nature of human identity and personhood. By focusing on the incremental application of enhancements, the paper raises questions about the degree to which human beings can be transformed by technology without losing their essential human nature. It also highlights the role of societal values and norms in shaping the development and application of enhancement technologies.

Future research in this area could build on the author’s checklist of characteristic features of radical enhancements, exploring the extent to which these features are necessary and sufficient conditions for defining radical enhancements. Further research could also examine the potential consequences of weak transhumanism, including the ways in which incremental enhancements may interact with each other over time and the potential for unintended consequences. Finally, future research could explore the social and cultural dimensions of transhumanism, including the ways in which transhumanist values and practices may be shaped by factors such as gender, race, and socioeconomic status.

Abstract

Transhumanism aims to bring about radical human enhancement. In ‘Truly Human Enhancement’ Agar (2014) provides a strong argument against producing radically enhancing effects in agents. This leaves the transhumanist in a quandary—how to achieve radical enhancement whilst avoiding the problem of radically enhancing effects? This paper aims to show that transhumanism can overcome the worries of radically enhancing effects by instead pursuing radical human enhancement via incremental moderate human enhancements (Weak Transhumanism). In this sense, weak transhumanism is much like traditional transhumanism in its aims, but starkly different in its execution. This version of transhumanism is weaker given the limitations brought about by having to avoid radically enhancing effects. I consider numerous objections to weak transhumanism and conclude that the account survives each one. This paper’s proposal of ‘weak transhumanism’ has the upshot of providing a way out of the ‘problem of radically enhancing effects’ for the transhumanist, but this comes at a cost—the restrictive process involved in applying multiple moderate enhancements in order to achieve radical enhancement will most likely be dissatisfying for the transhumanist, however, it is, I contend, the best option available.

(Featured) The epistemic impossibility of an artificial intelligence take-over of democracy

Daniel Innerarity explores the limits of algorithmic governance in relation to democratic decision-making. He argues that algorithms function with a 0/1 logic that is the opposite of ambiguity, and that they are unable to handle complex problems that are not well-structured or quantifiable. Politics, on this view, consists of making decisions in the absence of indisputable evidence, circumstances in which algorithms are of limited utility: algorithmic rationality reduces the complexity of social phenomena to numbers, whereas political decisions are rarely based on binary categories.

The author accordingly restricts the proper domain of algorithms to well-structured and quantifiable problems. Democratic decision-making falls outside this domain, since it must proceed amid ambiguity and cannot be reduced to the numbers and binary categories on which algorithmic rationality depends.

For Innerarity, the epistemological principle of uncertainty is central to democratic institutions. Democracy exists precisely because our knowledge is so limited and we are so prone to error. Where our knowledge is incomplete, we have greater need for institutions and procedures that favour reflection, debate, criticism, independent advice, reasoned argumentation, and the competition of ideas and visions. Our democratic institutions are not an exhibition of how much we know but a recognition of our ignorance.

The research presented in this paper is significant for broader philosophical issues related to the relationship between knowledge, power, and democratic decision-making. It raises questions about the role of algorithms in decision-making and the limits of rationality in politics. It also highlights the importance of uncertainty, ambiguity, and contingency in democratic decision-making, which has important implications for the legitimacy of democratic institutions.

Future research could explore the implications of these findings for the development of democratic institutions and the role of algorithms in decision-making. It could also investigate the role of uncertainty, ambiguity, and contingency in decision-making more broadly and its relationship to different philosophical traditions. Furthermore, it could consider the implications of these findings for the development of more participatory and deliberative forms of democracy that allow for greater reflection, debate, and criticism.

Abstract

Those who claim, whether with fear or with hope, that algorithmic governance can control politics or the whole political process or that artificial intelligence is capable of taking charge of or wrecking democracy, recognize that this is not yet possible with our current technological capabilities but that it could come about in the future if we had better quality data or more powerful computational tools. Those who fear or desire this algorithmic suppression of democracy assume that something similar will be possible someday and that it is only a question of technological progress. If that were the case, no limits would be insurmountable on principle. I want to challenge that conception with a limit that is less normative than epistemological; there are things that artificial intelligence cannot do, because it is unable to do them, not because it should not do them, and this is particularly apparent in politics, which is a peculiar decision-making realm. Machines and people take decisions in a very different fashion. Human beings are particularly gifted at one type of situation and very clumsy in others. The part of politics that is, strictly speaking, political is where this contrast and our greatest aptitude are most apparent. If that is the case, as I believe, then the possibility that democracy will one day be taken over by artificial intelligence is, as a fear or as a desire, manifestly exaggerated. The corresponding counterpart to this is: if the fear that democracy could disappear at the hands of artificial intelligence is not realistic, then we should not expect exorbitant benefits from it either. For epistemic reasons that I will explain, it does not seem likely that artificial intelligence is capable of taking over political logic.
