(Featured) Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control

Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control

Using scenario forecasting, Kyle A. Kilian, Christopher J. Ventura, and Mark M. Bailey propose a diverse range of future trajectories for Artificial Intelligence (AI) development. Rooted in futures studies, a multidisciplinary field that seeks to understand the uncertainties and complexities of the future, they methodically delineate four scenarios, namely Balancing Act, Accelerating Change, Shadow Intelligent Networks, and Emergence, contributing not only to our understanding of the prospective courses of AI technology but also underlining the technology's broader social and philosophical implications.

The crux of the authors' scenario development process resides in an interdisciplinary and philosophically informed approach, scrutinizing both the plausibility and the consequences of each potential future. This approach positions AI as more than a purely technological phenomenon; it recognizes AI as an influential force capable of reshaping the fundamental structures of human experience and society. The study thus sets the stage for an extensive analysis of the philosophical implications of these AI futures, catalyzing dialogues at the intersection of AI, philosophy, ethics, and futures studies.

Scenario Development

The authors advance the philosophy of futures studies by conceptualizing and detailing four distinct scenarios for AI development. These forecasts are constructions predicated on an extensive array of plausible scientific, sociological, and ethical variables. Each scenario encapsulates a unique balance of these variables, and thus, portrays an alternative trajectory for AI’s evolution and its impact on society. The four scenarios—Balancing Act, Accelerating Change, Shadow Intelligent Networks, and Emergence—offer a vivid spectrum of potential AI futures, and by extension, futures for humanity itself.

In “Balancing Act”, AI progresses within established societal structures and ethical frameworks, presenting a future where regulation and development maintain an equilibrium. The “Accelerating Change” scenario envisages an exponential increase in AI capabilities, radically transforming societal norms and structures. “Shadow Intelligent Networks” constructs a future where AI’s growth happens covertly, leading to concealed, inaccessible power centers. Lastly, in “Emergence”, AI takes an organic evolutionary path, exhibiting unforeseen characteristics and capacities. These diverse scenarios are constructed with a keen understanding of AI’s potential, reflecting the depth of the authors’ interdisciplinary approach.

The Spectrum of AI Risks and Their Broader Philosophical Context

These four scenarios for AI development furnish a fertile ground for philosophical contemplation. Each scenario implicates distinct ethical, existential, and societal dimensions, demanding a versatile philosophical framework for analysis. “Balancing Act”, exemplifying a regulated progression of AI, broaches the age-old philosophical debate on freedom versus control and the moral conundrums associated with regulatory practices. “Accelerating Change” nudges us to consider the very concept of human identity and purpose in a future dominated by superintelligent entities. “Shadow Intelligent Networks” brings to light a potential future where power structures are concealed and unregulated, echoing elements of Foucault’s panopticism and revisiting concepts of power, knowledge, and their confluence. “Emergence”, with its focus on organic evolution of AI, prompts a dialogue on philosophical naturalism, while also raising queries about unpredictability and the inherent limitations of human foresight. These scenarios, collectively, invite profound introspection about our existing philosophical frameworks and their adequacy in the face of an AI-pervaded future.

This exposition situates the potential hazards of AI within a broad spectrum, ranging from tangible, immediate concerns such as privacy violations and job displacement to the existential risks linked with superintelligent AI, including the relinquishment of human autonomy. The spectrum of AI risks engages with wider socio-political and ethical landscapes, prompting us to grapple with the potential for asymmetries in power distribution, accountability dilemmas, and ethical quandaries tied to autonomy and human rights. By placing these risks in a broader context, the authors effectively extend the discourse beyond the technical realm, highlighting the multidimensionality of the issues at hand and emphasizing the need for an integrated, cross-disciplinary approach. This lens encourages a reevaluation of established philosophical premises to comprehend and address the emerging realities of our future with AI.

While this research is an illuminating exploration into the possible futures of AI, it simultaneously highlights a myriad of avenues for further research. The task of elucidating the connections between AI, society, and philosophical thought remains an ongoing process, requiring more nuanced perspectives. Areas that warrant further investigation include deeper dives into specific societal changes precipitated by AI, such as shifts in economic structures, political systems, or bioethical norms. The potential impacts of AI on human consciousness and the conception of 'self' also offer fertile ground for research. Furthermore, the study of mitigation strategies for AI risks, including the development of robust ethical frameworks for AI usage, needs to be brought to the forefront. Such an examination may entail both an expansion of traditional philosophical discourses and an exploration of innovative, AI-informed paradigms.

Abstract

Artificial Intelligence (AI) is one of the most transformative technologies of the 21st century. The extent and scope of future AI capabilities remain a key uncertainty, with widespread disagreement on timelines and potential impacts. As nations and technology companies race toward greater complexity and autonomy in AI systems, there are concerns over the extent of integration and oversight of opaque AI decision processes. This is especially true in the subfield of machine learning (ML), where systems learn to optimize objectives without human assistance. Objectives can be imperfectly specified or executed in an unexpected or potentially harmful way. This becomes more concerning as systems increase in power and autonomy, where an abrupt capability jump could result in unexpected shifts in power dynamics or even catastrophic failures. This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis. Survey data were collected from domain experts in the public and private sectors to classify AI impact and likelihood. The results show increased uncertainty over the powerful AI agent scenario, confidence in multiagent environments, and increased concern over AI alignment failures and influence-seeking behavior.

Examining the differential risk from high-level artificial intelligence and the question of control

(Featured) Deepfakes and the epistemic apocalypse

Deepfakes and the epistemic apocalypse

Joshua Habgood-Coote critically examines the common perception that deepfakes represent a unique and unprecedented threat to our epistemic landscape. They argue that such a viewpoint is misguided and that deepfakes should be understood as a social problem rather than a purely technological one. The author offers three main lines of criticism to counter the narrative of deepfakes as harbingers of an epistemic apocalypse. First, they propose that the knowledge we gain from recordings is a special case of knowledge from instruments, which relies on social practices around the design, operation, and maintenance of recording technology. Second, they present historical examples of manipulated recordings to demonstrate that deepfakes are not a novel phenomenon, and that social practices have been employed in the past to address similar issues. Third, they contend that technochauvinism and the post-truth narrative have obscured potential social measures to address deepfakes.

The author argues that deepfakes are embedded in a techno-social context and should be treated as part of the broader social practices involved in the production of knowledge and ignorance. They suggest that examining historical episodes of deceptive recordings can provide valuable insights into how social norms and community policing could be utilized to address the challenges posed by deepfakes. Moreover, the author emphasizes that the most serious harms associated with deepfake videos are likely to be consequences of established ignorance-producing social practices affecting minority and marginalized groups.

By reframing deepfakes as a social problem, the paper challenges the notion that the technology itself is inherently dangerous and urges us to consider how our social practices contribute to the production and dissemination of manipulated recordings. This approach highlights the interdependence between technology and society, and offers a more nuanced understanding of the ethical, political, and epistemic implications of deepfakes.

In the broader philosophical context, this paper raises important questions about the nature of knowledge, the role of trust in our epistemic practices, and the relationship between technology and the social dynamics of knowledge production. It also contributes to ongoing debates in social epistemology, emphasizing the collective nature of knowledge and the responsibility that society bears in shaping our epistemic landscape.

Future research could explore other historical episodes of manipulated recordings and the social responses that emerged to address them, further informing our understanding of how to manage the challenges posed by deepfakes. Additionally, scholars could investigate the role of institutional actors, such as governments and media organizations, in shaping and reinforcing norms and practices around the production and dissemination of recordings. This line of inquiry could lead to a more comprehensive understanding of the techno-social context in which deepfakes operate and inform policy recommendations for mitigating their potential harms.

Abstract

It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented epistemic apocalypse. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue for three claims: (1) that once we recognise the role of social norms in the epistemology of recordings, deepfakes are much less concerning, (2) that the history of photographic manipulation reveals some important precedents, correcting claims about the novelty of deepfakes, and (3) that proposed solutions to deepfakes have been overly focused on technological interventions. My overall goal is not so much to argue that deepfakes are not a problem, but to argue that behind concerns around deepfakes lie a more general class of social problems about the organisation of our epistemic practices.

Deepfakes and the epistemic apocalypse

(Featured) Germline Gene Editing: The Gender Issues

Germline Gene Editing: The Gender Issues

Iñigo de Miguel Beriain et al. delve into the complex relationship between gene editing technologies and the role of women in assisted reproductive techniques (ART). The paper is divided into two main sections, exploring both the potential benefits and drawbacks of gene editing in the context of ART for women. The first section examines the ways in which gene editing may improve the position of women within ART, highlighting the possibilities of reducing physical suffering, improving the efficiency of in vitro fertilization (IVF), and reducing the number of embryos discarded. The second section, on the other hand, highlights the potential risks and disadvantages associated with gene editing, focusing on the unequal burden placed on women in the process, the societal pressures that may arise, and the potential for gene editing to become a tool of oppression against women.

The authors begin by discussing the current state of ART, which often places a significant burden on women, both physically and emotionally. They argue that the advent of gene editing technologies, such as CRISPR-Cas9, has the potential to alleviate some of these burdens by improving the efficiency of IVF and reducing the number of discarded embryos. In turn, this could lead to a reduction in the physical suffering experienced by women undergoing these procedures. The authors also emphasize the potential of gene editing to create a more level playing field in the realm of procreation, as it may allow for a more equal distribution of genetic risks between men and women.

However, the paper also examines the potential drawbacks of widespread gene editing adoption. The authors argue that the process of gene editing involves significant risks to women, as it requires the use of biological material extracted from their bodies. Furthermore, failed experiments or harmful outcomes from gene editing procedures may have severe physical and psychological consequences for pregnant women. The authors also discuss the potential future implications of gene editing, which could lead to a societal shift in attitudes towards procreation, ultimately placing even greater burdens on women. They highlight the potential for societal pressure to force women to undergo gene editing, resulting in a loss of freedom and an increase in gender bias.

From a philosophical standpoint, the paper raises important questions about the ethics of gene editing and the distribution of burdens and responsibilities between men and women in the realm of reproduction. The potential societal shift in attitudes towards procreation, as discussed in the paper, forces us to consider the implications of prioritizing genetic modifications over natural processes. Furthermore, the paper calls into question the potential consequences of utilizing new technologies without fully understanding their implications on gender dynamics and societal norms.

The paper also opens up avenues for future research, particularly in the realm of bioethics and the societal implications of gene editing technologies. Future studies could explore the psychological effects of societal pressure on women who choose not to undergo gene editing, as well as the ethical implications of altering future generations’ genetic makeup. Additionally, research could investigate the potential long-term consequences of widespread gene editing on genetic diversity, and whether it could inadvertently lead to the exacerbation of existing inequalities. Ultimately, this paper serves as a crucial starting point for deeper exploration into the complex relationship between gene editing, ART, and the position of women in society.

Abstract

Human germline gene editing constitutes an extremely promising technology; at the same time, however, it raises remarkable ethical, legal, and social issues. Although many of these issues have been largely explored by the academic literature, there are gender issues embedded in the process that have not received the attention they deserve. This paper examines ways in which this new tool necessarily affects males and females differently—both in rewards and perils. The authors conclude that there is an urgent need to include these gender issues in the current debate, before giving a green light to this new technology.

Germline Gene Editing: The Gender Issues

(Featured) Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Cian Brennan argues for a version of transhumanism that incrementally applies moderate enhancements to future human beings, rather than pursuing radical enhancements in a more immediate and extreme manner. The paper begins by presenting the critique of transhumanism put forward by Nicholas Agar, which centers on the potential negative consequences of radical enhancement. The author argues that Agar's critique is aimed at the effects of radical enhancement, rather than the concept of radical enhancement itself. By assuming that radical enhancement will be achieved gradually, through successive moderate enhancements applied to future generations, the author contends that weak transhumanism can overcome Agar's objections.

The author then discusses objections to weak transhumanism, including the potential for an eventual radical enhancement to emerge and the difficulty of identifying when an enhancement becomes radical. The author responds to these objections by proposing a checklist of characteristic features that can be used to identify radical enhancements, such as the creation of new or extended abilities, changes in moral status, and significant changes in vulnerability or relatability between the enhanced and unenhanced.

Overall, the paper provides a nuanced and detailed defense of weak transhumanism, offering a way to pursue radical enhancements while avoiding some of the potential negative consequences of more radical approaches. The paper engages with a range of objections and provides a thoughtful and well-supported response to each, drawing on both philosophical and scientific sources.

The paper has implications for broader philosophical issues surrounding the ethics of human enhancement, the relationship between technology and society, and the nature of human identity and personhood. By focusing on the incremental application of enhancements, the paper raises questions about the degree to which human beings can be transformed by technology without losing their essential human nature. It also highlights the role of societal values and norms in shaping the development and application of enhancement technologies.

Future research in this area could build on the author’s checklist of characteristic features of radical enhancements, exploring the extent to which these features are necessary and sufficient conditions for defining radical enhancements. Further research could also examine the potential consequences of weak transhumanism, including the ways in which incremental enhancements may interact with each other over time and the potential for unintended consequences. Finally, future research could explore the social and cultural dimensions of transhumanism, including the ways in which transhumanist values and practices may be shaped by factors such as gender, race, and socioeconomic status.

Abstract

Transhumanism aims to bring about radical human enhancement. In ‘Truly Human Enhancement’ Agar (2014) provides a strong argument against producing radically enhancing effects in agents. This leaves the transhumanist in a quandary—how to achieve radical enhancement whilst avoiding the problem of radically enhancing effects? This paper aims to show that transhumanism can overcome the worries of radically enhancing effects by instead pursuing radical human enhancement via incremental moderate human enhancements (Weak Transhumanism). In this sense, weak transhumanism is much like traditional transhumanism in its aims, but starkly different in its execution. This version of transhumanism is weaker given the limitations brought about by having to avoid radically enhancing effects. I consider numerous objections to weak transhumanism and conclude that the account survives each one. This paper’s proposal of ‘weak transhumanism’ has the upshot of providing a way out of the ‘problem of radically enhancing effects’ for the transhumanist, but this comes at a cost—the restrictive process involved in applying multiple moderate enhancements in order to achieve radical enhancement will most likely be dissatisfying for the transhumanist, however, it is, I contend, the best option available.

Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement