(Relevant Literature) Philosophy of Futures Studies:
July 16th, 2023 – July 22nd, 2023

The Weight of Statistical Evidence and Algorithmic Heuristics
Abstract
The use of algorithms to support prediction-based decision-making is becoming commonplace in a range of domains including health, criminal justice, education, social services, lending, and hiring. An assumption governing such decisions is that there is a property Y such that individual a should be allocated resource R by decision-maker D if a is Y. When there is uncertainty about whether a is Y, algorithms may provide valuable decision support by accurately predicting whether a is Y on the basis of known features of a. Based on recent work on statistical evidence in epistemology, this article presents an argument against relying exclusively on algorithmic predictions to allocate resources when they provide purely statistical evidence that a is Y. The article then responds to the objection that any evidence that will increase the proportion of correct decisions should be accepted as the basis for allocations, regardless of its epistemic deficiency. Finally, some important practical aspects of the conclusion are considered.
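Stated in symbols, the allocation schema the abstract describes looks roughly as follows. This is a minimal restatement for orientation only; the feature vector X_a, the predictive model f, and the threshold t are illustrative notation introduced here, not the paper's own.

% The abstract's governing assumption: D allocates R to a iff a is Y.
\[ D \text{ allocates } R \text{ to } a \iff Y(a) \]
% Under uncertainty about Y(a), an algorithm predicts it from known features X_a.
% Here f and t are hypothetical notation for the model and its decision threshold.
\[ \hat{p} = f(X_a) \approx \Pr\big(Y(a) \mid X_a\big), \qquad \text{allocate iff } \hat{p} \ge t \]

The paper's worry, on this reading, is about cases where \(\hat{p}\) rests on purely statistical evidence about individuals like a, rather than evidence about a in particular.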

Reflecting on AI-centered, Evidence-based Medicine
Abstract
When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that within the framework of evidence-based medicine mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and pointing out how a mere AI output cannot in itself ground a practical conclusion about what to do.

Supervising AI
Human Control of Artificial Intelligent Systems: A Critical Review of Key Challenges and Approaches
Abstract
Understanding how humans should and do control artificial intelligence (AI) systems is central to many research areas, with applications ranging from self-driving vehicles to cybersecurity and national defence. We analyse a multi-disciplinary body of literature to show gaps and inconsistencies in existing approaches to human control of AI, highlighting, in particular, the practical challenges that stem from supervisory, task-specific, human control as a prominent paradigm. We conclude our analysis with a proposal to move away from this paradigm and instead consider approaches based on cooperating agents and the human–machine teaming (HMT) paradigm which, we argue, fit better with the capabilities and risks posed by AI systems.

Defending Cyborg Rights

Building Meta-ethical Frameworks for Digital Futures
Abstract
The article explores the possibility of a meta-theoretical framework for navigating the ethical dilemmas of digital futures in light of evolving machine agency and autonomy. It examines three future-oriented scenarios with evolving agential capabilities for machines and uses John Rawls's theory of justice to conduct a hypothetico-deductive analysis of each scenario, ascertaining what conditions are necessary for the social vindication of our technological futures. The analysis indicates that such a framework is both possible and necessary for the democratic legitimacy of technological futures. It bears directly on pivotal questions of human experience in the 21st century, namely the legitimization of institutions, civic engagement, and values of trust.

Students and Professors Collaborating with AI
Abstract
The transformative power of artificial intelligence (AI) is coming to philosophy; the only question is the degree to which philosophers will harness it. This paper argues that the application of AI tools to philosophy could have an impact on the field comparable to the advent of writing, and that it is likely that philosophical progress will significantly increase as a consequence of AI. The role of philosophers in this story is not merely to use AI but also to help develop it and theorize about it. In fact, the paper argues that philosophers have a prima facie obligation to spend significant effort in doing so, at least insofar as they should spend effort philosophizing.

Empathetic Robot
Abstract
Given the accelerating powers of artificial intelligence (AI), we must equip artificial agents and robots with empathy to prevent harmful and irreversible decisions. Current approaches to artificial empathy focus on its cognitive or performative processes, overlooking affect, and thus promote sociopathic behaviors. Artificially vulnerable, fully empathic AI is necessary to prevent sociopathic robots and protect human welfare.

A Neurotechnology Conference
Exploring the rapidly evolving landscape of neurotechnology, a new UNESCO report reveals key trends, advancements, and ethical considerations. Since 2013, governmental investments have surpassed $6 billion, while private investments reached $33.2 billion by 2020. The U.S. leads in neuroscience publications and patents, closely followed by South Korea, China, Japan, Germany, and France. Ethical concerns around human dignity, personal identity, and autonomy are increasingly significant. Computer technology emerges as the most crucial field in neurotech, outpacing medical tech, biotech, and pharmaceuticals. As neurotech evolves, a comprehensive global governance framework is imperative to safeguard human rights.

Human Germline Gene Editing
Abstract
Given the potentially large ethical and societal implications of human germline gene editing (HGGE), the urgent need for public and stakeholder engagement (PSE) has been repeatedly expressed. In this short communication, we aim to provide directions for broad and inclusive PSE by emphasizing the importance of futures literacy, the skill of imagining diverse and multiple futures and using them as lenses to look at the present anew. By first addressing “what if” questions in PSE, different futures come into focus, and the limitations that arise when starting with the “whether” or “how” questions about HGGE can be avoided. Futures literacy can also aid in the goal of societal alignment, as “what if” questions can be answered in many different ways, thereby opening up the conversation to explore a multitude of values and needs of various publics. Broad and inclusive PSE on HGGE starts with asking the right questions.

The Human Vivarium
Abstract
Homes are increasingly being built as sensor-laden living environments to test the performance of novel technologies in interaction with real people. When people’s homes are turned into the site of experiments, the inhabitants become research subjects. This paper employs findings from biomedical research ethics to evaluate live-in laboratories and argues that when live-in laboratories function as a participant’s main residence, they constrain the individual’s so-called ‘right to withdraw’. Withdrawing from a live-in laboratory that serves as one’s main residence means losing one’s home, which creates negative financial and psychological consequences for participants. I will argue that such costs conflict with a participant’s right to withdraw on two counts: first, the exit costs from the live-in laboratory constitute a penalty; second, the costs of withdrawing from the live-in laboratory function as a constraint on a participant’s liberty. The paper concludes that (i) the right to withdraw is a necessary condition for the ethical permissibility of modern live-in lab experiments and (ii) the practice of making an experimental home a participant’s main residence is ethically problematic.

Representation of a Carbon Nanotube
Abstract
Carbon nanotubes (CNTs) are one of the first examples of nanotechnology, with a history of promising uses and high expectations. This paper uses the recent debate over their future to explore both ethical and value-laden statements which unsettle the notion of CNTs as a value-free nanotechnology and their regulation as purely a technical affair. A point of departure is made with the inclusion of CNTs on the Substitute-It-Now list by the Swedish NGO ChemSec, an assessment process that anticipates and complements the Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulation in Europe. An argument map is constructed to illustrate the core contention in the debate—should CNTs be substituted or not—which follows from a systematic literature review and content analysis of sampled journal articles. Nine arguments are articulated that bolster one of two camps: the pro-substitution camp or the contra-substitution camp. Beneath these arguments are a set of three implicit values that animate these two camps in prescribing competing interventions to resolve the dispute: (i) environmental protection and human safety, (ii) good science, and (iii) technological progress. This leads to a discussion around the regulatory problem of safeguarding conflicting values in decision-making under sustained scientific uncertainty. Finally, the study suggests further empirical work on specific nanomaterials in a pivot away from the abstract, promissory nature of nanotechnology and other emerging technologies in science, technology, and innovation policy. The examination of ethics and values is useful for mapping controversies in science and technology studies of regulation, even amongst experts in cognate research fields like nanomedicine and nanotoxicology.

A Gene Drive
Abstract
Gene drives are potentially ontologically and morally disruptive technologies. The potential to shape evolutionary processes and to eradicate (e.g. malaria-transmitting or invasive) populations raises ontological questions about evolution, nature, and wilderness. The transformative promises and perils of gene drives also raise pressing ethical and political concerns. The aim of this article is to arrive at a better understanding of the gene drive debate by analysing how ontological and moral assumptions are coproduced in this debate. Combining philosophical analysis with a critical reading of the gene drive literature and an ethnographic study of two leading research groups, the article explores the hypothesis that the development of and debate about gene drives are characterized by a particular intervention-oriented mode of coproduction. Based on the results of this exploration, we highlight the need for a broadening of the perspective on gene drives in which empirical, moral, and ontological concerns are addressed explicitly in their interplay rather than in (disciplinary) isolation from each other.

