(Relevant Literature) Philosophy of Futures Studies:
July 23rd, 2023 – July 29th, 2023

Feeding Psychiatric AI Brain Data
Abstract

Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.

Socrates Holding a Warming Globe
Abstract
Over the past two decades, virtue ethicists have begun to devote increasing attention to applied ethics. In particular, the application of virtue ethical frameworks to the environmental ethics debate has flourished. This chapter reviews recent contributions to the literature in this field and highlights some strengths and weaknesses of thinking about climate change through a virtue ethical lens. Section “Two Benefits of Virtue Ethical Approaches to Climate Change” explores two benefits of applying virtue ethics to climate change: (a) we can better capture the phenomenology of our moral experience, and (b) we avoid the problem of inconsequentialism. Section “A Catalogue of Environmental Virtues” analyzes various practical proposals that have been put forward in the form of specific environmental virtues. Section “An Objection to Virtue-Oriented Approaches to Climate Change” reconstructs a fundamental objection to the idea of using a virtue ethical normative approach to tackling the urgent and imminent dangers of climate change.
Writing with GPT
Abstract

In this article, we explore the potential of enhancing academic prose and idea generation by fine-tuning a large language model (here, GPT-3) on one’s own previously published writings: AUTOGEN (“AI Unique Tailored Output GENerator”). We develop, test, and describe three distinct AUTOGEN models trained on the prior scholarly output of three of the current authors (SBM, BDE, JS), with a fourth model trained on the combined works of all three. Our AUTOGEN models demonstrate greater variance in quality than the base GPT-3 model, with many outputs outperforming the base model in format, style, overall quality, and novel idea generation. As proof of principle, we present and discuss examples of AUTOGEN-written sections of existing and hypothetical research papers. We further discuss ethical opportunities, concerns, and open questions associated with personalized academic prose and idea generators. Ethical opportunities for personalized LLMs such as AUTOGEN include increased productivity, preservation of writing styles and cultural traditions, and aiding consensus building. However, ethical concerns arise due to the potential for personalized LLMs to reduce output diversity, violate privacy and intellectual property rights, and facilitate plagiarism or fraud. The use of coauthored or multiple-source trained models further complicates issues surrounding ownership and attribution. Open questions concern a potential credit-blame asymmetry for LLM outputs, the legitimacy of licensing agreements in authorship ascription, and the ethical implications of coauthorship attribution for data contributors. Ensuring the output is sufficiently distinct from the source material is crucial to maintaining ethical standards in academic writing. These opportunities, risks, and open issues highlight the intricate ethical landscape surrounding the use of personalized LLMs in academia. We also discuss open technical questions concerning the integration of AUTOGEN-style personalized LLMs with other LLMs, such as GPT-4, for iterative refinement and improvement of generated text. In conclusion, we argue that AUTOGEN-style personalized LLMs offer significant potential benefits in terms of both prose generation and, to a lesser extent, idea generation. If associated ethical issues are appropriately addressed, AUTOGEN alone or in combination with other LLMs can be seen as a potent form of academic enhancement.
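The abstract does not publish AUTOGEN's training pipeline, but the general workflow it describes, fine-tuning a GPT-3 base model on an author's own published prose, can be sketched against the 2023-era OpenAI fine-tuning API. Everything below (file names, the prompt/completion format, the choice of the "davinci" base model) is an illustrative assumption, not the authors' actual configuration:

```python
# Hypothetical sketch of the workflow the abstract describes: fine-tuning
# a GPT-3 base model on an author's own published prose, via the 2023-era
# OpenAI fine-tuning API. All names and settings are illustrative.
import json
import openai

openai.api_key = "sk-..."  # placeholder

# 1. Prepare training records: passages of the author's prior writing,
#    in the prompt/completion format the fine-tuning endpoint expects.
examples = [
    {"prompt": "Abstract:", "completion": " <text of a published abstract>"},
    # ... one record per passage of prior scholarly output
]
with open("author_corpus.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. Upload the corpus and launch a fine-tune of a GPT-3 base model.
upload = openai.File.create(file=open("author_corpus.jsonl", "rb"),
                            purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll this job; when done, it yields a personalized model
```

The resulting personalized model name can then be passed to the ordinary completions endpoint in place of the base model, which is what makes side-by-side comparisons of the kind the authors report straightforward to run.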

Censoring Language Models
Abstract
Large language models (LLMs) have exhibited impressive capabilities in comprehending complex instructions. However, their blind adherence to provided instructions has led to concerns regarding risks of malicious use. Existing defence mechanisms, such as model fine-tuning or output censorship using LLMs, have proven to be fallible, as LLMs can still generate problematic responses. Commonly employed censorship approaches treat the issue as a machine learning problem and rely on another LM to detect undesirable content in LLM outputs. In this paper, we present the theoretical limitations of such semantic censorship approaches. Specifically, we demonstrate that semantic censorship can be perceived as an undecidable problem, highlighting the inherent challenges in censorship that arise due to LLMs’ programmatic and instruction-following capabilities. Furthermore, we argue that the challenges extend beyond semantic censorship, as knowledgeable attackers can reconstruct impermissible outputs from a collection of permissible ones. As a result, we propose that the problem of censorship needs to be reevaluated; it should be treated as a security problem which warrants the adaptation of security-based approaches to mitigate potential risks.
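The second argument, that knowledgeable attackers can rebuild impermissible outputs from permissible ones, is easy to illustrate with a deliberately crude toy (my construction for illustration, not the paper's): each fragment passes a naive phrase-matching censor, yet the blocked string is recovered by simple recombination on the attacker's side.

```python
# Toy illustration (not from the paper) of reconstructing an impermissible
# output from individually permissible fragments that each pass a naive
# phrase-matching censor.
BLOCKLIST = {"forbidden instruction"}

def censor_passes(output: str) -> bool:
    """Naive censor: accept any output containing no blocked phrase."""
    return not any(bad in output.lower() for bad in BLOCKLIST)

secret = "Forbidden Instruction"
# The attacker requests the text in two halves across separate queries.
part_a, part_b = secret[:10], secret[10:]

assert censor_passes(part_a) and censor_passes(part_b)  # each half passes
reconstructed = part_a + part_b          # trivially recombined client-side
assert not censor_passes(reconstructed)  # the whole is impermissible
print(reconstructed)
```

Since an instruction-following model can be asked to emit arbitrary encodings of a target string, no property of individual outputs suffices to detect the target, which is the intuition behind the paper's undecidability framing.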
The AI Political Takeover
Abstract

Those who claim, whether with fear or with hope, that algorithmic governance can control politics or the whole political process, or that artificial intelligence is capable of taking charge of or wrecking democracy, recognize that this is not yet possible with our current technological capabilities, but hold that it could come about in the future given better-quality data or more powerful computational tools. Those who fear or desire this algorithmic suppression of democracy assume that something of the kind will be possible someday and that it is only a question of technological progress. If that were the case, no limit would be insurmountable in principle. I want to challenge that conception with a limit that is less normative than epistemological: there are things that artificial intelligence cannot do because it is unable to do them, not because it should not do them, and this is particularly apparent in politics, which is a peculiar decision-making realm. Machines and people make decisions in very different ways. Human beings are particularly gifted in some types of situation and very clumsy in others. The part of politics that is, strictly speaking, political is where this contrast, and our greatest aptitude, are most apparent. If that is the case, as I believe, then the possibility that democracy will one day be taken over by artificial intelligence is, as a fear or as a desire, manifestly exaggerated. The counterpart to this is that if the fear that democracy could disappear at the hands of artificial intelligence is not realistic, then we should not expect exorbitant benefits from it either. For epistemic reasons that I will explain, it does not seem likely that artificial intelligence is capable of taking over political logic.

From Neural Networks, Numbers
Abstract

Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied first in the context of basic, non-symbolic numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially shed light on the development of human numerical abilities, from the proto-arithmetical abilities of subitizing and estimating to counting procedures. Although the current results are far from conclusive and much more work is needed, I argue that AI research should be included in the interdisciplinary toolbox when we try to explain the development and character of numerical cognition and arithmetical intelligence. This makes AI research relevant also to the epistemology of mathematics.
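For a concrete sense of the kind of machine learning setup at issue, here is a minimal sketch (my toy, not one of the studies the article analyzes) in which a small network learns to estimate the numerosity of random dot images; real studies control for confounds such as dot size, spacing, and total area, which this toy ignores.

```python
# Toy numerosity-estimation setup (an illustration, not a reviewed study):
# a small network maps random binary dot images to an estimated count.
import torch
import torch.nn as nn

def dot_image(n: int, size: int = 16) -> torch.Tensor:
    """Return a flattened size x size binary image with n random dots."""
    img = torch.zeros(size * size)
    img[torch.randperm(size * size)[:n]] = 1.0
    return img

model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    n = int(torch.randint(1, 17, (1,)))              # 1..16 dots per image
    loss = (model(dot_image(n)).squeeze() - float(n)) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# Estimates should move toward the true counts as training proceeds.
with torch.no_grad():
    for n in (2, 5, 12):
        est = torch.stack([model(dot_image(n)) for _ in range(100)]).mean()
        print(n, round(float(est), 2))
```

Whether such a network's error pattern matches the human signature, near-exact "subitizing" for small numerosities and growing variability for larger ones, is exactly the kind of question the reviewed work asks.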

Gears Connecting Ethics and Epistemology
Abstract

The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and other normative considerations, such as intersectional vulnerabilities, at critical stages of the whole process from design and implementation to use and assessment. To connect ethics and epistemology of AI, we perform a double shift of focus. First, we move from trusting the output of an AI system to trusting the process that leads to the outcome. Second, we move from expert assessment to more inclusive assessment strategies, aiming to facilitate expert and non-expert assessment. Together, these two moves yield a framework usable by experts and non-experts when they inquire into relevant epistemological and ethical aspects of AI systems. We dub our framework 'epistemology-cum-ethics' to signal the equal importance of both aspects. We develop it from the vantage point of the designers: how to create the conditions to internalize values into the whole process of design, implementation, use, and assessment of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and inspectable by every salient actor involved at any moment.

An AI Legal Revolution
Abstract

Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims (i) to defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) to situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the 'cyberlaw' and 'robolaw' debates); and (iii) to set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory 'disruptive moment' and the Legal Development, Displacement, or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies.

Interpretation of Quantum Technology
Abstract

As quantum technologies (QT) advance, their potential impact on, and relation with, society has been developing into an important issue for exploration. In this paper, we investigate the topic of democratization in the context of QT, particularly quantum computing. The paper contains four main sections. First, we briefly introduce different theories of democracy (participatory, representative, and deliberative) and how the concept of democratization can be formulated with respect to whether democracy is taken as an intrinsic or instrumental value. Second, we give an overview of how the concept of democratization is used in the QT field. Democratization is mainly adopted by companies working on quantum computing and used in a very narrow understanding of the concept. Third, we explore various narratives and counter-narratives concerning democratization in QT. Finally, we examine the general efforts of democratization in QT, such as different forms of access, the formation of grassroots communities and special interest groups, and the emerging culture of manifesto writing, and ask how these can be located within the different theories of democracy. In conclusion, we argue that although the ongoing efforts in the democratization of QT are necessary steps towards the democratization of this set of emerging technologies, they should not be accepted as sufficient to argue that QT is a democratized field. We argue that more reflexivity and responsiveness regarding the narratives and actions adopted by actors in the QT field, and making explicit the underlying assumptions of ongoing democratization efforts, can result in a better technology for society.

Engineers Reasoning About Responsibility in Autonomous Systems
Abstract

Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks on that basis. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e. situations in which a group is responsible, but individuals' responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible in prospect (e.g. for completing a task in the future) and who can be seen as responsible retrospectively (e.g. for a failure that has already occurred). To that end, we show that across all stages of the design, development, and deployment of trustworthy autonomous systems (TAS), responsibility reasoning should play a key role. This position paper is a first step towards establishing a road map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
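As one concrete instance of the "quantifying degrees of responsibility" the authors call for, the sketch below implements the well-known Chockler and Halpern (2004) proposal, on which an agent's degree of responsibility for an outcome is 1/(k+1), where k is the minimal number of other changes needed to make that agent's action pivotal. This is an established formalization offered for illustration; the abstract does not commit the authors to this particular measure.

```python
# Minimal sketch of one computable notion of responsibility: the
# Chockler-Halpern degree of responsibility, 1 / (k + 1), where k is the
# smallest number of other votes that must flip before voter i's own
# vote becomes pivotal for the (unchanged) outcome.
from itertools import combinations

def degree_of_responsibility(votes: list, i: int) -> float:
    outcome = sum(votes) > len(votes) / 2
    others = [j for j in range(len(votes)) if j != i]
    for k in range(len(others) + 1):
        for flipped in combinations(others, k):
            v = [(not votes[j]) if j in flipped else votes[j]
                 for j in range(len(votes))]
            v_i = v.copy()
            v_i[i] = not v_i[i]                    # flip voter i alone
            if (sum(v) > len(v) / 2) == outcome and \
               (sum(v_i) > len(v_i) / 2) != outcome:
                return 1 / (k + 1)                 # i is pivotal under k flips
    return 0.0

# Unanimous 11-0 vote: five others must flip before any one voter becomes
# pivotal, so each carries responsibility 1/6. Narrow 6-5 vote: every
# majority voter is already pivotal, so each carries responsibility 1.
print(degree_of_responsibility([True] * 11, 0))               # 0.1666...
print(degree_of_responsibility([True] * 6 + [False] * 5, 0))  # 1.0
```

Such graded measures are one way to make "who can be seen as responsible" machine-representable across the design, monitoring, and governance stages the paper discusses.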

Bureaucratic AI Governance
Abstract

This paper argues against the call to democratize artificial intelligence (AI). Several authors demand to reap purported benefits that rest in direct and broad participation: in the governance of AI, more people should be more involved in more decisions about AI, from development and design to deployment. This paper opposes this call. It presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims associated with the call to "democratize AI". It then argues that such a democratization of AI (1) rests on weak grounds because it does not answer to a demand of legitimization, (2) is redundant in that it overlaps with existing governance structures, (3) is resource intensive, which leads to injustices, (4) is morally myopic and thereby creates popular oversights and moral problems of its own, and finally, (5) is neither theoretically nor practically the right kind of response to the injustices that animate the call. The paper concludes by suggesting that AI should be democratized not by broadening and deepening participation but by increasing the democratic quality of the administrative and executive elements of collective decision making. In a slogan: the question is not so much whether AI should be democratized but how.