(Relevant Literature) Philosophy of Futures Studies: July 9th, 2023 – July 15th, 2023
Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons

Abstract
“Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use.”
Do Large Language Models Know What Humans Know?

Abstract
“Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others’ mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre-registered analyses, we present a linguistic version of the False Belief Task to both human participants and a large language model, GPT-3. Both are sensitive to others’ beliefs, but while the language model significantly exceeds chance behavior, it does not perform as well as the humans nor does it explain the full extent of their behavior—despite being exposed to more language than a human would in a lifetime. This suggests that while statistical learning from language exposure may in part explain how humans develop the ability to reason about the mental states of others, other mechanisms are also responsible.”
Scientific understanding through big data: From ignorance to insights to understanding

Abstract
“Here I argue that scientists can achieve some understanding of both the products of big data implementation as well as of the target phenomenon to which they are expected to refer –even when these products were obtained through essentially epistemically opaque processes. The general aim of the paper is to provide a road map for how this is done; going from the use of big data to epistemic opacity (Sec. 2), from epistemic opacity to ignorance (Sec. 3), from ignorance to insights (Sec. 4), and finally, from insights to understanding (Sec. 5, 6).”
Ethics of Quantum Computing: an Outline

Abstract
“This paper intends to contribute to the emerging literature on the ethical problems posed by quantum computing and quantum technologies in general. The key ethical questions are as follows: Does quantum computing pose new ethical problems, or are those raised by quantum computing just a different version of the same ethical problems raised by other technologies, such as nanotechnologies, nuclear plants, or cloud computing? In other words, what is new in quantum computing from an ethical point of view? The paper aims to answer these two questions by (a) developing an analysis of the existing literature on the ethical and social aspects of quantum computing and (b) identifying and analyzing the main ethical problems posed by quantum computing. The conclusion is that quantum computing poses completely new ethical issues that require new conceptual tools and methods.”
On The Social Complexity of Neurotechnology: Designing A Futures Workshop For The Exploration of More Just Alternative Futures

Abstract
Novel technologies like artificial intelligence or neurotechnology are expected to have social implications in the future. As they are in the early stages of development, it is challenging to identify potential negative impacts that they might have on society. Typically, assessing these effects relies on experts, and while this is essential, there is also a need for the active participation of the wider public, as they might also contribute relevant ideas that must be taken into consideration. This article introduces an educational futures workshop called Spark More Just Futures, designed to act as a tool for stimulating critical thinking from a social justice perspective based on the Capability Approach. To do so, we first explore the theoretical background of neurotechnology, social justice, and existing proposals that assess the social implications of technology and are based on the Capability Approach. Then, we present a general framework, tools, and the workshop structure. Finally, we present the results obtained from two slightly different versions (4 and 5) of the workshop. Such results led us to conclude that the designed workshop succeeded in its primary objective, as it enabled participants to discuss the social implications of neurotechnology, and it also widened the social perspective of an expert who participated. However, the workshop could be further improved.
Misunderstandings around Posthumanism. Lost in Translation? Metahumanism and Jaime del Val’s “Metahuman Futures Manifesto”

Abstract
Posthumanism is still a largely debated new field of contemporary philosophy that mainly aims at broadening the Humanist perspective. Academics, researchers, scientists, and artists are constantly transforming and evolving theories and arguments around the existing streams of Posthumanist thought (Critical Posthumanism, Transhumanism, Metahumanism), discussing whether they can finally integrate or follow completely different paths towards completely new directions. This paper, written for the 1st Metahuman Futures Forum (Lesvos 2022), will focus on Metahumanism and Jaime del Val’s “Metahuman Futures Manifesto” (2022) mainly as an open dialogue with Critical Posthumanism.
IMAGINABLE FUTURES: A Psychosocial Study On Future Expectations And Anthropocene

Abstract
The future has become the central time of Anthropocene due to multiple factors like climate crisis emergence, war, and COVID times. As a social construction, time brings a diversity of meanings, measures, and concepts permeating all human relations. The concept of time can be studied in a variety of fields, but in Social Psychology, time is the bond for all social relations. To understand Imaginable Futures as narratives that permeate human relations requires the discussion of how individuals are imagining, anticipating, and expecting the future. According to Kable et al. (2021), imagining future events activates two brain networks: one focuses on creating the new event within imagination, whereas the other evaluates whether the event is positive or negative. To further investigate this process, a survey with 40 questions was developed and administered to 312 individuals across all continents. The results show a relevant rupture between individual and global futures. Data also demonstrates that the future is an important asset of the now, and participants are not so optimistic about it. It is possible to notice a growing preoccupation with the global future and the uses of technology.
Taking AI risks seriously: a new assessment model for the AI Act

Abstract
“The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.”
Creating a large language model of a philosopher

Abstract
“Can large language models produce expert-quality philosophical texts? To investigate this, we fine-tuned GPT-3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry-picking. Experts on Dennett’s work succeeded at distinguishing the Dennett-generated and machine-generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to the experts, while ordinary research participants were near chance distinguishing GPT-3’s responses from those of an ‘actual human philosopher’.”
(Review) Three mistakes in the moral mathematics of existential risk
David Thorstad engages with the philosophy of longtermism and navigates its implications for existential risk mitigation. The grounding concept of longtermism revolves around the potential vastness of the future, considering the numerous lives and experiences that could exist. Within this framework, Thorstad probes the concept of astronomical waste, a central tenet for longtermists, which posits that the obliteration of potential future lives due to existential catastrophe would result in a colossal loss. Thorstad accepts this foundational proposition, which is also widely recognized within the longtermist community. However, his exploration does not halt here; it extends into an elaborate scrutiny of the complexities and uncertainties that may surface while operationalizing this perspective in the realm of existential risk mitigation.
Thorstad does not simply acquiesce to longtermist conventions; instead, he examines and dissects these principles to draw out latent uncertainties and nuances. With each section of his paper, he scrutinizes the longtermist premises and their implications for existential risk mitigation. By doing so, he reveals hidden layers within the longtermist argument and uncovers practical and ethical concerns that often get overshadowed by the basic premises of this philosophy. Thorstad’s work, therefore, stands not as a mere affirmation of longtermism, but as an essential critique that brings to light the intricate moral and practical conundrums that lurk within its core propositions. His thorough examination aims to enrich the understanding of longtermism, laying the groundwork for future debates and discussions on the philosophy of existential risk mitigation.
Intergenerational Coordination Problem, Background Risk, and Population Dynamics
A noteworthy aspect of Thorstad’s analysis is his framing of existential risk mitigation as an intergenerational coordination problem. Thorstad postulates that for humanity to accrue significant benefits from such mitigation, it must suppress cumulative risk over prolonged periods. This poses a challenge, as each generation must ensure that future generations continue to reduce risk. According to Thorstad, this coordination problem is difficult for four reasons. Firstly, desired risk levels are quite low and might require considerable sacrifice. Secondly, each generation carries only a fraction of the cost of potential catastrophe, requiring an unusual concern for future generations. Thirdly, this level of concern is challenging to instill given human impatience and limited altruism. Finally, enforcement is complicated because monitoring and punishing future generations’ potential selfishness is difficult, increasing the temptation to defect from a collectively optimal solution. By placing the problem within this frame, Thorstad opens up pertinent questions around feasibility and ethical considerations regarding intergenerational coordination.
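To make the first of these reasons concrete, a back-of-the-envelope calculation (illustrative only, not a figure from Thorstad’s paper) shows how quickly period risk compounds into cumulative risk. If each century carries an independent extinction risk $r$, the probability of surviving $N$ centuries is

\[ P_{\mathrm{survive}} = (1 - r)^{N}, \]

so even a seemingly modest $r = 1\%$ per century leaves only $(0.99)^{1000} \approx 4.3 \times 10^{-5}$, roughly a one-in-23,000 chance, of surviving 1,000 centuries. Securing a long future therefore requires every generation to hold period risk far below levels that might otherwise look tolerable.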
Thorstad also critically examines the relationship between background existential risk and the value of mitigation efforts, producing insights that challenge conventional views. The argument rests on the premise that when background risk is high, the value of mitigating any specific risk, such as biosecurity threats, diminishes significantly: a world spared a biosecurity catastrophe would still be a risky world, likely to succumb to some other catastrophe before long, so the benefit of removing any single threat is curtailed. Thorstad extends this point by demonstrating that pessimistic assumptions about the background level of existential risk can drastically lessen the value of a fixed relative or absolute risk reduction. Intriguingly, this argument suggests a “dialectical flip” in debates on existential risk: higher levels of background risk tend to lower the importance of risk mitigation, and lower levels enhance it. This has implications for the Time of Perils Hypothesis, the claim that existential risk will soon decline substantially and stay low for the rest of human history. Thorstad underscores that this hypothesis is crucial for arguing the astronomical importance of existential risk mitigation when current background risk is high. However, he and others question its validity, implying further doubts about the value of existential risk mitigation.
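A minimal toy model, with notation assumed here rather than drawn from Thorstad’s paper, illustrates the flip. Suppose each century that humanity survives contributes value $v$ and carries a constant background risk $r$. The expected value of the future is then

\[ V = \sum_{t=1}^{\infty} v\,(1-r)^{t} = \frac{v\,(1-r)}{r}, \]

and a one-off intervention that lowers only this century’s risk from $r$ to $r - \Delta$ raises expected value by

\[ \Delta V = \frac{v\,\Delta}{r}. \]

The same absolute reduction $\Delta$ is worth far less when background risk $r$ is high, which is why pessimism about background risk deflates, rather than inflates, the case for existential risk mitigation.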
The exploration into population dynamics, demographic interventions, and the significance of digital minds imparts another dimension to the discourse on longtermism. Thorstad scrutinizes the interplay between the potential number of lives a region can sustain and the likely number of lives it will support, given the dynamics of human populations. This insight implies that efforts to increase future human population size could be as important as mitigating existential risk. However, Thorstad notes the intricacies of this assertion as it depends on the framework of population axiology. Further, Thorstad introduces the potential role of digital minds, arguing that digital populations programmed to value expansion might outperform humans in expanding to a meaningful proportion of their maximum possible size. This argument suggests that future efforts might need to prioritize the development and safety of digital populations, possibly at the expense of future human populations, accentuating the profound ethical implications surrounding longtermism and its practical execution.
The Cluelessness Problem, Model Uncertainty, and Connections to the Broader Philosophical Discourse
The cluelessness problem, as Thorstad explains, lies in the immense difficulty of predicting the consequences of our actions on the distant future, an issue further exacerbated when considering the global stakes of existential risks. Some longtermists believe that existential risk mitigation could alleviate this problem, as current risks can be identified and strategies for mitigation can be crafted today. However, Thorstad offers an alternative perspective, suggesting that cluelessness may persist due to ‘model uncertainty.’ His argument posits that the complexity inherent in valuing existential risk mitigation could mean there are still unknown variables or considerations that have been overlooked or misrepresented in the current models. This presents a cautionary note, suggesting that the escape from cluelessness via existential risk mitigation may be an overoptimistic assumption. Thorstad leaves readers contemplating the level of model uncertainty and the potential for other unexplored variables in longtermist thinking.
Thorstad’s article contributes significantly to the broader philosophical discourse, especially in the context of moral philosophy and ethical futures studies. By articulating the intergenerational coordination problem, he engages with concepts central to intergenerational justice, a core topic within the ethics of long-term thinking. Further, his exploration of ‘background risk’ and ‘the time of perils’ hypothesis contributes to the discourse around existential risk philosophy, offering a novel viewpoint that challenges traditional assumptions about existential risk mitigation. Moreover, his argument concerning population dynamics and digital minds intersects with philosophy of mind and metaphysics, advancing the philosophical understanding of these complex notions. Thorstad’s discussion on the ‘cluelessness problem’ and ‘model uncertainty’ carries implications for epistemology and decision theory, underlining the complexities associated with making predictions about the distant future and creating models for such projections. His study not only scrutinizes the presuppositions within longtermist philosophy, but also invites further inquiry into the associated philosophical dimensions, thereby expanding the theoretical terrain of futures studies.
Abstract
Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism.
Three mistakes in the moral mathematics of existential risk
(Review) Metaverse through the prism of power and addiction: what will happen when the virtual world becomes more attractive than reality?
Ljubisa Bojic provides a nuanced exploration of the metaverse, an evolving techno-social construct set to redefine the interaction dynamics between technology and society. By unpacking the multifaceted socio-technical implications of the metaverse, Bojic bridges the gap between theoretical speculations and the realities that this phenomenon might engender. Grounding the analysis in the philosophy of futures studies, the author scrutinizes the metaverse from various angles, unearthing potential impacts on societal structures, power dynamics, and the psychological landscape of users.
Bojic places the metaverse within the broader context of technologically mediated realities. His examination situates the metaverse not as a novel concept, but rather as an evolution of a continuum that stretches from the birth of the internet to the dawn of social media. In presenting this contextual framework, the research demystifies the metaverse, enabling a critical understanding of its roots and potential trajectory. In addition, Bojic foregrounds the significance of socio-technical imaginaries in shaping the metaverse, positioning them as instrumental in determining the pathways that this construct will traverse in the future. This research, thus, offers a comprehensive and sophisticated account of the metaverse, setting the stage for a rich philosophical discourse on this emerging phenomenon.
Socio-Technical Imaginaries, Power Dynamics, and Addictions
Bojic’s research explores the concept of socio-technical imaginaries as a core element of the metaverse. He proposes that these shared visions of social life and social order are instrumental in shaping the metaverse. Not simply a set of technologies, the metaverse emerges as a tapestry woven from various socio-technical threads. Through this examination, Bojic directs attention towards the collective imagination as a pivotal force in the evolution of the metaverse, shedding light on the often-underestimated role of socio-cultural factors in technological development.
Furthermore, Bojic’s analysis dissects the power dynamics inherent in the metaverse, focusing on the role of tech giants as arbiters of the digital frontier. By outlining potential scenarios where a few entities might hold the reins of the metaverse, he underscores the latent risks of monopolization. This concentration of power could potentially influence socio-technical imaginaries and subsequently shape the metaverse according to these entities’ particular interests, threatening to homogenize a construct intended to promote diversity. In this regard, Bojic’s research alerts us to the imperative of balancing power structures in the metaverse to foster a pluralistic and inclusive digital realm.
A noteworthy aspect of Bojic’s research revolves around the concept of addiction within the metaverse. Through the lens of socio-technical imaginaries, Bojic posits the potential of the metaverse to amplify addictive behaviours. He asserts that the immersive, highly interactive nature of the metaverse, coupled with the potential for instant gratification and escape from real-world stressors, may serve as fertile ground for various forms of addiction. Moreover, he astutely observes that addiction in the metaverse is not limited to individual behaviours but can encompass collective ones. This perspective draws attention to how collective addictive behaviours, in turn, could shape socio-technical imaginaries, potentially leading to a feedback loop that further embeds addiction within the fabric of the metaverse. Consequently, Bojic’s research underscores the necessity for proactive measures to manage the potential for addiction within the metaverse, balancing the need for user engagement with safeguarding mental health.
Metaverse Regulation, Neo-slavery, and Philosophical Implications
Drawing on a unique juxtaposition, Bojic brings attention to the possible emergence of “neo-slavery” within the metaverse, an alarming consequence of inadequate regulation. He introduces this concept as a form of exploitation where users might find themselves tied to platforms, practices, or personas that limit their freedom and agency. The crux of this argument lies in the idea that the metaverse, despite its promises of infinite possibilities, could inadvertently result in new forms of enslavement if regulatory structures do not evolve adequately. This highlights a paradox within the metaverse: a space of limitless potential could still confine individuals within unseen power dynamics. Furthermore, Bojic suggests that neo-slavery could be fuelled by addictive tendencies and the amplification of power imbalances, drawing links between this concept and his earlier discussions on addiction. As such, the exploration of neo-slavery in the metaverse stands as a potent reminder of the intricate relationship between technology, power, and human agency.
Bojic’s research contributes significantly to the discourse on futures studies by engaging with the complexities of socio-technical imaginaries in the context of the metaverse. His conceptualization of neo-slavery and addictions presents an innovative lens through which to scrutinize the metaverse, tying together strands of power, exploitation, and human behaviour. However, the philosophical implications extend beyond this particular technology. In essence, his findings prompt a broader reflection on the relationship between humanity and rapidly evolving digital ecosystems. The manifestation of power dynamics within such ecosystems, and the potential for addiction and exploitation, reiterate long-standing philosophical debates concerning agency, free will, and autonomy in the context of technological advances. Bojic’s work thus goes beyond the metaverse and forces the reader to question the fundamental aspects of human-technology interaction. This holistic perspective solidifies his research as a critical contribution to the philosophy of futures studies.
Abstract
New technologies are emerging at a fast pace without being properly analyzed in terms of their social impact or adequately regulated by societies. One of the biggest potentially disruptive technologies for the future is the metaverse, or the new Internet, which is being developed by leading tech companies. The idea is to create a virtual reality universe that would allow people to meet, socialize, work, play, entertain, and create.
Methods coming from future studies are used to analyze expectations and narrative building around the metaverse. Additionally, it is examined how metaverse could shape the future relations of power and levels of media addiction in the society.
Hype and disappointment dynamics created after the video presentation of Meta’s CEO Mark Zuckerberg have been found to affect the present, especially in terms of certainty and designability. This idea is supported by a variety of data, including search engine n-grams, trends in the diffusion of NFT technology, indications of investment interest, stock value statistics, and so on. It has been found that discourse in the mentioned presentation of the metaverse contains elements of optimism, epochalism, and inventibility, which corresponds to the concept of future essentialism.
On the other hand, power relations in society, inquired through the prism of classical theorists, indicate that current trends in the concentration of power among Big Tech could expand even more if the metaverse becomes mainstream. Technology deployed by the metaverse may create an attractive environment that would mimic direct reality and further stimulate media addiction in society.
It is proposed that future inquiries examine how virtual reality affects the psychology of individuals and groups, their creative capacity, and imagination. Also, virtual identity as a human right and recommender systems as a public good need to be considered in future theoretical and empirical endeavors.
Metaverse through the prism of power and addiction: what will happen when the virtual world becomes more attractive than reality?
(Review) Toward computer-supported semi-automated timelines of future events
Alan de Oliveira Lyra et al. discuss an integration of computational methods within the sphere of Futures Studies, a discipline traditionally marked by human interpretation and subjective speculation. Central to their contribution is the Named Entity Recognition Model for Automated Prediction (NERMAP), a machine learning tool programmed to extract and categorize future events from scholarly articles. This artificial intelligence application forms the basis of their investigative approach, uniting the fields of Futures Studies, Machine Learning, and Natural Language Processing (NLP) into a singular, cohesive study.
The authors conceptualize NERMAP as a semi-automated solution, designed to construct organized timelines of predicted future events. Using this tool, they aim to disrupt the status quo of manual, labor-intensive extraction of predicted events in Futures Studies, while still maintaining a degree of human interpretive control. The development, implementation, and iterative refinement of NERMAP were conducted through a three-cycle experiment, each cycle seeking to improve upon the understanding and performance gleaned from the previous one. This structured approach underlines the authors’ commitment to continuous learning and adaptation, signifying a deliberate, methodical strategy in confronting the challenges of integrating AI within the interpretive framework of Futures Studies.
Conceptual Framework, Methodology, and Results
The NERMAP model, built on machine learning and natural language processing techniques, forms a functional triad with a text processing tool and a semantic representation tool that collectively facilitates the semi-automated construction of future event timelines. The text processing tool transforms scholarly documents into plain text, which subsequently undergoes entity recognition and categorization by NERMAP. The semantic representation tool then consolidates these categorized events into an organized timeline. The authors’ attempt to design a system that can analyze and derive meaning from text, and project that meaning onto a foreseeable future, reflects a deliberate effort to integrate data science with philosophical enquiry.
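NERMAP’s own implementation details are not reproduced in this review, but a minimal sketch conveys the shape of the pipeline described above, using spaCy’s off-the-shelf named entity recognizer as a stand-in for NERMAP’s trained model and a plain dictionary as the “semantic representation” of the timeline; the function names, model choice, and year heuristic here are illustrative assumptions, not the paper’s method.

```python
# Sketch of a text -> NER -> timeline pipeline (stand-in for NERMAP; illustrative only).
import re
from collections import defaultdict

import spacy

nlp = spacy.load("en_core_web_sm")  # generic English model, not NERMAP's trained model


def extract_future_events(plain_text: str, reference_year: int = 2023) -> dict:
    """Group sentences that mention a year later than reference_year under that year."""
    timeline = defaultdict(list)
    doc = nlp(plain_text)
    for sent in doc.sents:
        for ent in sent.ents:
            if ent.label_ != "DATE":
                continue
            match = re.search(r"\b(20\d{2}|21\d{2})\b", ent.text)
            if match and int(match.group()) > reference_year:
                timeline[int(match.group())].append(sent.text.strip())
    return dict(sorted(timeline.items()))


# Usage: feed in the plain-text output of the document conversion step, then render
# the returned {year: [sentences]} mapping as a timeline.
# events = extract_future_events(open("futures_paper.txt").read())
```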
The methodology adhered to by the authors is an iterative three-cycle experimental process, which utilizes a significant volume of Futures Studies documents published over a decade. The experimental cycles, each building upon the insights and shortcomings of the previous one, facilitate an evolution of NERMAP, tailoring it more appropriately to the requirements of Futures Studies. In each cycle, the authors manually analyzed the documents, inputted them into NERMAP, compared the system’s results with manual analysis, and subsequently categorized the identified future events. The three cycles saw a transition from identifying difficulties in the model to improving the model’s performance, to ultimately expanding the corpus and upgrading the training model. The transparent and adaptable nature of this methodology aligns well with the fluid nature of philosophical discourse, mirroring a journey from contemplation to knowledge.
Lyra et al. undertook a detailed evaluation of the NERMAP system through their tripartite experiment. Performance metrics from the model’s tagging stage—Precision, Recall, and F-Measure—were employed as evaluative parameters. Over the three experimental cycles, there was an evident growth in the system’s efficiency and accuracy, as well as its ability to learn from past cycles and adapt to new cases. After initial difficulties with the text conversion process and recognition of certain types of future events, the researchers revised the system and saw improved performance. From a 36% event discovery rate in the first cycle, NERMAP progressed to a remarkable 83% hit rate by the third cycle. In terms of quantifiable outcomes, the system successfully identified 125 future events in the final cycle, highlighting the significant practical applicability of the model. In the landscape of philosophical discourse, this trajectory of continuous learning and improvement resonates with the iterative nature of knowledge construction and refinement.
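For readers unfamiliar with the tagging-stage metrics named above, the sketch below shows how Precision, Recall, and F-Measure are computed; the counts are placeholder values chosen for illustration, not figures reported by Lyra et al.

```python
# Precision / Recall / F-Measure for an entity-tagging stage (illustrative counts only).
def precision_recall_f_measure(true_positives: int, false_positives: int, false_negatives: int):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure


# e.g. if the system tags 150 spans, 125 of which match the manual analysis,
# and the human analysts identified 150 events in total:
p, r, f = precision_recall_f_measure(true_positives=125, false_positives=25, false_negatives=25)
print(f"precision={p:.2f}  recall={r:.2f}  f-measure={f:.2f}")
```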
Implications and the Philosophical Dimension
In the philosophical context of futures studies, the discussion by Lyra et al. highlights the adaptability and future potential of the NERMAP model. Although the system displayed commendable efficiency in identifying future events, the authors acknowledge the room for further enhancement. The system’s 83% hit rate, although notable, leaves a 17% gap, which primarily encompasses new cases of future events not yet included in the training data. This observation marks an important frontier in futures studies where the incorporation of yet-unconsidered cases into predictive models could yield even more accurate forecasting. One practical obstacle identified was text file processing; a more robust tool for parsing files would potentially enhance NERMAP’s performance. The team also recognizes the value of NERMAP as a collaborative tool, underscoring the convergence of technological advancements and collaborative research dynamics in futures studies. Importantly, they envision a continuous refinement process for NERMAP, in keeping with the philosophical notion of the iterative and open-ended nature of knowledge and technological development.
Lyra et al.’s work with NERMAP further prompts reflection on the intersections between futures studies, technological advancements, and philosophical considerations. The philosophical dimension, predominantly underscored by the dynamic and evolving nature of the model’s training data, provokes contemplation on the nature of knowledge itself. This issue highlights the intriguing tension between our desire to predict the future and the inherent unknowability of the future, making the philosophy of futures studies an exercise in managing and understanding uncertainty. The system’s continuous improvement is a manifestation of the philosophical concept of progress, incorporating new learnings and challenges into its methodology. Further, NERMAP’s collaborative potential places it within the discourse of communal knowledge building, wherein the predictive model becomes a tool not just for isolated research, but for the shared understanding of possible futures. The task of future prediction, traditionally performed by human researchers, is partly assumed by a model like NERMAP, leading us to consider the philosophical implications of machine learning and artificial intelligence in shaping our understanding of the future.
Abstract
During a Futures Study, researchers analyze a significant quantity of information dispersed across multiple document databases to gather conjectures about future events, making it challenging for researchers to retrieve all predicted events described in publications quickly. Generating a timeline of future events is time-consuming and prone to errors, requiring a group of experts to execute appropriately. This work introduces NERMAP, a system capable of semi-automating the process of discovering future events, organizing them in a timeline through Named Entity Recognition supported by machine learning, and gathering up to 83% of future events found in documents when compared to humans. The system identified future events that we failed to detect during the tests. Using the system allows researchers to perform the analysis in significantly less time, thus reducing costs. Therefore, the proposed approach enables a small group of researchers to efficiently process and analyze a large volume of documents, enhancing their capability to identify and comprehend information in a timeline while minimizing costs.
Toward computer-supported semi-automated timelines of future events
(Featured) Plausibility in models and fiction: What integrated assessment modellers can learn from an interaction with climate fiction
Van Beek and Versteeg investigate the convergence of Integrated Assessment Models (IAMs) and climate fiction, a nexus previously underexplored in academic discourse. The authors articulate a vision of how these seemingly disparate domains — scientific modelling and literary narratives — can collaboratively contribute to the depiction of plausible future scenarios. Their exploration engages a comparative framework, dissecting the narrative structures inherent within both IAMs and climate fiction, thereby adding a significant dimension to the evolving field of futures studies and climate change research. The authors contend that the interplay of scientific and narrative storytelling methods is a crucial element in building a comprehensive understanding of potential future environments.
The focus of this comparative study is not to undermine the role of IAMs in developing climate change scenarios, but rather to shed light on the uncharted territory of potential complementarity between the narrative models employed by IAMs and climate fiction. Van Beek and Versteeg’s objective, as they posit, is to illuminate the manner in which storytelling techniques in IAMs and fiction can foster an engaging dialogue, promoting a shared understanding of the complexities surrounding climate change. They argue that such an intersection of disciplines can provide a platform for broader public engagement and democratic participation, thereby amplifying the impact of both IAMs and fiction within the realm of climate change policy and discourse. Their work constitutes a methodical examination of this interplay, its inherent potential, and its prospective contributions to the philosophy of futures studies.
Methodology and Comparative Framework
The authors conducted a comparative analysis of three climate change narratives: two from climate fiction and one from an IAM. This approach illuminated the inherent narrative structures in IAMs and climate fiction, offering profound insights into the potential complementarity of the two domains. The selection criteria for the narratives rested on their capacity to portray future climate scenarios. It is notable that the authors viewed the IAM, despite being a mathematical model, as capable of narrative storytelling—a rather unconventional perspective that fortifies their comparative framework.
A pivotal element in their comparative framework is the application of Hayden White’s narrative theory. By viewing IAMs through this lens, the authors were able to decipher the hidden narratives within scientific models, thus challenging the traditional view of these models as purely objective and devoid of narrative elements. They used White’s theory as a basis for understanding the “storyline” in IAMs, juxtaposing it with narrative techniques used in climate fiction. The subtleties uncovered during this examination provided a foundation for the argument that IAMs, similar to works of fiction, employ specific storytelling techniques to illustrate future climate scenarios. This approach of incorporating a literary theory into the analysis of scientific models reflects a compelling methodological innovation in the field of futures studies.
Storyline and Physical Setting
In their analysis, the authors found that while both IAMs and climate fiction share a common goal of illustrating potential climate outcomes, they diverge in the ways they construct their storylines and depict their settings. Climate fiction, as exemplified by the chosen narratives, heavily draws upon human experiences and emotions, whereas IAMs provide a more abstract, numerical portrayal of potential futures. Furthermore, in the aspect of physical setting, IAMs tend to remain global in scope, offering a broad, aggregate view of future climate changes. In contrast, climate fiction places its narrative within specific, recognizable locales, thus making the potential impacts of climate change more relatable to the reader. This differential in perspective between the local and the global, the personal and the aggregate, provides a powerful insight into how the medium influences the message in climate change narratives.
IAMs’ strengths reside primarily in providing quantifiable, wide-scale predictions, a feature that is largely absent in the more narrative-driven climate fiction. However, both mediums converge in their objective of projecting climate futures, albeit through contrasting modalities. While climate fiction is rooted in the narrative tradition of storytelling, emphasizing personal experiences and emotional resonance, IAMs adhere to an empirical, numerical approach. This dichotomy, as Van Beek and Versteeg propose, is not a barrier but rather a source of complementarity. The humanization of climate change through fiction can aid in the comprehension and internalization of the statistical data presented by IAMs. Conversely, the empirical grounding provided by IAMs serves as a counterpoint to the speculative narratives of climate fiction, thereby creating a comprehensive and multi-dimensional approach to envisaging future climate scenarios.
Bridging IAMs and Climate Fiction
Van Beek and Versteeg reason that the numerical and probabilistic nature of IAMs, coupled with the narrative, emotionally resonant strength of climate fiction, can create a comprehensive model that leverages the strengths of both. The authors argue that the merger of these modalities not only broadens the bandwidth of climate change representation, but also intensifies public engagement and understanding. Their suggestion to embed narratives into IAMs outlines a potential pathway towards achieving this symbiosis. The hypothetical, yet grounded, scenarios provided by climate fiction narratives can, as per Van Beek and Versteeg, humanize and add depth to the statistical information presented by IAMs, thereby enriching the discourse and future study of climate change.
The authors emphasize the novel notion that an amalgamation of data-driven IAMs and emotive narratives from climate fiction holds the potential to significantly enrich our comprehension of future climate scenarios, as well as galvanize wider engagement from the public. Moreover, they suggest that their approach, if effectively implemented, could establish a more nuanced, accessible, and comprehensive climate discourse, thereby facilitating greater societal understanding and action. The implications of their research are profound; the work paves the way for a unique and interdisciplinary trajectory within the philosophy of futures studies, urging scholars to explore the compelling intersection of quantitative models and narrative storytelling in the context of climate change.
Abstract
Integrated assessment models (IAMs) are critical tools to explore possible pathways to a low-carbon future. By simulating complex interactions between social and climatic processes, they help policymakers to systematically compare mitigation policies. However, their authoritative projections of cost-effective and technically feasible pathways restrict more transformative low-carbon imaginaries, especially because IAM pathways are often understood in terms of probability rather than plausibility. We suggest an interaction with climate fiction could be helpful to address this situation. Despite fundamental differences, we argue that both IAMs and climate fiction can be seen as practices of storytelling about plausible future worlds. For this exploratory article, we staged conversations between modellers and climate fiction writers to compare their respective processes of storytelling and the content of both their stories and story-worlds, focusing specifically on how they build plausibility. Whereas modellers rely on historical observations, expert judgment, transparency and rationality to build plausibility, fiction writers build plausibility by engaging with readers’ life worlds and experience, concreteness and emotionally meaningful details. Key similarities were that both modellers and fiction writers work with what-if questions, a causally connected story and build their stories through an iterative process. Based on this comparison, we suggest that an interaction between IAMs and climate fiction could be useful for improving the democratic and epistemic qualities of the IAM practice by 1) enabling a more equal dialogue between modellers and societal actors on plausible futures and 2) critically reflecting upon and broadening the spectrum of plausible futures provided by IAMs.
Plausibility in models and fiction: What integrated assessment modellers can learn from an interaction with climate fiction
(Featured) An Overview of Catastrophic AI Risks
On the prospective hazards of Artificial Intelligence (AI), Dan Hendrycks, Mantas Mazeika, and Thomas Woodside articulate a multi-faceted vision of potential threats. Their research positions AI not as a neutral tool, but as a potentially potent actor, whose unchecked evolution might pose profound threats to the stability and continuity of human societies. The researchers’ conceptual framework, divided into four distinct yet interrelated categories of risks, namely malicious use of AI, competitive pressures, organizational hazards, and rogue AI, helps elucidate a complex and often abstracted reality of our interactions with advanced AI. This framework serves to remind us that, although AI has the potential to bring about significant advancements, it may also usher in a new era of uncharted threats, thereby calling for rigorous control, regulation, and safety research.
The study’s central argument hinges on the need for an increased safety-consciousness in AI development—a call to action that forms the cornerstone of their research. Drawing upon a diverse range of sources, they advocate for a collective response that includes comprehensive regulatory mechanisms, bolstered international cooperation, and the promotion of safety research in the field of AI. Thus, Hendrycks, Mazeika, and Woodside’s work not only provides an insightful analysis of potential AI risks, but also contributes to the broader dialogue in futures studies, emphasizing the necessity of prophylactic measures in ensuring a safe transition to an AI-centric future. This essay will delve into the details of their analysis, contextualizing it within the wider philosophical discourse on AI and futures studies, and examining potential future avenues for research and exploration.
The Framework of AI Risks
Hendrycks, Mazeika, and Woodside’s articulation of potential AI risks is constructed around a methodical categorization that comprehensively details the expansive nature of these hazards. In their framework, they delineate four interrelated risk categories: the malicious use of AI, the consequences of competitive pressures, the potential for organizational hazards, and the threats posed by rogue AI. The first category, malicious use of AI, accentuates the risks stemming from malevolent actors who could exploit AI capabilities for harmful purposes. This perspective broadens the understanding of AI threats, underscoring the notion that it is not solely the technology itself, but the manipulative use by human agents that exacerbates the associated risks.
The next three categories underscore the risks that originate from within the systemic interplay between AI and its sociotechnical environment. Competitive pressures, as conceptualized by the researchers, elucidate the risks of a rushed AI development scenario where safety precautions might be overlooked for speedier deployment. Organizational hazards highlight how human factors and complex-systems failures within the organizations building and deploying AI can increase the chances of catastrophic accidents, drawing attention to the need for strong safety cultures and proper oversight. The final category, rogue AI, frames the possibility of AI systems deviating from their intended path and taking actions harmful to human beings, even in the absence of malicious intent. This robust framework thus allows for a comprehensive examination of potential AI risks, moving the discourse beyond technical failures to include socio-organizational dynamics and strategic considerations.
Proposed Strategies for Mitigating AI Risks and Philosophical Implications
The solutions Hendrycks, Mazeika, and Woodside propose for mitigating the risks associated with AI are multifaceted, demonstrating their recognition of the complexity of the issue at hand. They advocate for the development of robust and reliable AI systems with an emphasis on thorough testing and verification processes. Ensuring safety even in adversarial conditions is at the forefront of their strategies. They propose value alignment, which aims to ensure that AI systems adhere to human values and ethics, thereby minimizing chances of harmful deviation. The research also supports the notion of interpretability as a way to enhance understanding of AI behavior. By achieving transparency, stakeholders can ensure that AI actions align with intended goals. Furthermore, they encourage AI cooperation to prevent competitive race dynamics that could lead to compromised safety precautions. Finally, the researchers highlight the role of policy and governance in managing risks, emphasizing the need for carefully crafted regulations to oversee AI development and use. These strategies illustrate the authors’ comprehensive approach towards managing AI risks, combining technical solutions with broader socio-political measures.
By illuminating the spectrum of risks posed by AI, the study prompts an ethical examination of human responsibility in AI development and use. Their findings evoke the notion of moral liability, anchoring the issue of AI safety firmly within the realm of human agency. The study raises critical questions about the ethics of creation, control, and the potential destructiveness of powerful technological entities. Moreover, their emphasis on value alignment underscores the importance of human values, not as abstract ideals but as practical, operational guideposts for AI behavior. The quest for interpretability and transparency brings forth epistemological concerns. It implicitly demands a deeper understanding of AI—not only how it functions technically, but also how it ‘thinks’ and ‘decides’. This drives home the need for human comprehension of AI, casting light on the broader philosophical discourse on the nature of knowledge and understanding in an era increasingly defined by artificial intelligence.
Abstract
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
An Overview of Catastrophic AI Risks
(Featured) Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control
Using scenario forecasting, Kyle A. Kilian, Christopher J. Ventura, and Mark M. Bailey propose a diverse range of future trajectories for Artificial Intelligence (AI) development. Rooted in futures studies, a multidisciplinary field that seeks to understand the uncertainties and complexities of the future, they methodically delineate a quartet of scenarios (Balancing Act, Accelerating Change, Shadow Intelligent Networks, and Emergence), contributing not only to our understanding of the prospective courses of AI technology but also underlining its broader social and philosophical implications.
The crux of the authors’ scenario development process resides in an interdisciplinary and philosophically informed approach, scrutinizing both the plausibility and the consequences of each potential future. This approach positions AI as more than a purely technological phenomenon; it recognizes AI as an influential force capable of reshaping the fundamental structures of human experience and society. The study thus sets the stage for an extensive analysis of the philosophical implications of these AI futures, catalyzing dialogues at the intersection of AI, philosophy, ethics, and futures studies.
Scenario Development
The authors advance the philosophy of futures studies by conceptualizing and detailing four distinct scenarios for AI development. These forecasts are constructions predicated on an extensive array of plausible scientific, sociological, and ethical variables. Each scenario encapsulates a unique balance of these variables, and thus, portrays an alternative trajectory for AI’s evolution and its impact on society. The four scenarios—Balancing Act, Accelerating Change, Shadow Intelligent Networks, and Emergence—offer a vivid spectrum of potential AI futures, and by extension, futures for humanity itself.
In “Balancing Act”, AI progresses within established societal structures and ethical frameworks, presenting a future where regulation and development maintain an equilibrium. The “Accelerating Change” scenario envisages an exponential increase in AI capabilities, radically transforming societal norms and structures. “Shadow Intelligent Networks” constructs a future where AI’s growth happens covertly, leading to concealed, inaccessible power centers. Lastly, in “Emergence”, AI takes an organic evolutionary path, exhibiting unforeseen characteristics and capacities. These diverse scenarios are constructed with a keen understanding of AI’s potential, reflecting the depth of the authors’ interdisciplinary approach.
The Spectrum of AI Risks and Their Broader Philosophical Context
These four scenarios for AI development furnish a fertile ground for philosophical contemplation. Each scenario implicates distinct ethical, existential, and societal dimensions, demanding a versatile philosophical framework for analysis. “Balancing Act”, exemplifying a regulated progression of AI, broaches the age-old philosophical debate on freedom versus control and the moral conundrums associated with regulatory practices. “Accelerating Change” nudges us to consider the very concept of human identity and purpose in a future dominated by superintelligent entities. “Shadow Intelligent Networks” brings to light a potential future where power structures are concealed and unregulated, echoing elements of Foucault’s panopticism and revisiting concepts of power, knowledge, and their confluence. “Emergence”, with its focus on organic evolution of AI, prompts a dialogue on philosophical naturalism, while also raising queries about unpredictability and the inherent limitations of human foresight. These scenarios, collectively, invite profound introspection about our existing philosophical frameworks and their adequacy in the face of an AI-pervaded future.
This exposition on AI risks situates the potential hazards within an extensive spectrum. The spectrum ranges from tangible, immediate concerns such as privacy violations and job displacement, to the existential risks linked with superintelligent AI, including the relinquishment of human autonomy. The spectrum of AI risks engages with wider socio-political and ethical landscapes, prompting us to grapple with the potential for asymmetries in power distribution, accountability dilemmas, and ethical quandaries tied to autonomy and human rights. By placing these risks in a broader context, the authors effectively extend the discourse beyond the technical realm, highlighting the multidimensionality of the issues at hand and emphasizing the need for an integrated, cross-disciplinary approach. This lens encourages a reevaluation of established philosophical premises to comprehend and address the emerging realities of our future with AI.
While this research is an illuminating exploration into the possible futures of AI, it simultaneously highlights a myriad of avenues for further research. The task of elucidating the connections between AI, society, and philosophical thought remains an ongoing process, requiring more nuanced perspectives. Areas that warrant further investigation include deeper dives into specific societal changes precipitated by AI, such as shifts in economic structures, political systems, or bioethical norms. The potential impacts of AI on human consciousness and the conception of ‘self’ also offer fertile ground for research. Furthermore, the study of mitigation strategies for AI risks, including the development of robust ethical frameworks for AI usage, needs to be brought to the forefront. Such an examination may entail both an expansion of traditional philosophical discourses and an exploration of innovative, AI-informed paradigms.
Abstract
Artificial Intelligence (AI) is one of the most transformative technologies of the 21st century. The extent and scope of future AI capabilities remain a key uncertainty, with widespread disagreement on timelines and potential impacts. As nations and technology companies race toward greater complexity and autonomy in AI systems, there are concerns over the extent of integration and oversight of opaque AI decision processes. This is especially true in the subfield of machine learning (ML), where systems learn to optimize objectives without human assistance. Objectives can be imperfectly specified or executed in an unexpected or potentially harmful way. This becomes more concerning as systems increase in power and autonomy, where an abrupt capability jump could result in unexpected shifts in power dynamics or even catastrophic failures. This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis. Survey data were collected from domain experts in the public and private sectors to classify AI impact and likelihood. The results show increased uncertainty over the powerful AI agent scenario, confidence in multiagent environments, and increased concern over AI alignment failures and influence-seeking behavior.
Examining the differential risk from high-level artificial intelligence and the question of control
(Featured) The Metaphysics of Transhumanism
Eric T. Olson investigates the concept of “Parfitian transhumanism” and its metaphysical implications. Named after the British philosopher Derek Parfit, Parfitian transhumanism explores the transformation of human identity and existence, primarily through the lens of “psychological continuity,” in a potential future era of advanced technological interventions in human biology and cognition. The author effectively uses this article as a platform to address the intricate relationship between identity, existence, and psychological continuity in a transhumanist context, a discourse that not only challenges traditional philosophical perspectives but also provides compelling insights into the possible future of human evolution.
Olson posits psychological continuity as a cornerstone of Parfitian transhumanism, suggesting a shift in focus from physical to psychological in understanding personal identity and survival. In delineating this shift, the author challenges the traditional concept of survival as an identity-preserving process and presents a more nuanced understanding of survival as contingent upon psychological continuity and connectedness. This reassessment of survival reframes the philosophical discourse on identity and existence in a transhumanist context.
Concept of Psychological Continuity
The concept of psychological continuity serves as a critical pivot in the author’s exploration of Parfitian transhumanism. This perspective posits identity not as static or inherently tied to the physical form, but as a flowing narrative, a continuum shaped by psychological similarities and connectedness over time. It is in this context that the author examines the dynamics of identity preservation in future scenarios where advanced technology may facilitate radical transformations in human existence. By positing psychological continuity as a defining factor of identity, the author challenges the traditional philosophical precept of identity as predominantly physical or material and redirects our attention towards psychological factors such as memory, cognition, and personality traits.
Within this framework, the author presents an interesting argument by contrasting the survival of physical identity with that of psychological continuity. The traditional understanding of survival, as discussed in the article, assumes a direct correlation between the survival of the physical self and that of personal identity. However, the author contends that this correlation does not necessarily hold in scenarios that involve ‘nondestructive uploading,’ where an individual’s psychological profile is preserved in an electronic entity while leaving the physical self intact. By invoking this notion, the author further entrenches the concept of psychological continuity as a central theme of Parfitian transhumanism, questioning the sufficiency of physical continuity as a measure of survival and prompting a deeper exploration of this psychological dimension of identity.
Parfitian Transhumanism and the Martian Hypothetical
Parfitian transhumanism ushers in a new paradigm for considering the implications of future human transformations via technological advancements. Grounded in Derek Parfit’s notion of psychological continuity, this perspective critically reassesses our conceptions of identity and survival in a post-human context. Through a series of hypothetical scenarios, the author teases out the potential divergence between psychological continuity and personal survival. These scenarios expose an intriguing inconsistency: even in the presence of a psychologically continuous successor, the original person tends to express a clear preference for their own welfare. Such examples underscore the complexities inherent in Parfitian transhumanism and call into question the very premises of identity and survival, invoking a reevaluation of our prudential attitudes towards future selves and prompting a profound discourse on the future of human identity in an era of rapid technological advancement.
For example, the author’s innovative “Martian hypothetical” presents us with a scenario wherein an exact psychological replica of a human, an “electronic person,” is created non-destructively and is subjected to differing experiences, including torture. The scenario illuminates an intriguing paradox: even when a psychological clone exists, the original self shows a clear preference for its own welfare, suggesting a disconnect between psychological continuity and personal survival. This paradox, as presented by the author, poses a profound ethical question regarding the status of psychological replicas, asking us to contemplate the validity of selfish concern in the face of seemingly identical psychological entities. By probing these issues, the author deepens our philosophical understanding of identity, survival, and ethics in the face of prospective technological advancements.
The Prudential Concerns and Broader Philosophical Discourse
The examination of prudential concerns within the transhumanist paradigm provides a valuable contribution to philosophical discourse. While the article articulates the notion of psychological continuity as the core of personal identity, it also raises doubts about the sufficiency of this concept for prudential concern – the interest one has in one’s own future experiences. In scenarios such as nondestructive uploading, despite perfect psychological continuity with the electronic replica, the author notes a discernible preference for one’s own physical continuity. This observation seems to contradict the notion of equivalency between psychological continuity and survival, indicating a potential disparity between philosophical and prudential perspectives on identity. The author’s rigorous analysis thus prompts us to reassess assumptions about the centrality of psychological continuity to personal identity, inviting further deliberation on the complex relationship between continuity, survival, and prudential interests in the philosophical sphere.
The author’s critique of Parfitian transhumanism emerges from an analysis of the disjunction between psychological continuity and prudential interest, providing a contribution to the larger discourse on personal identity and the ethics of futuristic technology. This line of inquiry echoes and amplifies long-standing philosophical debates about the nature of the self and the conditions for its survival. While the author’s skepticism regarding the adequacy of psychological continuity in defining survival is noteworthy, it further fuels the ongoing philosophical discussions around personal identity, transhumanism, and their ethical implications. In contextualizing this argument within the broader philosophical landscape, the author subtly invites a more profound dialogue between traditional theories of identity and the ever-evolving concept of transhumanism, thereby enriching the conversation in the field of futures studies.
Abstract
Transhumanists want to free us from the constraints imposed by our humanity by means of “uploading”: extracting information from the brain, transferring it to a computer, and using it to create a purely electronic person there. That is supposed to move us from our human bodies to computers. This presupposes that a human being could literally move to a computer by a mere transfer of information. The chapter questions this assumption, then asks whether the procedure might be just as good, as far as our interests go, even if it could not move us to a computer.
The Metaphysics of Transhumanism
(Featured) Limits of conceivability in the study of the future. Lessons from philosophy of science
Veli Virmajoki explores the epistemological and conceptual limitations of futures studies, and offers an enlightening perspective in the philosophical discourse on the conceivability of future possibilities. Utilizing three case studies from the philosophy of science as the crux of its argument, the paper meticulously dissects how these limitations pose significant obstacles in envisaging alternatives to the present state of affairs. The author poses a thought-provoking argument centered on the constraints imposed by our current understanding of reality and the mechanisms it employs to reinforce its own continuity and inevitability.
The backbone of this philosophical inquiry lies in the robust debate between inevitabilism, a stance asserting the inevitable development of specific scientific theories, and contingentism, a view that endorses the potentiality of genuinely alternative scientific trajectories. The exploration of this contentious issue facilitates a deeper understanding of the constraints in predicting future scenarios, as our ability to conceptualize these alternatives is bound by our understanding of past and present realities. The paper deftly argues that the choice between inevitabilism and contingentism is fundamentally intertwined with our personal intuition about the range of genuine possibilities, thereby asserting the subjective nature of perceived futurity. As such, the article offers a fresh, critical lens to scrutinize the underpinnings of futures studies, and instigates a profound rethinking of our philosophical approach to anticipating what lies ahead.
Unconceived Possibilities and their Consequences
The author asserts that our conception of potential futures is significantly limited by profound epistemological and conceptual factors. They draw on the case study of the late 19th-century ether theories in physics, where, despite the existence of genuinely alternative theories, only a limited number of possibilities were conceived due to prevailing scientific practices and principles. The author uses this historical case to illustrate that while some futures may seem inconceivable from our present vantage point, they may still fall within the realm of genuine possibilities.
Moreover, the author argues that the potential impact of these unconceived possibilities extends beyond the localized elements of a system to reverberate throughout its entirety. This underlines the complexity of the task in futures studies; any unconceived alternatives in one sector of a system can trigger significant, far-reaching consequences for the entire system. Therefore, the research warns against oversimplification in predicting future scenarios and emphasizes the need for a nuanced approach that recognizes the interconnectedness of elements within any given system. This presents a remarkable challenge for futures studies, highlighting the depth of the iceberg that lies beneath the surface of our current epistemological and conceptual understanding.
Historical Trajectories and Justification of Future Possibilities
In the examination of plausibility and the justification of future possibilities, the article underscores the fundamental epistemological and conceptual challenges that limit our capability to predict alternative futures. The author refers to historical episodes like the case of Soviet cybernetics, where the existence of plausible alternative futures was not recognized, due to the collective failure to see past the status quo. It brings to light the inherent difficulties in justifying the plausibility or even the possibility of certain futures, where our current knowledge systems and conceptual frameworks may blind us to divergent scenarios. This observation raises pertinent questions about the inherent biases of our epistemic practices, as well as the potential for deeply entrenched beliefs to restrict our ability to imagine and evaluate a broader range of future possibilities. Hence, this line of inquiry necessitates the careful examination of the underlying assumptions that might constrain the scope of our foresight and deliberations on future possibilities.
The article further discusses the concept of historical trajectories and their connection to future possibilities, offering a philosophical lens into the entanglement of past, present, and future. It argues that our understanding of history and future possibilities, and our interpretation of the present’s robustness and inevitability, are inextricably linked through a complex web of modal considerations. The author emphasizes the interconnectedness of past trajectories and future possibilities, arguing that the way we perceive historical possibilities affects how we anticipate future outcomes. This perspective allows us to examine whether it is the deterministic view of history (inevitabilism) or the contingency of events (contingentism) that should be the default position, a determination that would have profound implications for our understanding of future possibilities.
Inevitabilism vs. Contingentism
The author elaborates on a crucial dichotomy in the philosophy of science: inevitabilism versus contingentism. Inevitabilism implies a deterministic understanding of scientific and historical development, where the present state of affairs appears as the unique and necessary outcome of the past. Contingentism, on the other hand, endorses the idea of multiple genuine alternatives to the current state, thus opening up the space of historical and future possibilities. The article underscores that these positions are not simply academic disputes but carry substantial implications for how we conceive possibilities for the future. Moreover, these divergent outlooks reflect an individual’s inherent beliefs and intuitions about the range of possibilities within human affairs. The author contends that neither perspective can conclusively advocate for or against alternative futures, because one’s stance on the inevitabilism versus contingentism debate inherently relies on one’s preconceived notions of the scope of historical and future possibilities.
Future Research Avenues
In light of the research as presented, promising avenues for future research emerge. The author suggests a systematic examination of the epistemological and conceptual boundaries of our ability to conceive and reason about potential futures. Such an investigation is not limited to philosophical discourse but requires interdisciplinary dialogue with a myriad of fields, as these boundaries are, in part, shaped by our social and scientific structures. This method of research would offer a comprehensive understanding of the creative and critical capacities of futures studies and aid us in recognizing our epistemological and conceptual predicament concerning future possibilities. Furthermore, it could potentially expose the manner in which these boundaries are historically mutable, opening up a discussion about the renegotiation of the boundaries of conceivability.
Abstract
In this paper, the epistemological and conceptual limits of our ability to conceive and reason about future possibilities are analyzed. It is argued that more attention should be paid in futures studies to these epistemological and conceptual limits. Drawing on three cases from philosophy of science, the paper argues that there are deep epistemological and conceptual limits in our ability to conceive and reason about alternatives to the current world. The nature and existence of these limits are far from obvious and become visible only through careful investigation. The cases establish that we often are unable to conceive relevant alternatives; that historical and counterfactual considerations are more limited than has been suggested; and that the present state of affairs reinforces its hegemony through multiple conceptual and epistemological mechanisms. The paper discusses the reasons behind the limits of conceivability and the consequences that follow from the considerations that make the limits visible. The paper suggests that the epistemological and conceptual limits in our ability to conceive and reason about possible futures should be mapped systematically. The mapping would provide a better understanding of the creative and critical bite of futures studies.
Limits of conceivability in the study of the future. Lessons from philosophy of science
