(Review) Toward computer-supported semi-automated timelines of future events

Reading a Future Timeline Ticker

Alan de Oliveira Lyra et al. discuss the integration of computational methods into Futures Studies, a discipline traditionally marked by human interpretation and subjective speculation. Central to their contribution is the Named Entity Recognition Model for Automated Prediction (NERMAP), a machine learning tool trained to extract and categorize future events from scholarly articles. This artificial intelligence application forms the basis of their investigative approach, uniting the fields of Futures Studies, Machine Learning, and Natural Language Processing (NLP) in a single, cohesive study.

The authors conceptualize NERMAP as a semi-automated solution designed to construct organized timelines of predicted future events. With this tool, they aim to disrupt the status quo of manual, labor-intensive event prediction in Futures Studies while maintaining a degree of human interpretive control. The development, implementation, and iterative refinement of NERMAP were conducted through a three-cycle experiment, each cycle building on the understanding and performance gained in the previous one. This structured approach underlines the authors’ commitment to continuous learning and adaptation, signifying a deliberate, methodical strategy for confronting the challenges of integrating AI within the interpretive framework of Futures Studies.

Conceptual Framework, Methodology, and Results

The NERMAP model, built on machine learning and natural language processing techniques, forms a functional triad with a text processing tool and a semantic representation tool; together, these components facilitate the semi-automated construction of future-event timelines. The text processing tool transforms scholarly documents into plain text, which then undergoes entity recognition and categorization by NERMAP. The semantic representation tool consolidates the categorized events into an organized timeline. The authors’ attempt to design a system that can analyze text, derive meaning from it, and project that meaning into a foreseeable future reflects a strong inclination toward integrating data science with philosophical enquiry.
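The paper does not publish NERMAP’s implementation, but the three-stage pipeline it describes can be sketched in miniature. The sketch below is illustrative only: a regular expression over future-year mentions stands in for the trained NER model, and the sentence splitter stands in for the text processing tool; all document text is invented for the example.

```python
import re
from collections import defaultdict

# Hypothetical stand-in for NERMAP's trained NER stage: a simple
# pattern that treats sentences mentioning a future year as events.
YEAR_PATTERN = re.compile(r"\b(20[2-9]\d|21\d\d)\b")

def extract_events(plain_text):
    """Stage 2 (sketch): recognize candidate future-event sentences
    and the year entity each mentions (a crude proxy for ML-based NER)."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", plain_text):
        match = YEAR_PATTERN.search(sentence)
        if match:
            events.append((int(match.group()), sentence.strip()))
    return events

def build_timeline(events):
    """Stage 3 (sketch): consolidate recognized events into a
    year-ordered timeline, as the semantic representation tool would."""
    timeline = defaultdict(list)
    for year, sentence in events:
        timeline[year].append(sentence)
    return dict(sorted(timeline.items()))

# Stage 1 (sketch): assume the text processing tool already produced plain text.
doc = ("Analysts expect fusion pilot plants by 2035. "
       "One scenario projects lunar habitats in 2042. "
       "This claim is not dated.")
timeline = build_timeline(extract_events(doc))
```

A real system would replace the regex with a trained named-entity recognizer and add the event categorization step the authors describe; the pipeline shape, however, remains documents → plain text → recognized events → ordered timeline.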

The authors’ methodology is an iterative three-cycle experimental process that draws on a significant volume of Futures Studies documents published over a decade. The experimental cycles, each building on the insights and shortcomings of the previous one, facilitate an evolution of NERMAP, tailoring it more closely to the requirements of Futures Studies. In each cycle, the authors manually analyzed the documents, fed them into NERMAP, compared the system’s results with the manual analysis, and then categorized the identified future events. Across the three cycles, the work progressed from identifying difficulties in the model, to improving the model’s performance, to ultimately expanding the corpus and upgrading the training model. The transparent and adaptable nature of this methodology aligns well with the fluid nature of philosophical discourse, mirroring a journey from contemplation to knowledge.

Lyra et al. undertook a detailed evaluation of the NERMAP system through their tripartite experiment. Performance metrics from the model’s tagging stage—Precision, Recall, and F-Measure—were employed as evaluative parameters. Over the three experimental cycles, there was an evident growth in the system’s efficiency and accuracy, as well as its ability to learn from past cycles and adapt to new cases. After initial difficulties with the text conversion process and recognition of certain types of future events, the researchers revised the system and saw improved performance. From a 36% event discovery rate in the first cycle, NERMAP progressed to a remarkable 83% hit rate by the third cycle. In terms of quantifiable outcomes, the system successfully identified 125 future events in the final cycle, highlighting the significant practical applicability of the model. In the landscape of philosophical discourse, this trajectory of continuous learning and improvement resonates with the iterative nature of knowledge construction and refinement.
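Precision, Recall, and F-Measure are standard information-extraction metrics, and the reported “hit rate” corresponds to recall against the manual gold standard. A compact sketch of how such a comparison could be computed follows; the event identifiers are invented placeholders, not data from the paper.

```python
def precision_recall_f1(system, gold):
    """Compare system-identified events against a manually built gold set.

    precision: fraction of system events that are correct
    recall:    fraction of gold events the system found (the "hit rate")
    f1:        harmonic mean of precision and recall (F-Measure)
    """
    system, gold = set(system), set(gold)
    tp = len(system & gold)  # events found by both system and human analysts
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Placeholder event IDs for illustration only.
gold = {"e1", "e2", "e3", "e4", "e5", "e6"}       # human-identified events
system = {"e1", "e2", "e3", "e4", "e5", "x9"}     # one miss, one false positive
p, r, f = precision_recall_f1(system, gold)
```

Under this reading, the paper’s progression from a 36% to an 83% hit rate is a rise in recall across cycles as the training data came to cover more kinds of future-event phrasings.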

Implications and the Philosophical Dimension

In the philosophical context of futures studies, the discussion by Lyra et al. highlights the adaptability and future potential of the NERMAP model. Although the system displayed commendable efficiency in identifying future events, the authors acknowledge room for further enhancement. The system’s 83% hit rate, although notable, leaves a 17% gap, consisting primarily of new kinds of future events not yet represented in the training data. This observation marks an important frontier in futures studies, where incorporating yet-unconsidered cases into predictive models could yield even more accurate forecasting. One practical obstacle identified was text file processing; a more robust tool for parsing files could enhance NERMAP’s performance. The team also recognizes the value of NERMAP as a collaborative tool, underscoring the convergence of technological advancement and collaborative research dynamics in futures studies. Importantly, they envision a continuous refinement process for NERMAP, echoing the philosophical notion that knowledge and technological development are iterative and open-ended.

Lyra et al.’s work with NERMAP further prompts reflection on the intersections between futures studies, technological advancements, and philosophical considerations. The philosophical dimension, predominantly underscored by the dynamic and evolving nature of the model’s training data, provokes contemplation on the nature of knowledge itself. This issue highlights the intriguing tension between our desire to predict the future and the inherent unknowability of the future, making the philosophy of futures studies an exercise in managing and understanding uncertainty. The system’s continuous improvement is a manifestation of the philosophical concept of progress, incorporating new learnings and challenges into its methodology. Further, NERMAP’s collaborative potential places it within the discourse of communal knowledge building, wherein the predictive model becomes a tool not just for isolated research, but for the shared understanding of possible futures. The task of future prediction, traditionally performed by human researchers, is partly assumed by a model like NERMAP, leading us to consider the philosophical implications of machine learning and artificial intelligence in shaping our understanding of the future.

Abstract

During a Futures Study, researchers analyze a significant quantity of information dispersed across multiple document databases to gather conjectures about future events, making it challenging for researchers to retrieve all predicted events described in publications quickly. Generating a timeline of future events is time-consuming and prone to errors, requiring a group of experts to execute appropriately. This work introduces NERMAP, a system capable of semi-automating the process of discovering future events, organizing them in a timeline through Named Entity Recognition supported by machine learning, and gathering up to 83% of future events found in documents when compared to humans. The system identified future events that we failed to detect during the tests. Using the system allows researchers to perform the analysis in significantly less time, thus reducing costs. Therefore, the proposed approach enables a small group of researchers to efficiently process and analyze a large volume of documents, enhancing their capability to identify and comprehend information in a timeline while minimizing costs.


(Featured) Technology ethics assessment: Politicising the ‘Socratic approach’


Robert Sparrow proposes a Socratic approach to uncovering the ethical and political dimensions of technology. The method involves asking a series of questions that surface the ethical concerns and implications of a given technology. The author structures the questions into five categories: (1) technology and power, (2) technology and social justice, (3) technology, values and the environment, (4) technology and the human experience, and (5) process, consultation, and iteration.

The author argues that the Socratic approach can help identify ethical challenges in technology and facilitate discussions on the implications of technology in various aspects of society. The questions raised cover a wide range of issues, from power imbalances and social inequalities resulting from the adoption of technology, to the potential impact on the environment and human experiences. Furthermore, the author highlights the importance of considering the processes and procedures involved in developing and adopting a technology, as well as the need for user involvement in the design process, consultation with affected parties, and mechanisms for identifying and addressing ethical issues.

By using a Socratic approach, the paper emphasizes the need to critically evaluate technologies and their potential consequences rather than passively accepting them. The author contends that the ethical implications of technologies cannot be fully understood or addressed without considering the broader political context in which they are developed and deployed. As a result, the paper argues that empowering citizens and fostering open dialogue on the ethical implications of technology is vital in creating a more just, equitable, and hospitable world.

The paper’s insights into the politics of technology resonate with broader philosophical debates on the nature of power, justice, and responsibility in the context of technological advancements. By focusing on the Socratic method, the author also contributes to ongoing discussions on the epistemology of ethics in relation to technology. This approach highlights the importance of critical thinking and dialectical engagement in uncovering the ethical complexities of technology and its impact on society.

For future research, it would be valuable to explore the application of the Socratic approach to specific case studies, examining how the questions posed in this paper can help uncover the ethical dimensions of various technologies in practice. Additionally, it would be beneficial to investigate the potential of interdisciplinary collaboration between philosophy, social sciences, and technology development in order to better address the ethical and political concerns raised by emerging technologies. This would further enrich the discourse on the politics of technology and contribute to the development of more ethical and socially responsible technological innovations.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.
