(Featured) Farewell to humanism? Considerations for nursing philosophy and research in posthuman times

Olga Petrovskaya explores a groundbreaking domain: the application of posthumanist philosophy within the nursing field. By proposing an innovative perspective on the relational dynamics between humans and non-humans in healthcare, Petrovskaya illuminates the future possibilities of nursing in an increasingly complex and interconnected world. The research critically unpacks the conventional anthropocentric paradigm predominant in nursing and provides an alternative posthumanist framework for understanding nursing practices. The importance of this work thus lies in its contribution not merely to nursing studies but also to the philosophy of futures studies.

Petrovskaya’s inquiry into posthumanist thought is a deep examination of the conventional humanist traditions and their limitations in contemporary healthcare. The research suggests that posthumanism, with its rejection of human-centric superiority and endorsement of complex human-nonhuman interrelations, offers a viable path to reformulate nursing practice. In doing so, the author nudges the academic and professional nursing community to rethink their conventional approaches and consider new methodologies that incorporate posthumanist ideas. As such, Petrovskaya’s work establishes a critical juncture in the discourse of futures studies, heralding a transformative approach to nursing.

Nursing and the Posthumanist Paradigm

Petrovskaya takes significant strides to unpack the posthumanist paradigm, emphasizing its pivotal role in reshaping the field of nursing. Posthumanism, as the author illustrates, moves away from the anthropocentric bias of traditional humanism, challenging the supremacy of human reason and universalism. This shift to a more inclusive and egalitarian lens transcends the human/non-human divide, acknowledging the intertwined assemblages of humans and non-human elements. Petrovskaya’s discussion of the posthumanist perspective further exposes the oppressive tendencies and environmental degradation tied to humanism’s colonial, sexist, and racist underpinnings. With its more nuanced approach to understanding the complex relationships between humans and non-human entities, posthumanism underscores the importance of material practices and the fluidity of subjectivities. Petrovskaya’s contribution is thus seminal in bridging this philosophical discourse with nursing practices, facilitating a more comprehensive understanding of their implications and potential transformations.

The application of posthumanist perspectives to nursing has substantial implications for the practice. Through her paper, Petrovskaya brings to light the dynamism and fluidity of nursing practices, suggesting they are not predetermined but are spaces where various versions of the human are formed and contested. This conceptualization echoes the posthumanist emphasis on the evolving nature of subjectivities and positions nursing practices as active agents in the production of these subjectivities. The idea of nursing practices as “worlds in the making” is a potent illustration of this agency, denoting not only a change in perspective but also a fundamental shift in understanding the role and function of nursing within the broader socio-cultural and philosophical context.

Futures of Philosophy and Nursing

The juxtaposition of philosophy and nursing in Petrovskaya’s research further extends the domain of nursing beyond its practical roots and illuminates its deep engagement with philosophical thought. Petrovskaya’s survey of various philosophical works, especially those underrepresented in Western philosophical discourse, underscores the importance of diversity in philosophical thought for nursing studies. Notable philosophers like Wollstonecraft, de Gouges, Yacob, and Amo, despite their contributions, often remain on the margins of mainstream philosophical discourse, mirroring the marginalization faced by nursing as a discipline in academic circles. Spinoza’s work, in particular, holds potential for fostering new insights into nursing practices, given its significance in shaping critical posthumanist thought. Petrovskaya’s work thereby serves as a catalyst for nurse scholars to engage more deeply with alternative philosophies, fostering a more inclusive, diverse, and nuanced understanding of nursing in posthuman times.

Petrovskaya’s research is especially pertinent to futures studies, an interdisciplinary field engaged with critical exploration of possible, plausible, and preferable futures. As the study positions nursing within a posthumanist context, it implicitly challenges the conventional anthropocentric worldview and opens the door to a future where human-nonhuman assemblages are central to the understanding of subjectivities and practice outcomes. These propositions represent a radical shift from current paradigms, setting the stage for a future where the entanglement of humans and nonhumans is recognized and embraced rather than ignored or oversimplified. The novel methodologies that Petrovskaya advocates for studying these assemblages can potentially drive futures studies towards more nuanced, complex, and inclusive explorations of what future nursing practices—and, by extension, human society—might look like.

Abstract

In this paper, I argue that critical posthumanism is a crucial tool in nursing philosophy and scholarship. Posthumanism entails a reconsideration of what ‘human’ is and a rejection of the whole tradition founding Western life in the 2500 years of our civilization as narrated in founding texts and embodied in governments, economic formations and everyday life. Through an overview of historical periods, texts and philosophy movements, I problematize humanism, showing how it centres white, heterosexual, able-bodied Man at the top of a hierarchy of beings, and runs counter to many current aspirations in nursing and other disciplines: decolonization, antiracism, anti-sexism and Indigenous resurgence. In nursing, the term humanism is often used colloquially to mean kind and humane; yet philosophically, humanism denotes a Western philosophical tradition whose tenets underpin much of nursing scholarship. These underpinnings of Western humanism have increasingly become problematic, especially since the 1960s, motivating nurse scholars to engage with antihumanist and, recently, posthumanist theory. However, even current antihumanist nursing arguments manifest deep embeddedness in humanistic methodologies. I show both the problematic underside of humanism and critical posthumanism’s usefulness as a tool to fight injustice and examine the materiality of nursing practice. In doing so, I hope to persuade readers not to be afraid of understanding and employing this critical tool in nursing research and scholarship.

(Featured) Beyond the hype: ‘acceptable futures’ for AI and robotic technologies in healthcare

Giulia De Togni et al. delve into the complex dynamics of technoscientific expectations surrounding the future of artificial intelligence (AI) and robotic technologies in healthcare. By focusing on surgery, pathology, and social care, they examine the strategies employed by scientists, clinicians, and other stakeholders to navigate and construct visions of an AI-driven future in healthcare. The authors illustrate the challenges faced by these stakeholders, who must balance promissory visions with more realistic expectations, while acknowledging the performative power of high expectations in attracting investment and resources.

The participants in the study engage in a balancing act between high and low expectations, drawing boundaries to maintain credibility for their research and practice while distancing themselves from the hype. They recognize that over-optimistic visions may create false hope and unrealistic expectations of performance, potentially harming AI and robotics research through deflated investment if the outcomes fail to match expectations. The authors demonstrate how the stakeholders negotiate the tension between sustaining and nurturing the hype while calling for the recalibration of expectations within an ethically and socially responsible framework.

Central to the participants’ visions of acceptable futures is the changing nature of human-machine relationships. By balancing different social, ethical, and technoscientific demands, the participants articulate futures that are perceived as ethically and socially acceptable, as well as realistically achievable. They frame their articulations of the present and future potential and limitations of AI and robotics technologies within an ethics of expectations that positions normative considerations as central to how these expectations are expressed.

This research article contributes to broader philosophical debates concerning the role of expectations and imaginaries in shaping our understanding of technoscientific innovation, human-machine relationships, and the ethics of care. By exploring the dynamic interplay between these factors, the authors shed light on how the future of AI and robotics in healthcare is being constructed and negotiated. This study resonates with key themes in the philosophy of futures studies, including the co-constitution of technological visions and sociotechnical imaginaries, the performativity of expectations, and the ethical dimensions of forecasting and envisioning the future.

To further enrich our understanding of these complex dynamics, future research could explore the perspectives of additional stakeholders, such as patients and policymakers, to gain a more comprehensive picture of the expectations surrounding AI and robotics in healthcare. Additionally, cross-cultural and comparative studies could reveal how different cultural contexts and healthcare systems influence expectations and acceptance of these technologies. Ultimately, by continuing to examine the societal implications of AI and robotic technologies, including their impact on patient autonomy, privacy, and the human aspects of care, scholars can contribute to a more nuanced and ethically responsible vision of the future of healthcare.

Abstract

AI and robotic technologies attract much hype, including utopian and dystopian future visions of technologically driven provision in the health and care sectors. Based on 30 interviews with scientists, clinicians and other stakeholders in the UK, Europe, USA, Australia, and New Zealand, this paper interrogates how those engaged in developing and using AI and robotic applications in health and care characterize their future promise, potential and challenges. We explore the ways in which these professionals articulate and navigate a range of high and low expectations, and promissory and cautionary future visions, around AI and robotic technologies. We argue that, through these articulations and navigations, they construct their own perceptions of socially and ethically ‘acceptable futures’ framed by an ‘ethics of expectations.’ This imbues the envisioned futures with a normative character, articulated in relation to the present context. We build on existing work in the sociology of expectations, aiming to contribute towards better understanding of how technoscientific expectations are navigated and managed by professionals. This is particularly timely since the COVID-19 pandemic gave further momentum to these technologies.

(Featured) Ethics of AI and Health Care: Towards a Substantive Human Rights Framework

S. Matthew Liao provides an incisive exploration into the ethical considerations intrinsic to the application of artificial intelligence (AI) in healthcare contexts. The paper underscores the burgeoning interest in employing AI for health-related purposes, with AI applications demonstrating competencies in diagnosing certain types of cancer, identifying heart rhythm abnormalities, diagnosing various eye diseases, and even identifying viable embryos. However, the author cautions that the deployment of AI in healthcare settings necessitates adherence to robust ethical frameworks and guidelines.

The author identifies a rapidly growing multitude of ethical frameworks for AI proposed over recent years. More than 80 such frameworks now exist, stemming from a diverse array of sources including private corporations, governmental agencies, academic institutions, and intergovernmental bodies. These frameworks commonly reference the four principles of biomedical ethics: autonomy, beneficence, non-maleficence, and justice, and often include recommendations for transparency, explainability, and trust. However, the author warns that this proliferation has led to confusion, raising pressing questions about the basis, justification, and practical implementation of these recommendations.

In response to this conundrum, the author proposes an AI ethics framework rooted in substantive human rights theory. This proposed framework seeks to address the questions raised by the proliferation of ethical guidelines and to provide clear and practical guidance for the use of AI in healthcare. The author argues for an ethical framework that is not merely abstract but one that explains the grounds and justifications of the recommendations it puts forward, as well as how these recommendations should be applied in practice.

The broader philosophical discourse that this research engages with is the ethics of technology and, more specifically, the ethical and moral implications of AI use in healthcare. The central philosophical question the author grapples with is the tension between the rapid development and application of AI in healthcare and the need for substantive ethical guidelines to govern its use. This brings into sharp focus the perennial philosophical tension between progress and ethical constraint, raising the specter of issues such as the nature of autonomy, the definition of harm, and the equitable distribution of benefits and burdens.

For future research, the author’s proposition of a human rights-based ethical framework opens up multiple avenues. First, the application of this framework could be examined in real-world healthcare scenarios to assess its efficacy in guiding ethical AI use. Second, the interplay between this framework and existing legal systems could be studied to ascertain any gaps or overlaps. Lastly, a comparative analysis could be conducted of how this proposed framework fares against other ethical frameworks in use, and how it might be refined or integrated with other approaches for a more robust ethical guidance in healthcare AI applications.

Abstract

There is enormous interest in using artificial intelligence (AI) in health care contexts. But before AI can be used in such settings, we need to make sure that AI researchers and organizations follow appropriate ethical frameworks and guidelines when developing these technologies. In recent years, a great number of ethical frameworks for AI have been proposed. However, these frameworks have tended to be abstract and not explain what grounds and justifies their recommendations and how one should use these recommendations in practice. In this paper, I propose an AI ethics framework that is grounded in substantive, human rights theory and one that can help us address these questions.

(Featured) Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making

Giorgia Lorenzini et al. examine the evolving nature of the doctor-patient relationship in the context of integrating artificial intelligence (AI) into healthcare. They focus on the shared decision-making (SDM) process between doctors and patients, a consensual partnership founded on communication and respect for voluntary choices. The authors argue that the introduction of AI can potentially enhance SDM, provided it is implemented with care and consideration. The paper addresses the communication between doctors and AI and the communication of this interaction to patients, evaluating its potential impact on SDM and proposing strategies to preserve both doctors’ and patients’ autonomy.

The authors explore the communication and autonomy challenges arising from AI integration into clinical practice. They posit that AI’s influence could unintentionally limit doctors’ autonomy by heavily guiding their decisions, which in turn raises questions about the balance of power in the decision-making process. The paper emphasizes the importance of doctors understanding AI’s recommendations and checking for errors while also being competent in working with AI systems. Examining the “black box” problem of AI opacity, the authors argue that explainability is crucial for fostering the AI-doctor relationship and preserving doctors’ autonomy.

The paper then investigates doctor-patient communication and autonomy within the context of AI integration. The authors argue that in order to promote patients’ autonomy and encourage their participation in SDM, doctors must disclose and discuss AI’s involvement in the clinical evaluation process. They also contend that AI should consider patients’ preferences and unique situations, thus ensuring that their values are respected and that they are able to participate actively in the SDM process.

In relating the research to broader philosophical issues, the authors’ examination of the AI-doctor-patient relationship aligns with questions surrounding the ethical and moral implications of AI in society. As AI increasingly permeates various aspects of our lives, its impact on human autonomy, agency, and moral responsibility becomes a focal point for philosophical inquiry. The paper contributes to this discourse by delving into the specific context of healthcare and the evolving dynamics of the doctor-patient relationship, providing a microcosm for understanding the broader implications of AI integration in human decision-making processes.

As the authors outline the potential benefits and challenges of incorporating AI into the SDM process, future research could investigate the practical implementation of AI in various clinical settings, evaluating the effectiveness of AI-doctor collaboration in promoting SDM. Further research might also address the training and education necessary for medical professionals to adapt to AI integration, ensuring a seamless transition that optimizes patient care. Additionally, exploring methods for incorporating patients’ values into AI algorithms could provide a path to more personalized and autonomy-respecting AI-assisted healthcare.

Abstract

Artificial intelligence (AI) based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. For this reason, AI-based CDSS has an impact on the doctor–patient relationship, shaping their decisions with its suggestions. We may be on the verge of a paradigm shift, where the doctor–patient relationship is no longer a dual relationship, but a triad. This paper analyses the role of AI-based CDSS for shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor–patient communication is essential to promote more ethical medical practice. Both doctors’ and patients’ autonomy need to be considered in the light of AI.