Giulia De Togni et al. delve into the complex dynamics of technoscientific expectations surrounding the future of artificial intelligence (AI) and robotic technologies in healthcare. By focusing on surgery, pathology, and social care, they examine the strategies employed by scientists, clinicians, and other stakeholders to navigate and construct visions of an AI-driven future in healthcare. The authors illustrate the challenges faced by these stakeholders, who must balance promissory visions with more realistic expectations, while acknowledging the performative power of high expectations in attracting investment and resources.
The participants in the study engage in a balancing act between high and low expectations, drawing boundaries to maintain credibility for their research and practice while distancing themselves from the hype. They recognize that over-optimistic visions may create false hope and unrealistic expectations of performance, potentially harming AI and robotics research through deflated investment if outcomes fail to match expectations. The authors demonstrate how the stakeholders negotiate the tension between sustaining and nurturing the hype, on the one hand, and calling for the recalibration of expectations within an ethically and socially responsible framework, on the other.
Central to the participants’ visions of acceptable futures is the changing nature of human-machine relationships. By balancing different social, ethical, and technoscientific demands, the participants articulate futures that are perceived as ethically and socially acceptable, as well as realistically achievable. They frame their articulations of the present and future potential and limitations of AI and robotics technologies within an ethics of expectations that positions normative considerations as central to how these expectations are expressed.
This research article contributes to broader philosophical debates concerning the role of expectations and imaginaries in shaping our understanding of technoscientific innovation, human-machine relationships, and the ethics of care. By exploring the dynamic interplay between these factors, the authors shed light on how the future of AI and robotics in healthcare is being constructed and negotiated. This study resonates with key themes in the philosophy of futures studies, including the co-constitution of technological visions and sociotechnical imaginaries, the performativity of expectations, and the ethical dimensions of forecasting and envisioning the future.
To further enrich our understanding of these complex dynamics, future research could explore the perspectives of additional stakeholders, such as patients and policymakers, to gain a more comprehensive picture of the expectations surrounding AI and robotics in healthcare. Additionally, cross-cultural and comparative studies could reveal how different cultural contexts and healthcare systems influence expectations and acceptance of these technologies. Ultimately, by continuing to examine the societal implications of AI and robotic technologies, including their impact on patient autonomy, privacy, and the human aspects of care, scholars can contribute to a more nuanced and ethically responsible vision of the future of healthcare.
Beyond the hype: ‘acceptable futures’ for AI and robotic technologies in healthcare

Abstract

AI and robotic technologies attract much hype, including utopian and dystopian future visions of technologically driven provision in the health and care sectors. Based on 30 interviews with scientists, clinicians and other stakeholders in the UK, Europe, USA, Australia, and New Zealand, this paper interrogates how those engaged in developing and using AI and robotic applications in health and care characterize their future promise, potential and challenges. We explore the ways in which these professionals articulate and navigate a range of high and low expectations, and promissory and cautionary future visions, around AI and robotic technologies. We argue that, through these articulations and navigations, they construct their own perceptions of socially and ethically ‘acceptable futures’ framed by an ‘ethics of expectations.’ This imbues the envisioned futures with a normative character, articulated in relation to the present context. We build on existing work in the sociology of expectations, aiming to contribute towards a better understanding of how technoscientific expectations are navigated and managed by professionals. This is particularly timely since the COVID-19 pandemic gave further momentum to these technologies.