Ines Schröder et al. present an in-depth exploration of the phenomenological and ethical implications of socially assistive robots (SARs), with a specific focus on their role within the medical sector. Central to the discussion is the concept of responsivity, a construct that the authors argue is inherent to human experience and mirrored, to a certain extent, in human-robot interactions. They explore the nature of this perceived responsivity and its implications for the philosophical understanding of human-robot relations.
The article begins by drawing a distinction between human and artificial responsivity, elucidating the phenomenological structure of human responsivity and how it is translated into SARs’ design. The authors underscore how SARs’ design parameters, such as AI-enhanced speech recognition, physical mobility, and social affordances, culminate in a form of ‘virtual responsivity.’ This virtual responsivity mimics human interaction, creating a semblance of empathy and understanding. However, the authors also emphasize the limitations of this approach, highlighting the potential for deception and the absence of the direct reciprocity essential to genuine ethical responsivity.
The crux of the article lies in its examination of the ethical implications of this constructed responsivity. The authors grapple with the potential ethical pitfalls, tensions, and challenges of SARs, particularly within the domain of medical applications. They articulate concerns regarding the preservation of patient autonomy, the balancing of beneficial impact against inherent risks, and the principle of justice in relation to access to advanced technologies. The authors further highlight the three ethically relevant dimensions of vulnerability, dignity, and trust in relation to responsivity, emphasizing the importance of these dimensions in human-robot interactions.
Broadly, the research intersects with larger philosophical themes concerning the nature of consciousness, personhood, and the moral status of non-human entities. The authors’ analysis of SARs’ ‘virtual responsivity’ challenges conventional understandings of these concepts, raising critical questions about the attribution of moral status and the potential for emotional attachment to non-human entities. The exploration of ethical dimensions of vulnerability, dignity, and trust in the context of human-robot interactions further elucidates the evolving dynamics of human-machine relationships, providing a nuanced perspective on the philosophical implications of advanced technology.
Looking towards the future, the research opens several avenues for further exploration. One potential focus is the development of a robust ethical framework for the design and use of SARs, especially in sensitive domains such as healthcare. There is a need for research into ‘ethically sensitive responsiveness,’ which could provide a basis for setting appropriate boundaries in human-robot interactions and ensuring the clear communication of a robot’s capabilities and limitations. Additionally, empirical research exploring the psychological effects of human-robot interactions, particularly in relation to the formation of trust, would be invaluable. Overall, the ethical and philosophical implications of artificial responsivity necessitate a multidisciplinary approach, inviting further dialogue between fields such as robotics, ethics, philosophy, and psychology.
Abstract
Definition of the problem
This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in what respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concepts of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level Expert Group on AI of the European Union.
Arguments
Trust is analyzed as a multidimensional concept and phenomenon that must primarily be understood as originating in trusting as a human functioning and capability. To trust is an essential part of the basic human capability to form relations with others. We further discuss the concept of responsivity, which has been established in phenomenological research as a foundational structure of the relation between the self and the other. We argue that trust, and trusting as a capability, is fundamentally responsive and requires responsive others in order to be realized. An understanding of responsivity is thus crucial for conceptualizing trusting within the ethical framework of human flourishing. We apply a phenomenological–anthropological analysis to explore the link between the human propensity to trust and certain qualities of social robots that construct responsiveness and thereby simulate responsivity.
Conclusion
Against this background, we critically ask whether the concept of trustworthiness in social human–robot interaction may be misguided, given the limited range of ethical demands that the constructed responsiveness of social robots is able to answer.
Can robots be trustworthy?