Fabio Morreale et al. examine the nature and implications of unseen digital labour in artificial intelligence (AI). The article proceeds methodically, grounding the issue in four case studies, including Google’s reCAPTCHA, Spotify’s recommendation algorithms, and OpenAI’s language model GPT-3, and then extrapolates five characteristics that define the “unwitting labourer” in AI systems: unawareness, non-consensual labour, unwaged and uncompensated labour, misappropriation of original intent, and the unwitting condition of the contribution itself.
The study first scrutinises the premise of unawareness: many individuals unknowingly perform labour that trains AI systems, interacting with digital products without realising that those interactions are used to improve machine learning models. The research then turns to non-consensual labour. The authors point out that while traditional working agreements require consent from both parties, such consent is typically absent or uninformed in the context of digital labour for AI training, resulting in exploitation.
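To make this mechanism concrete, consider a minimal sketch of how such implicit contributions might become training data. The pipeline below is hypothetical: the function names, thresholds, and data shapes are illustrative assumptions, not the actual implementation of reCAPTCHA or any system analysed in the article. It shows how answers supplied by users merely to prove they are human can double as crowd-sourced labels for an image classifier.

from collections import Counter, defaultdict

def aggregate_labels(responses, min_votes=3, agreement=0.8):
    """Turn per-user challenge answers into training labels by majority vote.

    responses: iterable of (image_id, user_answer) pairs collected as a
    side effect of users verifying that they are human.
    """
    votes = defaultdict(Counter)
    for image_id, answer in responses:
        votes[image_id][answer] += 1

    labels = {}
    for image_id, counts in votes.items():
        answer, top = counts.most_common(1)[0]
        total = sum(counts.values())
        # Keep a label only when enough independent answers agree.
        if total >= min_votes and top / total >= agreement:
            labels[image_id] = answer
    return labels

# Three users each answer for img_001, so it yields a labelled training
# example; img_002 has too few votes to produce a label.
responses = [
    ("img_001", "traffic light"),
    ("img_001", "traffic light"),
    ("img_001", "traffic light"),
    ("img_002", "bus"),
]
print(aggregate_labels(responses))  # {'img_001': 'traffic light'}

Even in this toy form, the asymmetry the authors describe is visible: each individual answer is trivial, yet the aggregated labels form a valuable training asset that accrues entirely to the system’s owner.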
In terms of compensation, the authors challenge the traditional notion of labour, arguing that although unwitting labourers receive neither wages nor acknowledgement for their efforts, the data they provide yields significant aggregate value for the companies that capture it. The research further highlights the misappropriation of original intent, illustrating that the purpose of the labour performed is often obscured or transfigured, producing a significant divergence between the intention of the exploited and that of the exploiter.
The article’s argument prompts a re-evaluation of our understanding of labour and consent, raising questions that align with broader philosophical discourses on the ethics of AI and labour rights in the digital age. By examining human-AI interaction through the lens of exploitation, the authors contribute to the growing discourse on AI ethics, invoking notions reminiscent of Marxist critiques of capitalism, in which labour is commodified and surplus value is extracted without adequate compensation or acknowledgement.
Furthermore, the study enriches the dialogue surrounding consent, autonomy, and freedom in the digital age, forcing us to reconsider how these concepts should be reframed as AI becomes ever more integrated into everyday life. It also raises significant questions about the place of human cognition in the age of AI, suggesting that uniquely human skills and experiences are not merely being utilised but commodified and exploited, adding another dimension to the ongoing discourse on cognitive capitalism.
Looking forward, the authors’ arguments open numerous avenues for further exploration. Studies are needed on the societal and individual impacts of such exploitation: how it shapes our understanding of labour, our autonomy, and our interactions with technology. Additional research could explore mechanisms for informing and compensating users for their contributions to AI training. Investigation into policy interventions and regulatory mechanisms to mitigate the exploitation of such digital labour would likewise be invaluable. Ultimately, the authors’ research catalyses a dialogue about the balance of power between individuals and technology companies, and about the importance of maintaining that balance in an increasingly AI-integrated future.
Abstract
Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. In order to achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of “what it means to be human”. We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. By employing a post-Marxian framework, we then analyse the characteristics of these individuals and describe the elements of the capture-machine. Then, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.
The unwitting labourer: extracting humanness in AI training