(Featured) Moral disagreement and artificial intelligence

Pamela Robinson offers a rigorous examination of the methodological problems that moral disagreement creates for the development and decision-making of artificial intelligence (AI). The central point of discussion is the design of ethical AI systems, in particular an ‘AI Decider’ that must make decisions in cases where its decision subjects morally disagree. The author posits that this conundrum could be managed using moral, compromise, or epistemic solutions.

The author systematically elucidates the possible solutions by presenting three categories. Moral solutions involve choosing a moral theory, such as preference utilitarianism, and aligning the AI to it, thereby sidestepping disagreement by assuming moral consensus. Compromise solutions, by contrast, handle disagreement by aggregating moral views to arrive at a collective decision; here the author introduces Arrow’s impossibility theorem and social choice theory as relevant tools for AI decision-making. Lastly, epistemic solutions, arguably the most complex of the three, require the AI Decider to treat moral disagreement as evidence and adjust its decisions accordingly. The author mentions several approaches within this category, such as reflective equilibrium, moral uncertainty, and moral hedging.
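
To make the compromise category concrete, here is a minimal sketch of one textbook aggregation rule from social choice theory, a Borda count over ranked moral views. It is an illustration, not a method proposed in Robinson’s paper; the options and rankings are invented.

```python
# Illustrative sketch: aggregating the ranked moral views of several
# decision subjects with a Borda count (a standard social-choice rule).
# The options and rankings below are hypothetical.
from collections import defaultdict

def borda_aggregate(rankings):
    """Return options ordered by total Borda score, highest first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Three decision subjects who morally disagree about three options.
views = [
    ["divert", "warn", "do_nothing"],
    ["warn", "do_nothing", "divert"],
    ["warn", "divert", "do_nothing"],
]
print(borda_aggregate(views))  # ['warn', 'divert', 'do_nothing']
```

Arrow’s impossibility theorem is precisely the warning that no such rule can simultaneously satisfy a small set of natural fairness conditions; the Borda count, for instance, violates independence of irrelevant alternatives.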

However, none of these solutions, the author asserts, provides a perfect answer to the problem; each is fraught with its own complexities and risks. Here the concept of ‘moral risk’, the chance of getting things morally wrong, is introduced. The author postulates that the choice between an epistemic and a compromise solution should depend on the moral risk involved, and argues that the methodological problem is best addressed by minimizing this risk, whichever of the three kinds of solution is employed.
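
The moral-risk framing can likewise be made concrete. Below is a minimal sketch, not from the paper, of one standard way to operationalize it: weight each option’s “wrongness” under each moral theory by one’s credence in that theory, then pick the option with the lowest expected moral risk. All numbers are hypothetical.

```python
# Illustrative sketch: choosing under moral uncertainty by minimizing
# expected moral risk. Credences and wrongness scores are invented.
credences = {"utilitarian": 0.5, "deontological": 0.3, "contractualist": 0.2}

# wrongness[theory][option]: how badly the option errs if that theory is true
wrongness = {
    "utilitarian":    {"A": 0.0, "B": 0.7},
    "deontological":  {"A": 0.9, "B": 0.1},
    "contractualist": {"A": 0.4, "B": 0.2},
}

def expected_moral_risk(option):
    return sum(p * wrongness[theory][option] for theory, p in credences.items())

risks = {o: round(expected_moral_risk(o), 2) for o in ("A", "B")}
print(min(risks, key=risks.get), risks)  # A {'A': 0.35, 'B': 0.42}
```

This is essentially moral hedging; the hard philosophical work lies in making wrongness scores comparable across theories, which the snippet simply assumes.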

Delving into the broader philosophical themes, this paper reignites the enduring debate on the role and impact of moral relativism and objectivism within the sphere of artificial intelligence. The issues presented tie into the grand narrative of moral philosophy, particularly the discourse around meta-ethics and normative ethics, where differing moral perspectives invariably lead to dilemmas. The AI Decider, in this sense, mirrors the human condition where decision-making often requires navigating the labyrinth of moral disagreement. The author’s emphasis on moral risk provides a novel framework, bridging the gap between theoretical moral philosophy and the practical demands of AI ethics.

For future research, several intriguing pathways are suggested by this article. First, an in-depth exploration of the concept of ‘moral risk’ could illuminate new strategies for handling moral disagreement in AI decision-making. Comparative studies, analyzing the outcomes and repercussions of decisions made by an AI system utilizing moral, compromise, or epistemic solutions, could provide empirical evidence for the efficacy of these approaches. Lastly, given the dynamism of moral evolution, the impact of changes in societal moral views over time on an AI Decider’s decision-making process warrants investigation. This could include exploring how the AI system could effectively adapt to the evolution of moral consensus or disagreement within its decision subjects. Such future research could significantly enhance our understanding of ethical decision-making in AI systems, bringing us closer to the creation of more ethically aligned, responsive, and responsible artificial intelligence.

Abstract

Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. Compromise solutions apply a method of finding a compromise and taking information about the disagreement as input. Epistemic solutions apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the other. I argue that the choice is best framed in terms of moral risk.

(Featured) We have to talk about emotional AI and crime

Lena Podoletz investigates the use of emotional artificial intelligence (AI) in law enforcement and criminal justice systems. Her critical examination covers the sociopolitical, legal, and ethical ramifications of this technology, contextualizing the analysis within the broader landscape of technological trends and potential future applications.

The opening part of the article is devoted to the intricacies of emotion recognition AI, specifically its definition, functionality, and the scientific foundations that inform its development. In dissecting these aspects, the author emphasizes the discrepancy between the common understanding of emotions and the way they are algorithmically conceptualized and processed. Key to this understanding is the recognition that emotional AI, in its current stage of development, relies heavily on theoretical constructs like the ‘basic emotions theory’ and the ‘circumplex model’, the limitations and biases of which can significantly impact its effective and ethical application in law enforcement and criminal justice contexts.
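The circumplex model the author refers to locates emotions on a two-dimensional plane of valence (pleasant to unpleasant) and arousal (activated to deactivated). The toy function below, an illustration rather than anything from the article, shows how crude the resulting algorithmic conceptualization can be: a continuous inner life reduced to four quadrant labels.

```python
# Illustrative sketch of the circumplex model: emotions as points on a
# valence/arousal plane, reduced here to four coarse quadrant labels.
def circumplex_quadrant(valence, arousal):
    """Map a (valence, arousal) pair, each in [-1, 1], to a coarse label."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"    # pleasant, activated
    if valence >= 0:
        return "calm/content"     # pleasant, deactivated
    if arousal >= 0:
        return "angry/afraid"     # unpleasant, activated
    return "sad/depressed"        # unpleasant, deactivated

print(circumplex_quadrant(0.6, -0.4))  # calm/content
```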

Subsequent sections of the article provide a rigorous evaluation of four areas of concern: accuracy and performance, bias, accountability, and privacy along with other rights and freedoms. The author underscores the need for distinguishing between different uses of emotional AI, stressing that the challenges presented in a law enforcement setting differ significantly from its application in other contexts, such as private homes or smart health environments. This examination extends to issues related to bias in algorithmic decision-making, where existing societal biases can be reproduced and amplified. The complex issue of accountability in emotional AI is also dissected, particularly in terms of attributing responsibility for decisions made by such systems. Finally, the author explores the intersection of emotional AI technologies with privacy and other human rights, indicating that the deployment of these systems can challenge individual autonomy and human dignity.

The thematic concerns presented in the article echo the larger philosophical discourse surrounding the role and implications of AI in society. The author’s evaluation of emotional AI is in line with post-humanist thought, which questions the Cartesian dualism of human and machine, and problematizes the reduction of complex human behaviors and emotions into codified, algorithmic processes. The exploration of bias, accountability, and privacy ties into ongoing debates around the ethics of AI, especially concerning notions of fairness, transparency, and justice in algorithmic decision-making. Moreover, the question of who holds responsibility when AI systems make mistakes or violate rights brings into focus the legal and philosophical concept of moral agency in the age of advanced AI.

Future research might delve deeper into how emotional AI, specifically within law enforcement and criminal justice systems, could be better regulated or standardized to address the highlighted concerns. It would be valuable to explore potential legislative and technical solutions to mitigate bias, improve accuracy, and establish clear lines of accountability. Moreover, further philosophical examination is needed to unpack the implications of emotional AI on our understanding of human emotions, agency, and rights in an increasingly technologized society. Finally, in line with futures studies philosophy, it would be beneficial to conceive of alternative trajectories for the development and deployment of emotional AI that are anchored in ethical foresight and participatory decision-making, thereby ensuring a future that upholds societal well-being and human dignity.

Abstract

Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources, such as facial (micro)-movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated and so are the underlying scientific methods that serve as the basis for many such technologies. In this article I will engage with this new technology, and with the debates and literature that surround it. Working at the intersection of criminology, policing, surveillance and the study of emotional AI this paper explores and offers a framework of understanding the various issues that these technologies present particularly to liberal democracies. I argue that these technologies should not be deployed within public spaces because there is only a very weak evidence-base as to their effectiveness in a policing and security context, and even more importantly represent a major intrusion to people’s private lives and also represent a worrying extension of policing power because of the possibility that intentions and attitudes may be inferred. Further to this, the danger in the use of such invasive surveillance for the purpose of policing and crime prevention in urban spaces is that it potentially leads to a highly regulated and control-oriented society. I argue that emotion recognition has severe impacts on the right to the city by not only undertaking surveillance of existing situations but also making inferences and probabilistic predictions about future events as well as emotions and intentions.

(Featured) Is AI the Future of Mental Healthcare?

Francesca Minerva and Alberto Giubilini engage with the intricate subject of AI implementation in the mental healthcare sector, focusing on the potential benefits and challenges of its use. They open by setting out the rising global demand for mental healthcare and argue that the conventional therapist-centric model may not be scalable enough to meet it. This sets the context for exploring the use of AI to supplement, or in certain capacities even replace, human therapists. The use of AI in mental healthcare is argued to have significant advantages, such as scalability, cost-effectiveness, continuous availability, and the ability to harness and analyze vast amounts of data for effective diagnosis and treatment. However, the authors explicitly acknowledge potential downsides, such as privacy concerns, issues with the use and potential misuse of personal data, and the need for regulatory frameworks to monitor and ensure the safe and ethical use of AI in this context.

Their research subsequently delves into the issue of bias in healthcare, highlighting how AI could both help overcome human biases and introduce new ones. Healthcare practitioners, despite their commitment to objectivity, may be prone to biases arising from a patient’s individual and social characteristics, such as age, social status, and ethnic background. AI, if programmed carefully, could help counteract these biases by focusing more strictly on symptoms; yet the article also underscores that AI, being programmed by humans, remains susceptible to biases introduced in its programming. This delicate dance of bias mitigation and introduction forms a key discussion point of the article.

Their research finally broaches two critical ethical-philosophical considerations: the categorization of mental health disorders and the shifting responsibilities of mental health professionals following the introduction of AI. The authors argue that existing categorizations, such as those in the DSM-5, may not remain adequate or relevant if AI can provide more nuanced data and behavioral cues, potentially necessitating a reevaluation of diagnostic categories. The issue of professional responsibility is also examined, in particular the challenge of assigning responsibility for AI-enabled diagnoses in light of potential errors or misdiagnoses.

The philosophical underpinning of the research article is deeply rooted in the realm of ethics, epistemology, and ontological considerations of AI in healthcare. The philosophical themes underscored in the article, such as the reevaluation of categorizations of mental health disorders and the shifting responsibilities of mental health professionals, point towards broader philosophical discourses. These revolve around how technologies like AI challenge our existing epistemic models and ethical frameworks and demand a reconsideration of our ontological understanding of subjects like disease categories, diagnosis, and treatment. The question of responsibility, and the degree to which AI systems can or should be held accountable, is a compelling case of applied ethics intersecting with technology.

Future research could delve deeper into the philosophical dimensions of AI use in psychiatry. For instance, exploring the ontological questions of mental health disorders in the age of AI could be a meaningful avenue. Also, studying the epistemic shifts in our understanding of mental health symptoms and diagnosis with AI’s increasing role could be a fascinating research area. An additional perspective could be to examine the ethical considerations in the context of AI, particularly focusing on accountability, transparency, and the changing professional responsibilities of mental health practitioners. Investigating the broader societal and cultural implications of such a shift in mental healthcare provision could also provide valuable insights.

Excerpt

Over the past decade, AI has been used to aid or even replace humans in many professional fields. There are now robots delivering groceries or working on assembly lines in factories, and there are AI assistants scheduling meetings or answering the phone line of customer services. Perhaps even more surprisingly, we have recently started admiring visual art produced by AI, and reading essays and poetry “written” by AI (Miller 2019), that is, composed by imitating or assembling human compositions. Very recently, the development of ChatGPT has shown how AI could have applications in education (Kung et al. 2023), the judicial system (Parikh et al. 2019) and the entertainment industry.

(Featured) A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

Alejo José G. Sison and Dulce M. Redín take a critical look at the concept of artificial moral agents (AMAs), especially in relation to artificial intelligence (AI), from a neo-Aristotelian ethical standpoint. The authors open with a compelling critique of the arguments in favor of AMAs, asserting that such agents are neither inevitable nor guaranteed to bring practical benefits. They argue that the term ‘autonomous’ may not be fitting, as AMAs are, at their core, bound to the algorithmic instructions they follow, and that the term ‘moral’ is questionable because the proposed morality is imposed from outside the agent. According to the authors, true moral good is internally driven and cannot be separated from the agent or from the manner in which it is achieved.

The authors proceed to suggest that the arguments against the development of AMAs have been insufficiently considered, proposing a neo-Aristotelian ethical framework as a potential remedy. This approach places emphasis on human intelligence, grounded in biological and psychological scaffolding, and distinguishes between the categories of heterotelic production (poiesis) and autotelic action (praxis), highlighting that the former can accommodate machine operations, while the latter is strictly reserved for human actors. Further, the authors propose that this framework offers greater clarity and coherence by explicitly denying bots the status of moral agents due to their inability to perform voluntary actions.

Lastly, the authors explore the potential alignment of AI and virtue ethics. They scrutinize the potential for AI to impact human flourishing and virtues through their actions or the consequences thereof. Herein, they feature the work of Vallor, who has proposed the design of “moral machines” by embedding norms, laws, and values into computational systems, thereby, focusing on human-computer interaction. However, they caution that such an approach, while intriguing, may be inherently flawed. The authors also examine two possible ways of embedding ethics in AI: value alignment and virtue embodiment.

The research article provides an interesting contribution to the ongoing debate on the potential for AI to function as moral agents. The authors adopt a neo-Aristotelian ethical framework to add depth to the discourse, providing a fresh perspective that integrates virtue ethics and emphasizes the role of human agency. This perspective brings to light the broader philosophical questions around the very nature of morality, autonomy, and the distinctive attributes of human intelligence.

Future research avenues might revolve around exploring more extensively how virtue ethics can interface with AI and if the goals that Vallor envisages can be realistically achieved. Further philosophical explorations around the assumptions of agency and morality in AI are also needed. Moreover, studies examining the practical implications of the neo-Aristotelian ethical framework, especially in the realm of human-computer interaction, would be invaluable. Lastly, it may be insightful to examine the authors’ final suggestion of approaching AI as a moral agent within the realm of fictional ethics, a proposal that opens up a new and exciting area of interdisciplinary research between philosophy, AI, and literature.

Abstract

We examine Van Wynsberghe and Robbins (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020) set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

(Featured) Space not for everyone: The problem of social exclusion in the concept of space settlement

Konrad Szocik contests the arguments supporting space colonization and underscores overlooked dimensions of social justice and equity. The primary critique orbits around the arguments of Milan M. Ćirković, who previously dismissed skepticism concerning space colonization, but failed to consider arguments rooted in social justice and equal access. The author points out that the endeavors of space exploration and colonization could inadvertently amplify existing inequalities, transforming these ventures into projects that serve only a fraction of humanity.

The article challenges the comparison Ćirković makes between skepticism about space colonization and hypothetical skepticism about ancestral migrations, arguing that it overlooks the significant disparities between Earth’s physical conditions and those of outer space. Furthermore, the author urges an investigation into the potential impacts of space settlement on equality and access, arguing that the current discourse is dominated by Western perspectives, which may not account for the marginalized and excluded. The author worries that space colonization could simply replicate existing terrestrial injustices, serving only the most privileged while leaving the poorest and most vulnerable behind.

The paper highlights the fear that space settlement, seen as a refuge from Earth’s deteriorating conditions, could be exclusively reserved for the rich or citizens of spacefaring superpowers. This exclusive access could potentially undermine the very purpose of space settlement as a rescue for humanity. Moreover, the author suggests that this enterprise, given the current technical capabilities, might only be realistic for a relatively small number of people. This selectivity questions the moral value of such a venture, particularly if it detracts from efforts to mitigate climate change for the most disadvantaged.

Delving into the philosophical realm, this article brings to the fore the implications of space settlement, sparking a dialogue reminiscent of John Rawls’s A Theory of Justice. The highlighted concerns closely echo Rawlsian principles of fairness and equality in distribution, suggesting that space colonization ought to be planned from behind something like a ‘veil of ignorance’. Similarly, the author’s argument about the unjust distribution of access to space colonization echoes Thomas Pogge’s ideas on global justice and how the actions of some nations can profoundly affect others. This dialogue expands the scope of the debate and underscores the importance of inclusive ethics in a rapidly advancing technological world.

The discourse of this article presents new pathways for future research in the field of futures studies. Future research could evaluate more inclusive methods of space colonization, investigating alternatives to the currently anticipated elitist selection process. It could also examine the potential of international regulations to ensure equitable access to space resources. Additionally, research could explore the feasibility and ethics of a globally cooperative effort in space colonization. Overall, these directions aim to ensure that the bold ambition of space colonization aligns with the principles of social justice, thereby propelling humanity forward without leaving anyone behind.

Abstract

The subject of this paper is a continuation of the discussion initiated by Milan M. Ćirković. Ćirković criticized a number of arguments skeptical of the idea of space settlement. However, he omitted arguments referring to social justice and equal access, which, as this paper tries to show, are arguably the most serious skeptical remarks against the idea of space colonization. The paper emphasizes that both space exploration and, ultimately, potential space colonization run the risk of exacerbating inequality and, as such, are not projects pursued for all of humanity.

(Featured) The unwitting labourer: extracting humanness in AI training

Fabio Morreale et al. examine the nature and implications of unseen digital labor within the realm of artificial intelligence (AI). The article proceeds methodically, dissecting the issue through case studies (among them Google’s reCAPTCHA, Spotify’s recommendation algorithms, and OpenAI’s language model GPT-3) and then extrapolating five characteristics that define “unwitting laborers” in AI systems: unawareness, non-consensual labor, unwaged and uncompensated labor, misappropriation of original intent, and the condition of being unwitting.

The study meticulously scrutinizes the fundamental premise of unawareness, arguing that many individuals unknowingly perform labor that trains AI systems. It elaborates that such activities often occur without the participant’s conscious awareness that their interactions are being used to improve machine learning algorithms. The research then delves into the realm of non-consensual labor. The authors point out that while traditional working agreements require consent from both parties, such consent is often absent or uninformed in the context of digital labor for AI training, thus resulting in exploitation.

In terms of compensation, the authors challenge the traditional notion of labor, arguing that even though unwitting laborers receive no wage or acknowledgement for their efforts, the aggregate data they provide yields significant value for the companies leveraging it. The research further highlights the misappropriation of original intent, illustrating that the purpose of the labor performed is often obscured or transfigured, creating a significant divergence between the intentions of the exploited and those of the exploiter.

The article’s argument prompts a re-evaluation of our understanding of labour and consent, raising questions that align with broader philosophical discourses around the ethics of AI and labor rights in the digital age. By examining the human-AI interaction through the lens of exploitation, the authors contribute to the growing discourse around AI ethics, invoking notions reminiscent of Marxist critiques of capitalism, where labor is commodified and surplus value is extracted without adequate compensation or acknowledgement.

Furthermore, the study enriches the dialogue surrounding the notion of consent, autonomy, and freedom in the digital age, forcing us to reconsider how these concepts should be reframed in light of the increasing integration of AI into our everyday lives. It also raises significant questions about the role and place of human cognition in the age of AI, suggesting that our uniquely human skills and experiences are not just being utilized, but potentially commodified and exploited, adding another dimension to the ongoing discourse on cognitive capitalism.

Looking forward, the authors’ arguments open numerous avenues for further exploration. There is a need for studies that delve into the societal and individual impacts of such exploitation—how it influences our understanding of labor, our autonomy, and our interactions with technology. Additional research could also explore potential mechanisms for informing and compensating users for their contribution to AI training. Moreover, investigation into policy interventions and regulatory mechanisms to mitigate the exploitation of such digital labor would be invaluable. Ultimately, the authors’ research catalyses a dialogue about the balance of power between individuals and technology companies, and the importance of ensuring this balance in an increasingly AI-integrated future.

Abstract

Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. In order to achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of “what it means to be human”. We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. By employing a post-Marxian framework, we then analyse the characteristic of these individuals and describe the elements of the capture-machine. Then, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.

(Featured) Ethics of using artificial intelligence (AI) in veterinary medicine

Simon Coghlan and Thomas Quinn present an examination of the current landscape and potential impacts of artificial intelligence (AI) within the field of veterinary medicine. The article opens by exploring the broad applications and implications of AI within human and veterinary medicine, distinguishing machine learning (ML), a subset of AI, from clinical prediction rules (CPRs). The authors emphasise that while CPRs can be interpreted by clinicians because their criteria and weights are explicit, ML often operates as a ‘black box’, which may limit its understandability and thus its trustworthiness.
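
The contrast is easiest to see with a toy example: a CPR is a transparent point score whose criteria and weights a clinician can read off directly, whereas an ML model’s decision boundary is typically opaque. The rule below is entirely invented for illustration and is not from the paper.

```python
# Illustrative sketch: a clinical prediction rule as a transparent point
# score. Every criterion, weight, and threshold here is invented and visible.
def toy_cpr_score(patient):
    score = 0
    score += 2 if patient["lethargic"] else 0
    score += 1 if patient["age_years"] > 10 else 0
    score += 3 if patient["elevated_biomarker"] else 0
    return score  # e.g., refer for imaging if score >= 4

patient = {"lethargic": True, "age_years": 12, "elevated_biomarker": True}
print(toy_cpr_score(patient))  # 6 -> refer
```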

The research further scrutinises potential benefits and risks of AI in veterinary practice. Acknowledged benefits include an enhanced ability to diagnose diseases, provide prognostic estimations, and possibly aid in the decision-making process for treatments. At the same time, the authors articulate risks, such as a lack of rigorous scientific validation, the possibility of AI overdiagnosis leading to unnecessary treatment, or the probability of harm due to algorithmic bias. Notably, the authors put forth a compelling argument about how the veterinarian’s role and responsibilities, largely determined by their ethical standpoint, can significantly influence their approach towards AI in practice.

The third key element addressed in the research pertains to the distinctive risks associated with veterinary AI and ethical guidance for its appropriate use. The authors articulate unique risk factors, such as the legal status of companion animals as property, the relatively unregulated nature of veterinary medicine, and the lack of sufficient data for training ML models. Accordingly, the authors propose ethical principles and goals for guiding AI use in veterinary medicine, emphasising the need for nonmaleficence, beneficence, transparency, respect for client autonomy, data privacy, feasibility, accountability, and environmental sustainability.

The philosophical undertones of this article resonate with broader discourse on ethics, anthropocentrism, and the societal role of technology. The authors’ exploration of the veterinarian’s ethical responsibilities in an increasingly AI-dependent world mirrors the wider philosophical question of how society negotiates human responsibility in the age of AI. Additionally, their criticism of anthropocentrism foregrounds debates about the moral consideration afforded to non-human animals, a significant theme within animal ethics. It illustrates the intersection of technology, ethics, and our societal structures, underscoring the need for an ongoing dialogue about our ethical obligations within an increasingly digitised world.

Future research may wish to delve deeper into the normative implications of AI in veterinary medicine. The authors’ ethical guidance principles could provide a basis for developing a more nuanced ethical framework that vets, AI developers, and regulators might follow. More empirical studies are also needed to gauge the practical impact of AI on animal healthcare outcomes and how AI is being perceived and utilized by different stakeholders within the field. Additionally, considering the significant role of data in training ML models, the ethical implications of data collection, privacy, and use in veterinary contexts warrant further exploration. Ultimately, as the authors suggest, the successful integration of AI in veterinary medicine hinges on an informed and ethically-conscious approach that prioritizes the welfare of both animals and their human caretakers.

Abstract

This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive features influence the ethics of AI systems that might benefit clients, veterinarians and animal patients—but also harm them. It offers practical ethical guidance that should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.

(Featured) Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Sinead O’Connor and Helen Liu investigate a pertinent concern in contemporary artificial intelligence (AI) studies: the manifestation and amplification of gender bias within AI technologies. The authors present a systematic review of multiple case studies demonstrating the pervasiveness of gender bias across various forms of AI, focusing particularly on textual and visual algorithms. The highlighted studies underscore how AI, far from being an objective tool, can inadvertently perpetuate societal biases ingrained in its training datasets, reproducing contested societal asymmetries at scale. Moreover, these studies reveal that although de-biasing efforts have been attempted, residual biases often persist because discriminatory patterns in the data run deep and are complexly entangled.

In an innovative approach, the authors differentiate between bias perpetuation and bias mitigation, exploring this distinction in both text-based and image-based AI contexts. The issue of latent gendered word associations in text is emphasized, wherein researchers strive for a delicate balance between retaining the utility of algorithms and mitigating bias. In image-based AI, the researchers reveal how biases are not only present within algorithms but also entrenched within the evaluative benchmarks themselves. This insight brings into focus the importance of not merely scrutinizing the algorithms but also the standards used to assess their accuracy and bias perpetuation. The researchers also present an incisive critique of the methodological and conceptual issues underlying the treatment of bias in AI research, drawing attention to the often unaddressed question of what counts as ‘bias’ or ‘discrimination’.
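One well-known family of techniques behind the “delicate balance” mentioned above is hard debiasing of word embeddings (Bolukbasi et al., 2016): estimate a gender direction in the vector space and project it out of gender-neutral words. The sketch below uses toy three-dimensional vectors and only illustrates the idea; it is not the method of any particular study in the review.

```python
# Illustrative sketch of embedding debiasing: estimate a gender direction
# and remove its component from a target word. Vectors are toy stand-ins.
import numpy as np

emb = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "engineer": np.array([ 0.4, 0.9, 0.3]),
}

g = emb["he"] - emb["she"]          # one-pair estimate of the gender direction
g = g / np.linalg.norm(g)

def debias(v, direction):
    """Remove the component of v along the bias direction."""
    return v - np.dot(v, direction) * direction

print(np.dot(emb["engineer"], g))             # 0.4  (gendered component)
print(np.dot(debias(emb["engineer"], g), g))  # 0.0  (projected out)
```

Projecting out a single direction removes only the most direct association; indirect associations between clusters of words survive, which is one reason the residual biases the authors describe tend to persist.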

The review then shifts to an exploration of policy guidelines addressing the identified issues, citing initiatives such as the European Commission’s ‘Ethics Guidelines for Trustworthy AI’ and UNESCO’s report on AI and gender equality. These initiatives aim to align AI with fundamental human rights and principles and, in the European case, to ensure compliance with EU values and norms. The authors conclude with an insightful analysis of the dynamic relationship between gender bias in AI and broader societal structures, highlighting the need for regulatory efforts to manage this interplay.

Placed in a broader philosophical context, the article touches upon several key themes within the philosophy of technology. One of these is the entwined relationship between technology and society. Drawing from scholars like Orlikowski and Bryson, the authors illustrate how AI, as a socio-technical system, is deeply embedded within social structures and reflects societal biases. This notion challenges the conventional perception of technology as neutral and instead, presents it as a socially constructed entity that both shapes and is shaped by society.

The second philosophical theme pertains to the ethics of AI. The authors highlight the necessity of ethical accountability and responsibility in AI development and use. This resonates with the philosophical debates around morality in AI, raising questions about who should be held responsible for algorithmic biases and how they should be held accountable. By proposing cross-disciplinary and accessible approaches to AI research, the authors indirectly invoke the idea of “moral machines”, the notion that AI systems need to be designed with a nuanced understanding of human ethics.

Looking forward, it is essential to deepen the intersectional analysis of bias in AI systems. Future research could expand on the conceptualization and measurement of bias in AI, accounting for the diverse intersections of identities beyond gender, such as race, age, sexuality, and disability. There is also a critical need to explore how AI bias research can engage with non-binary and fluid conceptions of gender to provide a more comprehensive understanding of gender bias.

Abstract

Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and an efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its understanding of bias could influence policy and outcomes. Building on a rich seam of literature from both technological and sociological fields, this article constructs an original framework through which to analyse both the perpetuation and mitigation of gender biases, choosing to categorize AI technologies based on whether their input is text or images. Through the close analysis and pairing of four case studies, the paper thus unites two often disparate approaches to the investigation of bias in technology, revealing the large and varied potential for AI to echo and even amplify existing human bias, while acknowledging the important role AI itself can play in reducing or reversing these effects. The conclusion calls for further collaboration between scholars from the worlds of technology, gender studies and public policy in fully exploring algorithmic accountability as well as in accurately and transparently exploring the potential consequences of the introduction of AI technologies.

(Featured) The Ethics of Technology: How Can Indigenous Thought Contribute?

John Weckert and Rogelio Bayod present a comprehensive examination of the intersection between ethics, technology, and Indigenous worldviews. The authors argue that the ethics of technology, which largely remains a peripheral concern in technological developments, could significantly benefit from the incorporation of Indigenous perspectives. They contend that the entrenched paradigms of Western thought, with their focus on materialism, individualism, efficiency, and progress, often marginalize ethical considerations. This, they suggest, is where Indigenous worldviews, which emphasize relationality, spirituality, and a reciprocal relationship with the Earth, could offer a potent alternative.

A key aspect of Indigenous thought highlighted in the paper is the concept of relationality. Indigenous worldviews often consider all entities, living and non-living, as interconnected and mutually influential. This view contrasts with the Western conceptualization of individual entities as distinct and primarily self-interested. Consequently, incorporating this perspective into the ethics of technology could help shift the focus from the maximization of individual benefits to the maintenance of collective well-being. The paper also underscores the Indigenous emphasis on spirituality, where both natural and man-made objects can hold spiritual or non-material significance. This perspective could help challenge the prevailing Western materialistic worldview, fostering a more holistic understanding of technological artifacts and their value.

The authors propose that integrating these Indigenous concepts could provide a foundation for a reimagined Western worldview, even if these elements are interpreted metaphorically rather than literally. Such a worldview, they argue, would not only challenge the prevailing emphasis on materialistic values but could also facilitate a more beneficial development and use of technology. This reframed paradigm would prioritize environmental health, reduce the production of disposable products, and lessen the focus on profitability, efficiency, and individualism. Instead, it would place greater emphasis on care for the Earth, kinship, relationships, and spirituality.

This research contributes to broader philosophical discussions around the ethics of technology and futures studies. It offers a critical reframing of our relationship with technology, drawing on Indigenous worldviews to challenge dominant Western paradigms. By doing so, it highlights the value of diverse perspectives in shaping our technological futures and raises critical questions around the role of values and worldviews in guiding technological development. This paper thus adds to ongoing debates around decolonizing technology and futures studies, and extends them into the sphere of ethics.

The paper suggests numerous avenues for future research. Given its emphasis on the potential of Indigenous worldviews, further explorations could delve deeper into specific Indigenous perspectives on technology, drawing from a wider range of cultures and traditions. Another promising area for future research could involve examining how these Indigenous values could be operationalized within different technological domains, and the possible impacts this could have. Finally, there is a significant need for empirical research on how this paradigm shift might be achieved, and the potential barriers and facilitators involved. This research paper thus opens the door to a rich array of investigations that could fundamentally reshape our understanding of the ethics of technology.

Abstract

The ethics of technology is not as effective as it should be. Despite decades of ethical discussion, development and use of new technologies continues apace without much regard to those discussions. Economic and other forces are too powerful. More focus needs to be placed on the values that underpin social attitudes to technology. By seriously looking at Indigenous thought and comparing it with the typical Western way of seeing the world, we can gain a better understanding of our own views. The Indigenous Filipino worldview provides us with a platform for assessing our own core values and suggests modifications to those values. It also indicates ways for broadening and altering the focus of the ethics of technology to make it more effective in helping us to use technologies in ways more conducive to human well-being.

(Featured) Research Ethics in the Age of Digital Platforms

José Luis Molina et al. explore the ethical implications of microwork, a novel form of labor facilitated by digital platforms. The authors articulate the nuanced dynamics of this field, focusing primarily on the asymmetrical power relations between microworkers, clients, and platform operators. The piece scrutinizes the transactional nature of microwork, where workers are subject to the platform’s regulations and risk the arbitrary denial of payment or termination of their accounts. Microworkers’ reputation, determined by their prior task success rate, often dictates the quality and quantity of tasks they receive, creating a system of algorithmic governance that perpetuates an exploitative dynamic.

The authors further illustrate this situation by examining the biomedical research standards developed in the aftermath of World War II, which they argue are ill-equipped to address the ethical quandaries posed by microwork. They argue that the conditions of microwork, such as lack of payment floors and the potential for anonymity and segmentation, exacerbate the vulnerability of these workers, aligning them more closely with the exploitation of vulnerable populations in traditional research contexts. They propose a reconceptualization of microworkers as “guest workers” in “digital autocracies,” where the platforms exercise a quasi-governmental control over the working conditions, identity, and compensation of the microworkers.

The authors posit that these digital autocracies extract value through “heteromation”, in which economic value is drawn from low-cost or free human labor embedded in computational systems, and through the appropriation of workers’ rights to privacy and personal data protection. They argue that microwork platforms, owing to their transnational nature and the lack of comprehensive regulation, can impose conditions on their workforce that would be unacceptable in traditional employment contexts. They stress the importance of recognizing microworkers as vulnerable populations in research ethics reviews and propose a set of criteria researchers can use to ensure the protection of these workers’ rights.

Positioning microwork within the broader philosophical discourse, the authors’ analysis suggests a reevaluation of labor, autonomy, and ethical standards in the digital age. The “digital autocracies” mirror Foucault’s concept of biopower, where power is exerted not merely through coercion but through the management and control of life processes, in this case, the economic existence of microworkers. The situation also reflects Marx’s concept of alienation, as microworkers are distanced from the fruits of their labor, the process of their work, and their fellow workers. The algorithmic governance system also raises questions about agency and autonomy, echoing concerns raised by philosophers such as Hannah Arendt and Jürgen Habermas regarding the instrumentalization of human beings.

Future research in this domain could explore multiple avenues. First, a more extensive empirical study could be conducted to quantify and analyze the conditions of microworkers across different platforms and geographical regions. Second, a comparative study could be undertaken to examine how different regulatory environments impact the working conditions and rights of microworkers. Lastly, a philosophical exploration of notions such as autonomy, justice, and dignity within the digital labor context could provide a more profound understanding of this emerging labor paradigm. The complex interplay of labor, ethics, technology, and globalization, as exemplified by microwork, provides a rich and crucial area for futures studies.

Abstract

Scientific research is increasingly reliant on “microwork” or “crowdsourcing” provided by digital platforms to collect new data. Digital platforms connect clients and workers, charging a fee for an algorithmically managed workflow based on Terms of Service agreements. Although these platforms offer a way to make a living or complement other sources of income, microworkers lack fundamental labor rights and basic safe working conditions, especially in the Global South. We ask how researchers and research institutions address the ethical issues involved in considering microworkers as “human participants.” We argue that current scientific research fails to treat microworkers in the same way as in-person human participants, producing de facto a double morality: one applied to people with rights acknowledged by states and international bodies (e.g., the Helsinki Declaration), the other to guest workers of digital autocracies who have almost no rights at all. We illustrate our argument by drawing on 57 interviews conducted with microworkers in Spanish-speaking countries.
