(Featured) Is AI the Future of Mental Healthcare?

Is AI the Future of Mental Healthcare?

Francesca Minerva and Alberto Giubilini engage with the intricate subject of AI implementation in the mental healthcare sector, focusing on the potential benefits and challenges of its use. They open by describing the rising global demand for mental healthcare and argue that the conventional therapist-centric model may not scale to meet it. This sets the context for exploring the use of AI to supplement, or in certain capacities even replace, human therapists. AI in mental healthcare is argued to offer significant advantages such as scalability, cost-effectiveness, continuous availability, and the ability to harness and analyze vast amounts of data for effective diagnosis and treatment. At the same time, the authors explicitly acknowledge potential downsides, including privacy concerns, the use and possible misuse of personal data, and the need for regulatory frameworks to monitor and ensure the safe and ethical use of AI in this context.

Their research subsequently delves into the issue of potential bias in healthcare, highlighting how AI could both help overcome human biases and introduce new ones into healthcare provision. It points out that healthcare practitioners, despite their commitment to objectivity, may be prone to biases arising from a patient’s individual and social characteristics, such as age, social status, and ethnic background. AI, if programmed carefully, could help counteract these biases by attending more strictly to symptoms; yet the article also underscores that AI, being programmed by humans, remains susceptible to biases introduced during its development. This interplay between mitigating bias and introducing it forms a key discussion point of the article.

Their research finally broaches two critical ethical-philosophical considerations: the categorization of mental health disorders and the shifting responsibilities of mental health professionals once AI is introduced. The authors argue that existing categorizations, such as those in the DSM-5, may not remain adequate or relevant if AI can provide more nuanced data and behavioral cues, potentially necessitating a reevaluation of diagnostic categories. They also take up the issue of professional responsibility, critically evaluating the challenge of assigning responsibility for AI-enabled diagnosis, especially in light of potential errors or misdiagnoses.

The philosophical underpinnings of the article lie in the ethics, epistemology, and ontology of AI in healthcare. The themes it underscores, such as the reevaluation of how mental health disorders are categorized and the shifting responsibilities of mental health professionals, point towards broader philosophical discourses: how technologies like AI challenge our existing epistemic models and ethical frameworks, and how they demand a reconsideration of our ontological understanding of disease categories, diagnosis, and treatment. The question of responsibility, and the degree to which AI systems can or should be held accountable, is a compelling case of applied ethics intersecting with technology.

Future research could delve deeper into the philosophical dimensions of AI use in psychiatry. For instance, exploring the ontological questions of mental health disorders in the age of AI could be a meaningful avenue. Also, studying the epistemic shifts in our understanding of mental health symptoms and diagnosis with AI’s increasing role could be a fascinating research area. An additional perspective could be to examine the ethical considerations in the context of AI, particularly focusing on accountability, transparency, and the changing professional responsibilities of mental health practitioners. Investigating the broader societal and cultural implications of such a shift in mental healthcare provision could also provide valuable insights.

Excerpt

Over the past decade, AI has been used to aid or even replace humans in many professional fields. There are now robots delivering groceries or working on assembly lines in factories, and there are AI assistants scheduling meetings or answering the phone line of customer services. Perhaps even more surprisingly, we have recently started admiring visual art produced by AI, and reading essays and poetry “written” by AI (Miller 2019), that is, composed by imitating or assembling human compositions. Very recently, the development of ChatGPT has shown how AI could have applications in education (Kung et al. 2023), the judicial system (Parikh et al. 2019), and the entertainment industry.

Is AI the Future of Mental Healthcare?

(Featured) A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

Alejo José G. Sison and Dulce M. Redín take a critical look at the concept of artificial moral agents (AMAs), especially in relation to artificial intelligence (AI), from a neo-Aristotelian ethical standpoint. The authors open with a compelling critique of the arguments in favor of AMAs, asserting that they are neither inevitable nor guaranteed to bring practical benefits. They elucidate that describing such agents as ‘autonomous’ may not be fitting, as AMAs are, at their core, bound to the algorithmic instructions they follow. Moreover, the term ‘moral’ is questioned due to the inherently external nature of the proposed morality. According to the authors, true moral good is internally driven and cannot be separated from the agent nor from the manner in which it is achieved.

The authors proceed to suggest that the arguments against the development of AMAs have been insufficiently considered, proposing a neo-Aristotelian ethical framework as a potential remedy. This approach places emphasis on human intelligence, grounded in biological and psychological scaffolding, and distinguishes between the categories of heterotelic production (poiesis) and autotelic action (praxis), highlighting that the former can accommodate machine operations, while the latter is strictly reserved for human actors. Further, the authors propose that this framework offers greater clarity and coherence by explicitly denying bots the status of moral agents due to their inability to perform voluntary actions.

Lastly, the authors explore the potential alignment of AI and virtue ethics. They scrutinize the potential for AI to affect human flourishing and the virtues through its actions or their consequences. Herein, they feature the work of Vallor, who has proposed the design of “moral machines” by embedding norms, laws, and values into computational systems, thereby focusing on human-computer interaction. However, they caution that such an approach, while intriguing, may be inherently flawed. The authors also examine two possible ways of embedding ethics in AI: value alignment and virtue embodiment.

The research article provides an interesting contribution to the ongoing debate on the potential for AI to function as moral agents. The authors adopt a neo-Aristotelian ethical framework to add depth to the discourse, providing a fresh perspective that integrates virtue ethics and emphasizes the role of human agency. This perspective brings to light the broader philosophical questions around the very nature of morality, autonomy, and the distinctive attributes of human intelligence.

Future research avenues might revolve around exploring more extensively how virtue ethics can interface with AI and if the goals that Vallor envisages can be realistically achieved. Further philosophical explorations around the assumptions of agency and morality in AI are also needed. Moreover, studies examining the practical implications of the neo-Aristotelian ethical framework, especially in the realm of human-computer interaction, would be invaluable. Lastly, it may be insightful to examine the authors’ final suggestion of approaching AI as a moral agent within the realm of fictional ethics, a proposal that opens up a new and exciting area of interdisciplinary research between philosophy, AI, and literature.

Abstract

We examine Van Wynsberghe and Robbins’ (Sci Eng Ethics 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc 10.1007/s00146-020-01089-6, 2020) set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (Sci Eng Ethics 25:719-735, 2019) essay nor Formosa and Ryan’s (AI Soc 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

(Featured) Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability

Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability

Daniel Trusilo investigates the concept of emergent behavior in complex autonomous systems and its implications in dynamic, open context environments such as conflict scenarios. In a nuanced exploration of the intricacies of autonomous systems, the author employs two hypothetical case studies—an intelligence, surveillance, and reconnaissance (ISR) maritime swarm system and a next-generation autonomous humanitarian notification system—to articulate and elucidate the effects of emergent behavior.

In the case of the ISR swarm system, the author underscores how the autonomous algorithm’s unpredictable micro-level behavior can yield reliable macro-level outcomes, enhancing the system’s robustness and resilience against adversarial interventions. In the humanitarian notification case, the author shows how such unpredictability can strengthen compliance with International Humanitarian Law (IHL), reducing civilian harm and increasing accountability. The author thus emphasizes the dual character of emergent behavior: it can enhance system reliability and effectiveness while posing novel challenges to predictability and system certification.
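
To make the micro/macro distinction concrete, the following is a minimal, hypothetical simulation rather than the article’s actual case study: each agent in a toy ISR-style swarm wanders its patrol grid at random, so no individual trajectory is predictable, yet the fraction of the grid covered by the swarm as a whole is consistently high across runs. The agent count, grid size, and movement rule are all illustrative assumptions.

```python
import random

def simulate_swarm(n_agents=30, grid=20, steps=200, seed=None):
    """Toy ISR-style swarm: agents wander a grid at random (illustrative only).

    Individual trajectories are unpredictable (micro level), but the fraction
    of cells visited by the swarm as a whole (macro level) comes out roughly
    the same on every run.
    """
    rng = random.Random(seed)
    agents = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(n_agents)]
    visited = set(agents)
    for _ in range(steps):
        moved = []
        for x, y in agents:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % grid, (y + dy) % grid
            moved.append((x, y))
            visited.add((x, y))
        agents = moved
    return len(visited) / (grid * grid)

if __name__ == "__main__":
    # Different seeds give different micro-level paths but similar overall coverage.
    print([round(simulate_swarm(seed=s), 2) for s in range(5)])
```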

Navigating these challenges, the author calls attention to the implications for system certification and ethical interoperability. With the potential for these systems to exhibit unforeseen behavior in actual operations, traditional testing, evaluation, verification, and validation methods seem inadequate. Instead, the author suggests adopting dynamic certification methods, allowing the systems to be continually monitored and adjusted in complex, real-world environments, thereby accommodating emergent behavior. Ethical interoperability, the concurrence of ethical AI principles across different organizations and nations, presents another conundrum, especially with differing ethical guidelines governing AI use in defense.

In its broader philosophical framework, the article contributes to the ongoing discourse on the ethics and morality of AI and autonomous systems, particularly within the realm of futures studies. It underscores the tension between the benefits of autonomous systems and the ethical, moral, and practical challenges they pose. The emergent behavior phenomenon can be seen as a microcosm of the larger issues in AI ethics, reflecting on themes of predictability, control, transparency, and accountability. The navigation of these ethical quandaries implies the need for shared ethical frameworks and standards that can accommodate the complex, unpredictable nature of these systems without compromising the underlying moral principles.

In terms of future research, there are several critical avenues to explore. The implications of emergent behavior in weaponized autonomous systems need careful examination, questioning acceptable risk confidence intervals for such systems’ predictability and reliability. Moreover, the impact of emergent behavior on operator trust and the ongoing issue of machine explainability warrants further exploration. Lastly, it would be pertinent to identify methods of certifying complex autonomous systems while addressing the burgeoning body of distinct, organization-specific ethical AI principles. Such endeavors would help operationalize these principles in light of emergent behavior, thereby contributing to the development of responsible, accountable, and effective AI systems.

Abstract

The development of complex autonomous systems that use artificial intelligence (AI) is changing the nature of conflict. In practice, autonomous systems will be extensively tested before being operationally deployed to ensure system behavior is reliable in expected contexts. However, the complexity of autonomous systems means that they will demonstrate emergent behavior in the open context of real-world conflict environments. This article examines the novel implications of emergent behavior of autonomous AI systems designed for conflict through two case studies. These case studies include (1) a swarm system designed for maritime intelligence, surveillance, and reconnaissance operations, and (2) a next-generation humanitarian notification system. Both case studies represent current or near-future technology in which emergent behavior is possible, demonstrating that such behavior can be both unpredictable and more reliable depending on the level at which the system is considered. This counterintuitive relationship between less predictability and more reliability results in unique challenges for system certification and adherence to the growing body of principles for responsible AI in defense, which must be considered for the real-world operationalization of AI designed for conflict environments.

Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability

(Featured) The unwitting labourer: extracting humanness in AI training

The unwitting labourer: extracting humanness in AI training

Fabio Morreale et al. examine the nature and implications of unseen digital labor within the realm of artificial intelligence (AI). The methodically structured article dissects the issue through three case studies (Google’s reCAPTCHA, Spotify’s recommendation algorithms, and OpenAI’s language model GPT-3) and then extrapolates five characteristics defining “unwitting laborers” in AI systems: unawareness, non-consensual labor, unwaged and uncompensated labor, misappropriation of original intent, and the nature of being unwitting.

The study meticulously scrutinizes the fundamental premise of unawareness, arguing that many individuals unknowingly perform labor that trains AI systems. It elaborates that such activities often occur without the participant’s conscious awareness that their interactions are being used to improve machine learning algorithms. The research then delves into the realm of non-consensual labor. The authors point out that while traditional working agreements require consent from both parties, such consent is often absent or uninformed in the context of digital labor for AI training, thus resulting in exploitation.

In terms of compensation, the authors challenge the traditional notion of labor, arguing that even though the unwitting laborers receive no wage or acknowledgement for their efforts, the aggregate data they provide can yield significant value for the companies leveraging it. The research further highlights the misappropriation of original intent, illustrating that the purpose of the labor performed is often obscured or transfigured, causing a significant divergence between the intentions of the exploited and those of the exploiter.

The article’s argument prompts a re-evaluation of our understanding of labour and consent, raising questions that align with broader philosophical discourses around the ethics of AI and labor rights in the digital age. By examining the human-AI interaction through the lens of exploitation, the authors contribute to the growing discourse around AI ethics, invoking notions reminiscent of Marxist critiques of capitalism, where labor is commodified and surplus value is extracted without adequate compensation or acknowledgement.

Furthermore, the study enriches the dialogue surrounding the notion of consent, autonomy, and freedom in the digital age, forcing us to reconsider how these concepts should be reframed in light of the increasing integration of AI into our everyday lives. It also raises significant questions about the role and place of human cognition in the age of AI, suggesting that our uniquely human skills and experiences are not just being utilized, but potentially commodified and exploited, adding another dimension to the ongoing discourse on cognitive capitalism.

Looking forward, the authors’ arguments open numerous avenues for further exploration. There is a need for studies that delve into the societal and individual impacts of such exploitation—how it influences our understanding of labor, our autonomy, and our interactions with technology. Additional research could also explore potential mechanisms for informing and compensating users for their contribution to AI training. Moreover, investigation into policy interventions and regulatory mechanisms to mitigate the exploitation of such digital labor would be invaluable. Ultimately, the authors’ research catalyses a dialogue about the balance of power between individuals and technology companies, and the importance of ensuring this balance in an increasingly AI-integrated future.

Abstract

Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. In order to achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of “what it means to be human”. We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. By employing a post-Marxian framework, we then analyse the characteristic of these individuals and describe the elements of the capture-machine. Then, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.

The unwitting labourer: extracting humanness in AI training

(Featured) Algorithmic discrimination in the credit domain: what do we know about it?

Algorithmic discrimination in the credit domain: what do we know about it?

Ana Cristina Bicharra Garcia et al. explore a salient issue in today’s world of ubiquitous artificial intelligence (AI) and machine learning (ML) applications — the intersection of algorithmic decision-making, fairness, and discrimination in the credit domain. Undertaking a systematic literature review from five data sources, the study meticulously categorizes, analyzes, and synthesizes a wide array of existing literature on this topic. Out of an initial 1320 papers identified, 78 were eventually selected for a detailed review.

The research identifies and critically assesses the inherent biases and potential discriminatory practices in algorithmic credit decision systems, particularly regarding race, gender, and other sensitive attributes. A key observation is the tendency of existing studies to examine discriminatory effects based on a single sensitive attribute. The authors, however, highlight the relevance of Kimberlé Crenshaw’s intersectionality theory, which emphasizes the compounded forms of discrimination that can emerge when multiple attributes intersect. The study further underscores the issue of ‘reverse redlining’, in which, rather than being excluded from credit, members of certain groups are targeted with predatory, high-interest loans.

In addition to mapping the landscape of algorithmic fairness and discrimination, the authors critically examine definitions of fairness, the technical limitations of fair algorithms, and the difficult balance between protecting data privacy and broadening data sources. Their exploration of fairness reveals a lack of consensus on its definition; indeed, the diverse metrics available often lead to contradictory outcomes. Technical measures, the authors assert, have limits: a genuinely discrimination-free environment requires not just fair algorithms but also structural and societal change.
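
To illustrate how different fairness metrics can pull in different directions, here is a hedged sketch on synthetic data (not drawn from any of the reviewed studies): a toy credit classifier that treats creditworthy applicants in two groups identically satisfies an equal-opportunity style criterion, yet still violates demographic parity because the groups’ underlying base rates differ.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in approval rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (approval among the creditworthy)."""
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

# Synthetic data: creditworthiness base rates differ across groups,
# but the classifier's error rates are the same for both groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 100_000)                         # sensitive attribute
y_true = rng.binomial(1, np.where(group == 0, 0.6, 0.4))    # creditworthy or not
y_pred = rng.binomial(1, np.where(y_true == 1, 0.8, 0.2))   # model approvals

print(round(demographic_parity_gap(y_pred, group), 3))        # clearly nonzero (~0.12)
print(round(equal_opportunity_gap(y_true, y_pred, group), 3)) # roughly zero
```

The point is not that either metric is correct, but that, as the authors note, reasonable definitions of fairness can disagree about the very same predictions.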

In a broader philosophical context, the research paper’s exploration of algorithmic fairness and discrimination in the credit domain harks back to a fundamental question in the philosophy of technology: What is the impact of technology on society and individual human beings? Algorithmic decision-making systems, as exemplified in this research, are not neutral tools; they are imbued with the biases and prejudices of the society they emerge from, raising significant ethical concerns. The credit domain, with its inherent power dynamics and implications on individuals’ livelihoods, serves as a potent illustration of how algorithmic biases can exacerbate societal inequalities. The philosophical debate around the agency of technology, the moral responsibilities of developers and users, and the consequences of technologically mediated discrimination is thereby highly relevant.

As for future research directions, this study presents multiple avenues. A pressing need is the exploration of discrimination scope beyond race, gender, and commonly studied categories. More nuanced understanding of intersectionality in algorithmic discrimination, including the examination of multiple attributes simultaneously, is a vital need. Additionally, further exploration of ‘reverse redlining’, particularly in the Global South, is warranted. A compelling challenge is to arrive at a globally accepted definition of fairness, taking into account the cultural differences that influence societal perceptions. Lastly, the ethical implications of expanding data sources for credit evaluation, while preserving individuals’ privacy, merit in-depth scrutiny. Through these avenues, we can aspire to develop more ethical, fair, and inclusive algorithmic systems, thus addressing the philosophical concerns highlighted above.


Algorithmic discrimination in the credit domain: what do we know about it?

(Featured) Reinforcement learning and artificial agency

Reinforcement learning and artificial agency

Patrick Butlin explores whether reinforcement learning (RL) systems could possess the capacity to “act for reasons”, a concept traditionally associated with conscious, goal-directed agents. Drawing on the philosophical literature, specifically the work of Hanna Pickard (2015) and Helen Steward (2012), the author outlines two criteria that must be met for something to count as an agent: the entity must have goals, and it must interact with its environment in pursuit of those goals. The author asserts that both model-free and model-based RL systems meet these criteria and can thus be considered as having minimal agency.

Building upon the foundation of minimal agency, the author makes a compelling argument for RL systems acting for reasons. Their argument hinges on the philosophical work of Jennifer Hornsby (2004) and Nora Mantel (2018), where the former associates acting for reasons with general-purpose abilities, and the latter distinguishes between three competencies involved in action for reasons: epistemic, volitional, and executive sub-competences. The author posits that model-based RL systems, with their capacity to model the transition function, meet these criteria as they learn and store information about their environment that influences their future actions, forming a sort of ‘descriptive representation’.
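
The kind of structure at issue can be made concrete with a small sketch. The following toy, tabular model-based agent is an illustration under assumptions of my own, not Butlin’s formalism or any particular RL library: it accumulates an estimate of the environment’s transition and reward functions from experience, the sort of stored ‘descriptive representation’ referred to above, and selects actions by forward search over that learned model.

```python
import random
from collections import defaultdict

class ModelBasedAgent:
    """Minimal tabular model-based reinforcement learner (illustrative sketch)."""

    def __init__(self, actions, depth=2, gamma=0.9):
        self.actions = actions
        self.depth = depth          # how far forward search looks ahead
        self.gamma = gamma          # discount factor
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
        self.reward = defaultdict(float)                     # (s, a) -> mean reward

    def update_model(self, s, a, r, s_next):
        """Fold one observed transition into the learned model of the environment."""
        self.counts[(s, a)][s_next] += 1
        n = sum(self.counts[(s, a)].values())
        self.reward[(s, a)] += (r - self.reward[(s, a)]) / n

    def q_value(self, s, a, depth):
        """Expected return of taking a in s, estimated by searching the learned model."""
        successors = self.counts[(s, a)]
        if not successors:
            return 0.0
        total = sum(successors.values())
        future = 0.0
        if depth > 1:
            for s2, count in successors.items():
                best_next = max(self.q_value(s2, a2, depth - 1) for a2 in self.actions)
                future += (count / total) * best_next
        return self.reward[(s, a)] + self.gamma * future

    def act(self, s):
        """Choose the action the model predicts will best serve the agent's goal."""
        scores = {a: self.q_value(s, a, self.depth) for a in self.actions}
        if all(v == 0.0 for v in scores.values()):
            return random.choice(self.actions)   # nothing learned yet: explore
        return max(scores, key=scores.get)
```

A model-free learner, by contrast, would store only a table of action values and never represent how the environment itself behaves; it is this stored world model that the argument treats as the relevant representational state.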

In contrast to Mantel, the author suggests that the distinction between volitional and executive sub-competences and the emphasis on motivation might not be necessary to the account. While Mantel uses motivation interchangeably with desire and intention, the author posits that this distinction might be more relevant to human agency and less so for artificial RL systems. The author also refutes the notion that the lack of desires or volitions disqualifies artificial RL systems from acting for reasons. They conclude that while model-based RL systems may lack desires, their interaction with their environment to achieve set goals provides sufficient grounds to attribute minimal agency to them and thus the capacity to act for reasons.

The article adds significantly to the discourse on machine agency, challenging conventional philosophical norms that tie agency and the capacity to act for reasons to consciousness or biological entities. It raises compelling points about how RL systems, through their goal-directed behavior and interaction with the environment, exhibit traits of minimal agency. This exploration places the discussion of machine agency within broader philosophical themes such as the nature of consciousness, the demarcation of human and non-human agency, and the implications of attributing agency to artificial systems.

Future research could focus on extending the arguments in this article, exploring the implications of attributing even more sophisticated forms of agency to artificial RL systems. One direction could be to look at whether these systems, as they continue to develop, could eventually meet even stricter criteria for agency that go beyond minimal agency. Another avenue would be to study the ethical and societal implications of recognizing artificial RL systems as agents. Would it, for instance, be meaningful or necessary to establish an ethical framework for interacting with these systems? Additionally, research could examine how these concepts might evolve in tandem with the continued development of artificial RL systems and other forms of artificial intelligence.

Abstract

There is an apparent connection between reinforcement learning and agency. Artificial entities controlled by reinforcement learning algorithms are standardly referred to as agents, and the mainstream view in the psychology and neuroscience of agency is that humans and other animals are reinforcement learners. This article examines this connection, focusing on artificial reinforcement learning systems and assuming that there are various forms of agency. Artificial reinforcement learning systems satisfy plausible conditions for minimal agency, and those which use models of the environment to perform forward search are capable of a form of agency which may reasonably be called action for reasons.

Reinforcement learning and artificial agency

(Featured) Ethics of using artificial intelligence (AI) in veterinary medicine

Ethics of using artificial intelligence (AI) in veterinary medicine

Simon Coghlan and Thomas Quinn present an examination of the current landscape and potential impacts of artificial intelligence (AI) within the field of veterinary medicine. The article opens by exploring the broad applications and implications of AI within human and veterinary medicine, highlighting the distinction between machine learning (ML), a subset of AI, and clinical prediction rules (CPRs). The authors emphasise that while CPRs can be interpreted by clinicians due to their algorithmic nature, ML often operates as a ‘black box’, which may limit its understandability and thus its trustworthiness.
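
To illustrate the interpretability contrast the authors draw, here is a deliberately made-up, CPR-style point score; every variable, threshold, and weight below is a hypothetical example, not taken from the article or from any validated clinical prediction rule. Because the rule is an explicit tally, a clinician can trace exactly why a patient was flagged, whereas the internal weights of a trained ML model generally cannot be read off as clinical reasoning in the same way.

```python
def toy_cpr_score(age_years, body_condition_score, creatinine_mg_dl):
    """A made-up, CPR-style point score (illustrative only, not a validated rule)."""
    points = 0
    points += 2 if age_years >= 10 else 0             # senior patient
    points += 1 if body_condition_score <= 3 else 0   # low body condition
    points += 3 if creatinine_mg_dl > 2.0 else 0      # elevated creatinine
    return "high risk" if points >= 4 else "low risk"

print(toy_cpr_score(age_years=12, body_condition_score=3, creatinine_mg_dl=2.4))  # high risk
```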

The research further scrutinises potential benefits and risks of AI in veterinary practice. Acknowledged benefits include an enhanced ability to diagnose diseases, provide prognostic estimations, and possibly aid in the decision-making process for treatments. At the same time, the authors articulate risks, such as a lack of rigorous scientific validation, the possibility of AI overdiagnosis leading to unnecessary treatment, and the possibility of harm due to algorithmic bias. Notably, the authors put forth a compelling argument about how the veterinarian’s role and responsibilities, largely determined by their ethical standpoint, can significantly influence their approach towards AI in practice.

The third key element addressed in the research pertains to the distinctive risks associated with veterinary AI and ethical guidance for its appropriate use. The authors articulate unique risk factors, such as the legal status of companion animals as property, the relatively unregulated nature of veterinary medicine, and the lack of sufficient data for training ML models. Accordingly, the authors propose ethical principles and goals for guiding AI use in veterinary medicine, emphasising the need for nonmaleficence, beneficence, transparency, respect for client autonomy, data privacy, feasibility, accountability, and environmental sustainability.

The philosophical undertones of this article resonate with broader discourse on ethics, anthropocentrism, and the societal role of technology. The authors’ exploration of the veterinarian’s ethical responsibilities in an increasingly AI-dependent world mirrors the wider philosophical question of how society negotiates human responsibility in the age of AI. Additionally, their criticism of anthropocentrism foregrounds debates about the moral consideration afforded to non-human animals, a significant theme within animal ethics. It illustrates the intersection of technology, ethics, and our societal structures, underscoring the need for an ongoing dialogue about our ethical obligations within an increasingly digitised world.

Future research may wish to delve deeper into the normative implications of AI in veterinary medicine. The authors’ ethical guidance principles could provide a basis for developing a more nuanced ethical framework that vets, AI developers, and regulators might follow. More empirical studies are also needed to gauge the practical impact of AI on animal healthcare outcomes and how AI is being perceived and utilized by different stakeholders within the field. Additionally, considering the significant role of data in training ML models, the ethical implications of data collection, privacy, and use in veterinary contexts warrant further exploration. Ultimately, as the authors suggest, the successful integration of AI in veterinary medicine hinges on an informed and ethically-conscious approach that prioritizes the welfare of both animals and their human caretakers.

Abstract

This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive features influence the ethics of AI systems that might benefit clients, veterinarians and animal patients—but also harm them. It offers practical ethical guidance that should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.

Ethics of using artificial intelligence (AI) in veterinary medicine

(Featured) Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Sinead O’Connor and Helen Liu investigate a pertinent concern in contemporary artificial intelligence (AI) studies: the manifestation and amplification of gender bias within AI technologies. The authors present a systematic review of case studies that demonstrate the pervasiveness of gender bias across various forms of AI, focusing particularly on textual and visual algorithms. The highlighted studies underscore how AI, far from being an objective tool, can inadvertently perpetuate societal biases ingrained in its training data, thereby reproducing and reinforcing existing societal asymmetries. Moreover, these studies reveal that although de-biasing efforts have been attempted, residual biases often persist because of the depth and complexity of discriminatory patterns.

In an innovative approach, the authors differentiate between bias perpetuation and bias mitigation, exploring this distinction in both text-based and image-based AI contexts. The issue of latent gendered word associations in text is emphasized, wherein researchers strive for a delicate balance between retaining the utility of algorithms and mitigating bias. In image-based AI, the researchers reveal how biases are not only present within algorithms but also entrenched within the evaluative benchmarks themselves. This insight brings into focus the importance of not merely scrutinizing the algorithms but also the standards used to assess their accuracy and bias perpetuation. The researchers also present an incisive critique of the methodological and conceptual issues underlying the treatment of bias in AI research, drawing attention to the often unaddressed question of what counts as ‘bias’ or ‘discrimination’.
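
As a rough sketch of what measuring latent gendered word associations can look like in practice, following the general word-embedding approach associated with work such as Bolukbasi et al. rather than the specific studies the authors review, one can project ostensibly neutral words onto an estimated gender direction; the embeddings dictionary and the word pairs below are assumptions for illustration.

```python
import numpy as np

def gender_direction(embeddings, pairs=(("he", "she"), ("man", "woman"), ("king", "queen"))):
    """Estimate a gender direction as the average difference of paired word vectors."""
    diffs = [embeddings[a] - embeddings[b] for a, b in pairs if a in embeddings and b in embeddings]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def bias_score(word, embeddings, direction):
    """Cosine projection of a word vector onto the gender direction.

    Scores far from zero for ostensibly neutral words (occupations, traits)
    suggest a latent gendered association encoded in the embedding space.
    """
    v = embeddings[word]
    return float(np.dot(v, direction) / np.linalg.norm(v))

# Usage sketch: embeddings is assumed to map words to vectors, e.g. loaded
# from pretrained GloVe or word2vec files.
# direction = gender_direction(embeddings)
# for w in ["nurse", "engineer", "teacher", "programmer"]:
#     print(w, round(bias_score(w, embeddings, direction), 3))
```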

The review then shifts to policy guidelines addressing the identified issues, citing initiatives such as the European Commission’s ‘Ethics Guidelines for Trustworthy AI’ and UNESCO’s report on AI and gender equality. These initiatives aim to align AI with fundamental human rights and principles and, in the European case, to ensure compliance with EU values and norms. The authors conclude with an insightful analysis of the dynamic relationship between gender bias in AI and broader societal structures, highlighting the need for regulatory efforts to manage this interplay.

Placed in a broader philosophical context, the article touches upon several key themes within the philosophy of technology. One of these is the entwined relationship between technology and society. Drawing from scholars like Orlikowski and Bryson, the authors illustrate how AI, as a socio-technical system, is deeply embedded within social structures and reflects societal biases. This notion challenges the conventional perception of technology as neutral and instead, presents it as a socially constructed entity that both shapes and is shaped by society.

The second philosophical theme pertains to the ethics of AI. The authors highlight the necessity of ethical accountability and responsibility in AI development and use. This resonates with the philosophical debates around morality in AI, raising questions about who should be held responsible for algorithmic biases and how they should be held accountable. By proposing cross-disciplinary and accessible approaches in AI research, the authors indirectly invoke the idea of “moral machines” or the notion that AI systems need to be designed with a nuanced understanding of human ethics.

Looking forward, it is essential to deepen the intersectional analysis of bias in AI systems. Future research could expand on the conceptualization and measurement of bias in AI, accounting for the diverse intersections of identities beyond gender, such as race, age, sexuality, and disability. There is also a critical need to explore how AI bias research can engage with non-binary and fluid conceptions of gender to provide a more comprehensive understanding of gender bias.

Abstract

Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and an efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its understanding of bias could influence policy and outcomes. Building on a rich seam of literature from both technological and sociological fields, this article constructs an original framework through which to analyse both the perpetuation and mitigation of gender biases, choosing to categorize AI technologies based on whether their input is text or images. Through the close analysis and pairing of four case studies, the paper thus unites two often disparate approaches to the investigation of bias in technology, revealing the large and varied potential for AI to echo and even amplify existing human bias, while acknowledging the important role AI itself can play in reducing or reversing these effects. The conclusion calls for further collaboration between scholars from the worlds of technology, gender studies and public policy in fully exploring algorithmic accountability as well as in accurately and transparently exploring the potential consequences of the introduction of AI technologies.

Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

(Featured) Liars and Trolls and Bots Online: The Problem of Fake Persons

Liars and Trolls and Bots Online: The Problem of Fake Persons

Keith Raymond Harris explores the role of ‘fake persons’—bots and trolls—in online spaces and their deleterious impact on our acquisition and distribution of knowledge. Situating his analysis in a technological ecosystem increasingly swamped by these artificial entities, the author dissects the intricate issues engendered by these ‘fake persons’ into three discernible yet interwoven threats: deceptive, skeptical, and epistemic.

The deceptive threat elucidates how bots and trolls propagate false information and craft misleading representations of consensus through manipulated metrics like shares, likes, and comments. This deceptive veneer engenders a distorted perception of reality, leading to the formulation of misguided beliefs. The skeptical threat, on the other hand, stems from the awareness of the online environment’s infestation with these deceitful entities. This awareness engenders a pervasive sense of skepticism, a defensive mechanism that could result in the dismissal of valid evidence, leading to an overall decrease in the trust placed in online information. This skepticism, though justifiable, can have the unintended effect of isolating individuals from genuine knowledge sources.

Further complicating this scenario is the epistemic threat. The author draws a striking analogy between the online world inhabited by ‘fake persons’ and a natural environment populated by ‘mimic species’. In the latter, the significance of certain traits, often used to identify species, diminishes due to the presence of mimics. Analogously, in an environment teeming with bots and trolls, the perceived value of certain forms of evidence depreciates, impairing the ability to discern ‘real’ persons. In this convoluted digital milieu, the credibility of evidence—along with the authenticity of users and the perceived consensus—becomes questionable.

Grounding these digital threats in the wider philosophical discourse, this research accentuates the intricate entanglement of epistemology and ontology in online spaces. It challenges traditional conceptions of identity, reality, and knowledge, echoing Baudrillard’s premonitions of hyperreality and simulation. The presence of ‘fake persons’ obfuscates the demarcation between the real and the artificial, leading to an epistemic crisis where distinguishing between genuine and fallacious information becomes a Herculean task. Furthermore, these digital distortions provoke a profound skepticism that resonates with Cartesian doubt, while simultaneously illustrating the pervasiveness of misinformation and disinformation, reflecting the post-truth era’s cynicism. This research, hence, not only deepens our understanding of the digital world’s complexities but also underscores the shifting epistemic and ontological paradigms in the internet age.

As we navigate through this rapidly mutating digital landscape, the author’s research underscores the urgent need for further exploration. While technological solutions might offer some respite, they cannot completely eradicate these pervasive threats. Future research, therefore, should venture into developing more robust epistemological frameworks that accommodate these digital complexities. It should aim to delve into the philosophy of digital identities, exploring how they are constructed, perceived, and interacted with. There’s also a pressing need for studies that examine the intersection of ethics, technology, and epistemology, especially in the context of ‘fake persons’. Such research would not only enrich the theoretical discourse but could also guide the creation of more ethical and reliable digital spaces.

Abstract

This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.

Liars and Trolls and Bots Online: The Problem of Fake Persons

(Featured) Could a Conscious Machine Deliver Pastoral Care?

Could a Conscious Machine Deliver Pastoral Care?

At the intersection of theology, philosophy, and futures studies, Andrew Proudfoot examines the potential for genuine encounter between humans and hypothetically conscious artificial intelligence (CAI) from the perspective of Barthian theology. The author utilizes Karl Barth’s fourfold schema of encounter, which includes address, response, assistance, and gladness, as a framework for this exploration. The article’s premise is the hypothetical existence of CAI, which, for the sake of argument, is assumed to lack capax Dei, or the capacity for God.

In the first part of the article, the author discusses the initial two aspects of Barthian encounter—address and response. The author speculates that a CAI, with its presumed self-awareness and rationality, could engage in verbal discourse with humans, thus fulfilling these two aspects. However, the author emphasizes the importance of maintaining a clear distinction between humans and AI, even if the AI appears to be human-like. This delineation is crucial to ensure that the encounter is authentic and not misleading or manipulative.

Next, the author delves into the third and fourth stages of Barthian encounter—assistance and gladness. While a CAI cannot provide the same depth of assistance as a human or divine entity, it could provide help commensurate with its abilities. The author also postulates that a CAI could exhibit a form of formal gladness, equivalent to non-Christian eros love, if it is designed with an intrinsic desire for social interaction. However, the lack of capax Dei limits the CAI’s pastoral role and the depth of its encounters with humans.

The article’s philosophical significance lies in the way it prompts us to examine the nature of encounter, consciousness, and authenticity in a world where AI technologies are becoming increasingly advanced. It asks us to rethink the nature of interaction and relationship in a context that transcends the human sphere. The author uses Barthian theology as a lens to explore these themes, but the implications extend beyond this particular theological framework, touching upon broader philosophical discussions about selfhood, otherness, and the ethics of AI.

The research article paves the way for several future research directions. One such direction could involve a more in-depth exploration of the ontological and metaphysical commitments needed to support the notion of a conscious computer within a Christian theological framework. Another potential avenue could be an investigation into the relationship between consciousness and capax Dei, contemplating whether the latter could emerge from the former or if it necessitates divine intervention. Finally, the author’s suggestion that non-human personas might be more beneficial for AI poses an intriguing question for future research, prompting us to reflect on the nature of deception and authenticity in AI-human relationships. The article thus not only contributes to our understanding of potential AI-human encounters but also opens the door to myriad further explorations in the field.

Abstract

Could Artificial Intelligence (AI) play an active role in delivering pastoral care? The question rests not only on whether an AI could be considered an autonomous agent, but on whether such an agent could support the depths of relationship with humans which is essential to genuine pastoral care. Theological consideration of the status of human-AI relations is heavily influenced by Noreen Herzfeld, who utilises Karl Barth’s I-Thou encounters to conclude that we will never be able to relate meaningfully to a computer since it would not share our relationship to God. In this article, I look at Barth’s anthropology in greater depth to establish a more comprehensive and permissive foundation for human-machine encounter than Herzfeld provides—with the key assumption that, at some stage, computers will become conscious. This work allows discussion to shift focus to the challenges that the alterity of the conscious computer brings, rather than dismissing it as a non-human object. If we can relate as an I to a Thou with a computer, then this allows consideration of the types of pastoral care they could provide.

Could a Conscious Machine Deliver Pastoral Care?