(Featured) Machine Ethics: Do Androids Dream of Being Good People?

Gonzalo Génova, Valentín Moreno, and M. Rosario González explore the possibility and limitations of teaching ethical behavior to artificial intelligence. The paper delves into two main approaches to teaching ethics to machines: explicit ethical programming and learning by imitation. It highlights the difficulties faced by each approach and discusses the implications and potential pitfalls of applying machine learning to ethics.

The authors begin by examining explicit ethical programming, such as Asimov’s Three Laws, and discuss the challenges involved in foreseeing the consequences of an act, as well as the necessity of having an explicit goal for ethical behavior. The second approach, learning by imitation, involves machines observing the behavior of experts or a majority in order to emulate them. The paper also discusses the Moral Machine experiment by MIT, which aimed to teach machines to make moral decisions based on the preferences of the majority.
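To make the contrast between the two approaches concrete, here is a minimal Python sketch. It is purely illustrative and not from the paper: a hard-coded rule filter in the spirit of explicit ethical programming, and a majority-preference rule in the spirit of the Moral Machine's learning by imitation. The rule set, the action encoding, and the votes are all hypothetical.

```python
# Toy illustration (not from the paper): two ways a machine might "decide".
# Every rule, action encoding, and vote below is a hypothetical example.
from collections import Counter

# Approach 1: explicit ethical programming -- a fixed, a priori rule set.
RULES = [
    lambda act: not act.get("harms_human", False),  # loosely, Asimov's First Law
    lambda act: act.get("obeys_order", True),       # loosely, Asimov's Second Law
]

def rule_based_permissible(act: dict) -> bool:
    """Permit an act only if every hard-coded rule allows it."""
    return all(rule(act) for rule in RULES)

# Approach 2: learning by imitation -- do whatever the observed majority did.
def majority_choice(votes: list[str]) -> str:
    """Pick the option most often chosen by observed experts or the crowd."""
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    print(rule_based_permissible({"harms_human": False, "obeys_order": True}))  # True
    print(majority_choice(["swerve", "swerve", "stay", "swerve"]))              # swerve
```

The sketch also makes the authors' worry tangible: both mechanisms reduce an ethical decision to a lookup, over fixed rules in one case and over observed preferences in the other, with no critical deliberation anywhere in the loop.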

Despite the potential of machine learning techniques, the authors argue that both approaches fail to capture the essence of genuine ethical thinking in human beings. They emphasize that ethics is not about following a code of conduct or imitating the behavior of others, but rather about critical thinking and the formation of one’s own conscience. The paper concludes by questioning whether machines can truly learn ethics like humans do, suggesting that current methods of teaching ethics to machines are inadequate for capturing the complexity of human ethical life.

The research presented in the paper raises important philosophical questions about the nature of ethics and the role of machines in our ethical lives. It challenges the instrumentalist and reductionist approaches to ethics, which view ethical values as computable or reducible to a set of rules. By highlighting the limitations of these approaches, the paper invites us to reconsider the importance of value rationality and the recognition of the uniqueness and unrepeatable nature of human beings in ethical considerations.

In light of these findings, future research could explore alternative approaches to teaching ethics to machines that go beyond mere rule-following or imitation. This could involve the development of novel machine learning techniques that foster critical thinking and the ability to reason with values without reducing them to numbers. Additionally, interdisciplinary collaboration between philosophers, AI researchers, and ethicists could further enrich our understanding of the ethical dimensions of artificial intelligence and help to develop AI systems that not only do the right thing but also respect the complexity and richness of human ethical life.

Abstract

Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely “following a moral code”. In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.

(Featured) Trojan technology in the living room?

Franziska Sonnauer and Andreas Frewer explore the delicate balance between self-determination and external determination in the context of older adults using assistive technologies, particularly those incorporating artificial intelligence (AI). The authors introduce the concept of a “tipping point” to delineate the transition between self-determination and external determination, emphasizing the importance of considering the subjective experiences of older adults when employing such technologies. To this end, the authors adopt self-determination theory (SDT) as a theoretical framework to better understand the factors that may influence this tipping point.

The paper argues that the tipping point is intrapersonal and variable, suggesting that fulfilling the three basic psychological needs outlined in SDT—autonomy, competence, and relatedness—can potentially shift the tipping point towards self-determination. The authors propose various strategies to achieve this, such as providing alternatives for assistance in old age, promoting health technology literacy, and prioritizing social connectedness in technological development. They also emphasize the need to include older adults’ perspectives in decision-making processes, as understanding their subjective experiences is crucial to recognizing and respecting their autonomy.

Moreover, the authors call for future research to explore the tipping point and factors affecting its variability in different contexts, including assisted suicide, health deterioration, and the use of living wills and advance care planning. They contend that understanding the tipping point between self-determination and external determination may enable the development of targeted interventions that respect older adults’ autonomy and allow them to maintain self-determination for as long as possible.

In a broader philosophical context, this paper raises important ethical questions concerning the role of technology in shaping human agency, autonomy, and decision-making processes. It challenges us to reflect on the ethical implications of increasingly advanced assistive technologies and the potential consequences of their indiscriminate use. The issue of the tipping point resonates with broader debates on the nature of free will, the limits of self-determination, and the moral implications of human-machine interactions. As AI continues to become more integrated into our lives, the question of how to balance self-determination and external determination takes on greater urgency and complexity.

For future research, it would be valuable to explore the concept of the tipping point in different cultural contexts, as perceptions of autonomy and self-determination may vary across societies. Additionally, interdisciplinary approaches that combine insights from philosophy, psychology, and technology could shed light on the complex interplay between human values and AI-driven systems. Finally, empirical research investigating the experiences of older adults using assistive technologies would provide valuable data to help refine our understanding of the tipping point and inform the development of more ethically sound technologies that respect individual autonomy and promote well-being.

Abstract

Assistive technologies, including “smart” instruments and artificial intelligence (AI), are increasingly arriving in older adults’ living spaces. Various research has explored risks (“surveillance technology”) and potentials (“independent living”) to people’s self-determination from technology itself and from the increasing complexity of sociotechnical interactions. However, the point at which self-determination of the individual is overridden by external influences has not yet been sufficiently studied. This article aims to shed light on this point of transition and its implications.

(Featured) Germline Gene Editing: The Gender Issues

Iñigo de Miguel Beriain and colleagues delve into the complex relationship between gene editing technologies and the role of women in assisted reproductive techniques (ART). The paper is divided into two main sections, exploring both the potential benefits and drawbacks of gene editing in the context of ART for women. The first section examines the ways in which gene editing may improve the position of women within ART, highlighting the possibilities of reducing physical suffering, improving the efficiency of in vitro fertilization (IVF), and reducing the number of embryos discarded. The second section, on the other hand, highlights the potential risks and disadvantages associated with gene editing, focusing on the unequal burden placed on women in the process, the societal pressures that may arise, and the potential for gene editing to become a tool of oppression against women.

The authors begin by discussing the current state of ART, which often places a significant burden on women, both physically and emotionally. They argue that the advent of gene editing technologies, such as CRISPR-Cas9, has the potential to alleviate some of these burdens by improving the efficiency of IVF and reducing the number of discarded embryos. In turn, this could lead to a reduction in the physical suffering experienced by women undergoing these procedures. The authors also emphasize the potential of gene editing to create a more level playing field in the realm of procreation, as it may allow for a more equal distribution of genetic risks between men and women.

However, the paper also examines the potential drawbacks of widespread gene editing adoption. The authors argue that the process of gene editing involves significant risks to women, as it requires the use of biological material extracted from their bodies. Furthermore, failed experiments or harmful outcomes from gene editing procedures may have severe physical and psychological consequences for pregnant women. The authors also discuss the potential future implications of gene editing, which could lead to a societal shift in attitudes towards procreation, ultimately placing even greater burdens on women. They highlight the potential for societal pressure to force women to undergo gene editing, resulting in a loss of freedom and an increase in gender bias.

From a philosophical standpoint, the paper raises important questions about the ethics of gene editing and the distribution of burdens and responsibilities between men and women in the realm of reproduction. The potential societal shift in attitudes towards procreation, as discussed in the paper, forces us to consider the implications of prioritizing genetic modifications over natural processes. Furthermore, the paper calls into question the potential consequences of utilizing new technologies without fully understanding their implications on gender dynamics and societal norms.

The paper also opens up avenues for future research, particularly in the realm of bioethics and the societal implications of gene editing technologies. Future studies could explore the psychological effects of societal pressure on women who choose not to undergo gene editing, as well as the ethical implications of altering future generations’ genetic makeup. Additionally, research could investigate the potential long-term consequences of widespread gene editing on genetic diversity, and whether it could inadvertently lead to the exacerbation of existing inequalities. Ultimately, this paper serves as a crucial starting point for deeper exploration into the complex relationship between gene editing, ART, and the position of women in society.

Abstract

Human germline gene editing constitutes an extremely promising technology; at the same time, however, it raises remarkable ethical, legal, and social issues. Although many of these issues have been largely explored by the academic literature, there are gender issues embedded in the process that have not received the attention they deserve. This paper examines ways in which this new tool necessarily affects males and females differently—both in rewards and perils. The authors conclude that there is an urgent need to include these gender issues in the current debate, before giving a green light to this new technology.

(Featured) Technology ethics assessment: Politicising the ‘Socratic approach’

Robert Sparrow proposes a Socratic approach to uncover the ethical and political dimensions of technology. This method involves asking a series of questions that highlight the ethical concerns and implications of a given technology. The author organizes the questions into five categories: (1) technology and power, (2) technology and social justice, (3) technology, values and the environment, (4) technology and the human experience, and (5) process, consultation, and iteration.

The author argues that the Socratic approach can help identify ethical challenges in technology and facilitate discussions on the implications of technology in various aspects of society. The questions raised cover a wide range of issues, from power imbalances and social inequalities resulting from the adoption of technology, to the potential impact on the environment and human experiences. Furthermore, the author highlights the importance of considering the processes and procedures involved in developing and adopting a technology, as well as the need for user involvement in the design process, consultation with affected parties, and mechanisms for identifying and addressing ethical issues.

In adopting a Socratic approach, the paper emphasizes the need to critically evaluate technologies and their potential consequences rather than passively accepting them. The author contends that the ethical implications of technologies cannot be fully understood or addressed without considering the broader political context in which they are developed and deployed. As a result, the paper argues that empowering citizens and fostering open dialogue on the ethical implications of technology is vital to creating a more just, equitable, and hospitable world.

The paper’s insights into the politics of technology resonate with broader philosophical debates on the nature of power, justice, and responsibility in the context of technological advancements. By focusing on the Socratic method, the author also contributes to ongoing discussions on the epistemology of ethics in relation to technology. This approach highlights the importance of critical thinking and dialectical engagement in uncovering the ethical complexities of technology and its impact on society.

For future research, it would be valuable to explore the application of the Socratic approach to specific case studies, examining how the questions posed in this paper can help uncover the ethical dimensions of various technologies in practice. Additionally, it would be beneficial to investigate the potential of interdisciplinary collaboration between philosophy, social sciences, and technology development in order to better address the ethical and political concerns raised by emerging technologies. This would further enrich the discourse on the politics of technology and contribute to the development of more ethical and socially responsible technological innovations.

Abstract

That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments have often not adequately acknowledged various political impacts of technologies, which are, I suggest, essential to a proper account of the ethical issues they raise. New technologies can make some people richer and some people poorer, empower some and disempower others, have dramatic implications for relationships between different social groups and impact on social understandings and experiences that are central to the lives, and narratives, of denizens of technological societies. The distinctive contribution of this paper, then, is to offer a revised and updated version of the Socratic approach that highlights the political, as well as the more traditionally ethical, issues raised by the development of new technologies.

(Featured) The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare

James Johnson explores the ethical and psychological implications of integrating AI into warfare. The author argues that the use of autonomous weapons in warfare may create moral vacuums that eliminate meaningful ethical and moral deliberation in the quest for riskless and rational war. Moreover, the author argues that the human-machine integration process is part of a broader evolutionary dovetailing of humanity and technology. The logical end of this trajectory is an AI commander, which would effectively outsource ethical decision-making to machines that are ill-equipped to fill this ethical and moral void.

The author also explores the limitations of AI in distinguishing between legitimate and illegitimate targets in asymmetric conflicts, such as insurgencies and civil wars. He stresses the importance of recognizing the personhood of the enemy in warfare and argues that until AI is capable of such recognition, it will be unable to meet the requirements of jus in bello. Additionally, Johnson argues that human judgment and prediction, while imperfect, remain necessary in warfare because humans can recognize subtle cues that machines cannot.

The paper highlights three key psychological insights regarding human-machine interactions and political-ethical dilemmas in future AI-enabled warfare. First, Johnson argues that human-machine integration is a socio-technical psychological process that is part of a broader evolutionary dovetailing of humanity and technology. Second, he argues that biases associated with human-machine interactions can compound the “illusion of control” problem. Third, he suggests that coding human ethics into AI algorithms is technically, theoretically, ontologically, and psychologically problematic, as well as ethically and morally questionable.

This paper raises important philosophical questions about the relationship between technology and ethics. It highlights the risks associated with outsourcing ethical decision-making to machines and emphasizes the importance of recognizing the personhood of the enemy in warfare. The paper also underscores the limitations of AI in distinguishing between legitimate and illegitimate targets and the importance of human judgment in recognizing subtle cues that machines cannot. Ultimately, this paper challenges us to consider the role of technology in shaping our ethical and moral decision-making processes.

Future research in this area could explore the psychological and ethical implications of human-machine integration in other domains, such as healthcare or criminal justice. Additionally, research could focus on developing AI systems that are capable of understanding the complexities of human ethics and morality. This research could also explore ways to incorporate ethical decision-making into AI algorithms without sacrificing human agency and accountability. Finally, research could explore the broader philosophical implications of the use of AI in warfare and consider the ethical and moral implications of a world in which machines are increasingly integrated into our lives.

Abstract

Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over the war machine. This Panglossian assumption neglects the psychological features of human-machine interactions, the pace at which future AI-enabled conflict will be fought, and the complex and chaotic nature of modern war. The article expounds key psychological insights of human-machine interactions to elucidate how AI shapes our capacity to think about future warfare’s political and ethical dilemmas. It argues that through the psychological process of human-machine integration, AI will not merely force-multiply existing advanced weaponry but will become de facto strategic actors in warfare – the “AI commander problem.”

(Featured) Weak transhumanism: moderate enhancement as a non-radical path to radical enhancement

Cian Brennan argues for a version of transhumanism that incrementally applies moderate enhancements to future human beings, rather than pursuing radical enhancements in a more immediate and extreme manner. The paper begins by presenting the critique of transhumanism put forward by Nicholas Agar, which centers on the potential negative consequences of radical enhancement. The author argues that Agar’s critique is aimed at the effects of radical enhancement, rather than the concept of radical enhancement itself. By assuming that radical enhancement will be applied gradually to future generations, the author argues that weak transhumanism can overcome Agar’s objections.

The author then discusses objections to weak transhumanism, including the potential for an eventual radical enhancement to emerge and the difficulty of identifying when an enhancement becomes radical. The author responds to these objections by proposing a checklist of characteristic features that can be used to identify radical enhancements, such as the creation of new or extended abilities, changes in moral status, and significant changes in vulnerability or relatability between the enhanced and unenhanced.

Overall, the paper provides a nuanced and detailed defense of weak transhumanism, offering a way to pursue radical enhancements while avoiding some of the potential negative consequences of more radical approaches. The paper engages with a range of objections and provides a thoughtful and well-supported response to each, drawing on both philosophical and scientific sources.

The paper has implications for broader philosophical issues surrounding the ethics of human enhancement, the relationship between technology and society, and the nature of human identity and personhood. By focusing on the incremental application of enhancements, the paper raises questions about the degree to which human beings can be transformed by technology without losing their essential human nature. It also highlights the role of societal values and norms in shaping the development and application of enhancement technologies.

Future research in this area could build on the author’s checklist of characteristic features of radical enhancements, exploring the extent to which these features are necessary and sufficient conditions for defining radical enhancements. Further research could also examine the potential consequences of weak transhumanism, including the ways in which incremental enhancements may interact with each other over time and the potential for unintended consequences. Finally, future research could explore the social and cultural dimensions of transhumanism, including the ways in which transhumanist values and practices may be shaped by factors such as gender, race, and socioeconomic status.

Abstract

Transhumanism aims to bring about radical human enhancement. In ‘Truly Human Enhancement’ Agar (2014) provides a strong argument against producing radically enhancing effects in agents. This leaves the transhumanist in a quandary—how to achieve radical enhancement whilst avoiding the problem of radically enhancing effects? This paper aims to show that transhumanism can overcome the worries of radically enhancing effects by instead pursuing radical human enhancement via incremental moderate human enhancements (Weak Transhumanism). In this sense, weak transhumanism is much like traditional transhumanism in its aims, but starkly different in its execution. This version of transhumanism is weaker given the limitations brought about by having to avoid radically enhancing effects. I consider numerous objections to weak transhumanism and conclude that the account survives each one. This paper’s proposal of ‘weak transhumanism’ has the upshot of providing a way out of the ‘problem of radically enhancing effects’ for the transhumanist, but this comes at a cost—the restrictive process involved in applying multiple moderate enhancements in order to achieve radical enhancement will most likely be dissatisfying for the transhumanist, however, it is, I contend, the best option available.

(Featured) The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work

In the realm of artificial intelligence (AI) deployment, a neglected ethical concern is the impact of AI on meaningful work. Sarah Bankins and Paul Formosa focus on this critical aspect, emphasizing that understanding the consequences of AI on meaningful work for the remaining workforce is as significant as examining the impact of AI-induced unemployment. Meaningful work plays a crucial role in human well-being, autonomy, and flourishing, rendering it an essential ethical dimension.

The authors investigate three paths of AI deployment (replacing some tasks, ‘tending the machine’, and amplifying human skills) across five dimensions of meaningful work: task integrity, skill cultivation and use, task significance, autonomy, and belongingness. By employing this approach, they identify the ways AI may both enhance and undermine experiences of meaningful work across these dimensions. Additionally, the authors assess the ethical implications through five key ethical AI principles, providing practical guidance for organizations and suggesting opportunities for future research.

The paper concludes that AI has the potential to make work more meaningful for some workers by taking over less meaningful tasks and amplifying their capabilities. However, it also highlights the risk of making work less meaningful for others by generating monotonous tasks, restricting worker autonomy, and disproportionately distributing AI’s benefits away from less-skilled workers. This dual impact suggests that AI’s future effects on meaningful work will be both significant and varied.

The authors’ analysis of AI and meaningful work raises broader philosophical issues. One such issue pertains to the value of work in the context of human dignity, self-realization, and social connection. As AI technologies advance, society will need to reflect on the meaning of work and redefine it in response to the changes brought about by these innovations. Furthermore, the ethical principles guiding AI development and deployment must not only ensure fair and equitable distribution of benefits but also preserve the essence of human engagement in work.

Future research in this area could explore the potential impact of AI on work’s existential value and its influence on the human experience. Researchers may also delve into the development of ethical frameworks that ensure AI technologies foster more meaningful work and equitable distribution of benefits. Finally, the potential outcomes and implications of artificial general intelligence (AGI) on meaningful work should be considered, as AGI could dramatically alter the landscape of human labor and the very nature of work itself.

Abstract

The increasing workplace use of artificially intelligent (AI) technologies has implications for the experience of meaningful human work. Meaningful work refers to the perception that one’s work has worth, significance, or a higher purpose. The development and organisational deployment of AI is accelerating, but the ways in which this will support or diminish opportunities for meaningful work and the ethical implications of these changes remain under-explored. This conceptual paper is positioned at the intersection of the meaningful work and ethical AI literatures and offers a detailed assessment of the ways in which the deployment of AI can enhance or diminish employees’ experiences of meaningful work. We first outline the nature of meaningful work and draw on philosophical and business ethics accounts to establish its ethical importance. We then explore the impacts of three paths of AI deployment (replacing some tasks, ‘tending the machine’, and amplifying human skills) across five dimensions constituting a holistic account of meaningful work, and finally assess the ethical implications. In doing so we help to contextualise the meaningful work literature for the era of AI, extend the ethical AI literature into the workplace, and conclude with a range of practical implications and future research directions.

(Featured) How Neurotech Start-Ups Envision Ethical Futures: Demarcation, Deferral, Delegation

Sophia Knopf, Nina Frahm, and Sebastian Pfotenhauer provide a thought-provoking exploration of the ethical considerations and implications that emerge within the context of direct-to-consumer (DTC) neurotechnology start-ups. The authors investigate how these companies approach and enact ethical considerations, particularly focusing on boundary-work and the strategic use of ethics to establish credibility, legitimacy, and autonomy in an unsettled and contested field. Through a series of interviews and qualitative analysis, the paper uncovers the various ways in which neurotechnology start-ups mobilize ethics to navigate the complex terrain between visionary promises and potential ethical hazards and risks.

The study highlights four dimensions of boundary-work that DTC neurotechnology start-ups engage in: actual vs. hypothetical issues, good vs. bad purposes and consequences, consumer safety vs. medical risk or harm, and sound science vs. overpromising. The authors suggest that ethics functions as a mediator, facilitating the articulation of visions of successful technologies and desirable futures. By framing ethics through boundary-work, the start-ups strategically defer certain ethical challenges to the future while delegating ethical reasoning to established knowledge regimes.

The authors propose that such framing of ethics allows the start-ups to construct desirable technology trajectories from the present into the future, establishing credibility and legitimacy in the field. In essence, the paper argues that ethics becomes a key ingredient in nascent knowledge-control regimes where the power to shape a specific understanding of ethics allocates rights and responsibilities, legitimizing certain visions of desirable socio-technical futures and neuro-innovation practices.

Relating this research to broader philosophical issues, we can observe that the questions raised in the paper touch upon the nature of ethics and responsibility in technological innovation. This resonates with wider discussions in philosophy regarding the ethics of emerging technologies, the role of expertise, and the co-construction of socio-technical futures. The paper illuminates the complex relationship between ethical considerations, stakeholder interests, and the shaping of technology and society, which has been a longstanding concern in philosophy of technology and science and technology studies.

The paper offers numerous potential avenues for further research and investigation. Future studies could explore how these ethical strategies and boundary-work practices compare to those employed in other emerging technology sectors. Another promising area of inquiry would be the examination of the potential effects of evolving regulatory frameworks and public discourse on the ethical practices of start-ups in DTC neurotechnology and beyond. Such research would further our understanding of the dynamic interplay between ethics, technology, and society in shaping our collective future.

Abstract

Like many ethics debates surrounding emerging technologies, neuroethics is increasingly concerned with the private sector. Here, entrepreneurial visions and claims of how neurotechnology innovation will revolutionize society—from brain-computer-interfaces to neural enhancement and cognitive phenotyping—are confronted with public and policy concerns about the risks and ethical challenges related to such innovations. But while neuroethics frameworks have a longer track record in public sector research such as the U.S. BRAIN Initiative, much less is known about how businesses—and especially start-ups—address ethics in tech development. In this paper, we investigate how actors in the field frame and enact ethics as part of their innovative R&D processes and business models. Drawing on an empirical case study on direct-to-consumer (DTC) neurotechnology start-ups, we find that actors engage in careful boundary-work to anticipate and address public critique of their technologies, which allows them to delineate a manageable scope of their ethics integration. In particular, boundaries are drawn around four areas: the technology’s actual capability, purpose, safety and evidence-base. By drawing such lines of demarcation, we suggest that start-ups make their visions of ethical neurotechnology in society more acceptable, plausible and desirable, favoring their innovations while at the same time assigning discrete responsibilities for ethics. These visions establish a link from the present into the future, mobilizing the latter as promissory place where a technology’s benefits will materialize and to which certain ethical issues can be deferred. In turn, the present is constructed as a moment in which ethical engagement could be delegated to permissive regulatory standards and scientific authority. Our empirical tracing of the construction of ‘ethical realities’ in and by start-ups offers new inroads for ethics research and governance in tech industries beyond neurotechnology.
